Java Performance Tuning

The Roundup June 2004


Back to newsletter 043 contents

It seems as though we've been struck by another naive attempt to compare the speed of Java with that of C++. I'll not name the source of the article, because I just don't believe that these types of comparisons are useful. [see this month's news section - ed.] The first thing to ask is: just what are we comparing? If it's the expense of computing a task that has been coded in one language as opposed to another, then which task do we choose? In my earlier writing, I've stated that a benchmark needs to be structured so that it answers a specific question. "Which language is faster?" is not a specific question. We all know that with the current state of technology, Java (combined with the JIT and the HotSpot execution profiler) can dynamically generate native code that is better optimized than what a static C++ compiler can produce. And this is only the first of a myriad of problems one runs into when trying to answer that question.
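
To make one of those problems concrete, here is a minimal sketch (the class and workload are hypothetical, not from the article) of why a naive timing loop misleads: HotSpot compiles a method to native code only after profiling it as hot, so the first timing pass largely measures interpreted bytecode.

    // Hypothetical demo: the first measurement is taken before HotSpot
    // has had a chance to compile work() to native code; the second is
    // taken after a warm-up phase lets the JIT do its job.
    public class WarmupDemo {
        // A small arithmetic workload for the JIT to optimize.
        static long work(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += (i * 31) ^ (i >>> 3);
            }
            return sum;
        }

        public static void main(String[] args) {
            final int n = 10_000_000;

            // First pass: likely interpreted or only partially compiled.
            long t0 = System.nanoTime();
            long r1 = work(n);
            long cold = System.nanoTime() - t0;

            // Warm up: give HotSpot time to profile and compile work().
            for (int i = 0; i < 20; i++) {
                work(n);
            }

            // Second pass: now measuring JIT-compiled native code.
            long t1 = System.nanoTime();
            long r2 = work(n);
            long hot = System.nanoTime() - t1;

            System.out.printf("cold: %,d ns  hot: %,d ns  (%d/%d)%n",
                    cold, hot, r1, r2);
        }
    }

On a typical JVM the second, warmed-up measurement will be a fraction of the first; a cross-language comparison that skips the warm-up is really benchmarking the interpreter.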

In today's world, computing power has increased to the point where there is enough of it to solve most of the business problems we are currently trying to solve. That is not to say there isn't a whole bundle of problems we'd like to solve for which there isn't enough computing power on the planet. But, in general, businesses are not interested in those problems. With this in mind, does it really matter if C++ is a tiny bit faster than Java, or if Java is a tiny bit faster than C++? Shouldn't the question be: what is the overall cost of developing, operating and maintaining an application? After all, that is the true bottleneck in almost every application's life cycle.

The JavaRanch

We draw our first question from the JavaRanch, where the poster asks about the wisdom of converting a Vector to an ArrayList before iterating over it. First, let's start with some background information. The poster is getting the Vector back from TopLink, an O/R mapping tool currently owned by Oracle. TopLink pre-dates Java itself and hence was one of the first O/R mapping tools to appear in Java. As a consequence of its age, it retains remnants of the older Java 1.1 APIs. In this case, although Vector has been retrofitted into the Java 2 collections framework, it is a legacy class. Part of that legacy is that it is synchronized, which imposes a small performance penalty on those who use it. ArrayList, on the other hand, is not synchronized and consequently imposes no such penalty. The question is: how big is the penalty, and does it outweigh the cost of converting to an ArrayList and then iterating over that? Unfortunately, there is no way of telling unless we can see how the information is being used. In this case, the proper answer is most likely tied to the number of objects held by the Vector. Converting a smallish Vector doesn't seem worth the trouble, nor is it likely to have any measurable effect on your application's performance. That said, the only way to really know is to take a measurement. Again we run into a case of: don't guess - measure!
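
As a minimal sketch of what such a measurement might look like (the element count is an arbitrary assumption; in practice you would measure with your own data and access pattern):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Vector;

    // Compare iterating a Vector directly (synchronized access) against
    // paying the conversion cost up front and iterating an ArrayList.
    public class VectorVsArrayList {
        public static void main(String[] args) {
            Vector<Integer> vector = new Vector<>();
            for (int i = 0; i < 1_000_000; i++) {
                vector.add(i);
            }

            // Option 1: iterate the Vector directly.
            long t0 = System.nanoTime();
            long sum1 = 0;
            for (Iterator<Integer> it = vector.iterator(); it.hasNext(); ) {
                sum1 += it.next();
            }
            long direct = System.nanoTime() - t0;

            // Option 2: convert first, then iterate the ArrayList.
            long t1 = System.nanoTime();
            List<Integer> list = new ArrayList<>(vector);
            long sum2 = 0;
            for (Iterator<Integer> it = list.iterator(); it.hasNext(); ) {
                sum2 += it.next();
            }
            long converted = System.nanoTime() - t1;

            System.out.printf("direct: %,d ns  convert+iterate: %,d ns (%d/%d)%n",
                    direct, converted, sum1, sum2);
        }
    }

Remember that a one-shot timing like this is only a first approximation; the JIT warm-up caveat from earlier applies here too.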

JavaGaming.org

It would seem that the "write once, run anywhere" slogan is showing cracks. The latest evidence comes from a lengthy but interesting discussion of how much slower the OS X JVM appears to be in benchmarks. I do say "appears" because there are significant differences in Mac hardware that make a clean port of Sun's JVM code to OS X impossible. The result is that Apple has simply not been able to include many of the optimizations found in the Sun JVM, including those in the server version of HotSpot. That said, since the default setting uses the client HotSpot VM, you may not notice any difference in performance. Apparently the gamers are finding that the hardware differences result in better performance on Apple's G4 than on Intel's Pentium technology, though those trying to use the G4 as a server may be a little disappointed.
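
If you are unsure which HotSpot variant your code is actually running on, a tiny diagnostic like the following will report it (the property values mentioned in the comments are typical of Sun/Oracle JVMs; other vendors report their own names):

    // Print out which JVM variant is executing this code.
    // "java.vm.name" typically reads "Java HotSpot(TM) Client VM" or
    // "Java HotSpot(TM) Server VM" on Sun/Oracle releases.
    public class WhichVM {
        public static void main(String[] args) {
            System.out.println(System.getProperty("java.vm.name"));
            System.out.println(System.getProperty("java.vm.version"));
            System.out.println(System.getProperty("os.arch"));
        }
    }

Where both variants are shipped, launching with java -server or java -client selects between them; on JVMs that bundle only one variant, the flag is effectively ignored.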

The Server Side

From TheServerSide we have an interesting question regarding how one might set up a transaction pool to reduce the number of interactions with the database. Other information in the post indicated that the application was expected to accept 15,000 connections over a few hours. As I was reading the responses, my mind was mulling over whether this load really required such a complex solution. Just as I was starting to do some calculations in my head, I ran into a response that did them for me.

The analysis started by assuming that "a few hours" meant three. A further assumption was that 20 rows of data would be inserted with each transaction, an estimate based on the type of work being performed: users filling out forms. From there we have 15,000 transactions / (3 hr * 3,600 sec/hr) = 15,000 / 10,800, or about 1.4 transactions per second. That transaction rate is well within the grasp of every commercial relational database available today, and doing any amount of work to reduce the number of transactions would be a premature optimization.
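
Written out as a runnable sanity check (the inputs are the poster's estimates, not measurements), the arithmetic looks like this:

    // Back-of-envelope load estimate from the TheServerSide discussion.
    // Inputs: 15,000 transactions over 3 hours, ~20 rows inserted each.
    public class LoadEstimate {
        public static void main(String[] args) {
            double transactions = 15_000;
            double seconds = 3 * 3_600;           // 3 hours in seconds
            double rowsPerTransaction = 20;

            double tps = transactions / seconds;  // ~1.39 transactions/sec
            double rowsPerSec = tps * rowsPerTransaction; // ~27.8 rows/sec

            System.out.printf("%.2f transactions/sec, %.1f rows/sec%n",
                    tps, rowsPerSec);
        }
    }

Roughly 28 row inserts per second is a trivial load for any commercial relational database, which is exactly the point the responder was making.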

In our final posting, we find a J2EE application that is approaching its capacity when dealing with a legacy application. The J2EE application is required to pull and push time-critical information as fast as possible. The post goes on to list a number of constraints. Interestingly, the first response pointed out that they needed to define the term "as fast as possible". For one thing, you can never be "as fast as possible", because that definition changes with every change in the technology. The data is described as time-sensitive; that being so, it should have a "shelf life" defined in the business rules. It is information like this that can help bound performance requirements so that engineers have reasonable targets to work toward.

The response goes on to ask: what is the resource constraint that is limiting the application's performance? The original post didn't offer any hints as to what that constraint was, though it did offer some solutions that were being examined. Given the information provided, it was very difficult to assess whether any of those solutions would offer the required relief.

Kirk Pepperdine.


Back to newsletter 043 contents

