Java Performance Tuning

The Roundup February 2004



Just recently, I was talking to a couple of people who thought that C++ was the best language to use for the development of business applications. They also claimed that OO techniques were not always best for performance. Both of these remarks caught me by surprise, because I had thought the tremendous benefits of OO in helping to simplify complexity would have made the performance question moot. I was also surprised to find that they had been engaged in building a billing system using C++.

After politely listening to their point of view, I started to ask them a few questions, and I was not all that surprised to learn that they hadn't really thought about these kinds of questions before. It would be remiss to question the intelligence of these people without first acknowledging that they had successfully deployed a complex billing application written entirely in C++. That is no small feat. The reason I was not surprised is that such questions rarely occur to those of us who have not spent time pondering performance.

So what were my questions? I asked them what they had found to be the bottlenecks in their application. I then asked them if they felt that communications with the database might have a larger impact on performance than the choice of language. All at once their eyes lit up when they realized where I was going.

By the end of the conversation I had easily convinced them that they could no longer claim that an application written in C++ would outperform an application written in any other language. They also came away from the table with a new view of performance and performance tuning. Even though performance will always be gauged by perception, statements about performance need to be backed up with empirical measurements that can be related to the requirements of your end users.

Sadly, their view was that management felt they could solve all of their performance problems by simply purchasing more hardware. Even though they felt that this belief was a bit silly, they hadn't bothered to question it. In the end, they realized that not all performance problems can be solved by throwing more hardware at them. Without understanding the underlying cause of a performance problem, one can spend a lot of money buying more hardware and never see any tangible results from the investment.

The Server Side

The first Java-related question that we look at from the ServerSide is: do object-relational mapping tools work well with large applications? It takes a deeper examination of this question to understand the complexity involved in mapping relational tables to objects. If we understand the differences between the object and relational models, then we can understand what needs to happen in order to move effectively between them. Both models have a notion of identity. In the relational world, we have keys. Keys allow us to uniquely identify a piece of data when that identity is important to us. Objects also have identity, but that identity is maintained by the underlying system. In many cases, the relational model tolerates duplication; in the world of objects, duplication of identities amounts to identity theft. So among the duties that an O-R mapping tool takes on is the responsibility to maintain object identity, a problem that is typically not tackled head-on in a home-grown persistence framework.
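
To make the identity problem concrete, here is a minimal sketch of the identity-map pattern that O-R mapping tools implement internally. The class and method names are my own invention for illustration, not any real tool's API.

import java.util.HashMap;
import java.util.Map;

// A minimal identity map (hypothetical, for illustration only). It guarantees
// that one primary key maps to exactly one in-memory object, so two queries
// for the same row never produce two distinct instances.
public class IdentityMap {
    private final Map cache = new HashMap(); // primary key -> mapped object

    // Return the already-loaded object for this key, or null if none is cached.
    public Object lookup(Object primaryKey) {
        return cache.get(primaryKey);
    }

    // Register a freshly loaded object under its primary key.
    public void register(Object primaryKey, Object mapped) {
        cache.put(primaryKey, mapped);
    }
}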

Another duty that O-R mapping tools take on is the all-too-important task of generating the mappings. Commercial O-R mapping tools often come with the capability to generate the mappings for you automatically. This can be a huge win in development time when one is dealing with a large application. In addition, the O-R mapping tool can decouple your application from the underlying database implementation. In the case of Toplink, Oracle engineers are responsible for ensuring that the tool generates the most efficient bindings to the underlying database. Using such a tool thus allows development teams to take advantage of features that enhance performance while minimizing the dependency on the underlying database implementation. Given this, I would think that there are very few instances in which I'd not consider using an O-R mapping tool.

In yet another thread on the O-R mapping topic, the scalability of Hibernate is called into question. It's an interesting question because Hibernate, being a new product, has most likely not yet been integrated into the kind of larger systems that could answer the question definitively. But from the blogs of Gavin King, we do know that a lot of emphasis has been put on performance. The evidence includes Gavin's recent discovery that using finalize to close JDBC objects was causing a significant drag on performance. Still the question remains: was it a big enough drag to be noticeable in a large application? If one surveys such applications, one is most likely to find that the primary bottlenecks lie elsewhere, in places like network communications. Though there are definitely situations where an O-R mapping tool may be the bottleneck, I like to quote one of those responsible for Toplink who, when asked by customers how fast Toplink will run, responds: "how fast can your bicycle go?"
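
To see why finalizers can be such a drag, consider the following sketch. This is my own illustration of the two resource-management styles, not Hibernate's actual code: an object that relies on finalize() to close its JDBC statement holds the resource until some future GC cycle runs the finalizer, and every instance pays the cost of the finalizer queue, whereas an explicit close in a finally block releases the resource immediately.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcCleanup {

    // Anti-pattern: the Statement stays open until the collector eventually
    // runs the finalizer, and finalizable objects add GC overhead.
    static class FinalizingWrapper {
        private final Statement stmt;
        FinalizingWrapper(Statement stmt) { this.stmt = stmt; }
        protected void finalize() throws Throwable {
            try { stmt.close(); } finally { super.finalize(); }
        }
    }

    // Preferred: close deterministically in a finally block.
    static void query(Connection con) throws SQLException {
        Statement stmt = con.createStatement();
        try {
            ResultSet rs = stmt.executeQuery("SELECT 1");
            while (rs.next()) { /* consume rows */ }
        } finally {
            stmt.close(); // released immediately, no finalizer cost
        }
    }
}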

The JavaRanch

Moving on to the JavaRanch, one will find a number of questions that really pertain to micro-benchmarking. Some of the questions looked downright silly, but still, they are real questions and it is best to tackle them head-on. In this case, we should recall the infamous quote from Knuth, "premature optimization is the root of all evil." Of course, any statement that takes such an extreme position leaves its author open to some criticism. But in each of the instances I can cite, it seemed as if the poster was opting for premature optimizations that would most likely not make any difference. All of this brings into question the usefulness of the results of a micro-benchmark; or does it?

The results of a micro-benchmarking exercise are most useful when one has identified a bottleneck and is interested in determining whether a change in the code could deliver the required performance improvement. Results taken outside of that context may or may not hold. So although I feel that it's useful to know the answers to some of the smaller questions, one needs to understand when it's best to apply these lessons and when it's best to avoid premature optimizations.

To illustrate the point just made, let's consider the question asked regarding the potential performance difference between iterators and enumerations. This is an interesting question because the use of these two interfaces is distributed widely enough throughout one's application that it often makes sense to mandate a preference. In other words, all things being equal, one should choose one technique over the other. Here's the code that was posted.

import java.util.*;

public class Performance {
    public static void main(String[] args) {
        Vector v = new Vector();
        Object element;
        Enumeration en;   // named 'en' because 'enum' is a reserved word from Java 5 onwards
        Iterator iter;
        long start;

        // Populate the vector with a million identical references
        for (int i = 0; i < 1000000; i++)
            v.add("New Element");

        en = v.elements();
        iter = v.iterator();

        // Time a full traversal using the Iterator
        start = System.currentTimeMillis();
        while (iter.hasNext()) {
            element = iter.next();
        }
        System.out.println("Iterator took " + (System.currentTimeMillis() - start));

        System.gc();   // request a GC to free memory between the two measurements

        // Time a full traversal using the Enumeration
        start = System.currentTimeMillis();
        while (en.hasMoreElements()) {
            element = en.nextElement();
        }
        System.out.println("Enumeration took " + (System.currentTimeMillis() - start));
    }
}

If we examine the code, we see that it only considers the class java.util.Vector. That in itself is not a problem as long as we realize that Iterator and Enumeration are both interfaces and, as such, the results of this micro-benchmark can only be applied to that one class. But does this code really answer the fundamental question? It's difficult to tell, because even though Enumeration was reported as being faster, there was no data posted to back up that claim.

In an ideal world, one would expect the execution time for a constant set of instructions to be constant. In the real world, interruptions occur, and in doing so they inject variability into the overall run time. In measuring an effect, then, we need to eliminate as many of those interruptions as we possibly can. The author of this code tries to do just that by injecting a System.gc() in the middle of the code; in taking that step, he is trying to eliminate the effects of garbage collection. The question is, has he gone far enough? And again the answer is, we don't really know, although an inspection of the code suggests that the answer is no. Using statistics would help us to see things that are otherwise not visible, but that was not done here.
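
As a sketch of what using statistics might look like here, the following harness (my own, relying on nothing beyond the JDK) repeats the iterator measurement a number of times, discards the first few runs as JIT warm-up, and reports the mean and standard deviation. The same treatment applied to the Enumeration loop would give numbers that can actually be compared.

import java.util.Iterator;
import java.util.Vector;

public class IterationStats {
    public static void main(String[] args) {
        Vector v = new Vector();
        for (int i = 0; i < 1000000; i++)
            v.add("New Element");

        int trials = 20;
        int skip = 5;                       // discard warm-up runs
        long[] times = new long[trials];
        for (int t = 0; t < trials; t++) {
            long start = System.currentTimeMillis();
            for (Iterator iter = v.iterator(); iter.hasNext(); ) {
                iter.next();
            }
            times[t] = System.currentTimeMillis() - start;
        }

        // Summarize the post-warm-up runs: mean and sample standard deviation.
        double mean = 0;
        for (int t = skip; t < trials; t++)
            mean += times[t];
        mean /= (trials - skip);

        double var = 0;
        for (int t = skip; t < trials; t++)
            var += (times[t] - mean) * (times[t] - mean);
        double stddev = Math.sqrt(var / (trials - skip - 1));

        System.out.println("Iterator: mean=" + mean + "ms, stddev=" + stddev + "ms");
    }
}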

JavaGaming.org

Meanwhile, over at JavaGaming, the quest for the perfect gaming experience continues. As could be expected, the gamers have started investigating the capabilities of JDK 1.5. Early indications are that we should all expect an increase in overall performance, though just how much has yet to be determined. Certainly the NIO package has received some attention, as preliminary results have caused at least one jaw to drop.

Returning to the older technology, there is a very good discussion of generational GC as one of the game developers tries to sort out the details. Included in this thread are excellent descriptions of incremental GC and the train algorithm that supports it.

The question centers on what happens to large objects that are not released before a GC, and what happens if Eden is full. Objects are normally created in Eden. In generational GC terms, objects are aged by the number of GC cycles they survive; on each successive GC, objects are moved from one generational space to another until they are tenured into old space. If an object being created does not fit into Eden, it is created directly in old space. Old space is intended to segregate older objects away from the younger ones. This segregation reduces the overall cost of GC by reducing the number of objects to scan. Understanding these details can help you decide which GC algorithm to choose and how best to configure it so that it runs as optimally as it can.
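
One inexpensive way to watch this behavior for yourself is to run a small allocator under verbose GC logging. The program below is my own sketch; the flags shown are the classic HotSpot flags of that era (newer JDKs replace -XX:+PrintGCDetails with -Xlog:gc), and the exact threshold at which an array bypasses Eden depends on the VM and its heap settings.

// Run with: java -verbose:gc -XX:+PrintGCDetails LargeAlloc
public class LargeAlloc {
    public static void main(String[] args) {
        // Short-lived small objects are created in Eden and die there.
        for (int i = 0; i < 100000; i++) {
            byte[] small = new byte[1024];
        }
        // A sufficiently large array may not fit in Eden and can be
        // allocated directly in the old generation.
        byte[] huge = new byte[64 * 1024 * 1024];
        System.out.println("allocated " + huge.length + " bytes");
    }
}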

Final Word

As you may know, Jack and I also write a performance column for IBM developerWorks. What is interesting about this is the manner in which IBM collects data for feedback. The current article gives a pretty detailed description of how the runtime executes a cast and what happens if that cast fails. The entire description is framed around a real question that was posted on one of the forums we regularly cover. The best (negative) response I've seen to date complains that we went through a long, involved explanation only to conclude that the best thing to do was to follow a best practice. Big surprise. But is this criticism really valid? There are plenty of "best practices" that hinder performance; the fact that this one doesn't should be taken as good news. But apparently, in at least one case, it was not. The issue is: how do we determine which best practices are good for performance, which are there for other purposes, and which may not be best for anything at all, unless we question them? If asking why results in my being criticized for finding the "obvious" answer, then I'm willing to accept that criticism. One thing is for certain: as long as I'm able to ask questions, the question why will remain part of my repertoire.
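
For readers who missed the column, the failure mode under discussion looks like this in miniature. This is my own example, not the one from the article: a downcast that fails raises ClassCastException at run time, while testing with instanceof before casting, when the type is uncertain, avoids paying for the failure.

import java.util.ArrayList;
import java.util.List;

public class CastDemo {
    public static void main(String[] args) {
        List list = new ArrayList();
        list.add(Integer.valueOf(42));

        Object o = list.get(0);

        // A failed downcast raises ClassCastException at run time.
        try {
            String s = (String) o;
        } catch (ClassCastException e) {
            System.out.println("cast failed: " + e);
        }

        // Test before casting when the type is uncertain.
        if (o instanceof String) {
            String s = (String) o;
        } else {
            System.out.println(o.getClass().getName() + " is not a String");
        }
    }
}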

Kirk Pepperdine.



