Java Performance Tuning

The Roundup February 2005


Back to newsletter 051 contents

This month the Java Performance newsletter gets to do something not very typical for a newsletter: announce that Joe Ottinger has joined TheServerSide as an editor. Joe and I worked at JDJ for a short time, and we've co-authored for IBM developerWorks on non-performance-related subjects. I wish him well in his new position.

The Server Side

This month we will start with TheServerSide, where we get to ask the question: when is a memory leak not a memory leak? The actual title of the thread is "My first heap/performance analysis". The thread starts with a description of the process used to gather the profiling information. The reason for investigating memory is a common one: the familiar OutOfMemoryError followed by the JVM exiting to the operating system. Not a joyous event in anyone's book! The reaction (of course) is an investigation into the problem, and the posting goes into some detail about how the memory and execution profiles were collected. A little later in the thread a suggestion is made to use the JVM settings -Xms128m -Xmx512m. Analysis with this configuration showed that the memory leaks magically disappeared. When it comes to computing, I don't believe in magic, and in this instance there is a perfectly reasonable explanation for what is happening.

As the application works, it creates large numbers of objects. These objects take up space, and they retain that space until they become eligible to be garbage collected. As we all know, an object is only eligible for GC when it is no longer reachable from any live reference. Let's speculate for a moment that this system has a session that holds onto the objects related to it, and that these objects are only released when the session itself is released. In this scenario the session could consume more and more memory (and consequently look like a memory leak) until there is no more memory left to consume. In the original tests, that limit was 128MB. In the final tests, the JVM's Java heap was allowed to grow to four times that size. With that much extra memory, it is much more likely that a session would be terminated before it ran the JVM out of memory.
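To make the speculation concrete, here is a minimal sketch of the pattern being described; the Session class and its methods are illustrative assumptions, not code from the original thread. Every request allocates data that the session keeps reachable, so under a small heap it looks exactly like a leak until the session is released:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a session that retains every object created on its
// behalf until the session itself is released.
public class Session {
    private final List<byte[]> retained = new ArrayList<>();

    // Each request allocates data that stays reachable through the session.
    public void handleRequest(int sizeInBytes) {
        retained.add(new byte[sizeInBytes]);
    }

    // Only after release() do the buffers become eligible for GC.
    public void release() {
        retained.clear();
    }

    public int retainedCount() {
        return retained.size();
    }
}
```

Nothing here is a true leak: the memory is reclaimable the moment the session ends. Whether the heap runs out first is purely a question of heap size versus session lifetime.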

Thus what appears to be a memory leak may be just the result of a lack of memory. But beware: adding more memory to a system may appear to solve a problem when, in reality, it is only masking it. The only way to know for sure is to profile. A flat memory utilization curve signals a healthy application; an upward-sloping curve means you've still got a problem, which translates into more work.

Here is a quick, nifty trick to avoid building a DOM tree when all you really want is an XML document. You can find the entire thread on TheServerSide.

  1. get ResultSet
  2. while there is data
     2.1. read a new row
       2.1.1. read a field
       2.1.2. StringBuffer.append("<tag>" + field + "</tag>")
       2.1.3. read a field
       2.1.4. StringBuffer.append("<tag>" + field + "</tag>")
  3. send the data to the browser.
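The pseudocode above can be sketched in Java along these lines. The tag names and the row representation are assumptions for illustration: a real version would loop over a JDBC ResultSet instead of a List, and would escape XML special characters in the field values.

```java
import java.util.List;

// Sketch: stream rows straight into XML text with a StringBuilder instead
// of building a DOM tree first. Rows stand in for a JDBC ResultSet here so
// the example is self-contained.
public class RowsToXml {
    public static String toXml(List<String[]> rows, String[] tags) {
        StringBuilder xml = new StringBuilder("<rows>");
        for (String[] row : rows) {
            xml.append("<row>");
            for (int i = 0; i < row.length; i++) {
                xml.append('<').append(tags[i]).append('>')
                   .append(row[i])                      // real code must XML-escape this
                   .append("</").append(tags[i]).append('>');
            }
            xml.append("</row>");
        }
        return xml.append("</rows>").toString();
    }
}
```

The point of the trick is that no object tree proportional to the document size is ever built; memory cost is just the output buffer.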

To use Java or not to use Java, that was the question presented in a thread about how best to parse a 20MB text file. The question started off very ambiguously, which translated into some not-so-useful answers, but once it was discovered that the structure of the file was |aaa|bbb|this field|xxx|yyyyyy|, the suggestion came in loud and clear that this should be done with awk and sed. Now, I've used these tools in the past to parse files with a similar structure, and I have to admit that they performed quite well. So my first inclination was to say yes, that sounds like the best answer. But any hunch can be very right or very wrong, and the only way to tell is to benchmark. In this case, the Java option was benchmarked and it came in under 5 seconds. We don't have a benchmark for the Unix scripting idea, but it seems fair to believe that it would be close to 5 seconds as well. That said, we've already been burnt once by saying that Java is too slow for a task.
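For the curious, a minimal pure-Java sketch of parsing such a pipe-delimited file might look like the following; the target field index and the use of a Reader are assumptions for illustration, not details from the thread.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

// Read pipe-delimited lines and collect the field at a given index.
public class PipeParser {
    public static List<String> extractField(Reader source, int index) throws IOException {
        List<String> out = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                // split on the literal '|'; a leading delimiter yields an
                // empty first field, so "this field" sits at index 3
                String[] fields = line.split("\\|");
                if (fields.length > index) {
                    out.add(fields[index]);
                }
            }
        }
        return out;
    }
}
```

BufferedReader plus String.split is roughly what the benchmarked Java solution would have amounted to; at 20MB the whole job is I/O-dominated in any language, which is why Java and awk/sed end up in the same ballpark.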

The JavaRanch

From the JavaRanch we start with a question of Vector vs. ArrayList. The posting starts with a story about how converting from Vector to ArrayList resulted in a 5% jump in CPU utilization while providing better overall response times. The originator of the post was confused as to how that could be. The answer is that Vector, being synchronized, incurred enough synchronization overhead to prevent the application from consuming as much of the CPU as it needed. Removing the synchronization allowed the application to use more CPU and return the results of the calculation to the users in a shorter amount of time. There is no faster resource in your computer than the CPU; because of this, it's the resource you want to be bottlenecked on.
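The difference comes down to the fact that every Vector method is synchronized while ArrayList's are not, so on Vector each add or get pays for lock acquisition. A minimal sketch (the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

// Both Vector and ArrayList implement List, so the same code can drive
// either; only the per-call locking differs.
public class ListChoice {
    public static List<Integer> fill(List<Integer> list, int n) {
        for (int i = 0; i < n; i++) {
            list.add(i);   // synchronized per call on Vector, unsynchronized on ArrayList
        }
        return list;
    }

    public static long sum(List<Integer> list) {
        long total = 0;
        for (int value : list) {
            total += value;
        }
        return total;
    }
}
```

Functionally the two are interchangeable; the ArrayList version simply spends the cycles it saves on lock traffic doing useful work, which is exactly the "higher CPU, better response time" effect the poster saw.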

The homework question of the month comes in the form of a puzzler: what is the best way to subtract two strings? For example, misinformation - inform = misation. I'll let you all think about this one for a while.
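For those who want a starting point, one possible reading of "subtraction" is removing the first occurrence of the second string from the first; whether that is the intended answer, and whether it is the best way, is left open, as in the posting.

```java
// One possible take on the puzzler: remove the first occurrence of t from s.
public class StringSubtract {
    public static String subtract(String s, String t) {
        int at = s.indexOf(t);
        if (at < 0) {
            return s;  // nothing to subtract
        }
        return s.substring(0, at) + s.substring(at + t.length());
    }
}
```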

Sometimes a performance problem is just beyond your reach. This was the case in our first posting (;action=display;num=1107915137). The posting complains that OO hasn't provided the performance that is needed. In this case, the developer was working with the Java2D implementation for the Mac. Unfortunately, what the developer didn't know is that this Java2D implementation is somewhat broken. Apple plans to release a fix with J2SE 5.0. So the answer isn't to avoid using objects, as the title suggests, but rather to fully investigate where the problems may lie.

Finally, we get some early indication that JDK 1.6 (6.0?) is going to offer some performance improvements in the area of math. The results below were obtained on a Pentium 4, 2.8GHz, with 1GB of RAM.

                    Volatile Long | raw (64 bits) | math of FPU
JDK 1.6.0 ea b19       60790273      2147483647       2475860
JDK 1.5.0-b64          18814675      2147483647        550070
JDK 1.4.2             183486238         5688282       1921968

All measurements in loops per second.

You can see the test at;action=display;num=1106148201.

Kirk Pepperdine.


Last Updated: 2022-06-29