Java Performance Tuning

The Roundup March 2005


Back to newsletter 052 contents

Your application runs fine for a few hours, then starts going slower and slower until you finally have to restart it. What could be causing this degradation in performance? Let's enter TheServerSide and see ...

The Server Side

One reason why an application could slowly degrade in performance is that the application is suffering from a memory leak; left to its own devices, it would eventually run out of memory. If your application is displaying these types of symptoms, consider running a memory profiler. Odds are the JVM is performing regular full garbage collections. These garbage collections will take longer and longer over time, as the cost of each GC increases with the number of objects left in the system. Don't forget, GC will stop all of your application threads while it executes. A stop-gap measure may be to consider moving to a parallel and/or concurrent version of the garbage collector.
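To make the symptom concrete, here is a minimal sketch of the classic leak pattern a memory profiler would reveal: a long-lived (here static) collection that only ever grows. The class and method names are invented for illustration, not taken from the discussion.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a leak: every "request" parks data in a static cache
// that nothing ever evicts, so each full GC finds more live objects
// and takes longer than the one before it.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void handleRequest() {
        CACHE.add(new byte[1024]); // retained forever via the static root
    }

    public static int cachedEntries() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++)
            handleRequest();
        // all 1000 entries are still reachable: a profiler would show
        // LeakyCache.CACHE as the GC root keeping them alive
        System.out.println("live entries: " + cachedEntries());
    }
}
```

A heap profiler points you at exactly this kind of ever-growing root; the fix is an eviction policy, not a bigger heap.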

Another discussion at TheServerSide considered why "more" is not always "better". The question posed was: why does increasing the number of threads in the JBoss thread pool result in a decreased number of transactions? The response to this question is a bit mixed up in that it assigns the task of context switching between threads to the JVM. This used to be true when JVMs implemented "green threads", but it is no longer the case with modern JVMs. Green threads are different from native threads in that they are simulated by the JVM. As alluded to, modern JVMs map each Java thread directly onto an OS thread, so it is the operating system's responsibility to context switch between them. The more threads you have, the more threads the OS must switch through, and that switching takes time away from processing; this is the fundamental reason why setting up a thread pool with too many threads is not a good idea. The recommended number of threads to put into any thread pool is twice the number of CPUs on the machine: on a 4 CPU machine, thread pools should be limited to 8. This advice is backed up by a nice little benchmark that the author of the question posted.
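The sizing rule above can be sketched with the java.util.concurrent executors introduced in J2SE 5.0; the class name and the trivial workload are illustrative only.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the "2 x CPUs" thread pool sizing rule from the thread.
public class SizedPool {
    public static int recommendedPoolSize() {
        // twice the number of CPUs, e.g. 8 threads on a 4 CPU machine
        return 2 * Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(recommendedPoolSize());
        for (int i = 0; i < 100; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    // stand-in for a real transaction
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("pool bounded at " + recommendedPoolSize() + " threads");
    }
}
```

Queued tasks simply wait their turn rather than adding context-switch overhead, which is exactly the behaviour the benchmark in the thread rewards.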

In our next post, the author was looking for a reference to back his argument that co-locating processes on a single piece of hardware will offer better performance for a typical J2EE application. The obvious answer to this question is that co-locating the database with the application server moves the communications from the network to the local loopback interface. The loopback interface is the most efficient inter-process communication mechanism after shared memory. However, one must take care to ensure that the supporting hardware has enough memory, CPU and I/O channels to handle the extra load.
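The nice property of the loopback interface is that the very same socket code works locally and remotely; only the address changes. Here is a toy sketch, with an invented single-byte echo server standing in for the database, showing a round trip over 127.0.0.1 that never touches the physical network.

```java
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: a one-byte echo over the loopback interface. Co-locating
// client and server means this path replaces a real network hop.
public class LoopbackEcho {
    public static int echoOnce() throws Exception {
        final ServerSocket server = new ServerSocket(0); // any free port
        Thread echo = new Thread(new Runnable() {
            public void run() {
                try {
                    Socket s = server.accept();
                    int b = s.getInputStream().read();
                    s.getOutputStream().write(b); // echo the byte back
                    s.close();
                    server.close();
                } catch (Exception e) { /* sketch: ignore */ }
            }
        });
        echo.start();
        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        client.getOutputStream().write(42);
        int reply = client.getInputStream().read();
        client.close();
        echo.join();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("echoed: " + echoOnce());
    }
}
```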

The JavaRanch

Moving on to the JavaRanch, in our first post we have a question regarding a JSP based client application which performs fine when executed locally but is very slow when accessed using a public IP. Steve Souza responds to this post with the perfect advice: the application needs to be monitored so that you can see which part of it is performing poorly. He points out that JAMon will soon support more features that will allow it to be more easily hooked into Servlets as a Servlet filter, and he gives a brief explanation of how the current incarnation can be hooked up as one. There is no doubt that JAMon can give you an interesting view into your application's performance. You can find Steve's offering at http://www.jamonapi.com. [On an aside, a common cause of this local-fast/remote-slow symptom is doing a reverse DNS lookup during the page access - ed.]
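What a monitoring filter records is essentially hits and elapsed time per label. This toy class (all names invented, and much simpler than JAMon itself) sketches that idea, with a comment marking where a Servlet filter would call it.

```java
import java.util.HashMap;
import java.util.Map;

// Toy per-label monitor: stats[0] = hit count, stats[1] = total millis.
// A real tool like JAMon aggregates far more, but the shape is similar.
public class ToyMonitor {
    private static final Map<String, long[]> STATS = new HashMap<String, long[]>();

    public static synchronized void record(String label, long elapsedMillis) {
        long[] stats = STATS.get(label);
        if (stats == null) {
            stats = new long[2];
            STATS.put(label, stats);
        }
        stats[0]++;
        stats[1] += elapsedMillis;
    }

    public static synchronized long hits(String label) {
        long[] stats = STATS.get(label);
        return stats == null ? 0 : stats[0];
    }

    public static void main(String[] args) {
        // a Servlet filter would wrap chain.doFilter() with this timing
        long start = System.currentTimeMillis();
        // ... handle the request ...
        record("/checkout.jsp", System.currentTimeMillis() - start);
        System.out.println("hits: " + hits("/checkout.jsp"));
    }
}
```

Per-page numbers like these are what let you see, for example, that only the pages doing a reverse DNS lookup are slow.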

In our next discussion, the author was asking if there were any standard tools to benchmark his application. The posting went on to explain that his benchmark kept producing wildly differing results. The code was included as part of the posting. Unfortunately, the code presents yet another example of how not to construct a micro performance benchmark.

Much has been written about micro performance benchmarks, both showing how useful they can be and how easy it is to get them wrong. In his IBM DeveloperWorks article "Anatomy of a flawed microbenchmark", Brian Goetz lists a checklist of things that need to be included in a benchmark. Even when these things are included, as I show in my own IBM DeveloperWorks article "When good benchmarking goes bad", a benchmark can still offer misleading results. In contrast to Brian's article, mine actually shows the benchmark producing a reliable and believable set of results. This proves that the technique is useful, but it also demonstrates that you need to apply it with great care. The articles can be found at http://www-128.ibm.com/developerworks/java/library/j-perf03085/ and http://www-128.ibm.com/developerworks/java/library/j-jtp02225.html
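One item from those checklists can be sketched directly: warm the code up so HotSpot can compile it before you take timings, otherwise the early runs measure interpretation and compilation rather than the code itself. The workload and names here are arbitrary.

```java
// Sketch of a warmed-up microbenchmark: discard the first timing run.
public class WarmedBenchmark {
    static int work(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += i * i;
        return sum;
    }

    public static long timeIt(int reps) {
        long start = System.currentTimeMillis();
        int sink = 0;
        for (int i = 0; i < reps; i++)
            sink += work(10000);              // accumulate a result so the
        if (sink == 42) System.out.print("");  // JIT can't eliminate the loop
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        timeIt(5000); // warm-up run: HotSpot compiles work(); discard timing
        System.out.println("timed run: " + timeIt(5000) + " ms");
    }
}
```

Dead-code elimination and missing warm-up are only two of the pitfalls; the two articles above cover the rest.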

The next post asks a question about the relative performance of bean managed persistence (BMP) and container managed persistence (CMP). Both of these persistence strategies are core to the Enterprise Java Bean (EJB) server Entity Bean (EB) technology. The difference between them is that with BMP, the programmer is responsible for writing the persistence code, whereas with CMP, the container creates the code to persist the data automagically. BMP code tends to be eager in that it loads everything it can all in one shot; CMP code tends to load data as it is needed. However, EB technology has gotten a pretty bad rap in the last couple of years. Many authors, including Bruce Tate (see the interview with Bruce Tate in an earlier newsletter), say the key to EJB performance is to eliminate the use of EBs. In fact, the advice given in this thread follows that line of thinking.

Although one can’t ignore the depth of the evidence that EB technology can perform rather poorly when used with non-optimal designs, there are also a fair number of EB applications that we’ve had some involvement in that do perform rather well. So although one can get good performance using EB technology, it would appear to be a difficult target to achieve.

JavaGaming.org

We start our look at JavaGaming.org with a question asking whether HotSpot can make a tail recursive method as efficient as a loop. The coding technique known as recursion involves having a method call itself; calculating factorials is the classic example. We could use a loop as follows:

public int factorial(int n) {
    int product = 1;
    for (int i = n; i > 1; i--)
        product *= i;
    return product;
}

The recursive equivalent is as follows

public int factorial(int n) {
    if (n == 0)
        return 1;
    else
        return n * factorial(n - 1);
}

What makes this latter form of the method inefficient is that every call creates a new stack frame. This can quickly cause the stack to overflow which has the not so interesting side effect of halting execution. The optimization is to convert the code to a tail recursive version. A tail recursive version would be

public int factorial(int n) {
    return factorialHelper(1, n);
}

public int factorialHelper(int product, int n) {
    if (n <= 1)
        return product;
    return factorialHelper(n * product, n - 1);
}

The difference between this code fragment and the prior one is that we’ve introduced a helper method. This helper method is not always needed but in this case it helps to transform the method call to include the current product. Including this current product in the call means that the last thing that the helper method does is call itself and hence, the name, tail recursion.

The beauty of a tail recursive method is that there is no reason to return to it. A "smart" compiler will take advantage of this by having the method reuse the current stack frame instead of creating a new one. This means that when the terminating condition is reached, there is no need to unwind the stack (it hasn't been built up): the helper can return directly to the original caller.
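In fact, reusing the frame is equivalent to looping over the helper's parameters, so we can sketch by hand what a tail-call-optimizing compiler effectively produces (the class name is invented):

```java
// Hand-applied tail call optimization: each "recursive call" of
// factorialHelper becomes one iteration that overwrites the parameters.
public class IterativeFactorial {
    public static int factorial(int n) {
        int product = 1;
        while (n > 1) {
            product = n * product; // same arithmetic as the tail call
            n = n - 1;             // new arguments replace the old frame
        }
        return product;
    }

    public static void main(String[] args) {
        System.out.println("5! = " + factorial(5)); // prints 5! = 120
    }
}
```

No stack frames accumulate, so this version can never overflow no matter how large n is.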

Unfortunately, the Java specification doesn't require that HotSpot recognize tail recursion; that optimization is left to the devices of each JVM implementer. The danger is that if you write a tail recursive method, it may be fine on one JVM only to run out of stack space on another. It may be only a little crack in the WORA mantra, but it can be a devastating one.

Java gamers live and die by a good timer, which unfortunately Java running on Windows has never really had. If you are in the situation where you are looking for a good timer on Windows, then you may want to consider Gage. Gage relies on native code and as such will only work on Windows. But, by all reports, it does offer some interesting advantages over using System.currentTimeMillis().
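A quick way to see what the gamers are complaining about is to measure the granularity of System.currentTimeMillis() by spinning until its value changes; on some older Windows VMs it was reported to tick in steps of 10 ms or more, which is hopeless for frame timing. The class name is invented.

```java
// Measure the smallest observable step of System.currentTimeMillis().
public class TimerGranularity {
    public static long measureGranularityMillis() {
        long t0 = System.currentTimeMillis();
        long t1 = t0;
        while (t1 == t0)                      // spin until the clock ticks
            t1 = System.currentTimeMillis();
        return t1 - t0;                       // size of one observable tick
    }

    public static void main(String[] args) {
        System.out.println("clock ticks in steps of "
                + measureGranularityMillis() + " ms");
    }
}
```

(As of J2SE 5.0 there is also System.nanoTime(), which is intended for exactly this kind of elapsed-time measurement.)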

Our final posting looks at JRockit, an interesting JVM that has been pushing a lot of performance related server side advancements. It is interesting that the developers at JavaGaming.org are interested in JRockit, as it certainly doesn't play well on the desktop. That said, the benchmarks listed in this thread clearly show that it works very well when used as intended: running long term server processes. JRockit has a very different feel than the more traditional general purpose JVMs, and it comes with a number of features, including low overhead monitoring capabilities, that blow away all the rest of the JVMs. If you are having performance problems with your server, you may want to consider this alternative JVM.

Discussions Referenced

http://www.theserverside.com/discussions/thread.tss?thread_id=32328
http://www.theserverside.com/discussions/thread.tss?thread_id=32665
http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=15&t=001002
http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=15&t=001000
http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=15&t=000991
http://www.javagaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=Tuning;action=display;num=1109921294
http://www.javagaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=Tuning;action=display;num=1110075500
http://www.javagaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=Tuning;action=display;num=1102701442;start

Kirk Pepperdine.




Last Updated: 2024-04-28
Copyright © 2000-2024 Fasterj.com. All Rights Reserved.
URL: http://www.JavaPerformanceTuning.com/news/roundup052.shtml