Java Performance Tuning

The Roundup April 28th, 2003



I'm writing this column sitting at 37 thousand feet (11,277 meters for the rest of the world), hurtling past Melbourne, Florida at Mach 0.81. For those of you who may not be all that familiar with the east coast of Florida, Melbourne is just south of the historic Cape Canaveral, home of the Kennedy Space Center. I've flown over this site many times and I always enjoy the bird's-eye view of the launch sites. It is from this site that the crews of the Challenger and the Columbia enjoyed their last minutes of terra firma as they prepared for their journeys into space. It was only a short 500 years ago that Ponce de Leon explored this area during his quest for the fountain of youth. At that time, these arduous voyages to the "new world" were fraught with danger. Many vessels and their crews were lost as the old world crossed the chasm between it and the new. But the dangers did not deter these adventurous souls.

Just as the Spanish galleon represented one of the most complex machines of its time, there is no doubt that the Space Shuttle is, to date, one of the most complex machines ever built. And just as even the most minor of failures on a galleon often had catastrophic consequences, so too is this the case for the Space Shuttle. And just as the loss of those galleons did not stop the flow of explorers from crossing the expanse between the continents, I firmly believe that the latest tragedy will not stop the flow of explorers ever eager to cross the expanse that separates us from the unknown.

JavaGaming.org

A while back, I asked if anyone had noticed that the -O option had stopped doing anything. Well, it comes as no surprise that a participant of the Java gaming discussion group had noticed, because with the byte codes no longer being optimized, class files were often larger than the original source files. It was upon making this observation that he went looking for a better compiler. The most promising candidate is GCJ from the GNU project. GCJ is a compiler that can generate byte code or native code from either source code or byte code. What follows is a lengthy review of the GCJ project in which the author describes the problems that were encountered and how they were solved. In the end, one can conclude that GCJ is still not ready for prime time.

On a different thread, a post asked for help in locating a class optimizer. The resulting discussion yielded two useful tips. First, class optimizers are pretty much only good for reducing the size of a class file. Secondly, most of the optimizations are done by the JIT. Now, if you don't know how extreme game programmers are, consider this: the originator of the post was looking for a 135 byte reduction in size! I can't imagine why, but then again, I'm not on the gaming front line.

The JavaRanch

Over at the saloon at the JavaRanch, a greenhorn asked if there was any speed difference between ++j and j++. Though the answer would appear to be fairly obvious, the members of the discussion group did the prudent thing: they tested it. In one test, ++j ran in 8800ms and j++ ran in 5700ms. Here's the code:

            // time a loop using the pre-increment operator
            long start = System.currentTimeMillis();
            for (int i = 0; i < Integer.MAX_VALUE; ++i);
            System.out.println(System.currentTimeMillis() - start);

            // time a loop using the post-increment operator
            start = System.currentTimeMillis();
            for (int i = 0; i < Integer.MAX_VALUE; i++);
            System.out.println(System.currentTimeMillis() - start);

These results would seem to suggest that the prefix and postfix operators are not equal in expense. But the member then swapped the two loops around and reran the test.

            // this time the post-increment loop runs first
            long start = System.currentTimeMillis();
            for (int i = 0; i < Integer.MAX_VALUE; i++);
            System.out.println(System.currentTimeMillis() - start);

            // and the pre-increment loop runs second
            start = System.currentTimeMillis();
            for (int i = 0; i < Integer.MAX_VALUE; ++i);
            System.out.println(System.currentTimeMillis() - start);

The results were 8800ms for the first loop and 5700ms for the second: the same timings as before, but now attached to the opposite operators. The difference is almost certainly due to HotSpot: whichever loop runs first pays the cost of being interpreted until HotSpot decides to compile it. So, although micro-benchmarks can be useful, they can lead to erroneous results if one is not careful.
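
For those who want to repeat the experiment, the sketch below shows one way to take the warm-up effect out of the picture: run both loops once before timing anything, then time each loop in both orders. This is only a minimal sketch of mine, not code from the discussion; note too that HotSpot is free to eliminate an empty loop entirely, which is yet another way a micro-benchmark can mislead.

            // A minimal warm-up-aware version of the same test (not from the
            // original thread); the loop bound mirrors the original code.
            public class IncrementBenchmark {

                private static long timePreIncrement() {
                    long start = System.currentTimeMillis();
                    for (int i = 0; i < Integer.MAX_VALUE; ++i);
                    return System.currentTimeMillis() - start;
                }

                private static long timePostIncrement() {
                    long start = System.currentTimeMillis();
                    for (int i = 0; i < Integer.MAX_VALUE; i++);
                    return System.currentTimeMillis() - start;
                }

                public static void main(String[] args) {
                    // warm-up passes: give HotSpot a chance to compile both
                    // methods before anything is measured
                    timePreIncrement();
                    timePostIncrement();

                    // measured passes, taken in both orders
                    System.out.println("++i: " + timePreIncrement() + " ms");
                    System.out.println("i++: " + timePostIncrement() + " ms");
                    System.out.println("i++: " + timePostIncrement() + " ms");
                    System.out.println("++i: " + timePreIncrement() + " ms");
                }
            }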

In another discussion thread following the same micro-benchmarking theme, the discussion centered on the speed of HashMap vs Hashtable. This discussion was mentioned in an earlier roundup, so I will not repeat it here. What is new is that the sheriff started speculating. In this case, a greenhorn stepped in to remind everyone that garbage collection is non-deterministic and can interfere with timings. Also, it takes some time for HotSpot to decide that it's worth optimizing a section of code. As mentioned before, this has the effect of skewing the timings.

Having said this, micro-benchmarks can provide valuable information when they are conducted with care. But since care needs to be taken when running any benchmark, that conclusion should come as no surprise.
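
To make "care" concrete, here is one way the HashMap versus Hashtable comparison could be run with those two warnings in mind: warm the code up before measuring, and ask for a collection before each timed pass so a GC pause is less likely to land inside the measurement. The key count and repeat count below are arbitrary choices of mine, not figures from the thread.

            import java.util.HashMap;
            import java.util.Hashtable;
            import java.util.Map;

            // Sketch of a more careful HashMap vs Hashtable lookup comparison.
            public class MapLookupBenchmark {

                private static long timeGets(Map<Integer, Integer> map, int keys, int repeats) {
                    long start = System.currentTimeMillis();
                    for (int r = 0; r < repeats; r++) {
                        for (int i = 0; i < keys; i++) {
                            map.get(Integer.valueOf(i));
                        }
                    }
                    return System.currentTimeMillis() - start;
                }

                public static void main(String[] args) {
                    int keys = 10000;     // arbitrary sizes, not from the discussion
                    int repeats = 1000;

                    Map<Integer, Integer> hashMap = new HashMap<Integer, Integer>();
                    Map<Integer, Integer> hashtable = new Hashtable<Integer, Integer>();
                    for (int i = 0; i < keys; i++) {
                        hashMap.put(Integer.valueOf(i), Integer.valueOf(i));
                        hashtable.put(Integer.valueOf(i), Integer.valueOf(i));
                    }

                    // warm-up passes so HotSpot has compiled the hot path
                    timeGets(hashMap, keys, repeats);
                    timeGets(hashtable, keys, repeats);

                    // request a collection before each timed pass; this reduces, but
                    // does not eliminate, the chance of a GC pause skewing the result
                    System.gc();
                    System.out.println("HashMap:   " + timeGets(hashMap, keys, repeats) + " ms");
                    System.gc();
                    System.out.println("Hashtable: " + timeGets(hashtable, keys, repeats) + " ms");
                }
            }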

A greenhorn posted a note about his application running the VM out of memory. In this case, the greenhorn was storing ResultSets in a Vector. During the discussion it became clear that not only should the amount of data being returned from the server be restricted, but also that the Vector class was a less than optimal solution for the overall problem. Trouble is, Vector had been deeply embedded throughout the entire application. Consequently, the expense of replacing this class with a more appropriate one was prohibitive. One comment that came out was that the greenhorn should not be trying to preserve a rigid architecture that was causing the problems. I can't comment on the architecture, but it does seem to me that the people who made the small decision to refer to a class instead of an interface did not take into consideration the long-term effects of their seemingly trivial choice. But more on that in a future column.
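
As a small illustration of the class-versus-interface point, the sketch below declares everything against the List interface, so that swapping Vector for something cheaper later is a one-line change rather than an application-wide rewrite. The class name and field layout are made up for the example; they are not from the thread.

            import java.util.ArrayList;
            import java.util.List;
            import java.util.Vector;

            // Hypothetical example: code that refers to the List interface rather
            // than to the Vector class directly.
            public class QueryResults {

                // callers only ever see List, never Vector
                private final List<Object[]> rows;

                public QueryResults(boolean needsSynchronization) {
                    // the implementation choice is confined to this one line, so
                    // replacing Vector with ArrayList later costs nothing elsewhere
                    this.rows = needsSynchronization
                            ? new Vector<Object[]>()
                            : new ArrayList<Object[]>();
                }

                public void addRow(Object[] row) {
                    rows.add(row);
                }

                public List<Object[]> getRows() {
                    return rows;
                }
            }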

The Server Side

From the server side we have the typical number of application-server-centric questions. The first we'll deal with comes from a list member who is trying to get JBoss to scale. The architecture in question runs a web server, JBoss and a database. The post offered no information on where the problem (if any) might lie, making it difficult to provide a specific answer. Even so, some advice was offered that, if followed, would allow one to scale this architecture. The first step is to determine where the bottleneck is. This can be achieved by running a stress test. Once the bottleneck has been located, one can try to load balance the system so that more of the constrained resource is available at the critical choke point. The suggestion put forth was that this choke point would most likely be the database, in which case moving the database off to its own piece of hardware is the solution of the day.
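
For anyone unsure what "running a stress test" means in practice, the fragment below is about the simplest possible load generator: a handful of threads repeatedly hitting one URL while you watch CPU, garbage collection and database load on each tier to see which resource saturates first. The URL, thread count and request count are placeholders of mine, not details from the post, and a real test would normally use a proper load-testing tool.

            import java.io.InputStream;
            import java.net.HttpURLConnection;
            import java.net.URL;
            import java.util.concurrent.atomic.AtomicLong;

            // Minimal load-generator sketch for locating a bottleneck.
            public class SimpleStressTest {

                static final AtomicLong totalMillis = new AtomicLong();

                public static void main(String[] args) throws Exception {
                    final URL target = new URL("http://localhost:8080/app/somePage"); // placeholder
                    final int threads = 20;            // arbitrary load level
                    final int requestsPerThread = 500;

                    Thread[] workers = new Thread[threads];
                    for (int t = 0; t < threads; t++) {
                        workers[t] = new Thread(new Runnable() {
                            public void run() {
                                for (int i = 0; i < requestsPerThread; i++) {
                                    long start = System.currentTimeMillis();
                                    hitOnce(target);
                                    totalMillis.addAndGet(System.currentTimeMillis() - start);
                                }
                            }
                        });
                        workers[t].start();
                    }
                    for (int t = 0; t < threads; t++) {
                        workers[t].join();
                    }

                    long requests = (long) threads * requestsPerThread;
                    System.out.println("average response time: "
                            + (totalMillis.get() / requests) + " ms");
                }

                // issue one request and drain the response
                static void hitOnce(URL target) {
                    try {
                        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
                        InputStream in = conn.getInputStream();
                        byte[] buffer = new byte[4096];
                        while (in.read(buffer) != -1) {
                            // discard the body; we only care about response time
                        }
                        in.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }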

In another discussion, which contained about as much information as you could ask for, the member was having trouble with his application hitting a hard performance wall. The application relied heavily on JMS. The client uses RMI to hit a stateless session bean. In turn, the bean passes the request to an MDB, which then processes the request. In all, the request passes through three queues. Though the processing times stay fairly constant, the rate cannot be increased. This result had been seen on several different systems, using several different application servers and JMS products. One hypothesis that was put forth, and which has merit, is that the system has maxed out on the number of requests it can marshal and unmarshal. The poster reported a rate of 50-60 messages per second. For the sake of argument, let's assume the rate is 50 messages per second. If we assume a single-CPU system, that works out to a 20ms per-message service time for marshalling, transfer, and unmarshalling. If the queue is synchronized, then that cost must also fit within the 20ms. What is known is that messaging systems are sensitive to both the size of their payloads and the rate of requests. What is not known is how large this payload is. So, is this number unreasonable? It's hard to say. What is certain is that it's not adequate.
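
To put the arithmetic in code form: at 50 messages per second on a single CPU, each message has a budget of roughly 1000ms / 50 = 20ms for unmarshalling, processing and marshalling combined. The hypothetical listener below simply records how much of that budget the processing step consumes. The class and method names are mine, and a real MDB would be deployed and configured through the container rather than written as a bare MessageListener.

            import javax.jms.Message;
            import javax.jms.MessageListener;

            // Hypothetical sketch: measure the average per-message service time so
            // it can be compared against the ~20 ms budget implied by 50 msg/s.
            public class TimedRequestListener implements MessageListener {

                private long messageCount = 0;
                private long totalMillis = 0;

                public void onMessage(Message message) {
                    long start = System.currentTimeMillis();
                    try {
                        handleRequest(message); // the application-specific work
                    } finally {
                        long elapsed = System.currentTimeMillis() - start;
                        synchronized (this) {
                            messageCount++;
                            totalMillis += elapsed;
                            if (messageCount % 1000 == 0) {
                                System.out.println("average service time: "
                                        + (totalMillis / messageCount) + " ms/message");
                            }
                        }
                    }
                }

                private void handleRequest(Message message) {
                    // placeholder for the real processing done by the MDB
                }
            }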

Finally, performance at the bottom line

Do you know the effects of performance on your company's bottom line? (Now ex-)CEO Mr. Greenberg of McDonald's does! His estimate is that a six-second reduction in service time at the drive-through window results in a 1% increase in sales.

Kirk Pepperdine.



