Java Performance Tuning


The Roundup April 2004

Back to newsletter 041 contents

The JavaRanch

Maintainability! Even though Jack and I have both covered this topic extensively, it still keeps coming up in the newsgroups, so it seems worthwhile to repeat the message. Your first goal when developing software is to concern yourself with maintainability. In an OO system, this means using encapsulation to hide the implementation away from prying eyes. In Java we do this with get/set methods, backed by instance variables that are private or protected to the class in which they reside. The question is, what effect does this have on performance? As a ranch hand demonstrated with a microbenchmark, the difference between using get/set methods and direct access over several billion calls was immeasurable. Given this, why someone would risk the near-certain maintenance headaches of directly accessing instance variables, I will never know! Yet I still see the technique used quite often, and the excuse offered is that it is there for performance. The lesson learned is: measure, don't guess.
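A minimal sketch of the kind of microbenchmark discussed above (the class and field names are illustrative, not from the original post). On a modern JIT the accessor call is typically inlined, which is why the measured difference is in the noise:

```java
public class AccessBenchmark {
    static class Point {
        int x; // package-private so the benchmark can access it directly
        int getX() { return x; }
        void setX(int x) { this.x = x; }
    }

    public static void main(String[] args) {
        Point p = new Point();
        p.setX(1);
        final int ITERATIONS = 100_000_000;

        long sum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            sum += p.x;                 // direct field access
        }
        long direct = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            sum += p.getX();            // accessor call; the JIT typically inlines this
        }
        long accessor = System.nanoTime() - start;

        System.out.println("direct:   " + direct / 1_000_000 + " ms");
        System.out.println("accessor: " + accessor / 1_000_000 + " ms");
        System.out.println("checksum: " + sum); // keep sum live so the loops aren't eliminated
    }
}
```

As with all microbenchmarks, warm-up and dead-code elimination can distort the numbers, so treat any single run with suspicion.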

In the next discussion, the original question asks if one should bundle multiple small SQL queries together. The premise for the question is that the application's performance suffers whenever a query is involved. A ranch hand offers a number of good tips on how one might improve the performance of a database, a list that includes the usual suspects of using stored procedures and analyzing your database design. The bartender asked the interesting question. Very simply put, how do you know this? In other words, how does one know that the database queries are the source of the problem? The answer is, unless you are measuring, you don't. This is where a tool like IronEye can be very helpful. Database vendors also expose a lot of statistics that can be really helpful when diagnosing performance problems. In this case, it wasn't even clear if the network was at fault. With the proper measurements, one can determine the source of the problem, and once the problem has been properly diagnosed, the corrective action often becomes readily apparent. Lesson learned: measure twice, cut once.
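A minimal sketch of "measure, don't guess" applied to queries: time the operation before assuming it is the bottleneck. The timed() helper is illustrative; in a real application it would wrap the actual statement execution, and a tool like IronEye or the database's own statistics would give far more detail:

```java
import java.util.concurrent.Callable;

public class QueryTimer {
    // Time any operation and report how long it took.
    public static <T> T timed(String label, Callable<T> work) throws Exception {
        long start = System.nanoTime();
        try {
            return work.call();
        } finally {
            long elapsedMicros = (System.nanoTime() - start) / 1_000;
            System.out.println(label + " took " + elapsedMicros + " us");
        }
    }

    public static void main(String[] args) throws Exception {
        // In a real application this would wrap statement.executeQuery(...).
        int rows = timed("SELECT customers", () -> {
            Thread.sleep(10); // stand-in for a database round trip
            return 42;
        });
        System.out.println("rows: " + rows);
    }
}
```

With numbers in hand, the decision of whether to bundle queries becomes an engineering question rather than a guess.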

The last discussion that we will look at is one regarding data transfer objects, commonly referred to as DTOs and most recently reclaimed as Value Objects by Martin Fowler (as originally named by the Smalltalk community). The discussion itself was short and touches on serialization in a naive fashion, but I will ignore all of that to dig into the real point. DTOs, or VOs, or whatever you'd like to call them, are essentially evil for two very simple reasons. First, they violate encapsulation, and in doing so increase the cost of maintenance. Second, they act as a parallel hierarchy, and in doing so increase the cost of development and maintenance.

DTOs came into vogue as a result of two problems: not knowing how to trim a large object graph that needed to be pushed across the network, and not having a proper container to accept the objects at the receiving end (as is the case with Enterprise JavaBeans). The solution to the latter problem is to use plain old Java objects. The answer to the former is not so simple. If you have an object graph that you need to trim, then it's best done by customizing the serialization. The other option is to use a VM that supports the distribution of objects in discrete chunks; unfortunately, the only VM that I know of that supports this option is now defunct. That said, one can still gain some benefit by examining the design and determining whether these large object graphs (and the relationships that they represent) are necessary. It is quite possible that one can make better gains by making reductions in the model. If that fails, you can always back off to using DTOs, as they are certainly less evil than falling into the revolving-door anti-pattern.
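A minimal sketch of trimming an object graph through the serialization mechanism itself, as suggested above (the class names are illustrative). Marking the heavy reference transient keeps it from being dragged across the wire, with no parallel DTO class required; overriding writeObject/readObject gives finer control when needed:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class GraphTrimming {
    static class Customer implements Serializable {
        String name;
        Customer(String name) { this.name = name; }
    }

    static class Order implements Serializable {
        String id;
        transient Customer customer; // trimmed: never serialized
        Order(String id, Customer customer) { this.id = id; this.customer = customer; }
    }

    // Serialize and deserialize, as would happen across a network boundary.
    static Order roundTrip(Order order) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(order);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Order) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Order received = roundTrip(new Order("A-1", new Customer("Alice")));
        // The id survives the trip; the transient customer comes back null.
        System.out.println("id: " + received.id + ", customer: " + received.customer);
    }
}
```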

The Server Side

Switching our attention to the server side, we find a conversation regarding low MDB performance in JBoss 3.0. The post starts out by saying that performance is slow when sending out 2000 messages per second. It is really difficult to offer good advice when a post is this vague about the problem. It seems as though the posting is about point-to-point messaging. The question is, is JBoss messaging capable of handling 2000 messages per second? The one thing that needs to be considered when asking this type of question is: can the hardware feed JBoss 2000 messages per second? At that rate, it would not be difficult to saturate the network.

For example, consider a network rated at 10 megabits per second. A nominal value for that network might be 4 megabits per second. [Network bandwidth rating is peak one-way flow, and is not normally achieved for most types of network traffic, which include collision management. Average available bandwidth can easily be one third of the peak rating.] If we do the sums, 4 megabits works out to 4194304 bits per second, or 524288 bytes per second. If we further divide that by the number of messages per second (2000), we get 262 bytes per message. In other words, if your message is bigger than 262 bytes, then at 2000 messages per second you will saturate the network. One should note that although it's configurable, it's not uncommon to see packet sizes of 512 bytes. That means that, realistically, we can expect to feed the server about 1000 one-packet-sized messages per second at most. If the message size gets larger than this, then one can see that the problem just gets worse. The good news is that most internal networks support 100 megabit or larger transfer rates. But even at 100 megabit, we will saturate at around 2620 bytes per message (assuming the same constraints, which is admittedly very dangerous without doing a measurement). Can we determine if the network is saturated? Obviously not, as we unfortunately don't have enough information to make that determination. But if we did, we would certainly have the means to find out.
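The arithmetic above, worked through in code. The 40% usable-bandwidth figure is the article's assumption (4 of 10 megabits), not a measured value, and megabits are taken as 1024 * 1024 bits to match the figures in the text:

```java
public class SaturationEstimate {
    // Largest message size (in bytes) that fits the given rate without
    // exceeding the assumed usable fraction of the network's peak bandwidth.
    public static long bytesPerMessage(long peakMegabits, double usableFraction,
                                       long messagesPerSecond) {
        long usableBits = (long) (peakMegabits * 1024 * 1024 * usableFraction);
        long usableBytes = usableBits / 8;
        return usableBytes / messagesPerSecond;
    }

    public static void main(String[] args) {
        // 10 Mbit network, 40% usable, 2000 messages/s -> 262 bytes per message
        System.out.println(bytesPerMessage(10, 0.4, 2000));
        // 100 Mbit network, same assumptions -> 2621 bytes (the article rounds to 2620)
        System.out.println(bytesPerMessage(100, 0.4, 2000));
    }
}
```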

In a discussion regarding JVM parameters that are good for performance, I've finally found a posting that is the equivalent of asking "where is the go fast button?" As much as I wish there were a go fast button, there isn't one. Wait a second, maybe that's a good thing, because such a button would make our training courses redundant. Seriously though, tuning is about setting targets, determining your goal (throughput vs. response time), and then making measurements to help you decide which JVM settings will help you achieve that goal.

I'd like to report on the discussions at Java gaming, but I'm afraid there is nothing new this month. It would appear as if everyone is busy trying out each other's games after the latest get-together.

Kirk Pepperdine.


Last Updated: 2022-06-29
Copyright © 2000-2022 All Rights Reserved.