Java Performance Tuning

Kirk's Roundup March 25th, 2002

Back to newsletter 016 contents

Load balancing or load distribution?

Throughout my career in the IT industry, I've been shown many different techniques for efficiently utilizing redundant computing resources. In fact, I've even contributed to the fray by writing a few myself. Every single technique (including mine) was called a load-balancing algorithm. But was it correct to label them all as load balancing? And if not, what should they have been called? Well, as it turns out, in many cases these techniques (including, gulp, mine) only distributed load. What's the difference, you ask? From a server's perspective, maybe not a lot. But from a user's perspective, the difference can be quite significant. It all comes down to one important metric: response time. To demonstrate this, let's go back to my experiences with airport security at Charles de Gaulle (CDG) in Paris.

CDG's security stations have a unique queuing system. If any of you have been through CDG, then you'll know what I mean. The queue to each security station is designed in such a way that you cannot tell how long the queue is, nor can you easily escape once you've made your choice of queue. In general terms, the response time that I experience is a composite of my service time at the security station and the sum of the service times of everyone in front of me. In other airports, once I've entered a queue, I can jump to another queue if I feel that this move will improve my response time. One cannot do this at CDG. At CDG, the physical layout of the queues reduces the problem to a single queue, single server scenario. Even though the entrances to security stations at other airports also appear to be a single queue, single server scenario, it is possible for you to jump queues, allowing the system to achieve a moderate amount of load balancing.

Both of these types of security queues are a far cry from the single queue, multiple server experience that I have at the ticket counter within airports. And since I'm a frequent flyer, I often get to stand in the priority queue. In this regard, I'm generally quite happy about the response times I experience.

So, how does this help with the question of what is load balancing and what is load distribution? It all comes down to this: load balancing is intelligent load distribution. In other words, I'm going to try to queue up to the server that is currently under the least load. Since the queues at CDG offer me no clues as to how long I might have to wait, they model a simple load distribution system. At other airports, when I can see that the line to one security station is short, I can use that information to effect an intelligent balance of the load on the system.

Clearly the system at the ticket counter gives the fairest share, as individuals all wait for the first available server. And as an added bonus, a priority queue is available to ensure good response times for the more important tasks. So, the next time you look at a load balancing system, ask the question: which line should I stand in?
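The distinction can be sketched in a few lines of Java. This is purely illustrative, assuming a trivial in-memory model; `Server` and `Dispatcher` are hypothetical names, not code from any real load balancer:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative model of a back-end queue: name plus pending-request count.
class Server {
    final String name;
    int queueLength; // requests currently waiting at this server

    Server(String name, int queueLength) {
        this.name = name;
        this.queueLength = queueLength;
    }
}

class Dispatcher {
    private final AtomicInteger next = new AtomicInteger();

    // Load distribution: hand out servers in turn, using no information
    // about their queues (the CDG model).
    Server roundRobin(List<Server> servers) {
        return servers.get(next.getAndIncrement() % servers.size());
    }

    // Load balancing: choose the server with the shortest queue
    // (the "which line should I stand in?" model).
    Server leastLoaded(List<Server> servers) {
        Server best = servers.get(0);
        for (Server s : servers) {
            if (s.queueLength < best.queueLength) {
                best = s;
            }
        }
        return best;
    }
}
```

Round robin is cheap and stateless per request, but, like the blind dispatcher, it sends you to a queue regardless of how long it is; least-loaded selection pays for a scan of the servers in exchange for better response times.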

JavaDevTalk

This month we start the roundup with http://javadevtalk.com, a site started by Chris Wells, Paul Oehler, Anthony Eden, Dustin Williams and Riyad Kalla (whose photo bears a striking resemblance to Homer Simpson). Each Java discussion group has taken on its own personality and, true to form even in its short existence, so too has javadevtalk. I recommend that you all meet the new kid on the block.

We start with a thread discussing string comparison. The initial code

int i = 0;
while (string1.substring(i, i + 1).equals(string2.substring(i, i + 1))) {
    i += 1;
}
return i;

was condensed to

int i = 0;
for (; string1.charAt(i) == string2.charAt(i); i++);
return i;

While one participant responded that one should consider using the new regex feature in JDK 1.4, that might be a heavyweight implementation for such a simple search. One other point: if string1.equals(string2) (or one string is a prefix of the other), both implementations will run off the end of a string and throw a StringIndexOutOfBoundsException, which is probably not desirable.
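A bounds-checked variant sidesteps that exception. This is just a sketch of one way to write it; `commonPrefixLength` is an illustrative name, not from the thread:

```java
class StringPrefix {
    // Returns the length of the common prefix of s1 and s2, stopping at
    // the end of the shorter string instead of throwing
    // StringIndexOutOfBoundsException when the strings are equal or one
    // is a prefix of the other.
    static int commonPrefixLength(String s1, String s2) {
        int limit = Math.min(s1.length(), s2.length());
        int i = 0;
        while (i < limit && s1.charAt(i) == s2.charAt(i)) {
            i++;
        }
        return i;
    }
}
```

Note that charAt avoids the per-iteration String allocations that the substring version incurs, so the bounds check costs nothing by comparison.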

In an attempt to stir up a good conversation, the moderator posted a topic with the title "Avoid SWING when you can". The argument is quite detailed and the responses are quite thoughtful. As such, it is best that they not be summarized here.

The JavaRanch

Treading over to familiar turf, let's see what's happening at the Saloon. In the category of "are we worrying about the right thing?", we find a question asking about the difference in memory usage between a subclass relationship and a containment relationship between two objects. The answer: yes, of course there is a difference in the amount of memory used to represent each relationship. The real point is that subclassing and containment each express an important design decision. Don't ignore this important distinction for the sake of saving a few bytes of memory.

The question of using import * was raised once again. Since * is a notational convenience that does not survive compilation, it has no effect on runtime performance. Having said this, using * can affect the readability, and hence the understandability, of source code. For instance, if you import com.mycomp.mypackage.*, others who have to read the code may not be able to easily tell which classes are being used.

In another thread, the poster found that his GUI performance was less than optimal. He was looking for recommendations on which tool to use to help fine-tune the application. One response was to compile using the -O option. The bartender stepped in and pointed out that -O is ignored. Why is -O ignored? Here's an opinion from the Ant developers list. The JDK 1.3 compiler was completely re-written from scratch. Since HotSpot was included by default in JDK 1.3, the authors felt no need to add an extra layer of code for optimization purposes. That is not to say this won't change in the future.

JavaGaming.org

Moving on to www.javagaming.org we find an interesting thread where WORA seems to break down, resulting in a performance problem for our gaming friends. The story starts when a gamer comments that he cannot use the keyboard because key presses appear to be broken on Linux. Further discussion leads to the discovery that it may not be a problem with the VM; the behavior is typical of the way Linux functions. The VM must rely on the OS to provide it with key events, and of course that behavior is specific to each platform. In the end, there is a suggested workaround. This does point out the importance of testing your application on all intended target platforms.

The Server Side

Finally, let's visit www.theserverside.com to see what's being talked about there. It was pointed out that there is a .NET implementation of the pet store application at http://www.gotdotnet.com/team/compare/petshop15whatsnew.asp. For those familiar with the Java Pet Store, this may be a good introduction to .NET.

Next to this was a posting questioning the techno-speak "the application/architecture should be scalable". Fair question: just what does this mean? I'll combine two answers to see if we can come up with one of our own. A scalable application will maintain a stable performance profile under load. When load strains the underlying hardware, a scalable architecture will allow the application to maintain that stable performance profile as the capacity of the underlying hardware is increased.

The question of entity bean scalability continues to be a hot topic. As EJB technology matures and developers become more familiar with it, one would hope that techniques that aid performance would be developed. Again this month there are more threads regarding entity bean scaling. The questions of how to scale, or will it scale, keep being asked. The posts that claim that entity beans do scale never offer any real clues on how to develop scalable entity bean systems. Witness this response.

"You need to know what you're doing. If you do not have any experience and if you don't dare to even open a book on J2EE, then you'll most likely fail."

Finally

One final note on the subject of load distribution; what follows is a true story as related to me. "As I approached the security station at Hartsfield (Atlanta's airport), I was directed to one of the queues by a blind person. The person used round robin to direct people to one of the several queues. He was oblivious to the number of people in each queue. Since the queue he pointed me to was very long, I ignored him and went to a shorter one." The point here is not to make jokes about the blind employee. I'm sure he was doing his utmost to carry out the task assigned to him. More to the point, people ignored his direction because they had better information than he did. The real question is: if these people decided against round robin in the real world, then why is it used so often in the world of computing?

I do owe a note to BEA's WLS team: just as my last roundup went to press, they published ECperf results. Not to be outdone, IBM announced results on March 13th. The measurements show WebSphere rates at $13/BBops and WebLogic comes in at $18/BBops. Details can be seen at http://ecperf.theserverside.com/ecperf.

Kirk Pepperdine.


Last Updated: 2017-03-01
Copyright © 2000-2017 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
URL: http://www.JavaPerformanceTuning.com/news/roundup016.shtml