Java Performance Tuning

The Roundup March 2006

Back to newsletter 064 contents

In just the past few months, the Java community has started placing a lot more emphasis on threading. It is not that threads have previously been ignored, but in the quickly disappearing world of single-core machines, the community has started asking the right questions. Can we really scale out when running on the increasing number of multi-core, multi-CPU machines? That question has left a number of people in the community worried. The most common question continues to be: do multi-threaded Java applications work only by accident? In other words, is it possible that single-CPU systems have protected us from some potentially hideous threading errors? While some of this may sound like scaremongering, how many applications would you trust to run as expected in a completely different environment than the one they were tested in?

Another indication that the community is concerned about this is the emergence of companies such as Azul Systems (Sun's efforts notwithstanding). I recently had the pleasure of interviewing Bob Pasker, the deputy CTO at Azul, who has made significant contributions to the company's efforts to improve Java's scale-out story.

One of the factors that can significantly limit an application's ability to scale out is thread contention. Thread contention occurs whenever several threads attempt to access a non-shareable resource. Serious corruption can occur unless access to that non-shareable resource is protected with a hard lock. Currently, in Sun's JVM, that means the thread asks the operating system for a lock, which is not an inexpensive operation. Moreover, if multiple threads do access that non-shareable resource, each will be forced to wait until the thread ahead of it in the queue is finished. The effect on performance can be devastating. One question is: do we really need such an airtight lock protecting things? Quite often the answer is no, not really.
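To make the cost concrete, here is a minimal Java sketch of guarding a non-shareable resource with an intrinsic lock. The class and the counter are illustrative choices of mine, not from the article; the point is that every thread entering increment() must queue behind the current lock holder.

```java
// A minimal sketch: a counter (a stand-in for any non-shareable
// resource) guarded by an intrinsic lock via synchronized.
public class GuardedCounter {
    private long count = 0;

    // Without this synchronization, concurrent increments could be
    // lost, because read-modify-write is not atomic.
    public synchronized void increment() {
        count++;
    }

    public synchronized long get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedCounter c = new GuardedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 200000 with the lock held
    }
}
```

The safety comes at a price: both threads serialize on the same monitor, which is exactly the contention the article is describing.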

Consequently, we should start to see whether we can segregate the cases where we don't need that blanket contention protection, to see if we can reduce the expense. Another question is: how long do we need to wait before we can acquire the heavily guarded resource? If the answer is "only a very short period of time", then maybe we can avoid the expense of asking the OS for a lock and instead use a spin-wait lock. A spin-wait lock causes the thread to loop until it is signalled to move forward. The JIT would be tuned to decide when a thread should use a spin-wait lock as opposed to acquiring a lock from the OS. This scheme will be appearing in the next version of Java.
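The spin-wait idea can be sketched in plain Java with an atomic flag. This is only an illustration of the concept, not the JVM's internal mechanism, and the class name is mine: the thread loops instead of asking the OS to suspend it, which pays off only when hold times are very short.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal user-level spin-wait lock, sketched for illustration.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Loop (spin) until we flip the flag from false to true,
        // rather than paying for an OS-level lock and a thread suspension.
        while (!locked.compareAndSet(false, true)) {
            // busy-wait; later JVMs (Java 9+) offer Thread.onSpinWait()
            // as a hint to the CPU here
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

If the critical section is long, spinning burns CPU for nothing, which is why the article suggests letting the JIT decide which strategy to apply.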

With regard to the first question, companies such as Azul are asking: why lock at all? Instead, why not use an idea that has worked well in relational database technology: start a transaction, do some work, and then, on commit, perform a calculation to determine whether there is a write-write set conflict. If there is, we fail the transaction and let the client retry the calculation. In hardware, this idea translates into transactional memory.
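The begin/work/commit-or-retry pattern can be imitated in software with compare-and-set; the account example below is hypothetical and stands in for the hardware support the article describes. The "commit" fails exactly when another thread wrote in between, which is the write-write conflict case.

```java
import java.util.concurrent.atomic.AtomicLong;

// A software sketch of optimistic, transaction-style concurrency:
// snapshot, compute, then commit only if nothing changed underneath us.
public class OptimisticAccount {
    private final AtomicLong balance = new AtomicLong(0);

    public long deposit(long amount) {
        while (true) {
            long old = balance.get();      // "begin": read a snapshot
            long updated = old + amount;   // "work" on the snapshot
            // "commit": compareAndSet fails on a write-write conflict,
            // in which case we simply retry, as a database client would.
            if (balance.compareAndSet(old, updated)) {
                return updated;
            }
        }
    }

    public static void main(String[] args) {
        OptimisticAccount acct = new OptimisticAccount();
        System.out.println(acct.deposit(100)); // prints 100
    }
}
```

No lock is ever held; under low contention the retry loop almost never fires, which is the "don't lock unless you have to" bet paying off.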

Transactional memory is a more optimistic approach to locking. As the name implies, transactional memory is a hardware-assisted means of detecting when multiple threads have accessed the same piece of memory. It also provides a means of rolling back the threads so that they may retry what they had done with the new values. The entire idea behind this strategy is: don't lock unless you have to. It is not clear that Azul Systems is using exactly this scheme in their hardware; however, they are using some variant that allows many threads to enter critical sections of code, detects when a conflict has occurred, and then rolls back the threads that were operating on dirty data. The results have been very promising. In one of their white papers they claim (and there is no reason not to believe them) that they can achieve the same performance with a Hashtable (heavily synchronized) as they see with a selectively synchronized HashMap in a heavily threaded environment.
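The contrast the white paper draws has a rough software analogue in the JDK itself: Hashtable takes a single lock for every operation, while ConcurrentHashMap (added in Java 5) synchronizes selectively so that threads touching different parts of the table can proceed concurrently. The snippet below only shows that the two are interchangeable at the API level; it is not a benchmark, and it says nothing about Azul's hardware approach.

```java
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapChoice {
    public static void main(String[] args) {
        // One lock guards the whole table: every get/put serializes.
        Map<String, Integer> coarse = new Hashtable<>();
        // Selectively synchronized internally: independent regions of
        // the table can be read and written concurrently.
        Map<String, Integer> finer = new ConcurrentHashMap<>();

        coarse.put("requests", 1);
        finer.put("requests", 1);
        System.out.println(coarse.get("requests") + " " + finer.get("requests")); // prints 1 1
    }
}
```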

The current efforts to improve the scale-out characteristics of Java in a multi-core environment may not offer a silver bullet. However, it is reassuring to know that by the time multi-core becomes mainstream, there will be products on the market able to help us scale out our applications.


Last Updated: 2023-05-25
Copyright © 2000-2023 All Rights Reserved.