The Roundup September 2005



Before I start this edition of the round-up I want to shamelessly plug the Java In Action event that will be held in Orlando (Oct 5-7). For the first time ever I'm going to reveal to anyone who shows up how I manage to get through a performance tuning engagement. And if that isn't exciting enough, I'm planning a special surprise for one lucky attendee. If you're at the Disney Yacht Club, come and join me, Dr. Heinz Kabutz (a Java Champion and the Java Specialist), and some of the other PowerPoint jockeys who will be there.

History and Persistence Ideals

Have you ever come across a piece of history and wondered how the heck people were able to function with said device or way of doing things? This summer I went to the Renaissance festival held every year in Visegrad. It was interesting watching knights hide behind their shields in an open field while being shot at with real arrows. And although the armor and shields afforded them the needed protection against that type of weapon, the weight was a huge disadvantage in hand-to-hand combat. This point was demonstrated by having the archers pin down their targets while the villagers attacked and eventually overwhelmed their walking, tank-like opponents.

In a similar flash-back, I have recently spent some time looking at a system that uses a database that maintains one table for every domain object. Relationships between domain objects are maintained with oid (object identifier) fields, which are themselves maintained in a shadow persistence hierarchy. Objects are reconstructed by running queries against these tables using the oid as both primary and foreign key. The idea worked until the system became so laden with data that these heavy operations came to take seconds to complete. Just like the knights of old, the design became their downfall.
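
To make the pattern concrete, here is a minimal sketch of that style of reconstruction using plain JDBC. The CUSTOMER and ADDRESS tables, the oid columns and the domain classes are all invented for illustration; the point is the one-query-per-object shape of the code, not the names.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical sketch of the one-table-per-object, oid-keyed reconstruction described above.
    public class OidLoader {

        private final Connection connection;

        public OidLoader(Connection connection) {
            this.connection = connection;
        }

        // Each domain object is rebuilt with its own single-row query, keyed by oid,
        // so loading a graph of N objects issues N round trips to the database.
        public Customer loadCustomer(long oid) throws SQLException {
            try (PreparedStatement ps = connection.prepareStatement(
                    "SELECT name, address_oid FROM CUSTOMER WHERE oid = ?")) {
                ps.setLong(1, oid);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        return null;
                    }
                    // The foreign oid triggers yet another single-row query.
                    Address address = loadAddress(rs.getLong("address_oid"));
                    return new Customer(oid, rs.getString("name"), address);
                }
            }
        }

        private Address loadAddress(long oid) throws SQLException {
            try (PreparedStatement ps = connection.prepareStatement(
                    "SELECT city FROM ADDRESS WHERE oid = ?")) {
                ps.setLong(1, oid);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? new Address(oid, rs.getString("city")) : null;
                }
            }
        }
    }

    // Minimal domain classes for the sketch.
    class Customer {
        final long oid; final String name; final Address address;
        Customer(long oid, String name, Address address) {
            this.oid = oid; this.name = name; this.address = address;
        }
    }

    class Address {
        final long oid; final String city;
        Address(long oid, String city) { this.oid = oid; this.city = city; }
    }

Each extra hop in the object graph adds another round trip, which is exactly the kind of overhead that starts to hurt once the tables fill up with data.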

To be fair, the application was written in an era when we were all young and idealistic. What we really wanted were object databases, but the establishment forced reality upon us by insisting that we use relational technology. Not wanting to abandon our purist ways, we built an ODBMS out of Java and SQL. The fundamental problem with this approach is that objects tend to be very fine-grained beasts, whereas relational databases like to take on larger chunks of work: their minimal chunk of work reaches far beyond what any reasonable object would consider taking on. Another point of contention is how joins work. When a relational database does a join it doesn't touch much of the data in each row, so it is not much more expensive to join large datasets than it is to join small ones. Fetching one object at a time is about like using a forklift to pick up a box of paper one sheet at a time. Unfortunately it is a mistake that we would repeat with EJB Entity Beans (a.k.a. the revolving door anti-pattern).
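
By contrast, the set-oriented statement the relational engine actually wants to see hands the whole graph over in a single round trip. A hedged sketch, reusing the hypothetical tables from the previous example:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    // One join replaces the N single-row lookups above: the database gets a large
    // chunk of work it is designed for, and the cost grows gracefully with data size.
    public class JoinLoader {

        public List<String> loadCustomerCities(Connection connection) throws SQLException {
            String sql = "SELECT c.name, a.city "
                       + "FROM CUSTOMER c JOIN ADDRESS a ON a.oid = c.address_oid";
            List<String> rows = new ArrayList<>();
            try (Statement stmt = connection.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    rows.add(rs.getString("name") + " lives in " + rs.getString("city"));
                }
            }
            return rows;
        }
    }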

It is interesting that I keep running into systems built on these old ideals, given that most of them failed right out of the gate. It's also interesting to see how our ideas of how to build a distributed system have changed over the years. And with that, let's look at the current set of performance problems developers are talking about in this installment of the round-up.

JavaGaming.org

This month there has been so much activity at Java Gaming that I've decided to focus solely on that discussion group. I'll start with an update to a thread that I previously reported on regarding GC and Windows threads. To summarize what was previously reported: it was observed that there were cases when Windows would lock up for brief periods of time. These lock-ups seemed to coincide with long GC runs, so it was surmised that GC running in the JVM was causing all execution threads running in Windows to block. The effect was most noticeable when Eclipse had been running for long periods of time. The update on this thread is that by using Ptrace it was discovered that the JVM is running threads at priority 15. To understand exactly what this means, one needs to know that most application threads run at a priority of 8, that one must have admin privileges to run above 10, and that anything above 15 is considered to be a real-time thread. So when all is said and done, threads in the JVM did have the capability to block Windows threads. The good news is that this will be fixed in JDK 1.5.0_06. Until then you can use the following flags: -XX:JavaPriority10_To_OSPriority=10 -XX:JavaPriority9_To_OSPriority=9. (http://192.18.37.44/forums/index.php?topic=10269.0)
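
For what it's worth, the Java side of the priority story is easy to inspect. The sketch below only shows Java-level priorities (1 to 10); how those map onto Windows priorities is exactly what the -XX:JavaPriority*_To_OSPriority flags mentioned above let you control.

    public class PriorityCheck {

        public static void main(String[] args) {
            // Java priorities run from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10),
            // with Thread.NORM_PRIORITY (5) as the default.
            Thread current = Thread.currentThread();
            System.out.println("Default Java priority: " + current.getPriority());

            Thread worker = new Thread(() -> System.out.println(
                    "Worker running at Java priority " + Thread.currentThread().getPriority()));
            worker.setPriority(Thread.MAX_PRIORITY); // Java priority 10; the OS priority it
                                                     // becomes is decided by the VM's mapping.
            worker.start();
        }
    }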

In yet another interesting thread on GC stalls, the discussion ferreted out an interesting link (http://www.molebox.com/). For those of you who are fascinated enough with this newsletter that you don't want to click through: MoleBox is a way of packaging an executable, its DLLs and its data into a single file that is itself executable. Now I'm not sure how this helps the stutter-at-startup problem that this thread was focused on, but it seems to have worked in at least one instance.

The thread itself is interesting in that it lays out a sensible procedure for heap sizing using the -verbose:gc flag. It is well worth the read if you are fine-tuning memory allocation in your Java application. If you're like me, then just go and buy that extra gig of memory; it only costs 100 bucks or so. However, be careful, because sometimes too much memory can be as big a problem as not enough. (http://192.18.37.44/forums/index.php?topic=10895.0)
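
If you do go down the heap-sizing route, it can help to log what the JVM thinks it has alongside the -verbose:gc output. A minimal sketch using the standard Runtime API:

    public class HeapReport {

        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // maxMemory() is the -Xmx ceiling, totalMemory() is what the JVM has currently
            // reserved (it grows up to -Xmx), freeMemory() is unused space within totalMemory().
            System.out.println("max   heap: " + rt.maxMemory() / mb + " MB");
            System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
            System.out.println("free  heap: " + rt.freeMemory() / mb + " MB");
            System.out.println("used  heap: " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
        }
    }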

One of the interesting things about the Java Gaming forums is the active participation of so many of the Sun JVM engineers. There is nothing like having the input of one of these engineers to help clear the air. And clearing the air was the order of business for Ken Russell in this thread. The topic for discussion was a little lesson on what the client HotSpot compiler may or may not do with the code that you feed it. The thread started with an observation that the -client (default) compiler would not inline virtual calls and did not remove range checking. Ken jumped in to point out that the client compiler does support de-optimization as well as the inlining of virtual calls, which has been in place since 1.4. He also confirmed that the client compiler does not eliminate range checking. Starting with the next release, there will be a concerted move to combine the best of the -client and -server optimizations. In order to achieve this, the compiler group will continue to develop a multi-stage compiling strategy. It's not a complete solution but it appears to be more maintainable than a method-caching strategy. (http://192.18.37.44/forums/index.php?topic=10620.0)

It is a common belief that declaring something final will benefit performance. The question put in this thread asked if there was anything to this belief. For many reasons, the performance advantages that one used to see from using final have all gone away. This is because dynamic profiling (done at runtime) does a much better job of deciding how to tune something for performance than developers do. The best advice: use final for design reasons and let HotSpot do its job of deciding what to inline and what to leave alone. (http://192.18.37.44/forums/index.php?topic=10665.0)
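
A small sketch of the point: the hot method below is deliberately not final, yet HotSpot is still free to inline it at the call site as long as only one implementation is ever loaded. Running with the standard -XX:+PrintCompilation flag shows which methods actually get compiled; the class itself is purely illustrative.

    public class FinalDemo {

        // Declaring lengthSquared() final would document intent, but HotSpot can
        // inline the call either way while the call site stays monomorphic.
        static class Point {
            private final int x;
            private final int y;

            Point(int x, int y) { this.x = x; this.y = y; }

            long lengthSquared() {               // deliberately not final
                return (long) x * x + (long) y * y;
            }
        }

        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += new Point(i, i).lengthSquared(); // hot call site, candidate for inlining
            }
            System.out.println(sum);
            // Run with -XX:+PrintCompilation to watch HotSpot compile the hot methods.
        }
    }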

Now here is a measurement that you've got to respect. The question: are object casts processor expensive? The answer: about 3 nanoseconds. Now for the next question, how does one make such a measurement? About the only way to measure something this small is to look at the machine code needed to perform the operation. A cast takes a single instruction and, due to pipelining in the CPU, can occur in one clock tick. It just goes to prove that performance tuning at this level has a very poor ROI. (http://192.18.37.44/forums/index.php?topic=1760.0)
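
If you want a rough wall-clock sanity check from Java itself, a micro-benchmark along these lines will do, though at the nanosecond scale the numbers are only indicative: the JIT is free to hoist or eliminate the cast entirely, which is rather the point. The class below is purely illustrative.

    public class CastCost {

        public static void main(String[] args) {
            Object[] values = new Object[1_000_000];
            for (int i = 0; i < values.length; i++) {
                values[i] = "s" + (i % 16);
            }

            long blackhole = 0;

            // Warm up so HotSpot compiles the loop before we time it.
            for (int warm = 0; warm < 10; warm++) {
                for (Object o : values) {
                    blackhole += ((String) o).length();   // checkcast + virtual call
                }
            }

            long start = System.nanoTime();
            for (Object o : values) {
                blackhole += ((String) o).length();
            }
            long elapsed = System.nanoTime() - start;

            // Note this times the cast plus the call, not the cast in isolation.
            System.out.println("~" + (elapsed / (double) values.length)
                    + " ns per cast+call (blackhole=" + blackhole + ")");
        }
    }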

For those of you who are using -Xprof to get profiling information in 1.5, you may be interested to know that changes to it have made it less useful than it was. The overhead of using this profiler now runs in excess of 10% and the output is not as reliable. Let's hope this gets fixed in the next release, though there are indications that the authors are reluctant to do so. In the meantime you can always use the new profiler found in NetBeans. It is apparently much better than -Xprof. (http://192.18.37.44/forums/index.php?topic=1760.0)

And that concludes our look at what has been happening in the performance tuning section of Java Gaming this month. I look forward to seeing some of you in Orlando.

Kirk Pepperdine.



