It's time for yet another edition of the round-up. This month sees the hype around open source really starting to heat up. You know you're about to enter a special time in our industry when a single topic leaks out into the mainstream press. The good thing is that it's getting easier for project teams to introduce an OSS tool or product into their organization. The interesting thing is that the historical objection that OSS comes with no support or indemnification is being re-evaluated. Another good thing is that teams now have the ability to diagnose and fix performance problems in the OSS products they are using. One case in point: OSCache has been a source of memory leaks for many application teams. That is, until one member of a team in Boston dug in and made the appropriate fix. In the past, all one could do was register a complaint and hope that the company supplying the product was able to do something about it. It is ironic to think that the loss of a support contract may actually give you better support options.
And now on to TheServerSide, where we find a posting that asks the question: is the modern JVM obsolete? I do find it peculiar that someone would label something modern and then ask if it's obsolete, but it's an interesting question nonetheless. The focus of this post is on the new 64-bit architectures and how well the JVM will stand up in this environment. The first thing that passed through my mind when I read it was: how would garbage collection cope with these large memory spaces?
Current garbage collection technology is based on reachability. An object is considered live if it is reachable from some predefined point (called a root). Roots can be things like a JNI or RMI entry point, a static variable, and other things that have "permanence" in the heap. As you create objects and hook them up into your object schema (known as an object graph), you unwittingly link these objects to some root. Your objects are also linked into a separate (hidden) object table. This dual linking scheme gives the JVM two different means of accessing each and every object in your system.
As your application does its work, it will necessarily change the state of objects in your domain. To do this it will most likely create new objects and then assign them to the placeholders (variables) that you've defined. When this happens, the old object is dereferenced and as such is no longer "reachable". Note that this definition of reachability is in the context of the application; the JVM still keeps a reference to the object in the object table. What garbage collection does is compare the object table with the object graph. Any objects that are in the table but not in the graph are deemed to be garbage, and their memory is reclaimed. In this scheme, the cost of reclaiming memory that is no longer being used by the application is a function of how much has been left behind. In other words, the more live objects you have, the more you have to search through in order to determine which ones are reachable and which ones are not.
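To make this concrete, here is a minimal sketch (the Customer and Address classes are my own invention, purely for illustration) of an object being dereferenced:

class Address { }

public class Customer {
    private Address address;    // this field links an Address into the object graph

    public void moveTo(Address newAddress) {
        // The Address previously held in this field is dereferenced here.
        // If nothing else refers to it, it is no longer reachable from any
        // root and becomes a candidate for collection, even though the JVM
        // still sees it in the object table.
        this.address = newAddress;
    }
}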
Now, all of this wouldn't be that big a deal, except that the application and the garbage collector need to change the same supporting data structures in a thread-safe manner. The net result is that while the garbage collector is working, the application cannot. This is known as "stop-the-world" GC, or a GC pause. These pause times can have a fairly detrimental effect on your application's performance. I can cite one instance where back-to-back GCs were resulting in JVMs being dropped from the application that they were participating in. As you can imagine, this caused many headaches for those trying to support the production environment.
Because of the problems surrounding "stop-the-world" GC, Sun (and others) have put, and continue to put, a lot of work into reducing the cost of reclaiming memory. One of their solutions has been to subdivide the heap into many smaller pieces and then GC each of these pieces separately. As you can see, the trend to larger heap/memory spaces runs counter to this strategy. Another solution has been to parallelize the GC algorithms. This works well if you have a multi-CPU machine; if not, the parallel collectors actually run slower than their single-threaded cousins. Yet another set of solutions allows GC and the application to run in parallel for certain phases of the collection process (known as concurrent GC). Again, doing so results in a more expensive GC overall, but you do get rid of the long pause times. The net result is that the research is paying off, in that we are getting more options, each of which is becoming more and more efficient.
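As a rough illustration of how these strategies are selected on the Sun 1.4.2/5.0 JVMs (flag availability varies by version, so treat these as a sketch rather than a definitive recipe):

java -Xms512m -Xmx512m -Xmn128m -XX:+UseParallelGC MyApp
    (parallel collection of the young generation; -Xmn sizes that subdivided piece of the heap)

java -XX:+UseConcMarkSweepGC MyApp
    (concurrent mark-sweep collection of the old generation, trading some throughput for shorter pauses)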
The cruel joke is that while the engineers have been diligent in their efforts to make GC better, we have been doing our level best to abuse those efforts by the way we use the JVM. My answer to the question "is the modern JVM obsolete" is no, not at all. That said, those involved in improving GC are in a constant race against our ability to further stress that element of the technology. Currently it's difficult to see who is winning, as the race appears to be neck and neck. For other opinions see http://www.theserverside.com/discussions/thread.tss?thread_id=33588.
The next posting explores the question of using stored procedures instead of Java selects. At odds here are the competing interests of design and uniformity versus the need to keep execution close to the data. Both Jack and I stress that good design should win, and should only be overridden where performance issues have been definitively identified. There are many reasons for this, not the least of which is that when a program is well designed, it is usually much easier to find and fix performance problems. We also suggest that stored procedures are a means to boost performance in certain cases. These cases typically involve a reduction in the utilization of the network, which leads right into the question: should one do a select in Java vs. doing the select in a stored procedure? My answer: if all one is doing in the stored procedure is a select, then I'd be very surprised if there were any performance gains over performing the select in Java. As always, your mileage may vary, so it is recommended that you benchmark your application so that you can avoid making the wrong choice. The thread is at http://www.theserverside.com/discussions/thread.tss?thread_id=32141.
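For comparison, here is a minimal sketch of both approaches (the accounts table and the get_account procedure are invented for illustration); note that each incurs the same network round trip for the query and its result set, which is why a select-only procedure rarely wins:

import java.sql.*;

// Assumes an open Connection; exception handling and resource cleanup omitted for brevity.
ResultSet viaJava(Connection connection, long accountId) throws SQLException {
    PreparedStatement ps = connection.prepareStatement(
            "SELECT name, balance FROM accounts WHERE id = ?");
    ps.setLong(1, accountId);
    return ps.executeQuery();
}

ResultSet viaProcedure(Connection connection, long accountId) throws SQLException {
    // get_account is a hypothetical stored procedure performing the same select
    CallableStatement cs = connection.prepareCall("{call get_account(?)}");
    cs.setLong(1, accountId);
    return cs.executeQuery();
}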
The last question of the day from TSS is: does anyone have any information regarding Spring message-driven POJOs vs. MDBs? Unfortunately there were no takers on this one. It sounds like a good opportunity for an interesting benchmark. The URL is http://www.theserverside.com/discussions/thread.tss?thread_id=33899, just in case someone does respond.
The first posting we look at from the Java Ranch is short and sweet. The question: what is the most efficient way to copy an object? The three choices offered up are: use clone(), create a new object and populate it, or serialize and de-serialize the object. The clear winner, in this author's humble opinion, was the one offered up in the posting: have the class you want to copy provide a constructor that accepts one of its own type as a parameter. The constructor can look after picking apart the argument so that it can populate the new instance. The thread can be found at http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=15&t=001044.
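A minimal sketch of the idiom (the Point class is my own example, not taken from the thread):

public class Point {
    private int x;
    private int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // The copy constructor: accepts one of its own type and copies the state across
    public Point(Point other) {
        this.x = other.x;
        this.y = other.y;
    }
}

Copying then becomes a one-liner: Point copy = new Point(original);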
Here is an interesting question: what is the quickest way to remove all duplicates from an ArrayList? The answer provided is nifty.
HashSet h = new HashSet();
h.addAll(arrayList);
arrayList.clear();
arrayList.addAll(h);
Apparently the performance of this piece of code is not all that bad either. You can find the entire discussion at http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=15&t=000368.
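One caveat worth noting: a HashSet does not preserve the order of the elements. If ordering matters, substituting a LinkedHashSet (available since JDK 1.4) should remove the duplicates while keeping the original order:

LinkedHashSet h = new LinkedHashSet();
h.addAll(arrayList);
arrayList.clear();
arrayList.addAll(h);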
"What is the cost to throw an exception" is the next question, but I think the real question is: is there a cost of using exceptions in your code. The answer is, the cost of using an exception is not seen until the exception is thrown. At that point exceptions consume more then their fair share of resource. This is justification for the design principle, restrict the use of exceptions to exceptional cases. You can find some interesting details on the exact costs at http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=15&t=001038.
The first thread we look at from Java Gaming asks a simple question: does Intel's hyper-threaded processor interfere with Java's timers? The answer to that question is inconclusive, but what does come out are a number of interesting facts about hyper-threaded CPUs. The most interesting is that a hyper-threaded CPU is neither as stable as a dual-CPU machine, nor does it offer much of a performance advantage. To sum up, the group (as a whole) seemed very much unimpressed with hyper-threading technology. The other Intel CPU problem mentioned in this thread is SpeedStep. In my last article at IBM developerWorks I dive into this particular problem and how it can affect your benchmark. If you are looking to buy a HT Pentium, the recommendation from this group would be to look for a non-hyper-threaded version at about the same clock speed. Full details at http://www.javagaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=Tuning;action=display;num=1115390465.
Do you do any work with JNI? If so, then you know it's not an easy interface to work with. For those looking for Sun to simplify JNI, you can take solace that the complexity is necessary, as seen in this extract written by Jeff Kesselman: "The syntax of JNI is designed to protect the VMs state from errant C code and thus is not likely to change." The recommended route for passing large chunks of data between Java and C code is the buffer classes in the java.nio package. You can find more information at http://www.javagaming.org/cgi-bin/JGNetForums/YaBB.cgi?board=Tuning;action=display;num=1115151026
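On the Java side, the usual approach is a direct buffer, which allocates memory the native code can reach without copying. Here is a minimal sketch (the process native method is hypothetical, named only for illustration):

import java.nio.ByteBuffer;

public class NativeBridge {
    // Hypothetical native method; on the C side the buffer's address would
    // be obtained with the JNI function GetDirectBufferAddress.
    private native void process(ByteBuffer buffer);

    public void send(byte[] data) {
        // allocateDirect places the buffer outside the normal Java heap,
        // so the C code can read and write it without a copy across JNI.
        ByteBuffer buffer = ByteBuffer.allocateDirect(data.length);
        buffer.put(data);
        buffer.flip();
        process(buffer);
    }
}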
Unlike people writing business applications, people in the gaming community rely on micro-benchmarking. Even so, the information that the gaming community ferrets out of these micro-benchmarks can be of interest to those of us coding for suits. Take the recent discussion regarding the performance of floats versus doubles. It's a very long discussion (mostly because it's tough getting a micro-benchmark to give you results that you can trust) that does draw out a few interesting points. First, something that I didn't know: floats are converted to doubles before they are used. Second, JRockit seems to be using AMD64 optimizations that the Sun JVM currently isn't (Peter Kessler posted in another forum that they're coming), and JRockit performs the benchmark four times faster than the Sun JVM. This point is very interesting because JRockit (with its focus on the server side) is not considered a viable option by the gaming crowd. Finally, someone mentioned the -XX:+PrintCompilation option. Typical output includes lines such as these:
 58 s b java.util.Vector::size (5 bytes)
 60 * b java.util.zip.Inflater::inflateBytes (0 bytes)
 61   b java.lang.Class$MethodArray::removeByNameAndSignature (71 bytes)
 64 ! b java.io.BufferedReader::readLine (304 bytes)
129 * b java.io.FileOutputStream::write (0 bytes)
  3 % b com.jpt.tipsdb.servlet.WildcardQuery::doValidQuery @ 112 (300 bytes)
There is very little documentation describing what all of this means. The output reads from left to right: a compilation id, a column of flags, class::method, and finally a size. The possible flags are:
b - blocking compiler
* - generated a native wrapper
% - on stack replacement
! - method contains exception handlers
s - method is synchronized
There do appear to be things you can do to "help" HotSpot but, unfortunately, they are not documented. For example, it is unclear what is meant by "blocking compiler". One would guess that code execution is halted while the method is being compiled. However, in JDK 1.5 there are at least two threads allocated to compiling methods. These threads work in parallel with the running application, which would imply that this activity is non-blocking. More documentation from Sun would certainly be helpful. That said, now that we have access to the source for the JVM, we should have all the documentation we need.
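For those who want to experiment, the flag goes on the command line; pairing it with -Xbatch (which disables background compilation so that compiles block the running thread) is one way to probe the "blocking" behaviour:

java -XX:+PrintCompilation -Xbatch MyApp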