Back to newsletter 026 contents
Before starting this month's round-up, I'd like to give a final thought about the recent Middleware/Microsoft benchmarking debacle. Although Rickard Öberg makes good arguments in the Java Developer's Journal about the folly of such an exercise, let's set aside the rhetoric and consider this: though the Java Pet Store was never intended to be a benchmark, it is a real example of an application that both participants have reworked. The message to a non-technical management team is that .NET outperforms J2EE. If you want more evidence, try using the Java and .NET versions of Web Services side by side. From a technical perspective, we all know the value J2EE offers over .NET, but try explaining that value (in dollars) to the business side of the house. Now we need to explain why this benchmark is flawed without sounding like whiners. There are many times in life when you need to know the answer before you ask the question. Stepping into a benchmarking exercise like this is like rushing into a fire-fight without knowing where all of the exits are. Sorry Ed, but your team has just earned the famed Meadow Muffin award.
From all reports, JDK 1.4 has proved to be much more performance oriented than its predecessors. But as one poster found out, there is no guarantee that your application will run faster under JDK 1.4. In fact, this poster experienced a 7x increase in run time. After profiling the application, it was found that the newly bundled XML support was responsible for the performance degradation. Once these classes were overridden with the previously used Xalan package, the application ran as expected.
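For anyone hitting the same problem, JAXP lets you choose the transformer implementation via the javax.xml.transform.TransformerFactory system property. Assuming the Xalan jar is on the classpath, the previously used Xalan processor can be restored with something like the following (the application name here is just a placeholder):

```shell
# Select Xalan's JAXP transformer instead of the JDK's bundled one.
# Assumes xalan.jar is available; "MyApp" stands in for your main class.
java -Djavax.xml.transform.TransformerFactory=org.apache.xalan.processor.TransformerFactoryImpl \
     -cp xalan.jar:. MyApp
```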
We also had a question regarding whether compiling a Java program to a native binary would improve performance. Though it drew a bit of discussion regarding HotSpot and the like, the best advice is the old adage: make it work, make it right, and make it fast by profiling.
Moving on to www.javagaming.org, you'll find that the discussion groups are sporting a new and much improved look. And all of this happened without affecting the quality of the discussions. For proof, let's consider a question regarding micro-benchmarking. At issue were the performance characteristics of the following three loops.
    for (int i = 0; i < array.length; i++) { /* do stuff */ }
    for (int i = array.length - 1; i >= 0; i--) { /* do stuff */ }
    for (int i = array.length; --i >= 0; ) { /* do stuff */ }
The second for loop would supposedly take advantage of a special processor instruction to test against 0. Though the answer to this question is VM- and processor-dependent, in the Wintel world this is no longer a valid optimization. One argument for the first example is that processors may optimize front-to-back memory access; I don't know of any processor that optimizes back-to-front access.
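As always with micro-benchmarks, the only trustworthy answer comes from measuring on your own VM and hardware. The following is a minimal sketch (class and method names are my own) of how one might time the three loop forms; a serious benchmark would need longer runs and more careful warm-up than shown here:

```java
// Micro-benchmark sketch for the three loop forms. Results vary by
// VM and processor; warm-up lets HotSpot compile the methods first.
public class LoopBench {
    static long sumForward(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }
    static long sumBackward(int[] a) {
        long s = 0;
        for (int i = a.length - 1; i >= 0; i--) s += a[i];
        return s;
    }
    static long sumDecrementTest(int[] a) {
        long s = 0;
        for (int i = a.length; --i >= 0; ) s += a[i];
        return s;
    }
    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        // Warm-up pass before timing.
        for (int w = 0; w < 10; w++) {
            sumForward(data); sumBackward(data); sumDecrementTest(data);
        }
        long t0 = System.nanoTime();
        long s1 = sumForward(data);
        long t1 = System.nanoTime();
        long s2 = sumBackward(data);
        long t2 = System.nanoTime();
        long s3 = sumDecrementTest(data);
        long t3 = System.nanoTime();
        System.out.println("forward:   " + (t1 - t0) + " ns, sum=" + s1);
        System.out.println("backward:  " + (t2 - t1) + " ns, sum=" + s2);
        System.out.println("--i test:  " + (t3 - t2) + " ns, sum=" + s3);
    }
}
```

All three loops compute the same result, so any timing difference you do see is down to the code the JIT generates for the loop control.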
This discussion was followed up with a discussion of escape analysis. Using escape analysis, the VM would recognize when an object exists only on a local stack. The benefit that can be derived from this type of analysis is the ability to use a much cheaper form of GC. The JET VM has successfully used this technology to reduce the impact of garbage collecting these short-lived objects.
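To illustrate the kind of code escape analysis targets, here is a hypothetical example (not from the discussion): the Point below never leaves the method that allocates it, so a VM performing escape analysis could place it on the stack, or eliminate the allocation entirely, instead of handing it to the garbage collector.

```java
// Sketch of a non-escaping allocation. The Point created in
// distance() is never stored, returned, or passed elsewhere,
// so its lifetime is bounded by the method's stack frame.
public class EscapeDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }
    static double distance(double x1, double y1, double x2, double y2) {
        Point p = new Point(x2 - x1, y2 - y1); // never escapes this frame
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }
    public static void main(String[] args) {
        System.out.println(distance(0, 0, 3, 4)); // prints 5.0
    }
}
```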
Write Once, Run Anywhere? Java game programmers certainly have a different view of this moniker, as is clear in a discussion of threads on Linux. In order to provide a smooth animation experience, game programmers live and die by the quality of the timers they use. Consequently, a number of discussions center around the granularity, accuracy, and consistency of different timers. The common measurement is fps, or frames per second. In this posting, the programmer was conveying observations made while experimenting with sleep() in a timing loop. With a sleep time of 12ms, the frame rate was 33 per second. When the sleep time was set to 0, the frame rate jumped to 200 per second. Changing the sleep time to 1ms dropped the frame rate to 50. As it turns out, this behavior is related to the granularity of the timer. Calling sleep(1) will most likely not sleep for 1 millisecond. Instead, it will sleep for the time specified by the granularity of the clock. On Linux, this is typically 1ms. On Windows, however, this usually results in a 10-15ms sleep. Were there other differences which affected performance? Well, the most significant one would be support (or lack thereof) for the underlying graphics card. The lesson here is that even though Java goes a long way to support the WORA mantra, it does not completely bridge the gap.
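You can probe the effective granularity of sleep() on your own platform with a small test along these lines (a sketch only; the class and method names are mine, and the averaging is deliberately crude):

```java
// Measures how long Thread.sleep(1) actually sleeps on average.
// On a 1ms-tick system this prints roughly 1-2 ms; on a system
// with a coarser clock the average will be correspondingly larger.
public class SleepGranularity {
    static double averageSleepMillis(int iterations) throws InterruptedException {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Thread.sleep(1); // requests 1 ms; the OS rounds up to its clock tick
        }
        return (System.nanoTime() - start) / 1_000_000.0 / iterations;
    }
    public static void main(String[] args) throws InterruptedException {
        System.out.println("average sleep(1): "
                + averageSleepMillis(50) + " ms");
    }
}
```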
From the Middleware Company (www.theserverside.com) there is an interesting thread regarding the choice of middleware. Eventually, the discussion settled into a comparison of RMI and SOAP. There certainly are trade-offs between the two technologies. RMI offers a much better performance profile than SOAP; on the other hand, RMI is an all-Java solution. Because SOAP messages are XML, SOAP offers the possibility of integrating non-Java clients. In the end, the choice should be driven by your requirements.
Another thread of interest involves a discussion of two different EJB architectures. The first uses Stateless Session Beans to wrap POJOs (plain old Java objects) retrieved directly from the database. The second architecture uses the same Stateless Session Bean to retrieve and wrap Entity Beans. The thread starts with a plea for configuration tips to improve the Entity Bean performance. What the poster has stumbled upon is the fact that Entity Beans are expensive to use. In this case, the data is read-only, so there appears to be no benefit to justify the cost of using them.
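The data-access half of the first architecture might look something like the sketch below (entirely hypothetical names; an in-memory list stands in for the JDBC query a real DAO would run). A Stateless Session Bean would simply delegate to such a DAO and hand the read-only POJOs straight back to the client:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the POJO-based architecture: an immutable data object
// plus a DAO a Stateless Session Bean could delegate to.
public class CustomerDao {
    public static final class Customer { // read-only POJO
        private final int id;
        private final String name;
        public Customer(int id, String name) { this.id = id; this.name = name; }
        public int getId() { return id; }
        public String getName() { return name; }
    }
    // In a real application this method would run a JDBC query;
    // the hard-coded rows here are for illustration only.
    public List<Customer> findAll() {
        List<Customer> result = new ArrayList<>();
        result.add(new Customer(1, "Acme"));
        result.add(new Customer(2, "Globex"));
        return Collections.unmodifiableList(result);
    }
}
```

Because the POJOs are immutable and never re-enter the container, none of the Entity Bean machinery (activation, passivation, synchronization with the store) is ever paid for.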
Finally, we have a posting from a developer who is designing a typical ERP application. The developer wishes to use an application server but does not want to use an EJB server. In short, he was planning on building his own application server and was looking for tips on how to handle concurrency. It was suggested that if performance was an issue, using a caching technology could make a significant difference. Where is the Cache JSR when you need it?
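In the absence of a standard cache API, a minimal thread-safe read-through cache can be sketched as follows (names are mine; this uses java.util.concurrent, so at the time of the original discussion a synchronized HashMap would have played the same role):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal read-through cache sketch. The loader stands in for an
// expensive lookup such as a database query.
public class SimpleCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public SimpleCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        // computeIfAbsent is atomic per key, so concurrent requests
        // for the same key trigger at most one load.
        return map.computeIfAbsent(key, loader);
    }
}
```

For example, `new SimpleCache<Integer, String>(id -> lookupCustomerName(id))` would hit the database once per id and serve every subsequent request from memory, with the concurrency handled by the map rather than hand-rolled locking.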