Back to newsletter 035 contents
How does HotSpot speed up applications compared to a simple JIT?
HotSpot is an example of an adaptive Java Virtual Machine (JVM). Both HotSpot and plain just-in-time (JIT) compiler JVMs compile Java bytecodes into native machine code for the CPU they execute on. Both can (and do) apply optimizations to the code they compile, including inlining, dead-code elimination, and a host of other compiler optimizations. HotSpot adds some further speculative optimizations compared to other existing JIT JVMs, such as speculative type assignments, but these could theoretically be applied by other JIT JVMs too.
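To illustrate what a speculative type assignment looks like from the application's side, here is a minimal sketch (the class and method names are illustrative, not anything from HotSpot's internals): the call sites on the List interface are virtual, but if the profiler only ever observes an ArrayList at this point, HotSpot can compile the loop assuming that concrete type, inlining the calls behind a cheap type-check guard and deoptimizing back to interpreted code if another type ever appears.

```java
import java.util.ArrayList;
import java.util.List;

public class SpeculativeExample {

    // list.size() and list.get(i) are virtual calls on the List
    // interface. If profiling shows 'list' is always an ArrayList
    // here, the JIT can speculatively replace the virtual dispatch
    // with direct, inlined ArrayList code, guarded by a type check.
    static int total(List<Integer> list) {
        int sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 1; i <= 10; i++) {
            list.add(i);
        }
        System.out.println(total(list)); // 1 + 2 + ... + 10 = 55
    }
}
```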
The main conceptual difference between HotSpot and standard JIT JVMs is that HotSpot is far more selective about which methods to compile into native machine code. Compiling to machine code takes time, dead time for the application, and the more optimizations applied during the compilation phase, the longer compilation takes. Sun's solution to this dilemma was to give the compiler more time to apply optimizations by compiling fewer methods. The trick is deciding which methods will give the biggest boost to the application if they are compiled. HotSpot identifies these "big boost" methods by initially running all code in interpreted mode, while running a simple profiler in parallel. Whenever the profiler identifies a particular method as a bottleneck, i.e. a method taking a significant proportion of execution time (a "hot spot" in the code), that method is targeted for compilation using extensive optimizations.
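The selection strategy above can be sketched as a simple invocation counter: the "interpreter" counts each method entry, and once a method crosses a hotness threshold it is handed off for optimized compilation. This is only a hypothetical illustration of the idea (the class, method names, and threshold are made up, not HotSpot's real mechanism, which also weighs loop iterations and other profile data).

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class HotMethodProfiler {

    private final int threshold;
    private final Map<String, Integer> invocationCounts = new HashMap<>();
    private final Set<String> compiled = new HashSet<>();

    public HotMethodProfiler(int threshold) {
        this.threshold = threshold;
    }

    // Called by the "interpreter" each time a method is entered.
    public void recordInvocation(String method) {
        int count = invocationCounts.merge(method, 1, Integer::sum);
        if (count >= threshold && compiled.add(method)) {
            // In a real JVM this would hand the method's bytecodes to
            // the optimizing compiler; here we just record the decision.
            System.out.println("Compiling hot method: " + method);
        }
    }

    public boolean isCompiled(String method) {
        return compiled.contains(method);
    }

    public static void main(String[] args) {
        HotMethodProfiler profiler = new HotMethodProfiler(1000);
        for (int i = 0; i < 1500; i++) {
            profiler.recordInvocation("App.hotLoop"); // crosses the threshold
        }
        profiler.recordInvocation("App.startupOnly"); // cold: stays interpreted
        System.out.println("hotLoop compiled: "
                + profiler.isCompiled("App.hotLoop"));      // true
        System.out.println("startupOnly compiled: "
                + profiler.isCompiled("App.startupOnly"));  // false
    }
}
```

Only the frequently invoked method is selected, so the (simulated) expensive compilation work is spent where it pays off most.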
The result is that only a small percentage of the application's methods are compiled into native machine code, but those methods include the code in which the application spends most of its time. The extra compilation optimizations that HotSpot has the time to apply more than make up for the remaining code that is still run in interpreted mode. The overall result can be a significant speedup compared to the simpler JIT strategy of compiling most methods as quickly as possible with fewer optimizations.
There are other aspects of HotSpot that speed up applications, but nothing that couldn't be rolled into a plain JIT JVM. The main difference is the strategy of running in interpreted mode, profiling in parallel, and targeting which methods to compile into native code while applying more extensive optimizations.
HotSpot also has internal tuning: how long the profiler has to identify a bottleneck before HotSpot chooses to compile it; how many optimizations it should apply; and so on. Because of these internal options, it is possible to come up with more than one HotSpot configuration. At the moment, the Sun JVM has two, -client and -server, which use HotSpot technology tuned to work in different ways. The -client mode applies fewer optimizations, targets startup time, and tries to get into the native code stage quickly. The -server mode is assumed to have more time to get into full flow, so it takes more time deciding which methods to compile and tries to apply as much optimization as possible. Nevertheless, some longer-running benchmarks show -client mode faster, and some short-run benchmarks show -server mode faster, all of which indicates that this is more of an art than a science.
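Selecting between the two configurations is done on the java command line ("MyApp" here is a placeholder for your own main class):

```shell
# Run the same application under each HotSpot configuration:
java -client MyApp    # tuned for fast startup, lighter optimization
java -server MyApp    # tuned for peak throughput, heavier optimization
```

Benchmarking your own application under both modes is the only reliable way to know which one wins for your workload.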
In addition, the general HotSpot strategy isn't guaranteed to give the best results. Other JVMs, such as the IBM JVM and BEA's JRockit, don't use HotSpot technology and remain with plain JITs; for many applications, especially compute-intensive ones, these other JVMs can outperform the Sun JVMs.
The JavaPerformanceTuning.com team