Java Performance Tuning
Tips October 2012
Back to newsletter 143 contents
Slow death - the cause and the remedy (Page last updated October 2012, Added 2012-10-29, Author Vladimir Sor, Publisher plumbr). Tips:
- A typical memory-leaking heap has two phases before it actually throws an OutOfMemoryError: 1) processing runs normally, but the heap gradually fills until it reaches 2) the point where garbage collection becomes frequent enough to dominate application time, after which the application does very little application work but maxes out all cores available to the garbage collector. This latter phase is characterised by CPUs being heavily used, the heap mostly full, and the useful application work rate falling to hugely lower levels than normal. The JVM can limp along like this for a varying amount of time before it eventually throws an OOME.
- You should be logging garbage collection stats with appropriate flags, e.g. -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
- If your JVM is spending a significant amount of time in GC, there is a problem that needs fixing. Typically either the JVM needs more memory, or it is leaking memory, or you are creating too many objects compared to the work you are doing
- The UseGCOverheadLimit flag (on by default) causes the JVM to throw an OOME if it is spending more than 98% of its time in garbage collection and recovering less than 2% of the heap.
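The GC overhead described above can also be checked from inside the JVM via the standard java.lang.management API. The sketch below (class and method names are our own, and using the 98% threshold as a comparison point is illustrative) reports the fraction of JVM uptime spent in collections:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverheadCheck {
    // Total time (ms) spent in GC across all collectors so far.
    static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this collector
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        long gcTime = totalGcTimeMillis();
        double gcFraction = uptime > 0 ? (double) gcTime / uptime : 0.0;
        System.out.printf("GC time: %d ms of %d ms uptime (%.1f%%)%n",
                gcTime, uptime, gcFraction * 100);
        // A healthy JVM spends far less than the 98% GC-overhead threshold in GC
        assert gcFraction < 0.98;
    }
}
```

This is a point-in-time sample; for the trends that reveal the "slow death" phase you still want the GC log flags above.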
An Introduction to WebSphere Application Server Performance Tuning Toolkit: Part 1 (Page last updated October 2012, Added 2012-10-29, Author Shishir Narain, Wang Yu, Tao Zhang, Publisher IBM). Tips:
- The most common performance problems arise from: synchronization blocking; deadlocks; high CPU usage; connection leaks; memory leaks.
- A performance problem caused by excessive synchronization forcing many threads to be idle as they wait for a shared resource is easily identified from a stack trace, which will show idle threads waiting on the shared resource and one thread using it.
An Introduction to WebSphere Application Server Performance Tuning Toolkit: Part 2 (Page last updated October 2012, Added 2012-10-29, Author Shishir Narain, Wang Yu, Tao Zhang, Publisher IBM). Tips:
- A deadlock occurs when two or more threads are stuck waiting for each other to release a lock; neither thread can proceed. Deadlocks are easily identified by taking a stack trace: normally deadlocks are explicitly reported by the JVM stack trace, otherwise you look for idle threads waiting on resources and correlate which threads hold those resources. Clearly if an idle thread holds one resource and is waiting for another, and that other resource is held by another idle thread which is (ultimately) waiting on the first, you have a deadlock.
- Where a JVM is using a lot of CPU, you need to identify the threads consuming the CPU (using either system-level or JVM JMX thread-level monitoring), then examine the stack traces of those threads. This should identify the code causing the high CPU usage.
- Connection leaks happen when you acquire a connection from the connection pool and never close it.
- Connections that are not on a live stack (i.e. being used by a live thread) are likely to be leaks.
- Memory leaks occur when a program doesn't free a reference to an object, preventing the garbage collector from reclaiming the object's memory.
- To find a memory leak, generate a heap dump and analyse it with the Eclipse Memory Analyzer (MAT).
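The JVM's built-in deadlock reporting mentioned above is also exposed programmatically through ThreadMXBean, which is how monitoring tools detect it. A minimal sketch (class and method names are our own) that asks the JVM for deadlocked threads:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetector {
    // Names of threads the JVM reports as deadlocked (empty if none).
    static String[] deadlockedThreadNames() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads(); // null when no deadlock exists
        if (ids == null) return new String[0];
        ThreadInfo[] infos = bean.getThreadInfo(ids);
        String[] names = new String[infos.length];
        for (int i = 0; i < infos.length; i++) {
            names[i] = infos[i].getThreadName();
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println("Deadlocked threads: " + deadlockedThreadNames().length);
    }
}
```

Running a check like this periodically from a monitoring thread gives you the same information as correlating stack traces by hand.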
Introduction to Architecting Systems for Scale (Page last updated April 2011, Added 2012-10-29, Author Will Larson, Publisher Irrational Exuberance). Tips:
- Scalability: The ideal system increases capacity linearly with adding/improving hardware. This is scalability. Adding servers is termed horizontal scalability; increasing the power of a particular server is termed vertical scalability.
- Redundancy: In an ideal system, the loss of a server should simply decrease system capacity by the same amount it increased overall capacity when it was added.
- Horizontal scalability is often enabled by combining redundant systems with load balancing across them.
- A Smart Client understands it has a pool of servers and load balances across the servers, automatically detecting server non-availability and newly added servers. It is very difficult to get a Smart Client to load balance correctly.
- A Hardware Load Balancer is the highest performance solution to load balancing, but very expensive and often difficult to configure optimally.
- A Software Load Balancer (e.g. HAProxy) provides a proxying service on each box needing one or more pooled services, managing healthchecks and removing and returning machines to the proxy pools according to your configuration, as well as balancing across all the machines in those pools.
- Caching consists of: precalculating results, pre-generating expensive indexes, and storing copies of frequently accessed data in a faster backend. Caching in-memory tends to be faster than any other type of caching, but is limited by available memory size and incurs memory management overhead.
- Content Distribution Networks (CDN) take the load of serving static media off your application servers, and also provide local geographic distribution, meaning your static assets will load more quickly and with less strain on your servers. If you don't use an external CDN, it's worth using a separate lightweight HTTP server on a subdomain to serve your static resources, as this allows you to configure that server specifically for static resource distribution.
- Cache invalidation is vital to avoid displaying incorrect data: each time the master copy of a value changes, you need to write the new value into the cache or simply delete the current value from the cache.
- Move whatever processing can be moved off-line.
- The common mechanism for processing asynchronous requests is to put the request onto a message queue for dedicated off-line servers to process.
- If a large-scale application is dealing with a large quantity of data, you're likely to need to add support for map-reduce.
- Separating the platform and web application allows you to scale the layers independently.
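The delete-on-update cache invalidation described above can be sketched with two ConcurrentHashMaps, one standing in for the in-memory cache and one for the master store (both stand-ins, and all names here, are illustrative assumptions, not part of the article):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of an in-memory cache with delete-on-update invalidation: when the
// master copy changes, the cached entry is removed so the next read
// repopulates it from the backing store.
public class InvalidatingCache<K, V> {
    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
    private final ConcurrentMap<K, V> masterStore = new ConcurrentHashMap<>(); // stand-in for a database

    public V get(K key) {
        // Populate the cache lazily from the master store on a miss
        return cache.computeIfAbsent(key, masterStore::get);
    }

    public void update(K key, V newValue) {
        masterStore.put(key, newValue); // write the master copy first
        cache.remove(key);              // then invalidate, forcing a fresh read
    }

    public static void main(String[] args) {
        InvalidatingCache<String, Integer> c = new InvalidatingCache<>();
        c.update("a", 1);
        System.out.println(c.get("a")); // prints 1
        c.update("a", 2);
        System.out.println(c.get("a")); // prints 2 - the stale entry was invalidated
    }
}
```

Writing the new value into the cache instead of deleting it (write-through) is the other option the tip mentions; deletion is simpler when the cached value is expensive to recompute only occasionally.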
How to Tune Java Garbage Collection (Page last updated September 2012, Added 2012-10-29, Author Sangmin Lee, Publisher cubrid). Tips:
- XML and JSON tend to have large memory requirements.
- GC tuning can be categorised into two aspects: minimizing the number of objects passed to the old area; and decreasing Full GC execution time.
- The basic principle of GC tuning is to apply different GC options to two or more server runs and compare the results, then stick with the options that have demonstrated improved performance or better GC times.
- The main heap sizing options are: -Xms (initial size); -Xmx (maximum size); -XX:NewRatio (ratio of the old area to the new area); -XX:NewSize (new area size); -XX:SurvivorRatio (ratio of Eden to each survivor space).
- You only need to set the perm size to avoid OutOfMemoryErrors; set it with -XX:PermSize and -XX:MaxPermSize.
- The main GC algorithm options are: Serial (-XX:+UseSerialGC); Parallel (-XX:+UseParallelGC -XX:ParallelGCThreads=value); Parallel Compacting (-XX:+UseParallelOldGC); CMS (-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=value -XX:+UseCMSInitiatingOccupancyOnly); G1 (-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC).
- GC tuning procedure is: 1. Specify your targets for acceptable pause times; 2. Monitor the GC; 3. Determine if tuning is needed; 4. Set the GC algorithm and heap sizes; 5. Go back to 2.
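A hypothetical command line combining the sizing, algorithm, and logging flags listed above (the application jar name and all the specific values are illustrative, not recommendations — they are the kind of thing the tuning loop above is meant to refine):

```shell
java -Xms2g -Xmx2g \
     -XX:NewRatio=2 -XX:SurvivorRatio=8 \
     -XX:+UseParallelGC -XX:ParallelGCThreads=4 \
     -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log \
     -jar myapp.jar
```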
Opinion: Performance Testing (Page last updated October 2012, Added 2012-10-29, Author Alex Collins, Publisher javacodegeeks). Tips:
- The first step to being able to setup automated performance testing is to follow good development, build and test practices.
- Key to automating performance testing is to have a one touch integration test that can: Build your app; Deploy to the test environment; Execute the tests; Report the results.
- While unit tests pass or fail, performance tests produce metrics. Metrics are best compared either to targets, or to metrics generated from previous tests.
- Performance tests need to: Expose metrics; Sample/record the metrics; Run the same test from the same baseline; Report on the results within a tool.
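Comparing metrics against a baseline can be as simple as a tolerance check that turns a measurement into a pass/fail signal the build can act on. A minimal sketch (the class name, 10% tolerance, and metric values are illustrative assumptions):

```java
// Sketch: a performance test "passes" when its measured metric stays within
// a tolerance of a previously recorded baseline.
public class MetricRegressionCheck {
    // True if current is no more than (1 + tolerance) times the baseline.
    static boolean withinBaseline(double baselineMillis, double currentMillis, double tolerance) {
        return currentMillis <= baselineMillis * (1.0 + tolerance);
    }

    public static void main(String[] args) {
        double baseline = 120.0; // ms, recorded from the previous test run
        double current = 126.0;  // ms, measured in this run
        // Allow a 10% regression before failing the build
        System.out.println(withinBaseline(baseline, current, 0.10)); // prints true
    }
}
```

In a one-touch pipeline this check would run in the "Report the results" step, with the baseline stored alongside previous test outputs.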
Last Updated: 2019-08-29
Copyright © 2000-2019 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.