Java Performance Tuning
Tips March 2020
Back to newsletter 232 contents
Quickly Analysing A Heap Memory Leak (Page last updated May 2019, Added 2020-03-29, Author Jack Shirazi, Publisher Devoxx). Tips:
- The methodology for analyzing memory leaks is four steps: 1. Do I have a leak (and does it need fixing - use GC logs to see); 2. What is leaking (instances of which classes - compare two heap histograms separated by enough time to see the leak); 3. What is keeping the objects alive (which instances in the app - you need a heap dump to find these); 4. Where is it leaking from (the code where the objects are created and/or assigned - use any memory profiler which can sort on GC generations and provide object creation traces).
- You can identify whether you have a leak by scanning garbage collection logs and extracting the value of the heap used AFTER full GCs. If those values are consistently increasing, you have a leak - so ensure that you have GC logging on. Finding the heap used after full GCs can be subtle for concurrent GC algorithms, but using a GC log analyzer such as GCViewer often makes it straightforward (it tends to be visually obvious).
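As a sketch of that check, here the "heap used after full GC" values are extracted with standard shell tools. The log lines are fabricated in a Java 8 -Xloggc style purely for illustration; real log formats vary by GC algorithm and JVM version, so the grep patterns would need adjusting.

```shell
# Fabricated Java 8-style GC log lines (real logs come from -Xloggc / -Xlog:gc)
cat > gc.log <<'EOF'
2020-03-01T10:00:00: [Full GC (Ergonomics) 900M->400M(1024M), 1.2s]
2020-03-01T11:00:00: [Full GC (Ergonomics) 950M->520M(1024M), 1.3s]
2020-03-01T12:00:00: [Full GC (Ergonomics) 980M->650M(1024M), 1.4s]
EOF
# Extract heap-used-after-full-GC; a steadily rising sequence suggests a leak
grep -o 'Full GC[^,]*' gc.log | grep -oE '>[0-9]+M' | tr -d '>M'
```

Here the sequence 400, 520, 650 rises after every full GC, which is the leak signature; a tool like GCViewer makes the same trend visible as a rising sawtooth baseline.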
- Heap histograms can be generated by jmap -histo:live PID; although this operation impacts JVM performance, it's much more lightweight than taking a full heap dump. Comparing histograms over time is an easy way to identify the classes of leaking objects.
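A minimal sketch of the histogram comparison, using two fabricated jmap -histo style snapshots (columns: rank, instance count, bytes, class name; the class names are invented) and awk to show per-class instance growth between them:

```shell
# Fabricated jmap -histo output captured an hour apart
cat > histo1.txt <<'EOF'
1: 1000 64000 com.example.Order
2: 500 16000 java.lang.String
EOF
cat > histo2.txt <<'EOF'
1: 9000 576000 com.example.Order
2: 510 16320 java.lang.String
EOF
# Print instance-count growth per class between the two snapshots;
# the class that keeps growing (com.example.Order here) is the leak suspect
awk 'NR==FNR {count[$4] = $2; next} {print $4, $2 - count[$4]}' histo1.txt histo2.txt
```

A class whose instance count grows without bound across snapshots, while others stay roughly flat, is the "what is leaking" answer from step 2 of the methodology.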
- To find what is keeping an object alive, you need a heap dump and a heap dump analyzer. Find the roots holding on to the leaking memory (often a large memory chunk easily identified) and find your application instances which are holding the objects and keeping them alive.
- A memory profiler which can sort on GC generation counts and also provides object creation traces lets you find where in the code the leaking objects are being created or assigned. A GC generation count is NOT the age of the objects of a class, it is the number of different ages of objects of a class, where the age is the number of GCs the object has survived. So if there are two objects alive of class X, one which has survived 93 GCs and the other 51 GCs, this is a generation count of 2. If there were a third object which had also survived 51 GCs, the generation count would still be 2. Leaking objects will have a high generation count; all other objects (after full GCs) will have low generation counts.
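As a deliberately leaky sketch (class and field names invented, not from the talk), this is the pattern the four steps uncover: a static root keeping objects alive forever, and an allocation site that a profiler sorted on generation counts would point at.

```java
import java.util.ArrayList;
import java.util.List;

// A classic leak: entries are added but never evicted, so every byte[]
// stays reachable from the static root CACHE and survives every full GC.
public class LeakDemo {
    static final List<byte[]> CACHE = new ArrayList<>(); // GC root via the class

    static void handleRequest() {
        CACHE.add(new byte[1024]); // the allocation site a profiler would flag
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) handleRequest();
        // ~10MB of byte[] data is now pinned; heap histograms taken over time
        // would show the byte[] instance count climbing without bound
        System.out.println(CACHE.size());
    }
}
```

In a heap dump analyzer, step 3 would show the retained byte[] instances held by the ArrayList referenced from the LeakDemo class root.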
GC Tuning & Troubleshooting Crash Course (Page last updated January 2020, Added 2020-03-29, Author Ram Lakshmanan, Publisher JaxLondon). Tips:
- The 3 key metrics are: latency/pause time (use the distribution, not averages); throughput (GC efficiency); and footprint (memory but also CPU). Enable GC logging with -Xlog:gc*:file=...
- The overall process space includes: heap space, metaspace, thread stacks, GC info space, code space, socket spaces, JNI space, and more.
- Tune by: 1. removing all GC options (except logging) to check the defaults (they may now be better as versions change); 2. seeing why the GC is being triggered and eliminating the worst causes; 3. eliminating calls to System.gc() from the application including 3rd-party libraries (eg RMI), or using -XX:+DisableExplicitGC; 4. targeting high allocation rates (unnecessary string duplication, incorrectly sized collections).
- Pre-size collections to the final size needed; create collections lazily; dereference the collection rather than clearing it; avoid boxed primitives; eliminate finalizers; avoid duplicating strings.
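The first two collection tips can be sketched as follows (a minimal illustration with invented names, not code from the talk):

```java
import java.util.ArrayList;
import java.util.List;

public class CollectionTips {
    // Pre-size to the final size needed: the capacity hint avoids the
    // repeated internal array copies a default-sized ArrayList would do
    static List<String> presized(int n) {
        List<String> out = new ArrayList<>(n);
        for (int i = 0; i < n; i++) out.add("row-" + i);
        return out;
    }

    // Lazy creation: only allocate the list when there is something to store,
    // so the common happy path allocates nothing
    private List<String> errors; // stays null until the first error
    void addError(String msg) {
        if (errors == null) errors = new ArrayList<>();
        errors.add(msg);
    }
    int errorCount() { return errors == null ? 0 : errors.size(); }

    public static void main(String[] args) {
        System.out.println(presized(3)); // [row-0, row-1, row-2]
    }
}
```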
Waste Free Coding (Page last updated April 2019, Added 2020-03-29, Author Greg Higgins, Publisher LessJava). Tips:
- The app processes 2.2 million CSV records per second in a 3MB heap with zero GC on a single thread in Java.
- Avoid allocating multiple strings on reading data by using a single reused read buffer to process the data
- Avoid creating multiple objects when converting raw data to structured data by using a reusable structure object that can process the structure and then be used again for the next chunk of data
- Use zero-allocation converters to convert raw data from buffers into (reusable) structure objects
- Use primitive data types and avoid autoboxing
- Partition the data to use single-threaded processing; this avoids locks, data sharing, and queues.
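The reused-buffer idea can be sketched like this (an illustrative example, not code from the talk): one StringBuilder is reused for every field, and the Java 9+ Long.parseLong(CharSequence, ...) overload parses it in place, so no per-field String is ever allocated.

```java
public class ReusableParser {
    private final StringBuilder field = new StringBuilder(); // reused every call

    // Sums the integer fields of a comma-separated line with no substring()
    // calls and no per-field String allocation
    long sumFields(CharSequence line) {
        long total = 0;
        field.setLength(0);
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == ',') {
                total += Long.parseLong(field, 0, field.length(), 10); // Java 9+
                field.setLength(0); // reuse the buffer, keep its capacity
            } else {
                field.append(c);
            }
        }
        if (field.length() > 0) total += Long.parseLong(field, 0, field.length(), 10);
        return total;
    }
}
```

After warm-up the per-record allocation is zero, which is how this style of parser stays within a tiny heap with no GC cycles.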
Monica Beckwith on Java Garbage Collection (Page last updated September 2019, Added 2020-03-29, Author Monica Beckwith, Publisher IEEE Computer Society). Tips:
- You can increase the heap to postpone collection for long enough for some types of process to complete (recycled cluster nodes, daily off-peak GCs, batch jobs)
- Copying collector pauses tend to be proportional to the number of objects being copied
- Users noticing pauses depends on both pause durations and the frequency of the pauses (occasional longer pauses can be better than many short ones depending on how much the user is affected by the one and the other)
- Some common GC tuning options: if references are a problem, turn on parallel reference processing; tune the generation sizes based on the live dataset sizes; compressed oops can give good improvement; test if using the numa aware allocator -XX:+UseNUMA can improve performance
- You should enable -XX:+PrintGCDetails -XX:+PrintGCTimeStamps (Java9+ -Xlog:gc*) to see allocation and promotion rates and heap occupancies.
- Statistical analysis of GC events helps tell you whether you have a problem, but to identify the problem you need to investigate the outliers
- Use a baseline to compare against changes to see which improve the situation. Always try one change at a time!
- If you need less than 1 millisecond pauses, you cannot have garbage collection. Servers tend to have network overheads above that so GC is probably not the limiting factor.
- Off heap and reusing buffers is a reasonable approach to avoid garbage collection for ultra-low pause applications
Exploring Java Heap Dumps (Page last updated November 2018, Added 2020-03-29, Author Ryan Cuprak, Publisher Devoxx). Tips:
- Memory leaks have many reasons, some less common are: faulty clone methods; duplicate singletons; cache logic bugs.
- NetBeans (open source) profiler has a profiler API which can be used standalone - https://github.com/apache/incubator-netbeans/tree/master/profiler/lib.profiler/src/org/netbeans/lib/profiler/heap using HeapFactory.createHeap(). Note some profiler calls are slow - prefer Javadoc "Speed: normal" calls. Note the API itself doesn't load the heap dump into memory, but you could inadvertently do that so be careful.
- Heap dumps can be generated with -XX:+HeapDumpOnOutOfMemoryError, jmap, jhsdb, jcmd, Ctrl-Break, and from the HotSpotDiagnostic platform MBean, either programmatically or via any JMX client.
- Analyse heaps from GC roots, threads, or class types. Roots are: classes (loaded by a classloader), threads, stack locals, monitors, JNI references, or something held by the JVM. Generally it's best to filter out internal JVM classes since these won't be causing the leak.
- When processing large heaps, you might need to cache intermediate analyses on disk - and make sure you note which objects have been processed to avoid looping in the same set of objects.
- Heap dump processing is I/O bound.
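The programmatic heap dump route via the HotSpotDiagnostic platform MBean looks like this on a HotSpot JVM (a minimal sketch; the file path and class name are illustrative):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class DumpHeap {
    // Writes an hprof heap dump of the current JVM to the given path
    public static void dump(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.getPlatformMXBean(
                HotSpotDiagnosticMXBean.class);
        // liveOnly=true forces a full GC first, so only reachable objects
        // appear in the dump - usually what you want for leak analysis
        bean.dumpHeap(path, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("heap", ".hprof");
        f.delete(); // dumpHeap refuses to overwrite an existing file
        dump(f.getPath(), true);
        System.out.println("dumped " + f.length() + " bytes to " + f);
    }
}
```

The same dumpHeap operation is exposed over JMX, so any JMX client (eg JConsole) can trigger it remotely without code changes.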
The Trouble with Memory (Page last updated September 2019, Added 2020-03-29, Author Kirk Pepperdine, Publisher QCon). Tips:
- Excessive memory churn is a hugely common application bottleneck (because of the speed difference between the CPU and memory fetches), but it's not well monitored so it's often not identified. GC logs provide fairly good data.
- Reducing memory churn (eg by code changes, reduced logging) can improve application throughput dramatically
- Target hot allocation sites in the code to identify where memory churn is being caused
- High memory churn rates quickly fill the young generation, which causes frequent GCs and more copying, faster aging so faster promotion, and more frequent tenured cycles
- Rule of thumb allocation rates - above 1GB/sec is a problem, below 300MB/sec is okay
- Profilers can interfere with the JVM's optimizations, eg an object that the JIT compiler would eliminate via escape analysis can be prevented from being eliminated because the profiler is tracking the object.
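A tiny example of the kind of code change that reduces churn at a hot allocation site (an illustrative sketch, not code from the talk): the naive version allocates a fresh String and char[] on every iteration, the lean version allocates one builder and one final String.

```java
public class ChurnDemo {
    // Churn-heavy: each += allocates a new String plus its backing char[],
    // so a hot loop like this can dominate the allocation profile
    static String joinNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i + ",";
        return s;
    }

    // Churn-light: one pre-sized builder, one final String
    static String joinLean(int n) {
        StringBuilder sb = new StringBuilder(n * 4);
        for (int i = 0; i < n; i++) sb.append(i).append(',');
        return sb.toString();
    }
}
```

Both produce identical output; the difference only shows up in the allocation rate, which is exactly why churn goes unnoticed unless you measure it.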
Back to newsletter 232 contents
Last Updated: 2023-01-29
Copyright © 2000-2023 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss