Java Performance Tuning
Tips March 2017
Back to newsletter 196 contents
https://www.infoq.com/presentations/pmu-hwc-java
Speedup Your Java Apps with Hardware Counters (Page last updated February 2017, Added 2017-03-29, Author Sergey Kuksenko, Publisher QCon). Tips:
- CPU Cache locality and avoiding cache misses is critical to highest performing code.
- Iterating over multi-dimensional arrays is more efficient if you iterate over the last index in the tightest loop because you gain cache locality.
- Sequential access to memory (cache lines) works with CPU vectorization to speed things even more.
- If you have a performance issue, ask: What is slow (monitoring); where is it slow (profiling); how do I make it faster (tuning).
- For analysing performance issues, take a top-down approach, the higher level issues need to be fixed first: System (Network, Disk, OS, CPU, Memory); JVM (GC, Heap, JIT, Classloading); Application (algorithms, synchronization, threading, API); Microarchitecture (CPU caches, data alignment, pipeline stalls).
- Hardware counters are useful when you have high %user CPU utilization.
- A profiler shows where the application is spending its time, but doesn't always show why (eg if it's due to CPU cache misses, you'll just see that the code is 'hot' but not that the cause is from CPU cache misses).
- Reasons for 100% %user CPU utilization include: An inefficient algorithm; pipeline stalls from memory loads or stores; pipeline flush due to mispredicted branches; expensive instructions; insufficient instruction level parallelism.
- Typical tuning options at the CPU level include: reduce the number of instructions needed to execute the algorithm; change the data structure to reduce memory stalls (waiting for memory); change the program logic to reduce branch stalls (eg from mispredicted branches); change the order of operations so that they execute in parallel across the cores more easily (if in a loop each subsequent operation depends on the result of the last one, this is difficult); breakdown long latency operations into multiple simpler ones.
- The most frequent cause of CPU level inefficiency is being memory bound: cache misses from not having data close together in memory or not accessing data close together sequentially (the caching system optimizes for these two); dTLB misses (may take hundreds of cycles); NUMA (eg sharing data across cores); memory bandwidth not large enough; false sharing in caches.
- dTLB misses can be solved by reducing the working set size of the procedure; or by enabling large pages (-XX:+UseLargePages).
- False sharing is where two variables located close to each other in memory are on the same cache line but are updated by two different threads - the result is that the two core caches duel in trying to update the same cache line. The @Contended annotation can help if you have this issue.
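The loop-ordering tip above can be seen in a small, self-contained sketch (class and method names are mine, and the timing difference will vary by CPU and array size): summing a 2D array with the last index in the innermost loop walks memory sequentially, while swapping the loop order produces strided access that defeats the cache.

```java
public class LoopOrderDemo {

    // Inner loop over the last index: each row is a contiguous int[] in memory,
    // so accesses are sequential and cache-friendly.
    static long sumRowMajor(int[][] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[i].length; j++)
                sum += a[i][j];
        return sum;
    }

    // Inner loop over the first index: consecutive accesses jump between rows,
    // touching a different cache line each time (assumes a rectangular array).
    static long sumColumnMajor(int[][] a) {
        long sum = 0;
        for (int j = 0; j < a[0].length; j++)
            for (int i = 0; i < a.length; i++)
                sum += a[i][j];
        return sum;
    }

    public static void main(String[] args) {
        int n = 2048;
        int[][] a = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = 1;

        long t1 = System.nanoTime();
        long s1 = sumRowMajor(a);
        long t2 = System.nanoTime();
        long s2 = sumColumnMajor(a);
        long t3 = System.nanoTime();

        System.out.println("row-major:    " + (t2 - t1) / 1_000_000 + " ms, sum=" + s1);
        System.out.println("column-major: " + (t3 - t2) / 1_000_000 + " ms, sum=" + s2);
    }
}
```

Both loops compute the same sum; only the memory access pattern differs, which is exactly the kind of cost a plain profiler shows as "hot" code without revealing the cache-miss cause. (A naive micro-test like this is only indicative; a harness such as JMH gives trustworthy numbers.)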
https://www.infoq.com/articles/Troubleshooting-Java-Memory-Issues
Troubleshooting Memory Issues in Java Applications (Page last updated March 2017, Added 2017-03-29, Author Poonam Parhar, Publisher InfoQ). Tips:
- "OutOfMemoryError: Java Heap Space" means there is no free space left in the Java heap, so the JVM cannot continue program execution.
- The most common cause of heap space OutOfMemoryError is that the specified maximum Java heap size is not big enough for the live objects. The second most common cause is a memory leak (unintentionally held objects).
- You should have GC logging enabled, good options are -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:[gc log file]
- Monitor the live set size (heap used after full GCs). If this is increasing under stable load, then most likely there is a memory leak.
- Heaps can be monitored with VisualVM, Java Mission Control, JConsole, and the GC logs.
- Heap dumps can be obtained with: jcmd [process id/main class] GC.heap_dump filename=heapdump.dmp; jmap -dump:format=b,file=snapshot.jmap [pid]; JConsole, using the HotSpotDiagnostic MBean; and the -XX:+HeapDumpOnOutOfMemoryError flag.
- The number of attempts at trying full GCs before giving up and throwing an OutOfMemoryError can be tuned for the parallel collector with -XX:GCTimeLimit and -XX:GCHeapFreeLimit.
- Heap histograms can be obtained with: -XX:+PrintClassHistogram and Control+Break; jcmd [process id/main class] GC.class_histogram filename=Myheaphistogram; jmap -histo [pid]; jmap -histo [java] core_file
- Java Flight Recordings with heap statistics enabled show the heap objects and the top growers in the heap over time.
- Heap dumps can be analyzed with: Eclipse MAT; VisualVM; jhat; JOverflow plugin for Java Mission Control; Yourkit.
- OutOfMemoryError can be caused from excessive use of finalizers if the finalizer thread can't keep up with the rate at which the objects become eligible for finalization. To monitor the number of objects that are pending finalization you can use: JConsole (in the VM Summary page); jmap -finalizerinfo [pid]; a heap dump.
- OutOfMemoryError in Metaspace or PermGen means that space needs to be configured with more memory, or there is a classloader leak. For the latter, enable -XX:+TraceClassUnloading -XX:+TraceClassLoading (and don't use -Xnoclassgc, but do use -XX:+CMSClassUnloadingEnabled with CMS).
- OutOfMemoryError: Native Memory can be produced from insufficient swap space, and insufficient process memory - most likely because the OS has had all memory used up from the JVM and other processes together; or if running 32-bit JVM you could hit the 32-bit limit for the process.
- 64-bit JVMs with compressed oops can hit native memory limits - -XX:HeapBaseMinAddress=n can help to rebase the JVM memory leaving more room for the native heap.
- JVM Native Memory Tracking (NMT) tracks native memory that is used internally by the JVM. Enable with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail and use jcmd [pid] VM.native_memory to get the details.
- Native tools such as dbx, libumem, valgrind, purify can assist in finding native memory leaks outside the JVM.
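The heap and finalization monitoring tips above can also be done programmatically from inside the JVM via the standard java.lang.management API (the class name here is mine; note that a point-in-time reading of heap usage is not the live set, the "used after full GC" figure from the GC logs is the more reliable number):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapMonitor {

    // Point-in-time heap occupancy; this is the same figure JConsole and
    // VisualVM chart on their memory pages.
    static MemoryUsage heapUsage() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    // Same counter that JConsole's VM Summary page and jmap -finalizerinfo
    // report; a steadily growing value suggests the finalizer thread is
    // falling behind.
    static int pendingFinalization() {
        return ManagementFactory.getMemoryMXBean().getObjectPendingFinalizationCount();
    }

    public static void main(String[] args) {
        MemoryUsage heap = heapUsage();
        System.out.println("heap used=" + heap.getUsed()
                + " committed=" + heap.getCommitted()
                + " max=" + heap.getMax());
        System.out.println("objects pending finalization: " + pendingFinalization());
    }
}
```

Logging these values periodically under stable load is a lightweight way to spot the increasing-live-set pattern that indicates a leak, complementing rather than replacing the GC logs.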
https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/
Using jemalloc to get to the bottom of a memory leak (Page last updated December 2015, Added 2017-03-29, Authors frederico, oswald, thomaslee, Publisher Government Digital Service). Tips:
- Monitor both heap and native memory usage: an out of memory error from the heap will show the heap hitting its maximum; hitting an out of memory error with the heap well below maximum indicates a native memory leak.
- A memory leak outside the heap could be Metaspace, or native memory [or trying to spawn too many threads].
- If you have an out of memory error from outside the heap, monitor Metaspace size (eg using VisualVM) to confirm or eliminate it as the space where the error is happening.
- Record your JVM and system CPU, memory and IO usage continuously, so you can identify when the system changes in some way - this helps narrow down when a change may have caused an issue.
- jemalloc allows you to record the underlying native memory allocations, which is useful if you have a native memory leak. Using the associated tools, and with stack traces, you can find what is causing the leak.
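The heap-versus-native distinction in these tips can be sketched as a simple in-process check (the class name, method names, and the 90% occupancy threshold are all my assumptions for illustration, not from the article): when an OutOfMemoryError is observed, heap occupancy near -Xmx points at the heap itself, while a heap well below maximum points at Metaspace, native memory, or thread creation.

```java
public class OomClassifier {

    // Heuristic only: assume >= 90% heap occupancy at the time of the error
    // means the Java heap itself was exhausted; anything else suggests the
    // failure was elsewhere (Metaspace, native allocations, thread stacks).
    static String classify(long usedBytes, long maxBytes) {
        if (maxBytes > 0 && usedBytes >= 0.9 * maxBytes) {
            return "heap";
        }
        return "possibly native/Metaspace";
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.println("current usage would classify as: "
                + classify(used, rt.maxMemory()));
    }
}
```

In practice the same judgement is usually made by eye from a heap graph (VisualVM) against the configured -Xmx at the moment the error occurred, rather than in code.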
Jack Shirazi
Last Updated: 2024-11-29
Copyright © 2000-2024 Fasterj.com. All Rights Reserved.
URL: http://www.JavaPerformanceTuning.com/news/newtips196.shtml