Java Performance Tuning
Tips March 2019
https://www.infoq.com/presentations/memory-jvm
The Trouble with Memory (Page last updated February 2019, Added 2019-03-28, Author Kirk Pepperdine, Publisher QCon). Tips:
- Loitering objects (eg session objects with too long timeouts) can cause memory issues; check the lifetime of objects that live more than a few GCs. If they last more than the tenuring threshold, they will be moved to the old generation and impact performance. Allocation rate affects how fast objects (that survive many GCs) get pushed into the old generation. Above 1GB/sec tends to be too high.
- Logging can be a bottleneck; to check, measure the performance with the logging removed completely and compare.
- Big allocations are expensive if they go over the threshold of the fast allocation path.
- Increased occupancy of old generation tends to correlate with higher pause times.
- Profilers that instrument allocations prevent escape analysis from eliminating an allocation (keeping it on the stack), so the object gets instantiated on the heap and the measurements become misleading. The GC logs don't lie though and will tell you the allocation rate (see the sketch after these tips).
- Check the memory eviction policy of your cache - it needs one. A good rule of thumb is that if an object has not been touched for 5 minutes, it probably won't be touched again.
- Generally a higher tenuring threshold is better unless you really understand your object lifecycle well.
- Try large young generations (significantly bigger than old generation size) and see if that helps.
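The allocation rate mentioned above is best read from the GC logs, but it can also be approximated in-process. The following is a minimal sketch (not from the talk, and assuming a HotSpot JVM where com.sun.management.ThreadMXBean is available); the class name AllocationRateSampler and the 10 second window are illustrative.

    // A rough sketch: estimate the process allocation rate over a fixed window
    // using HotSpot's com.sun.management.ThreadMXBean. Threads that start or die
    // during the window make this approximate; the GC logs remain the authority.
    import java.lang.management.ManagementFactory;
    import com.sun.management.ThreadMXBean;

    public class AllocationRateSampler {
        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean tmx = (ThreadMXBean) ManagementFactory.getThreadMXBean();
            if (!tmx.isThreadAllocatedMemorySupported()) return;
            long before = totalAllocated(tmx);
            Thread.sleep(10_000);                      // 10 second sample window (illustrative)
            long after = totalAllocated(tmx);
            double gbPerSec = (after - before) / 10.0 / (1024.0 * 1024 * 1024);
            System.out.printf("Approximate allocation rate: %.2f GB/sec%n", gbPerSec);
            // The talk above suggests rates over about 1GB/sec tend to be too high.
        }

        private static long totalAllocated(ThreadMXBean tmx) {
            long sum = 0;
            // getThreadAllocatedBytes returns -1 for threads it cannot measure
            for (long bytes : tmx.getThreadAllocatedBytes(tmx.getAllThreadIds())) {
                sum += Math.max(bytes, 0);
            }
            return sum;
        }
    }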
https://www.infoq.com/presentations/cpu-microarchitecture
"Quantum" Performance Effects: beyond the Core (Page last updated February 2019, Added 2019-03-28, Author Sergey Kuksenko, Publisher QCon). Tips:
- Shrinking the dataset (so more fits into the CPU cache) and accessing memory sequentially (so hardware prefetching works for you) are two good techniques for speeding up memory access (see the sketch after these tips).
- Different GC algorithms align data differently and so can impact performance of data access due to CPU cache effects.
- If threads are contending to fill the CPU cache (making the cache repeatedly evict data from the other thread), this can slow performance significantly - sometimes enough to make sequential processing faster than multi-threaded processing.
- If you need the highest performance, you can't use shared hardware (eg cloud VMs) as you cannot guarantee lack of cache conflicts.
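As a simple illustration of the sequential-access tip above, the following sketch (not from the talk) sums the same matrix in row-major and column-major order: the row-major loop walks memory sequentially so hardware prefetching helps, while the column-major loop strides across rows and takes far more cache misses. The timing here is only indicative; use a harness such as JMH for real measurements.

    // Both methods sum the same 4096x4096 matrix (about 64MB of int data, larger
    // than typical CPU caches); only the traversal order differs.
    public class TraversalOrder {
        static final int N = 4096;
        static final int[][] matrix = new int[N][N];

        static long rowMajor() {             // sequential: each row is scanned front to back
            long sum = 0;
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    sum += matrix[i][j];
            return sum;
        }

        static long columnMajor() {          // strided: every access jumps to a different row
            long sum = 0;
            for (int j = 0; j < N; j++)
                for (int i = 0; i < N; i++)
                    sum += matrix[i][j];
            return sum;
        }

        public static void main(String[] args) {
            for (int i = 0; i < 3; i++) { rowMajor(); columnMajor(); }   // crude JIT warm-up
            long t = System.nanoTime(); long s1 = rowMajor();
            System.out.println("row-major:    " + (System.nanoTime() - t) / 1_000_000 + " ms (sum " + s1 + ")");
            t = System.nanoTime(); long s2 = columnMajor();
            System.out.println("column-major: " + (System.nanoTime() - t) / 1_000_000 + " ms (sum " + s2 + ")");
        }
    }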
https://www.infoq.com/presentations/scalability-performance-benchmark
Scaling up Performance Benchmarking (Page last updated February 2019, Added 2019-03-28, Author Anil Kumar, Monica Beckwith, Publisher QCon). Tips:
- Push different types of requests onto different queues so that long-running requests don't delay quick or high-priority requests (see the sketch after these tips).
- Scale up tactics: Add memory; Tune garbage collection; Optimize the task scheduler (eg fork-join pool size).
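A minimal sketch of the separate-queues tip above (class name and pool sizes are illustrative): long-running requests get their own small pool and queue so they cannot sit in front of quick or high-priority work.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class RequestRouter {
        // Pool sizes are illustrative; size them from measurements of your workload.
        private final ExecutorService fastPool = Executors.newFixedThreadPool(8);
        private final ExecutorService slowPool = Executors.newFixedThreadPool(2);

        public void submit(Runnable request, boolean longRunning) {
            // Route by expected cost so quick requests never wait behind slow ones.
            (longRunning ? slowPool : fastPool).execute(request);
        }
    }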
https://www.opsian.com/blog/virtualjug-continuous-profiling-in-java-tutorial/
Continuous Profiling in Java (Page last updated January 2019, Added 2019-03-28, Author Richard Warburton, Sadiq Jaffer, Publisher VirtualJUG). Tips:
- The production environment is the environment with the actual problems and correct workload, so is the ideal place to get profile information; all other environments are different in multiple ways and can produce misleading or incorrect profiler information, and completely miss problems.
- Production profiling tells you what code is using what proportion of which resources, so is quite different from logging and monitoring. Most profiling is too high overhead to run in production.
- Instrumentation resolution matters for identifying problems - if you are getting a metric every 5 seconds but the problem lasts for under 5 seconds, you will miss the issue that the metric could show.
- CPU time is useful for diagnosing computational hotspots and inefficient algorithms. Hot methods in CPU time are the top consumers of the CPU.
- Wall clock time is useful for diagnosing what is stopping the CPU from getting on with executing your application (eg waiting on I/O or locks).
- Instrumenting profilers insert timing code into the running application, so they are accurate for what they measure, but they add significant overhead and distort the time spent in quick methods, and so often produce profile information that doesn't match the performance of the uninstrumented application.
- Sampling profilers record stack samples at regular intervals, so they cannot produce accurate timings for methods, but they do show which methods are most often being executed, which is very useful. However, sampling profilers that only work at JVM safepoints can produce distorted information: you may miss code that runs between safepoints, and the sampling interval gets distorted. Also, pausing the application at safepoints too often to take samples impacts application performance. -XX:+PrintSafepointStatistics shows safepoint statistics. AsyncGetCallTrace lets you sample the stacks while avoiding safepoints (see the sketch after these tips).
- If you profile all the time, you don't need to have special access for ad-hoc profiling, and you can review profile data from transient incidents that occurred when no one could profile at the time, and across version changes.
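To illustrate the sampling approach and its safepoint bias, here is a toy sampler (not a production tool, and not from the tutorial): it snapshots all thread stacks at a fixed interval and counts the hottest top-of-stack frames. Because Thread.getAllStackTraces() gathers stacks at safepoints, it shows exactly the bias described above; profilers built on AsyncGetCallTrace avoid it.

    import java.util.HashMap;
    import java.util.Map;

    public class ToySampler implements Runnable {
        private final Map<String, Integer> hits = new HashMap<>();

        @Override public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                // Snapshot every live thread's stack; this happens at a safepoint,
                // so the results show exactly the safepoint bias described above.
                for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                    if (stack.length > 0) hits.merge(stack[0].toString(), 1, Integer::sum);
                }
                try { Thread.sleep(10); }               // roughly 100 samples per second
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
            hits.entrySet().stream()                    // print the hottest top-of-stack frames
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(10)
                .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
        }
    }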
Jack Shirazi
Back to newsletter 220 contents