Java Performance Tuning
Tips December 2019
https://www.infoq.com/articles/lessons-learned-performance-testing/
Lessons Learned in Performance Testing (Page last updated November 2019, Added 2019-12-31, Author Jiří Holuša, Publisher InfoQ). Tips:
- Spend extra time getting your testing harness correct to avoid basic recurring problems.
- Always distinguish between latency and throughput tests as they are two fundamentally different aspects of the system.
- For latency tests, fix the throughput to eliminate variance in your measurements (see the sketch after this list).
- Do not throw away performance information by using aggregations; record as much information as possible.
- Performance regressions can be prevented to a very high degree; use automation to achieve this.
- Use a realistic testing environment, otherwise your results are mostly misleading: realistic hardware, sufficient load, appropriate thread counts, and production-sized, varied data.
- If there's a performance problem, it's usually visible in multiple places, with each piece of the puzzle supporting or refuting the hypothesis.
- Collect every metric that you reasonably can, e.g. system stats (CPU, memory, disk I/O, context switches), network stats (especially important for distributed software), garbage collection logs, profiling data from Java Flight Recorder, operation time sampling, sizes and timings of internal thread pools, and statistics about internal pipelines and buffers.
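To make the fixed-throughput latency tip concrete, the following is a minimal sketch (not from the article): operationUnderTest() is a hypothetical stand-in for the call being measured, and the target rate and duration are illustrative. Each operation is scheduled against its intended start time so that slow responses do not silently reduce the offered load (coordinated omission), and every raw latency sample is kept rather than aggregated on the fly.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.LockSupport;

    public class FixedThroughputLatencyTest {

        // Hypothetical stand-in for the operation being measured,
        // e.g. send a request to the system under test and wait for the reply.
        static void operationUnderTest() {
        }

        public static void main(String[] args) {
            final int targetOpsPerSecond = 100;           // fixed throughput for the latency test
            final long testDurationNanos = TimeUnit.SECONDS.toNanos(60);
            final long periodNanos = TimeUnit.SECONDS.toNanos(1) / targetOpsPerSecond;

            List<Long> latenciesNanos = new ArrayList<>(); // keep every raw sample, aggregate later

            long start = System.nanoTime();
            long nextIntendedStart = start;
            while (System.nanoTime() - start < testDurationNanos) {
                // Schedule against the intended start time, not the previous finish time,
                // so slow operations do not silently lower the offered throughput.
                long now = System.nanoTime();
                if (now < nextIntendedStart) {
                    LockSupport.parkNanos(nextIntendedStart - now);
                }
                operationUnderTest();
                // Measure from the intended start so queueing delay is included.
                latenciesNanos.add(System.nanoTime() - nextIntendedStart);
                nextIntendedStart += periodNanos;
            }

            // Aggregations (percentiles etc.) are computed afterwards from the raw samples,
            // which can also be written to a file for later analysis.
            latenciesNanos.stream()
                    .mapToLong(Long::longValue)
                    .average()
                    .ifPresent(avg -> System.out.printf("mean latency: %.2f us over %d samples%n",
                            avg / 1_000.0, latenciesNanos.size()));
        }
    }

In practice the raw samples would be written out and the percentiles computed afterwards, in line with the advice above not to throw away information by aggregating early.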
https://www.infoq.com/presentations/interactions-servers-databases-transactions/
Concurrency, Scalability and Transactions -- Myths and Surprises (Page last updated December 2019, Added 2019-12-31, Author Renan Ranelli, Publisher The Conf). Tips:
- Concurrency means you cannot guarantee that one statement in one thread executes before another in another thread, unless you add some type of concurrency control.
- Transaction isolation levels can be used in place of explicit concurrency control to ensure consistent outcomes.
- Reading and writing should be done within the same transaction, otherwise you are likely generating a concurrency bug (see the JDBC sketch after this list).
- Transactions with side-effects are problematic: if the transaction fails, the side-effect needs to be cancelled, which can be hard to coordinate and is yet another way of generating concurrency bugs.
- Uncontrolled retries are a recipe for overload failure
- Logging and tracing are pillars of observability, but they are quite difficult to use to debug concurrency problems.
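To illustrate the read-and-write-in-one-transaction tip, here is a plain JDBC sketch; the accounts table, connection URL and debit logic are hypothetical examples, not from the talk. The point is that the SELECT ... FOR UPDATE and the UPDATE run in the same transaction, so a concurrent debit cannot act on a stale balance (the classic lost-update bug).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ReadWriteInOneTransaction {

        // Debits an account, doing the read and the write inside one transaction.
        // SELECT ... FOR UPDATE locks the row so a concurrent debit cannot
        // read the same stale balance.
        static void debit(Connection con, long accountId, long amount) throws SQLException {
            con.setAutoCommit(false);
            try (PreparedStatement read = con.prepareStatement(
                         "SELECT balance FROM accounts WHERE id = ? FOR UPDATE");
                 PreparedStatement write = con.prepareStatement(
                         "UPDATE accounts SET balance = ? WHERE id = ?")) {
                read.setLong(1, accountId);
                try (ResultSet rs = read.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("no such account: " + accountId);
                    }
                    long balance = rs.getLong(1);
                    if (balance < amount) {
                        throw new SQLException("insufficient funds");
                    }
                    write.setLong(1, balance - amount);
                    write.setLong(2, accountId);
                    write.executeUpdate();
                }
                con.commit();
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }

        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details, for illustration only.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/bank", "user", "password")) {
                debit(con, 42L, 100L);
            }
        }
    }

Alternatively, a stricter isolation level (e.g. serializable) can give the same guarantee without the explicit row lock, which is the isolation-versus-concurrency-control trade-off mentioned above.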
https://www.youtube.com/watch?v=bLvohWUAnDU
8ms/99th write percentile latency - is it fast? (Page last updated July 2019, Added 2019-12-31, Author Maciej Lasyk, Publisher GeeCON). Tips:
- Are your users happy? That's the question to ask to drive whether performance and reliability changes need to be made. Users, not monitoring, decide adequacy.
- 99.999% availability is very expensive to achieve, equating to about 24 seconds of downtime allowed in a 28-day period; to reach that you need a very high level of automation and self-healing within milliseconds (see the worked numbers after this list).
- Use Error Budgets, SLAs, SLOs, and SLIs to achieve high availability. Not more than around 3-5 SLOs per user journey (or you'll have too much information).
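The 24-second figure follows from straightforward arithmetic; this small sketch (the window size and targets are illustrative choices) shows how an availability target translates into an allowed-downtime budget:

    import java.time.Duration;

    public class DowntimeBudget {

        // Allowed downtime for an availability target over a measurement window.
        static Duration allowedDowntime(double availabilityPercent, Duration window) {
            double unavailableFraction = 1.0 - availabilityPercent / 100.0;
            long allowedSeconds = Math.round(window.getSeconds() * unavailableFraction);
            return Duration.ofSeconds(allowedSeconds);
        }

        public static void main(String[] args) {
            Duration window = Duration.ofDays(28);
            for (double target : new double[] {99.9, 99.99, 99.999}) {
                System.out.printf("%.3f%% over 28 days -> %d seconds of downtime allowed%n",
                        target, allowedDowntime(target, window).getSeconds());
            }
            // 99.999% over 28 days works out to roughly 24 seconds,
            // which is why it demands millisecond-scale automated self-healing.
        }
    }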
https://www.schneems.com/2019/11/07/why-does-my-apps-memory-usage-grow-asymptotically-over-time/
Why does my App's Memory Use Grow Over Time? (Page last updated November 2019, Added 2019-12-31, Author Richard Schneeman, Publisher schneems.com). Tips:
- Total process memory use goes up as the number of threads increases, and memory use for each thread depends on the largest request it will ever serve; so total process memory depends on thread count, thread lifecycle, and request patterns (see the rough estimate after this list).
- As your application executes over time, thread count growth and per-thread memory growth increase its memory use until it reaches a steady state that can handle the maximum load and the highest-memory requests.
- If you want your application to use less memory, you need to target one of: the number of threads, the largest possible request, or how you handle the distribution of incoming requests. Scaling out with further servers lets you reduce the thread count of individual servers, but at the cost of an overall memory increase.
- Reducing object allocation is the best path to reducing your overall memory needs, and tends to produce further performance benefits, but it is a lot of work. If following this tuning path, it's best to target the requests that require the most memory.
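As a rough back-of-the-envelope illustration of the thread-count and largest-request relationship described above (the baseline and per-request figures are made-up, and real processes also carry stack, metaspace and allocator overheads that this ignores):

    public class SteadyStateMemoryEstimate {

        // Rough upper bound on request-driven memory at steady state:
        // each thread's footprint grows towards the largest request it ever serves,
        // so the process trends towards baseline + (thread count) x (largest per-request memory).
        static long steadyStateBytes(int threadCount, long largestRequestBytes, long baselineBytes) {
            return baselineBytes + (long) threadCount * largestRequestBytes;
        }

        public static void main(String[] args) {
            long baseline = 256L * 1024 * 1024;       // hypothetical non-request memory (code, caches, ...)
            long largestRequest = 20L * 1024 * 1024;  // hypothetical worst-case allocation per request

            // Halving the thread count on one server roughly halves its request-driven memory,
            // but running two such servers doubles the baseline, so total memory can rise.
            System.out.printf("16 threads, 1 server : %d MB%n",
                    steadyStateBytes(16, largestRequest, baseline) / (1024 * 1024));
            System.out.printf(" 8 threads, 2 servers: %d MB%n",
                    2 * steadyStateBytes(8, largestRequest, baseline) / (1024 * 1024));
        }
    }

The comparison shows why scaling out lowers per-server memory while raising the total across all servers.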
Jack Shirazi