Java Performance Tuning
Tips April 2013
Back to newsletter 149 contents
Tuning WebSphere Application Server for enhanced performance (Page last updated April 2013, Added 2013-04-29, Author Agile Support Team, Publisher agileload). Tips:
- Performance targets usually include: Throughput (number of requests in a period that can be processed by the system); Response time (the time from initiating a request until the result is displayed); Maximum time for batches.
- Tuning: Run end-to-end full system performance tests, collecting logging and tracing and metrics, and compare the results to the targets. If performance meets the targets then ensure that you have planned for future growth. If not, locate the bottleneck and reduce the impact from it (improve hardware, reduce resource contention, improve algorithms), then start again.
- Sharing requests across many instances of a resource is known as Workload management, which is important for scalability in serving more concurrent requests.
- Sending requests to alternate instances is called Load balancing; routing subsequent requests for a particular session to the same instance is called affinity.
- The "balancing" in load balancing can be done with different algorithms. Many so-called load balancing algorithms are not "balancing" algorithms, but simple distribution algorithms (i.e. requests are not actually balanced across available resources, but instead distributed in a way that ignores how heavily the resources are currently being used, for example randomly).
- Common load balancing algorithms (ignoring affinity) include: simple round-robin (each request goes to the next resource in a list); weighted round-robin (higher weighted resources get requests more frequently than lower weighted resources, resources of the same weight have requests distributed evenly); random distribution; utilisation-balanced (requests are sent to the resource which currently has the lowest utilisation).
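The weighted round-robin scheme above can be sketched in a few lines. This is a minimal illustration (class and method names are my own, not from any particular load balancer): each server appears in the cycle once per unit of weight, so a weight-2 server receives twice as many requests as a weight-1 server.

```java
import java.util.ArrayList;
import java.util.List;

public class WeightedRoundRobin {
    private final List<String> cycle = new ArrayList<>();
    private int next = 0;

    // A server with weight w occupies w slots in the cycle.
    public void add(String server, int weight) {
        for (int i = 0; i < weight; i++) cycle.add(server);
    }

    // Hand out the next slot, wrapping around at the end of the cycle.
    public String choose() {
        String server = cycle.get(next);
        next = (next + 1) % cycle.size();
        return server;
    }
}
```

A smoother interleaving (so a weight-2 server is not hit twice in a row) is possible, but this simple expansion already gives the correct long-run proportions.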
- High availability or Resiliency is the ability of a system to tolerate failures and still remain operational. If your system purports to be highly available, you should define the expected availability level, e.g. system availability of 99.9% represents at most 8 and 3/4 hours of outage in a single year. Calculate the business loss of the downtime and make sure that the business case justifies the total cost of making the system available to the required standard.
Performance Testing Java Applications (Page last updated March 2013, Added 2013-04-29, Author Martin Thompson, Publisher skillsmatter). Tips:
- Performance is about throughput & latency/response time. You need to specify targets for these.
- Focusing on outliers can drive improvements across much of the system.
- As you increase load, response time increases until it becomes unusable.
- Scaling economically: each node you add should allow the same N additional requests to be serviced with the same response times.
- Set a transaction budget: the response time for each request, broken down into each major component.
- An ideal system has throughput increase linearly as load increases, until it reaches a maximum and maintains that maximum. Most systems have a stress point where throughput starts to actually decrease with further load (essentially the load has become a denial of service attack); you can design your system so that throughput stays constant regardless of load by using "backpressure" - preventing any further requests entering the system until capability is available to handle the request, thereby preventing overload of the system. Response times will increase indefinitely, but there is nothing you can do about that without adding more system capability.
- Keep your queues bounded.
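The backpressure and bounded-queue advice above can be sketched with a `java.util.concurrent` bounded queue. This is an illustrative sketch (the class and capacity are assumptions, not from the talk): `offer()` with a timeout rejects new work when the system is saturated, instead of letting an unbounded backlog build up.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BackpressureQueue {
    private final BlockingQueue<Runnable> queue;

    public BackpressureQueue(int capacity) {
        // Bounded queue: the capacity is the backpressure limit.
        queue = new ArrayBlockingQueue<>(capacity);
    }

    /** Returns false (rejecting the request) if no capacity frees up in time. */
    public boolean submit(Runnable request) throws InterruptedException {
        return queue.offer(request, 10, TimeUnit.MILLISECONDS);
    }

    /** Worker threads pull accepted requests from here. */
    public Runnable take() throws InterruptedException {
        return queue.take();
    }
}
```

The caller sees the rejection immediately and can shed load or retry, rather than queueing work the system cannot keep up with.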
- Keep your business code decoupled from the frameworks used to tie the system together and provide the underlying services.
- Walking through memory linearly will perform better than randomly accessing memory locations (because the underlying system is built that way).
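To make the memory-access point concrete, here is a sketch contrasting two traversals of the same 2D array (both methods compute the same sum; the names are illustrative). The row-major walk touches consecutive addresses and is cache- and prefetcher-friendly; the column-major walk jumps a full row length between accesses, and is typically much slower on large arrays.

```java
public class Traversal {
    public static long sumRowMajor(int[][] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[i].length; j++)
                sum += a[i][j];          // consecutive memory addresses
        return sum;
    }

    public static long sumColumnMajor(int[][] a) {
        long sum = 0;
        for (int j = 0; j < a[0].length; j++)
            for (int i = 0; i < a.length; i++)
                sum += a[i][j];          // stride of one full row per access
        return sum;
    }
}
```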
- Write your performance tests and targets before the code, this means you are always aware of where the performance is. Performance tests that fail must fail the build.
- Concurrency tests should look for consistency of results (invariance checks) when running multi-threaded.
- Binary formats are much faster than formats like XML for parsing, marshalling and unmarshalling.
- Many common parsing libraries are not very efficient.
- Measure network response times separately from server response times otherwise you can't tell where the time was spent.
- Use a (logarithmic) histogram to display latency results to get a good idea of the variation.
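A minimal sketch of such a logarithmic histogram (a production choice would be a library like HdrHistogram; this hand-rolled version is just to show the idea): bucket i counts samples whose latency falls in the range [2^i, 2^(i+1)), so the buckets cover many orders of magnitude while keeping outliers visible.

```java
public class LogHistogram {
    private final long[] buckets = new long[64];

    public void record(long latencyMicros) {
        // Bucket index = position of the highest set bit, i.e. floor(log2(latency)).
        int bucket = 63 - Long.numberOfLeadingZeros(Math.max(latencyMicros, 1));
        buckets[bucket]++;
    }

    public long countAt(int bucket) {
        return buckets[bucket];
    }
}
```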
- System.currentTimeMillis() gets the clock time, including any time fixes such as NTP corrections - which means you could get time shifts included in a measurement.
- Use roundtrip measurements so that you measure time on the same machine - and ideally on the same core.
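A sketch of the round-trip measurement pattern: `System.nanoTime()` is monotonic and unaffected by NTP clock corrections (unlike `System.currentTimeMillis()`), and both timestamps are taken on the same machine. `doRequest` here is a placeholder for whatever call is under test.

```java
public class RoundTrip {
    /** Time one round trip in nanoseconds, start and end measured on the same machine. */
    public static long measureNanos(Runnable doRequest) {
        long start = System.nanoTime();   // monotonic, immune to clock adjustments
        doRequest.run();
        return System.nanoTime() - start;
    }
}
```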
- Tests shouldn't be running the same dataset all the time, as that produces unrealistic cache performance.
- Startup code increases the system's recovery time - so it should also be included in performance targets and tests.
- Simple clean code is usually faster and can be optimised more simply (so more often) by the compiler.
Tuning Garbage Collection for Mission-Critical Java Applications (Page last updated March 2013, Added 2013-04-29, Author Dr. Andreas Müller, Publisher mgm technology partners). Tips:
- First step tuning is to configure memory limits (and initial size equal to max to reduce step-by-step increases during startup).
- The primary garbage collection tuning goal (for generational GC) is to collect as much garbage as possible in the young generation and make old generation pauses as infrequent and short as possible.
- AdaptiveSizePolicy is on by default for parallel GC; it automatically sizes the internal heap spaces. It delivers reasonable behavior for many scenarios but it may not be optimal for your server. To switch it off and explicitly set your young gen sizes to fixed values use -XX:-UseAdaptiveSizePolicy
- The concurrent CMS collector attempts to minimize old generation pause times, and is activated with -XX:+UseConcMarkSweepGC.
- The option -XX:CMSInitiatingOccupancyFraction=80 would start the CMS GC when the old gen was 80% occupied. Starting CMS when the old gen is too occupied or too fragmented can result in it not being able to complete the GC in time, in which case it will fail over to a stop-the-world long pause GC, which is usually undesirable. Starting it too early causes it to run too often, which uses up more CPU than necessary, and possibly fragments the heap earlier.
- The G1 collector avoids fragmentation. It does not aim at the shortest possible pauses, but instead at controlling pause times by placing an upper limit on their duration which is maintained in a best-effort approach.
- For the G1 collector, setting generation sizes is in conflict with setting pause time targets and will prevent the G1 collector from doing what it was designed for. With G1, set the overall memory size using "-Xmx" (and "-Xms") and (optionally) a GC pause time target (the default target is 200 milliseconds); leave all the rest to the G1 collector.
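Putting that advice together, a G1 command line might look like the following sketch (the heap size, pause target, and application jar name are placeholders, not recommendations):

```shell
# Fixed overall heap, G1 enabled, optional pause target; no generation sizing flags.
java -Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -jar app.jar
```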
Java concurrency: the hidden thread deadlocks (Page last updated January 2013, Added 2013-04-29, Author Pierre-Hugues Charbonneau, Publisher javaeesupportpatterns). Tips:
- Deadlocks essentially involve two threads waiting forever for each other; often the result of synchronized objects or ReentrantLock lock-ordering problems.
- The JVM identifies most deadlocks (reported in thread dumps or by explicitly querying the platform MBeans).
- Where a deadlock is caused by reversing the ordering of acquiring locks from both a synchronized object and ReentrantLock read lock, the JVM does not (yet) detect this.
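Querying the platform MBeans for deadlocks at runtime can be sketched as below (the class name is my own; the `ThreadMXBean` API is standard). `findDeadlockedThreads()` covers both monitor (synchronized) deadlocks and java.util.concurrent ownable-synchronizer deadlocks, though as noted above some mixed monitor/ReentrantLock cases still escape detection.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockProbe {
    /** Names of deadlocked threads, or an empty array if none are detected. */
    public static String[] findDeadlocks() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads();   // null when no deadlock exists
        if (ids == null) return new String[0];
        ThreadInfo[] infos = mx.getThreadInfo(ids);
        String[] names = new String[infos.length];
        for (int i = 0; i < infos.length; i++)
            names[i] = infos[i].getThreadName();
        return names;
    }
}
```

A monitoring thread can call this periodically and log or alert when the array is non-empty.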
Top 8 Application Performance Landmines (Page last updated April 2013, Added 2013-04-29, Author Andreas Grabner, Publisher compuware). Tips:
- Optimize by tuning the code, reducing SQL overhead, implementing application caching.
- Bloated front-ends can cause performance issues (especially for downloaded browser based front-ends).
- Use caching, compression, CDNs, and minimize images, functions, and features you add to keep your front-end from bloating.
- Third-party services are here to stay. You need to monitor their impact on your application and ensure you have service-level agreements - which tend to expose what they are capable of.
- Ask your third-party service provider: Have you load tested your systems for when multiple large customers experience peak traffic simultaneously? What is the escalation path when we discover a performance issue that is affecting our customers? How well did your system perform during the 8 busiest hours over the last 12 months?
- You need to be prepared for the scenario where a third-party service or CDN suffers a severe outage or begins to seriously degrade your site performance (e.g. remove their tags and failover to reduced provision).
- The most obvious hints on whether you have a network or application problem can be seen by checking for the Network and Server time outliers compared to the values of the baseline traffic.
- Database access causing bad performance is the problem seen the most within applications. Bad patterns include: fetching too much data; fetching data inefficiently; individual statements that take too long to execute; misconfigured connection pools; application code that holds on to connections for too long (blocking other requests).
- Memory and Garbage Collection problems are very prominent issues in any enterprise application. Memory problems include: high memory usage; wrong cache usage strategies; class loading; large classes; native memory; memory leaks. Garbage collection typically needs some tuning.
Why average latency is a terrible way to track website performance - and how to fix it (Page last updated February 2013, Added 2013-04-29, Author Mike Volodarsky, Publisher mvolo). Tips:
- Averages are a bad way to track performance of any aspect where outliers are important. For example if 20% of displays of a webpage take 10 seconds, but the average is 2 seconds, it's highly likely that close to 20% of viewers are abandoning the site despite the apparently good 2 second average display time. Similarly a single terrible request can skew the average to make it look unacceptable when all requests except that terrible one might be fine.
- Latencies can vary according to time of day, or any other aspect that causes regular changes to the load - it's more useful to know the percentage of latencies that are unacceptable rather than the average latency.
- You often care less about the latency and more about the number of users seeing bad performance - the latency is a proxy for that, but if it's not good enough (e.g. because it fails to identify the outliers), use a better proxy.
- The industry standard Apdex index, calculated as ApdexT = (#Satisfied requests + #Tolerating requests / 2) / (#Total requests), where the "tolerated" boundary is 4 times the satisfactory boundary e.g. if up to 2 seconds is satisfactory then 2 to 8 seconds is tolerated; if 80% of requests are satisfactory but the remaining 20% are unacceptable (over 8 seconds) then Apdex2s is 80% (compared to 100% for everything satisfactory). The Apdex index does a good job of showing when even a small percentage of users have bad performance.
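The Apdex formula above translates directly into code. A minimal sketch (the class name is my own): satisfied requests count fully, tolerating requests count half, and the result is a score between 0 and 1.

```java
public class Apdex {
    /** ApdexT = (satisfied + tolerating/2) / total; frustrated requests count zero. */
    public static double score(long satisfied, long tolerating, long total) {
        if (total == 0) return 1.0;   // no traffic: treat as fully satisfactory
        return (satisfied + tolerating / 2.0) / total;
    }
}
```

For the example in the tip: 80 satisfied, 0 tolerating, 20 frustrated out of 100 gives a score of 0.80.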
- A "Satisfaction" score bands each type of request into Fast/Sluggish/Too Slow/Failed and combines requests to show the overall percentage in each category.
- Be careful when using averages to make assumptions about how things work, because averages can hide critical outliers.
Last Updated: 2019-08-29
Copyright © 2000-2019 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.