Java Performance Tuning
Tips April 2008
Back to newsletter 089 contents
Interviewing Joe Walker (Page last updated April 2008, Added 2008-04-29, Author Kirk Pepperdine, Publisher Fasterj.com). Tips:
- To simulate a thousand users, you need the memory and resources to fire up a thousand browsers, which is a very inefficient way of doing things. But if you emulate the browsers instead, you can easily miss their actual performance bugs. For 100% realistic testing you need the real browser bugs, because the workarounds for them can affect performance.
- Safari has a memory leak on XHR (XMLHttpRequest), which of course AJAX is built on. You can work around that leak by using iframes. DWR automatically handles this and similar performance bugs in other browsers.
Oh, go ahead -- prematurely optimize (Page last updated February 2008, Added 2008-04-29, Author Scott Oaks, Publisher java.net). Tips:
- Too many developers think that they don't have to care about the performance of their code at all, or at least not until the code is completed. This is just wrong.
- Network calls are a LARGE inefficiency. If you're designing a multi-tier app that uses the network a lot, you want to pay attention to the number of network calls you will make and the data involved in them.
- You can safely ignore the small inefficiencies 97% of the time. That means that you should pay attention to small inefficiencies 1 out of every 33 lines of code you write.
- You've been told that "uncontended access to a synchronized block is almost free", but that's not quite true -- crossing a synchronization boundary means that the JVM must flush all instance variables presently held in registers to main memory.
- A synchronization boundary prevents the JVM from performing certain optimizations, because it limits how the JVM can re-order the code.
- Caching the size of a collection for the loop test is usually more efficient, e.g. for (int i = 0, j = v.size(); i < j; i++).
- It's great when you can find a simple CPU-intensive set of code to optimize, but it's even better if developers pay some attention to writing good, performant code at the outset and you don't have to track down hundreds of small things to fix.
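The cached-bound loop idiom from the tips above can be sketched as a minimal runnable example (the class and method names here are illustrative, not from the article):

```java
import java.util.ArrayList;
import java.util.List;

public class LoopBound {
    // Sums the list with the size cached in the loop initializer,
    // avoiding a size() call on every iteration.
    static int sumCached(List<Integer> v) {
        int sum = 0;
        for (int i = 0, j = v.size(); i < j; i++) {
            sum += v.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> v = new ArrayList<>();
        for (int i = 1; i <= 5; i++) v.add(i);
        System.out.println(sumCached(v)); // prints 15
    }
}
```

Note that the JIT can often hoist the size() call itself, so as the article says, this is a small inefficiency to fix in passing rather than something to hunt for.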
So You Want to Write a Micro-Benchmark (Page last updated April 2008, Added 2008-04-29, Author John Rose, Publisher Sun). Tips:
- Micro-benchmarks can measure only a limited range of JVM performance characteristics.
- Always include a warmup phase in a micro-benchmark which runs your test kernel all the way through, enough to trigger all initializations and compilations before timing.
- Run micro-benchmarks with -XX:+PrintCompilation, -verbose:gc, etc., so you can verify that the compiler and other parts of the JVM are not doing unexpected work.
- Be aware of the difference between -client and -server, and OSR and regular compilations (-XX:+PrintCompilation can help)
- Be aware of initialization effects in micro-benchmarks.
- Do not print for the first time during your micro-benchmark timing phase, since printing loads and initializes classes.
- Do not load new classes outside of the micro-benchmark warmup phase (or final reporting phase), unless you are testing class loading specifically (and in that case load only the test classes).
- Do not take any code path for the first time in the timing phase of your micro-benchmark.
- Inspect the compiled code before forming theories about what makes something faster or slower (using various -XX:+Print... options, see http://wikis.sun.com/display/HotSpotInternals/PrintAssembly).
- Run your benchmark on a quiet machine, and run it several times, discarding outliers.
- For benchmarks, use -Xbatch to serialize the compiler with the application, and consider setting -XX:CICompilerCount=1 to prevent the compiler from running in parallel with itself.
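The warmup discipline described above can be sketched roughly as follows. This is an illustrative skeleton, not the article's code; the kernel, iteration counts, and class name are all assumptions, and a real harness would also run under flags such as -XX:+PrintCompilation to confirm compilation has settled:

```java
public class MicroBench {
    // The kernel under test; a trivial stand-in for real work.
    static long kernel(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        int n = 10_000;
        long sink = 0;

        // Warmup: run the kernel all the way through enough times to
        // trigger class loading, initialization and JIT compilation
        // before any timing starts.
        for (int i = 0; i < 20_000; i++) sink += kernel(n);

        // Timed phase: no printing, no new classes, no code path taken
        // for the first time.
        long start = System.nanoTime();
        for (int i = 0; i < 1_000; i++) sink += kernel(n);
        long elapsed = System.nanoTime() - start;

        // Print only after the timing phase, since printing for the
        // first time loads and initializes classes.
        System.out.println("avg ns/run: " + (elapsed / 1_000) + " (sink=" + sink + ")");
    }
}
```

The `sink` accumulator is printed so the JIT cannot eliminate the kernel as dead code; several runs on a quiet machine, discarding outliers, are still needed as the tips note.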
If you can't visit the floor (Page last updated March 2008, Added 2008-04-29, Author Kirk Pepperdine, Publisher Kirk Pepperdine). Tips:
- It is always worth visiting the people that are using the system - watch how they work and interact with the system to see the pain points, jitters and pauses.
- Try to determine what the most frequent work activity is and what work activity takes the longest.
- Questions to ask the application users: 1. Describe any activities that you perform where the system is always slow to respond
- Questions to ask the application users: 2. Describe any activities that you perform where the system is sometimes slow to respond
- Questions to ask the application users: 3. Are there any times of the day when you find the system is slower
- Questions to ask the application users: 4. Are there any days of the week when you find that the system is slower
- Questions to ask the application users: 5. Are there any activities that you avoid or have trouble completing
- Questions to ask the application users: 6. What action do you take most often when you find things slowing down
Performance Engineering - a Practitioner's Approach to Performance Testing (Page last updated March 2008, Added 2008-04-29, Author Alok Mahajan, Nikhil Sharma, Publisher TheServerSide). Tips:
- Non-functional requirements should include the kinds of usage and load the application will be subjected to, and the expected response times.
- The performance specifications of any system should include items for Response Time, Throughput and Resource Utilization.
- The first step of any performance testing project should be to determine the required SLAs.
- The workload specification should detail throughput rates, load figures, the list of transactions, types of transactions, response times, projected growth, load patterns, load distributions, transaction distribution, peak windows, etc.
- Performance testing should use a representative set of transactions that simulate the expected or actual loads - including batches that might be significant.
- Performance testing should use a production-like environment - as close to the production hardware as possible.
- Monitoring is an inherent aspect of Performance Testing. All the resource utilizations need to be monitored throughout the duration of the test.
- Performance testing should have at least an equal amount of data in the test environment as the production data, and a similar distribution of data too.
- Performance tests should include stress tests (equivalent to the absolute peak loads expected throughout the year); long tests to determine the application does not run out of resources over time; backlog tests to determine the effects of components being restarted separately; and tests representative of expected throughputs.
- Key performance test report graphs include: CPU utilization vs. number of users; throughput vs. number of users; response time vs. throughput.
- Best practices for performance testing include: automating the tests so that they are reproducible with minimal additional effort; maintaining correct state and data for the database; validating Little's law; designing scenarios to achieve the right mix of transactions; and avoiding tuning if you don't have to.
- Little's law states that the total number of users in the system is equal to the product of throughput and response time. While calibrating your load test runs, you should always cross-check the test results (throughput, response time, user-load simulated) to identify whether the load generator nodes themselves are becoming the bottleneck in the system.
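The Little's law cross-check above is simple arithmetic; a minimal sketch (the figures are invented for illustration):

```java
public class LittlesLaw {
    // Little's law: N = X * R, where N is the number of concurrent users
    // in the system, X is throughput (requests/sec) and R is response
    // time (sec).
    static double concurrentUsers(double throughputPerSec, double responseTimeSec) {
        return throughputPerSec * responseTimeSec;
    }

    public static void main(String[] args) {
        // E.g. a measured 200 req/s at 0.5 s response time implies
        // roughly 100 users actually active in the system.
        double n = concurrentUsers(200.0, 0.5);
        System.out.println("users in system: " + n); // prints 100.0
        // If the load test claims to be simulating far more concurrent
        // users than this, the generator nodes (not the application)
        // may be the bottleneck.
    }
}
```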
Trading Consistency for Scalability in Distributed Architectures (Page last updated March 2008, Added 2008-04-29, Author Floyd Marinescu, Charles Humble, Publisher InfoQ). Tips:
- Avoid transactions that span physical resources, because of the dependencies they create across multiple components.
- Client failures can tie up database resources for longer than acceptable if the transaction spans the client-to-database interaction.
- Distributed transactions make an application dependent upon multiple databases, bringing down the effective availability of the client.
- Design for a lack of transactions, building in failure modes that allow the client to succeed even in the event of database availability issues.
- Declarative transactions, such as those in EJBs, are a sledgehammer approach to simplifying transaction management, as they assume that every database operation behind the bean is of equal importance.
- If your application needs to reach hundreds of transactions per second, you're going to find that distributed transactions won't cut it.
- To achieve scalability in large systems, you have to give up on ACID and instead use BASE: Basically Available, Soft state, Eventually consistent.
- Relax the need for the data to be consistent at the end of each client request and you have opened the window to eliminate distributed transactions and can use other mechanisms to reach a consistent state.
- Replace transactional processing with in-memory processing and a separate recovery mechanism which recovers state if the in-memory processing fails - this leads to some latency before globally consistent state is achieved, but provides higher scalability
- Scaling in the large is NOT adding resources to an architecture designed to scale in the small - you have to break away from conventional patterns, such as ACID and distributed transactions.
- Of the three properties of shared-data systems - data consistency, system availability and tolerance to network partitions - only two can be achieved at any given time.
- For large distributed systems (Amazon, eBay), network partitions are a given. This means an architect dealing with very large distributed systems has to decide whether to relax requirements for consistency or availability.
- If you choose to relax consistency, then the developers need to decide how to handle a situation where a write to the system is not immediately reflected in a corresponding read.
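The "in-memory processing plus separate recovery mechanism" pattern mentioned above might be sketched like this. All names here are illustrative assumptions; a real system would drain the journal asynchronously to durable storage and replay it on failure:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class InMemoryStore {
    // Primary state lives in memory; a journal queue feeds a separate
    // recovery/persistence mechanism, so no distributed transaction
    // spans the client request.
    private final Map<String, String> state = new ConcurrentHashMap<>();
    private final Queue<String[]> journal = new ConcurrentLinkedQueue<>();

    // The client request succeeds as soon as memory is updated;
    // globally consistent, durable state is reached later, when the
    // journal drains - eventual consistency traded for scalability.
    public void put(String key, String value) {
        state.put(key, value);
        journal.add(new String[] {key, value});
    }

    public String get(String key) {
        return state.get(key);
    }

    // Writes not yet made durable; a reader elsewhere in the system
    // may not see them until this reaches zero.
    public int pendingWrites() {
        return journal.size();
    }
}
```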
Wait-Time Analysis Method (Page last updated April 2007, Added 2008-04-29, Author Don Bergal, Publisher JavaDeveloperJournal). Tips:
- It's important not just to measure overall latency but to break it down into sufficient detail so that you can identify the main causes of latency and fix those.
- You should look at the end-user's view of latency; intermediate metrics can easily give the wrong view of the latency.
- You need to measure the time taken for every step of every user-initiated request, from start to finish.
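Breaking latency down per step, as the tips above describe, could be done with a small timer like this sketch (the class, step names, and sleeps standing in for real work are all illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StepTimer {
    private final Map<String, Long> stepNanos = new LinkedHashMap<>();
    private long last = System.nanoTime();

    // Records the time elapsed since the previous mark under this
    // step's name, so the total latency is broken into named pieces.
    public void mark(String step) {
        long now = System.nanoTime();
        stepNanos.put(step, now - last);
        last = now;
    }

    public Map<String, Long> breakdown() {
        return stepNanos;
    }

    public static void main(String[] args) throws InterruptedException {
        StepTimer t = new StepTimer();
        Thread.sleep(5);            // stand-in for "parse request"
        t.mark("parse");
        Thread.sleep(10);           // stand-in for "database call"
        t.mark("db");
        // The breakdown shows where the end-user's wait actually went,
        // rather than one opaque end-to-end number.
        t.breakdown().forEach((step, ns) ->
                System.out.println(step + ": " + ns / 1_000_000 + " ms"));
    }
}
```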
Last Updated: 2021-08-29
Copyright © 2000-2021 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.