Java Performance Tuning
Tips September 2005
Back to newsletter 058 contents
Analyzing Performance (Page last updated July 2005, Added 2005-09-30, Author Michael Kelly, Publisher informIT). Tips:
- Take the time to determine where the real problems may be hiding rather than focusing too much on the problems that are easy to identify.
- Understand the usage of the application and the business drivers within which it operates in order to understand how much performance testing is required for the application.
- The shape of a typical response-time degradation curve can be broken down into four regions: the single-user region (low load); the performance plateau (load below maximum capacity); the stress region (load exceeding system capacity, where the application degrades gracefully); and the knee in performance (the point where performance degrades rapidly).
- Use the response-time degradation curve to determine how far away from targets the system is.
- Scott Barber's user community modeling language (UCML) is a technique to visually depict complex workloads and performance scenarios.
- Create a series of performance tests to measure application performance in terms of a specific user type or an aggregate of all user types, overall response given a specific load, effects of load on any given type of user, etc.
- Instead of focusing on what's wrong with the application performance, focus on what's right to better determine where problems exist.
- Focus on the performance plateau and stress region (where the application still performs acceptably) instead of the knee in performance, to better understand the larger picture of how the application operates.
- The performance plateau represents the best performance you can expect without further performance tuning.
- The stress region indicates the maximum recommended user load, at which the application "degrades gracefully".
- "Good enough" testing is the process of developing a sufficient assessment of the current quality of the application, and then making both wise and timely decisions concerning the cost and value of further improvement.
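The four regions above can be made concrete with a small classification sketch. This is a hypothetical illustration, not part of the cited article: the sample load/response-time pairs, the 10% plateau tolerance, and the 1000 ms target are all invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical measurements: concurrent users -> average response time (ms).
public class DegradationCurve {
    // Classify a measurement against the single-user baseline and the
    // response-time target (both thresholds are illustrative assumptions).
    static String region(double baselineMs, double targetMs, double ms) {
        if (ms <= baselineMs * 1.1) return "plateau";  // near single-user performance
        if (ms <= targetMs)         return "stress";   // degrading gracefully
        return "knee";                                 // degrading rapidly
    }

    public static void main(String[] args) {
        Map<Integer, Double> samples = new LinkedHashMap<>();
        samples.put(1, 120.0);     // single-user baseline
        samples.put(50, 130.0);
        samples.put(200, 400.0);
        samples.put(400, 2500.0);
        double baseline = samples.get(1);
        double target = 1000.0;    // assumed response-time target (ms)
        samples.forEach((users, ms) ->
            System.out.println(users + " users: " + ms + " ms -> "
                + region(baseline, target, ms)));
    }
}
```

Plotting such samples against the classification is one way to see how far the system sits from its targets, per the tip above.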
Urban performance legends, revisited (Page last updated September 2005, Added 2005-09-30, Author Brian Goetz, Publisher IBM). Tips:
- [The article debunks the myth that C/C++ performance is necessarily better than Java's].
- The common code path for new Object() in HotSpot 1.4.2 and later is approximately 10 machine instructions, whereas the best performing malloc implementations in C require on average between 60 and 100 instructions per call.
- Many real-world C and C++ programs, such as Perl and Ghostscript, spend 20 to 30 percent of their total execution time in malloc and free, far more than the allocation and garbage collection overhead of a healthy Java application.
- The malloc/free approach deals with blocks of memory one at a time, whereas the garbage collection approach tends to deal with memory management in large batches, yielding more opportunities for optimization.
- Replacing malloc with a conservative (Boehm-Demers-Weiser) garbage collector in a number of common C++ applications resulted in many of those programs exhibiting speedups.
- Object pooling is now a serious performance loss for all but the most heavyweight of objects, and even then it is tricky to get right without introducing concurrency bottlenecks.
- With a copying collector (as in the JVM), for most objects the direct garbage collection cost is zero because a copying collector does not need to visit or copy dead objects, only live ones.
- Stack-based allocation is extremely risky in most C/C++ applications, but can be done safely and automatically in Java by the JVM using escape analysis. Escape analysis is being implemented for the 1.6 JVM.
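The object-pooling tip is easy to explore for yourself. The sketch below is illustrative only: a wall-clock loop like this is not a rigorous benchmark (use a harness such as JMH for real measurements), and the `Buffer` class and iteration count are invented. It does show the extra bookkeeping a pool adds to every use of an object that plain allocation avoids.

```java
import java.util.ArrayDeque;

// Illustrative (not rigorous) comparison of plain allocation vs. a simple
// object pool. Real measurements need a proper microbenchmark harness.
public class PoolVsAlloc {
    static final class Buffer { final byte[] data = new byte[64]; }

    static long allocate(int n) {
        long t = System.nanoTime();
        for (int i = 0; i < n; i++) {
            Buffer b = new Buffer();   // short-lived: cheap for a copying collector
            if (b.data.length == 0) throw new AssertionError();
        }
        return System.nanoTime() - t;
    }

    static long pooled(int n) {
        ArrayDeque<Buffer> pool = new ArrayDeque<>();
        long t = System.nanoTime();
        for (int i = 0; i < n; i++) {
            Buffer b = pool.isEmpty() ? new Buffer() : pool.pop();
            if (b.data.length == 0) throw new AssertionError();
            pool.push(b);              // return to pool: bookkeeping on every use
        }
        return System.nanoTime() - t;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println("alloc ns: " + allocate(n));
        System.out.println("pool  ns: " + pooled(n));
    }
}
```

In a multi-threaded version the pool would also need synchronization, which is where the concurrency bottlenecks mentioned above come from.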
Performance Testing (Page last updated September 2005, Added 2005-09-30, Author Matt Maccaux, Publisher BEA). Tips:
- You should define a set of performance use cases based on historical data or on approximations based on anticipated usage.
- Benchmark tests are great for gathering repeatable results in a relatively short period of time.
- The best way to benchmark is to change one and only one parameter between tests.
- Early in the development cycle, benchmark tests can be used to determine performance problems.
- Later in the development cycle, more complex tests (capacity-planning, soak, and peak-rest tests) can be used to determine how the system will perform under different load patterns; these are designed to test "real-world" scenarios by exercising the reliability, robustness, and scalability of the application.
- It is always a good idea to run a series of baseline tests first to establish a known, controlled environment to compare your changes with later.
- Benchmark tests should be reproducible; otherwise they are not useful.
- The more users hitting the server, the more load will be generated; the shorter the think-time between requests from each user, the greater the load will be on the server.
- Throughput tends to increase at a constant rate and then at some point level off (system has reached saturation).
- When the system reaches the point of saturation, the throughput of the server plateaus, and you have reached the maximum for the system given those conditions. If server load continues to grow, the response time of the system also grows as the throughput plateaus.
- If lots of simulated users are doing approximately the same thing at the same time during the test, you get a wave-like utilization of resources which is normally unrealistic.
- Capacity-planning tests determine how far a given application can scale under a specific set of circumstances, e.g. how many servers are needed to support 8,000 concurrent users with a response time of 5 seconds or less?
- Total numbers of users don't mean much - What you really need to know is how many of those users will be hitting the server concurrently.
- Soak tests are long-duration tests that test the overall robustness of the system, showing any performance degradations over time via memory leaks, increased garbage collection, or other problems in the system.
- Peak-rest tests determine how well the system recovers from a high load, goes back to near idle, and then goes back up to peak load and back down again.
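The think-time and wave-utilization tips above can be sketched as a minimal load driver. This is a hypothetical illustration: `simulateRequest()` stands in for a real client call, and the user count, duration, and 10-50 ms randomized think time are invented parameters. Randomizing the think time is what breaks up the unrealistic "wave" of users all hitting the server in lockstep.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Minimal load-driver sketch: N virtual users issue requests with randomized
// think time, then throughput is reported.
public class LoadDriver {
    static final AtomicLong completed = new AtomicLong();

    static void simulateRequest() throws InterruptedException {
        Thread.sleep(5);                   // placeholder for a server round-trip
        completed.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        int users = 20, seconds = 2;
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long end = System.nanoTime() + TimeUnit.SECONDS.toNanos(seconds);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    while (System.nanoTime() < end) {
                        simulateRequest();
                        // randomized think time (10-50 ms) avoids lockstep waves
                        Thread.sleep(ThreadLocalRandom.current().nextLong(10, 50));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(seconds + 5, TimeUnit.SECONDS);
        System.out.println("throughput: "
            + completed.get() / (double) seconds + " req/s");
    }
}
```

Raising `users` or shrinking the think-time range increases the offered load, matching the tip that more users and shorter think times both put more load on the server.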
Using IBM WebSphere Application Server Performance Tools (Page last updated June 2005, Added 2005-09-30, Author Karunakar Bojjireddy, Joel Meyer, Wenjian Qiao, Srini Rangaswamy & Hari Shankar, Publisher Certification Magazine). Tips:
- The Performance Monitoring Infrastructure (PMI) is compatible with and extends the Java 2 Platform, Enterprise Edition (J2EE) specification.
- Performance analysis of the infrastructure may use tools such as vmstat and top to gather system information. This information may be analyzed to detect bottlenecks that might exist at the OS level, such as insufficient memory, slow processor, etc.
- Monitoring overhead is low for most PMI data except for JVMPI metrics and EJB per-method metrics.
- Metrics to measure: Average response time (including servlet and enterprise beans).
- Metrics to measure: Number of requests and transactions
- Metrics to measure: Number of live (concurrent) HTTP sessions
- Metrics to measure: Web server and application server thread pools, and database connection pools
- Metrics to measure: JVM memory and garbage collection frequency
- Metrics to measure: CPU, I/O and system paging
- Detect resource constraints and performance bottlenecks in the system using request metrics, and take corrective measures to optimize these.
- A thread dump can pinpoint problem areas to a significant degree of granularity (e.g., which SQL statement is causing a particular problem), but thread dumps are not recommended for production scenarios.
- Consider the scenario where the connection pool size is set too low: the application waits to get a database connection, reducing application throughput.
- Setting the thread pool size too low throttles CPU usage and reduces throughput.
- If thread pool count is set too high, CPU cycles are wasted on context switching among the threads.
- If the concurrent thread pool count has hit its upper limit and the CPU usage is low, you can infer that it might boost performance if the thread pool size is increased.
- Collect response time metrics to find what servlet in your application is slow.
- A cache that is too small may result in frequent cache evictions, causing work to be recomputed needlessly.
- A cache that's too large will result in storage that never gets used.
- If application throughput has maxed out while CPU is still available, this often indicates an I/O bottleneck.
- Improper JVM heap size settings are the cause of many performance problems. If the JVM is spending too much time in garbage collection (GC), the JVM heap size should be increased.
- If the time spent in garbage collection is more than 10 percent, increase the JVM Heap Size.
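The 10% rule above can be checked from inside the JVM via the standard `java.lang.management` API. This is a sketch, not WebSphere's PMI tooling: it compares cumulative collection time across all garbage collectors to JVM uptime, and the 10% threshold is the one suggested in the tip.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch of the 10% rule: compare cumulative GC time to JVM uptime and
// flag when the collector is consuming too large a share of the run.
public class GcShare {
    static double percentInGc() {
        long gcMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();   // -1 if unsupported by this collector
            if (t > 0) gcMs += t;
        }
        long upMs = ManagementFactory.getRuntimeMXBean().getUptime();
        return upMs == 0 ? 0 : 100.0 * gcMs / upMs;
    }

    public static void main(String[] args) {
        double pct = percentInGc();
        System.out.printf("time in GC: %.2f%%%n", pct);
        if (pct > 10.0) System.out.println("consider increasing the heap (-Xmx)");
    }
}
```

In production you would sample this periodically rather than once, since the ratio is cumulative over the life of the JVM.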
Designing SOA Web Services for Performance (Page last updated August 2005, Added 2005-09-30, Author David Linthicum, Publisher WebServicesJournal). Tips:
- For a performance model, determine the information production rate and the information consumption rate of each service; include transformation and routing latencies; also determine the number of times a service is able to provide a response in a given unit of time (e.g. per second).
- The more processing you can place at the origin of the service, the better your SOA will perform.
- Service invocations that take a second or more to produce behavior, or information bound to behavior, will cause big problems when aligned with hundreds of other services doing the same thing.
- The use of too many fine-grained services may cause performance problems.
- Many BPEL engines are notoriously poor performers, and can become the bottleneck for the SOA.
- The more services you leverage, the more performance problems your SOA will have.
- Scalability should be a major consideration within the SOA design.
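The rate and latency figures the first tip asks for feed directly into back-of-the-envelope capacity arithmetic. The sketch below is a hypothetical illustration (the 40 ms service time, 10 ms routing/transformation overhead, and 500 req/s target are invented numbers), showing why per-invocation overhead and fine-grained service chains hurt at scale.

```java
// Back-of-the-envelope capacity model for an SOA service, using the
// per-service figures the article suggests collecting.
public class ServiceModel {
    // Invocations one service instance can answer per second, given its
    // average service time plus transformation/routing overhead (both ms).
    static double maxRatePerSecond(double serviceMs, double overheadMs) {
        return 1000.0 / (serviceMs + overheadMs);
    }

    // Instances needed to sustain a target aggregate request rate.
    static int instancesNeeded(double targetPerSecond, double ratePerInstance) {
        return (int) Math.ceil(targetPerSecond / ratePerInstance);
    }

    public static void main(String[] args) {
        double rate = maxRatePerSecond(40, 10);  // 1000/50 = 20 req/s per instance
        System.out.println("per-instance rate: " + rate + " req/s");
        System.out.println("instances for 500 req/s: " + instancesNeeded(500, rate));
    }
}
```

Note how the fixed per-invocation overhead dominates as services get more fine-grained: halving the service time to 20 ms only raises the rate to about 33 req/s because the 10 ms routing cost remains.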
Last Updated: 2018-02-27
Copyright © 2000-2018 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.