Java Performance Tuning
Tips February 2010
http://www.informit.com/articles/article.aspx?p=1398608
Design Patterns in Java: Proxy (Page last updated November 2009, Added 2010-02-25, Authors Steven John Metsker and William C. Wake, Publisher informit). Tips:
- A proxy object can take the responsibility that a client expects and forward requests appropriately to an underlying target object. This lets you intercept and control execution flow, providing many opportunities for measuring, logging, and optimizations.
- A classic example of the Proxy pattern relates to avoiding the expense of loading large images into memory until they are definitely needed - avoid loading images before they are needed, letting proxies for the images act as placeholders that load the required images on demand.
- Designs that use Proxy are sometimes brittle, because they rely on forwarding method calls to underlying objects. This forwarding may create a fragile, high-maintenance design.
- [Article provides an example of using the Proxy pattern to delay loading images until needed, for performance and memory optimization; a sketch of the idea appears after these tips.]
- Dynamic proxies let you wrap a java.lang.reflect.Proxy object around the interfaces of an arbitrary object at runtime. You can arrange for the proxy to intercept all the calls intended for the wrapped object. The proxy will usually pass these calls on to the wrapped object, but you can add code that executes before or after the intercepted calls.
- [Article provides an example of using a dynamic java.lang.reflect.Proxy to measure execution times of method calls and log those that take too long; a sketch along these lines also appears after these tips.]
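To make the lazy-loading idea concrete, here is a minimal sketch of an image proxy in Java. The Image interface and the HeavyImage/ImageProxy class names are illustrative assumptions rather than the article's own code; the point is only that the expensive load is deferred until the first draw() call.

// Hypothetical example: the proxy stands in for the real image and only
// triggers the expensive load when the image is actually drawn.
interface Image {
    void draw();
}

class HeavyImage implements Image {
    HeavyImage(String path) {
        // Imagine an expensive decode of a large file into memory here.
        System.out.println("Loading " + path + " into memory...");
    }
    public void draw() {
        System.out.println("Drawing image");
    }
}

class ImageProxy implements Image {
    private final String path;
    private HeavyImage real;              // created only on first use

    ImageProxy(String path) {
        this.path = path;
    }

    public void draw() {
        if (real == null) {               // lazy-load on the first draw() call
            real = new HeavyImage(path);
        }
        real.draw();
    }
}

Client code holds Image references and never needs to know whether it is talking to the real image or to a placeholder that has not yet loaded.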
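The dynamic-proxy tip can be sketched with java.lang.reflect.Proxy: an InvocationHandler forwards each call to the wrapped object, times it, and logs calls slower than a threshold. The class name, the threshold parameter and the System.err logging are assumptions for illustration, not the article's code.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class TimingInvocationHandler implements InvocationHandler {
    private final Object target;         // the wrapped object
    private final long thresholdMillis;  // log calls slower than this

    private TimingInvocationHandler(Object target, long thresholdMillis) {
        this.target = target;
        this.thresholdMillis = thresholdMillis;
    }

    /** Wraps any interface-typed object in a timing proxy. */
    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface, long thresholdMillis) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface },
                new TimingInvocationHandler(target, thresholdMillis));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return method.invoke(target, args);          // forward to the real object
        } catch (InvocationTargetException e) {
            throw e.getCause();                          // rethrow the target's own exception
        } finally {
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMillis > thresholdMillis) {
                System.err.println(method.getName() + " took " + elapsedMillis + " ms");
            }
        }
    }
}

The attraction is that any interface-typed object (a DAO, a service facade) gets timed without changing its class or hand-writing a proxy per interface.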
http://java.dzone.com/articles/performance-tuning-resources
Performance Tuning Resources For Web Clients (Page last updated December 2009, Added 2010-02-25, Author Craig Dickson, Publisher JavaLobby). Tips:
- There is a strong internal push (in 2010) at Google to include page speed in Google's page ranking algorithm - in other words, the faster your page loads, the better your page ranks in Google's search results.
- [Article covers a number of current (in 2010) links and tools and discussions around optimizing website speed.]
http://www.insideria.com/2009/11/udp-socket-connections-for-los-1.html
Using UDP socket connections for low-latency and loss-tolerant scenarios (Page last updated November 2009, Added 2010-02-25, Author Ian McLean, Publisher InsideRIA). Tips:
- TCP ensures that everything arrives at its destination and arrives in the right order. This means additional processing is required both before the data is sent and after it is received. TCP can also down-throttle bandwidth during periods of high volume. TCP is the best choice for the majority of communications, but there are times when it is not the best fit: the additional processing and the potentially throttled bandwidth can translate into higher CPU usage and higher latencies.
- The UDP protocol lacks all of the transmission management that TCP provides: no 3-way handshake, and no guarantee that data sent is received at all, or received in the right order. This simplicity allows UDP to perform faster than TCP - but at the cost of possible data loss.
- UDP is ideal for situations that require low latency and where packet loss is inconsequential, such as 2-way video and audio streaming and online multiplayer games (a minimal Java UDP example appears after these tips).
- One-way video streaming is different from two-way real-time video. The one-way viewer typically wants to see video without interruptions and does not need to see it the exact moment it is sent, so is happy to wait for the video to be buffered and presented with fewer interruptions and no losses (jumps) - ideal for TCP. Two-way real-time video needs to be up-to-the-moment, otherwise the delay makes a conversation very difficult to hold; here low latency is important, and some data loss is acceptable if the alternative is delayed receipt of communications - ideal for UDP.
- UDP doesn't do any additional processing of its data, so compared to TCP there is far less strain on the CPU during times of high volume on a system. This means that a UDP server can maintain a greater number of simultaneous connections without running out of system resources.
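As a concrete illustration of the points above, here is a minimal Java UDP exchange using DatagramSocket; the port number and payload are arbitrary choices for the example. Note that there is no connection setup, no ordering and no retransmission - a datagram either arrives or it does not.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpExample {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(9876);   // bind a local port
             DatagramSocket sender = new DatagramSocket()) {       // ephemeral sending port

            // Fire-and-forget send: no handshake, no acknowledgement
            byte[] payload = "position-update:42,17".getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), 9876));

            // Blocking receive of a single datagram
            byte[] buf = new byte[1500];                           // roughly MTU-sized buffer
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.receive(packet);
            System.out.println(new String(packet.getData(), 0, packet.getLength(),
                    StandardCharsets.UTF_8));
        }
    }
}

Anything that must not be lost (logins, payments) still belongs on TCP; UDP suits the steady stream of small, quickly-stale updates described above.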
http://blog.dynatrace.com/2010/02/09/week-3-myths-and-truths-about-performance-measurement-overhead/
Myths and Truths about Performance Measurement Overhead (Page last updated February 2010, Added 2010-02-25, Author Alois Reitbauer, Publisher dynaTrace). Tips:
- Measuring a system modifies the behavior of the system itself - in order to collect our measurements we modify the system and execute additional code. This code execution consumes system resources.
- You should test the overhead of the performance management solution (i.e. compare performance with and without the measurement solution present).
- [The article author suggests measurement overhead should be measured according to the percentage cost of the transaction or throughput rather than CPU or memory.]
- In a sampling-based approach to measuring performance, the execution stacks of all running threads are analyzed at a defined interval, and the overhead depends only on the sampling rate, independent of the actual application - the higher the sampling rate, the higher the overhead. However, sampling introduces measurement errors: the longer the sampling interval, the lower the overhead but the less accurate the measurement (a minimal sampler sketch appears after these tips).
- Sampling is useful for detecting CPU-bound and global application problems even in high load environments.
- In event-based measurement, the times of entering and leaving a method are recorded, so the measurements are precise, but the overhead depends on the number of measurements taken.
- If you do not carefully select what you measure, your measurement data will be inaccurate, leading you to wrong conclusions.
- Synchronization problems are very sensitive to execution times, which makes them really hard to track down. Changes to the execution profile introduced by performance measurement can be enough to mask synchronization problems or produce phantom ones.
- In order to support very high throughput with low performance measurement overhead, raw performance data must be immediately handed off to a separate CPU, where the aggregation and reconstruction of the performance data should happen (a sketch of such a hand-off also appears after these tips).
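The sampling approach can be sketched in plain Java with Thread.getAllStackTraces(): a daemon thread wakes at a configurable interval and counts the topmost frame of each runnable thread. The interval, the count-only-the-top-frame strategy and the class name are illustrative assumptions; real sampling profilers are considerably more sophisticated.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StackSampler implements Runnable {
    private final long intervalMillis;
    private final Map<String, Integer> hotFrames = new ConcurrentHashMap<>();

    public StackSampler(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // One sample: walk every live thread's stack and count its top frame
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (e.getKey().getState() == Thread.State.RUNNABLE && stack.length > 0) {
                    hotFrames.merge(stack[0].toString(), 1, Integer::sum);
                }
            }
            try {
                Thread.sleep(intervalMillis);   // shorter interval: more accuracy, more overhead
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
    }

    /** A copy of the frame counts collected so far. */
    public Map<String, Integer> snapshot() {
        return new ConcurrentHashMap<>(hotFrames);
    }

    public static StackSampler start(long intervalMillis) {
        StackSampler sampler = new StackSampler(intervalMillis);
        Thread t = new Thread(sampler, "stack-sampler");
        t.setDaemon(true);    // don't keep the JVM alive just for the sampler
        t.start();
        return sampler;
    }
}

The cost per sample depends on the interval and the number of live threads rather than on which application code happens to be running, which is the trade-off described in the tips above.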
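The last tip - capture cheaply on the application thread, aggregate elsewhere - might look something like the sketch below: a bounded queue plus a background aggregator thread that the scheduler is free to run on another core. The queue size, the drop-on-full policy and the per-method totals are assumptions for illustration.

import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public final class AsyncRecorder {

    private static final class Measurement {
        final String name;
        final long nanos;
        Measurement(String name, long nanos) { this.name = name; this.nanos = nanos; }
    }

    private static final BlockingQueue<Measurement> QUEUE = new ArrayBlockingQueue<>(100_000);
    private static final Map<String, Long> TOTALS = new ConcurrentHashMap<>();

    static {
        Thread aggregator = new Thread(() -> {
            try {
                while (true) {
                    Measurement m = QUEUE.take();              // blocks off the hot path
                    TOTALS.merge(m.name, m.nanos, Long::sum);  // aggregation happens here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "measurement-aggregator");
        aggregator.setDaemon(true);
        aggregator.start();
    }

    /** Called on the application thread: one cheap, non-blocking queue offer. */
    public static void record(String name, long nanos) {
        QUEUE.offer(new Measurement(name, nanos));  // drop the sample if the queue is full
    }

    public static Map<String, Long> totals() {
        return new ConcurrentHashMap<>(TOTALS);
    }
}

Application code calls AsyncRecorder.record("MyService.doWork", elapsedNanos) and carries on; the summing and reporting never run on the request thread.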
http://saasinterrupted.com/2009/12/16/the-most-common-flaw-in-software-performance-testing/
The most common flaw in software performance testing (Page last updated December 2009, Added 2010-02-25, Author Ashish Soni, Publisher saasinterrupted). Tips:
- The biggest reason for a discrepancy in performance between stress tests and production is that the stress test is executed on a very different data set from what is in production.
- Ensure that the data set you run your stress tests on is representative of the data set in your production system.
- One simple and easy way to run meaningful performance tests is to take a snapshot of your production data (minus any personal/private information of course) and to execute the stress test on this data set.
- If it is not possible to get usable production data to run tests on, it is worth creating a synthetic data driver, which reproduces production-like data in a non-production system, in a configurable way to simulate a specified volume (a sketch appears after these tips).
- If you're on an Oracle DB, you can use the Data Masking option to take production data and scramble it so that it is representative but no longer real.
- If you can record real traffic from your production system and replay it on the test system, that can provide an excellent real world test.
- For tests, having a production-like DB is important, but another important factor is being able to reproduce real-world activity.
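A synthetic data driver can start as simply as the hypothetical sketch below, which generates a configurable volume of production-like (but entirely fake) customer records with a skewed rather than uniform distribution; the Customer shape, field names and distribution are assumptions for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SyntheticDataDriver {

    public static final class Customer {
        public final String name;
        public final String email;
        public final int ordersPlaced;
        Customer(String name, String email, int ordersPlaced) {
            this.name = name;
            this.email = email;
            this.ordersPlaced = ordersPlaced;
        }
    }

    private final Random random;

    public SyntheticDataDriver(long seed) {
        this.random = new Random(seed);   // a fixed seed makes test runs repeatable
    }

    public List<Customer> generate(int volume) {
        List<Customer> customers = new ArrayList<>(volume);
        for (int i = 0; i < volume; i++) {
            String name = "customer-" + i;
            // Skew the order count so most customers are light users and a few are heavy,
            // roughly mimicking a production-like distribution rather than a uniform one.
            int orders = (int) Math.round(Math.abs(random.nextGaussian()) * 10);
            customers.add(new Customer(name, name + "@example.com", orders));
        }
        return customers;
    }
}

A load test would then bulk-insert the generated records into the test database before the stress run, scaling the volume parameter to match production.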
http://www.performanceengineer.com/blog/performance-is-not-a-check-box/
Performance is not a Check Box (Page last updated August 2009, Added 2010-02-25, Author Charlie Weiblen, Publisher performanceengineer). Tips:
- Application performance is not just a testing activity; it is part of your product and therefore must be addressed throughout the product life-cycle in order to be effective and achieve the best results.
- Performance requirements (non-functional requirements) are typically pass/fail criteria, for example: are N concurrent users supported; are N transactions per second (or bytes per second) handled; are response times below Y (a test-based sketch appears after these tips).
- Functional requirements have performance impacts and need to be evaluated, for example: data sizes supported; browsers supported (recent browsers perform much better than older ones); can asynchronous requests be used; how much data needs to be presented; can long lists of data be paginated.
- Managing application requirements is the first step in managing performance.
- A single-threaded process cannot be expected to satisfy hundreds of requests per second.
- Performance engineers need to consider the following aspects of design and architecture: use of frameworks, libraries and dependencies; potential architectural bottlenecks - database access, messaging, flow of data; user interface design.
- Early detection of performance defects reduces product development and testing time. Some of the things that help catch performance defects early include: code analysis (e.g. PMD, FindBugs); integration tests capturing performance data; profiling (look for poor algorithms, slow SQL queries, etc.); comprehensive performance test plans.
- Effective large-scale performance tests are critical to: validate that requirements are met; identify system bottlenecks and concurrency issues; test scalability and stability.
- Once the application is live, the user experience must be measured and monitored in order to identify any performance problems.
- Live production data provides key feedback to help refine and improve performance requirements, design, architecture and testing.
- Managing application performance is more than just performance testing; it is managing the user experience.
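A pass/fail performance requirement of the kind described above can be written directly as a test. The sketch below (JUnit 4 style) asserts an average response-time budget; the 200 ms budget, the sample count and the callService() stand-in are assumptions for illustration, and a real test would normally check percentiles under concurrent load rather than a single-threaded average.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ResponseTimeRequirementTest {

    private static final long MAX_MILLIS = 200;   // the agreed response-time budget
    private static final int SAMPLES = 50;

    @Test
    public void averageResponseTimeIsWithinBudget() {
        long totalNanos = 0;
        for (int i = 0; i < SAMPLES; i++) {
            long start = System.nanoTime();
            callService();                         // hypothetical call to the system under test
            totalNanos += System.nanoTime() - start;
        }
        long averageMillis = totalNanos / SAMPLES / 1_000_000;
        assertTrue("average response time was " + averageMillis + " ms",
                   averageMillis <= MAX_MILLIS);
    }

    private void callService() {
        // Stand-in for the real request; replace with an HTTP call or service invocation.
    }
}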
Jack Shirazi