Java Performance Tuning
Tips November 2003
http://www.JavaPerformanceTuning.com/news/interview036.shtml
Interviewing Bruce Tate (Page last updated November 2003, Added 2003-11-28, Author Kirk Pepperdine, Publisher JavaPerformanceTuning.com). Tips:
- Manage performance gradually throughout the development process, otherwise you're either building in too much performance (expensive), or you're missing your performance goals.
- Measure performance requirements using JUnit test cases (e.g. with JUnitPerf); see the sketch after this list.
- Performance tuning starts with requirements gathering, e.g. the new system must be as fast as the old; priority functions must be quick; e-commerce sites must have a fast interface.
- Four seconds is a benchmark for maximum web based user interface response time.
- Lots of performance problems come from roll-your-own persistence frameworks.
- Lots of small lightning fast queries can add up to give an overall slow response.
- Combine remote queries to speed up overall performance.
- In some cases EJBs are inappropriate and simple Java objects and simple persistence is much faster.
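A minimal sketch of the JUnitPerf approach mentioned above, in JUnit 3 style: an ordinary test is wrapped in JUnitPerf's TimedTest and LoadTest decorators so the suite fails if the response-time target is missed. The CheckoutResponseTest class, its test body, the 4-second threshold and the 10-user load are illustrative assumptions, not details from the interview.

import junit.framework.Test;
import junit.framework.TestCase;
import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.TimedTest;

public class CheckoutResponseTest extends TestCase {

    public CheckoutResponseTest(String name) {
        super(name);
    }

    public void testCheckout() {
        // ... exercise the checkout code path being measured ...
    }

    public static Test suite() {
        Test checkout = new CheckoutResponseTest("testCheckout");
        // Fail if a single checkout takes longer than 4 seconds.
        Test timed = new TimedTest(checkout, 4000);
        // Run the timed test with 10 simulated concurrent users.
        return new LoadTest(timed, 10);
    }
}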
http://www.j2life.com/bitterjava/java_performance.html
The top 5 performance pigs (Page last updated September 2001, Added 2003-11-28, Author Bruce Tate, Publisher J2Life). Tips:
- Iterating over a major system boundary (e.g. with remote calls) is expensive. Individual requests may be short, but combining many short requests adds up to a huge overhead and slow response.
- Buffer remote calls to send them together in one remote request. The façade pattern and the distributed command bean can help you manage this type of buffering.
- Caching can dramatically reduce round tripping. Caching in the network, the edge servers and throughout a dynamic application saves nearly an order of magnitude.
- Don't make everything an EJB. Objects having requirements to be persistent, transactional and shared are the best candidates. Tables that rarely change can be implemented more efficiently with other means, like a stateless session bean.
- Overcomplexity can be bad for performance. User interfaces frequently create more events than even the fastest processors can handle. Deep inheritance graphs can make code all but unreadable, and exact a stiff performance tax.
- It is easy to develop database models with no organizational forethought or any index structure at all, and this will have a large performance cost.
- Develop a strategy that allows efficient access to database data, with minimal redundancy and good performance. The object-to-database mapping is critical.
- Use "Explain" to understand the database access plan, and alter queries for optimal use of the database.
- Make sure that database statistics are run at regular intervals, and that the database is tuned correctly.
- Two important tuning parameters for most databases are the row buffer (which is basically an in-memory cache) and the lock list.
- A large row buffer will allow frequently accessed database rows to stay in memory, so that subsequent reads do not have to access the disk.
- A large lock list will help to avoid unnecessary lock escalation (where the database trades many row-level locks for a single table-level lock, badly crippling concurrent access).
- Long units of work can accumulate too many locks, eat up log space, and harm concurrency.
- Applications with a high number of very large objects can take significantly longer to load.
- Internet pages with many graphics take significantly longer to load than those with few graphics.
- If only a portion of a method needs synchronization, use synchronized blocks around the critical sections instead of synchronizing the entire method.
- Identify critical resources and code fragments, and surround only those critical sections with synchronized blocks (see the sketch after this list).
- Overuse of synchronized carries a performance penalty and also restricts concurrency.
- Exclusive locks (i.e. synchronized) are often too restrictive: many applications should use a different locking model, especially when reads are more common than writes.
- Use the fastest algorithm for a problem, not the easiest to implement.
- Applications that require random access to a collection with no traversal should use an O(1) hash table rather than an O(n) array.
- Don't complicate code to gain unnecessary performance or memory improvements.
- Objects allocated for session state or custom connection frameworks are multiplied across users and connections, so beware of creating too many there.
- HTML user interfaces are much faster than applets because of the long initial applet download time.
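As a sketch of the synchronized-block tips above (the HitCounter class is hypothetical): only the update of shared state is treated as a critical section, so the per-request work runs without holding the lock and other threads are blocked as briefly as possible.

import java.util.HashMap;
import java.util.Map;

public class HitCounter {

    private final Map countsByPage = new HashMap();

    public void record(String rawPage) {
        // Purely local work: touches no shared state, needs no lock.
        String page = rawPage.trim().toLowerCase();

        // Only the shared map update is synchronized.
        synchronized (countsByPage) {
            Integer count = (Integer) countsByPage.get(page);
            int next = (count == null) ? 1 : count.intValue() + 1;
            countsByPage.put(page, new Integer(next));
        }
    }

    public int getCount(String page) {
        synchronized (countsByPage) {
            Integer count = (Integer) countsByPage.get(page);
            return (count == null) ? 0 : count.intValue();
        }
    }
}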
http://www.manning.com/tate/chap01.pdf
Ch1 of Bitter Java, Bitter tales (Page last updated March 2002, Added 2003-11-28, Author Bruce Tate, Publisher Manning). Tips:
- In most cases, readability is far more important than optimization. Where optimization is necessary, comment liberally.
- Use the right algorithm and right class for the job (see the sketch after this list).
- Early performance checks can point out design flaws.
- Internet applications are particularly vulnerable to communication overhead.
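A small sketch of "the right class for the job", echoing the O(1)-versus-O(n) point in the previous list (the CustomerDirectory class is illustrative): random lookups by key belong in a HashMap rather than a linear scan.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CustomerDirectory {

    private final Map namesById = new HashMap();   // customer id -> name

    // O(n): a linear scan over parallel lists on every lookup.
    public static String findNameSlow(List ids, List names, String id) {
        for (int i = 0; i < ids.size(); i++) {
            if (ids.get(i).equals(id)) {
                return (String) names.get(i);
            }
        }
        return null;
    }

    // O(1) on average: a single hash lookup.
    public void add(String id, String name) {
        namesById.put(id, name);
    }

    public String findName(String id) {
        return (String) namesById.get(id);
    }
}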
http://www.manning.com/tate/chap06.pdf
Ch6 of Bitter Java, Bitter memories (Page last updated March 2002, Added 2003-11-28, Author Bruce Tate, Publisher Manning). Tips:
- An object's life cycle goes from being unallocated, to allocated, to live, to unused, to garbage-collected. Java memory leaks [unintentional object retention] are objects that are reachable, but not live.
- Event-notification design patterns (e.g. change listeners in AWT, Swing, JavaBeans) and singletons are common locations for memory leaks.
- Every "add listener" action should have a corresponding "remove listener" action somewhere else.
- Use WeakReferences to hold objects that are added but not explicitly removed from collections. Use SoftReferences to cache objects (see the sketch after this list).
- Likely sources for unintentional object retention are: caches, session states, user interfaces, EJB containers.
- Caches and session states should have elements timed out.
- Examine objects with long life cycles and try to make sure references to transient objects are removed.
- Use finally blocks to ensure objects are dereferenced even when exceptions are thrown.
- Identify memory leaks by monitoring memory and seeing that the amount of heap used after each garbage collection is always increasing.
- Memory leaks do not need to be fixed unless the application's life span is long enough for the memory leak to become a problem.
- To determine the cause of a memory leak you need a tool which lets you: Trigger garbage collection on demand; Examine the heap; Determine the references to an object.
- Fix memory leaks by: Removing the object reference directly by setting it to another value; Removing an object from a collection; Weakening the references using Java reference objects; Shortening the life cycle of the referent; Shortening the life cycle of the object; Removing the object from the code; Refactoring the code.
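A rough sketch of the reference-object tips above (the ReportCache class and byte[] payload are illustrative): SoftReferences let the garbage collector clear cached values under memory pressure, so the cache cannot itself become a leak, while a WeakHashMap lets entries for objects that are never explicitly removed disappear once their key is otherwise unreachable.

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

public class ReportCache {

    // Values are only softly reachable: the GC may clear them before
    // running out of memory, so the cache cannot keep the heap growing.
    private final Map cache = new HashMap();

    // Keys are only weakly reachable: when an owner is no longer referenced
    // elsewhere, its entry becomes collectable even if it is never removed.
    private final Map listenersByOwner = new WeakHashMap();

    public void put(String key, byte[] report) {
        cache.put(key, new SoftReference(report));
    }

    public byte[] get(String key) {
        SoftReference ref = (SoftReference) cache.get(key);
        return (ref == null) ? null : (byte[]) ref.get();   // null if already collected
    }

    public void register(Object owner, Object listener) {
        listenersByOwner.put(owner, listener);
    }
}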
http://www.manning.com/tate2/bejb_06.pdf
Ch6 of Bitter EJB, Bitter messages (Page last updated June 2003, Added 2003-11-28, Author Bruce Tate, Mike Clark, Bob Lee, Patrick Linskey, Publisher Manning). Tips:
- StreamMessage may provide better performance than MapMessage where keys are not necessary, i.e. when the message format is well defined (see the sketch after this list).
- Fat messages can cause a decrease in message throughput.
- Choose a message type capable of carrying the simplest payload that meets the messaging needs.
- Avoid carrying extra data that may be needed by future versions.
- Omit data that a consumer could derive from the information already in the message.
- Take into account how often the message will be delivered and whether the delivery of the message must be guaranteed.
- Message size and message frequency combine to make message throughput. Maximum message throughput is usually limited, so size or frequency may need to be decreased.
- Replace data in messages with references to the data from some other source. But this strategy can create problems: message consumers may have to do too much extra work, or may need external access to shared resources. A good option is to include a reference and also sufficient information to enable message consumers to decide whether they need to access the reference.
- Optimal message size is a balance between sending the minimum data possible, and sending sufficient data to reduce the work done by the message consumer.
- The overhead of parsing XML lengthens the time message consumers need to process a message. This extra processing may in turn limit the overall message throughput of the application.
- Start with the simplest message type and benchmark its performance. Then, if other message formats (like XML) become necessary, you'll have a baseline to compare them against.
- Guaranteed message delivery comes at a price in terms of resources and performance. Maximum reliability may be too slow; balance your reliability needs against the required performance.
- Persistent messages (the default) are slower and have a higher overhead than non-persistent messages.
- Point-to-point messages not consumed at a rate equal to or greater than their rate of arrival may cause the message queue to grow unchecked, putting additional strain on the JMS server.
- If a durable subscriber is disconnected for relatively long periods of time, and messages have a long life span, the JMS server is burdened with having to manage all outstanding messages.
- Store only those messages which are critical to deliver.
- Excessive coupling works against asynchronous messaging. If an immediate synchronous reply is required, JMS may be an unnecessary overhead.
- Acknowledge successful receipt of a message before processing it to avoid the JMS server needing to re-send messages.
- If a message-driven bean's onMessage() method takes too long, it may decrease throughput.
- Message selectors enable messages to be filtered, thus letting consumers avoid messages they don't need to handle. But message selectors have some overhead.
- The performance of an individual message's delivery cycle may be markedly different when the JMS server is under load. Use realistic automated tests to measure performance for your application.
- Factors that should be considered when writing and running messaging performance tests: Message throughput; Message density; Message delivery mode; Production rate versus consumption rate; The length of time the test runs.
- Plotting the number of messages processed as a function of time will help pinpoint where message throughput plateaus.
- When using point-to-point messaging, plotting the queue size over time will clearly indicate when messages are being backlogged.
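A minimal sketch of the StreamMessage point at the top of this list, using the standard javax.jms API (the OrderSender class and the order fields are illustrative): because producer and consumer agree on the field order, the message carries values only, with no per-field key strings.

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.StreamMessage;

public class OrderSender {

    // The field order (id, quantity, price) is a convention shared with
    // the consumer, so no MapMessage keys travel with the message.
    public void send(Session session, MessageProducer producer,
                     String orderId, int quantity, double price) throws JMSException {
        StreamMessage message = session.createStreamMessage();
        message.writeString(orderId);
        message.writeInt(quantity);
        message.writeDouble(price);
        producer.send(message);
    }
}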
http://www.manning.com/tate2/bejb_09.pdf
Ch9 of Bitter EJB, Bitter tunes (Page last updated June 2003, Added 2003-11-28, Author Bruce Tate, Mike Clark, Bob Lee, Patrick Linskey, Publisher Manning). Tips:
- Failure to start measuring performance early invites the danger that significant problems will crop up later, when redesigns are no longer economical.
- There are two measurements of performance: response time and throughput. You tune and test applications differently, depending on which measurement is the focus.
- You can throttle the application for consistent performance by limiting the number of concurrent requests each resource can handle, queueing waiting requests.
- Applications that scale well can deliver higher throughput by adding more resources while maintaining acceptable response times.
- When the average response time of a business transaction becomes intolerable under load, the application has reached its maximum effective throughput.
- Premature code-level optimizations are usually a waste of time, and can complicate code unnecessarily and even introduce bugs.
- If your business object doesn't require concurrent read and write access while retaining stringent transactional integrity, then the use of an entity bean may incur unnecessary complexity and performance overhead. A servlet or session EJB using JDBC is often sufficient.
- Bean-managed persistence entity beans may suffer from hard-coded SQL, difficult-to-maintain database logic, and n + 1 database calls to load n bean instances. Entity beans using container-managed persistence generally are more efficient and easier to develop, if used properly.
- If not designed carefully, custom primary key generators may require synchronization that becomes a scalability bottleneck. Better scalability, with less work, may be realized by using the automatic primary key generators already provided by your database. To help, JDBC 3.0 includes new methods to facilitate the retrieval of automatically generated fields (see the sketch after this list).
- If the data in your cache is changing more often than it's being used, then the number of cache hits may not justify the complexity of caching while preserving data integrity.
- The use of custom resource pools in the name of better performance may prevent your application server from managing resources effectively.
- Specify performance targets.
- "make it run, make it right, make it fast.? In that order.
- Simple designs that use well-factored, modular code are more amenable to performance tuning than more complicated designs.
- Well-factored code is easier to change, and code that's easy to change is easier to tune.
- If you delay considering performance until right before the application goes live, it's usually too little too late.
- Plan for performance and constantly consider the current state and goals of the application by taking performance measurements and making corrections to the plan throughout the project.
- Guidelines for performance planning: Understand the application's usage patterns; Prioritize performance requirements; Write automated performance tests with defined performance targets; Build modular components; Revise plans as necessary; Understand the available system configuration options; Test on production-quality hardware as soon as possible.
- Use a profiler, don't guess where bottlenecks are.
- Change one thing at a time.
- Measure the performance effect of each change.
- Stop tuning when you reach your performance goal.
- A tuning methodology: Choose quantifiable performance goals; Profile to identify hot spots; Write and run an automated test that baselines current performance; Make a change intended to improve performance; Run the automated test again to measure the gain (or loss) in performance; Repeat as often as necessary until the application meets its performance goals.
- The test environment has to be predictable, and the tests must be repeatable: caches, pools and other mechanisms must be accounted for.
- Automate performance testing.
- To simulate realistic production traffic and usage patterns, you need to test your application's performance with representative data, tool versions, workloads, network latency, and hardware capacity.
- Write "worst case scenario" tests, like user-login storms.
- Don't underestimate the value of a tool that can monitor the performance of the real system; no real substitute exists for tuning an application in production, since usage patterns in a live system may differ from what you expected.
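A small sketch of the JDBC 3.0 generated-keys facility mentioned earlier in this list (the orders table and OrderDao class are illustrative): the database assigns the primary key, so the application needs no synchronized key generator of its own.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OrderDao {

    public long insertOrder(Connection con, String customerId) throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO orders (customer_id) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS);   // ask the driver for the generated key
        try {
            ps.setString(1, customerId);
            ps.executeUpdate();
            ResultSet keys = ps.getGeneratedKeys(); // key value chosen by the database
            keys.next();
            return keys.getLong(1);
        } finally {
            ps.close();
        }
    }
}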
Jack Shirazi