Java Performance Tuning
Tips October 2008
Back to newsletter 095 contents
Five Lessons That Life Holds For Performance Tuning (Page last updated October 2008, Added 2008-10-30, Author Jack Shirazi, Publisher fasterj). Tips:
- Identifying and fixing performance issues requires you to ask: What is happening and why did it happen; What changed to cause it; What are the performance targets; How do I achieve those targets.
- Focus your fixes on problems that are tractable; try to bypass those that aren't.
- Look at things from a different angle, a change of perspective can often help you identify the cause or the solution of an issue.
- Plan your performance. Don't get overwhelmed by the detail, prioritise what is important and focus on them.
- When comparing lists (and most things) for equality remember that a comparison failure tends to occur much more quickly than a successful comparison.
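The last tip above can be sketched with a small counter (the class and method names are illustrative, not from the article): a list comparison fails as soon as the first mismatch is found, but a successful comparison must inspect every element.

```java
import java.util.Arrays;
import java.util.List;

public class CompareCost {
    // Counts the element comparisons performed by a typical list-equality loop.
    static int comparisons;

    static boolean listsEqual(List<Integer> a, List<Integer> b) {
        comparisons = 0;
        if (a.size() != b.size()) return false;
        for (int i = 0; i < a.size(); i++) {
            comparisons++;
            if (!a.get(i).equals(b.get(i))) return false; // fails fast on the first mismatch
        }
        return true; // success had to inspect every element
    }

    public static void main(String[] args) {
        List<Integer> base  = Arrays.asList(1, 2, 3, 4, 5);
        List<Integer> same  = Arrays.asList(1, 2, 3, 4, 5);
        List<Integer> diff0 = Arrays.asList(9, 2, 3, 4, 5);

        listsEqual(base, same);
        System.out.println("equal lists:   " + comparisons + " comparisons"); // 5
        listsEqual(base, diff0);
        System.out.println("first differs: " + comparisons + " comparisons"); // 1
    }
}
```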
Is your concurrent collector failing you? (Page last updated October 2008, Added 2008-10-30, Author Kirk Pepperdine, Publisher kodewerk). Tips:
- String s = new String(); gives us a String object, and the = binds a reference to it into either a local or instance context. It is this reference that keeps the object live in memory. Once the context in which it is bound goes out of scope, or we execute s = anotherObject, the first object is said to be unreachable and the collector is free to reclaim its memory.
- For garbage collection (GC) to work, all objects must be separately registered in another structure that keeps track of them for the GC. Since this structure will be mutated by potentially many threads, it must be implemented in a thread-safe manner. And when the garbage collector runs, it must have exclusive access to this structure - hence the collector pauses our mutating application threads while it works with it. This is commonly known as "stop-the-world" behavior, or a GC pause.
- Concurrent Mark and Sweep (CMS) minimizes stop-the-world behavior by making a copy of the special structure that all objects are registered in, using that out-of-date structure, and reconciling differences later. This takes more CPU, but allows for much shorter pauses.
- CMS threads are regulated so that they will only consume 50% of the CPU. That is configurable, though I wouldn't recommend adjusting it unless you have a very good reason to do so.
- If taking some expertly suggested action makes the user experience worse, ignore the advice you've been given even if you've found it in the documentation and roll back the change.
- CMS can actually allow your threads to generate garbage even faster than without it, so that you build up reclaimable objects faster and actually slow the system down. CMS needs testing as an option, and should not be assumed to work better than other collection algorithms.
- If the heap gets fragmented, CMS must compact, and CMS compaction times can be significant. It's most likely to happen when old space is heavily littered with live objects.
- A typical scenario for CMS compaction to occur is with an application that creates both large and small objects and then holds onto these objects long enough so they "leak" into old space.
- If your application is creating and using specialized classloaders, you need to be aware that even if you cut all ties to them, CMS may not be able to recover them.
- Unless you have class unloading enabled, all of the classloader activity could fill up old space and you'll eventually start thrashing on GC. Prior to Java 1.6, you must also specify that you want CMS to collect perm space.
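A sketch of the HotSpot command-line options the CMS tips above refer to (flag availability and defaults varied across releases of that era; verify against your JVM's documentation before relying on them). -XX:+CMSPermGenSweepingEnabled was the extra pre-1.6 flag for perm space collection alongside class unloading:

```
java -XX:+UseConcMarkSweepGC \
     -XX:CMSInitiatingOccupancyFraction=70 \
     -XX:+CMSClassUnloadingEnabled \
     -XX:+CMSPermGenSweepingEnabled \
     MyApp
```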
- A big clue that you are suffering from a CMS failure: same application, same usage patterns, same JRE and so on, so that the only difference is the collector - and CMS leaves the heap full whereas the regular pausing collector doesn't.
Maximizing Java Performance with Bespoke Programming (Page last updated August 2008, Added 2008-10-30, Author Adrian Marriott, Publisher JDJ). Tips:
- Tests should be run several times, and averages and deviations used, on isolated systems with no other users or applications operating.
- As a rule, the fewer instructions a program executes the faster it runs. This implies that you can usually outperform any generic components you might be using by rewriting for your specific situation.
- [Article describes an efficient mechanism for sorting numbers as an example of how specialised algorithms can be significantly faster than generic ones].
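As an illustration of the specialised-versus-generic point (this is a standard counting sort, not necessarily the mechanism the article uses): when the values are known to be small non-negative integers, an O(n + max) tally beats a generic O(n log n) comparison sort.

```java
import java.util.Arrays;

public class CountingSort {
    // Specialised sort for ints known to lie in [0, max]: O(n + max),
    // versus the generic comparison sort's O(n log n).
    static int[] countingSort(int[] input, int max) {
        int[] counts = new int[max + 1];
        for (int v : input) counts[v]++;          // tally each value
        int[] out = new int[input.length];
        int i = 0;
        for (int v = 0; v <= max; v++)            // emit values in order
            for (int c = 0; c < counts[v]; c++)
                out[i++] = v;
        return out;
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 8, 3, 1, 9, 0};
        System.out.println(Arrays.toString(countingSort(data, 9)));
        // [0, 1, 3, 3, 5, 8, 9]
    }
}
```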
Why 0x61c88647? (Page last updated September 2008, Added 2008-10-30, Author Dr. Heinz M. Kabutz, Publisher The Java Specialists' Newsletter). Tips:
- ThreadLocals are garbage collected when the thread is garbage collected. However, if the thread is a member of a thread pool, then the values may never be garbage collected, leading to a memory leak.
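A common mitigation for the pooled-thread leak described above is to clear the ThreadLocal in a finally block when the unit of work ends (a sketch with illustrative names; ThreadLocal.withInitial requires Java 8+):

```java
public class ThreadLocalCleanup {
    // A pooled worker thread outlives each task it runs, so a ThreadLocal
    // value set during one task lingers unless it is explicitly removed.
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    static String handleRequest(String payload) {
        try {
            StringBuilder buf = BUFFER.get();
            buf.append(payload);
            return buf.toString();
        } finally {
            BUFFER.remove(); // prevents the value leaking into the next task on this thread
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("a")); // a
        System.out.println(handleRequest("b")); // b - not "ab", thanks to remove()
    }
}
```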
Automating Java Performance Tuning (Page last updated September 2008, Added 2008-10-30, Author Carl Brahms, Publisher Oracle). Tips:
- Java performance tuning is an ongoing and often long process. You can rarely solve performance problems with one shot.
- To produce the best performance you need clear performance goals, a well-thought out design, a solid implementation, and thorough performance tuning.
- Your first step should be to determine what your performance goals are.
- The expected behavior and number of users, amount of data, and size of requests largely determine what type of decisions you will make in tuning.
- You can adjust: pool sizes, connection backlog buffering, caching, JDBC and JMS settings, work manager priorities, clustering, and many more.
- Make sure the proper OS and network settings are tuned for the application's requirements.
- Monitor your server's disk and network I/O and CPU utilization while under load.
- Your database block size, pool size and other vendor-specific performance tuning settings should be examined.
- Keep in mind the point is to meet your performance goals, not to eliminate every single bottleneck. There will always be some bottleneck.
- Design your application using proven performance patterns and keep it simple. Poorly designed applications can be the cause of system resource, network or database bottlenecks.
- Try adjusting the size of your heap and its generations. As a general sizing rule, you want to have around half the heap free at the end of each garbage collection. Another way of saying this is the heap should be at least two times the size of its live objects.
- The most basic of heap performance tuning steps is to set the minimum and maximum heap size to be equal.
- A larger heap reduces the frequency of garbage collection but might take longer to execute the larger garbage collections.
- Be careful not to exceed the total size of the physical RAM. OS paging memory to disk will drastically reduce performance.
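The half-the-heap-free rule of thumb above can be checked at runtime with the standard Runtime memory methods (a minimal sketch; for real measurements, sample just after a garbage collection):

```java
public class HeapCheck {
    // Reports the free-heap ratio; the rule of thumb suggests roughly
    // half the heap should be free at the end of each collection.
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();   // current heap size
        long free  = rt.freeMemory();    // unused portion of the current heap
        long max   = rt.maxMemory();     // the -Xmx ceiling
        double freeRatio = (double) free / total;
        System.out.printf("heap %dMB of %dMB max, %.0f%% free%n",
                total >> 20, max >> 20, freeRatio * 100);
    }
}
```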
- Choose the garbage collector that minimizes garbage collection pause times, and improves the garbage collection throughput.
- Productive tuning gets the developers, architects, system engineers, QA testers, network engineers and DBAs to work together as a team. Cross-discipline participation in the tuning process can make for shorter work, better results, and ultimately reduce the cost and time it takes.
- Many performance problems are introduced by minor changes to the functionality of the application that nobody predicted would impact anything.
- Outages can be caused by not tuning or by tuning incorrectly, and a properly tuned environment runs more predictably and is more stable.
- [Article describes using Arcturus Applicare's Tune Wizard to automatically test and partially tune applications].
- The generic tuning cycle is: generate load; monitor; analyse behavior; tune; repeat until you have reached the target performance.
Are all stateful Web applications broken? (Page last updated September 2008, Added 2008-10-30, Author Brian Goetz, Publisher IBM). Tips:
- Web applications that use HttpSession for mutable data usually do so with insufficient coordination, exposing themselves to concurrency hazards.
- HttpRequest data only persists for the lifetime of the request; HttpSession data persists for the lifetime of a session between a user and the application; ServletContext data persists for the lifetime of the application.
- getAttribute() and setAttribute() methods may be called at any time by different threads, so need to be thread-safe.
- Web applications are by default stateless. Once a Web application stores data in a shared container like HttpSession or ServletContext, you've turned it into a concurrent one, and you have to think about thread-safety within the application.
- Thread safety is about properly coordinating access to mutable data that is accessed by multiple threads
- Once a Web application wants to share data across requests, the application developer must pay attention to where that shared data is and ensure that there is sufficient coordination (synchronization) between threads when accessing the shared data to avoid threading hazards.
- One concurrency failure mode is atomicity failure, where one thread is updating multiple data items and another thread reads the data while they are in an inconsistent state
- One concurrency failure mode is visibility failure between a reading thread and a writing thread, where one thread modifies the cart but the other sees a stale or inconsistent state for the cart's contents
- Mutable objects placed in scoped containers should have their state transitions made atomic either through synchronization or through the atomic variable classes in java.util.concurrent.
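The atomic-update advice above can be sketched as follows. The map here is an illustrative stand-in for HttpSession attributes, so the example stays free of servlet API dependencies; the pattern of putting an atomic holder in the shared container and mutating through it carries over directly.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SessionCounter {
    // Stand-in for session attributes; getAttribute/setAttribute on a real
    // HttpSession may likewise be called concurrently by request threads.
    static final ConcurrentHashMap<String, AtomicInteger> session = new ConcurrentHashMap<>();

    static int addToCart(int quantity) {
        // computeIfAbsent plus addAndGet keeps the read-modify-write atomic;
        // a plain Integer attribute updated with get-then-put would lose updates.
        AtomicInteger count = session.computeIfAbsent("cartItems", k -> new AtomicInteger());
        return count.addAndGet(quantity);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] shoppers = new Thread[4];
        for (int i = 0; i < shoppers.length; i++) {
            shoppers[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) addToCart(1); });
            shoppers[i].start();
        }
        for (Thread t : shoppers) t.join();
        System.out.println(session.get("cartItems").get()); // 4000 - no lost updates
    }
}
```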
- Serializing requests on an HttpSession makes many concurrency hazards go away; the SpringMVC framework offers a way to ask for this, and the approach can easily be reimplemented in other frameworks.
Monitoring performance in a WebSphere Portal environment (Page last updated October 2008, Added 2008-10-30, Authors Jesse T. Bickmore, Dharmesh Mistry, Kevin Dzwonchyk, Gourishankar Menon, Publisher IBM). Tips:
- Important cache statistics include: current number of entries; high-water number of entries; configured maximum size; hits; misses; hit rate (successful cache lookups compared to the total number of lookups); evictions; explicit removals; number of lookups; number of inserts.
- If a cache is too small, then that can cause lookups to occur too often (e.g. database lookups), which can result in performance degradation.
- If a cache is too big then it can waste memory in the JVM and could result in low memory conditions or increased paging.
- A cache that expires too often can result in unnecessary performance degradation.
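The cache statistics listed above can be gathered by any bounded cache; as a sketch (an illustrative LRU built on LinkedHashMap's access-order mode, not WebSphere's cache implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StatsCache<K, V> extends LinkedHashMap<K, V> {
    // Bounded LRU cache that tracks hit/miss/eviction statistics.
    private final int maxEntries;
    long hits, misses, evictions;

    public StatsCache(int maxEntries) {
        super(16, 0.75f, true);          // accessOrder=true gives LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        boolean evict = size() > maxEntries; // called by put(); evicts the LRU entry
        if (evict) evictions++;
        return evict;
    }

    public V lookup(K key) {
        V value = get(key);
        if (value == null) misses++; else hits++;
        return value;
    }

    public double hitRate() {
        long lookups = hits + misses;
        return lookups == 0 ? 0.0 : (double) hits / lookups;
    }
}
```

Sizing follows the tips above: watch the eviction count to spot a too-small cache, and the entry count against available memory to spot a too-large one.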
- JVM metrics worth monitoring include JDBC, JVM memory usage, servlet transport threads, and database connections.
- To add debug timing information to a JSP, you can use <% long start = java.lang.System.currentTimeMillis(); %> and <!-- TOTAL TIME: <%= java.lang.System.currentTimeMillis() - start %>ms -->
Last Updated: 2019-06-30
Copyright © 2000-2019 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss