Java Performance Tuning
Tips March 2007
http://www.fasterj.com/articles/apmscore1.shtml
The APM score (Page last updated March 2007, Added 2007-03-29, Author Jack Shirazi, Publisher Fasterj.com). Tips:
- Monitor the operating system on server machines, and client machines where these are under your control (cpu load, memory usage, paging, file I/O, network I/O, local disk I/O, remote disk I/O, more)
- The client experience should be monitored (e.g. response times).
- Monitor operating system process statistics (per-process threads, cpu, memory, I/O)
- Monitor JVM and application server level statistics (GC stats, heaps, locks, deadlock alerting, important methods, pools, caches, object creation, database communications, clustering, failover)
- Monitor database-level statistics
- Monitor component interfaces (all component boundaries and coarse-grained framework interfaces)
- Monitor important end-to-end functionality
- Normal monitoring should have less than 5% overhead
- Service Level Agreements (SLAs) should be defined for technical statistics (e.g. no GC pause over 0.25 seconds, or GC sequential load less than 5%; a minimal monitoring sketch follows this list) and for business functionality (e.g. business transaction latency or throughput)
- Business Service Level Agreements (SLAs) should specify the costs and benefits of any breaches, and should be accepted by the end-users and managers
- Service Level Agreements should be ranked in order of importance
- SLAs should be monitored and alerted in real-time (where appropriate), daily, weekly and monthly
- SLA breaches and other alerts should not normally occur more often than a few times a day (noise diminishes the value of alerts and should be minimized)
- Trends should be monitored and projected forward to indicate when thresholds will be breached, and projected breaches should be alerted with enough lead time to alter the system and avoid them
- Ideally higher overhead profiling can be dynamically enabled and disabled in the system for problem resolution and tuning
- Monitoring should be recorded in sufficient detail that any problems identified are fully analysable, with the causes sufficiently identifiable from the recorded stats to either resolve the issue, reproduce it for further investigation, or identify what increased logging or profiling would help pin it down
- A good simulation of the production system should be used as a performance testbed, with harnesses that generate realistic loads and with realistic configurations used
- Performance regression tests should be performed for any version releases, which identify any improvements or degradations in performance from changes to existing features
- Performance regression tests should become fully automated
- Load tests should be run which identify where the current system will break and the headroom available in the current system before throughput or responses begin to degrade below SLAs (run at least once per year for any significant architecture components)
- New features should have new SLAs defined which are performance tested against those SLAs prior to release to production
- The same monitoring capabilities in production should ideally be used in performance testing and for performance tuning (using higher overhead profiling where necessary)
- New features should be analysed for capacity requirements, SLAs and performance requirements, from the requirements stage (i.e. prior to design)
- Performance requirements should be considered at design and implementation phases to ensure targets will be met within the existing or proposed architecture
- The performance team should have, or gain, code-level performance tuning and memory tuning experience, experience of eliminating memory leaks (unintentional object retention), and JVM tuning experience.
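A minimal sketch of the JVM-level GC monitoring and technical-SLA tips above. The 0.25-second figure is the example threshold from the SLA tip; the class name, poll interval and alert method are illustrative assumptions, not from the article. The sketch polls the standard GarbageCollectorMXBeans and flags when the average pause since the last poll breaches the threshold (a rough, low-overhead proxy for a per-pause SLA):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;

// Illustrative low-overhead SLA check: compares accumulated GC counts/times
// between polls and alerts when the average pause breaches the example SLA.
public class GcSlaMonitor {
    private static final long MAX_AVG_PAUSE_MILLIS = 250; // example SLA: no GC pause over 0.25 seconds
    private final Map<String, long[]> previous = new HashMap<String, long[]>(); // collector name -> {count, timeMillis}

    public void poll() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long[] prev = previous.get(gc.getName());
            if (prev == null) prev = new long[] {0L, 0L};
            long deltaCount = gc.getCollectionCount() - prev[0];
            long deltaTime = gc.getCollectionTime() - prev[1]; // milliseconds
            previous.put(gc.getName(), new long[] {gc.getCollectionCount(), gc.getCollectionTime()});
            if (deltaCount > 0 && (deltaTime / deltaCount) > MAX_AVG_PAUSE_MILLIS) {
                alert(gc.getName() + ": average GC pause " + (deltaTime / deltaCount) + "ms breaches the SLA");
            }
        }
    }

    private void alert(String message) {
        System.err.println("SLA ALERT: " + message); // hook into real alerting/paging in practice
    }

    public static void main(String[] args) throws InterruptedException {
        GcSlaMonitor monitor = new GcSlaMonitor();
        while (true) {
            monitor.poll();
            Thread.sleep(10000); // check every 10 seconds to keep monitoring overhead negligible
        }
    }
}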
http://w.on24.com/r.htm?e=26427&s=1&k=08D37CDD64A0C89DBC70C0CDA60D4146&partnerref=atssc_sitepost_02_12_07
Jim Kelly on Performance Monitoring (Page last updated February 2007, Added 2007-03-29, Author Kirk Pepperdine, Publisher TheServerSide). Tips:
- Profiling tools give you a lot of information, but it can be difficult to determine the causes of problems because there is so much of it
- The same divisions you see in an organisation chart, you often see in the software as well. For example, application developers tend not to be database experts, so it is very common for them to write database queries that don't behave well.
- A clear goal is a good starting point for any top-down analysis of a problem, and top-down analysis is really the best way to get to the bottom of an issue in the fastest time.
- The open source Glassbox Inspector looks for commonly occurring problems like slow running database queries or connectivity problems trying to connect to a web service or Java thread contention.
http://www.devx.com/Java/Article/33943
DTrace and Java (Page last updated March 2007, Added 2007-03-29, Author Jarod Jenson, Publisher DevX). Tips:
- DTrace allows for the dynamic instrumentation of an application
- DTrace probes are first-class citizens in the eyes of Java when run on a Solaris 10 system - the DTrace probes in Java have zero disabled probe-effect.
- DTrace probes can be enabled and disabled dynamically.
- From DTrace you can observe your Java code in action, the JVM itself, JNI code, and the kernel to identify and correct performance problems
- The probes with measurable performance impact are disabled by default. These are probes for object allocation, method invocation, and Java monitor events: -XX:+DTraceAllocProbes; -XX:+DTraceMethodProbes; -XX:+DTraceMonitorProbes; -XX:+ExtendedDTraceProbes (the last enables all probes).
- If you wanted to enable all of the DTrace probes in your application after it has started, you could do that with jinfo: jinfo -flag +ExtendedDTraceProbes <pid of JVM>
- A simple DTrace invocation allows you to easily see which objects are being allocated and what the total size of those allocated objects was over the life of the DTrace run (pid 116977 is used in the example): $ dtrace -n hotspot116977:::object-alloc'{@[copyinstr(arg1, arg2)] = sum(arg3)}'
- DTrace's jstack() action can be used to easily identify the code path that led to a probe firing (pid 116977 is used in the example; a small target program for these one-liners is sketched after this list): $ dtrace -n hotspot116977:::object-alloc'/copyinstr(arg1, arg2) == "[I"/{@[jstack(20,2048)] = count()}'
- With this basic DTrace script, you can see a power of two, bucketed distribution of monitor acquisition wait times on a per-thread basis:
hotspot$1:::monitor-contended-enter
{
    self->ts = timestamp;
}
hotspot$1:::monitor-contended-entered
/ self->ts /
{
    @[tid] = quantize(timestamp - self->ts);
    self->ts = 0;
}
- No matter how well you write your Java code, you could be at the mercy of a scalability issue from JNI libraries. With DTrace, you can track these issues down.
- Solaris's plockstat(1M) will report lock contention statistics.
- malloc can have high contention; an alternate allocator such as libumem can eliminate this.
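The object-alloc one-liners above filter on "[I", the JVM's internal name for int arrays. As a purely illustrative target program (the class name is an assumption, not from the article), something like this gives those probes plenty to count when run with the probes enabled on Solaris 10, e.g. java -XX:+ExtendedDTraceProbes AllocChurn:

// Illustrative target for the object-alloc one-liners: allocates int[] ("[I") in a loop.
public class AllocChurn {
    public static void main(String[] args) throws InterruptedException {
        long allocated = 0;
        while (true) {
            int[] buffer = new int[1024]; // each allocation can fire the object-alloc probe
            allocated += buffer.length;
            if (allocated % (1024 * 1024) == 0) {
                Thread.sleep(10); // pause occasionally so the loop doesn't saturate a core
            }
        }
    }
}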
http://www.netbeans.org/kb/articles/nb-profiler-uncoveringleaks_pt1.html
Uncovering Memory Leaks Using NetBeans Profiler (Page last updated February 2007, Added 2007-03-29, Author Jiri Sedlacek, Publisher netbeans.org). Tips:
- A memory leak is a particular kind of unintentional memory consumption by a computer program where the program fails to release memory when no longer needed (Wikipedia definition).
- A common cause of memory leaks is when small objects are constantly being added to a Collection over time but never removed from it (a minimal sketch of this pattern follows this list).
- For memory leaks of a few objects, memory leak detection can be done by comparing memory snapshots taken before and after the action, then using a heap walker tool to search for unnecessary references to the extra objects.
- The type of leak where many small objects continuously accumulate without being released can be difficult to detect, because typically it is not tied to any concrete action and doesn't use a noticeable amount of memory until the program has run for a long time.
- Surviving Generations is the number of different generations currently alive on the heap, where a generation is the group of objects that have survived the same number of garbage collections.
- Usually the total number of Surviving Generations is quite stable, with long-lived objects accounting for, say, 3 generations and short-lived objects for, say, another 5.
- After some period of time, the number of Surviving Generations should become stable (if there is no memory leak) because all long-lived objects have already been created and newly-created short-lived objects are periodically being released from the heap.
- In a memory leaking application the number of Surviving Generations grows, and this can be a good identifier for a memory leak.
- In the absence of a memory leak the Surviving Generations value starts to grow during an application's startup, but after some time it should reach some stable limit and level off.
- If the Surviving Generations line continues to grow all the time then the application is most likely leaking.
- [Article describes how to monitor the Surviving Generations in the netbeans profiler to identify the existence of a memory leak].
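A minimal sketch of the "objects added to a Collection but never removed" pattern described above, the kind of slow leak the Surviving Generations count catches. The class and method names are illustrative assumptions, not from the article:

import java.util.ArrayList;
import java.util.List;

// Illustrative leak: every "request" adds a small object to a long-lived collection
// and nothing ever removes it, so the Surviving Generations count keeps climbing.
public class LeakyAuditLog {
    private static final List<String> ENTRIES = new ArrayList<String>(); // lives for the whole run

    public static void handleRequest(int requestId) {
        // Only needed while the request is processed, but retained forever.
        ENTRIES.add("request " + requestId + " handled");
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; ; i++) {
            handleRequest(i);
            if (i % 1000 == 0) {
                Thread.sleep(5); // growth is slow, so memory use looks unremarkable for a long time
            }
        }
    }
}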
http://www.devx.com/Java/Article/33872
Java 6 Navigable Interfaces (Page last updated February 2007, Added 2007-03-29, Author Narendra Venkataraman, Publisher devX). Tips:
- Navigable interfaces provide methods to retrieve a view of either a map based on a range of keys or a subset of a set based on a range of entries. ConcurrentSkipListMap and TreeMap implement the NavigableMap interface, while ConcurrentSkipListSet and TreeSet implement the NavigableSet interface.
- The subMap and subSet methods return range views quickly, because the views are backed by the original collection rather than copied from it.
- Keys in a ConcurrentSkipListMap are kept in sorted order (alphabetical for String keys), and ConcurrentSkipListMap is on the order of two to three times faster than the Properties class.
- Navigable interfaces are useful wherever you have in-memory cached data and you want to perform range-based queries upon keys of a map or entries of a set.
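A short sketch of the range-view methods described above, using the Java 6 NavigableMap API via TreeMap (the keys and values are made up for illustration):

import java.util.NavigableMap;
import java.util.TreeMap;

// Range-based queries over an in-memory map using the Java 6 Navigable interfaces.
public class NavigableRangeQueries {
    public static void main(String[] args) {
        NavigableMap<String, Integer> prices = new TreeMap<String, Integer>();
        prices.put("apple", 3);
        prices.put("banana", 2);
        prices.put("cherry", 7);
        prices.put("damson", 4);

        // View of entries with keys from "b" (inclusive) to "d" (exclusive);
        // the view is backed by the original map, so nothing is copied.
        NavigableMap<String, Integer> bToC = prices.subMap("b", true, "d", false);
        System.out.println(bToC); // {banana=2, cherry=7}

        // Nearest-match lookups are also part of the Navigable interfaces.
        System.out.println(prices.ceilingKey("cat"));       // cherry
        System.out.println(prices.floorEntry("blueberry")); // banana=2
    }
}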
Jack Shirazi
Back to newsletter 076 contents