Java Performance Tuning
Tips June 2009
Back to newsletter 103 contents
Performance tuning is about applying localized optimizations (Page last updated April 2009, Added 2009-06-30, Author Kirk Pepperdine, Publisher pepperdine). Tips:
- If you are not making life better for your users, you need to roll back the "optimization".
- Throughput is inversely proportional to service time. Placing more load on a system increases service time because of queuing effects, which in turn increases latency.
- If you decrease the latency in one subsystem you will increase the pressure on downstream subsystems (queues). If a downstream component's performance is close to its tipping point, the overall performance degradation could swamp the positive benefits gained from tuning the upstream part of the system.
- The concurrent (CMS) collector isn't a good choice if your system relies on another system (external or otherwise) that is on the tipping point of its performance curve, because the CMS collector allows application threads to reach any downstream bottleneck faster than they normally would. Once the downstream bottleneck is pushed past its tipping point, it will add more to the response time budget than any gains made by the collector's performance improvement.
An Introduction to Concurrent Java Programming (Page last updated May 2009, Added 2009-06-30, Author Stephen B. Morris, Publisher informIt). Tips:
- Excessive use of synchronization may lead to code that doesn't scale well.
- Semaphore objects allow you to specify exactly how many threads can concurrently run a piece of code, e.g. to allow at most 2 threads to concurrently run methodX(), you could wrap all calls to methodX() with Semaphore.acquire() and Semaphore.release() calls, where the Semaphore is created with 2 permits, e.g. new Semaphore(2, true).
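As a sketch of that tip (the class and counter names here are illustrative, not from the article), a fair Semaphore created with 2 permits caps the number of threads concurrently inside methodX() at two:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedSection {
    // Fair semaphore with 2 permits: at most 2 threads inside methodX() at once.
    private static final Semaphore PERMITS = new Semaphore(2, true);
    private static final AtomicInteger inside = new AtomicInteger();
    private static final AtomicInteger maxSeen = new AtomicInteger();

    public static void methodX() throws InterruptedException {
        PERMITS.acquire();                        // blocks while 2 threads are already inside
        try {
            int now = inside.incrementAndGet();
            maxSeen.accumulateAndGet(now, Math::max);
            Thread.sleep(10);                     // simulated work
        } finally {
            inside.decrementAndGet();
            PERMITS.release();                    // always release, even on exception
        }
    }

    // Runs methodX() from the given number of threads; returns the highest
    // concurrency actually observed (should never exceed the permit count).
    public static int run(int threads) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                try {
                    methodX();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return maxSeen.get();
    }
}
```

Note the acquire/release pair wraps the work in try/finally, so a thrown exception cannot leak a permit.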
- ReentrantLock.tryLock() will acquire the lock if no other thread holds the lock within the specified waiting time.
- Concurrent locks such as ReentrantLock allow for a timed retry mechanism and the ability to check the number of outstanding attempts to lock the resource.
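A minimal sketch of those two capabilities (the class and method names are illustrative): tryLock() with a timeout either acquires the lock within the wait time or returns false so the caller can retry or back off, and getQueueLength() gives an estimate of how many threads are waiting:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    // Returns true if the lock was acquired within the wait time, false on timeout.
    public static boolean tryWork(long waitMillis) throws InterruptedException {
        if (LOCK.tryLock(waitMillis, TimeUnit.MILLISECONDS)) {
            try {
                return true;              // protected work would go here
            } finally {
                LOCK.unlock();
            }
        }
        return false;                     // timed out; caller can retry or back off
    }

    // Estimate of the number of threads currently waiting for the lock.
    public static int queued() {
        return LOCK.getQueueLength();
    }
}
```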
- The difference between concurrent and synchronized access is that a concurrent object is thread-safe, but access to it is not controlled by a single exclusive lock, such as a semaphore; in contrast, access to a synchronized object is governed by a single exclusive lock.
- CopyOnWriteArrayList allows any number of concurrent readers as well as a writer, without pausing any of these reader/writer threads.
- CopyOnWriteArrayList is ideal for situations in which you have many readers of a list but very infrequent updates. The reads (list iterations) are guaranteed to produce the state prior to any additions, in a thread-safe manner.
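A small sketch of that snapshot behaviour (class and variable names are illustrative): an iterator obtained before a write continues to see the pre-addition state, without any locking or ConcurrentModificationException:

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotIteration {
    // Returns the number of elements seen by an iterator that was created
    // before a concurrent addition: the iterator sees the old snapshot.
    public static int countSnapshot() {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");
        Iterator<String> it = list.iterator();  // snapshot of the list at this point
        list.add("c");                          // the write does not disturb the iterator
        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        return seen;                            // pre-addition state: 2 elements
    }
}
```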
- A synchronized mechanism is coarse-grained and excludes other threads from gaining access to the protected resource; a concurrent mechanism is fine-grained and allows other threads to gain access to the protected resource in a controlled and possibly limited manner.
- ConcurrentLinkedQueue is an unbounded thread-safe queue that allows you to add and remove elements in a thread-safe manner.
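As an illustrative sketch (the class name is invented), several producer threads can offer elements to a ConcurrentLinkedQueue while a consumer drains it, all without an explicit lock:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDrain {
    // Multiple producers offer items concurrently; after they finish,
    // a consumer polls the queue empty. No explicit lock is needed.
    public static int produceAndDrain(int producers, int perProducer) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<>();
        Thread[] ts = new Thread[producers];
        for (int i = 0; i < producers; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perProducer; j++) {
                    q.offer(j);              // lock-free enqueue
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        int drained = 0;
        while (q.poll() != null) {           // lock-free dequeue
            drained++;
        }
        return drained;
    }
}
```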
Challenges of Monitoring, Tracing and Profiling your Applications running in "The Cloud" (Page last updated May 2009, Added 2009-06-30, Author Andreas Grabner, Publisher dynaTrace). Tips:
- Cloud computing platforms provide the ability to dynamically add additional resources as needed.
- Cloud data services use a data storage interface, such as an in-memory data grid, to query objects from the data store and to add or manipulate data. Accessing the data via this interface enables the application to scale with the required bandwidth, number of concurrent users or number of concurrent HTTP requests. As load on the application increases, the cloud computing platform can deploy additional virtual machines to handle the additional transactions.
- In both traditional and cloud-run applications you want to make sure to limit the number of roundtrips over remoting boundaries or to the database and make sure that your SQL statements are well written and only return the data that you need.
Performance Analysis in 30 Minutes (Page last updated May 2006, Added 2009-06-30, Author Madhu Konda, Publisher Sun). Tips:
- The first thing to look at is CPU utilization. If you are not able to max out the CPUs in spite of loading up users, you could have locking and synchronization issues.
- Look for runnable processes in the run queue. If you have processes in the run queue and idle time on the CPUs, then you have a scalability issue.
- It is OK for the run queue to be equal to or a little larger than the number of CPUs, as long as the system is running at full CPU utilization.
- CPUs should spend more time in user time than kernel time - a good rule of thumb is about 4:1.
- Lock contention is one of the major reasons for performance problems.
- The mpstat "smtx" value indicates the number of times a CPU failed to obtain a mutex immediately. If the smtx value exceeds 500 for any CPU, and system time is greater than 20%, then it is possible that mutex contention is happening on the system.
- You could have cpu cycles chewed up by other sub-systems such as I/O, network or memory swapping - monitor at regular intervals (e.g. iostat -xnz 5;netstat -an; vmstat -S; on Solaris).
- If you find a single Java thread is maxed out on a CPU, determine the thread ID, then dump the stack traces from the Java process and find the activity on the stack that corresponds to that native thread ID.
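Within a single JVM, a roughly analogous check can be done with ThreadMXBean (an in-process sketch, not the OS-level procedure the tip describes; the class name is invented). It assumes thread CPU time measurement is supported and enabled, as it is by default on HotSpot:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BusyThreadFinder {
    // Returns the name of the live thread that has consumed the most CPU time,
    // or null if CPU time measurement is unavailable on this JVM.
    public static String busiestThreadName() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) return null;
        long bestId = -1;
        long bestCpu = -1;
        for (long id : mx.getAllThreadIds()) {
            long cpu = mx.getThreadCpuTime(id);  // -1 if the thread has died meanwhile
            if (cpu > bestCpu) {
                bestCpu = cpu;
                bestId = id;
            }
        }
        if (bestId < 0) return null;
        ThreadInfo info = mx.getThreadInfo(bestId);
        return info == null ? null : info.getThreadName();
    }
}
```

Once the hot thread is named, its stack in a full thread dump shows what it is doing.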
Java Performance Tuning resources (Page last updated June 2009, Added 2009-06-30, Author Madhu Konda, Publisher Sun). Tips:
- In JDK 6.0_14 (update 14), turn on compressed OOPs with -XX:+UseCompressedOops when using the -d64 switch.
High-Performance Oracle JDBC Programming (Page last updated April 2009, Added 2009-06-30, Author Yuli Vasiliev, Publisher Oracle). Tips:
- Connection pooling and statement pooling can significantly improve performance of database-intensive applications.
- Reusing database connection objects representing physical database connections utilized by an application can result in significant performance gains, provided that the application interacts with the database intensively.
- You won't benefit from using a connection pool if your application connects to its underlying database only rarely.
- Caching statements that are issued only once during program execution may actually degrade performance, due to the overhead associated with putting and then keeping such statements in the cache.
- Oracle Universal Connection Pool provides the ability to validate connections on borrow. Validating connections on borrow is a useful technique, because it enables you to check whether a connection is still valid before you start using it.
- Useful connection pool properties to tune include: initial, maximum, and minimum pool size; maximum connection reuse time & maximum connection reuse count (remove the connection from the pool after a time or number of uses); remove connection if not used after an amount of time; wait timeout for getting connections (rather than block or immediate fail if all connections in use); timeout used connection or idle connection (for connections that have been obtained from the pool).
- To make a Statement object poolable or not poolable, you can use its setPoolable() method.
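To illustrate the validate-on-borrow idea without a real database, here is a deliberately minimal generic pool sketch (TinyPool and its members are invented for illustration; a real application would use Oracle Universal Connection Pool or another JDBC pool rather than hand-rolling this):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Minimal sketch of a pool that validates pooled objects on borrow.
public class TinyPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private final Predicate<T> stillValid;

    public TinyPool(Supplier<T> factory, Predicate<T> stillValid) {
        this.factory = factory;
        this.stillValid = stillValid;
    }

    public synchronized T borrow() {
        T c;
        while ((c = idle.poll()) != null) {
            if (stillValid.test(c)) return c;  // validate on borrow
            // an invalid object is discarded, never handed to the caller
        }
        return factory.get();                  // pool empty: create a fresh one
    }

    public synchronized void release(T c) {
        idle.push(c);                          // return to the pool for reuse
    }
}
```

A real pool would also enforce the sizing and timeout properties listed above (maximum size, reuse count, wait timeout), which this sketch omits for brevity.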
Anatomy of a Java Finalizer (Page last updated June 2009, Added 2009-06-30, Author Jinwoo Hwang, Publisher JDJ). Tips:
- If an instance of a class implements a non-trivial finalize() method, its space may not be recycled by the garbage collector in a timely fashion.
- Instances that implement non-trivial finalize() methods will not be immediately reclaimed by the Java garbage collector when they are no longer referenced. Instead, the Java garbage collector appends the objects to a special queue for the finalization process, during which a "Finalizer" thread will execute each finalize() method of the objects.
- Only after successful completion of the finalize() method will an object be handed over for Java garbage collection to get its space reclaimed by a subsequent garbage collection.
- If a "Finalizer" thread cannot keep up with the rate at which higher priority threads cause finalizable objects to be queued, the finalizer queue will keep growing and cause the Java heap to fill up. Eventually the Java heap will get exhausted and a java.lang.OutOfMemoryError will be thrown.
- If you want to run cleanup tasks on objects, consider finalizers as a last resort; better to implement your own cleanup method which will be more predictable.
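A sketch of that "own cleanup method" advice (ManagedResource is an illustrative name): implementing AutoCloseable lets try-with-resources release the resource at a predictable point, unlike finalize():

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Deterministic cleanup via AutoCloseable instead of a finalizer.
public class ManagedResource implements AutoCloseable {
    private final AtomicBoolean open = new AtomicBoolean(true);

    public boolean isOpen() {
        return open.get();
    }

    @Override
    public void close() {             // runs at a predictable point, unlike finalize()
        open.set(false);              // release handles, buffers, sockets, etc. here
    }

    // Demonstrates that close() has run by the time the try block exits.
    public static boolean useAndClose() {
        ManagedResource saved;
        try (ManagedResource r = new ManagedResource()) {
            saved = r;
            // use the resource...
        }                             // close() invoked here, even on exception
        return !saved.isOpen();
    }
}
```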
- [article runs through some finalizer examples, and using pmat and HeapAnalyzer on garbage collection output and the IBM Thread and Monitor Dump Analyzer on the thread dumps].
Last Updated: 2019-12-31
Copyright © 2000-2019 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss