Java Performance Tuning
Tips October 2004
Back to newsletter 047 contents
The ABCs of Synchronization, Part 1 (Page last updated August 2004, Added 2004-10-31, Author Jeff Friesen, Publisher java.net). Tips:
- Java 1.5 (5.0) introduces the java.util.concurrent.locks.Lock interface for implementing locking operations that are more extensive than those offered by synchronized methods and synchronized statements.
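A minimal sketch of the explicit-lock idiom (the class and method names here are illustrative, not from the article). The lock/try/finally pattern replaces a synchronized block, and tryLock() shows one capability synchronized statements lack:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative counter guarded by an explicit Lock rather than synchronized.
public class LockCounter {
    private final Lock lock = new ReentrantLock();
    private int count;

    public int increment() {
        lock.lock();                 // block until the lock is acquired
        try {
            return ++count;          // critical section
        } finally {
            lock.unlock();           // always release, even on exception
        }
    }

    // tryLock() gives up immediately if the lock is unavailable -
    // something a synchronized statement cannot do.
    public boolean tryIncrement() {
        if (lock.tryLock()) {
            try {
                count++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }

    public int get() { return count; }
}
```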
- You may need to synchronize access when iterating a collection if another thread could update that collection at the same time.
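For example, a list wrapped with Collections.synchronizedList protects individual method calls but not a whole traversal; the caller must hold the list's own lock for the duration of the iteration (a sketch, not the article's code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// The iterator of a synchronized wrapper is NOT itself protected, so
// the whole traversal must be done while holding the list's lock.
public class SafeIteration {
    public static int sum(List<Integer> shared) {
        synchronized (shared) {      // lock out concurrent updates
            int total = 0;
            for (int n : shared) {
                total += n;
            }
            return total;
        }
    }

    public static List<Integer> sharedList() {
        return Collections.synchronizedList(new ArrayList<>());
    }
}
```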
- Deadlock is the situation where multiple threads each hold a lock while waiting for a lock held by another of the threads, so no thread can acquire the lock it needs and none can proceed.
The ABCs of Synchronization, Part 2 (Page last updated September 2004, Added 2004-10-31, Author Jeff Friesen, Publisher java.net). Tips:
- J2SE 5.0's java.util.concurrent.locks package includes Condition, an interface that serves as a replacement for Object's wait and notify methods. You can use Condition implementations to create multiple condition variables that associate with one Lock object, so that a thread can wait for multiple conditions to occur.
- [Article includes a simple producer/consumer example].
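A minimal single-slot buffer in the same producer/consumer spirit (this is a sketch, not the article's actual example). Two Condition objects on one Lock let producers and consumers wait on separate conditions; awaitUninterruptibly() is used here only to keep the sketch free of checked exceptions — real code would normally use await() and handle InterruptedException:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Single-slot producer/consumer buffer: two condition variables
// (notFull / notEmpty) associated with one Lock.
public class OneSlotBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private T item;                      // null means the slot is empty

    public void put(T t) {
        lock.lock();
        try {
            while (item != null) {
                notFull.awaitUninterruptibly();  // wait for a free slot
            }
            item = t;
            notEmpty.signal();           // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() {
        lock.lock();
        try {
            while (item == null) {
                notEmpty.awaitUninterruptibly(); // wait for an item
            }
            T t = item;
            item = null;
            notFull.signal();            // wake one waiting producer
            return t;
        } finally {
            lock.unlock();
        }
    }
}
```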
- Java's memory model permits threads to store the values of variables in local memory for performance reasons. This means that different threads can see different values for a variable where access/update to the variable is not synchronized.
- Excessive synchronization can cause a program's performance to suffer.
- Each variable marked volatile is read from main memory, and written to main memory - local (thread) memory is avoided.
- J2SE 5.0 introduces java.util.concurrent.atomic, a package of classes that extends the notion of volatile variables to include an atomic conditional update operation, which permits lock-free, thread-safe programming on single variables.
- A countdown latch is a synchronizer that allows one or more threads to wait until some collection of operations being performed in other threads finishes. This synchronizer is implemented by the CountDownLatch class.
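A small CountDownLatch sketch (worker logic here is invented for illustration): the main thread blocks in await() until every worker has called countDown(), and the latch's happens-before guarantee makes the workers' results safely visible afterwards:

```java
import java.util.concurrent.CountDownLatch;

// Main thread waits until all worker threads have finished.
public class LatchDemo {
    public static int runWorkers(int n) {
        CountDownLatch done = new CountDownLatch(n);
        int[] results = new int[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                results[id] = id * id;   // the worker's "operation"
                done.countDown();        // signal completion
            }).start();
        }
        try {
            done.await();                // block until the count hits zero
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        int sum = 0;                     // safe: countDown happens-before
        for (int r : results) sum += r;  // the return from await
        return sum;
    }
}
```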
- A cyclic barrier is a synchronizer that allows a collection of threads to wait for each other to reach a common barrier point. This synchronizer is implemented by the CyclicBarrier class and also makes use of the BrokenBarrierException class.
- An exchanger is a synchronizer that allows a pair of threads to exchange objects at a synchronization point. This synchronizer is implemented by the Exchanger<V> class, where V is a placeholder for the type of objects that may be exchanged.
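A minimal Exchanger sketch (the thread body and strings are illustrative): both threads call exchange() and each receives the object the other handed in:

```java
import java.util.concurrent.Exchanger;

public class ExchangeDemo {
    // Each side hands its object to exchange() and gets the other's back.
    public static String swapWithWorker(String mine) {
        Exchanger<String> exchanger = new Exchanger<>();
        Thread worker = new Thread(() -> {
            try {
                exchanger.exchange("from-worker");
            } catch (InterruptedException ignored) { }
        });
        worker.start();
        try {
            String received = exchanger.exchange(mine); // rendezvous point
            worker.join();
            return received;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```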
- A semaphore is a synchronizer that uses a counter to limit the number of threads that can access limited resources.
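A sketch of the Semaphore counter idea (the PoolGate wrapper is invented for illustration): the permit count bounds how many threads may be inside the guarded resource at once:

```java
import java.util.concurrent.Semaphore;

// Limits concurrent access to a fixed number of "slots".
public class PoolGate {
    private final Semaphore permits;

    public PoolGate(int maxConcurrent) {
        // 'true' requests fair (roughly first-come-first-served) ordering
        this.permits = new Semaphore(maxConcurrent, true);
    }

    // Returns true if a slot was obtained without blocking.
    public boolean tryEnter() { return permits.tryAcquire(); }

    public void leave() { permits.release(); }

    public int available() { return permits.availablePermits(); }
}
```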
Java Threads, 3rd Edition, Chapter 5 (Page last updated September 2004, Added 2004-10-31, Author Scott Oaks, Henry Wong, Publisher O'Reilly). Tips:
- Programs can perform poorly because of excessive or incorrect synchronization.
- Acquiring a contended lock (one that is held by another thread) is more expensive than acquiring an uncontended lock and, more significantly, forces the thread to wait for the lock to be released, which can seriously hurt performance.
- You can use the volatile keyword for an instance variable (other than a double or long) to avoid synchronizing.
- Unsynchronized access to variables that are not volatile is not guaranteed to retrieve the latest value for the variable, since another thread could hold a more recent value.
- JVM, CPU and compiler optimizations applied to statements could re-order the statements if the order does not matter within a method block. But the order may matter for multi-threaded access, causing corrupt data. In these situations synchronization is necessary [chapter includes a nice example of that].
- Synchronization is required to prevent race conditions that can cause data to be found in either an inconsistent or intermediate state.
- Not all race conditions need to be avoided - only the race conditions within thread-unsafe sections of code are considered a problem.
- Shrink the synchronization scope to be as small as possible and reorganize code so that threadsafe sections can be moved outside of the synchronized block.
- The ++ operator is not atomic. AtomicInteger has a method that allows the integer it holds to be incremented atomically without using synchronization.
- Atomic classes (AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference) enable you to build complex code that requires no synchronization at all.
- The Atomic classes getAndSet( ) method atomically sets the variable to a new value while returning the previous value, without acquiring any synchronization locks. Other methods allow atomic conditional updates (*compareAndSet); atomic increments and decrements (incrementAndGet, decrementAndGet, getAndIncrement, and getAndDecrement); and atomic pre/post addition (addAndGet and getAndAdd).
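A short demo of these methods on AtomicInteger (the values are arbitrary); each call is a single atomic operation with no lock acquired:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicOps {
    public static int[] demo() {
        AtomicInteger n = new AtomicInteger(5);
        int old = n.getAndSet(10);                 // returns 5, value now 10
        boolean swapped = n.compareAndSet(10, 11); // true: 10 -> 11
        boolean missed = n.compareAndSet(10, 99);  // false: value is now 11
        int after = n.addAndGet(4);                // 11 + 4 = 15
        return new int[] { old, swapped ? 1 : 0, missed ? 1 : 0, after };
    }
}
```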
- AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray allow you to atomically operate on individual array elements.
- AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, and AtomicReferenceFieldUpdater allow you to perform atomic operations on volatile fields of existing classes.
- AtomicMarkableReference and AtomicStampedReference allow a mark or stamp to be attached to any object reference.
- The purpose of synchronization is not to prevent all race conditions; it is to prevent problematic race conditions.
- The 5.0 java.util.concurrent atomic classes are not a direct replacement for the synchronization tools: using them may require a complex redesign of the program, even in some simple cases.
- The purpose of atomic variables is to reduce the synchronization required, in order to improve performance.
- The purpose of atomic variables is not to eliminate the race condition itself; it is to make the racing code threadsafe, so that the race condition no longer has to be prevented.
- It is necessary to balance the usage of synchronization and atomic variables. Synchronization blocks other threads to allow atomic operations; atomic variables allow threads to execute in parallel while still performing threadsafe operations.
- Implementing condition variable functionality using atomic variables is possible but not necessarily efficient. You may be swapping a potentially blocking operation using synchronization with multiple spinning operations using atomic variables.
- The tradeoff between using atomic variables vs. synchronization is that of pessimistic locking vs. optimistic locking, i.e. locking and waiting for locks to ensure an operation succeeds (synchronization), or not locking and retrying on failure to ensure an operation succeeds (atomic variables).
- You can implement atomic support for any new type by simply encapsulating the data type into a read-only data object - the data object can then be changed atomically by atomically changing the reference to a new data object.
- Atomically setting a group of variables can be done by creating an object that encapsulates the values that can be changed; the values can then be changed simultaneously by atomically changing the atomic reference to the values.
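A sketch of both of the previous two tips (the Range/Bounds names are invented): the two bounds always change together because updates swap a single AtomicReference to an immutable holder, retried in a classic optimistic compareAndSet loop:

```java
import java.util.concurrent.atomic.AtomicReference;

// A pair of values that must always change together.
public class Range {
    // Immutable snapshot: both bounds change as one unit.
    static final class Bounds {
        final int lo, hi;
        Bounds(int lo, int hi) { this.lo = lo; this.hi = hi; }
    }

    private final AtomicReference<Bounds> bounds =
            new AtomicReference<>(new Bounds(0, 0));

    // Optimistic update: build a new snapshot, then CAS it in,
    // retrying if another thread swapped the reference first.
    public void set(int lo, int hi) {
        Bounds next = new Bounds(lo, hi);
        Bounds prev;
        do {
            prev = bounds.get();
        } while (!bounds.compareAndSet(prev, next));
    }

    public int lo() { return bounds.get().lo; }
    public int hi() { return bounds.get().hi; }
}
```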
- Performing bulk data modification and other advanced atomic data manipulations may use a large number of objects, which can have a significant overhead. So using atomic variables has to be balanced with using synchronization, including the costs of different algorithms and extra objects.
- Thread local variables' most common use is to allow multiple threads to cache their own data rather than contend for synchronization locks around shared data.
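A small sketch of that caching use (the class is invented; note it uses the later Java 8 ThreadLocal.withInitial convenience — in 5.0 you would subclass ThreadLocal and override initialValue). Each thread reuses its own buffer instead of contending for a lock around a shared one:

```java
public class PerThreadBuffer {
    private static final ThreadLocal<StringBuilder> BUF =
            ThreadLocal.withInitial(StringBuilder::new);

    // Each thread gets and reuses its own private StringBuilder.
    public static String render(String name, int value) {
        StringBuilder sb = BUF.get();
        sb.setLength(0);             // reset this thread's private buffer
        return sb.append(name).append('=').append(value).toString();
    }
}
```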
Java Threads, 3rd Edition, Chapter 6 extract (Page last updated September 2004, Added 2004-10-31, Author Scott Oaks, Henry Wong, Publisher O'Reilly). Tips:
- The java.util.concurrent Semaphore class is essentially a lock with an attached counter. If a semaphore is constructed with its fair flag set to true, the semaphore tries to allocate the permits in the order that the requests are made, as close to first-come-first-served as possible. The downside to this option is speed: it takes more time for the virtual machine to order the acquisition of the permits than to allow an arbitrary thread to acquire a permit.
- The java.util.concurrent barrier (CyclicBarrier class) is simply a waiting point where all the threads can sync up either to merge results or to safely move on to the next part of the task. In every exception condition, the barrier simply breaks, thus requiring that the individual threads resolve the matter.
- The java.util.concurrent CountDownLatch is like a barrier, but releases threads after a count down.
- The java.util.concurrent Exchanger class allows data to be passed between threads.
- The only time you need to lock data is when data is being changed, that is, when it is being written. So it is more efficient to allow multiple concurrent read locks preventing a write, whereas a write lock should prevent any read locks being obtained. This behavior is supported by the java.util.concurrent ReadWriteLock and ReentrantReadWriteLock classes. A further efficiency is that threads that own the write lock can also acquire the read lock, accomplished by acquiring the read lock before releasing the write lock.
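A sketch of that last point, lock downgrading (the class is invented for illustration): the read lock is acquired while the write lock is still held, so no other writer can slip in between the write and the subsequent read:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    // Write, then downgrade: acquire the read lock BEFORE releasing
    // the write lock, so no other writer can intervene.
    public int updateAndRead(int newValue) {
        rw.writeLock().lock();
        try {
            value = newValue;
            rw.readLock().lock();    // downgrade starts here
        } finally {
            rw.writeLock().unlock(); // still holding the read lock
        }
        try {
            return value;            // read under the read lock only
        } finally {
            rw.readLock().unlock();
        }
    }
}
```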
Improve J2EE Application Performance with Caching (Page last updated September 2004, Added 2004-10-31, Author Scott Robinson, Publisher developer.com). Tips:
- Caching is a tried-and-true technique for improving the efficiency of an application, but you should be thoughtful about where and when you use it.
- Store frequently-referenced data at an easily accessible location in the application - databases can be enabled to do this very easily.
- A J2EE server can passively do entity bean caching, if you're using entity beans to do data access.
- Try JSP cache tags, which can cache page fragments.
- Use a Servlet 2.4 caching filter.
- Turn repetitive data into Java objects (using JCache).
- A good general rule of thumb is to implement caching when it eliminates the need for a remote call.
- A good J2EE-specific rule of thumb is to implement caching when it eliminates a need for one tier to make a call to an underlying tier.
- Consider if caching at a particular point reduces network activity significantly.
- Factor in the volume of data that needs to be cached - it is not normally efficient to cache an entire database in the application server.
- Static data is easy to cache, dynamic is not.
- Caching adds complexity in the way of extra failure points, multithreading and cache coherency issues.
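The core caching idea behind these tips can be sketched generically (the class is invented, and it uses the later Java 8 computeIfAbsent API for brevity — the article's J2EE era predates it): the expensive call runs once per key, and repeat requests are served locally. The miss counter is for illustration only and is not itself thread-safe:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Generic memoizing cache: compute once per key, serve repeats locally.
public class SimpleCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // stands in for the remote/tier call
    private int misses;                    // illustrative only, not thread-safe

    public SimpleCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            misses++;                      // runs only on a cache miss
            return loader.apply(k);
        });
    }

    public int misses() { return misses; }
}
```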
J2EE Design Patterns (Page last updated November 2003, Added 2004-10-31, Author Alan Baumgarten, Publisher BEA). Tips:
- Using the Session Facade pattern to wrap EJBs with a Session bean improves performance as only the calls between the client and the session bean go across the network, while calls from the session bean to the entity beans are local to the EJB container.
- Performance can be enhanced through the use of local interfaces. Local interfaces provide support for "lightweight" access from within an EJB container, avoiding the overhead associated with a remote interface.
- The Session Facade pattern allows transaction optimization, for example by enclosing the entity beans calls in the session bean method, the database operations are automatically grouped together as a transactional unit (to ensure that they participate in the same transaction, the entity bean methods should be assigned the "Required" transaction attribute). With container-managed transactions, the container would begin a transaction when each EJB method starts to execute and end the transaction when the method returns.
- Distributed applications often waste time looking for network resources.
- The Service Locator pattern caches the remote resource reference, reducing the number of remote lookups made.
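A hypothetical sketch of the Service Locator's caching (the class and the lookup function are stand-ins — in a real J2EE application the loader would be a JNDI lookup): the expensive lookup runs once per name, and later callers get the cached reference:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical Service Locator: the remote lookup (in J2EE, a JNDI
// call) runs once per name; subsequent callers get the cached reference.
public class ServiceLocator {
    private final Map<String, Object> cache = new HashMap<>();
    private final Function<String, Object> remoteLookup; // stand-in for JNDI
    private int remoteLookups;

    public ServiceLocator(Function<String, Object> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    public synchronized Object lookup(String name) {
        Object ref = cache.get(name);
        if (ref == null) {
            remoteLookups++;                 // only on the first request
            ref = remoteLookup.apply(name);
            cache.put(name, ref);
        }
        return ref;
    }

    public synchronized int remoteLookups() { return remoteLookups; }
}
```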
Last Updated: 2019-12-31
Copyright © 2000-2019 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss