Java Performance Tuning
Tips August 2012
Back to newsletter 141 contents
The importance of embracing the performance lifecycle (Page last updated August 2012, Added 2012-08-29, Author Keri-Anne Lounsbury, Kevin Yu, Publisher IBM). Tips:
- Possible outcomes of not adequately considering performance in your development cycle include: Website failures during peak times; Website slow downs; Patchwork solutions by adding more hardware and software. All can lead to lost revenue, customer loss from the bad customer experience, and costly rework needed.
- Application performance is typically dependent on responsiveness. Acceptable performance needs to be defined for your application: it may be a few seconds for a retail site to place orders, a few milliseconds for a trading application, or even tens of seconds for a banking transaction.
- Performance is not merely about user experience, it contributes to a site's reputation, boosts customer loyalty, and can have significant impact on site revenue. Performance is about optimization and speed and contributing to the bottom line of a business.
- Considering performance only at the end of development means issues cannot be addressed in time, forcing patchwork fixes just to get by.
- The performance lifecycle is about considering performance in all aspects of the project, not as only a one-time event, so avoiding a "band-aid" approach that would lead to an increasingly complex application that becomes more difficult to manage and optimize.
- In some circumstances the root cause of bad performance could be due to system architecture, which would be next to impossible to address late in the project cycle.
- Activities for the design phase of the performance lifecycle include: Reviewing and validating non-functional requirements; Determining application flow; Educating developers on performance; Reviewing component design focused on performance; Preparing system maintenance strategies; Identifying key performance indicators for monitoring.
- Activities for the coding and unit testing phase of the performance lifecycle include: Developer-performed application profiling; The initial caching strategy; System maintenance strategies; Building a performance verification environment (including data); The performance testing plan; Planning priorities and risk mitigation; Instrumenting code to track key performance indicators.
- Activities for the performance testing phase of the performance lifecycle include: Creating scripts and test data; Validating the environment as production-like; Testing at all scales; Endurance testing; Refining the key performance indicators monitoring strategy.
- Activities for the launch and post-launch phase of the performance lifecycle include: Migrating the tuned settings to production; Monitoring and troubleshooting the production system; Analyzing production access logs; Performance testing any post-launch fixes; Planning to maintain adequate performance for future changes.
- Responsibilities across the performance lifecycle include: Performance architect with ownership of performance-related non-functional requirements; Project manager responsible for overall coordination of performance engagement activities; performance specialists for testing and scripting, for middleware, for databases, and for caching.
- Performance priorities at the requirements stage: Establish key performance and business objectives; Define user expectations and transactional objectives for the next 12-24 months; Identify capacity requirements and identify tradeoffs between functional requirements and achievable performance.
- Performance priorities at the design stage: Architect a solution that can achieve the performance requirements; Design and build a performance test environment; Identify and buy/build tools to support your performance strategy; Identify any architecture limitations that could challenge any non-functional requirements.
- Performance priorities at the coding and unit testing stage: Measure the response time of business critical steps; Profile application execution to identify patterns violating best practices.
- Performance priorities at the performance testing stage: Build test cases from simple, common functions to more complex scenarios and environment configurations; Measure the performance of the application as load scales up to projected peak; Identify and resolve the root causes of resource constraints and application bottlenecks.
- Performance priorities at the launch and post launch stage: Instrument and monitor key performance indicators in the production environment; Use the data captured from production to optimize planning for future changes.
Chapter 4 of "The Well-Grounded Java Developer": Modern concurrency (Page last updated August 2012, Added 2012-08-29, Author Benjamin J. Evans, Martijn Verburg, Publisher Manning). Tips:
- Java's threading model allows: objects to be shared between threads; object fields to be changed by any thread with mutator access to the object; the thread scheduler to pause any thread at (almost) any time. This means the thread could be half-way through changing an object's fields when it gets paused and another thread could make changes to that object leaving the object in an inconsistent state. To avoid this, Java allows locks which can be used to ensure one thread can complete changes before any other thread can start changes.
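The lost-update problem and its lock-based fix can be sketched as follows (a minimal illustration; the class name is invented for this example):

```java
// Without synchronization, two threads incrementing a shared counter can
// interleave the read-modify-write steps and lose updates; the synchronized
// keyword ensures only one thread at a time runs the critical section.
public class SafeCounter {
    private long count = 0;

    public synchronized void increment() {
        count++;  // read, add, write happen as one locked unit
    }

    public synchronized long get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 200000 with synchronization
    }
}
```

Without the synchronized keyword on increment(), the final count would frequently be below 200000.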
- To ensure data consistency you need to ensure that any non-private method always accesses the object in a well-defined and consistent state.
- One strategy for thread safety is to never return from a non-private method in an inconsistent state (e.g. by locking out entry to non-private methods while the object is in an inconsistent state, and by not changing state while non-private methods are executing).
- "Hide" data by restricting external communication of each subsystem as much as possible.
- Make the internal structure of each subsystem as deterministic as possible.
- Sources of concurrency overhead include: lock management; context switching; thread management; scheduling; non-local memory access; algorithms that prevent parallel processing.
- Only one thread at a time can proceed through a synchronized block, other threads trying to enter the block are suspended until the current thread releases the lock to the block.
- A static synchronized method locks the Class object, not the instance.
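A static synchronized method is equivalent to synchronizing on the Class object explicitly, as this sketch shows (class and method names are illustrative):

```java
public class Registry {
    private static int instances = 0;

    // Locks Registry.class, not any particular instance
    public static synchronized void register() {
        instances++;
    }

    // Equivalent explicit form: both methods contend for the same lock
    public static void registerExplicit() {
        synchronized (Registry.class) {
            instances++;
        }
    }

    public static synchronized int count() {
        return instances;
    }
}
```

Because both forms lock the same Class object, a thread inside register() excludes threads from registerExplicit() too, but not from synchronized instance methods of the same class.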
- A fully synchronized class: initializes all fields in every constructor; has no public fields; has consistent state after returning from any nonprivate method; has all methods synchronized; calls other object methods only when this object is in a consistent state; only allows calling of its non-private methods when in a consistent state.
- A fully synchronized class tends to be bad for performance if it is used concurrently in any time-critical code.
- Always acquire locks in the same order to avoid deadlocks.
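One common way to impose a consistent acquisition order is to rank the locks by some global criterion before taking them. This sketch orders by identity hash code (names are invented; a real system would need a tie-breaker lock for equal hashes):

```java
public class Transfer {
    // Acquire the two locks in a globally consistent order, so two
    // concurrent transfers between the same pair of objects cannot
    // each hold one lock while waiting for the other (deadlock).
    public static void transfer(Object from, Object to, Runnable action) {
        Object first = System.identityHashCode(from) <= System.identityHashCode(to) ? from : to;
        Object second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                action.run();
            }
        }
    }
}
```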
- The synchronized keyword doesn't just acquire a lock, it also synchronizes memory: when the block is entered, changes are synchronized from main memory; when the block is exited, changes are synchronized to main memory.
- A volatile field ensures that the value seen by any thread for that field is always the value in main memory; and that any value written to the field is always flushed to main memory before the write completes.
- volatile variables do not introduce any locks (so cannot be deadlocked).
- A volatile variable should only be used where writes to the variable don't depend on its current state. E.g. x++ where x is a volatile variable is not guaranteed to increment the variable by one (as another thread could change its value between the read and the write).
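The distinction can be sketched like this: a state-independent flag is a good fit for volatile, while a counter needs an atomic class instead (class name invented for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Flags {
    // Good use of volatile: the write does not depend on the current value,
    // and all threads see the flag change promptly.
    private volatile boolean shutdownRequested = false;

    public void requestShutdown() { shutdownRequested = true; }
    public boolean isShutdownRequested() { return shutdownRequested; }

    // A plain "volatile int hits; hits++;" would NOT be safe, because the
    // increment is a separate read and write; use an atomic class instead,
    // which performs the read-modify-write as a single atomic operation.
    private final AtomicInteger hits = new AtomicInteger();

    public int recordHit() { return hits.incrementAndGet(); }
}
```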
- Immutable objects either have no state or have only final fields and are always safe for concurrent use because their state can't be mutated so they can never be in an inconsistent state.
- Immutability is a very powerful technique to avoid concurrency issues and overheads, and you should use it whenever feasible.
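A minimal sketch of an immutable class (the Point class here is invented for illustration): all fields are final, there are no setters, and "mutation" produces a new instance, so concurrent readers can never observe an inconsistent state:

```java
// Immutable value class: final class, final fields, no mutators.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int x() { return x; }
    public int y() { return y; }

    // "Mutation" returns a fresh instance, leaving the original untouched,
    // so no locking is ever needed to share Points between threads.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```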
- ReentrantReadWriteLock can provide better lock performance than ReentrantLock in cases where there are many readers but few writers.
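A read-mostly structure guarded by a ReentrantReadWriteLock might look like this sketch (the cache class is illustrative): many threads can hold the read lock at once, while writers get exclusive access:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Many readers can hold the read lock simultaneously, so frequent
    // lookups don't serialize behind each other as they would with
    // a plain ReentrantLock or synchronized block.
    public String get(String key) {
        lock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Writers take the exclusive write lock, blocking all readers briefly
    public void put(String key, String value) {
        lock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```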
- Where locks are being obtained in sequence, if an attempt to get a second lock fails, the thread should release the lock it's holding and wait briefly.
- CountDownLatch provides a synchronization pattern that allows for multiple threads to all agree on a minimum amount of preparation that must be done before any thread can pass the CountDownLatch synchronization barrier.
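A sketch of the CountDownLatch pattern (the class and the "preparation" work are invented for illustration): each worker counts down once its preparation is complete, and the coordinating thread blocks in await() until all have checked in:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    public static int prepareAll(int workers) {
        CountDownLatch ready = new CountDownLatch(workers);
        AtomicInteger prepared = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                prepared.incrementAndGet(); // stands in for real preparation
                ready.countDown();          // signal this worker is ready
            }).start();
        }
        try {
            ready.await(); // blocks until the count reaches zero
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // countDown() happens-before await() returning, so all the
        // workers' preparation is visible here.
        return prepared.get();
    }
}
```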
- ConcurrentHashMap is a concurrent version of HashMap, which is more efficient than a synchronizedMap().
- CopyOnWriteArrayList is a threadsafe list (by the addition of copy-on-write semantics: any operations that mutate the list will create a new copy of the array backing the list).
- BlockingQueue allows threads to synchronize throughput (with additionally a sort of buffer space) by blocking the producer or consumer when the other is not available for work.
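The producer/consumer handoff with a bounded BlockingQueue can be sketched like this (class and numbers are illustrative); the small capacity forces the producer to block when the consumer lags, and take() blocks the consumer when the queue is empty:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineDemo {
    public static int sumProduced(int n) {
        // Capacity 2: the producer can run at most 2 items ahead
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) {
                    queue.put(i); // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < n; i++) {
                sum += queue.take(); // blocks while the queue is empty
            }
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }
}
```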
- ArrayBlockingQueue is very efficient when an exact bound is known for the size of the queue, whereas LinkedBlockingQueue may be slightly faster under some circumstances.
- TransferQueue has been written to take into account modern compiler and processor features and can operate with greater efficiency than BlockingQueue.
- The fork-join framework provides basic methods to support the splitting up of large tasks, and it has automatic scheduling and rescheduling.
- In Java 7 the default sort algorithm for arrays has changed to TimSort (previously QuickSort) - a version of MergeSort that has been hybridized with an insertion sort.
- The fork-join framework implements "work-stealing" which works with the thread scheduler to reduce context switching and efficiently balance workloads across threads.
- The fork-join framework is suitable for use when: a problem's subtasks work without explicit cooperation or synchronization between the subtasks; and the subtasks calculate some value from their data without altering it; and divide-and-conquer is natural for the subtasks.
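A classic fork-join divide-and-conquer sketch, summing an array with RecursiveTask (the threshold and class name are illustrative): subtasks read their slice of the data without altering it and need no coordination with each other, matching the criteria above:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // tune for real workloads
    private final long[] data;
    private final int lo, hi;

    public SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;            // otherwise divide and conquer
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                          // schedule left half asynchronously
        return right.compute() + left.join(); // compute right here, join left
    }

    public static long sum(long[] data) {
        return ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
    }
}
```

Note the idiom of forking one half and computing the other in the current thread, which avoids leaving the current worker idle while both halves run.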
What is a PermGen leak? (Page last updated July 2012, Added 2012-08-29, Author Nikita Salnikov-Tarnovski, Publisher plumbr). Tips:
- Every classloader holds references to all the classes it has loaded, thus it is a potential memory leak - you might not realise the classes are still loaded.
- Several JVMs, including the Oracle HotSpot JVM, use a "permgen" heap space to store internal structures including class structures. Note that permgen is due to disappear in Oracle HotSpot by the time of the 8.0 JVM release.
- A memory leak in Java is the situation where the application should no longer be holding on to objects, but it inadvertently is because of programmer error. As a consequence, these are not eligible for collection by the garbage collector, and if these types of objects continue to be created by the application and not released, they will eventually cause an out-of-memory error.
- A PermGen OutOfMemoryError is simply the situation where the permgen does not have enough space to load further classes, probably because too many classes are currently being held in the permgen by ClassLoaders.
- Java web applications have their classes loaded into permgen from the EAR/WAR, and they can stay there when the application is undeployed, thus filling permgen with unnecessary classes, which can easily cause a PermGen OutOfMemoryError.
- If some thread continues to run after an application is undeployed to an application server, it will keep alive a reference to the classloader that loaded it, and that in turn will keep alive references to all the classes it previously loaded, thus potentially causing a PermGen OutOfMemoryError. To eliminate this issue, you need to terminate any threads - including third-party spawned threads - of an application when it is undeployed (e.g. using a servlet context listener to determine when the application is shut down).
- JDBC drivers are registered with the DriverManager, which will reference the driver, which in turn will reference its classloader, which in turn references all the classes it loaded. If your application has finished with the driver, you need to deregister the driver or it may cause a permgen memory leak.
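A shutdown-time cleanup along these lines might look like the following sketch (the class and method names are invented; the DriverManager calls are standard JDK API):

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

public class DriverCleanup {
    // At application shutdown, deregister any JDBC drivers that were
    // registered by this application's classloader, so the DriverManager
    // no longer pins that classloader (and every class it loaded) in memory.
    public static int deregisterAll(ClassLoader appLoader) {
        int deregistered = 0;
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == appLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                    deregistered++;
                } catch (SQLException e) {
                    // log and continue: failing to deregister one driver
                    // should not abort the rest of the shutdown cleanup
                }
            }
        }
        return deregistered;
    }
}
```

In a web application this would typically be called from a ServletContextListener's contextDestroyed() method, passing the webapp's classloader.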
- The root cause for the majority of PermGen OutOfMemoryErrors is a reference to an object or a class loaded by the application's classloader that is no longer necessary.
- Most PermGen OutOfMemoryErrors can be eliminated by finding out what reference is being held to the object/class keeping the ClassLoader alive, and then adding a shutdown hook to your web application to remove the reference during the application's undeployment, either by using a servlet context listener or by using the API provided by your third-party library.
Java Multi-threading and the Challenges of Parallel Computing (Page last updated July 2012, Added 2012-08-29, Author KL Nitin, Sangeetha S , Publisher developer.com). Tips:
- Concurrency in Java is supported with the Thread class, the Runnable interface, and the synchronized and volatile keywords - conventionally you create a Runnable class, implement the run() method and create a Thread with that Runnable as argument; the run() method is implicitly invoked when the Thread.start() method is called.
- The equivalent of (new Thread(runnableInstance)).start() using java.util.concurrent is Executors.newFixedThreadPool(1).execute(runnableInstance), but java.util.concurrent has more flexibility as the thread can be reused after the runnableInstance finishes. Note that in both cases, the call that executes the runnable cannot directly return a value to the initiating thread.
- If you need to execute code on a different thread but return a value to the initiating thread, you can use a java.util.concurrent.Callable.
- You can get values back from another thread using Callables, e.g.
List<Future<T>> results = Executors.newFixedThreadPool(1).invokeAll(collectionOfCallables); T value1 = results.get(0).get();
- ExecutorService.invokeAll() is a blocking call: it does not return until every submitted Callable has completed, and if the pool has fewer threads than tasks, some Callables will wait for a free thread. You should call ExecutorService.shutdown() when you are finished with your thread pool.
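Putting these tips together, a complete Callable/Future round-trip might look like this sketch (the class, method, and task are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    // Unlike a Runnable, each Callable returns a value; invokeAll() blocks
    // until every task has completed and returns one Future per task.
    public static int sumOfSquares(List<Integer> inputs) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            List<Callable<Integer>> tasks = new ArrayList<>();
            for (int n : inputs) {
                tasks.add(() -> n * n); // the value computed on another thread
            }
            List<Future<Integer>> results = pool.invokeAll(tasks);
            int sum = 0;
            for (Future<Integer> f : results) {
                sum += f.get(); // get() returns each task's computed value
            }
            return sum;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown(); // release the pool's threads when finished
        }
    }
}
```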
Top Performance Mistakes when moving from Test to Production: Deployment Mistakes (Page last updated August 2012, Added 2012-08-29, Author Andreas Grabner, Publisher dynatrace). Tips:
- Excessive logging is a common cause of performance problems
- Missing files in deployment can significantly impact user experience (which is core to user impression of performance). It is important to remember to deploy all content, including all static resources such as CSS, JS or Image files.
- Before deploying to production, deploy your application to a production-like environment to make sure that you test in a realistic environment.
- Execute a full set of tests against the application to verify deployment works correctly. Look for missing resources as well as errors and performance issues.
- If the application is multi-region, execute load from different locations around the world, making sure performance is adequate from all regions.
- An incorrect web server access setting can impact the user experience, by denying access to parts of the application - this is easily done when development and testing have been done in a non-production environment which has significantly more access for users (fewer restrictions), while production systems are frequently locked down much more tightly. In the worst case, instead of actual failure in production (which would be reported immediately), you could have initial access denied followed by an automatic failover that succeeds, making the application perform unexpectedly slowly as it takes longer paths than intended.
- Beware slow or misconfigured web server modules and extensions - many are often deployed automatically as part of the default configuration, when they should be reconfigured or removed.
- Use modules that are really needed, rather than those that are available and installed as default or just-in-case.
Last Updated: 2017-11-28
Copyright © 2000-2017 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.