Java Performance Tuning
Tips June 2015
http://nginx.com/blog/introduction-to-microservices/
Introduction to Microservices (Page last updated May 2015, Added 2015-06-28, Author Chris Richardson, Publisher nginx). Tips:
- Monolithic applications are fast to build, test and deploy when they're small, but as the functionality grows it becomes slow to deploy, slow to start and difficult to understand and debug.
- Typically the larger the application, the longer the start-up time is.
- Monolithic applications are difficult to scale: different modules can have different and conflicting resource requirements, so none gets its ideal resource allocation.
- If all modules run in one process, a critical bug or memory leak affects the whole application instead of just the component responsible.
- The application entry points need to be responsible for tasks such as load balancing, caching, access control, API metering, and monitoring.
- Microservice architectures face the challenge of keeping data synchronized across the separate services.
http://www.infoq.com/articles/The-OpenJDK9-Revised-Java-Memory-Model
The OpenJDK Revised Java Memory Model (Page last updated May 2015, Added 2015-06-28, Author Monica Beckwith, Publisher InfoQ). Tips:
- When one thread writes to a volatile variable, that write is not the only thing made visible to other threads; a thread that subsequently reads the volatile sees ALL the writes that were visible to the writing thread at the point of the volatile write (see the sketch after these tips).
- A volatile access is cheaper than synchronization, but multiple volatile fields in a method could be more expensive than a single synchronized block.
- Volatile long and double fields (and references) are always guaranteed read and write atomicity; non-volatile long and double fields have no such guarantee - they can be written in 2 instructions, so it is possible to read a corrupt value if you read after only one of the writes has been applied. Whether this can actually occur in practice depends on the JVM and the CPU (some guarantee atomicity for 64-bit operations).
- Volatile can be used for atomicity and for memory ordering purposes. For atomicity it could theoretically be removed for CPUs that can guarantee atomicity of longs and doubles; but it is difficult for the JVM to determine that is the exclusive reason for the volatile keyword so it is unlikely to be optimised away.
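The visibility guarantee above can be made concrete with a minimal sketch (the class and field names below are illustrative, not from the article): a writer thread stores a value into an ordinary field and then sets a volatile flag; a reader thread that observes the flag set is guaranteed to also see the earlier write.

public class VolatilePublish {

    private int payload;            // ordinary (non-volatile) field
    private volatile boolean ready; // volatile flag used to publish the payload

    // Writer thread: the write to 'payload' happens-before the volatile
    // write to 'ready'.
    void writer() {
        payload = 42;
        ready = true;
    }

    // Reader thread: once the volatile read sees 'ready' as true, all writes
    // made by the writer before setting the flag are also visible, so this
    // prints 42, never 0.
    void reader() {
        if (ready) {
            System.out.println(payload);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VolatilePublish vp = new VolatilePublish();
        Thread w = new Thread(vp::writer);
        Thread r = new Thread(vp::reader);
        w.start();
        r.start();
        w.join();
        r.join();
    }
}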
http://www.infoq.com/presentations/panel-java-performance
Java Performance Panel (Page last updated March 2015, Added 2015-06-28, Authors Todd Montgomery, Gil Tene, Charles Humble, Mike Barker, Tal Weiss, Publisher InfoQ). Tips:
- Avoiding allocation in the core processing lets you avoid GC pauses completely, but requires a very specialised programming style. Going off-heap avoids GC but still involves allocation - you have to manage it yourself instead of the JVM doing it. Initialising all objects before core processing starts is a more common technique.
- There are many cases where avoiding allocation is possible just by rewriting code taking care to avoid allocations.
- Oracle JVMs can be tuned down to about 5ms pauses if you are very careful with your code. To get below this you need to avoid allocations, go off-heap, or switch to JVMs that can guarantee lower pause times (eg Zing).
- If you write "normal" applications with many third-party libraries, it's very difficult to get GC all pauses down to below 10ms.
- If the OS buffers and caches fill, it starts to evicts pages, and this impacts IO which in turn impacts any IO operations your application does (by queueing those operations); a possible strategy is to ensure your entire system is restricted in how much it reads into memory by being very careful about what can be paged in.
- If you need high performance code with low memory, first replace the Java collections with your own (or other known faster, smaller collections); and go off heap.
- @Contended is available in Java 8 (for application code you need to enable it with the -XX:-RestrictContended flag) and it keeps annotated fields apart so that they are not laid out in the same cache line, thus avoiding false sharing (see the sketch after this list).
- -XX:+UseCondCardMark can make multi-threaded code faster.
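As a sketch of the @Contended tip above (assuming Java 8, where the annotation lives in sun.misc and only takes effect for application code with -XX:-RestrictContended; it moved to jdk.internal.vm.annotation in later JDKs), the two counters below are each padded onto their own cache line so that threads updating them independently do not falsely share a line:

import sun.misc.Contended; // Java 8 location; later JDKs use jdk.internal.vm.annotation.Contended

// Rough illustration only, not a rigorous benchmark: each @Contended field is
// padded onto its own cache line, so the two threads below do not invalidate
// each other's cache line on every increment. Requires -XX:-RestrictContended
// for the annotation to take effect outside JDK classes.
public class ContendedCounters {

    @Contended
    volatile long counterA;

    @Contended
    volatile long counterB;

    public static void main(String[] args) throws InterruptedException {
        ContendedCounters c = new ContendedCounters();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 50_000_000; i++) c.counterA++; });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 50_000_000; i++) c.counterB++; });
        long start = System.nanoTime();
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.printf("elapsed: %.1f ms%n", (System.nanoTime() - start) / 1_000_000.0);
    }
}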
http://www.oraclejavamagazine-digital.com/javamagazine/march_april_2015#pg58
Improving the Performance of Java EE Applications (Page last updated March 2015, Added 2015-06-28, Author Josh Juneau, Publisher Oracle). Tips:
- Tune the application by tuning (in order): the application code; the application server; the JVM; the OS and platform.
- Common poor coding practices that lead to bad performance include: over-serialization and deserialization; overuse of finalizers; too much synchronization; unnecessarily holding on to variables; too much logging (particularly with System.out.println()); not releasing idle sessions; and failing to close resources (see the try-with-resources sketch after this list).
- Run performance and load tests against the application and compare the results against previous runs.
- Disable autodeployment and dynamic reloading capabilities; these are intended for development and usually hurt performance badly in production.
- Use server clustering and load balancing.
- Review all default configurations, they are frequently inappropriate for production.
- Profile and load test applications before deploying to production.
- Monitor server resources like memory, CPU usage, database usage, and any shared resources.
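As a sketch of the "failing to close resources" point above (the file-reading method is an invented example, not from the article), try-with-resources guarantees the resource is closed even when an exception is thrown:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ResourceHandling {

    // try-with-resources closes the reader automatically, even if an
    // exception is thrown, so the file handle is never leaked.
    static long countLines(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.lines().count();
        } // reader.close() is called automatically here
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countLines(args[0]));
    }
}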
http://www.infoq.com/presentations/scalability-ebay-google-kixeye
Scalability Lessons from eBay, Google, and Real-time Games (Page last updated February 2015, Added 2015-06-28, Author Randy Shoup, Publisher InfoQ). Tips:
- There may already be a library that does what you want fast and small - search for what's available!
- Use standard data formats everywhere, eg UTF-8 and the UTC timezone, for both storage and transfer.
- The highest priority is that the system should be available.
- A microservice architecture lets you scale very flexibly. Build microservices on top of microservices that are solving specific problems.
- Respond as rapidly as possible to clients. Never block, be reactive using asynchrony. Queue complex events for asynchronous processing.
- Instrument everything, measurement beats guessing every time.
- Attack the primary bottleneck and repeat until performance is good enough.
- If CPU, Memory and IO are all fine, the bottleneck is likely to be locking or contention (including cache contention).
- The "Normal" (Gaussian, or bell curve) distribution frequently does NOT apply to the data you are measuring. Long tail (power law) distributions are more common to many system measurements (eg latency).
- Use percentiles rather than means and standard deviations; the worst cases, the outliers, are often more important than the typical case (see the sketch after these tips).
- Use the cloud to allow you to scale as needed. Have clients back off when servers are loaded and autoscaling; where possible use predictive scaling to anticipate when you need to spin up new instances.
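A minimal sketch of reporting latency by percentile rather than by mean, as recommended above; the latency samples are invented for illustration and the percentile calculation uses the simple nearest-rank method:

import java.util.Arrays;

public class PercentileReport {

    // Nearest-rank percentile over a sorted array; p is in (0, 100].
    static long percentile(long[] sortedSamples, double p) {
        int rank = (int) Math.ceil(p / 100.0 * sortedSamples.length);
        return sortedSamples[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Invented latency samples (microseconds) with a long tail
        long[] latencies = {120, 122, 125, 128, 131, 135, 138, 140, 9500, 12000};
        Arrays.sort(latencies);
        double mean = Arrays.stream(latencies).average().orElse(0);
        System.out.printf("mean   = %.0f us (skewed by the outliers)%n", mean);
        System.out.printf("median = %d us%n", percentile(latencies, 50));
        System.out.printf("p99    = %d us (shows the tail that the mean hides)%n", percentile(latencies, 99));
    }
}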
https://plumbr.eu/outofmemoryerror
java.lang.OutOfMemoryError (Page last updated June 2015, Added 2015-06-28, Author plumbr, Publisher plumbr). Tips:
- The "java.lang.OutOfMemoryError: Java heap space" is triggered when the application attempts to add more data but there is not enough room in the heap. Use -Xmx to set the size of the heap.
- Typical reasons for getting an OutOfMemoryError are: the heap is sized too low for the application; sudden operational spikes exceed the expected memory usage; memory leaks.
- A common memory leak is incorrectly implementing the hashCode()/equals() contract for a class and then adding objects of that class to a map (see the sketch after these tips).
- The "java.lang.OutOfMemoryError: GC overhead limit exceeded" error occurs when the heap repeatedly has very little memory available: each garbage collection frees just enough space for the current allocations to continue, but if less than 2% of the heap is recovered each time and the JVM is spending over 98% of its time garbage collecting, the error is thrown.
- The "java.lang.OutOfMemoryError: PermGen space" error occurs when PermGen has been exhausted. This typically happens when too many classes (often from hot-redeploys or class generation, eg from reflection or servlets) or too-large classes are loaded into the permanent generation, or (in JVM versions before 1.7 or early 1.7) when too many strings are interned. You can expand PermGen using -XX:MaxPermSize=N.
- To find classloader leaks, take a heap dump with jmap -dump:format=b,file=dump.hprof <process-id>, open it in a heap dump analyzer (eg Eclipse MAT) and look for duplicate classes, especially ones related to your application. The shortest path to the GC roots of inactive classloaders will identify where the leak originates.
- Enable class unloading with the CMS collector with -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled.
- A classloader leak into metaspace will either cause a Metaspace OutOfMemoryError (if the metaspace max size is set) or native allocation failures when physical RAM and swap is exhausted if no max metaspace size is set.
- The "java.lang.OutOfMemoryError: Unable to create new native thread" error is caused from being unable to allocate memory for the process (for 32bit JVMs) or from the machine (native memory is exhausted or too fragmented), or possibly because the OS or process has a limit on the nimber of threads it can create (eg with ulimit).
- The "java.lang.OutOfMemoryError: Out of swap space?" error is caused by the system running out of swap space - if you see this, your whole setup is wrong. Do some application and system capacity planning. Meanwhile performance was awful, why didn't you already alert on that way before the error was thrown!
- The "java.lang.OutOfMemoryError: Requested array size exceeds VM limit" error is thrown when you're trying to create too big an array. Usually this is a leak or a programming error, but if not go back to the drawing board and do some research on how to manage huge data structures (probably off heap).
- The out of memory: "Kill process or sacrifice child" error is thrown when the operating system "Out of memory killer" kernel job selects the JVM to be terminated.
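A minimal sketch of the hashCode()/equals() leak mentioned above (the Key class is an invented example): because Key does not override equals() and hashCode(), two Keys with the same id are never considered equal, so every put() adds a new entry instead of replacing an existing one, and the map grows until the heap is exhausted.

import java.util.HashMap;
import java.util.Map;

public class HashCodeLeak {

    static class Key {
        final long id;
        Key(long id) { this.id = id; }
        // Missing: equals() and hashCode() based on 'id', so every Key
        // instance uses identity equality and is unique as a map key.
    }

    public static void main(String[] args) {
        Map<Key, String> cache = new HashMap<>();
        for (long i = 0; ; i++) {
            cache.put(new Key(i % 1000), "value"); // intended to cap at 1000 entries...
            if (i % 1_000_000 == 0) {
                System.out.println("entries: " + cache.size()); // ...but grows without bound
            }
        }
        // Eventually fails with java.lang.OutOfMemoryError: Java heap space
    }
}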
Jack Shirazi