Java Performance Tuning
Tips April 2024
Back to newsletter 281 contents
https://medium.com/@AlexanderObregon/java-memory-leaks-detection-and-prevention-25d1c09eaebe
Java Memory Leaks: Detection and Prevention (Page last updated November 2023, Added 2024-04-28, Author Alexander Obregon, Publisher Medium). Tips:
- Memory leaks happen when objects no longer needed by the application continue to be accidentally referenced, preventing the garbage collector from reclaiming their space.
- Common causes of memory leaks include: Static fields (often holding collections) not being managed appropriately; Listeners and callbacks that are not unregistered when no longer needed; Cached objects not properly evicted when they are no longer needed; Adding objects to collections and failing to remove them when they are no longer needed; Resources (like database connections, etc) not closed properly; Non-static inner class instances keeping their outer class instances (they hold an implicit reference to their outer class).
- Common symptoms of memory leaks include: The garbage collector gradually working harder and harder to free up memory; Increasing memory consumption over time; Frequent garbage collection activity; OutOfMemoryError exceptions.
- Identifying memory leaks is usually best done by taking a heap dump and analyzing it with Eclipse MAT, VisualVM, or a commercial memory analysis tool. JFR can be used to record data from a running application, which can then be analyzed with JMC to understand memory allocation patterns, identify memory leaks, and optimize memory usage.
- Strategies for preventing memory leaks include: Ensure that objects are in scope only as long as they are needed; Use static fields cautiously; Avoid static collections that grow indefinitely; Always unregister listeners and callbacks when they are no longer needed; Limit the size of caches; Remove objects from collections when they are no longer needed; Be cautious with inner class instances; Always close resources (eg files, streams, connections) after use; profile your application memory usage; Unit and integration test to check for memory leaks.
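The static-collection cause listed above is worth seeing concretely. This is a minimal, hypothetical sketch (the class and method names are illustrative, not from the article): a static list that is only ever appended to retains every entry for the lifetime of the JVM, while the fixed variant keeps the data in local scope so it becomes collectable.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Leak: entries added to this static collection are never removed,
    // so the garbage collector can never reclaim them.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // 1 MB retained per call, forever
    }

    // Fix: keep the data in local scope (or remove/bound cached entries),
    // so it becomes eligible for collection when no longer needed.
    static void handleRequestFixed() {
        byte[] buffer = new byte[1024 * 1024];
        process(buffer);
        // buffer goes out of scope here and can be collected
    }

    static int cacheSize() {
        return CACHE.size();
    }

    private static void process(byte[] data) { /* ... */ }
}
```

Every call to the leaking variant grows the retained set, which is exactly the "increasing memory consumption over time" symptom described above.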
https://www.infoq.com/presentations/vertical-stability/
Maximizing Performance and Efficiency in Financial Trading Systems through Vertical Scalability and Effective Testing (Page last updated February 2024, Added 2024-04-28, Author Peter Lawrey, Publisher InfoQ). Tips:
- Allocating small amounts of memory is efficient, but as allocation volume, object size, or allocation concurrency increases, you are increasingly likely to hit CPU-cache sharing limitations and slow down allocation times enormously (by an order of magnitude!). For small objects allocated at high concurrency, allocation overhead is much larger than GC overhead.
- The total maximum allocation rate across all JVMs and threads is 10-15 GB/second. Reducing the allocation rate can often increase throughput.
- Distributed unique timestamps using a central server to provide the timestamps take at least 100 microseconds. A UUID takes 300 nanoseconds but doesn't map directly to a timestamp. By taking a nanosecond timestamp and substituting its lowest 2 digits with the host ID, you can generate a unique timestamp without a central server in 40 nanoseconds (using shared memory makes it unique per host).
- The biggest source of latency tends to be persistence (DB, durable messaging, HA, disk stored, etc). By minimizing the persistence, you can optimize your latency. A redundant copy on another machine is often much faster than a copy on local disk.
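The decentralized unique-timestamp tip above can be sketched as follows. This is an illustrative simplification, assuming a `hostId` in the range 0-99 assigned per machine; the shared-memory coordination the talk mentions (for uniqueness across JVMs on one host) is replaced here by simple per-instance synchronization.

```java
public class HostTimestamp {
    private final int hostId;   // assumed unique per host, 0-99
    private long lastTimestamp; // last value issued, to keep results unique

    public HostTimestamp(int hostId) {
        if (hostId < 0 || hostId > 99)
            throw new IllegalArgumentException("hostId must be 0-99");
        this.hostId = hostId;
    }

    // A nanosecond-resolution wall-clock timestamp whose lowest two
    // decimal digits are replaced by the host ID; bumped by one "slot"
    // (100 ns) if needed so successive calls never repeat a value.
    public synchronized long uniqueNanoTimestamp() {
        long nowNanos = System.currentTimeMillis() * 1_000_000L;
        long candidate = (nowNanos / 100) * 100 + hostId;
        if (candidate <= lastTimestamp)
            candidate = lastTimestamp + 100; // next slot, same host digits
        lastTimestamp = candidate;
        return candidate;
    }
}
```

Because the host ID occupies the lowest two digits, two hosts can never produce the same value, and each value still decodes back to a wall-clock time (to within 100 ns) with no central server involved.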
https://dip-mazumder.medium.com/how-to-determine-java-thread-pool-size-a-comprehensive-guide-4f73a4758273
How to Determine Java Thread Pool Size: A Comprehensive Guide (Page last updated September 2023, Added 2024-04-28, Author Dip Mazumder, Publisher The Java Trail). Tips:
- Thread pooling mitigates the overheads of creating threads (latency and additional work by both the JVM and the operating system). However, managing threads can add overhead, so the decision to use pools is not clear cut. Thread pools can also help to manage the resources used by threads.
- Tune the thread pool size to extract optimal performance from your system and gracefully navigate peak workloads.
- Thread pool size should not exceed the connection pool size for datastore requests (otherwise threads will be waiting for connections, which is inefficient), nor the handling capacity of any external service used by the pool (so as not to overwhelm that service). The pool size should also stay below the threshold that would overwhelm the available CPUs, though this depends on the nature of the tasks: CPU-intensive tasks call for small pools (not exceeding the core count), while IO-intensive tasks can use larger pools, sized below the point where overall CPU or IO capacity is saturated. Monitoring context switching can help decide in these cases.
- If you have spare CPU available, you can optimize CPU intensive tasks by dividing them into smaller subtasks and distributing those subtasks across multiple CPU cores.
- You can optimize IO intensive tasks by: Caching frequently accessed data; Load balancing tasks across multiple threads; Using SSDs rather than spinning disks; Using efficient data structures such as hash tables and B-trees; Eliminating unnecessary file operations (eg don't open and close the same file multiple times).
- For CPU intensive tasks, a common rule of thumb to size the thread pool is to use the number of CPU cores available.
- For I/O intensive tasks, size the thread pool to have enough threads to keep the I/O devices busy without overloading them.
- The formula for sizing thread pools is: Number of threads = Number of Available Cores x Target CPU utilization x (1 + Wait time / Service time), where the target CPU utilization is the fraction of CPU time that you want your application to use, wait time is the time threads spend waiting for I/O operations to complete, and service time is the time threads spend performing computations.
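The sizing formula above is easy to apply directly in code. This is a minimal sketch; the wait/service times and utilization targets in `main` are illustrative assumptions, not measurements from the article.

```java
public class PoolSizer {
    // threads = cores * targetUtilization * (1 + waitTime / serviceTime)
    static int poolSize(int cores, double targetUtilization,
                        double waitMs, double serviceMs) {
        return (int) Math.ceil(
                cores * targetUtilization * (1 + waitMs / serviceMs));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // IO-heavy example: 50 ms waiting per 5 ms of computation, 90% target
        System.out.println("IO-heavy pool:  " + poolSize(cores, 0.9, 50, 5));
        // CPU-bound example: negligible wait, so the pool is about core count
        System.out.println("CPU-bound pool: " + poolSize(cores, 1.0, 0, 5));
    }
}
```

Note how the formula recovers the two rules of thumb above: with zero wait time it reduces to the core count, and as the wait/service ratio grows the pool grows proportionally.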
Jack Shirazi
Last Updated: 2024-10-29
Copyright © 2000-2024 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
URL: http://www.JavaPerformanceTuning.com/news/newtips281.shtml
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss