Java Performance Tuning
Tips February 2016
Back to newsletter 183 contents
https://www.computer.org/web/the-clear-cloud/content?g=7477973&type=blogpost&urlTitle=performance-patterns-in-microservices-based-integrations
Performance Patterns in Microservices based Integrations (Page last updated February 2016, Added 2016-02-29, Author Rohit Dhall, Publisher Computing Now). Tips:
- Throttling can be used both to ensure service is shared evenly amongst clients, and to prevent the server from being overloaded.
- Set timeouts to ensure threads don't remain blocked for too long.
- Use dedicated thread pools for different types of tasks so that one type of task cannot dominate resources.
- Use a "circuit breaker" - terminating request sending to a downstream system when that system is unavailable - so that the service doesn't wait unnecessarily for responses.
- Use asynchronous communications between systems to decouple the systems.
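The "circuit breaker" tip above can be sketched in a few lines. This is a minimal illustration - the class and method names are my own, not from the article, and a production breaker would also track an open duration and allow periodic retry ("half-open") attempts:

```java
// Minimal circuit breaker sketch: after a threshold of consecutive
// failures the breaker "opens" and returns a fallback immediately,
// instead of letting threads wait on an unavailable downstream system.
import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public synchronized <T> T call(Supplier<T> downstream, T fallback) {
        if (consecutiveFailures >= failureThreshold) {
            return fallback;                 // breaker open: fail fast
        }
        try {
            T result = downstream.get();
            consecutiveFailures = 0;         // a success closes the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;           // count the failure
            return fallback;
        }
    }

    public synchronized boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker cb = new SimpleCircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("down"); };
        cb.call(failing, "fallback");
        cb.call(failing, "fallback");
        System.out.println(cb.isOpen()); // true: further calls fail fast
    }
}
```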
http://examples.javacodegeeks.com/core-java/util/concurrent/completablefuture/java-completionstage-completablefuture-example/
Java CompletionStage and CompletableFuture Example (Page last updated February 2016, Added 2016-02-29, Author Nawazish Khan, Publisher Java Code Geeks). Tips:
- CompletionStage (and its concrete implementation CompletableFuture) abstracts units or blocks of computation - which may or may not be asynchronous - that can be piped together.
- CompletableFuture supplyAsync(Supplier supplier)/supplyAsync(Supplier supplier, Executor executor) takes a Supplier - which accepts nothing (it normally wraps a task) and "supplies" an output - and returns a CompletableFuture which can be chained for further (potentially async) actions.
- CompletableFutures, even for async tasks, can be chained with branches (eg CompletableFuture.acceptEitherAsync()) as well as sequentially (eg CompletableFuture.thenCompose()); the branching can include exception handling (eg CompletableFuture.exceptionally()).
- CompletableFuture.cancel(boolean mayInterruptIfRunning) allows you to cancel tasks that have not yet completed, completing the future exceptionally with a CancellationException (if the task has already completed, cancel() returns false).
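The supplyAsync/chaining/cancel behaviour described above can be demonstrated with a short, self-contained sketch (the class name and example values are mine):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        // supplyAsync wraps a task in a Supplier; thenApply chains a
        // transformation; exceptionally supplies a fallback on failure
        String greeting = CompletableFuture
                .supplyAsync(() -> "hello")
                .thenApply(String::toUpperCase)
                .exceptionally(t -> "FALLBACK")
                .join();
        System.out.println(greeting); // HELLO

        // a failed stage is recovered by the exceptionally branch
        CompletableFuture<String> failing =
                CompletableFuture.supplyAsync(() -> { throw new IllegalStateException("boom"); });
        System.out.println(failing.exceptionally(t -> "recovered").join()); // recovered

        // cancel() only succeeds on a future that has not yet completed
        CompletableFuture<String> pending = new CompletableFuture<>();
        System.out.println(pending.cancel(true));                                // true
        System.out.println(CompletableFuture.completedFuture("x").cancel(true)); // false
    }
}
```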
http://codingjunkie.net//completable-futures-part1/
Java 8 CompletableFutures Part I (Page last updated January 2016, Added 2016-02-29, Author Bill Bejeck, Publisher codingjunkie). Tips:
- Execution of CompletableFuture computations can be either: default execution (possibly the calling thread); or async execution, using a provided Executor or the default executor of the CompletionStage (the someActionAsync() method variants).
- When adding follow-on CompletionStage objects, the previous task needs to complete successfully in order for the follow on task/stage to run - though there are methods to deal with failed tasks and errors.
- You can pass the result of one stage of CompletableFuture execution to the next in the chain, eg with CompletableFuture.thenAccept(), CompletableFuture.thenComposeAsync() and CompletableFuture.thenCombineAsync(). Which method to use depends on how the chain needs to process the results.
- CompletableFuture.acceptEither() lets a chain proceed with whichever of two tasks completes first.
- Ordering is not guaranteed when combining CompletionStages.
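A small sketch of the combining and "either" patterns from this article (class name and values are illustrative; applyToEither is the result-returning sibling of acceptEither):

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 6);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 7);
        // thenCombine waits for both stages and merges their results
        int product = a.thenCombine(b, (x, y) -> x * y).join();
        System.out.println(product); // 42

        // applyToEither proceeds with whichever stage completes first;
        // here "fast" is already complete, so it always wins
        CompletableFuture<String> fast = CompletableFuture.completedFuture("fast");
        CompletableFuture<String> slow = new CompletableFuture<>(); // never completed here
        String first = fast.applyToEither(slow, s -> s).join();
        System.out.println(first); // fast
    }
}
```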
http://karunsubramanian.com/websphere/ten-things-you-need-to-know-about-java-memory-leak/
Ten things you need to know about Java memory leak (Page last updated February 2016, Added 2016-02-29, Author Karun Subramanian, Publisher karunsubramanian). Tips:
- The maximum heap size is set by the -Xmx flag of the Java command line. Objects are created in the heap and creating too many without releasing them will lead to an out of memory error.
- Native memory is outside Java heap but still within the total process size. Any library that uses native memory and has leaks can cause an out of memory error (or even a segmentation fault) in native memory.
- A memory leak tends to be obvious by looking at the heap utilization. A normal pattern has a sawtooth with a relatively stable base; a memory leak shows an always increasing base.
- You can take a heap dump with "jmap -dump:format=b,file=<dump file path> <pid>". Beware that taking a heap dump pauses the JVM. Eclipse Memory Analyzer is a good tool to analyse heap dumps.
- Common reasons for memory leaks include: not cleaning up resources correctly when exceptions are encountered; and using unbounded collections.
- Heap histograms can be useful as a starting point in troubleshooting memory leak: "jmap -histo <pid>"
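The "unbounded collection" leak mentioned above is worth seeing concretely. In this sketch (class and method names are mine, not from the article) entries added to a static map stay strongly reachable forever unless explicitly removed, so the GC can never reclaim them - exactly the ever-rising base you would see in the heap utilization graph:

```java
// Sketch of a leak via an unbounded static collection: every entry
// ever registered stays strongly reachable until explicitly removed.
import java.util.HashMap;
import java.util.Map;

public class LeakyRegistry {
    // Grows without bound if unregister() is never called
    private static final Map<String, byte[]> SESSIONS = new HashMap<>();

    public static void register(String sessionId) {
        SESSIONS.put(sessionId, new byte[1024]); // simulated per-session state
    }

    public static void unregister(String sessionId) {
        SESSIONS.remove(sessionId); // the missing cleanup that fixes the leak
    }

    public static int size() {
        return SESSIONS.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) register("session-" + i);
        System.out.println(size()); // 1000 entries pinned in the heap
        for (int i = 0; i < 1000; i++) unregister("session-" + i);
        System.out.println(size()); // 0 after cleanup
    }
}
```

In a heap histogram ("jmap -histo <pid>") such a leak typically shows up as an unexpectedly large instance count for the leaked entry type.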
http://blog.smartbear.com/api-testing/api-performance-in-2016/
How to Create a High Performing API: A New Perspective for 2016 (Page last updated February 2016, Added 2016-02-29, Author Bob Reselman, Publisher smartbear). Tips:
- Correct database indexing is critical to performance. Using too many will impact write performance; too few will impact read performance.
- Separating read functionality from write functionality (eg using denormalization) at the database level can be a critical design decision when it comes to performance.
- Where an API returns a lot of data, or data that takes a long time to process, callbacks delivering intermediate segments of data are a good model for a performant API. This needs to be designed into the API.
- The fastest API is one that has to do NOTHING - use caching so data can be retrieved rather than re-accessed or recomputed where possible.
- The more work your API has to do, and the more state it has to hold on the server, the longer it takes and the more brittle it will be.
- Give a lot of attention to how your API is writing and reading data.
- If you have APIs that could be used together, make sure that they work consistently with each other.
- Minimize what the API call does synchronously: use background processes; use a distributed cache.
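The "fastest API does nothing" caching tip can be sketched with ConcurrentHashMap.computeIfAbsent, which computes a result at most once per key and serves it from memory afterwards (class and field names here are illustrative):

```java
// Minimal memoizing cache: the expensive computation runs only on a
// cache miss; repeated requests for the same key are served from memory.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class MemoCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger computations = new AtomicInteger(); // miss counter, for the demo

    public String get(String key, Function<String, String> compute) {
        return cache.computeIfAbsent(key, k -> {
            computations.incrementAndGet(); // only runs on a cache miss
            return compute.apply(k);
        });
    }

    public static void main(String[] args) {
        MemoCache cache = new MemoCache();
        Function<String, String> expensive = String::toUpperCase; // stand-in for real work
        System.out.println(cache.get("abc", expensive)); // ABC (computed)
        System.out.println(cache.get("abc", expensive)); // ABC (from cache)
        System.out.println(cache.computations.get());    // 1
    }
}
```

A real API cache would also need an eviction and expiry policy so it doesn't itself become the unbounded collection discussed in the memory-leak article above.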
http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html
Design of a Modern Cache (Page last updated January 2016, Added 2016-02-29, Author Benjamin Manes, Publisher highscalability). Tips:
- A cache's eviction policy tries to predict which entries are most likely to be used again in the near future to maximize the hit ratio. The Least Recently Used (LRU) policy is the most popular as it's very simple.
- Modern caches extend the cache usage history to include the recent past and for the eviction policy give preference to entries based on recency and frequency.
- Window TinyLFU is a cache eviction policy that uses history and popularity (with aging), letting new entries build popularity in a window before they become eligible for eviction.
- A scavenger thread to periodically sweep the cache and reclaim free space tends to work better than ordering entries by their expiration time.
- Concurrent access techniques in order of increasing throughput are: a lock; lock striping; splitting data structure to split writes; commit logging with asynchronous batched updates.
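The LRU policy described above - popular because it is so simple - can be sketched in the JDK itself using LinkedHashMap's access-order mode (this is plain LRU, not the Window TinyLFU policy the article goes on to describe):

```java
// LRU cache sketch: LinkedHashMap with accessOrder=true keeps entries
// ordered least-recently-used first, and removeEldestEntry evicts the
// LRU entry once capacity is exceeded.
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: iteration order is LRU -> MRU
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");        // touch "a", so "b" becomes least recently used
        cache.put("c", 3);     // exceeds capacity: evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Note that this structure is not thread-safe; a concurrent cache would need one of the techniques in the last tip, such as lock striping or batched asynchronous updates.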
Jack Shirazi
Last Updated: 2024-12-27
Copyright © 2000-2024 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
URL: http://www.JavaPerformanceTuning.com/news/newtips183.shtml
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss