Java Performance Tuning
Tips January 2020
https://www.youtube.com/watch?v=2L1S0zfnIzo
AWS re:Invent 2019: Beyond five 9s: Lessons from our highest available data planes (Page last updated December 2019, Added 2020-01-29, Author Colm MacCarthaigh, Publisher Amazon). Tips:
- Quality is paramount - don't skip any tests (unit, integration, roll forward and back), simulations, test environments, etc. Every lesson learned should have at least one associated test. 90% of dev time is spent on testing.
- Model the system - testing only covers the things you've thought of, while modelling lets you try to prove the system correct.
- Limit the blast radius (and know what it is). The service should only ever fail in a limited cell without impacting other cells in the service. Prefer more small services rather than fewer big ones.
- Shuffle sharding: shard instances (combine them into small groups) to limit the number of instances a particular request source can reach, so that if that source causes problems (a noisy neighbour), the number of instances impacted is limited - as opposed to fully balancing requests across all instances, where a bad source could potentially impact every instance. Then shuffle the shards so that each shard is a random combination of instances that partially overlaps other shards. Because there are many more possible shards, another request source whose shard includes an impacted instance is only partially affected, and it is much less likely to be using exactly the same shard as the bad one (see the sketch after this list).
- Use modular separation: build the system using independent functional components which don't propagate errors to other components.
- Do live testing of requests on dummy servers to check if the request would cause a problem and reject those, before passing the request on to the live server.
- Make things as simple as possible. Say no to features. Features impact availability. Find another way of doing things.
- De-normalize datasets so queries are simple, ideally just a simple lookup.
- Monitor memory lookups.
- Write simple easy-to-follow code.
- Use a lot of redundancy - it may seem an unreasonable amount.
- Use a failure cache that can handle the most common cases in case the full service fails. But only run that cache on half the service (which should have double capacity) so that if the cache causes a problem it doesn't fail the service.
- Turning it off and on again should work - regardless of dependencies and dependency availability.
- Eliminate unnecessary dependencies.
- Degrade gracefully: a failure should only affect the feature it applies to, not the full service. Under too high a load, shed any overhead not critical to the priority processing path, e.g. turn off logging and metrics.
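
As a rough illustration of the shuffle sharding tip above, here is a minimal Java sketch - the class, fleet and customer names are hypothetical, not from the talk. Each customer is assigned a small, deterministic subset of the fleet, so a noisy customer can only ever reach its own shard, and two customers rarely share exactly the same shard.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Minimal shuffle-sharding sketch (hypothetical names, not AWS code):
// each customer gets a deterministic, pseudo-random subset ("shard") of
// the fleet, so a noisy customer can only reach shardSize instances.
public class ShuffleSharding {
    private final List<String> instances;   // the full fleet
    private final int shardSize;            // instances per customer shard

    public ShuffleSharding(List<String> instances, int shardSize) {
        this.instances = List.copyOf(instances);
        this.shardSize = shardSize;
    }

    public List<String> shardFor(String customerId) {
        // Seed the shuffle with the customer id so the shard is stable per customer
        List<String> shuffled = new ArrayList<>(instances);
        Collections.shuffle(shuffled, new Random(customerId.hashCode()));
        return shuffled.subList(0, shardSize);
    }

    public static void main(String[] args) {
        List<String> fleet = List.of("i-1", "i-2", "i-3", "i-4", "i-5", "i-6", "i-7", "i-8");
        ShuffleSharding sharding = new ShuffleSharding(fleet, 2);
        System.out.println("customer-a -> " + sharding.shardFor("customer-a"));
        System.out.println("customer-b -> " + sharding.shardFor("customer-b"));
    }
}

With a fleet of 8 instances and shards of 2 there are 28 possible shards, so the chance of another customer landing on exactly the same pair as a bad one is small; larger fleets and shard sizes make the overlap probability smaller still.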
https://www.youtube.com/watch?v=0hiwMMRn0n4
Thread Safety with Phaser, StampedLock and VarHandle (Page last updated July 2019, Added 2020-01-29, Author Heinz Kabutz, Publisher GeeCON). Tips:
- Phaser is a replacement for CyclicBarrier and CountDownLatch.
- Phaser integrates with ForkJoinPool (as does CompletableFuture), ie you can use it safely in tasks.
- Starting tasks at the same time is fiddly. Some ways are: spin on a volatile boolean and trigger by flipping the boolean (Thread.onSpinWait() may help make the spin respond more efficiently); wait-notifyAll; CountDownLatch.await()/countDown(); and Phaser.arriveAndAwaitAdvance()/arriveAndDeregister(). The Phaser is the best of these options at getting the different threads to start closest together (see the Phaser sketch after this list).
- StampedLock is a synchronizer: it allows thread-safe optimistic reads as well as pessimistic reads and writes. You can also pass the lock stamp across threads, so you can lock a resource in an async application.
- StampedLock has 3 modes: an exclusive write lock (you can try to upgrade an existing read lock to an exclusive write lock); a non-exclusive read lock; and a non-exclusive optimistic (non-locking) read.
- StampedLock optimistic read procedure: stamp = tryOptimisticRead(); read state into locals; if (!validate(stamp)) { stamp = readLock(); try { read state into locals again } finally { unlockRead(stamp); } }. If the validate succeeded, the optimistic read gave valid state; if not, you take a read lock and re-read.
- So an efficient StampedLock read is to try an optimistic read, validate it, and only if the validation fails fall back to a pessimistic read (and unlock). If the validation passes, the optimistic read is sufficient and cheap (see the StampedLock sketch after this list).
- VarHandles are as fast as Unsafe to read and write fields including volatile fields.
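
As a rough sketch of the "start tasks at the same time" tip above (assumed usage, not Kabutz's exact code): each worker registers with a Phaser and blocks on arriveAndAwaitAdvance(), so all workers cross the start line in the same phase.

import java.util.concurrent.Phaser;

// Sketch: use a Phaser as a start barrier so worker threads begin
// their work as close together as possible.
public class PhaserStartTogether {
    public static void main(String[] args) {
        int workers = 4;
        Phaser startLine = new Phaser(1); // the "1" registers this main thread

        for (int i = 0; i < workers; i++) {
            startLine.register();              // one registration per worker
            int id = i;
            new Thread(() -> {
                startLine.arriveAndAwaitAdvance(); // wait until everyone has arrived
                System.out.println("worker " + id + " started at " + System.nanoTime());
                startLine.arriveAndDeregister();   // drop out when done
            }).start();
        }

        // Main thread arrives, so once all workers have also arrived the
        // phase advances and they are all released together; main then
        // deregisters so it takes no part in later phases.
        startLine.arriveAndDeregister();
    }
}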
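And a minimal sketch of the StampedLock optimistic read procedure described above - essentially the standard pattern from the StampedLock javadoc, with a hypothetical Point class holding the state:

import java.util.concurrent.locks.StampedLock;

// Sketch of the optimistic-read pattern: try a lock-free read, validate it,
// and only fall back to a pessimistic read lock if a write intervened.
public class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = lock.writeLock();           // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();   // non-locking optimistic read
        double curX = x, curY = y;               // read state into locals
        if (!lock.validate(stamp)) {             // a write happened in between
            stamp = lock.readLock();             // fall back to pessimistic read
            try {
                curX = x;
                curY = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.hypot(curX, curY);
    }
}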
https://www.baeldung.com/java-common-concurrency-pitfalls
Common Concurrency Pitfalls in Java (Page last updated December 2019, Added 2020-01-29, Author Catalin Burcea, Publisher Baeldung). Tips:
- Reading from an object while another thread changes it can give unexpected results, and unsynchronized concurrent updates can leave the object in a corrupted or inconsistent state.
- The best way to avoid concurrency issues and build reliable code is to work with immutable objects.
- One way to safely work with state in a multithreaded environment is to synchronize updates and access to that state. This means that the state can only be used by one thread at a time, which limits performance.
- CopyOnWriteArrayList achieves thread-safety by creating a separate copy of the underlying array for mutative operations like add or remove. Although it has worse performance for write operations than eg a Collections.synchronizedList, it provides better overall performance when you have significantly more reads than writes.
- ConcurrentHashMap is thread-safe and performs better than using Collections.synchronizedMap wrapper around a non-thread-safe Map.
- To work with non-thread-safe objects in a multi-threaded way, you can: create a new instance each time it is used; use them only through ThreadLocals; synchronize access and updates to the object using a synchronized block.
- A race condition occurs when two or more threads access shared data and at least one tries to change that data at the same time.
- The Atomic* classes (like AtomicInteger) provide thread-safe management of various data types.
- For classes with synchronized accessors and updaters, every individual operation is thread-safe, but combinations of multiple method invocations are not synchronized together, so can lead to inconsistent state. To avoid this, you need to synchronize the group of operations in one block. For example, checking if a key is present and, if not, putting the key and value into a thread-safe map can lead to the put operation happening more than once in concurrent threads, as each thread tests the presence, finds no key, then both put(). Methods like ConcurrentMap.putIfAbsent() are specifically available for this common multi-invocation operation, which can lead to unexpected state even on thread-safe maps (see the check-then-act sketch after this list).
- Memory consistency issues occur when multiple threads have inconsistent views of what should be the same data. Since any thread can cache variable values (caching gives faster access than main memory), this can easily occur. synchronized, volatile and many java.util.concurrent classes provide memory consistency across threads (see the volatile sketch after this list).
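
To illustrate the check-then-act pitfall and the putIfAbsent()/computeIfAbsent() style fix, here is a minimal sketch - the cache and loadValue() method are hypothetical examples, not from the article.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a check-then-act sequence on a thread-safe map is still racy,
// because two threads can both see the key missing and both put a value.
public class CheckThenAct {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Racy: get() and put() are each thread-safe, but not together.
    public String racyLookup(String key) {
        String value = cache.get(key);
        if (value == null) {            // another thread may put() right here
            value = loadValue(key);
            cache.put(key, value);      // may overwrite the other thread's value
        }
        return value;
    }

    // Safe: the whole check-and-insert is one atomic map operation.
    public String safeLookup(String key) {
        return cache.computeIfAbsent(key, this::loadValue);
    }

    private String loadValue(String key) {
        return "value-for-" + key;      // stand-in for an expensive load
    }
}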
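And a minimal sketch of the memory consistency tip, using the classic stop-flag example (assumed, not from the article): marking the flag volatile guarantees the worker thread sees the writer's update.

// Sketch: a volatile flag guarantees that the worker thread sees the
// writer's update; without volatile the cached value may never be refreshed.
public class VolatileFlag {
    private volatile boolean running = true;

    public void start() {
        new Thread(() -> {
            while (running) {
                // do some work; reads of 'running' are guaranteed to see stop()
            }
            System.out.println("worker stopped");
        }).start();
    }

    public void stop() {
        running = false;   // visible to the worker because the field is volatile
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag flag = new VolatileFlag();
        flag.start();
        Thread.sleep(100);
        flag.stop();
    }
}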
Jack Shirazi
Back to newsletter 230 contents