Java Performance Tuning
Tips December 2017
Back to newsletter 205 contents
https://www.youtube.com/watch?v=dg-GVCGvuGg
Java Memory Model (Page last updated December 2017, Added 2017-12-29, Author Shimi Bandiel, Publisher Trainologic). Tips:
- Assignment to variables of all types except long and double is atomic.
- If several threads execute the same method concurrently and every thread computes and writes the same result to a field (eg caching the hashcode of an immutable structure), the race is benign: it doesn't matter if the threads execute concurrently. You lose a little performance by having multiple threads repeat the work, but you gain by not needing to provide concurrency control.
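A minimal sketch of this benign-race idiom, in the style of String.hashCode() (the class and field names here are illustrative, not from the article):

```java
// Immutable value class that lazily caches its hashcode without any
// synchronization. If two threads race, both compute the same value,
// so the last write wins harmlessly.
public final class Point {
    private final int x, y;
    private int hash; // 0 means "not yet computed"; the race is benign

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {
            h = 31 * x + y;   // any racing thread recomputes...
            hash = h;         // ...but every thread writes the same value
        }
        return h;
    }
}
```

The only cost is that a few threads may redundantly recompute the value before the cached copy becomes visible to them.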
- Volatile variables are always atomically assigned (including double and long), and the latest value of a volatile variable is always visible to all threads.
- When a thread enters a synchronized block, it sees everything made visible by other threads; when it exits the block it makes all its changes visible to other threads.
- False sharing, where two threads on different cores update different variables that happen to share a cache line, can be worked around by padding (adding dummy variables) between the variables so that they end up on different cache lines.
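A manual-padding sketch of the workaround (field layout is JVM-dependent, so this is a heuristic, not a guarantee; the jdk.internal.vm.annotation.@Contended annotation, enabled with -XX:-RestrictContended, is the JDK's own mechanism for this):

```java
// Two hot counters updated by different threads. The seven dummy longs
// (56 bytes) between them are intended to push 'a' and 'b' onto
// different 64-byte cache lines, avoiding false sharing.
public class PaddedCounters {
    public volatile long a;
    @SuppressWarnings("unused")
    long p1, p2, p3, p4, p5, p6, p7;  // padding only, never read
    public volatile long b;
}
```

Note that a JVM is free to reorder fields, so padding like this is best verified with a layout tool such as JOL.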
- Volatile variables which are shared across cores can slow down processing if they are accessed and updated frequently, because each change forces the other cores to invalidate their cached copy, stalling core processing.
- LongAdder implements a long which can be updated across multiple threads with no performance penalty, by providing a separate long for each thread. The read is slow as it needs to accumulate values across all longs, but the writes are fast.
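A small LongAdder example (the class name and workload here are illustrative):

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

// LongAdder spreads contended increments across internal cells, one per
// contending thread, so writes rarely conflict. sum() walks all the
// cells, which is why reads are the slower operation.
public class AdderDemo {
    public static long countTo(int n) {
        LongAdder counter = new LongAdder();
        IntStream.range(0, n).parallel().forEach(i -> counter.increment());
        return counter.sum();
    }
}
```

Compare with a single AtomicLong, where every increment from every thread contends on the same cache line.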
- Every access to a shared mutable object needs to be guarded to avoid unintended race conditions
- ReentrantReadWriteLock has high overheads and is not efficient if it is locking short operations. For long operations, like guarding IO, it should be fine.
- StampedLock is like a ReadWriteLock, but can read optimistically which is very efficient for low write contention scenarios
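A sketch of the optimistic-read pattern with StampedLock (the point class is the standard javadoc-style example shape, with illustrative names):

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticPoint {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = lock.writeLock();
        try { x += dx; y += dy; } finally { lock.unlockWrite(stamp); }
    }

    public double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();   // no blocking, no CAS
        double cx = x, cy = y;                   // read without holding a lock
        if (!lock.validate(stamp)) {             // a write slipped in: fall back
            stamp = lock.readLock();
            try { cx = x; cy = y; } finally { lock.unlockRead(stamp); }
        }
        return Math.hypot(cx, cy);
    }
}
```

When writes are rare, validate() almost always succeeds and reads cost little more than plain field accesses.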
- Updating a shared mutable reference with an AtomicReference that holds final (so immutable) objects is thread safe and very fast, but can generate a lot of garbage. Even with the GC overhead, it can be faster than any locking mechanism for managing concurrency.
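A sketch of that pattern: shared state held as an immutable snapshot behind an AtomicReference, updated by swapping in a whole new snapshot (the Temperature/Reading names are illustrative; the record form assumes Java 16+):

```java
import java.util.concurrent.atomic.AtomicReference;

// Lock-free update of shared state by replacing immutable snapshots.
// updateAndGet retries the CAS on contention; each replaced snapshot
// becomes garbage, which is the GC cost the tip mentions.
public class Temperature {
    record Reading(double min, double max) {}   // immutable snapshot

    private final AtomicReference<Reading> state = new AtomicReference<>(
            new Reading(Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY));

    public void record(double t) {
        state.updateAndGet(r ->
                new Reading(Math.min(r.min(), t), Math.max(r.max(), t)));
    }

    public Reading snapshot() { return state.get(); }
}
```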
http://vmlens.com/articles/3_tips_volatile_fields/
3 Tips for volatile fields in java (Page last updated December 2017, Added 2017-12-29, Author Thomas Krieger, Publisher vmlens). Tips:
- When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A prior to writing to the variable become visible to B after reading the volatile variable
- When you need to share a variable across threads and you can write that variable without first reading it (ie just overwrite it), you can use a volatile field. An example is a boolean flag that needs to be viewable across threads.
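The classic instance of this is a volatile stop flag (a minimal sketch; the class name is illustrative):

```java
// The writer only ever overwrites the flag (a "blind write", no
// read-modify-write), so volatile alone is enough: the worker thread
// is guaranteed to see the new value and exit its loop.
public class Worker implements Runnable {
    private volatile boolean running = true;

    public void stop() { running = false; }

    @Override
    public void run() {
        while (running) {
            // ... do one unit of work per iteration ...
        }
    }
}
```

Without volatile, the JIT compiler may hoist the flag read out of the loop and the worker can spin forever.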
- You can use a volatile field for reading a variable but locks for writing it as long as there is no dependency across fields, and it's okay from a logic perspective for a write to be happening concurrently to a read.
- JDK 9 VarHandle exposes atomic compare-and-set operations directly on volatile fields, which means you can now do what was previously restricted to internal operations (ie previously accessible via hacks, but not intentionally public).
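A sketch of a CAS on an ordinary field via VarHandle (JDK 9+; the counter class is illustrative):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// A lock-free counter built on VarHandle.compareAndSet - the public,
// supported replacement for what previously needed sun.misc.Unsafe
// or an Atomic*FieldUpdater.
public class CasCounter {
    private volatile int count;

    private static final VarHandle COUNT;
    static {
        try {
            COUNT = MethodHandles.lookup()
                    .findVarHandle(CasCounter.class, "count", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public void increment() {
        int cur;
        do {
            cur = count;                               // volatile read
        } while (!COUNT.compareAndSet(this, cur, cur + 1));
    }

    public int get() { return count; }
}
```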
- You can use volatile fields if the writes do not depend on the current value, or if you can make sure that only one thread at a time updates the field
https://stackify.com/java-thread-pools/
Finally Getting the Most out of the Java Thread Pool (Page last updated September 2017, Added 2017-12-29, Author Eugen Paraschiv, Publisher Stackify). Tips:
- A thread pool cuts down on thread lifecycle overheads, and is usually more efficient than creating and destroying threads over the application lifetime.
- Three thread pool interfaces with increasing capabilities, Executor, ExecutorService and ScheduledExecutorService, provide control over thread pool task execution.
- A Java thread pool is composed of: a thread factory for creating new threads; the pool of worker threads; and a queue of tasks waiting to be executed by the pool - if all threads are busy when a new task is submitted, it waits in the queue until a thread becomes available.
- Executors can create several types of thread pools: SingleThreadExecutor, a single thread pool with an unbounded queue; FixedThreadPool, an N-threads pool with an unbounded queue; CachedThreadPool, a pool that creates new threads as they are needed; and WorkStealingThreadPool, a pool targeted at efficiently using CPU cores.
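The four factory methods named above, side by side (pool sizes here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Pools {
    public static void main(String[] args) {
        ExecutorService single   = Executors.newSingleThreadExecutor(); // 1 thread, unbounded queue
        ExecutorService fixed    = Executors.newFixedThreadPool(4);     // 4 threads, unbounded queue
        ExecutorService cached   = Executors.newCachedThreadPool();     // grows on demand, 60s idle timeout
        ExecutorService stealing = Executors.newWorkStealingPool();     // ForkJoinPool sized to CPU cores

        for (ExecutorService e : new ExecutorService[]{single, fixed, cached, stealing}) {
            e.shutdown();  // otherwise the non-daemon worker threads keep the JVM alive
        }
    }
}
```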
- ExecutorService lets you submit a Callable, giving a Future that you can then use to control the task.
- An ExecutorService pool will continue running (even when idle) until explicitly terminated with shutdown() or shutdownNow().
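A minimal Callable-plus-Future sketch combining the two tips above (the method and names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static int square(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> f = pool.submit(() -> n * n);  // Callable<Integer>
            return f.get();                                // blocks until the result is ready
        } finally {
            pool.shutdown();  // without this, the idle pool keeps the JVM alive
        }
    }
}
```

The Future also supports cancel(), isDone() and a timed get() for controlling the task after submission.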
- The ScheduledExecutorService provides a pool that executes scheduled tasks, either after a specified delay or at regular intervals.
- The ThreadPoolExecutor lets you have a pool which is bounded in size but grows and shrinks as threads are needed or become idle for a while.
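A direct ThreadPoolExecutor construction showing those bounds (all sizes here are illustrative; note that with a bounded queue the pool only grows beyond the core size once the queue is full):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static ThreadPoolExecutor create() {
        // 2 core threads, growing to at most 4 under load; extra threads
        // retire after 60s idle; at most 100 tasks queue up waiting.
        return new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));
    }
}
```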
- The ForkJoinPool tries to keep subtasks on the same CPU core, but when idle will take tasks scheduled for other threads from the ends of their queues to fully utilize cores.
- ThreadPoolExecutor provides good control over the number of threads and the tasks that are executed by each thread. This makes it suitable for cases when you have a smaller number of larger tasks that are executed on their own threads. ForkJoinPool is best used to speed up work in cases when tasks can be broken up into smaller tasks.
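A sketch of the ForkJoinPool break-work-into-smaller-tasks style, as a standard divide-and-conquer RecursiveTask (class name and threshold are illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums an array by recursively splitting the range. fork() queues the
// left half for the current worker; idle workers steal queued subtasks
// from the other end of that queue, keeping all cores busy.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                                   // queue for stealing
        long right = new SumTask(data, mid, to).compute();
        return left.join() + right;
    }

    public static long sum(long[] data) {
        return ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
    }
}
```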
- A thread pool that is too large wastes resources which, if needed elsewhere, will badly affect performance; one that is too small will throttle throughput and also badly affect performance.
- Queuing in thread pools is not controlled by the application, so you can have short tasks waiting behind long tasks when ideally you might want them the other way round.
https://www.youtube.com/watch?v=E7oIJBy03z8
Health checking as a Service with Serf (Page last updated October 2017, Added 2017-12-29, Author Lorenzo Saino, Publisher HashiCorp). Tips:
- A simple threshold is insufficient for many health-check scenarios. In a cascading failure, a threshold-based health check removes one host from a loaded cluster, and the redistributed load pushes the remaining hosts over the threshold one after another, removing the whole cluster.
- If one host behaves dramatically differently from the other hosts, it is a candidate for removal from the cluster when it crosses a threshold; but if all hosts are behaving similarly, one host crossing the threshold should not result in removal. To achieve this you need to monitor the group of hosts, not hosts in isolation.
- Cluster-aware anomaly detection takes the health-check metrics of all hosts and passes them through a filter consisting of a de-noiser, an anomaly detector, and a hysteresis filter. A simple de-noising algorithm is a sliding-window moving average; the window size controls how smooth the output signal is - larger is smoother, but the larger the window the longer it takes to detect an anomaly. An anomalous host can then be identified by taking the cluster's average and standard deviation and flagging any host that lies more than N standard deviations from the average, where N is the sensitivity. A single threshold can cause hosts to thrash back and forth across it, so the hysteresis filter adds a second, lower threshold of M standard deviations (M < N) that a flagged host must drop below before it is considered healthy enough to receive traffic again.
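The three filter stages can be sketched as follows (all class names, and the window/threshold values in the usage below, are illustrative, not from the talk):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a cluster-aware anomaly filter: a moving-average de-noiser,
// a z-score anomaly detector, and a two-threshold hysteresis filter.
public class ClusterAnomalyFilter {
    private final Deque<Double> samples = new ArrayDeque<>();
    private final int window;          // moving-average window size
    private final double enter, exit;  // hysteresis thresholds, enter > exit

    public ClusterAnomalyFilter(int window, double enterSigmas, double exitSigmas) {
        this.window = window; this.enter = enterSigmas; this.exit = exitSigmas;
    }

    // Stage 1: de-noise one host's metric with a sliding-window average.
    public double smooth(double next) {
        samples.addLast(next);
        if (samples.size() > window) samples.removeFirst();
        return samples.stream().mapToDouble(Double::doubleValue).average().orElse(next);
    }

    // Stages 2+3: z-score the (smoothed) host metric against the cluster,
    // applying the higher threshold to flag and the lower one to recover.
    public boolean update(boolean wasAnomalous, double host, double[] cluster) {
        double mean = 0, sq = 0;
        for (double v : cluster) mean += v;
        mean /= cluster.length;
        for (double v : cluster) sq += (v - mean) * (v - mean);
        double sigma = Math.sqrt(sq / cluster.length);
        if (sigma == 0) return false;                  // all hosts identical: nothing anomalous
        double z = Math.abs(host - mean) / sigma;
        return wasAnomalous ? z > exit : z > enter;    // hysteresis: two thresholds
    }
}
```

In a real deployment each host would carry its own smoother, and the cluster array would itself contain smoothed values.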
Jack Shirazi
Last Updated: 2024-12-27
Copyright © 2000-2024 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
URL: http://www.JavaPerformanceTuning.com/news/newtips205.shtml
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss