Java Performance Tuning
Tips June 2018
Back to newsletter 211 contents
http://vmlens.com/articles/techniques_for_thread_safety/
7 Techniques for thread-safe classes (Page last updated January 2018, Added 2018-06-27, Author vmlens, Publisher vmlens). Tips:
- Avoiding state is thread-safe: If a method neither accesses nor updates object state (ie instance and static variables), then it is thread-safe.
- Avoiding shared state is thread-safe: If a method accesses state that is only available within the current thread (eg thread local variables) then it is thread-safe.
- Pass state through queues: If all shared state is only mutated by single-consumer-queues, then the system is thread-safe.
- Immutable state is thread-safe.
- Mature concurrent datastructures (eg ConcurrentHashMap) are thread-safe.
- Synchronized blocks are thread-safe.
- Volatile fields are thread-safe if the writes to the field do not depend on the current value of the field.
- Atomic updates are thread-safe.
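Two of the techniques above, synchronized blocks and atomic updates, can be sketched side by side in a minimal counter class (the class and method names here are illustrative, not from the vmlens article):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SafeCounters {
    private final AtomicLong atomicCount = new AtomicLong();
    private long lockedCount;

    // Atomic update: incrementAndGet is a single lock-free
    // read-modify-write, so no lock is needed.
    public long incrementAtomic() {
        return atomicCount.incrementAndGet();
    }

    // Synchronized block: only one thread at a time can execute
    // the body, making the read-increment-write sequence safe.
    public long incrementLocked() {
        synchronized (this) {
            return ++lockedCount;
        }
    }
}
```

Note that a plain `volatile long` would not be safe here: `count++` is a read followed by a write that depends on the value read, which is exactly the case the volatile tip above excludes.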
https://medium.com/@NetflixTechBlog/performance-under-load-3e6fa9a60581
Performance Under Load (Page last updated March 2018, Added 2018-06-27, Author Eran Landau, William Thurston, Tim Bozarth, Publisher NetflixTechBlog). Tips:
- Concurrency is the number of requests a system can service at any given time and is normally driven by a fixed resource such as CPU. The concurrency limit is normally calculated using Little's law: for a system at steady state, the average concurrency is the product of the average arrival rate and the average time a request spends in the system (L = λW). Any requests in excess of the concurrency limit cannot immediately be serviced and must be queued or rejected.
- Queueing enables full system utilization in spite of non-uniform request arrival and service time. Systems fail when no limit is enforced on this queue; overall the arrival rate must match the exit rate.
- Dynamic concurrency control can use the TCP congestion control algorithm - this keeps track of various metrics to estimate the system's concurrency limit and constantly adjusts the congestion window size. It starts by sending requests at a low concurrency while frequently probing for higher concurrency by increasing the congestion window as long as latencies don't go up. When latencies do increase it assumes the limit is reached and backs off to a smaller congestion window size. The limit is continually probed, resulting in a saw-tooth pattern. To apply this to concurrency control, compare the minimum latency (statically determined) and the currently sampled latency (this ratio is referred to as the gradient); the concurrency limit is adjusted using the formula: newLimit = currentLimit x gradient + queueSize (queue size is tunable - a good default is the square root of the current limit).
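The gradient update rule above can be sketched in a few lines (a minimal illustration of the formula only; the class and method names are assumptions, not the API of the Netflix concurrency-limits library):

```java
public class GradientLimit {
    // newLimit = currentLimit * gradient + queueSize, where
    // gradient = minLatency / sampledLatency, and the queue size
    // defaults to sqrt(currentLimit) as suggested above.
    public static double update(double currentLimit,
                                double minLatencyMs,
                                double sampledLatencyMs) {
        double gradient = minLatencyMs / sampledLatencyMs;
        double queueSize = Math.sqrt(currentLimit);
        return currentLimit * gradient + queueSize;
    }
}
```

When sampled latency equals the minimum, the gradient is 1 and the limit grows by the queue-size term (probing upward); when sampled latency rises, the gradient drops below 1 and the limit backs off, producing the saw-tooth pattern described above.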
https://blog.acolyer.org/2016/11/28/kraken-leveraging-live-traffic-tests-to-identify-and-resolve-resource-utilization-bottlenecks-in-large-scale-web-services/
Kraken: Leveraging live traffic tests to identify and resolve resource utilization bottlenecks in large scale web services (Page last updated November 2016, Added 2018-06-27, Author Veeraraghavan et al, Publisher OSDI 2016). Tips:
- Ensure the load test workload is representative of real traffic by using shadow traffic: replaying logs of actual traffic.
- Use live traffic to load test in the test environment by routing copies to the test systems.
- By using copies of live traffic (whether from logs or live routing), the test environment gets traffic bursts, overloads etc, forcing the systems being tested to handle them, so increasing the system's resilience to faults.
- Performance testing as a live discipline changes the culture and can dramatically improve capacity.
- Two metrics are useful as proxies for the user experience: the web servers' 99th percentile response time and the HTTP fatal error rate (5xx responses).
https://www.youtube.com/watch?v=_SJgGeC2z74
Complicating Complexity: Algorithm Performance (Page last updated February 2018, Added 2018-06-27, Author Maurice Naftalin, Publisher JAX). Tips:
- Premature optimization is almost any optimization done before measuring actual performance.
- O(N) time complexity analyses tend to assume that: you can ignore constant factors; instructions have the same duration; memory doesn't matter; and instruction execution dominates performance. But on modern systems memory DOES matter and memory access can easily dominate over instruction execution. The processor can execute 300 instructions in the time it takes for data to be retrieved from main memory (RAM).
- Instruction execution is only one bottleneck. Others are disk/network IO, garbage collection and resource contention.
- CPU Cache misses can dominate performance. Two common reasons for cache misses are: insufficient capacity in the cache; and pre-fetching failing to predict the right data (normally the data adjacent in main memory to the last fetch, termed stride fetching).
- Objects referencing other objects are not normally in adjacent memory, so they can't benefit from stride fetching - potentially a big performance inefficiency.
- Objects have a lot of memory overhead (headers, padding, references), so the data the algorithm actually needs is only a small part of the data pulled into the processor cache, which means more fetches are needed.
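The contrast in the last few tips can be made concrete: summing a primitive array walks adjacent memory, so stride fetching works in its favor, while walking a chain of objects chases pointers to potentially scattered heap locations (the class below is an illustrative sketch, not code from the talk):

```java
public class Locality {
    public static final class Node {
        // Each Node is a separate heap object with its own header;
        // consecutive nodes are not guaranteed to be adjacent in memory.
        public long value;
        public Node next;
        public Node(long v, Node n) { value = v; next = n; }
    }

    // Contiguous layout: a 64-byte cache line holds 8 longs, and the
    // pre-fetcher can predict the next stride.
    public static long sumArray(long[] data) {
        long sum = 0;
        for (long v : data) sum += v;
        return sum;
    }

    // Pointer-chasing layout: on a large list, each node dereference
    // can be a cache miss.
    public static long sumList(Node head) {
        long sum = 0;
        for (Node n = head; n != null; n = n.next) sum += n.value;
        return sum;
    }
}
```

Both methods compute the same result; the difference only shows up in memory-access behavior, which is exactly the cost that O(N) analysis ignores.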
Jack Shirazi
Last Updated: 2025-01-27
Copyright © 2000-2025 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
URL: http://www.JavaPerformanceTuning.com/news/newtips211.shtml
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss