Java Performance Tuning
Tips February 2023
Back to newsletter 267 contents
https://medium.com/javarevisited/top-performance-issues-every-developer-architect-must-know-part-2-concurrency-a15bd0b2b3b6
Top Performance issues every developer & architect must know - part 2 - Concurrency (Page last updated May 2022, Added 2023-02-26, Author Dinesh Chandgr, Publisher Javarevisited). Tips:
- Running multiple simultaneous threads is a simple task as long as they do not interact with mutable shared objects - objects that are accessible to multiple threads and can also be modified by multiple threads.
- Immutable shared objects pose no issues for multithreaded code.
- Deadlocks can occur when two or more threads each need multiple shared resources to complete their task and they acquire those resources in different orders.
- Too much synchronization results in many slow and stalled threads.
- In a livelock, two or more threads keep changing state in response to one another. Instead of waiting as in a deadlock, they do non-progressing wasted work. Avoid livelocks by checking for invalid states (for example, a message that is re-added to a queue every time it throws an exception).
- Thread pool sizing is important to performance. If the thread pool is too small, requests will wait unnecessarily while resources that could process them sit idle; if it is too large, too many threads execute concurrently, competing for the processing resources and causing context switching that reduces overall efficiency.
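The deadlock tip above can be sketched in Java. This is a minimal, hypothetical example (the Account class, ids and amounts are invented for illustration) that avoids deadlock by always acquiring the two locks in a fixed global order, so two concurrent transfers between the same pair of accounts can never hold one lock each and wait on the other.

```java
public class TransferDemo {
    static class Account {
        final int id;
        long balance;
        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    // Locks are always taken in ascending id order; a transfer a->b and a
    // concurrent transfer b->a therefore lock in the same order and cannot deadlock.
    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 100), b = new Account(2, 100);
        // Two opposing transfers running concurrently - the classic deadlock setup.
        Thread t1 = new Thread(() -> transfer(a, b, 10));
        Thread t2 = new Thread(() -> transfer(b, a, 5));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(a.balance + " " + b.balance); // prints "95 105"
    }
}
```

The same idea scales to more than two locks: impose any total order on the lockable resources and make every thread acquire in that order.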
https://www.youtube.com/watch?v=SLklP4vQMas
Monitoring Latencies: How fast is your REST service? (Page last updated October 2022, Added 2023-02-26, Author Fabian Staber, Publisher Devoxx). Tips:
- Service time is the time a request takes to be processed by the server. Response time is the time a request takes from being initiated at the client and getting the response to the client. The response time includes service time but also transit time and delays from queueing at the server entry point.
- Your load test can bottleneck your generated load and give you misleading results. You need to ensure that the load being generated is not being artificially restricted by the load test harness.
- Measuring just the service time on the server side can give you a misleading idea of the client's performance view.
- Averages are heavily affected by outliers, so they are a poor measure of latency: a single average tells you little about the actual behaviour.
- Maximum latency values are only useful for load testing, where they give you the worst case scenario. In production, the odd outlier is usually not meaningful and can be ignored.
- Percentiles (95%, 99%, 99.9%) are the interesting statistics to monitor for latencies, and the best to alert on. But percentiles cannot be aggregated across instances in a cluster; for that you need histograms.
- Histograms can be aggregated across instances in a cluster, and you can use them to get approximate but useful enough percentile information. But histograms need their bucket boundaries defined - unless you use exponential boundaries (also called sparse or native histograms).
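To illustrate the averages-versus-percentiles point, here is a small sketch (the sample data and the nearest-rank percentile helper are invented for illustration): one 2000 ms outlier among 99 fast requests triples the average, while the 95th and 99th percentiles still report the typical latency.

```java
import java.util.Arrays;

public class LatencyStats {
    // Nearest-rank percentile: value at the given percentile (0-100) of the samples.
    static long percentile(long[] samples, double pct) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // 99 requests at 10 ms and one 2000 ms outlier.
        long[] latencies = new long[100];
        Arrays.fill(latencies, 10);
        latencies[99] = 2000;

        double avg = Arrays.stream(latencies).average().orElse(0);
        System.out.println("avg   = " + avg);                        // 29.9 - skewed by one outlier
        System.out.println("p95   = " + percentile(latencies, 95));  // 10
        System.out.println("p99   = " + percentile(latencies, 99));  // 10
        System.out.println("p99.9 = " + percentile(latencies, 99.9)); // 2000
    }
}
```

Note this computes exact percentiles from raw samples; monitoring systems instead approximate them from histogram buckets so they can be aggregated across instances.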
https://foojay.io/today/low-latency-microservices-a-retrospective/
Low Latency Microservices, A Retrospective (Page last updated October 2022, Added 2023-02-26, Author Peter Lawrey, Publisher foojay.io). Tips:
- For consistent performance testing (or any testing that relies on time elapsed behaviour), you need a deterministic clock. Make the time an input to the test and have that (plus elapsed differences) flow through the testing.
- For low latency work, nanosecond time resolution is needed.
- Customized encoding (eg of Strings, Dates, Times, DateTimes) with efficient conversions is needed for low latency.
- Trivially copyable objects (where most or all of a Java object can be copied as a raw memory copy without any serialization logic) provide optimal efficiency when moving object data across services.
- For low latency, transactionality needs to be as simple as possible. When you have to have a complex transaction, idempotency can significantly reduce the complexity of recovery.
- Although zero garbage collection is ideal, it comes with a large maintenance burden. Code can be made more maintainable at the cost of the occasional garbage collection, and this is often an acceptable tradeoff.
- For the lowest latencies you need to run at around 1% utilization! However, at this utilization level hardware tries to power save, which directly counters low latency. You need to add performance tests for when the CPU is running cold.
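The deterministic clock tip can be sketched with java.time.Clock: make time an input to the component under test instead of calling System.nanoTime() directly. In this sketch (the TestClock and Deadline classes are invented for illustration, not from the article), elapsed time is simulated by advancing the clock rather than by real waiting, so the test is fully deterministic.

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class DeterministicClockDemo {
    // A minimal controllable clock: time only advances when the test says so.
    static class TestClock extends Clock {
        private Instant now;
        TestClock(Instant start) { this.now = start; }
        void advance(Duration d) { now = now.plus(d); }
        @Override public Instant instant() { return now; }
        @Override public ZoneId getZone() { return ZoneOffset.UTC; }
        @Override public Clock withZone(ZoneId zone) { return this; }
    }

    // Hypothetical time-dependent component: takes the clock as an input.
    static class Deadline {
        private final Clock clock;
        private final Instant expiry;
        Deadline(Clock clock, Duration timeout) {
            this.clock = clock;
            this.expiry = clock.instant().plus(timeout);
        }
        boolean expired() { return clock.instant().isAfter(expiry); }
    }

    public static void main(String[] args) {
        TestClock clock = new TestClock(Instant.parse("2023-02-26T00:00:00Z"));
        Deadline d = new Deadline(clock, Duration.ofMillis(100));
        System.out.println(d.expired()); // prints "false"
        clock.advance(Duration.ofMillis(150)); // simulate 150 ms passing, instantly
        System.out.println(d.expired()); // prints "true"
    }
}
```

In production the same component would simply be constructed with Clock.systemUTC(), so the injection costs nothing at runtime.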
Jack Shirazi
Last Updated: 2024-12-27
Copyright © 2000-2024 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
URL: http://www.JavaPerformanceTuning.com/news/newtips267.shtml
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss