Java Performance Tuning
Tips April 2022
Back to newsletter 257 contents
https://www.youtube.com/watch?v=2nzM8UGeKsE
Advanced Caching Patterns at Wix Microservices (Page last updated January 2022, Added 2022-04-29, Author Natan Silnitsky, Publisher DevoxxUK). Tips:
- Monitor the cache hit to cache miss ratio; if it is low, the cache is not helping.
- The "Reduce network failure risk" cache pattern consists of a reliable static datastore (network disks, S3, etc.) holding the most recent configuration data, so that a network failure to the primary store doesn't delay services. This type of cached data needs to fit into memory, should not be updated very often, and stale data must be valid enough.
- The "Reduce data retrieval latency" cache pattern writes updates to caches (which are closer to the services and already hold the old data) immediately after the primary store is updated, providing the lowest possible latency for the most frequently used data. On startup, the cache loads the most recent updates.
- The "Scale external traffic" cache pattern caches stable data outside the service, preventing load where already calculated data is good enough. This is usually implemented as a reverse proxy, where sites cache static data in caches closer to browsers. These caches need to be invalidated when the data becomes out of date; the next requests then go through to the back end and repopulate the cache from the newly generated data.
- A cache pattern decision chart: 1 - if your data is crucial for startup and not highly dynamic, use a persistent file storage mechanism closer to the services to store the data; 2 - if your data is highly dynamic, use a dedicated caching technology that can handle this, with caches closer to the service or lower latency than the primary data store; 3 - if your data is not highly dynamic and can be easily invalidated, use a reverse proxy.
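The hit/miss monitoring from the first tip can be sketched as a thin wrapper around a map (a minimal illustration, not code from the talk; the class and loader are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Function;

// Hypothetical cache wrapper that counts hits and misses so the hit ratio
// can be exported as a metric and alerted on.
public class MonitoredCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();
    private final Function<K, V> loader; // loads from the primary store on a miss

    public MonitoredCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value != null) {
            hits.increment();
            return value;
        }
        misses.increment();
        return cache.computeIfAbsent(key, loader);
    }

    // Hit ratio in [0,1]; if this stays low, the cache is not helping.
    public double hitRatio() {
        long h = hits.sum(), m = misses.sum();
        return (h + m) == 0 ? 0.0 : (double) h / (h + m);
    }
}
```

The first request for a key is a miss that loads from the primary store; repeats are hits, so the ratio climbs towards 1 for cache-friendly workloads.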
https://medium.com/@learncsdesign/microservices-observability-design-patterns-bdfa5807f81e
Microservices Observability Design Patterns (Page last updated March 2022, Added 2022-04-29, Author Neeraj Kushwaha, Publisher Medium). Tips:
- Patterns that make troubleshooting easier: Health check; Log aggregation; Distributed tracing; Exception tracking; Application metrics; Audit logging.
- A liveness check is different from a health check: liveness says the service is running, but not that it is ready to serve requests. The service might be out of resources or still initializing. The health check (or readiness check) determines whether the service is ready to process requests.
- Log aggregation pipelines send logs of all service instances to a centralized logging server, allowing analysis and alerts across instances. Logging infrastructure should provide log aggregation, storage and search.
- Distributed tracing lets you identify request paths and focus on where an issue arises for problematic requests.
- Application metrics enable monitoring and alerting, the most critical aspect of observability.
- Exception Tracking reports exceptions to a centralized service, which de-duplicates exceptions, generates alerts, and handles exception management and tracking.
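The liveness vs readiness distinction can be sketched as follows (illustrative names, not from the article; the dependency checks stand in for real probes like a database ping):

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Hypothetical sketch: liveness only says the process is running; readiness
// additionally requires that the service can actually serve requests.
public class HealthChecks {
    private volatile boolean started = false;
    private final List<BooleanSupplier> dependencyChecks; // e.g. DB ping, cache ping

    public HealthChecks(List<BooleanSupplier> dependencyChecks) {
        this.dependencyChecks = dependencyChecks;
    }

    public void markStarted() { started = true; }

    // Liveness check: the service process is up (orchestrator restarts it if not).
    public boolean isLive() { return started; }

    // Readiness check: the service is initialized AND its dependencies respond
    // (the load balancer stops routing to it if not).
    public boolean isReady() {
        return started && dependencyChecks.stream().allMatch(BooleanSupplier::getAsBoolean);
    }
}
```

A service that has started but lost its database is live (should not be restarted) yet not ready (should be taken out of rotation), which is exactly the case the tip warns about.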
https://blog.vanillajava.blog/2021/12/low-latency-microservices-retrospective.html?spref=tw
Low Latency Microservices, A Retrospective (Page last updated December 2021, Added 2022-04-29, Author Peter Lawrey, Publisher vanillajava). Tips:
- To ensure services produce the same results every time, whether in tests or between production and any redundant system, make time an input. This allows time-outs and clock triggered events to be tested, ensuring each system does the same things at the same point, producing the same deterministic output.
- Millisecond timestamps are insufficient; use nanosecond resolution timestamps.
- Use unique IDs for tracing events through a system.
- How the objects are stored can make a big difference in performance - encoding strategies include: encoding Strings and dates in long fields; object pooling Strings; storing text in a mutable/reusable field. Ultimately copying an encoded object should need no serialization.
- Simplify restarts with idempotency - e.g. replay any messages you are not sure were fully processed.
- For very low latency microservices, one thing that unexpectedly didn't matter was ultra-low garbage collection - occasional GCs were tolerable.
- Lowest latencies tend to occur at around 1% of peak capacity. At lower throughputs, hardware tries to power save, increasing latency when an event does occur; at higher throughputs, you increase the chance of a message arriving while you are still processing previous ones, adding to latency.
- High throughput disk drives are specialist hardware; many drives cannot sustain very high throughput for long.
- Current ultra-low latency targets are 100 microseconds at the 99.9th percentile (less strict) or 20 microseconds at the 99.99th percentile (more strict).
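The "make time an input" tip maps naturally onto injecting a java.time.Clock rather than calling the system clock directly (a minimal sketch with illustrative names; note that Clock.instant() does not give the nanosecond resolution the tips recommend, for which System.nanoTime() or a specialized clock is needed):

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;

// Hypothetical service: time comes from an injected Clock, so timeouts and
// clock-triggered events can be tested deterministically.
public class TimeoutService {
    private final Clock clock;
    private final Duration timeout;

    public TimeoutService(Clock clock, Duration timeout) {
        this.clock = clock;
        this.timeout = timeout;
    }

    // Deterministic given the clock: same clock state -> same answer.
    public boolean isTimedOut(Instant requestStart) {
        return Duration.between(requestStart, clock.instant()).compareTo(timeout) > 0;
    }
}

// Production wiring: new TimeoutService(Clock.systemUTC(), Duration.ofMillis(5));
// Test wiring: Clock.fixed(Instant.parse("2022-04-29T00:00:00Z"), ZoneOffset.UTC)
// makes every run of the test observe the same "now".
```

With a fixed clock, tests and redundant systems replaying the same inputs see the same time and so produce the same output, which is the determinism the retrospective is after.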
Jack Shirazi
Last Updated: 2024-11-29
Copyright © 2000-2024 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
URL: http://www.JavaPerformanceTuning.com/news/newtips257.shtml
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss