Java Performance Tuning
Tips February 2013
Back to newsletter 147 contents
The State Of Open Source Monitoring (Page last updated November 2012, Added 2013-02-26, Author Jason Dixon, Publisher DZone). Tips:
- A traditional monitoring solution has a central gathering manager which gets metrics from hosts and services (via a pull or push) and provides a dashboard for alerts and recovery, and emits email and pager notifications.
- Monitoring refers to metrics collection, viewing trends, fault detection, notifications of thresholds breached, capacity planning, and even business analytics.
- A ping response doesn't really tell you that the system at the other end is usable. You need OS metrics to determine that.
- The first thing you tend to do on receiving an alert is view the trend data for that metric. But then you tend to check lots of other metrics to track root cause, such as disk, memory, swap, users logged in, network, connections, current transactions, any concurrent admin jobs - and even this is usually just the start.
- Metrics collection is the most important part of the monitoring workflow. It should be treated as part of the application from design level, with long-term data storage and recovery. Collect as much as possible, and store for as long as possible.
- Finding faults using metrics requires metric tracking (ideally in real-time), and thresholds that indicate unacceptable conditions. Ideally thresholds should be relative to the baseline or trend rather than absolute.
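The baseline-relative threshold idea can be sketched as below; this is an illustrative class, not from the article, and the 1.5x deviation factor is an arbitrary assumption:

```java
import java.util.Arrays;

// Flags a metric sample as anomalous when it deviates from the recent
// baseline (mean of prior samples) by more than a relative factor,
// rather than comparing against a fixed absolute limit.
public class BaselineThreshold {
    // True if |sample - baseline| exceeds (factor - 1) * baseline.
    static boolean breaches(double[] history, double sample, double factor) {
        double baseline = Arrays.stream(history).average().orElse(sample);
        return Math.abs(sample - baseline) > (factor - 1.0) * baseline;
    }

    public static void main(String[] args) {
        double[] latencies = {100, 102, 98, 101};          // steady baseline ~100ms
        System.out.println(breaches(latencies, 105, 1.5)); // false: within 50% of baseline
        System.out.println(breaches(latencies, 250, 1.5)); // true: 2.5x the baseline
    }
}
```

The same absolute value (say 250ms) would be fine for a service whose baseline is 200ms but alarming for one whose baseline is 100ms, which is why the relative check is preferred.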
- Monitoring workflow consists of: collecting metrics; tracking them against a threshold; notifying if the threshold is breached; routing notifications to the people who can handle them asap; providing dashboards (with drill down) that allow operators to see the status of (parts of) the system; providing trend visualization to allow identification and resolution of issues; and capacity analysis.
- A good monitoring solution should: be composable so only the parts needed are installed; have well-defined interfaces and protocols; not require administrator privileges to install or use; be distributed; be resilient (able to route metrics around failed nodes); be automatable; and be able to correlate metrics across a distributed system.
- A monitoring architecture consists of: sensors that collect metrics (ideally with no state holding required); sensors send metrics as an event stream to the store or to aggregators for storing; aggregators can track and convert data if needed; a state engine tracks changes and holds rules which define responses to changes (i.e. at its lowest level it is a fault detector); storage engines handle long-term storage and management of metrics; a scheduler provides management of notification routing (e.g. to on-call ops, etc); notifiers responsible for composing alerts; visualizers for viewing data in various formats (graphs, dashboards, etc).
Increasing heap size - beware of the Cobra Effect (Page last updated October 2012, Added 2013-02-26, Author Nikita Salnikov-Tarnovski, Publisher plumbr). Tips:
- If you make your heap much larger, the most likely adverse consequence is (much) longer pause times. Below 4GB, pause times (of occasionally a few seconds) are likely to be manageable in many user-facing applications; however heaps of tens of gigabytes will have unacceptable pause times of tens of seconds or even minutes.
- If pause times are too long because of a very large heap, try scaling horizontally with multiple smaller heap JVMs instead of one larger heap JVM.
- If pause times are too long because of a very large heap and horizontal scaling is not possible, then try switching to garbage collectors that target smaller pause times at the cost of throughput: i.e. the concurrent or G1 collectors.
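On HotSpot JVMs of that era, those collectors are selected with command-line flags; the heap size and pause goal below are illustrative values, not recommendations:

```shell
# Concurrent Mark-Sweep collector (lower pauses, some throughput cost)
java -Xmx8g -XX:+UseConcMarkSweepGC MyApp

# G1 collector with a target pause time (a goal the JVM aims for, not a guarantee)
java -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 MyApp
```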
- When trying different garbage collectors, you must measure differences on a realistic application test; there is no way of accurately predicting the advantages and disadvantages for your application without measuring.
- When the CMS garbage collector is not fast enough to finish the collection before the tenured generation is full, it falls back to the standard stop-the-world GC, so you can still face 30 or more second pauses for heaps of above 16 GB.
- Azul Zing's pauseless C4 garbage collector is targeted at keeping low pause times (milliseconds) for very large heaps.
- If you cannot remove long pause times from horizontal scaling or changing garbage collectors, a less maintainable alternative is to allocate memory off the heap, e.g. using NIO allocateDirect() or with third-party tools such as Terracotta BigMemory or Apache DirectMemory.
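A minimal sketch of the NIO allocateDirect() route: the buffer below lives in native memory outside the garbage-collected heap, so the collector never scans or moves its contents (the data and sizes here are arbitrary):

```java
import java.nio.ByteBuffer;

// Stores values in a direct (off-heap) buffer: native memory that the
// garbage collector does not scan, at the cost of manual serialization.
public class OffHeapDemo {
    static ByteBuffer store(long[] values) {
        ByteBuffer buf = ByteBuffer.allocateDirect(values.length * Long.BYTES);
        for (long v : values) buf.putLong(v);
        buf.flip();  // prepare the buffer for reading
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = store(new long[]{1L, 2L, 3L});
        System.out.println(buf.isDirect());  // true: native memory, not heap
        System.out.println(buf.getLong());   // 1
    }
}
```

The maintainability cost the tip mentions is visible even here: you must encode and decode your objects to bytes yourself, which is exactly what tools like BigMemory automate.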
Android performance tuning (Page last updated December 2012, Added 2013-02-26, Author Cameron McKenzie, Publisher TheServerSide). Tips:
- Stick with one WebView - delivering a UI through a single WebView means solving new problems like how to get the back button to work properly, but the performance gain is enormous.
- People want a quick start when they are accessing your mobile application - they don't want to wait through a long load time.
- Split IO-bound and storage-bound processes into separate threads that can run in parallel on multi-core devices.
- Caching accelerates the mobile experience.
- Providing visual cues about the progress of downloading files or slowly rendering images creates a feeling that the application is loading faster.
- Creating the illusion of fluidity (around any choppiness in performance like waiting on a remote communication) makes the experience more enjoyable for the end user.
Notes on Distributed Systems for Young Bloods (Page last updated January 2013, Added 2013-02-26, Author Jeff Hodges, Publisher somethingsimilar). Tips:
- Distributed systems are different because connections (remote requests) often fail. And the failure mode may be partial, such that some part of a request may have succeeded, and it's not even possible to determine which part (for a while). This means you can't be certain you have a system-wide consistent view of data. You need to design from the ground up for frequent failure.
- Avoid coordinating machines wherever possible - often described as "horizontal scalability". The real trick of horizontal scalability is independence - keeping the requirement for consensus between machines to a minimum.
- "It's slow" on a distributed system is a problem that is orders of magnitude harder to solve than on a single machine.
- Bound resource utilization during times of overload and system failure, by using backpressure - dropping requests and signalling back to upstream systems that load is too high to process.
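One common way to bound utilization in Java is a thread pool with a bounded queue that rejects work when full; the rejection is the backpressure signal to the upstream caller. A sketch (the pool sizes are deliberately tiny to show the mechanism):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BackpressureDemo {
    // Returns true when the saturated pool rejects further work: that
    // rejection, not unbounded queueing, is the backpressure signal.
    static boolean saturatesAndRejects() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),           // bounded queue: one slot
                new ThreadPoolExecutor.AbortPolicy()); // reject when full
        CountDownLatch release = new CountDownLatch(1);
        Runnable slow = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        boolean rejected = false;
        pool.execute(slow);      // occupies the single worker thread
        pool.execute(slow);      // fills the one queue slot
        try {
            pool.execute(slow);  // no capacity left
        } catch (RejectedExecutionException e) {
            rejected = true;     // signal upstream: shed load, retry later, degrade
        }
        release.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(saturatesAndRejects() ? "overloaded - told upstream" : "accepted");
    }
}
```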
- Partial availability is better than no availability - but the system needs to be designed to handle partially successful requests rather than return failure in that situation.
- Collecting metrics from production systems is the only way to really know what your system is actually doing.
- Log files tend to have a disproportionate amount of space taken up by the wrong log data. This makes them less useful than they could be. Prefer metrics to log files.
- Percentiles (50th, 99th, 99.9th, 99.99th) are more accurate and informative than averages in the vast majority of distributed systems. Averages are normally only meaningful for normally distributed statistics, and distributed systems tend not to show normal distributions for the statistics of most interest.
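The difference is easy to demonstrate: in the sketch below (nearest-rank percentile, with made-up latency numbers) one 5-second outlier among a hundred 10ms requests drags the average up sixfold while leaving the median untouched:

```java
import java.util.Arrays;

// Nearest-rank percentile over latency samples: a single slow outlier
// barely moves the median but inflates the average.
public class Percentiles {
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        long[] latencies = new long[100];
        Arrays.fill(latencies, 10);   // 99 requests at 10ms...
        latencies[99] = 5000;         // ...and one 5-second outlier
        System.out.println(percentile(latencies, 50));    // 10 (median: unaffected)
        System.out.println(percentile(latencies, 99.9));  // 5000 (tail: exposed)
        System.out.println(Arrays.stream(latencies).average().getAsDouble()); // 59.9 (misleading)
    }
}
```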
- Estimating capacity requirements is important.
- Use feature flags to cutover parts of the system in a gradual way to avoid global failure. Accept that multiple versions of infrastructure and data is a norm, not a rarity. Feature flags are best understood as a trade-off, trading local complexity for global simplicity and resilience.
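A percentage-based rollout flag is one minimal way to implement this gradual cutover; the class below is an illustrative sketch (hashing each user into a stable bucket so the same users stay in the rollout group), not a reference to any particular flag library:

```java
// A percentage-based feature flag: each user hashes into a stable bucket
// 0-99, so a flag at 10% consistently enables the feature for the same
// ~10% of users, allowing a gradual cutover instead of a global switch.
public class FeatureFlag {
    private final int rolloutPercent;

    FeatureFlag(int rolloutPercent) {
        this.rolloutPercent = rolloutPercent;
    }

    boolean enabledFor(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < rolloutPercent;
    }

    public static void main(String[] args) {
        FeatureFlag newCheckout = new FeatureFlag(10);  // 10% rollout
        System.out.println(newCheckout.enabledFor("user-42"));
        System.out.println(new FeatureFlag(100).enabledFor("anyone")); // true: fully on
    }
}
```

The "local complexity" trade-off is the branch this introduces at every call site; the "global simplicity" is that a bad rollout is contained to one bucket of users and reversed by changing a number.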
- The closer the processing and caching of your data is kept to its persistent storage (in time as well as space), the more efficient your processing will be, and the easier it will be to keep your caching consistent and fast.
- If multiple instances of requests for the same kind of data are made near to one another, they should be joined into one larger request.
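A minimal sketch of that coalescing, with an in-memory map standing in for what would really be a remote store (so the single fetchBatch call models one network round trip instead of N):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Joins many near-simultaneous single-key lookups into one batched
// request: duplicates collapse and all values arrive in a single call.
public class BatchFetcher {
    // Stand-in for a remote store; a real one would make one network call here.
    static Map<String, String> fetchBatch(Set<String> keys) {
        Map<String, String> result = new HashMap<>();
        for (String k : keys) result.put(k, "value-of-" + k);
        return result;
    }

    static Map<String, String> coalesce(List<String> requestedKeys) {
        return fetchBatch(new HashSet<>(requestedKeys));  // dedupe, then one call
    }

    public static void main(String[] args) {
        Map<String, String> r = coalesce(Arrays.asList("a", "b", "a", "c"));
        System.out.println(r.size());    // 3: the duplicate "a" was collapsed
        System.out.println(r.get("a"));  // value-of-a
    }
}
```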
- Writing cached data back to storage is bad - but can easily occur, so should be protected against.
- Systems created of subsystems with well defined APIs (services) are more robust and flexible.
Parallelization of a simple use case explained (Page last updated November 2012, Added 2013-02-26, Author Tomasz Nurkiewicz, Publisher nurkiewicz). Tips:
- An interleaved set of I/O and CPU processing needed to complete a request can potentially be sped up by doing the I/O for the next portion in parallel with the CPU processing of the currently retrieved input.
- Threads are the way to achieve background loading of data, and using a thread pool is advised to give you greater control over the threads.
- Executors and ExecutorService provide simple code for obtaining and submitting execution to a thread pool. Use ExecutorService.submit() to submit the code to be executed.
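The pattern the article describes can be sketched with ExecutorService.submit; the fetch and process methods below are stand-ins for the real I/O and CPU work:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// While the main thread does CPU work on chunk N, a pool thread fetches
// chunk N+1, so the I/O latency is hidden behind the computation.
public class PipelinedFetch {
    static String fetch(int chunk) {      // stand-in for slow I/O
        return "chunk-" + chunk;
    }

    static String process(String data) {  // stand-in for CPU work
        return data.toUpperCase();
    }

    static String run(int chunks) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        StringBuilder out = new StringBuilder();
        Future<String> next = pool.submit(() -> fetch(0));
        for (int i = 0; i < chunks; i++) {
            String current = next.get();  // wait for this chunk's I/O
            final int nextIndex = i + 1;
            if (nextIndex < chunks) {
                next = pool.submit(() -> fetch(nextIndex));  // prefetch in background
            }
            out.append(process(current)).append(' ');  // CPU work overlaps the next fetch
        }
        pool.shutdown();
        return out.toString().trim();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(3));  // CHUNK-0 CHUNK-1 CHUNK-2
    }
}
```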
- Size your pool according to the available resources for the particular problem being resolved.
- Both Spring and EJB support asynchronously running methods in a clean way.
A Detailed Look at The New File API in Java 7 (Page last updated October 2012, Added 2013-02-26, Author Dmitriy Rogatkin, Publisher InfoQ). Tips:
- java.nio.file.attribute and java.nio.file.spi provide more sophisticated file related operations separate from the older java.io.File. The Path and Files classes separate out the path to a file and the operations on files (which were combined in the File class). FileSystem represents different file systems: local; remote; ISO images; Zip archives; and proprietary (for example an in-memory file system). FileStore represents file storage attributes.
- Java 7 nio file api provides memory efficient traversal of filesystems using FileVisitor; traversal is via Files.walkFileTree, and the FileVisitor receives callbacks as it visits files (FileVisitor.visitFile) and directories (FileVisitor.preVisitDirectory, FileVisitor.postVisitDirectory), and on failures (FileVisitor.visitFileFailed).
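A minimal use of that callback traversal, counting regular files without ever holding the whole tree in memory (the temp-directory setup in main is just for the demonstration):

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Files.walkFileTree streams visitFile callbacks to the visitor, so only
// the current path is held, not the whole directory tree.
public class FileCounter {
    static int countFiles(Path root) throws IOException {
        final int[] count = {0};
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                count[0]++;
                return FileVisitResult.CONTINUE;
            }
        });
        return count[0];
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("walk-demo");
        Files.createFile(dir.resolve("a.txt"));
        Files.createFile(dir.resolve("b.txt"));
        System.out.println(countFiles(dir));  // 2
    }
}
```

SimpleFileVisitor supplies CONTINUE defaults for the other callbacks (preVisitDirectory, postVisitDirectory, visitFileFailed), so only the methods of interest need overriding.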
- Java 7 nio file api supports (hard) links and symbolic links and access and manipulation of file attributes.
- Java 7 nio file api provides a watch mechanism on files and directories which allows for JVM optimization of file watching (though initial implementations were naive polling for many filesystems). A multiplexed capability supports watching for multiple events (across multiple files and directories) at the same time with one thread. The watch service polling mechanism supports blocking, non-blocking and blocking with timeout operations.
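Registering a directory with that multiplexed service looks like the sketch below; many directories can be registered with one WatchService and a single thread can then take(), poll() or poll(timeout) for events across all of them:

```java
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

// One WatchService multiplexes events from every registered directory;
// the returned WatchKey identifies this directory's registration.
public class WatchDemo {
    static WatchKey register(Path dir) throws Exception {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        return dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("watch-demo");
        WatchKey key = register(dir);
        System.out.println(key.isValid());  // true: the directory is being watched
        // WatchService.poll() is the non-blocking check; poll(timeout, unit)
        // blocks with a timeout and take() blocks indefinitely.
    }
}
```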
- Java 7 nio file api supports atomic operations on the file system, allowing synchronization of processes against the file system.
- Reasons why you might consider migrating an older system to Java 7 nio file include: Memory problems in file traversing; A need to support file operations in ZIP archives; A need for fine grained control over file attributes; Needing watch services.
Last Updated: 2018-04-29
Copyright © 2000-2018 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.