Java Performance Tuning
Tips July 2013
http://williamedwardscoder.tumblr.com/post/53349845356/scaling-my-server
Scaling my Server (Page last updated June 2013, Added 2013-07-30, Author William Edwards, Publisher williamedwardscoder). Tips:
- You can move the DB into RAM and write-through to the database for persistence and eventual synchronization.
- The speed of writing to a DB is better measured in requests/sec rather than rows/sec - inserting 1000 rows in one batched statement is much faster than inserting 1000 rows in 1000 individual statements (see the JDBC batching sketch after this list).
- Architect the system so that only live subscribers put load on the system, inactive subscribers should not load the live system.
- Stateless servers are trivially (horizontally) scalable.
- Sequence numbers on requests and responses let you allow some servers to be out-of-date, at the cost of client retries or extra server synchronization overhead on queries.
- Paging data is an important optimization for client-server interactions, as it handles both user and network aborts without overloading the server's queues with pending outbound data.
- JSON can be a massive overhead; you may need a custom marshalling implementation and/or to pre-serialize parts of the message.
- Strings created by StringTokenizer and String.substring() point to the original character data instead of creating a copy (this is a double-edged sword).
- When holding huge amounts of data in memory, HashMaps and similar implementations are very memory inefficient, especially if storing primitive data types. GNU Trove collections can be far more memory efficient (see the primitive-map sketch after this list).
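The batching tip above can be illustrated with plain JDBC; this is a minimal sketch, assuming a hypothetical events(id, name) table and an already-open Connection - the point is one round trip per batch instead of one per row:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchInsertSketch {
        // Insert many rows using batches rather than one statement per row.
        static void insertBatched(Connection conn, int[] ids, String[] names) throws SQLException {
            String sql = "INSERT INTO events (id, name) VALUES (?, ?)"; // hypothetical table
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < ids.length; i++) {
                    ps.setInt(1, ids[i]);
                    ps.setString(2, names[i]);
                    ps.addBatch();                 // queue the row client-side
                    if ((i + 1) % 1000 == 0) {
                        ps.executeBatch();         // one round trip for up to 1000 rows
                    }
                }
                ps.executeBatch();                 // flush any remaining rows
            }
        }
    }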
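And a minimal sketch of the primitive-collections tip, comparing a boxed HashMap with GNU Trove's TIntIntHashMap (assumes Trove 3.x on the classpath; the map contents are purely illustrative):

    import gnu.trove.map.hash.TIntIntHashMap;
    import java.util.HashMap;
    import java.util.Map;

    public class PrimitiveMapSketch {
        public static void main(String[] args) {
            // Boxed map: every key and value becomes an Integer object, plus per-entry overhead.
            Map<Integer, Integer> boxed = new HashMap<Integer, Integer>();
            // Trove map: keys and values are held in int[] arrays, with no boxing.
            TIntIntHashMap primitive = new TIntIntHashMap();
            for (int i = 0; i < 1000000; i++) {
                boxed.put(i, i * 2);
                primitive.put(i, i * 2);
            }
            System.out.println(boxed.get(42) + " " + primitive.get(42));
        }
    }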
http://highscalability.com/blog/2013/6/10/the-10-deadly-sins-against-scalability.html
The 10 Deadly Sins Against Scalability (Page last updated June 2013, Added 2013-07-30, Author Todd Hoff, Publisher highscalability). Tips:
- Tune disk I/O - fix the hardware (e.g. use RAID 10 not RAID 5).
- Database locking and scanning overhead kills performance under load. Don't use the database as a queue.
- Don't use a database for text searching, use a dedicated text search solution.
- Cache at every layer of the system.
- If there is code that is consistently problematic, don't keep patching it, rewrite it from scratch.
- Object Relational Mappers are hard to optimize and tweak.
- Synchronous, Serial, Coupled and Locked Processes are all scalability blockers.
- Row level locking is better than table level locking.
- Use async replication.
- Use eventual consistency for clusters.
- A single database server is a choke point; use parallel databases and let a driver select between them.
- Having no metrics means you have no real visibility into your system, so you don't know where the bottlenecks are.
- Be able to (dynamically) turn off features, so that when a spike hits you can reduce load by disabling non-essential features (a minimal toggle sketch follows this list).
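One common way to get that kind of switch is a runtime-checked flag registry; this is a minimal sketch of the idea (the FeatureFlags class and the flag name are illustrative, not from the article):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Minimal runtime feature-toggle registry: flip flags at runtime without redeploying.
    public final class FeatureFlags {
        private static final ConcurrentHashMap<String, AtomicBoolean> FLAGS =
                new ConcurrentHashMap<String, AtomicBoolean>();

        public static boolean isEnabled(String feature) {
            AtomicBoolean flag = FLAGS.get(feature);
            return flag == null || flag.get();           // features default to enabled
        }

        public static void set(String feature, boolean enabled) {
            AtomicBoolean created = new AtomicBoolean(enabled);
            AtomicBoolean existing = FLAGS.putIfAbsent(feature, created);
            if (existing != null) {
                existing.set(enabled);
            }
        }
    }

    // Usage when a spike hits, e.g. from an admin endpoint or JMX:
    //   FeatureFlags.set("recommendations", false);     // shed optional work
    //   if (FeatureFlags.isEnabled("recommendations")) { renderRecommendations(); }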
http://www.infoq.com/articles/G1-One-Garbage-Collector-To-Rule-Them-All
G1: One Garbage Collector To Rule Them All (Page last updated July 2013, Added 2013-07-30, Author Monica Beckwith, Publisher infoQ). Tips:
- G1 is parallel, concurrent and has a multi-phased marking cycle; and can work with large heaps while providing reasonable worst-case pause times.
- The basic G1 GC parameters are the heap range (-Xms for the minimum heap size and -Xmx for the maximum) and a realistic pause time goal (-XX:MaxGCPauseMillis).
- For G1 GC, neither the young nor the old generation has to be contiguous, so generation sizing is more dynamic.
- In G1 GC objects that span more than half a region are considered "Humongous objects" and are directly allocated into "humongous" regions in the old generation.
- G1 GC selects an adaptive young generation size based on your pause time goal, ranging anywhere from the preset min to the preset max sizes. When eden reaches capacity a young GC (evacuation pause) will occur. This is a stop-the-world (STW) pause that copies the live objects from the regions that make up eden to the 'to-space' survivor regions.
- G1 tenuring works as with other collectors: objects are copied from survivor from-space regions to to-space regions until the tenuring threshold is reached, at which point the objects are promoted to old generation regions.
- When the occupancy of the total heap crosses the -XX:InitiatingHeapOccupancyPercent (default 45%) threshold, G1 GC will trigger a multi-phased concurrent marking cycle.
- A number of G1 flags control the exact number of old regions added to the CSets (the set of regions chosen to be collected): -XX:G1MixedGCLiveThresholdPercent (occupancy threshold of live objects in an old region for it to be included in the collection); -XX:G1HeapWastePercent (threshold of garbage that you can tolerate in the heap); -XX:G1MixedGCCountTarget (target number of garbage collections within which the regions with at most G1MixedGCLiveThresholdPercent live data should be collected); -XX:G1OldCSetRegionThresholdPercent (limit on the max number of old regions that can be collected during a mixed collection). An illustrative launch line follows this list.
- G1 doesn't (yet as of java 7u40) give a hard guarantee for pause time goals, so is not necessarily suitable for low latency applications.
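Putting those flags together, an illustrative G1 launch line might look like the following (the heap sizes, pause goal and threshold values are examples only, not recommendations from the article; some of the mixed-collection flags are experimental in 7u40 and need -XX:+UnlockExperimentalVMOptions):

    java -XX:+UseG1GC \
         -Xms4g -Xmx4g \
         -XX:MaxGCPauseMillis=200 \
         -XX:InitiatingHeapOccupancyPercent=45 \
         -XX:+UnlockExperimentalVMOptions \
         -XX:G1MixedGCLiveThresholdPercent=65 \
         -XX:G1HeapWastePercent=10 \
         -XX:G1MixedGCCountTarget=8 \
         -XX:G1OldCSetRegionThresholdPercent=10 \
         -jar myapp.jar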
http://blog.newrelic.com/2013/06/25/more-from-velocity-most-commonly-missed-application-performance-issues/
Most Commonly Missed Application Performance Issues (Page last updated June 2013, Added 2013-07-30, Author David Spark, Publisher newrelic). Tips:
- For page downloads to browsers, use cache control headers to cache appropriately; break up scripts into (static) modules that can be cached in the browser (a servlet sketch of setting cache headers follows this list).
- Monitor for memory leaks, and fix those if they are significant.
- Monitor the application's use of caching, and improve the hit ratios.
- Test to larger scales, especially to expected production scales.
- Expect variable and potentially large network latency; design and test with this in mind.
- It's common to fix the wrong thing; don't waste time, measure the actual bottleneck and fix that.
- Measure the performance from the user's perspective (put in monitoring capability that gives you those metrics).
- Don't load all functionality on startup if this delays startup; load only the immediately required functionality, and lazily load the rest or load it on demand.
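A minimal servlet sketch of the cache-header tip above (the one-day max-age and the resource path are illustrative, not from the article):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Serves a static script module with cache headers so browsers can reuse it.
    public class StaticScriptServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("application/javascript");
            // Let browsers and intermediate proxies cache this module for one day.
            resp.setHeader("Cache-Control", "public, max-age=86400");
            InputStream in = getServletContext().getResourceAsStream("/static/module.js"); // illustrative path
            OutputStream out = resp.getOutputStream();
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } finally {
                in.close();
            }
        }
    }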
http://java.sys-con.com/node/2648336
Fix Memory Leaks in Java Production Applications (Page last updated May 2013, Added 2013-07-30, Author Andreas Grabner, Publisher Java Developers Journal). Tips:
- Increasing the JVM heap is only a temporary fix for a memory leak; it increases the time between crashes. Combined with more frequent restarts this is a preventive approach, though it does not solve the underlying issue.
- Monitor the JVM memory spaces. If memory usage is overall constantly growing, this is likely to be a memory leak.
- Enable dumping the heap on OutOfMemoryError (flags shown after this list). The resulting heap dump can be analyzed to find which objects are using up most of the heap, and which objects are holding on to them, causing the memory leak.
- Find where the leaking objects are allocated to determine the start of their lifecycle; determine when the lifecycle should end and so when the objects should be garbage collected, and determine why these objects are not garbage collected (typically either the appropriate release code is missing, or it hasn't been properly enabled, or it is failing to keep up with the rate of generation of objects).
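The standard HotSpot flags for capturing that heap dump (the dump path is illustrative):

    java -XX:+HeapDumpOnOutOfMemoryError \
         -XX:HeapDumpPath=/var/log/myapp/heapdumps \
         -jar myapp.jar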
http://www.infoq.com/articles/Virtual_Panel_Performance_Tuning_Face_off
Performance Tuning Face-Off (Page last updated June 2013, Added 2013-07-30, Author Vikram Gupta, Ben Evans, Charlie Hunt, Kirk Pepperdine, Martin Thompson, Monica Beckwith, Publisher infoQ). Tips:
- Most performance problems, once detected, are relatively easily solvable - so the biggest difficulties are in diagnosing the problem. Learning to use a profiler should be a core competency of any developer.
- Measure, Don't Guess.
- Plan for performance, build a performance testing infrastructure, performance test regularly as part of the product lifecycle.
- Ask what is the: expected throughput; lowest throughput you can live with, and when and how long can the throughput drop to that level; throughput metric to measure; expected latency and acceptable variances in latency; worst acceptable latency; latency metric to measure; maximum amount of memory that can be used; memory usage metric to measure.
- You usually have a tradeoff between memory usage, CPU utilization, throughput and response times. Reducing one without increasing any of the others is possible, but usually requires far more work than simply trading one off against another.
- You should understand and characterize the worst-case, expected and projected production performance behavior of the application.
Jack Shirazi