Java Performance Tuning



Back to newsletter 119 contents

This month a particular article caught my eye as a must-read. Todd Hoff of highscalability fame wrote about a fascinating example of how an application running at the edge of its capacity can turn gradual degradation into sudden catastrophe (see the article listings section below). The sequence, which caused downtime at FourSquare, was as follows:

- First off, over time the load distribution across their servers slowly drifted slightly askew in memory usage (a 50:67 ratio) - nothing that would normally be a problem. It is fairly common for load distribution across servers to be at least a little askew, especially on the memory side, since load balancing usually targets processor availability rather than memory (and many load distributors don't even balance load, they just round-robin or equivalent).
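
As a rough illustration of the difference (the Backend and Balancer classes below are purely hypothetical, not FourSquare's setup), compare a round-robin chooser, which is blind to memory, with one that prefers the backend reporting the most free memory:

import java.util.List;

// Hypothetical backend descriptor and balancer, purely for illustration.
class Backend {
    final String name;
    volatile long freeMemoryBytes; // however the balancer learns this
    Backend(String name, long freeMemoryBytes) {
        this.name = name;
        this.freeMemoryBytes = freeMemoryBytes;
    }
}

class Balancer {
    private final List<Backend> backends;
    private int next = 0;

    Balancer(List<Backend> backends) { this.backends = backends; }

    // Round-robin: spreads request counts evenly, but is completely blind
    // to how much memory each backend has left, so memory use can drift askew.
    synchronized Backend roundRobin() {
        Backend chosen = backends.get(next);
        next = (next + 1) % backends.size();
        return chosen;
    }

    // Memory-aware: send the next request to the backend with the most free
    // memory, so a backend drifting towards its RAM limit gets relieved.
    Backend mostFreeMemory() {
        Backend best = backends.get(0);
        for (Backend b : backends) {
            if (b.freeMemoryBytes > best.freeMemoryBytes) {
                best = b;
            }
        }
        return best;
    }
}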

- In this case, that meant the server with the larger memory load slightly exceeded its RAM and started paging. Systems can recover, or at least limp along, for a while after paging kicks in, especially if the overall system has spare capacity and the load distributor can really balance, as more requests will be shunted over to the servers with capacity. But paging is definitely something that needs to be monitored for, because it is one of the biggest causes of dramatic performance slowdowns: a paging system can easily run one or even two orders of magnitude slower than the equivalent unpaged system!
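
That kind of headroom can be watched from inside the JVM as well as with OS tools like vmstat. A minimal sketch, assuming a Sun/Oracle JVM where the platform OperatingSystemMXBean can be cast to the com.sun.management variant (the 10% headroom threshold is just an arbitrary example):

import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;

public class MemoryHeadroomCheck {
    public static void main(String[] args) {
        // On Sun/Oracle JVMs the platform bean can be cast to the
        // com.sun.management variant, which exposes physical memory and swap.
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();

        long totalRam  = os.getTotalPhysicalMemorySize();
        long freeRam   = os.getFreePhysicalMemorySize();
        long totalSwap = os.getTotalSwapSpaceSize();
        long freeSwap  = os.getFreeSwapSpaceSize();

        double freeRatio = (double) freeRam / totalRam;
        System.out.printf("RAM free: %.1f%%, swap used: %d MB%n",
                freeRatio * 100, (totalSwap - freeSwap) / (1024 * 1024));

        // Arbitrary example threshold: warn when physical memory headroom
        // drops below 10%, i.e. well before the OS is forced to page the JVM.
        if (freeRatio < 0.10) {
            System.err.println("WARNING: low physical memory - paging risk");
        }
    }
}

Shrinking physical memory headroom and growing swap use are only warning signs, of course; for the actual paging rates you still need the OS-level tools.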


- Here, the paging caused enough performance degradation for a backlog to build up. This in turn made the situation worse - the bigger the backlog, the more stress on the system. Even if the system just queues the incoming requests, that backlog keeps growing; the worst case is where processing actually slows down as the load gets ever larger, and the system snowballs into a death spiral (I'm not clear which of those happened with FourSquare, but either way it was unsustainable).
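
One standard defence against that kind of runaway backlog is to bound the queue and push back on producers when it fills. A minimal sketch of the general technique using a plain ThreadPoolExecutor (the pool and queue sizes are arbitrary, and this is not a description of FourSquare's actual setup):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedBacklog {
    public static void main(String[] args) {
        // A bounded queue caps the backlog; when it is full, CallerRunsPolicy
        // makes the submitting thread do the work itself, which naturally
        // slows the rate of new submissions instead of letting the queue
        // (and memory use) grow without limit.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                8, 8,                        // fixed pool of 8 workers (arbitrary)
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1000),             // bounded backlog
                new ThreadPoolExecutor.CallerRunsPolicy()); // backpressure

        for (int i = 0; i < 10_000; i++) {
            final int request = i;
            pool.execute(() -> handle(request));
        }
        pool.shutdown();
    }

    private static void handle(int request) {
        // placeholder for real request processing
    }
}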

- They hoped that bringing up a new server would relieve the system and allow it to recover, but in this case it failed to relieve the memory strain on the overloaded server: its memory was so fragmented that it carried on paging. Again, in my experience this is quite common - untried failover backups are rarely capable of relieving the stress on the system, because the configuration is often not quite right.

Take FourSquare's experience and turn it to your advantage. Review your system regularly, looking for where it is on the edge of its capacity. Think about what happens when you exceed that capacity: how will the system degrade, and will that degradation be handled gracefully or snowball into downtime? Make sure you have the monitoring in place to identify where your system is and when it crosses those edges. Then test your contingencies with actual representative loads, because anything less and you are just guessing.
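
The JVM can help with part of that monitoring: the standard java.lang.management API will notify you when a heap memory pool crosses a usage threshold, so you hear about approaching the edge before you fall off it. A minimal sketch (the 80% threshold is an arbitrary example):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class HeapThresholdAlert {
    public static void install() {
        // Ask each heap pool to notify when usage crosses 80% of its maximum.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax();
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported() && max > 0) {
                pool.setUsageThreshold((long) (max * 0.8));
            }
        }
        // The platform MemoryMXBean emits the threshold notifications.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        NotificationListener listener = (Notification n, Object handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                // A real system would raise an alert or metric here.
                System.err.println("Heap pool over threshold: " + n.getMessage());
            }
        };
        ((NotificationEmitter) memory).addNotificationListener(listener, null, null);
    }
}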

Now on with this month's newsletter. We have all our usual Java performance tools, news, and article links. Javva The Hutt tells us about the NotFastEnoughException; over at fasterj we have a new cartoon about how multi-cores were invented because of garbage collection; and, as usual, we have extracted tips from all of this month's referenced articles.


News

Java performance tuning related news.

Tools

Java performance tuning related tools.


Articles

Jack Shirazi


Back to newsletter 119 contents

