Java Performance Tuning


News November 2010


Back to newsletter 120 contents

A couple of months ago, I discussed the pros and cons of my preferred "fairly fast and scalable" architecture: asynchronously pipelined components connected on a shared bus with a distributed cache. On the major issue of throughput versus latency, that architecture loses out on the latency front for ultra-low latencies, and I suggested that you can achieve the best of both worlds by providing alternative pathways in your framework which bypass the longer code path and achieve low latency where that is obtainable. Nowadays, when ultra-low latency means microseconds, is that a feasible option?
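To make the "alternative pathways" idea concrete, here is a minimal sketch, not anything from the architecture discussed in that earlier newsletter: all the names (MessageBus, registerLocal, the queue standing in for the pipelined bus) are invented for illustration. A dispatcher checks whether the destination endpoint lives in the current process and, if so, delivers directly, bypassing the asynchronous pipeline entirely:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Hypothetical sketch: a bus that offers a low-latency bypass for
// in-process endpoints and falls back to an async pipeline otherwise.
public class MessageBus {
    private final Map<String, Consumer<String>> localHandlers = new ConcurrentHashMap<>();
    private final BlockingQueue<String> pipeline = new LinkedBlockingQueue<>();

    public void registerLocal(String endpoint, Consumer<String> handler) {
        localHandlers.put(endpoint, handler);
    }

    public void send(String endpoint, String message) {
        Consumer<String> direct = localHandlers.get(endpoint);
        if (direct != null) {
            direct.accept(message);                    // fast path: direct call, no queueing
        } else {
            pipeline.offer(endpoint + ":" + message);  // normal route: queued for the pipelined bus
        }
    }

    public BlockingQueue<String> pipelineQueue() {
        return pipeline;
    }
}
```

The point is that callers use one send() method; the framework, not the application, decides which path a message takes.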

I spend a lot of time in the finance industry, where certain types of applications really want ultra-low latency, and the industry constantly re-evaluates where it is and what offerings there are in that space. I was fascinated to see one company, 29West (acquired this year by Informatica), gaining pretty good traction in the ultra-low latency messaging space. 29West's "Latency Busters Messaging" product claims latencies of 40 microseconds for messaging across a standard 1 gigabit network.
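To get a feel for what latencies on that scale mean, you can time a round trip over the loopback interface yourself. The following is a rough sketch only (the class name and warm-up count are arbitrary, and a serious latency benchmark needs far more care with warm-up, outliers, and clock resolution): it echoes a single byte over a loopback TCP socket and reports the round-trip time with System.nanoTime:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackLatency {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress())) {
            // Echo server: write every received byte straight back.
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept();
                     InputStream in = s.getInputStream();
                     OutputStream out = s.getOutputStream()) {
                    int b;
                    while ((b = in.read()) != -1) out.write(b);
                } catch (IOException ignored) {}
            });
            echo.start();

            try (Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
                client.setTcpNoDelay(true);   // avoid Nagle batching delaying small packets
                OutputStream out = client.getOutputStream();
                InputStream in = client.getInputStream();
                for (int i = 0; i < 1000; i++) { out.write(1); in.read(); }  // crude warm-up
                long start = System.nanoTime();
                out.write(1);
                in.read();
                long rttMicros = (System.nanoTime() - start) / 1000;
                System.out.println("round trip: " + rttMicros + " us");
            }
            echo.join();
        }
    }
}
```

Even this loopback round trip, with no real network involved, typically lands in the tens of microseconds on commodity hardware, which puts the 40 microsecond across-the-network claim in perspective.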


They've achieved this by cutting out the middlemen: by finding the minimal latency path between any two endpoints and building that route into their framework. For example, when you communicate between sockets on the same machine, the operating system will identify that and use the loopback network for more optimal transmission, avoiding some of the network stack overhead. 29West goes even further and uses shared memory to eliminate the network stack completely. You could do the same in your application, of course, but that adds a level of complexity and maintenance that the vast majority of applications won't take on. And even if you did, if you haven't optimized all the other aspects of your messaging, the improvement in "loopback" calls won't even register, since the other messaging calls were already much more of an issue. The loopback calls are just an illustration of how you can cut out overheads if you go back to the drawing board and apply the single remit: how can I message the absolute fastest way between any two endpoints in my application, and how can I put that into a framework so that my application can gain that advantage with minimal change?
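The shared-memory idea can be sketched in plain Java, assuming nothing about 29West's actual implementation: two MappedByteBuffer views of the same file region stand in for the two same-machine endpoints, so a message written through one view is readable through the other without touching the network stack at all. (A real transport would add synchronization and ring-buffer management; this only shows the data path.)

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMemoryDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("shm-demo", ".dat");
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Two independent mappings of the same region stand in for
            // the writer and reader endpoints on the same machine.
            MappedByteBuffer writer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            MappedByteBuffer reader = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            byte[] message = "price-update".getBytes();
            writer.putInt(message.length);   // length-prefixed frame
            writer.put(message);

            int len = reader.getInt();       // read the frame back through the other view
            byte[] received = new byte[len];
            reader.get(received);
            System.out.println(new String(received));
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

In a real deployment the two mappings would be in different processes mapping the same file, which is exactly what lets same-machine messaging skip both the socket API and the kernel network stack.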

I think this is part of an overall move by the IT industry away from genericity towards minimalism. Test-driven development was a major step that way, producing simpler applications that handled what they were required to do without worrying too much about all the things they didn't need to do but might one day. Now that latencies of tens of microseconds are seemingly a realistic target for messaging applications, I think we'll see more and more components and frameworks dedicated to doing one thing very well that can slot into place as required.

And about time too. Now on with this month's newsletter. As usual, we have all our Java performance tools, news, and article links. Javva The Hutt has a guest columnist take over from him this month, Javvanelle The Huttess, who gives us a one-month diary; over at fasterj we have a new cartoon about out-of-the-box configurations; and, as usual, we have extracted tips from all of this month's referenced articles.



Java performance tuning related news.


Java performance tuning related tools.



Jack Shirazi


Last Updated: 2024-01-29
Copyright © 2000-2024 All Rights Reserved.
All trademarks and registered trademarks appearing on this site are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. This site is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.