Back to newsletter 120 contents
A couple of months ago, I discussed the pros and cons of my preferred "fairly fast and scalable" architecture. On the major issue of throughput versus latency, my preferred architecture (asynchronously pipelined connected components on a shared bus with distributed cache) loses out on the latency front for ultra-low latencies, and I suggested that you can achieve the best of both worlds if you provide alternative pathways in your framework which bypass the longer code path and achieve low latency where that is obtainable. Now that ultra-low latency means microseconds, is that still a feasible option?
I spend a lot of time in the finance industry, where certain types of applications really want ultra-low latency, and the industry constantly re-evaluates where it is and what offerings there are in that space. So I was fascinated to see that one company, 29West (acquired this year by Informatica), is gaining pretty good traction in the ultra-low latency messaging space. 29West's "Latency Busters Messaging" product claims latencies of 40 microseconds for messaging across a standard gigabit network.
They've achieved this by cutting out the middlemen: finding the minimal latency path between any two endpoints and building that route into their framework. For example, when you communicate between sockets on the same machine, the operating system will identify that and use the loopback interface for more optimal transmission, avoiding some of the network stack overhead. 29West goes even further and uses shared memory to bypass the network stack entirely. You could do the same in your application, of course, but that adds a level of complexity and maintenance that the vast majority of applications won't take on. And even if you did, if you haven't optimized all the other aspects of your messaging, the improvement in "loopback" calls won't even register, since the other messaging calls were already much more of an issue. The loopback calls are just an illustration of how you can cut out overheads if you go back to the drawing board and apply a single remit: how can I message the absolute fastest way between any two endpoints in my application, and how can I put that into a framework so that my application gains that advantage with minimal change?
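29West's actual implementation is proprietary, but the shared-memory idea itself is straightforward to sketch in plain Java using a memory-mapped file. This is a minimal illustration under simplifying assumptions: both "endpoints" live in one process (real IPC would map the same file from two processes and add synchronization), and the class name and message format are my own invention.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMemoryMessage {

    // Round-trips a message through a memory-mapped file: one buffer
    // plays the "sender", a second mapping of the same region plays
    // the "receiver". The bytes travel through shared page-cache
    // memory -- no sockets, no network stack, not even loopback.
    public static String roundTrip(String msg) throws IOException {
        Path file = Files.createTempFile("shm-demo", ".dat");
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer sender =
                channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            MappedByteBuffer receiver =
                channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
            sender.putInt(payload.length); // length prefix
            sender.put(payload);           // message body

            int len = receiver.getInt();   // read length prefix
            byte[] received = new byte[len];
            receiver.get(received);        // read message body
            return new String(received, StandardCharsets.UTF_8);
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("price=99.95"));
    }
}
```

A production framework would of course need a ring buffer, memory barriers or locks between writer and reader, and a fallback to sockets when the endpoints turn out not to share a machine; the point here is only that the same-machine fast path needs none of the network machinery at all.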
I think this is part of an overall move by the IT industry away from genericity towards minimalism. Test-driven development was a major step in that direction, producing simpler applications that handle what they are required to do without worrying too much about all the things they don't need to do but might one day. Now that latencies of tens of microseconds are seemingly a realistic target for messaging applications, I think we'll see more and more components and frameworks dedicated to doing one thing very well that can slot into place as required.
And about time too. Now on with this month's newsletter. As usual, we have all our Java performance tools, news, and article links. Javva The Hutt hands over to a guest columnist this month, Javvanelle The Huttess, who gives us a one-month diary; over at fasterj we have a new cartoon about out-of-the-box configurations; and, as usual, we have extracted tips from all of this month's referenced articles.
Java performance tuning related news.
Java performance tuning related tools.