Back to newsletter 158 contents
In the highest-performance, highly concurrent systems, efficiency efforts are increasingly targeted at non-locking algorithms. That doesn't always mean compare-and-swap, but often it does. To remind you, with a compare-and-swap operation you pass the existing value and the new value you want to store; if the compare-and-swap finds that the value actually stored matches the "existing" one you passed, then it stores the new value you passed, otherwise it doesn't (because something else has changed the value since you read your "existing" one). The return code/object tells you whether the operation succeeded or failed (the "swap" refers to the fact that, on success, the operation returns the value that was at the location before it was overwritten). Compare-and-swap operations are usually much more efficient than locking to update a value, for two reasons: the cost of obtaining a lock is relatively high; and most values are read more often than they are updated. Locking is considered a pessimistic approach - you assume the worst case, that something else will try to change the value while you do, so you lock everything else out to guarantee your operation isn't wasted. Compare-and-swap is an optimistic approach - you assume that mostly nothing else is going to change the value while you are, so it's best to just give it a go, and deal with the (hopefully) occasional failure with a retry.
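To make the retry pattern concrete, here is a minimal sketch of a compare-and-swap update loop using the JDK's AtomicInteger (the class and method names here are my own illustration, not from any referenced article):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: an optimistic counter updated via compare-and-swap.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        while (true) {
            int current = value.get();      // read the "existing" value
            int next = current + 1;         // compute the new value
            // compareAndSet stores next only if the slot still holds current;
            // it returns false if another thread changed the value first
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // failed: another thread got in first, so loop and retry
            // with a freshly read value
        }
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 4000: no updates lost, no locks taken
    }
}
```

Note that no lock is ever taken: each thread optimistically attempts its update and only pays a retry cost on the (usually rare) occasions when another thread got there first.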
This should tell you that there are scenarios where locking can outperform a compare-and-swap approach, specifically where a value has a high number of concurrent writers. In that situation, compare-and-swap fails a lot and many threads can end up spinning as they keep retrying compare-and-swap operations; in contrast, the lock is guaranteed to succeed in its update once the lock is obtained, no retries are needed, so fewer instructions are executed overall in that high-contention scenario.
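For comparison, the pessimistic equivalent of the same counter might look like this sketch using java.util.concurrent's ReentrantLock (again, the class name is mine):

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: a pessimistic counter guarded by an explicit lock.
public class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;

    public int increment() {
        lock.lock();        // pessimistic: exclude all other writers up front
        try {
            return ++value; // once the lock is held, the update always
                            // succeeds - no retry loop is needed
        } finally {
            lock.unlock();
        }
    }

    public int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockedCounter counter = new LockedCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 4000: each update succeeds first time
    }
}
```

The trade-off is visible in the structure: the lock version pays the locking cost on every update (even when uncontended), but under heavy write contention it never wastes work on failed attempts.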
Now step back to last November's newsletter. I included a reference to, and extracted tips from, an article entitled "Lightweight Contention Management for Efficient Compare-and-Swap Operations". What this provides is a way to shift the point at which locking becomes better than compare-and-swap quite a way - quite likely to where you are almost always better off using compare-and-swap. The technique is quite simple - just back off (i.e. pause) before retrying the compare-and-swap operation when it fails. The article goes into multiple different strategies for determining how long to wait, but finds that the simplest - a constant small wait - is highly effective, in some cases providing an order of magnitude improvement in the throughput of update operations! What this backoff does is give the other contending threads just enough time to slip in and complete their updates, so that when it comes back to your thread's turn, there's a gap where it can also slip in and do its update. This technique should become part of your standard armoury if you are working on highly concurrent systems.
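The constant-backoff variant is a one-line change to the retry loop. Here's a sketch (the pause length below is an arbitrary illustrative choice, not a value tuned or recommended by the article):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

// Illustrative sketch: compare-and-swap with a constant backoff on failure.
public class BackoffCasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Assumed illustrative pause; in practice this would be tuned
    // for the platform and the contention level.
    private static final long BACKOFF_NANOS = 100;

    public int increment() {
        while (true) {
            int current = value.get();
            if (value.compareAndSet(current, current + 1)) {
                return current + 1;
            }
            // CAS failed under contention: pause briefly so the other
            // contending threads can complete their updates, leaving a
            // gap in which our retry is likely to succeed
            LockSupport.parkNanos(BACKOFF_NANOS);
        }
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        BackoffCasCounter counter = new BackoffCasCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 4000: correct, with fewer wasted retries
    }
}
```

Compared with the bare retry loop, the only difference is the pause on failure - the uncontended fast path is unchanged, which is why the technique costs so little when contention is low.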
Now on to yet more links to Java performance tools, news, articles and, as ever, all the extracted tips from all of this month's referenced articles.
Java performance tuning related news.
Java performance tuning related tools.