I read your Question of the month "What does volatile do?", but surely there are some situations where synchronized has a lower overhead than volatile?
Here's the way I look at it.
For any thread using volatile or synchronized, the thread runs as follows:

Volatile:
1. Get a global lock on the variable
2. Update the one variable from main memory
3. Process some statements
4. Write any change of the one variable back to main memory
5. Release the lock

Synchronized:
1. Get a global lock on the monitor
2. Update all shared variables that have been accessed from main memory
3. Process some statements
4. Write all shared variables that have been changed back to main memory
5. Release the lock
So if you look at it like this, then volatile can never have a higher overhead than synchronized. It could have the same overhead if the synchronized block only consisted of an access or update to a shared variable, and the JIT was able to optimize away the method call overhead and match the locking overheads. Brian Goetz confirms that with the latest Java memory model, volatile-read and monitor-acquire have the same memory semantics, and volatile-write and monitor-release have the same memory semantics, so a volatile operation can never be more expensive than the corresponding monitor operation, and could be less.
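As a rough sketch of that equivalence (the class and field names here are ours, not from the original question), a single read or write of a volatile field gives the same visibility guarantee as the same access made inside a synchronized block, but without the monitor acquire and release:

    // A minimal sketch (class and field names are ours): for a single read or
    // write of one variable, the volatile field and the synchronized accessors
    // below give the same visibility guarantee, but the volatile version never
    // acquires a monitor.
    public class StopFlag {
        // Volatile: every read sees the most recent write by any thread.
        private volatile boolean stopRequested = false;

        public void requestStop()        { stopRequested = true; }
        public boolean isStopRequested() { return stopRequested; }

        // Synchronized: the same guarantee for a single access, at the cost
        // of a monitor acquire and release on every call.
        private boolean stopRequestedLocked = false;

        public synchronized void requestStopLocked()        { stopRequestedLocked = true; }
        public synchronized boolean isStopRequestedLocked() { return stopRequestedLocked; }
    }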
Of course, there are only a few situations where you have a free choice between these two keywords. Normally, you want atomicity of compound statement execution, and for that you need to use synchronized (or the new java.util.concurrent classes).
It is worth pointing out that increment (i.e. ++) and similar operations are not atomic in Java. So incrementing a volatile variable with volatileVar++ is NOT thread-safe. If you need thread-safe semantics, i.e. no possibility of multiple threads corrupting the variable value by having their updates unexpectedly interfere with each other, then you need to use a synchronized block to increment the variable, e.g. synchronized(LOCK){myVar++;}, regardless of the overheads this causes.
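To make that concrete, here is a sketch (class and field names are ours) of the broken volatile increment next to a synchronized increment and the java.util.concurrent alternative:

    import java.util.concurrent.atomic.AtomicInteger;

    // A sketch (names are ours) of the three increments discussed above.
    public class Counters {
        private volatile int volatileCount = 0;

        private final Object LOCK = new Object();
        private int lockedCount = 0;

        private final AtomicInteger atomicCount = new AtomicInteger(0);

        // NOT thread-safe: ++ is a read-modify-write, so two threads can both
        // read the same value and both write back value + 1, losing an update.
        public void unsafeIncrement() {
            volatileCount++;
        }

        // Thread-safe: the whole read-modify-write happens while holding LOCK.
        public void lockedIncrement() {
            synchronized (LOCK) {
                lockedCount++;
            }
        }

        // Thread-safe without an explicit lock, using java.util.concurrent.
        public void atomicIncrement() {
            atomicCount.incrementAndGet();
        }
    }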
Having said all that, it is worth pointing out one situation where this analysis is wrong. If the JVM analyses the runtime code and determines that a synchronized() call site is completely unnecessary, it can eliminate the locking overhead completely. volatile, on the other hand, cannot be eliminated, so in that case volatile would be more expensive.
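As an illustration (a sketch, assuming a JVM whose escape analysis and lock elision are enabled; the method name is ours), the monitor operations below are candidates for elimination because the lock object never escapes the method, whereas a volatile write at the same point would have to remain visible to other threads and so could not be removed:

    // A sketch of lock elision (assuming a JVM with escape analysis): the lock
    // object is purely local, so the JIT may prove that no other thread can
    // ever synchronize on it and drop the monitor enter/exit entirely.
    public int sumLocally(int[] values) {
        Object localLock = new Object();   // never escapes this method
        int sum = 0;
        synchronized (localLock) {         // candidate for lock elision
            for (int v : values) {
                sum += v;
            }
        }
        return sum;
    }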
The JavaPerformanceTuning.com team