|
|
|
First a big thank you to you, my readers! Last month the newsletter contained a small mistake on the AtomicIntegerFieldUpdater. It certainly didn't get past you: many of you wrote in to tell me, and the first correction (by reader Nils Kilden-Pedersen) came quickly enough that I was able to fix the newsletter as it was going out. Kudos to you all for being so sharp! (The correct version is that AtomicIntegerFieldUpdater.getAndIncrement(object) is atomic.)
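For anyone who missed the discussion, here is a minimal sketch of the corrected usage; the class and field names are mine, purely for illustration:

    import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

    public class Counter {
        // The field being updated must be a volatile int visible to the updater
        private volatile int hits;

        // A single shared updater handles the 'hits' field for every Counter instance
        private static final AtomicIntegerFieldUpdater<Counter> HITS_UPDATER =
                AtomicIntegerFieldUpdater.newUpdater(Counter.class, "hits");

        public int recordHit() {
            // Atomic read-modify-write on this instance's 'hits' field
            return HITS_UPDATER.getAndIncrement(this);
        }
    }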
This month's extracted tips include several high-level ones, but I want to start with Ross Mason's article. You know how it goes when a background process is doing heavy IO - the virus scan kicks in or a backup starts - and the whole system slows down? Have you wondered why something reading or writing the disk intensively slows down everything, including the completely unconnected program you are running? Ross explains that the page cache gets used exclusively by the intensive IO application and any other IO operations get queued - which means that even if your program does almost no IO, the tiny IO operation it does make gets queued up until the current flush completes, pausing your program. And of course that sequence of tiny IO operations getting queued repeats until the intensive IO is finished. This has many implications, not least the following extracted tip:
"If an application thread is writing while the kernel is flushing and a GC starts at this time, the GC cannot proceed until the flush completes which then lets the application thread proceed to a safepoint. This can make a long pause (in the GC log, you would see a long pause but not very much usr nor sys time)."
Two other tips I'll highlight (with illustrative sketches following both) are:
"A zero copy request (eg FileChannel.transferTo) has the kernel copy data directly from disk to a socket without going through the application, improving application performance and reducing context switches."
"If building a segmented block data structure where elements can be added, it is efficient to leave spare capacity in a block to allow inserts to be performed without having to split the block on each insert."
And three high-level checklist tips from Srinath Perera's article, nothing new, but nice to have together (a sketch applying the first one follows the list):
"Unix 'Load average' represents the number of processes waiting in the OS scheduler queue. Load average will increase when any resource is limited (e.g. CPU, network, disk, memory etc.). A load average of more than 4x number of cores is a (too) high load."
"If performance targets are not being met and the machine has unused capacity, you should: test increase concurrent request load; check for locks; increase thread pool size; check the network has additional capacity."
"If performance targets are not being met and the machine is fully loaded, you should: check for other processes loading the machine; CPU profile the application if its CPU usage of the application is high; check if garbage collection is taking more than 10% of application elapsed time; check for IO load; check if the machine is paging."
Now on to all our usual sections: links to tools, articles, news, talks and, as ever, all the extracted tips from this month's referenced articles, including those gems above.
Java performance tuning related news.
Java performance tuning related tools.