|
|
|
I recently estimated that our performance testing procedures at one of our customer sites caught 70% of the performance issues before they could reach production. To me that means a whopping 30% are still getting through, not necessarily something to shout about. But our customer sees it differently. As far as they were concerned, it meant that 7 out of 10 potentially serious issues that would cost them money, and that would never have been identified in QA, were stopped well before reaching production. The cost of letting those issues reach production would have been hugely greater than the cost of trapping them in pre-production testing.
Why would those issues not be identified in QA? Performance testing throws up a different set of problems from QA testing. QA testing rarely identifies concurrency bugs (race conditions that can lead to unexpected behavior, typically deadlocks, resource leaks and data corruption), nor does it give you a realistic feel for execution speeds or for where the bottlenecks in the code are. And these are just as much bugs as any functional bug - a project can fail from not achieving usable performance just as it can from not meeting functional requirements.
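To illustrate the kind of concurrency bug that sails through single-threaded QA runs, here is a minimal hypothetical sketch (the class is mine, not from any customer project). The check-then-act sequence is not atomic, so under concurrent load two threads can read the same old count and silently lose an increment, and concurrent structural modification can corrupt the unsynchronized HashMap - yet every single-threaded functional test of the class will pass.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical example: correct under single-threaded functional tests,
    // broken under the concurrent load that a performance test applies.
    public class HitCounter {
        private final Map<String, Integer> hits = new HashMap<String, Integer>();

        // Not thread-safe: the get and the put are not atomic, so two threads
        // can both read the same old count and one increment is silently lost.
        // Concurrent structural modification of the unsynchronized HashMap can
        // also corrupt its internal state.
        public void record(String page) {
            Integer current = hits.get(page);
            hits.put(page, current == null ? 1 : current + 1);
        }

        public int count(String page) {
            Integer current = hits.get(page);
            return current == null ? 0 : current;
        }
    }

The smallest safe change is to synchronize both methods; the point is that neither problem is visible until many threads hit the counter at once.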
So, to some extent, it didn't surprise me when Joshua Bloch reported an integer overflow bug that has existed for over 20 years in the most popular binary search implementation (see the news items below). I see these sorts of bugs regularly. Functional testing exercises all sorts of edge conditions to weed out incorrect code and failure modes, but it does not test the high load and high scale which are the typical conditions under which seemingly correctly coded data structures and their associated algorithms start to break.
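The bug Bloch described is a one-line overflow in the midpoint calculation. The sketch below (the class and method names are mine, for illustration) follows the general shape of the JDK's Arrays.binarySearch; the flaw only bites once the array grows past roughly a billion elements, which is exactly why years of functional testing never triggered it.

    public class BinarySearchOverflow {
        // Classic binary search: (low + high) / 2 overflows to a negative value
        // once low + high exceeds Integer.MAX_VALUE, i.e. for arrays of more
        // than about 2^30 elements - a scale functional tests never reach.
        // The negative mid then throws ArrayIndexOutOfBoundsException.
        static int brokenBinarySearch(int[] a, int key) {
            int low = 0;
            int high = a.length - 1;
            while (low <= high) {
                int mid = (low + high) / 2;   // the overflow is here
                int midVal = a[mid];
                if (midVal < key) {
                    low = mid + 1;
                } else if (midVal > key) {
                    high = mid - 1;
                } else {
                    return mid;               // key found
                }
            }
            return -(low + 1);                // key not found
        }
    }

One published fix replaces the midpoint line with int mid = (low + high) >>> 1; (the unsigned shift interprets the overflowed sum correctly) or int mid = low + ((high - low) / 2); which avoids the overflow altogether.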
Now read on for our other news, including the links to that binary search issue, this month's selected articles and new tools, and of course our many new extracted performance tips.
Java performance tuning related news.
Java performance tuning related tools.