Back to newsletter 233 contents
You'll have seen more than one article or talk saying "microbenchmarks are really easy to get wrong; you should test on your actual application using real data and real load distributions" ... and then immediately following up with "now here's our microbenchmark using JMH on this small section of code ...".
It seems ironic, but there's a good reason why so many people say "don't microbenchmark ... and here's our microbenchmark". Many of the speakers at conferences and authors of engineering blogs are engineers who don't build user applications. They mostly build tools, frameworks and libraries - all those things that you use to pull together a skeleton app, add your code to, and produce a user application.
These tools/frameworks/libraries are used in so many different ways in so many different user applications that the only option for their builders is to microbenchmark. They would love to have performance data sent to them from every application out there, but of course that doesn't happen. So if you develop a user application, the advice - don't microbenchmark - is the right one to follow: use your real data and real load to target what actually needs improving. If you're a tools/frameworks/libraries developer, then sorry, you need to work 100 times harder, improving your code along all sorts of paths that may never be exercised - you can't be sure how it will be used, so you have to make that effort anyway.
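To illustrate what that kind of library-author microbenchmark typically looks like, here is a minimal JMH sketch. The class name, the input data and the two join strategies being compared are hypothetical examples, not taken from any particular article; a real library benchmark would also vary input sizes with @Param and run more forks.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

// Hypothetical example: a library author benchmarking a small utility in
// isolation, because they cannot know the caller's real workload.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@Fork(1)
@State(Scope.Benchmark)
public class StringJoinBenchmark {

    // A stand-in input; a real benchmark would parameterize sizes with @Param.
    private final String[] parts = {"alpha", "beta", "gamma", "delta"};

    // Returning the result lets JMH consume it, avoiding dead-code elimination.
    @Benchmark
    public String joinWithStringBuilder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                sb.append(',');
            }
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    @Benchmark
    public String joinWithStringJoin() {
        return String.join(",", parts);
    }
}

Note how narrow the measurement is: it says nothing about how the utility behaves under a real application's data and load, which is exactly why the "don't microbenchmark" advice holds for application developers.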
Now on to the articles, tools and news; and, of course, all the useful performance tips from those articles are extracted into this month's tips page.
Java performance tuning related news
Java performance tuning related tools
Back to newsletter 233 contents