Back to newsletter 045 contents
This month we interviewed Chuck Palczak, Senior Java Technologist for Veritas, which offers a strong suite of Java application performance monitoring products. Chuck shares his experience from years of helping customers achieve excellent performance with Java.
Can you tell us a bit about yourself and what you do?
I'm a Senior Java Technologist for VERITAS Software Corporation, which means I help to define the features of the company's products - such as VERITAS Indepth for J2EE - for the Java market. I work in Denver, Colorado, and have been involved in application/system performance tuning and testing for more than 20 years.
What do you consider to be the biggest Java performance issue currently?
It's the issue of visibility. Java application developers/deployers really need to "see" what's going on in their application in order to find out what's causing a performance issue. Once they find it, then they can focus on fixing it. Performance issues often surface only when the application is stressed under real workloads. Providing visibility into performance issues - with low overhead - in production environments is a real challenge.
Do you know of any requirements that potentially affect performance significantly enough that you find yourself altering designs to handle them?
Yes. Mid-tier applications are, of course, in the middle. This means that they get requests for data from a front-end tier, read data from a back-end tier, process the data, and then format and send a response to a front-end tier. The biggest issue we see is that mid-tier applications are, surprisingly, often written without considering the impact of large numbers of distributed calls to other tiers. These applications typically run well in a development environment, but then do not scale when processing production workloads. Designing applications to minimize distributed calls whenever possible is almost always a good idea.
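The cost of chatty cross-tier traffic can be made concrete with a sketch. This is an illustrative example, not code from the interview: the "remote" service and its round-trip counter are hypothetical, standing in for any call that crosses a network boundary.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a "remote" service where every call pays a
// network round trip, simulated here by a counter instead of latency.
public class BatchingSketch {
    static int roundTrips = 0;

    // Fine-grained style: one remote call per item.
    static String fetchOne(int id) {
        roundTrips++;                  // each call crosses the network
        return "item-" + id;
    }

    // Coarse-grained style: one remote call for the whole batch.
    static List<String> fetchMany(List<Integer> ids) {
        roundTrips++;                  // a single round trip for all ids
        List<String> out = new ArrayList<>();
        for (int id : ids) out.add("item-" + id);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 100; i++) ids.add(i);

        roundTrips = 0;
        for (int id : ids) fetchOne(id);   // chatty: 100 round trips
        int chatty = roundTrips;

        roundTrips = 0;
        fetchMany(ids);                    // batched: 1 round trip
        int batched = roundTrips;

        System.out.println(chatty + " vs " + batched);
    }
}
```

With per-call latency in the milliseconds, the difference between 100 round trips and 1 is exactly the kind of gap that only shows up once production workloads multiply the request rate.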
What are the most common performance related mistakes that you have seen projects make when developing Java applications?
People tend to focus on what they know best - for example, only on the app server or only on the database - and forget to look at the entire application. What you do on one tier can affect every other tier of the application. When you look at the entire flow of data - from the user through all tiers of the application to storage and then back to the user - you can get a total perspective on how the application works. Again, understanding how applications perform with real workloads as opposed to developer unit testing is key.
Which change have you seen applied in a project that gained the largest performance improvement?
Sometimes J2EE applications will load tens of thousands of database rows into memory and then perform some sort of filtering on the rows in memory. Although fixing this performance issue might require a database schema change, applications rewritten to make more intelligent use of SQL - instead of filtering in memory - can run 1000x faster. Believe it or not, in 2004, these issues are still more common than you would suppose.
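The row-filtering anti-pattern can be sketched without a real database. In this illustrative example the table is simulated as an in-memory list; the point is how many rows cross the wire under each style, not the SQL syntax. The table size and filter threshold are made-up values.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Hypothetical sketch: a simulated 10,000-row "orders" table.
public class FilterSketch {
    static final List<Integer> TABLE =
        IntStream.rangeClosed(1, 10_000).boxed().collect(Collectors.toList());

    // Anti-pattern: "SELECT amount FROM orders", then filter in the JVM.
    static int rowsTransferredInMemory() {
        List<Integer> fetched = new ArrayList<>(TABLE); // all 10,000 rows transferred
        fetched.removeIf(amount -> amount <= 9_990);    // keep only 10 matches
        return TABLE.size();                            // rows that crossed the wire
    }

    // Better: "... WHERE amount > 9990" — the database filters,
    // and only the matching rows are shipped to the application.
    static int rowsTransferredWithSql() {
        List<Integer> fetched = TABLE.stream()
            .filter(amount -> amount > 9_990)           // simulated WHERE clause
            .collect(Collectors.toList());
        return fetched.size();                          // only the matches transferred
    }

    public static void main(String[] args) {
        System.out.println(rowsTransferredInMemory()
            + " vs " + rowsTransferredWithSql());       // prints "10000 vs 10"
    }
}
```

Transferring 10,000 rows to keep 10 wastes database I/O, network bandwidth, and JVM heap all at once, which is why pushing the predicate into the WHERE clause can yield such dramatic speedups.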
Have you found any Java performance benchmarks useful?
Most large Java application performance problems are first addressed with application-specific coding changes that are related to distributed calls to other tiers, and benchmarks aren't too useful for that. Benchmarks become more useful when developing applications that aggressively use application server-provided functionality, such as state replication for high availability or extensive use of EJBs. Although progress is being made in this area, it is still difficult to translate the numbers provided by Java benchmarks into something that is meaningful for new application development.
Do you know of any performance related tools that you would like to share with our readers?
Absolutely. I obviously want to mention VERITAS Indepth for J2EE, because I helped to develop it. It provides you with great visibility into application performance problems through a GUI that lets you drill down from an alert to where the problem lies. For example, it understands response time contributions from Java Servlets, JSP, EJBs, JMS, JNDI, JDBC, and XML. When used with other Veritas products, it correlates activity across web servers, multiple JVMs, and database servers. It also has a SmarTune feature that gives you great advice on how to fix the problem.
One of the hardest things for a production monitoring tool to do is to balance information detail and overhead. Indepth for J2EE uses what we call "adaptive instrumentation" - a patent-pending adaptive byte-code instrumentation technology to maximize visibility into performance issues while minimizing overhead in production environments.
Do you have any particular performance tips you would like to tell our readers about?
Don't optimize before measuring performance. Measure performance under realistic workloads. There is no substitute for deep monitoring of applications in production. And, of course, be aware that distributed calls take a lot longer to execute than local method invocations!
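The "measure first" advice can be followed with nothing more than a small timing harness. This is a minimal sketch, not a tool from the interview; for serious work a real profiler or monitoring product gives far better data, and the workload timed here is an arbitrary placeholder.

```java
// Minimal sketch: time a candidate operation before deciding to
// optimize it, averaging over several iterations after a warm-up
// pass so JIT compilation and class loading don't skew the number.
public class MeasureFirst {
    static long avgNanosPerCall(Runnable work, int iterations) {
        work.run();                                   // warm-up pass
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) work.run();
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        // Placeholder workload: build a 1,000-element string.
        long perCall = avgNanosPerCall(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1_000; i++) sb.append(i);
        }, 100);
        System.out.println("avg ns/call: " + perCall);
    }
}
```

Numbers from a harness like this only mean something if the workload resembles production traffic, which is exactly the point of the tip above.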
Thank you for that interview, Chuck; lots of really useful tips in there that should help our readers improve their application performance.
The JavaPerformanceTuning.com team