Back to newsletter 033 contents
This month we interviewed Steven Haines, J2EE architect for Quest Software, Inc.
JPT: Can you tell us a bit about yourself and what you do?
I am a J2EE Architect working for Quest Software as its J2EE Domain Expert. Quest's strength in the monitoring field is derived from the fact that we seek out experts in specific disciplines and bring them in to apply their knowledge toward developing monitoring and performance management software. My role has been to evaluate various application servers (BEA WebLogic, IBM WebSphere, Oracle Application Server, JBoss, etc.) and to determine what performance information we can gather from them. We are then able to define the criteria by which we determine whether an application server is running optimally. To date, we have delivered 24-by-7 diagnostics tools for WebLogic and WebSphere.
JPT: What do you consider to be the biggest Java performance issue currently?
Focusing on the J2EE arena, the biggest problems I have encountered are architectural. Most project teams do not spend enough time designing their applications for performance and scalability from the beginning. Luckily, design patterns have been formalized and more organizations are paying closer attention to them. The key is to use these design patterns correctly.
JPT: Do you know of any requirements that potentially affect performance significantly enough that you find yourself altering designs to handle them?
Probably the biggest factor that affects the performance of application servers is memory. Regardless of the application domain, the largest hurdle when deploying applications is tuning memory and reducing that "pause" experienced when a major garbage collection runs. Therefore, projects that I have worked on have had significant requirements placed on memory, and specifically on HTTP Sessions. Reducing the amount of memory in HTTP Sessions can greatly improve performance. In the past, my designs have adjusted to scrutinize every potential byte that may enter a session. I identify global data to move to the application scope and data that can be stored in cookies in the user's browser. Finally, most remaining stateful information is shifted to the EJB tier where it can be more easily managed.
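The scope promotion described above can be sketched in plain Java. This is a minimal model of the idea, not the servlet API itself: per-user sessions and the application scope are represented as maps, and the class and method names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of moving global, read-mostly data out of per-user
// session storage into a single application-scoped store, so it is
// held once rather than once per user. Names here are illustrative.
public class ScopePromotion {
    static final Map<String, Object> applicationScope = new ConcurrentHashMap<>();

    static void promote(Map<String, Object> session, String key) {
        Object value = session.remove(key);          // free the per-user copy
        if (value != null) {
            applicationScope.putIfAbsent(key, value); // keep one shared copy
        }
    }
}
```

With 1,000 concurrent users, data promoted this way is stored once instead of 1,000 times, which is the whole point of the exercise.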
JPT: What are the most common performance related mistakes that you have seen projects make when developing Java applications?
The most prevalent mistake I have seen is the misuse of memory, specifically with respect to HTTP Sessions. Many development teams work in isolated environments, performing unit tests on their work, but seldom performing load tests. A memory-hungry application can stand up to a minimal load (20 users, or so) with great performance. But, as soon as the load is increased significantly, performance rapidly degrades. For example, storing 100K of data in a session object only yields a couple megabytes for 20 users, but an increase to 1,000 users pushes that count up to 100MB, just for user sessions. Furthermore, if the application is running in a clustered environment, the problem is compounded by the replication of session data between servers.
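The arithmetic behind that example is worth making explicit. A quick sketch (using decimal units, 100K = 100,000 bytes, purely for illustration):

```java
public class SessionFootprint {
    // Per-session payload multiplied by concurrent users gives the
    // aggregate heap cost of session data (before any replication).
    static long footprintBytes(long bytesPerSession, int users) {
        return bytesPerSession * users;
    }

    public static void main(String[] args) {
        long perSession = 100_000; // ~100K of session data per user
        System.out.println(footprintBytes(perSession, 20) / 1_000_000 + " MB");    // prints "2 MB"
        System.out.println(footprintBytes(perSession, 1000) / 1_000_000 + " MB");  // prints "100 MB"
    }
}
```

In a cluster that replicates sessions to one backup server, the 100MB effectively doubles, plus the network and serialization cost of keeping the copies in sync.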
JPT: Which change have you seen applied in a project that gained the largest performance improvement?
The most significant gain I have seen applied in a project has been the result of adopting a pre-built architecture. Most developers want to build everything from scratch ("we can all do it better ourselves, right?"), but pre-built architectures, like Struts, have been tested by thousands of users, so we can be comfortable with their performance and scalability. The drawback of using a pre-built architecture is that developers have to learn how to use someone else's code. Once they overcome that learning curve, they are much more productive, and when deployment time comes, they are pleased with the results.
JPT: Have you found any Java performance benchmarks useful?
Unfortunately, no. Standard benchmarks, such as ECPerf and SPECjAppServer, only measure the performance of an application server. Plus, most of the results posted by application server vendors are on servers tweaked to maximize the performance of the benchmark on their system. This is not necessarily going to reflect the performance of every organization's applications. The only benchmark that matters is one that is generated for each organization's particular applications on those application servers, running transactions representative of that organization's end-users. I always recommend running a load tester, such as Benchmark Factory or Load Runner, on the application to determine throughput and then watching the application with a tracer, such as PerformaSure, to determine where the bottlenecks are.
JPT: Do you know of any performance related tools that you would like to share with our readers?
At Quest, we have some great tools that I have worked with extensively. If developers would like to see how well their WebLogic Server is running internally and isolate and diagnose configuration problems, I would highly recommend looking at Spotlight on WebLogic. For diagnosing application issues, we have PerformaSure, which allows users to trace requests "from HTML to SQL", identify "hot" transactions, and pinpoint their locations. I would never deploy a real-time application without first running it through PerformaSure.
JPT: Do you have any particular performance tips you would like to tell our readers about?
Start with a solid architecture and be mindful of performance considerations from the beginning by minimizing the amount of data being stored in session objects. Developers should cache as much data as possible (limit the hops between tiers in the environment) and invest the money and time into tools to help identify bottlenecks before deploying the application into production.
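The caching tip above can be sketched as a small bounded cache placed in front of a slower tier (a database or a remote EJB call). This is a minimal LRU sketch built on `LinkedHashMap`; the class name and size limit are illustrative, not from the interview.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a bounded, least-recently-used cache that avoids
// repeated hops to a slower tier. accessOrder=true makes iteration
// order (and therefore eviction) follow recency of access.
public class TierCache<K, V> {
    private final int maxEntries;
    private final Map<K, V> cache;

    public TierCache(int maxEntries) {
        this.maxEntries = maxEntries;
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > TierCache.this.maxEntries; // evict oldest
            }
        };
    }

    public synchronized V get(K key) { return cache.get(key); }

    public synchronized void put(K key, V value) { cache.put(key, value); }

    public synchronized int size() { return cache.size(); }
}
```

The bound matters: an unbounded cache just recreates the session-memory problem in another guise.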
JPT: Thank you for the interview.
Thanks for the opportunity to answer your questions. -Steve
(End of interview).