Published April 2003
Quest Software's JProbe Suite 5.0 is a comprehensive, integrated toolkit for diagnosing and eliminating inefficiencies and code errors in Java applications, servlets, JSPs and EJBs. JProbe helps developers understand what's causing problems in an application, right down to the offending line of source code.
Quest Software provides solutions that help companies detect, diagnose and resolve performance problems in J2EE applications. JProbe works with PerformaSure (see the PerformaSure tool report), a multi-tier J2EE application diagnosis tool that analyzes transactions with its unique Tag and Follow technology.
This article provides an overview of the JProbe Suite and shows how JProbe Memory Debugger and JProbe Profiler can be used to solve two real-world Java application efficiency problems, highlighting some of the new features in version 5.0.
The JProbe Suite contains four tools to help you create efficient and programmatically correct applications:
JProbe Memory Debugger and JProbe Profiler are program efficiency tools. JProbe Memory Debugger helps you eliminate memory leaks and improve application efficiency by improving memory usage and reducing the overhead of excessive garbage collection. It also helps you improve application performance by identifying which objects hold references to other objects in the heap and by calculating the size of memory leaks. JProbe Profiler combines a visual Call Graph interface with sophisticated data collection technology to provide highly accurate performance diagnostics, helping you trace performance problems down to the offending line of code.
JProbe Threadalyzer and JProbe Coverage are program correctness tools. JProbe Threadalyzer is a powerful tool for detecting thread problems such as deadlocks, thread stalls and race conditions, identifying both actual and potential threading problems that can threaten the integrity of your application data. JProbe Coverage assists development and quality assurance teams in locating unexecuted code, making it easier to assess the reliability and accuracy of unit test and user acceptance test runs.
Each JProbe tool has two components: the JProbe Console and the JProbe Analysis Engine.
The JProbe Console is the common user interface, launched as an individual application for each tool in the suite, from which you analyze session data collected by the JProbe Analysis Engine. The JProbe Analysis Engine collects data, within the context of a session, on your stand-alone Java application or Application Server-based application.
The JProbe Analysis Engine requires a JVM that fully supports Sun's JVM Profiler Interface (JVMPI). The profiling interface provides the hooks that the JProbe Analysis Engine needs to collect data. JProbe supports many popular JVMs on a variety of operating systems. For a complete list of supported JVMs and operating systems, please see the Quest Software Java solutions web site: http://java.quest.com/support/jprobe.
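JVMPI-based profilers attach to the JVM through the standard `-Xrun<libname>` option. As a rough illustration of the mechanism only (JProbe's own agent is configured for you by the JProbe Console and its integration scripts, not launched this way), the JDK's built-in hprof agent uses the same interface:

```shell
# hprof ships with the JDK and is itself a JVMPI client; this run
# records heap allocation sites for a hypothetical class MyApp.
java -Xrunhprof:heap=sites,depth=4 MyApp
```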
Data collected by the JProbe Analysis Engine can be viewed in real-time or stored in snapshot files for analysis at a later time. Real-time data provides a view of heap size and dynamic activity, including object allocation and garbage collection. Snapshot data consists primarily of code timings, object instance summaries, and object references. The JProbe Analysis Engine can also be configured to automate the collection of session data. Additionally, JProbe Profiler and JProbe Memory Debugger have a unique snapshot-differencing feature that lets you compare the impact of code changes on application performance and on the use of system resources such as processor and memory.
JProbe Profiler and JProbe Memory Debugger allow you to analyze application performance in two session types, local session monitoring (see Figure 1-A) and remote session monitoring (see Figure 1-B).
The JProbe Suite installer has both command-line and GUI interfaces. The installation application lets you install the various JProbe applications, Application Server integration tools and Integrated Development Environment (IDE) integration tools, as well as several helpful examples and integrated tutorials.
Integrating with popular Application Servers is a simple task with JProbe's Application Server Integration tool, which can be run in stand-alone mode or from the JProbe Console. Setting up an Application Server integration with the Settings Wizard is quick and simple: you specify the type and version of the Application Server along with any required Java parameters. The integration can then be saved to a globally accessible directory for use by other developers. If you do not see your application server in the drop-down list, Quest provides a J2EE Application Server Integration Portal that contains information and modules for integrating with additional application servers.
Integration with a wide variety of IDEs is just as simple. You can integrate with your favorite IDE using JProbe's IDE Integration Tool, which is run from the JProbe Console. The tool collects a small amount of information, including which IDE to integrate with and configuration details specific to that IDE. Once integrated, the JProbe tools can be used directly from your IDE, so the two work together seamlessly to help you build efficient and programmatically correct applications.
For additional information on Application Server and IDE integration including a complete list of supported Application Servers and IDEs, please see the Quest Software Java solutions Integration Portal at: http://java.quest.com/support/jprobe.
JProbe Memory Debugger helps you improve memory usage in your Java application by identifying loitering objects and excessively allocated objects.
A loitering object is an object that the garbage collector determines to be reachable, but that is no longer in use by your application. If your program will never use an object again, but you fail to remove the reference to it, the object will remain, or loiter, in memory indefinitely. This problem is commonly referred to as "unintentional object retention." A loitering object can compound memory usage problems if it acts as an anchor to other objects that would otherwise be marked as unreachable by the garbage collector and removed.
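As a minimal, hypothetical illustration of the pattern (the class and method names here are invented for the example), a long-lived collection that is never cleaned up will keep every object it references reachable, so none of them can be garbage collected:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of unintentional object retention: a
// long-lived collection keeps a reference to each buffer after the
// application is done with it, so the garbage collector still sees
// the buffers as reachable and they loiter in the heap.
public class LoiteringExample {
    private static final List cache = new ArrayList();

    static void process() {
        byte[] buffer = new byte[1024];
        cache.add(buffer);        // reference stored in a long-lived list
        // ... work with buffer ...
        // Missing: cache.remove(buffer); without it, every buffer this
        // method ever allocated remains reachable indefinitely.
    }

    static int retainedCount() {
        return cache.size();
    }
}
```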
If your program creates an excessive number of objects, the Java heap will become larger, and garbage collection will take longer because there are more objects to evaluate. A simple example would be a method that allocates a new temporary object during each iteration of a loop, even though the object is never reused and is no longer required when the loop terminates.
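A familiar, hypothetical example of this pattern is string concatenation in a loop: each pass allocates fresh temporary objects, whereas reusing a single buffer across iterations avoids the churn.

```java
// Hypothetical illustration of excessive temporary allocation: the
// first version creates new temporary objects on every loop pass,
// giving the garbage collector many short-lived objects to reclaim;
// the second reuses one buffer across all iterations.
public class TempObjects {
    static String joinWasteful(String[] parts) {
        String result = "";
        for (int i = 0; i < parts.length; i++) {
            // Each += allocates a new intermediate String object.
            result = result + parts[i];
        }
        return result;
    }

    static String joinEfficient(String[] parts) {
        StringBuffer buf = new StringBuffer();   // one buffer, reused
        for (int i = 0; i < parts.length; i++) {
            buf.append(parts[i]);
        }
        return buf.toString();
    }
}
```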
The example application used in this article is a stock price charting servlet deployed in an Apache Tomcat 4.0 environment. Our application is exhibiting long page load times and we would like to minimize the end user wait time.
JProbe Memory Debugger is a highly use case-oriented tool. You will get the most benefit from analyzing memory usage when you work with your application in terms of typical use cases. A use case, in its simplest terms, is a sequence of interactions between a user and a system to achieve a goal.
We begin our performance investigation by starting JProbe Memory Debugger in the LaunchPad (see Figure 2). In the LaunchPad, we select the pre-configured Application Server we are using and the classes we would like to analyze. A variety of filtering and data collection options let us view varying levels of information, including which packages, classes, or methods to filter and whether session data should be displayed.
JProbe Memory Debugger can monitor sessions in real-time and capture snapshots for analysis at a later time. Most garbage collection investigation is performed while working with JProbe Memory Debugger in real-time. Additionally, while working with JProbe Memory Debugger in real-time, you can be alerted if object counts exceed a user-definable threshold at the end of a use case. Threshold values are configured in the LaunchPad.
We begin performing our use cases while viewing the heap usage data in real-time in JProbe Memory Debugger, taking snapshots for analysis outside of the running application. In addition to requesting garbage collection, you can view the individual object counts and the amount of memory currently allocated per class. The Heap Summary Window is also where you capture snapshots, via the Start Use Case and End Use Case buttons.
Once Tomcat has successfully started, we select Start Use Case. This requests a garbage collection to establish a baseline view of heap usage in the Heap Summary Window and sets a checkpoint from which heap activity is monitored. The first use case is to view the stock charts for several stocks. As we monitor the heap usage graph, we switch to the Garbage Monitor tab to determine whether any methods excessively allocate temporary objects. As we specify our classes in the Filter Allocating Methods combo box, we come across the assignData method of the StockChartDataSource class, which seems to be working with objects inefficiently (see Figure 3).
We select the Finish Use Case button to capture a snapshot of the data for this use case. After having selected four stock charts to view, we see the most inefficient use of memory in the assignData method is the allocation and garbage collection of 2,040 instances of a double array. This array was responsible for over 2 MB of memory usage! (See the "Memory Released" column of the highlighted line in Figure 3.) We determine that this method is a prime candidate for optimization.
To continue our investigation, we select the Start Use Case button and select stock charts that we previously stored as our favorites. We only want to investigate the classes that we have written so we put our package name in the Filter Classes combo box of the Instance Summary tab (see Figure 4). We continue to perform the use case and navigate through four of our favorite stocks.
We monitor the instance Change Count and then end the use case by logging out of the web page and selecting the Finish Use Case button. This action also requests a garbage collection and captures the snapshot data for this use case. At this point, we know from our code that our object change counts should be zero. As it turns out, the StockChartDataSource class seems to be a problem area again: there are 12 instances of the class still in the heap, so we have identified a loitering object.
Now that we have a snapshot of the session data, we can look at many details regarding the loitering object in the Instance Detail View (see Figure 5). The Instance Detail View allows you to view object allocation information such as the size of the object, creation time, the number of referrers, and the number of references. We can also view the code at the point of allocation for a specific instance and calculate the size of any memory leaks.
Another unique feature of JProbe Memory Debugger, available from the Instance Detail View, is the Memory Leak Doctor. It lets us view the object reference graph and play "what-if" scenarios by breaking various object references in the graph. As we look at the StockChartDataSource class in the Memory Leak Doctor, we see the root of the reference graph is in StockDSFactory (see Figure 6-A). If we move along the object graph to the StockChartDataSource class, we can right-click on the reference and select Remove Reference from the pop-up menu to model the result of changing our code to properly remove the reference. When we remove the reference, the Memory Leak Doctor informs us that the object can now be garbage collected (see Figure 6-B).
Now that we know the root cause of the problem is in StockDSFactory, we determine that when the user logs out and the StockDSFactory is reset, we are not removing the references to StockChartDataSource that we have stored in a Hashtable. We refactor the code and move on to the next step: validation.
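The article does not show the factory's source, but the shape of this kind of fix can be sketched as follows (the class below is a simplified stand-in; the method names are hypothetical, and a plain Object stands in for StockChartDataSource):

```java
import java.util.Hashtable;

// Hypothetical sketch of the fix: the factory caches data sources in
// a Hashtable, and reset() must clear those entries, or every cached
// data source stays reachable after logout (a loitering object).
public class DataSourceFactory {
    private final Hashtable sources = new Hashtable();

    Object getDataSource(String symbol) {
        Object ds = sources.get(symbol);
        if (ds == null) {
            ds = new Object();       // stands in for StockChartDataSource
            sources.put(symbol, ds);
        }
        return ds;
    }

    // Before the fix, reset() rebuilt other state but never touched the
    // table, so the references (and everything they anchored) survived.
    void reset() {
        sources.clear();             // drop the references; GC can now collect
    }

    int cachedCount() {
        return sources.size();
    }
}
```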
With the new code in place and running, we perform the same problematic use case as before while monitoring the heap activity in JProbe Memory Debugger. We log in and request four stock charts from our favorites and log out. Once we end the use case in JProbe Memory Debugger, a snapshot of the session data is captured. We will compare the snapshot that was run against our new code with the snapshot we ran against the code with the loitering object. The Snapshot Difference Window indicates that we have cleaned up 100% of the loitering StockChartDataSource objects (see Figure 6) and we have also reduced memory consumption by 38.27%. Additionally, we can use the snapshot differencing functionality to ensure that we did not introduce any new loitering objects or any new, unexpected objects in our packages.
JProbe Profiler helps you diagnose, repair, and validate code changes for Java efficiency issues related to inefficient algorithms. Examples include methods that repeatedly invoke another method, repeat calculations instead of using cached results, run calculations on the main thread, use an inappropriate algorithm for the situation, or over-allocate objects. JProbe Profiler can also help you identify long-running methods in third-party components and inefficient interactions with JDBC classes, such as long-running SQL commands and inefficient result set processing.
We continue our performance investigation of the stock price charting servlet by working with the application in JProbe Profiler.
As with JProbe Memory Debugger, we begin our performance investigation by starting JProbe Profiler in the LaunchPad (see Figure 8), the common starting point for tools in the JProbe Suite. In the LaunchPad, we select the pre-configured Application Server we are using and the classes we would like to analyze. A variety of filtering and data collection options let us view varying levels of information, including which packages, classes, or methods to filter and whether we want session data at the method or line level.
Like JProbe Memory Debugger, JProbe Profiler is a highly use case-oriented tool. You will get the most benefit from analyzing profile data when you work with your application in terms of typical use cases.
As with JProbe Memory Debugger, we can monitor sessions in real-time and capture snapshots for later analysis. JProbe Profiler also has a very handy feature for profiling an application in real-time: the Heap Usage Window, which lets you view memory utilization as you perform the application's use cases. You may interact directly with the JVM by requesting a garbage collection to determine how memory is utilized during a particular use case. The Heap Usage Window also lets you capture snapshots of the session data for later analysis and manage the display of runtime data (see Figure 9). Note that the Heap Summary Window of JProbe Memory Debugger provides a much deeper level of detail than the heap usage feature of JProbe Profiler.
When we start our initial performance analysis, we want an overview of our application code at the method level to identify methods that may be performance bottlenecks. Once we identify potential "hot spots," we rerun the same use case with filters set to show line-level detail in those methods, helping us identify the root cause of the bottleneck. By using filters to focus on just a few key methods, we ensure that we collect detailed information only on what is important at the time, rather than a potentially overwhelming amount of data.
The use case that we are testing against in this example is the rendering of multiple stock charts to an end user, one at a time. As we perform the steps of the use case to request the stock charts, we can view the heap activity in the Heap Usage Window. After we complete the use case, we capture a snapshot of the session data for further analysis.
Now that we have a snapshot file, we review the application's calling sequences in the JProbe Call Graph Window. The Call Graph Window is a powerful visual tool that quickly points out application "hot spots" in terms of where the most time is being spent in our code and how many times that code is being called.
In Figure 10, we quickly notice a hot spot indicated by the brightly colored red node in the call graph. Note also that the use case begins execution in the StockSelectServlet.doGet method.
Upon further inspection of the call graph, we realize that the most expensive method call, in terms of how much time is spent executing the method excluding time spent in methods that it calls (Method Time), is in JCServerChart.encodeAsPNG, which is a third-party component (see Figure 10). Since we cannot modify code in a third-party component, we can hide this information while we are analyzing the profiling data by right clicking on the node in the Call Graph window and selecting the Prune Subtree option. This also highlights an opportunity for us to investigate how our code interacts with the third-party component. We may want to optimize the implementation of the interaction or replace the component altogether if the performance impact is unacceptable. For now, we will just hide this information and proceed with investigating the performance characteristics of our own code.
Once we have hidden the method call into the third-party control, we are quickly alerted to the next hot spot, in the StockChartDataSource.assignData method, which is consuming 24.6% (4,527 ms) of the overall processing time. For more detail, we open the Method Detail Window for assignData (see Figure 11). This view provides a wealth of information about not only the method in question, but also the method that called assignData and the methods that assignData called.
The Method Detail Window shows assignData taking a cumulative time of 4,827 ms, and a method time of 4,527 ms. Cumulative time is the time taken to execute the method including all the calls made by assignData to other methods. Method time is the time taken by the method excluding any calls to other methods. So we can see that the vast majority of time spent in assignData is within the method, and not as a consequence of calls to any other methods.
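The relationship between the two metrics is simple arithmetic: cumulative time equals method time plus the cumulative time of every callee. The snapshot's figures, where assignData spends roughly 300 ms in the methods it calls, can be checked directly:

```java
// Cumulative time = method time + time spent in all callees.
// With the snapshot's figures: 4,527 ms in assignData itself plus
// roughly 300 ms in its callees gives the 4,827 ms cumulative time.
public class ProfilerMath {
    static long cumulativeTime(long methodTime, long[] calleeCumulativeTimes) {
        long total = methodTime;
        for (int i = 0; i < calleeCumulativeTimes.length; i++) {
            total += calleeCumulativeTimes[i];
        }
        return total;
    }
}
```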
Now that we have found a problem area in a specific method, we will run through the use case again to collect data at the line level so that we can determine exactly which lines of code are responsible for the problem. We go back to the JProbe Profiler LaunchPad Window and change our Detail Level setting to Line. We perform the same use case and capture a snapshot of the session data.
Again, we open the snapshot file and see that the application is spending the most time in assignData, exclusive of the methods that assignData calls. We bring up the source code in the Source Viewer window, and JProbe Profiler takes us directly to the most time-consuming section of the code in assignData (see Figure 12). The problem appears to be repetitive calls, in a tight loop, that assign data to the charting control.
The source code view leads us to re-evaluate the implementation of the assignData method, where chart data is initialized. A closer inspection suggests that re-factoring this method should make it more efficient. We implement the change and then verify that performance has improved, which brings us to the final step in our performance investigation: validation.
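The article does not show the actual refactoring, but a typical shape for this kind of fix (with an invented, hypothetical chart interface standing in for the third-party control) is to replace per-point calls in the tight loop with a single bulk call:

```java
// Hypothetical sketch: replace a per-point call into the charting
// control with one bulk call, cutting call overhead in the tight loop.
public class AssignDataSketch {
    interface Chart {                        // stand-in for the chart control
        void addPoint(double value);         // fine-grained
        void setSeries(double[] series);     // coarse-grained
    }

    // Before: one control call per data point, repeated in a tight loop.
    static void assignDataSlow(Chart chart, double[] prices) {
        for (int i = 0; i < prices.length; i++) {
            chart.addPoint(prices[i]);
        }
    }

    // After: build the series once and hand it over in a single call.
    static void assignDataFast(Chart chart, double[] prices) {
        chart.setSeries(prices);
    }

    // Simple call-counting implementation to compare the two versions.
    static class CountingChart implements Chart {
        int calls = 0;
        public void addPoint(double value) { calls++; }
        public void setSeries(double[] series) { calls++; }
    }
}
```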
We profile the application again while performing the use case to request a stock chart for several different stocks and we capture another snapshot of our application?s performance data. At this point we need to validate that our code changes have had a positive impact on the performance of our application.
Upon initial inspection of the Call Graph window, the assignData method is no longer displayed as a hot spot (see Figure 13). We see that we are now spending 0.1% (60 ms) of the overall application processing time in this method, whereas before we were spending 26.1% (4,527 ms).
At this point in our investigation, we want to quantify the impact of all of our code changes on the performance of the application. JProbe Profiler's snapshot-differencing feature lets us determine exactly what the performance difference is. As we load the two snapshots and take the difference, we see a 98.7% improvement (4,466 ms) in the Method Time for the assignData method (see Figure 14). The -4,466 ms value indicates a reduction in the time spent executing the assignData method. Snapshot differencing not only verifies that our code changes improved the performance of the application, it would also show us if our changes adversely affected performance, which would be indicated by a positive value.
Now that we have made a performance gain in a specific method, what does that mean to the end user in terms of the performance of the application when performing this use case? Recall that the use case begins execution in the StockSelectServlet.doGet method. We can also use the Snapshot Difference Window to show us that the total performance gain that is visible to the end user is 4,466 ms (16.8% time savings), which is the time saved by our code changes in the StockChartDataSource.assignData method. This is shown in the Snapshot Difference Window in the Cumulative Time column. Cumulative Time is the total amount of time spent executing the method, including time spent in its sub-tree.
As any developer who has created Java applications will tell you, memory and performance issues can be extremely difficult and frustrating to detect, diagnose and ultimately resolve.
The JProbe Suite, with its developer-oriented features such as filtering, visual indicators of problems, detailed performance and memory metrics, source code browsing, and snapshot differencing, helps even seasoned Java developers quickly resolve performance and memory issues in their applications.
We have shown how the JProbe Memory Debugger and JProbe Profiler can quickly and easily help you pinpoint the memory and performance problems in your code and validate your code changes once they have been made.
Quest Software is a leading provider of application management solutions. Quest provides customers with Application Confidence(sm) by delivering reliable software products to develop, deploy, manage and maintain enterprise applications without expensive downtime or business interruption. Quest products increase the performance and uptime of business-critical applications and enable IT professionals to achieve more with fewer resources. Headquartered in Irvine, Calif., the company has offices worldwide and more than 18,000 global customers. For more information, visit www.quest.com.
Information on Quest Software's Java and J2EE performance solutions, including the JProbe Suite, can be found at: http://java.quest.com