Java Performance Tuning
Tips January 2005
Back to newsletter 050 contents
An HP UX-11i Performance Management Methodology (Page last updated October 2004, Added 2005-01-31, Authors Peter Weygant, Chris Ruemmler, Robert Sauers, Publisher Prentice Hall). Tips:
- Performance management uses the following steps: Assessment; Measurement; Interpretation and Analysis; Identification of Bottlenecks; Tuning or Upgrading the System.
- Performance assessment should consider: System configuration; Application design; Performance expectations; Known peak periods; Changes in the system configuration or the application; Duration of any problem; options available.
- System configuration includes both hardware and software configuration: number of disk drives; disk data distribution; file systems or raw disks; file system parameters; how the application makes use of disks; memory size; available lockable memory; swap space; processor type and numbers; kernel configuration; values of tunable kernel parameters.
- Determine the following application design factors: inter-process communication; basic algorithms; disk utilization; whether it is compute- or I/O-intensive.
- Does the RDBMS support placing a table and the index that points to it on different disk drives? This is an important tuning technique for improving database performance.
- Consumer complaints will be in terms of the application, but measurements will be based upon the available metrics.
- Translate system measurements into application-specific terms.
- Determine the measurable criteria for satisfactory performance - this is the only way to know when performance tuning has successfully completed.
- Objective rather than subjective measures must be used for describing performance (problems).
- The users of the application need to agree with you that you have finished tuning the system.
- Users are concerned with how long it takes to get a response from the system or application once the enter key is pressed - map performance to and from this consideration.
- Response time is the most commonly used measure of system performance and is typically quoted as one of the following: An average of seconds; A confidence level; A maximum response time.
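The three common ways of quoting response time can be sketched in a few lines of Java; the class and method names here are illustrative, not from the book, and the percentile uses the simple nearest-rank definition:

```java
import java.util.Arrays;

/** Sketch: summarizing response-time samples the three common ways listed
 *  above - an average, a confidence level (percentile), and a maximum. */
public class ResponseTimeSummary {

    /** Mean response time in milliseconds. */
    public static double average(long[] samplesMs) {
        return Arrays.stream(samplesMs).average().orElse(0.0);
    }

    /** Nearest-rank percentile: the value below which the given fraction
     *  of samples fall, e.g. percentile(samples, 0.95) backs a statement
     *  like "95% of responses complete within N ms". */
    public static long percentile(long[] samplesMs, double fraction) {
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(fraction * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    /** Worst-case response time. */
    public static long maximum(long[] samplesMs) {
        return Arrays.stream(samplesMs).max().orElse(0L);
    }

    public static void main(String[] args) {
        long[] samples = {120, 250, 180, 900, 300, 220, 140, 260, 310, 200};
        System.out.printf("avg=%.1f ms, p95=%d ms, max=%d ms%n",
                average(samples), percentile(samples, 0.95), maximum(samples));
    }
}
```

Note how a single slow outlier barely moves the average but dominates the maximum - one reason a percentile ("confidence level") is often the most useful of the three.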
- Sub-second response time of approximately 250 milliseconds would be considered optimal for no visible delay.
- Transaction performance depends on: Transaction complexity; Number of users; Think time between transactions; Transaction types and ratios.
- Acceptable response times depend on the type of application: no delay < 0.25 seconds; graphical or read-intensive < 1 second; update pages 5-15 seconds depending on complexity.
- Performance is seen as poor when update response time exceeds 5 seconds and when there is no preparation to be done by the user.
- Read performance must be no more than 1-2 seconds to keep users satisfied.
- Users prefer consistent responsiveness rather than variable responsiveness.
- Batch jobs will interfere with performance and degrade interactive performance.
- Batch jobs should be run in off-hours.
- Users tolerate consistently poor response time better than response that is good one minute and poor the next (e.g. 3 seconds +/- 0.25 is more acceptable than 1.5 seconds +/- 1).
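The consistency the tip describes can be quantified as mean plus-or-minus standard deviation of the response-time samples. A minimal sketch (all names are illustrative; values are in seconds, as in the tip's example):

```java
/** Sketch: quantifying response-time consistency as mean +/- std dev.
 *  Per the tip above, a steady 3.0s +/- 0.25 is often perceived as better
 *  than an erratic 1.5s +/- 1.0 despite the higher mean. */
public class Variability {

    public static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    /** Population standard deviation; dividing it by the mean gives a
     *  relative spread that is comparable across different mean times. */
    public static double stdDev(double[] xs) {
        double m = mean(xs), ss = 0;
        for (double x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / xs.length);
    }

    public static void main(String[] args) {
        double[] steady  = {3.0, 3.2, 2.8, 3.1, 2.9};  // consistent ~3s
        double[] erratic = {0.5, 2.5, 1.0, 2.0, 1.5};  // faster but variable
        System.out.printf("steady:  %.2f +/- %.2f s%n", mean(steady), stdDev(steady));
        System.out.printf("erratic: %.2f +/- %.2f s%n", mean(erratic), stdDev(erratic));
    }
}
```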
- When the system is tuned for throughput, variability in response time is usually increased.
- Throughput is often quoted as work per unit time. Throughput is driven by business needs.
- The system must be sized to support the required throughput - this is capacity planning.
- Identify known peak periods in advance, so that unusual spikes in the data can be readily explained.
- Resource utilization in an office environment peaks at 11:00 a.m. and between 2:00-3:00 p.m. during the normal work day.
- Processing requirements typically grow at the end of the month or the end of a quarter.
- If expected peaks are absent, it may be a clue that something unusual is preventing full system utilization.
- Sudden changes in performance are most frequently due to a configuration change.
- Performance problem diagnosis starts with asking several questions: How long does the performance problem last? When does the performance problem occur? How long has the performance problem existed? When did it begin?
- Tuning the operating system or application may be a viable alternative to upgrading the CPU to a faster model.
- Performance tools are used to collect data and present it. Consider: Which performance tools are available? What is the purpose of the measurement? Is the measurement baseline- or crisis-oriented? How long should the data be collected, and at what intervals? What data metrics are to be collected? How well are the metrics documented? How accurate are the data presented by the tool? Are certain system resources already saturated? How do these measurements relate to the organization's business needs?
- A baseline is a profile which can be used for purposes of comparison at those times when performance is not acceptable or is degrading.
- Baseline measurements require fewer metrics and a longer sampling interval than crisis measurements, because a particular problem is not being investigated. Instead, the goals are only to characterize system performance and to watch for trends.
- Baseline measurements should be archived for historical purposes.
- Baseline measurements can be used to: Review performance trends over time; Compare against current performance when investigating or diagnosing current performance problems; Provide data for performance forecasting; Develop and monitor service level agreements; Provide data for capacity planning.
- Crisis measurements are made for performance problem diagnosis and require much more detail so that performance problems can be determined.
- Crisis measurements require additional metrics as well as more frequent measurement, resulting in a much larger volume of data.
- Performance problem determination is much more difficult when there are no baselines against which a comparison can be made.
- Historical baseline measurements should be reviewed so that performance trends can be acted upon before they become serious.
- Sampling intervals for baseline measurements should be measured in minutes or hours rather than seconds to reduce the volume of data that will be collected.
- Sampling intervals for performance problem resolution are typically measured in seconds rather than minutes or hours.
- The shorter the sampling interval, the higher the overhead of making the measurement. This is of particular concern if one or more of the system resources are already saturated.
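The baseline-versus-crisis interval trade-off above might be sketched in Java as follows. The metric source and the concrete interval values are assumptions for illustration, not from the book:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/** Sketch: baseline collection samples a small set of metrics at a coarse
 *  interval (minutes) to keep data volume and measurement overhead low;
 *  crisis collection samples every few seconds to diagnose a problem. */
public class MetricSampler {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Illustrative values only: minutes for a baseline, seconds for a crisis. */
    public static long recommendedIntervalSeconds(boolean crisisMode) {
        return crisisMode ? 5 : 300;
    }

    public void start(Supplier<Double> metricSource, long intervalSeconds) {
        scheduler.scheduleAtFixedRate(
                () -> record(metricSource.get()),
                0, intervalSeconds, TimeUnit.SECONDS);
    }

    private void record(double value) {
        // A real collector would archive this for trend analysis.
        System.out.println("sample: " + value);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        MetricSampler sampler = new MetricSampler();
        // Crisis-style sampling of a dummy metric, just for the demo.
        sampler.start(Math::random, recommendedIntervalSeconds(true));
        Thread.sleep(200);
        sampler.stop();
    }
}
```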
- Don't forget to talk to the users and ask them how they perceive system and application performance. Watch how they interact with the system: It's possible that the users may be interacting with the system in a way that the application designers never imagined, and that is causing the application to behave poorly.
- Don't be scared by large numbers. What really matters is that you meet the performance requirements of the application.
- Characteristics of bottlenecks are: A particular resource is saturated; The queue for the resource grows over time; Other resources may be starved as a result; Response time is not satisfactory.
- Disk utilization is determined by periodically monitoring whether there are any requests in the queue for each disk drive. The total number of requests in the queue is not factored into the disk utilization metric. Although 100% disk utilization is an indicator of a busy disk, it does not mean that the disk cannot support more I/Os.
- 100% resource utilization is indicative of saturation, but not definitive - for example, otherwise-idle time may be consumed by a low-priority process which will easily give way to any other CPU requirement.
- Resource queue growth over time is a strong indicator of a bottleneck, ideally in conjunction with a utilization metric.
- The queue for a resource tends to grow when demand increases and when there is not enough resource available to keep up with the requests.
- It is easier to develop rules of thumb for queue metrics than for utilization metrics.
- Resource starvation can occur when one resource is saturated and another resource depends upon it.
- Ultimately, unsatisfactory response time determines whether or not a bottleneck exists: a system using 100% CPU but with satisfactory response times has no (current) performance problem.
- The CPU utilization metric is a measure of saturation; the run queue metric is a measure of queue growth; both need to be measured to establish whether there is a performance problem.
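The combined check described in the tips above - flag a bottleneck only when a resource is both saturated and its queue is growing - could be sketched like this; the threshold and the trend test are illustrative assumptions:

```java
/** Sketch: a resource is a likely bottleneck only when utilization is
 *  saturated AND its queue grows over successive samples. */
public class BottleneckCheck {

    /** Saturation threshold of 95% is illustrative, not a book value. */
    public static boolean saturated(double utilizationPercent) {
        return utilizationPercent >= 95.0;
    }

    /** Crude upward-trend test (last sample above the first); a real
     *  check would fit a trend over many baseline samples. */
    public static boolean queueGrowing(int[] queueLengths) {
        if (queueLengths.length < 2) return false;
        return queueLengths[queueLengths.length - 1] > queueLengths[0];
    }

    public static boolean likelyBottleneck(double utilizationPercent,
                                           int[] queueLengths) {
        return saturated(utilizationPercent) && queueGrowing(queueLengths);
    }

    public static void main(String[] args) {
        // 100% busy CPU with a growing run queue: likely bottleneck.
        System.out.println(likelyBottleneck(100.0, new int[]{2, 4, 7, 11}));
        // 100% busy but a stable queue: busy, not necessarily a bottleneck.
        System.out.println(likelyBottleneck(100.0, new int[]{1, 1, 1, 1}));
    }
}
```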
- Multiple tools should be used to validate that there is not a problem with a particular tool yielding misleading data.
- Alleviating one bottleneck may result in the emergence of a new one to investigate.
- When tuning, bear in mind that either the amount of the resource can be increased, or the demand for the resource can be reduced.
- General tips that can be applied to tuning bottlenecks: Identify if you have a bottleneck; Have a repeatable test; Do not tune randomly; Look for simple causes; Tune methodically; Change only one thing at a time; Prioritize what to tune; Know when to stop tuning.
- Tuning the wrong thing can make the situation worse rather than better.
- Tune simple things first: tuning a system parameter is easier and less costly than modifying the design of an application.
- It is usually possible to tune for response time or for throughput, but not both.
- In a transaction-processing environment, one should tune for the most frequent type of transaction, or the most important.
- Some applications cannot fully utilize a system: Adding a CPU to a single-threaded application may not increase performance.
- Instrumenting an application to provide performance metrics can create bottlenecks - be careful to avoid this.
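On the JVM, one common way to keep instrumentation from becoming the bottleneck it is measuring is to count with `java.util.concurrent.atomic.LongAdder` instead of a contended lock. A minimal sketch (class and method names are illustrative):

```java
import java.util.concurrent.atomic.LongAdder;

/** Sketch: a synchronized counter serializes every instrumented thread,
 *  so the instrumentation itself can become a bottleneck. LongAdder keeps
 *  per-thread cells and only sums them on read, keeping the hot path cheap. */
public class RequestCounter {
    private final LongAdder count = new LongAdder();

    /** Called on every request; designed to stay cheap under contention. */
    public void recordRequest() {
        count.increment();
    }

    /** Called rarely (e.g. by a periodic sampler), so the cost of summing
     *  the per-thread cells is acceptable. */
    public long total() {
        return count.sum();
    }
}
```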
- Target tuning where it is cost-effective: optimizing a part of the application that is executed only once and consumes less than 5% of the total execution time is seldom worth it.
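The cost-effectiveness point can be made concrete with Amdahl's law (the tip does not name it, but the arithmetic is standard): if a part takes fraction f of total time and is sped up by factor s, the overall speedup is 1 / ((1 - f) + f / s).

```java
/** Sketch: Amdahl's law applied to the tuning-priority tip above. */
public class AmdahlSketch {

    /** Overall speedup from accelerating a part that accounts for
     *  'fraction' of execution time by a factor of 'partSpeedup'. */
    public static double overallSpeedup(double fraction, double partSpeedup) {
        return 1.0 / ((1.0 - fraction) + fraction / partSpeedup);
    }

    public static void main(String[] args) {
        // Optimizing a 5% part even 10x yields under 5% overall improvement,
        // which is why such targets are seldom worth the effort.
        System.out.printf("%.3f%n", overallSpeedup(0.05, 10.0));
    }
}
```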
Last Updated: 2017-11-28
Copyright © 2000-2017 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.