Java Performance Tuning
Our valued sponsors who help make this site possible
JProfiler: Get rid of your performance problems and memory leaks!
Training online: Threading Essentials course
Newsletter no. 27, February 28th, 2003
Mea Culpa. JavaWorld didn't stop access to its archived articles.
More precisely, access was withheld for one week, then restored
after someone decided the policy wasn't going to make them a
fortune. Well done JavaWorld, and silly old me for writing last
month's editorial just when the restriction was in effect.
Swing vs SWT was the hot news this month. Amidst some propaganda
for each side, it seems that each has its place. More to the
point, it seems like we have Sun backing Swing and IBM backing
SWT and the two sides intend to fight it out with each other in
trying to make their GUI framework easier to use and faster to
respond. Whoopee! Don't ya' just love competition.
We also had the outing of a two year old Sun memo that indicated
Sun didn't have Java performance on Solaris as a top priority.
Is that really so surprising? Two years ago, Java had only just
become the de-facto server-side language, and Sun has never
really been that responsive to ongoing Java trends. They've
tended to be looking forward to the next best thing, or
responding to some torrent or other, rather than concentrating
on what today's community wants. That's the kind of attitude
that gave us Java in the first place. It's good for the Java
community, because it leaves plenty of opportunities for
everyone else to service those ongoing trends.
A note from this newsletter's sponsor
Java Performance Tuning, 2nd ed. covers Java SDK 1.4 and
includes four new J2EE tuning chapters. The best value Java
performance book just got even better! Order now from Amazon
I'd like to say that I'm keeping up with all the articles that
come out each month. But I can't. The truth is that my backlog
continues to grow each month. But on the positive side, that's
actually because there are so many articles every month with
something to say about Java performance. The articles I have
got to this month include a swathe (four!) articles on GC in
the IBM JVM, another bunch on performance/capacity management,
a couple on J2ME, and others on Swing, XML, JMS and more.
Javva The Hutt talks about infinitely fast computers, and
comes to a surprising conclusion; we
interview Alex Rossi who successfully
scaled a large enterprise Java system seven years ago; our
question of the month explains how to choose a profiler;
the roundup covers several very interesting questions and
answers from performance discussion groups, and, of course, we have all
the latest performance tips extracted in concise form.
A note from this newsletter's sponsor
Measure, analyze and maximize J2EE application performance
during load testing with PerformaSure. Read the Aberdeen
white paper "Honing In on J2EE Performance Assurance".
Java performance tuning related news.
A note from this newsletter's sponsor
Get a free download of GemStone Facets 2.0 and see how this
patented technology supports JCA 1.0 to seamlessly integrate
with J2EE application servers to enhance performance.
A question was asked about merging two datasets, one a sorted list, the other unsorted, the result to be sorted. Various solutions were presented: Collections.sort(); insertion sort; quicksort; insertion sort plus binary search; sorting the unsorted set then merging with mergesort; and a custom sorting technique. None of the respondents suggested doing it the easiest way and then timing it to see if performance was acceptable BEFORE trying to figure out the quickest way. RTFM. Doh.
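For what it's worth, "the easiest way" is short enough to write down. A sketch in era-appropriate pre-generics Java (class and method names are mine): concatenate and call Collections.sort(), which is a mergesort that handles an already-sorted run cheaply; only write the explicit one-pass merge if timing shows the simple version is too slow.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MergeSorted {
    // Easiest way: concatenate and re-sort. Collections.sort() is a
    // mergesort, so the already-sorted run is cheap to handle.
    public static List mergeSimple(List sorted, List unsorted) {
        List result = new ArrayList(sorted);
        result.addAll(unsorted);
        Collections.sort(result);
        return result;
    }

    // The "clever" alternative: sort only the unsorted list, then merge
    // the two sorted lists in one pass. Time both before choosing this.
    public static List mergeByMerging(List sorted, List unsorted) {
        List b = new ArrayList(unsorted);
        Collections.sort(b);
        List result = new ArrayList(sorted.size() + b.size());
        int i = 0, j = 0;
        while (i < sorted.size() && j < b.size()) {
            Comparable x = (Comparable) sorted.get(i);
            if (x.compareTo(b.get(j)) <= 0) result.add(sorted.get(i++));
            else result.add(b.get(j++));
        }
        while (i < sorted.size()) result.add(sorted.get(i++));
        while (j < b.size()) result.add(b.get(j++));
        return result;
    }
}
```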
Another poster seemed to be asking whether he should use sleep() to reduce the load from his back-end process. A curious question: we were given no idea of what the process was, except that it used Tomcat. He later found that his process used 100% CPU, so he inserted sleep(1000) and all was well. Sometimes these discussion threads seem to consist of aliens doing things too esoteric to understand. This guy was either a genius or an idiot. I don't have an opinion on which; I admit that I'm ignorant enough to be wrong either way.
The limitations of MappedByteBuffer, the buffer obtained from mapping files into memory using NIO, came up. The file was very large, and the system (XP in this case) seemed to be taking all the free memory for the file cache to handle the mapping. Other processes were basically swapped out, so switching tasks to another activity was very slow, probably because the system had to page that process back into memory. The poster redeveloped the task without NIO, using RandomAccessFile, and found that the speed was faster, and the problem of swamped memory went away, i.e. other tasks were now responsive. The question was, shouldn't NIO memory mapped files provide better performance? The only response suggested that MappedByteBuffers were not necessarily good for random writes. The responder, Toby Reyelts, went on to say "if your mapped file is larger than your actual working memory, and you're performing lots of random write access ... the paging system will constantly be evicting entire pages to disk (even when only a few bits have changed) so it can reload pages that were previously evicted, just to swap them back out to disk again."
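For reference, mapping a file with NIO takes only a few lines; a minimal sketch (class and method names are mine). Absolute puts on the returned MappedByteBuffer are exactly the random writes that can thrash the page cache when the file dwarfs physical memory:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    // Map the first 'size' bytes of a file read-write; if the file is
    // smaller it is grown to 'size'. For files much larger than RAM,
    // scattered writes through this buffer make the OS evict and reload
    // whole pages - the swamped-memory symptom described above.
    public static MappedByteBuffer map(File f, long size) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        FileChannel channel = raf.getChannel();
        return channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
    }
}
```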
For a couple of years, Sun has maintained that object pooling is unnecessary and even detrimental to performance. So a thread comparing object creation and GC with pooling and reusing objects in 1.4.1 looked interesting. The upshot of the discussion was that reusing objects from a static pool still had a significant edge over creation and GC of objects, at least for some kinds of applications (such as the ray-tracing one tested). In addition, ThreadLocal is much faster in 1.4.1, and Thread.currentThread() is now so fast it's nearly free.
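A minimal sketch of the ThreadLocal variant of pooling (the class and the choice of a double[] scratch buffer are mine): each thread lazily gets its own reusable object, so no synchronization is needed, and the 1.4.1 speedups to ThreadLocal and Thread.currentThread() make the lookup cheap.

```java
public class ScratchPool {
    // One reusable scratch buffer per thread, created on first access.
    private static final ThreadLocal BUFFER = new ThreadLocal() {
        protected Object initialValue() {
            return new double[1024];   // the "pooled" object to reuse
        }
    };

    // Callers reuse this buffer instead of allocating a fresh array,
    // trading allocation + GC cost for a ThreadLocal lookup.
    public static double[] getBuffer() {
        return (double[]) BUFFER.get();
    }
}
```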
A question asking about EJB Local performance produced only one reply: that Locals are fast because they avoid marshalling overheads.
Another asking about J2EE compliant caching tools produced two vendor replies: eXcelon's Javlin and Tangosol Coherence.
As usual, I'm months behind on my reading. So I've only just read
Paul Davies article on How to Build a Time Machine. (I met Prof Davies once,
and he's every bit as enthusiastic in person as he comes across in his articles).
The article is hardly news (as in new info), but it's always nice to
get the old grey matter thinking. What leapt out at me this time is that
micro-wormholes could enable time travel at the particle level. So you could
conceivably build computer gates which produced results in essentially no
time. Technically, you could produce results before you started the calculation,
but assuming Hawking's chronology protection conjecture is correct, that
would be ruled out. But the conjecture would not rule out calculations
that took no time. So you could build an infinitely fast computer. NP-complete?
Who cares! Elegant algorithms? Who cares! The most stupidly inefficient
computer program would still take ... no time.
I figure 50 years is enough to get some kind of time-acceleration
technology running. Let's see, current projections are that Moore's law
starts to run out around 2020. I reckon there'll be some breakthroughs
in 3-D chip architectures that'll extend that another 15 years. Throw
in some unknown (to me) technology that extends Moore's law another 15
years beyond that, then we'll enter the realm of time-speeded chips.
If we take the conservative Moore's law we get a 30-fold
CPU speedup every 10 years. From experience, I know that
dumb-couldn't-care-less-about-performance programming runs about 1,000
times slower than a reasonably tuned app. Given that a reasonably tuned
app runs okay on a modern CPU, in another 20 years you can write your
Java app any way you want. Well, not quite: the CPU won't be a problem,
and memory probably won't be a concern either, but I/O will still be
limiting and other shared resources will be a problem. So the
bottlenecks will be somewhere else, and I guess you'll still
need to tune your app. That's pretty funny. An infinitely fast
computer will still need to have its software tuned to run
adequately fast!
Diary of a Hutt
January 15. Ah, the rigours of the new year. Why does it take a week to
get into gear every time any public holidays roll around? I bet if most
businesses could measure productivity on a monthly basis, they'd see
big dips in December and January. And maybe Feb (too cold, or too hot
if you're in the other hemisphere from me). And March, all those hormones
surging. And April, because ... Okay, maybe there's always some reason
or other why people are distracted. I'm allowed to ramble. It's my diary.
January 22. TEAM FRELLING LEADER. That's it. Dribble pay rise, no
real title, but more management work. ******* ******** *** ** ** ****
[expletive sentence deleted]. Now I know what that little snide
Weevil was up to. He was getting my promotion downgraded to a
******* ***** ** ****.
January 29. Apparently I was in a very dark mood last week. I do apologize
to any readers of a more sensitive nature. I was slightly disappointed
at the final status discussions over my performance group. Instead of
what I'd expected, i.e. a new mini-department, I've become team leader
for a new mini-team. I'm feeling cheerier this week. On a side note,
it appears that Weevil sent a very insulting and partially pornographic
email to several directors this week, which has caused rather a lot of trouble.
January 30. The security and administration team seem to have identified
Weevil's alleged email as a forgery, sent via an open email server
which someone installed on one of the spare machines. It seems that there
is no way of tracking down the culprit, because no logs were made on
that machine. The security team have altered system-wide configurations
to ensure that this type of activity can never occur again. Given
the lack of available evidence, the case has been closed and Weevil
absolved of blame. What a naughty person we must have somewhere in the company.
Javva The Hutt.
This month I had the pleasure of interviewing Alex Rossi, a
partner at Accenture. I first met Alex about seven years ago when
he was project manager on one of the largest Java projects in the
world - and that was 1996 when Java was only in the low 1.1's. No
JIT, no HotSpot, basic garbage collection, and very few optimizations
in the core classes! If he could make an enterprise project scale then
(and he did), it's worth finding out what he's learned.
Q. Can you tell us a bit about yourself and what you do?
I am a technology partner in Accenture, my role is to identify business opportunities involving IT integration and n-tier architectures. We help our clients implement these at all levels, at the organisational level, the process level, the tool level, and from the design/build/run standpoint. My focus is primarily on open architectures, with Java being the core business solution on the application server and integration layers.
Q. You were an early Java adopter and by 1996 you were already involved in one of the biggest Java projects in the world at the time. Was that a successful project?
Definitely yes, we succeeded in a very complex integration thanks to our global approach, which considered the importance of creating a sound development architecture and the related processes needed to create a professional environment capable of supporting more than 100 developers. This was when Java IDEs and design tools were in their infancy. We also had to pay a lot of attention to the operational side of the development solution, because at that time the Java marketplace did not supply the vast array of tools and products you can find now.
Q. What performance lessons did you learn from that project?
"Trop d'objet tue l'objet" - too much object kills the object. Being too purist about OO leads to a system which has to handle millions of objects, which causes serious performance issues and also maintenance complexity issues.
I would say that you need to plan for a "denormalization" process like the one you do in the E/R world. To achieve good performance you need to balance simplicity against the flexibility and "beauty" of your OO model.
Q. What do you consider to be the biggest Java performance issue currently?
Tuning the number of server JVMs serving the clients and tuning the number of threads per JVM is still a guru task, not to mention the complexity of this tuning when it has to be done in a specific container environment. Also, garbage collection, although it has improved significantly in the latest JVM versions, can be an issue, especially in real-time systems where you need to handle the possibility of the system "freezing" or significantly slowing down for a couple of seconds during GC.
Q. Do you know of any requirements that potentially affect performance significantly enough that you find yourself altering designs to handle them?
Integrating your application server execution with your mainframe system achieves full reusability, but you walk on thin ice to get a decent end-user performance. You have to balance the number of layers and systems you have to cross in order to get data, against the performance cost of leaving data on the mainframe. To obtain acceptable performance I have often had to replicate business code or data onto the application server.
Q. What are the most common performance related mistakes that you have seen projects make when developing Java applications?
Not planning a serious performance and scalability test window. For a complex project, I am talking about 2-4 months with a 3-5 person dedicated performance team.
Q. Which change have you seen applied in a project that gained the largest performance improvement?
Handling the effects of the last question, i.e. creating a team which is focused on operational and performance issues.
Q. Have you found any Java performance benchmarks useful?
We have used many, but I really believe in the specificity of performance issues for each project and in the need for running adequate performance tests. Also, many benchmarks are commercially biased towards specific platforms/products, which makes them less useful.
Q. Do you know of any performance related tools that you would like to share with our readers?
I have found OptimizeIt and Mercury LoadRunner to be two excellent tools: one in the domain of single-user tuning of the application, the other in tuning the application for scalability.
Q. Do you have any particular performance tips you would like to tell our readers about?
Forget about "Java takes care of object allocation and release". If you want to understand and tune your application, you need to understand the object allocation mechanisms of the JVMs. Do not hesitate to invest in creating the right skills and buying the right tools for managing this very specific and important task.
(End of interview).
Which profiler should I use?
The most expensive you can buy. If everyone in the I.T. industry were to
spend a little more money on I.T. products, the I.T. industry would be
booming just like in early 2000 and I could retire sooner. Apart from
that, the more expensive the profiler, the more features it is likely to have.
On the other hand, the more accurate answer is probably that you should
use the profiler that is within your budget and that will be the most
productive for your project. Here is a procedure for you to decide on
which profiler to choose:
- Decide on your budget for a profiler (it can be $0.00, this procedure still works).
- Go to http://www.JavaPerformanceTuning.com/resources.shtml and use that page to
get your list of all available profilers. Bear in mind that most commercial IDEs also
come with a profiler, so if you are choosing an IDE, factor that in, and if you already
have an IDE add your IDE's profiler to the list.
- Check also the various Java magazines for their "choices of the year" profiler.
Several have readers' choices each year (JDJ and JavaPro at least), and editors'
choices too. These lists can be useful to make sure you haven't missed any profilers, and
may also give you an idea of which profilers are possibly better. However, do note
that readers' choices are elected by online voting and these choices can be manipulated
if enough effort is made, so don't rely on these results alone to select a profiler.
- Cross off all profilers that are outside your budget. (If your budget is $0.00,
you will be left only with the free profilers and any you already have access to,
such as your IDE profiler).
- Select a runnable application, component or partially completed project code that
you can run which is likely to have performance problems, to use as a test application.
- Evaluate each profiler, using each profiler on your test application to determine
and fix the bottlenecks. All commercial profilers should have a trial period available
for evaluation. Note the first two profilers you used, and repeat their evaluation at
the end of the series. The first two profilers will seem more difficult to use initially,
because you are getting used to using profilers, getting used to the bottlenecks in the
test application, and working out the fixes to apply. By repeating their evaluation,
you counteract this bias.
- You now have more experience at understanding the benefits, drawbacks, ease of use
and productivity of various profiler features than most of the profiler creators.
At this point you have probably already chosen your profiler. If not, then none
of them stood out for you. In which case eliminate those that you definitely did
not want, and select the cheapest one remaining.
- Repeat this procedure when you need to upgrade your profiler. Bear in mind that
competitors may be willing to give you their upgrade price if you trade your old profiler
in for theirs.
- Congratulations. You are now the proud owner of a Java profiler. Use it carefully
and keep it in good working order, and it will last you many years. Keep coming back
to JavaPerformanceTuning.com to find the latest performance tips, and you and your
profiler will have a long and productive relationship.
The JavaPerformanceTuning.com team
BEA JMS performance (Page last updated January 2003, Added 2003-02-28, Author Peter Zadrozny, Publisher BEA). Tips:
- The only way to understand JMS performance is by testing your own application (or a proof of concept).
- Performance of asynchronous messaging systems is typically measured based on throughput, e.g. messages per second
- Throughput is a measure of capacity, not speed, i.e. consumer performance, not response time.
Swing perf community chat (Page last updated January 2003, Added 2003-02-28, Author Edward Ort, Publisher Sun). Tips:
- For trees that contain many nodes, you can set the rowHeight to a fixed value, and set the largeModel property to true. On the down side, this makes your model queried much more often (very little is cached), but can improve memory usage and performance.
- VariableHeightLayoutCache creates one object per visible node to track its size, whereas FixedHeightLayoutCache will only create an object per expanded node. The tradeoff between Fixed* and Variable* is that the model is queried much more often.
- To garbage collect non-visible nodes, send a TreeModelEvent indicating that the nodes are no longer in the model.
- Use RepaintManager.setRepaint(false) if you only need frame animation, not simple component repaints.
- Gaming apps may want to minimize the number of components to maximize frame rates. Swing components have event and rendering overhead that is unnecessary for simple sprite-based games.
- The single biggest difference between so-so and really great Swing apps has to do with the way developers handle threading issues.
- Create your objects lazily; if you don't need it yet, don't waste time creating it.
- Watch the number of classes you create, since each class adds some amount of overhead (both memory and performance).
- Translucent components or rendering looks great, but can cause some performance bottlenecks in the current releases (1.4.1).
- JFileChooser on large datasets is slow in the current releases (1.4.1).
- The -Xmx flag will let you specify a larger heap.
- getScaledInstance() is not accelerated, so if you are concerned about performance, you might use the drawImage() call that scales during the copy, or cache a new scaled image for repeated use.
- Use a profiler to determine bottlenecks.
- When you call repaint on a bunch of components, the RepaintManager unions the dirty regions together and makes one giant rectangle to repaint. This is good in the general case, but for cases with LOTS and LOTS of small components, things bog down. You can create your own RepaintManager to just repaint the right parts of the screen.
- Don't spawn more rendering threads. Multi-threaded rendering will at best buy you nothing in terms of rendering performance, and can also lead to deadlock situations and undefined behavior.
- Calls like drawLine or drawText are pretty easy for the printer to do. But images have to be scaled and printed completely, which can be very slow. Rendering to a buffer, then printing is probably bad for printing performance.
- Internationalizing, fastest to slowest, hold resources in: a Java class; property files; ListResourceBundle (XML).
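The getScaledInstance() tip above can be sketched like this (class and method names are mine): scale once with the scaling drawImage() call into a new BufferedImage, keep that, and repaint from the cached copy.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ScaledCache {
    // Scale src once into a new image using the drawImage() variant that
    // scales during the copy; cache the result for repeated paints
    // instead of calling the unaccelerated getScaledInstance().
    public static BufferedImage scale(BufferedImage src, int w, int h) {
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, w, h, null);  // scaled copy in one call
        g.dispose();
        return dst;
    }
}
```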
Preventing Repeated Operations (Page last updated January 2003, Added 2003-02-28, Author Mark Johnson, Publisher Sun). Tips:
- Generate a unique symbol that can be embedded as a hidden input in the form being processed to ensure that operations are processed only once.
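A minimal sketch of the unique-symbol idea (class name and token format are mine): issue a token, embed it as the hidden input, and consume it exactly once when the form comes back, so a duplicate submission is detected and rejected.

```java
import java.util.HashSet;
import java.util.Set;

public class FormTokens {
    private static final Set issued = new HashSet();
    private static long counter = 0;

    // Issue a unique token to embed as a hidden <input> in the form.
    public static synchronized String newToken() {
        String token = System.currentTimeMillis() + "-" + (counter++);
        issued.add(token);
        return token;
    }

    // Consume the token on submission; a resubmitted form carries a
    // token that has already been removed, so it returns false.
    public static synchronized boolean consume(String token) {
        return issued.remove(token);
    }
}
```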
IBM GC 1 (Page last updated August 2002, Added 2003-02-28, Author Sam Borman, Publisher IBM). Tips:
- If the frequency of GCs is too high prior to reaching a steady state, use verbosegc to determine the size of the heap at a steady state and set -Xms to this value.
- If the heap is fully expanded and the occupancy level is greater than 70%, increase the -Xmx value so the heap is not more than 70% occupied. For best performance, try to make sure that the heap never pages. The maximum heap size should, if possible, fit in physical memory.
- If, at 70% occupancy, the frequency of GCs is too great, change the setting of -Xminf. The default is 0.3, which will try to maintain 30% free space by expanding the heap. A setting of 0.4 increases this free space target to 40%, reducing the frequency of GCs.
- If pause times are too long, try using -Xgcpolicy:optavgpause (introduced in 1.3.1), which reduces pause times and makes them more consistent as the heap occupancy rises. There is a cost to pay in throughput. This cost varies and will be about 5%.
- Make sure the heap never pages; the maximum heap size must fit in physical memory.
- Avoid finalizers. If you do use finalizers, try to avoid allocating objects in the finalizer method. A verbosegc trace shows if finalizers are being called.
- Avoid compaction. Compaction is usually caused by requests for large memory allocations. Analyze requests for large memory allocations and avoid them if possible; if they are large arrays, try to split them into smaller pieces.
IBM GC 2 (Page last updated August 2002, Added 2003-02-28, Author Sam Borman, Publisher IBM). Tips:
- GC is done in three phases: mark, sweep, and optionally compaction. [Article describes the GC algorithm].
IBM GC 3 (Page last updated September 2002, Added 2003-02-28, Author Sam Borman, Publisher IBM). Tips:
- The output from -verbosegc lets you analyze the garbage collections. [Article describes the IBM JVM -verbosegc output].
GC (Page last updated January 2003, Added 2003-02-28, Author Sumit Chawla, Publisher IBM). Tips:
- Once the application has reached a steady state in garbage collection, where heap expansions are no longer required, you have found a good value for the startup heap size.
- Determine the maximum heap size by stressing the application and finding the value for -Xmx that avoids an OutOfMemory error.
- Keep the heap small enough to avoid paging. The heap size must never exceed the amount of physical memory installed on the system.
- If the heap size is too small, there will be frequent garbage collection cycles.
- The time to complete a full garbage collection cycle grows directly proportional to the size of the heap, so if the heap is too large, this can lead to long delays in the application.
- A common performance-tuning measure is to make the initial heap size (-Xms) equal to the maximum heap size (-Xmx). Since no heap expansion or contraction occurs, this can result in significant performance gains in some situations.
- Usually, only the applications that need to handle a surge of allocation requests keep a substantial difference between initial and maximum heap size.
- If -Xms is different from -Xmx the application can run into a scenario where too many allocation failures are occurring, but the heap doesn't expand - known as heap thrashing. Increase the -Xminf and -Xmaxf values to avoid this.
- Finalizers are a bad idea, and performing allocations from inside the finalizers should be avoided at all costs.
- Break down large (>500 KB) requests into smaller chunks, if possible. If the heap is fragmented, it is possible to hit an out-of-memory condition when there is plenty of total free space, if you try to allocate a very large object (e.g. a large array) which cannot fit into any of the fragments.
- Do not ignore the "mark stack overflow" messages. These indicate an inappropriate object retention problem.
- Concurrent GC smooths the effects of GC but may reduce the throughput of an application.
- Avoid -Xnocompactgc, -Xcompactgc and -Xgcthreads.
- Question whether calls to System.gc() are of any use, and if they are not, remove them.
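The "break large requests into smaller chunks" tip above can be sketched as an array-of-arrays (a hypothetical helper, not from the article): each chunk only needs a small contiguous free region, so the structure can succeed where one huge array allocation would fail on a fragmented heap.

```java
public class ChunkedArray {
    private static final int CHUNK = 64 * 1024;  // 64K bytes per chunk

    private final byte[][] chunks;
    private final long size;

    // Allocate many small arrays instead of one huge contiguous byte[].
    public ChunkedArray(long size) {
        this.size = size;
        int n = (int) ((size + CHUNK - 1) / CHUNK);
        chunks = new byte[n][];
        for (int i = 0; i < n; i++) {
            long remaining = size - (long) i * CHUNK;
            chunks[i] = new byte[(int) Math.min(CHUNK, remaining)];
        }
    }

    public byte get(long index) {
        return chunks[(int) (index / CHUNK)][(int) (index % CHUNK)];
    }

    public void set(long index, byte value) {
        chunks[(int) (index / CHUNK)][(int) (index % CHUNK)] = value;
    }

    public long size() { return size; }
}
```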
floating point and decimal numbers (Page last updated January 2003, Added 2003-02-28, Author Brian Goetz, Publisher IBM). Tips:
- [Article has nothing specific about performance, but if you want to improve the performance of manipulating floating point numbers, you need to know all of this].
Performance of networked J2ME apps (Page last updated January 2003, Added 2003-02-28, Author Michael Abernethy, Publisher IBM). Tips:
- Every class in the application brings with it some size overhead. You have to create constructors, variables, and functions for every class you create.
- Every class has to be loaded into memory when it is used.
- Cherish every byte and keep as much available as possible.
- Make each screen its own function, NOT its own class.
- Don't use get/set methods; make every variable a public one.
- Eliminate screen location checking by using a stack and simply push() the screens when we go forward, and pop() the screens as we go backwards.
- When communicating with a database, the size of the data object being sent and returned is at the top of the list of concerns.
- Every request to the database should get as short a response as possible while still answering the request.
- Assume: small screen sizes; slow transmission speeds; slow processing speed; and a cost for every byte transmitted; 2Kb is the maximum size of a file (RecordStore) stored locally (to maintain portability).
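The push/pop screen tip above can be sketched with java.util.Stack (a hypothetical helper; in real MIDP code the screens would be Displayable instances rather than plain Objects):

```java
import java.util.Stack;

public class ScreenStack {
    private final Stack stack = new Stack();

    // Going forward: push the new screen; no per-screen location checks.
    public void forward(Object screen) {
        stack.push(screen);
    }

    // Going backward: discard the current screen; the previous one
    // becomes current again - the stack itself records where we came from.
    public Object back() {
        stack.pop();
        return stack.peek();
    }

    public Object current() {
        return stack.peek();
    }
}
```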
XML data binding performance (Page last updated January 2003, Added 2003-02-28, Author Dennis M. Sosnoski, Publisher IBM). Tips:
- The choice of XML framework used dramatically affects performance and memory usage.
- [Article introduces a highly efficient XML data binding framework which avoids reflection and uses a pull parser rather than SAX2].
Manage Java apps for premium performance (Page last updated January 2003, Added 2003-02-28, Author Peter Bochner, Publisher ADTMagazine). Tips:
- The average time for resolving a performance problem is 25.8 hours.
Four Critical Issues for Mobile Apps Developers (Page last updated January 2003, Added 2003-02-28, Author Alan Zeichick, Publisher DevX). Tips:
- Mobile technology is not ubiquitous or reliable. Even when connected, a link's reliability and bandwidth can change instantly, so the software needs to be able to gracefully accommodate disconnects and reconnects, with minimal impact on the quality of the user experience.
- Putting all of the program logic on the server with only a user-interface stub on the client, may produce an unsatisfactory user experience due to unpredictable network availability.
- When thinking "mobile," think "power miser." There's a power cost to everything: consuming CPU cycles, lighting the display, driving the speakers, operating a USB-based peripheral, even pinging across the Internet. Among the biggest power drains: memory, processor, and wireless transceiver.
Concise Guide to WebSphere Capacity Management (Page last updated January 2003, Added 2003-02-28, Author Ruth Willenborg Stacy Joines, Publisher e-ProMag.com). Tips:
- Define acceptable service levels: throughput (how many users/second or hits the site must support); response time (the wait a user experiences before receiving a response from the Web site); tolerance for outages (also called the site's availability).
- The first step in establishing a service level for your Web site is to examine what the site must do well.
- Determine the peak load. Peak load is frequently an order-of-magnitude higher than average load. Target the application to handle peak loads rather than average loads.
- Capture at least the following application data: user load (concurrent HTTP sessions or servlet requests being processed), throughput (requests per second), and response time.
- Capacity utilization is: how much CPU is required; the memory footprint of the application; the network capacity utilized at peak loading; the disk capacity utilized during peak loading. This data can be obtained using tools such as vmstat, iostat, or perfmon to report CPU, memory, and disk utilization and netstat for network data.
- JVM capacity can be measured using -verbosegc or other JVM memory utilization measures.
- Continual garbage collecting indicates the need for a larger heap or another JVM (clone).
- Underutilization of the heap allows you to reduce the JVM's maximum heap allocation and give other JVMs sharing the machine more memory as needed.
- Look for high CPU utilization, excessive paging, unusually high disk I/O, network saturation.
- A component running at full capacity can't handle additional work. Any component running at full utilization may constrain the overall responsiveness and throughput of the Web site.
- Project future capacity requirements using historical growth data, expected additional functionality, marketing campaigns and promotions. Don't extrapolate beyond twice the performance limits measured in your existing production or test environment.
- Continually test and monitor the application.
Web service management (Page last updated January 2003, Added 2003-02-28, Author Justin Murray, Publisher DevX). Tips:
- The key concerns in managing Web services begin with runtime instantiation and responsiveness to requests. These issues include:
- Consider how multiple instances of the Web service may be handled concurrently, and how the load is being dealt with by one instance or shared across those instances
- Consider how the loading characteristics on a Web service can be discovered and presented.
- Measures should include: the number of concurrent messages being processed; current load statistics including byte sizes.
- The Web service platform must be able to identify the source and destination of SOAP messages to understand the usage of a service so that its usage can be optimized.
- Response time and uptime are measures of the quality of a Web service.
Applying Design Issues and Patterns in Web Services (Page last updated January 2003, Added 2003-02-28, Author Chris Peltz, Publisher DevX). Tips:
- When you parse an XML document using DOM, you have to parse the entire document. So, there is an up-front performance and memory hit in this approach, especially for very large XML documents. DOM is appropriate when you are dealing with more document-oriented XML files.
- The SAX approach (events that invoke a callback when a given tag is found) reduces the overall memory overhead and can increase scalability if you are only processing subsets of the data.
- The Façade pattern takes existing components that are already exposed and encapsulates some of the complexity into high-level, coarse-grained services that meet the specific needs of the client. This can enhance the overall performance of the Web service.
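The SAX point above can be sketched as a small callback handler (class name and example document are mine): only the current event is held in memory, never a whole document tree.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class TagCounter extends DefaultHandler {
    private final String tag;
    private int count = 0;

    public TagCounter(String tag) { this.tag = tag; }

    // SAX calls this once per start tag; nothing else is retained,
    // which is why memory use stays flat however large the document.
    public void startElement(String uri, String localName, String qName,
                             Attributes attributes) {
        if (qName.equals(tag)) count++;
    }

    public static int count(String xml, String tag) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        TagCounter handler = new TagCounter(tag);
        parser.parse(new ByteArrayInputStream(xml.getBytes()), handler);
        return handler.count;
    }
}
```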
Monitoring Performance with WebSphere (Page last updated January 2002, Added 2003-02-28, Author Ruth Willenborg, Publisher e-ProMag.com). Tips:
- A poorly performing Web site causes unsatisfied customers and lost revenue opportunities.
- Monitoring the performance of your Web site, identifying performance problems, and quickly finding the cause of and resolving problems should be a critical part of your overall Web site operations.
- If you don't measure performance, you won't know that you have problems.
- If you don't know where to measure performance, you can't find problems.
- If you don't know how to find the source of a problem, you can't fix problems.
- Users don't care about how a website works; they care only about how fast your Web page appears. Monitoring the end-user view tells you whether you have a publicly visible performance problem.
- If your Web site is too slow, your customers will give up and leave.
- You need to monitor all components, including your application servers, databases, network, and routers
- Watch key metrics and compare them with normal, expected behavior. If you find deviations, investigate these areas more closely.
- Three main metrics to monitor are: load (number of concurrent users); response time; throughput (requests per second).
- As the number of concurrent user requests increases, throughput should grow almost linearly and request response times should remain approximately constant. When throughput starts to grow more slowly or reaches an upper bound, you have a bottleneck - typically a saturated resource.
- If throughput has reached an upper bound, request response time increases linearly with the rate of requests (until a system resource becomes exhausted).
- If a system resource becomes exhausted, throughput starts to degrade and response times will simultaneously increase.
- The ten most commonly monitored parameters are: concurrent requests being processed (EJB and HTTP); response time (EJB/HTTP); throughput (requests serviced per second EJB/HTTP); busy/idle HTTP servers; thread pool (size and percent maxed); DB connection pool (size/percent in use); JVM heap used/free; CPU utilization; disk I/O read/write rates; paging.
- Thread state profiling, e.g. dumping the thread stack, allows you to identify synchronization bottlenecks and method hotspots.
- Application call flow analysis, i.e. looking at the times spent in the various components of a request, can help eliminate components as bottleneck suspects.
Last Updated: 2018-02-27
Copyright © 2000-2018 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss
Trouble with this page? Please contact us