Java Performance Tuning




Newsletter no. 30, May 22nd, 2003


We've seen quite a few changes in ownership of Java performance tools vendors recently. VMGear (OptimizeIt) was taken over by Borland; Sitraka (JProbe and PerformaSure) was taken over by Quest; Precise (Indepth) was taken over by Veritas; and now Mercury have taken over Performant (OptiBench). What is going on?

In each case, the buying company is already in the tools arena, and wanted to extend their reach into Java. This is all evidence of where the real action is nowadays: Java. These companies are betting millions and millions of dollars that Java is a huge success. Not just that Java is a huge success right now, which it is. The takeover prices included a large element of assumed growth which means that these companies are betting big money that Java is going to be an even bigger success going forward.

The next step for Java performance tool vendors is to provide an integrated Java performance environment (such as Quest Central) which extends across the whole enterprise. And there are still companies with holes in their enterprise-wide Java performance strategy, so I wouldn't be surprised to see yet more takeovers in this area.

A note from this newsletter's sponsor

Java/J2EE performance or scalability problems? Do NOT buy additional
hardware, do NOT buy additional software, do NOT refactor your app.
JTune(TM) GUARANTEES improvements with none of the above costs.

This site started as a support site for my book, Java Performance Tuning. The site has outgrown that humble beginning, but there is still a synergy between the site and the book. So if you found my book or this site useful, please vote for "Java Performance Tuning, 2nd edition" in the JDJ readers' choice awards. It's brilliant to see that "Java Performance Tuning, 2nd edition" has already made the editors' choice finalists list at JavaWorld. Getting into the top three at JDJ would be a great complement to that.

In the newsletter we list some fabulous articles, new tools, and more. This month I want to point out one particular tool: JFluid, from one of Sun's research groups. Being able to run your application in the normal HotSpot JVM and, at any time, decide you want to attach a profiler is a powerful idea, well worth supporting in my view. Download the tool, try it out, email its author to give him your support, and perhaps this can be moved up the Sun priority list.

In addition, we have all our usual sections. Two tool reports: The HPjtune tool which helps you analyze garbage collection statistics from verbosegc output; and IronGrid's IronEye SQL for tracing and analyzing JDBC performance. Note that IronGrid will enter you for a chance to win $2,500 in cash if you download a trial version before the end of May. Details in the tool report.

Kirk's roundup covers the subtleties of NIO select, heap tuning, clustered servers and much more. We interview Mike Norman, who gives us a JDBC performance masterclass. Our question of the month considers volatile and synchronized; Javva The Hutt reports the dumbest bug ever; and, of course, we have 100 new performance tips extracted in concise form.

A note from this newsletter's sponsor

JProbe helps developers understand precisely what is causing
problems in Java applications - right down to the offending
line of source code. Download a free evaluation of JProbe today.

Tool Reports


Java performance tuning related news.

A note from this newsletter's sponsor

Get a free download of GemStone Facets 2.0 and see how this
patented technology supports JCA 1.0 to seamlessly integrate
with J2EE application servers to enhance performance.



Recent Articles

Jack Shirazi

Kirk's Roundup

I had a driver for a couple of years. It sounds more glamorous than it really is: Leonard, my driver, was actually a taxi driver I had made a deal with. Leonard was courteous, reliable and often knew my schedule better than I did. He was also curious about where I was off to and what I was up to. So the question arose: how does one explain to a layperson what performance tuning is all about? Then I quickly realized that I was talking to a master of optimization. Here was a person who understood the quickest way to get from point A to point B, and who knew how to adjust that path based on expected traffic. So I came up with an explanation based on his ability to optimize. The world is full of opportunities for learning. And now let's see what we can learn from this month's roundup.

The JavaRanch

One poster wondered why GC kicked in repeatedly at 32M when he specified -Xms32m -Xmx64m. Was the -Xmx parameter being ignored? The sole answer suggested that the JVM was tuned to believe that GC is cheaper than growing the heap. So once 32M is reached, it will always try reclaiming space before asking the OS for more memory. In this particular application the reclaims were always successful, allowing the JVM to stay at 32M. Neither the JVM vendor nor the version was mentioned.

Another question asked how to configure the JVM for production. The answer: use -server, and set the -Xms and -Xmx heap parameters. There was no answer to the follow-up question of how to choose the heap parameters. My answer is to check this site (or the 2nd edition of Jack's book, which covers heap tuning methodology).
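Once you have chosen heap parameters, it is worth confirming what the JVM actually received. This small sketch (my own generic check, not from the discussion) reports the heap settings via Runtime:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // totalMemory() reflects the heap currently reserved (near -Xms at startup);
        // maxMemory() reflects the ceiling set by -Xmx.
        System.out.println("current heap (MB): " + rt.totalMemory() / mb);
        System.out.println("max heap (MB): " + rt.maxMemory() / mb);
    }
}
```

Running it with `java -Xms32m -Xmx64m HeapInfo` should show a current heap near 32 and a max near 64, though the exact figures vary by JVM.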

The "Java vs. C++ speed" troll post came up yet again. This time not much heat was generated, as all the answers sensibly said Java was faster for some things and C++ faster for others. One poster suggested Java wasn't fast enough to write games or real-time systems, which will come as a surprise to Java game developers and lots of embedded systems writers. (They should actually be pleased; it's always nice to be told you are succeeding at achieving the impossible.) The best answer: 'The real question should be "Is Java fast enough for the job at hand?".'

A fascinating discussion cropped up about replacing multiple threads with NIO select for a multiplayer networked game server. This is a live game, with many (over 100) players. The programmer found that despite the OS having free resources, the JVM could not exceed about 1,000 threads on his Linux server (different JVMs had different limits). He had correctly raised ulimit to allow unlimited threads for the user, and had recompiled the kernel to allow 4096 open files per process (up from the default of 1024). None of this seemed to help.

The other posters suggested switching to NIO, which he did, and the thread limitation was no longer an issue, as the NIO-based server used only a few threads. However, a different problem now cropped up: the NIO select call was taking 100% CPU, although the server seemed able to handle as many users as required. Instead of a proper blocking select call, it seemed to be polling continuously. Although the discussion never solved this problem, the code was posted. Reviewing the code, I could see that whenever a new socket was accepted, it was registered with the selector in both READ and WRITE modes. However, a new or unwritten socket is always ready to be written to, so the selector naturally returned immediately each time it was called because there were ready sockets to service. The problem was a subtle bug, difficult to spot if you haven't played with NIO selectors before. Since the call to the selector always returned immediately, the I/O service thread was looping, hence the 100% CPU utilization. Whenever there was spare time in the system the I/O thread looped, but apart from that one erroneously looping thread it wasn't actually loading the CPU, so the CPU could handle all the other game engine threads without a noticeable decrease in performance. The solution is to avoid registering the socket for WRITE mode except when it actually has data to write.
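The WRITE-registration bug can be reproduced in miniature with a Pipe instead of sockets. This sketch (my own illustration, not the posted game code) shows why a channel registered for WRITE makes select return immediately, while a READ-only registration stays quiet until data arrives:

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);
        pipe.source().configureBlocking(false);

        // Registering for OP_WRITE: an unfilled channel is always writable,
        // so the selector reports it ready on every call -- the busy-loop bug.
        Selector busy = Selector.open();
        pipe.sink().register(busy, SelectionKey.OP_WRITE);
        System.out.println("OP_WRITE ready channels: " + busy.selectNow());

        // Registering for OP_READ only: nothing to read yet, so selectNow()
        // reports no ready channels, and a blocking select() would sleep.
        Selector idle = Selector.open();
        pipe.source().register(idle, SelectionKey.OP_READ);
        System.out.println("OP_READ ready channels: " + idle.selectNow());

        busy.close();
        idle.close();
        pipe.sink().close();
        pipe.source().close();
    }
}
```

The fix in a real server follows directly: register accepted sockets for OP_READ only, and add OP_WRITE to the interest set just while there is pending output.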

Another poster found that using a custom ColorModel slowed painting down enormously on one platform (MacOS X). The problem turned out to be that ColorModels are optimized on each platform, and you should use the default ColorModel, from GraphicsConfiguration.getColorModel() or Toolkit.getColorModel(), to maximize your chances of getting an optimal ColorModel and optimal painting performance.

Finally, there was an inconclusive discussion on whether boolean comparison or integer comparison to 0 is faster. My guess, and that of the respondents, was that it depends entirely on the JVM/OS/compiler combination and the optimizations they apply.

The Server Side

One poster was seeing strange behavior from his clustered Weblogic BMP entity beans. It looked like synchronization (of data between servers) wasn't working. The beans were strongly optimized, with database stores and accesses only happening when the bean had changed. But Weblogic uses ejbLoad() to synchronize by re-loading beans. So it seems like the optimization may interact with the synchronization to cause problems.

Another poster asked about running batch EJB jobs on millions of records. The only answer pointed out that running a batch job simplistically, as if each record manipulation was like one user call, would be like simulating millions of user requests, causing millions of very short transactions and would likely bring the system (especially database) to its knees. This poster suggested throttling the requests, combining transactions and controlling when commits occur.

Balletic I.T.

Last week I had a conversation with a sales guy. He commented that the real technical people would always search for what they are looking for. But the most successful businesses were those that also catered to the hobbyist. The interesting point is that he was talking about a ballet store. It's funny how some observations transcend specialities.

Kirk Pepperdine.

Javva The Hutt

A few of my friends are I.T. consultants, and I know several other consultants from one meet or another. So I know how you guys (and gals) have been suffering. Some of you have tried to move back to permanent positions with varying degrees of success; some of you have taken contract work at half (if you were lucky) or a quarter of what you were getting previously. And some of you have just stuck it out, not getting any real work for a year or more, living as best you could while trying not to eat into your savings too badly. One of my best friends recently even lost his "permanent" position, two months before he was due to be married, with everything booked and a new mortgage on the just-bought marital home. He's had no luck getting back into a job yet.

Well, here's some good news. It looks like things have bottomed out. They may even be beginning to pick up. Here's even an upbeat report for those of you looking for work. I wouldn't say that boom time is back, far from it. If you can't cope with years more of very slow growth in I.T., then don't stick around, find something that pays for you. But if you are a career I.T. person, and things haven't ever looked as bleak for you as they do now, then I think the worst may just be over. And when the sun is shining and the weather is great, go out and spend some time in a park thinking about us stuck in our over-air-conditioned offices, looking out the window and wishing we could be in a park. It won't put bread in your mouth, but maybe it'll make you feel a bit better. Here's wishing you all good fortune in your life, your health and your career.

"Dumbest bug ever"

Having previously pointed you towards that wonderful work of art which is Microsoft Knowledge Base Article 276304, "Your Password Must Be at Least 18770 Characters and Cannot Repeat Any of Your Previous 30689 Passwords" (see Javva The Hutt, January 29th, 2003), I couldn't resist pointing to this delightful and carefully crafted piece of software development from Microsoft. Like the author of that article, you probably also think this is a bug. But with my heightened sense of artistic appreciation I can see that it is in fact an expression of poignant irony delivered by a master software creator who rivals Leonardo da Vinci in his brilliance. Ah master, in time the rest of the world will appreciate your magnificent gifts. In the meantime, take comfort knowing that some of us have the perception to see your inspired artistry.

Diary of a Hutt

April 9. Brainshrii is aboard. I've been busy delegating responsibility. I read somewhere that top commanders of business are always in meetings, so I'm trying to put the day to day work in the capable hands of Brainshrii and Boris. Then I can spend all my time meeting. Ugh. Maybe that's not such a good idea.

April 16. Listening to Brainshrii and Boris have a conversation has become the highlight of my day. This has got to be the most entertaining duo I know of. Unfortunately, they seem to be acclimatising each other to their accents, so the entertainment probably won't last. What's it like? Do you remember the Muppets' Swedish Chef? Is there a regional accent in your country that requires ten minutes of speech before you can begin to understand it? Well, square that and you'll get an idea of what I'm talking about. I've found that I'm fairly comfortable talking with either one of them, but I'm holding meetings of the three of us in email at the moment.

April 23. Ah, the human being is incredibly adaptable. What was highly amusing last week is non-existent this week. Looking back over my diary, I almost don't know what I was talking about, so accustomed have I become to the combined Boris and Brainshrii talk. Weevil, on the other hand, is still having problems with Brainshrii's accent. Which is why I have made him the liaison for Project Xenon. Brainshrii understands everything Weevil says. The rest is just bonus.

April 30. Well, it seems that Weevil agreed to go on a sponsored ten kilometer run for charity. I must say, that was very kind of him. When I first suggested Brainshrii ask him, I wouldn't have expected him to accept, given how out of condition he is. But I heard his agreement and emailed him congratulating him on his excellent charitable spirit. Of course I offered the first sponsorship amount, and cc'd the entire department so that they all knew what an excellent chap he is.


Javva The Hutt.

The Interview: Mike Norman, Oracle-TopLink

This month we interviewed Mike Norman, the man in charge of performance at TopLink. The interview turned into a JDBC performance masterclass, which is to all our benefit. Read on.

JPT: Can you tell us a bit about yourself and what you do?

I was employed in the Telecom industry for 13 years, working on a wide variety of technologies ranging from small embedded real-time devices (only 128K ROM!) to large broadband network management systems with hundreds of thousands of lines of code. My major focus was on Object-Oriented systems and distributed computing (DCE/RPC, CORBA, Distributed Smalltalk, etc).

I had just finished working on a very large Smalltalk project and I was looking for an opportunity to switch to Java. I left the Telecom industry (no, I did not foresee the impending collapse of Telecom!) and joined the TopLink development team in September 1998. Since I had a CORBA background, the TopLink Chief Architect gave me a copy of version 0.8 of the EJB specification and said they were thinking about using TopLink to implement the persistence portion of the spec. Our first "EJB container" shipped in mid-1999 with subsequent releases every 4 or 5 months.

I then joined the Professional Services group and traveled extensively for more than two years, teaching courses on EJBs and TopLink, presenting at conferences and consulting directly with customers. After the acquisition by Oracle, I re-joined the TopLink development group and have been focused primarily on performance.

JPT: Since TopLink was acquired by Oracle, your personal focus has been on performance. How did you plan and initiate this effort and what were your expectations?

Since TopLink has two distinct usage patterns - with EJBs and without - it became clear that we could not solely depend upon benchmarks such as SPECjAppServer (formerly known as ECperf). Additionally, "micro"-focused information from profilers does not always translate well into "real-world" scenarios. My main task then was to design a framework that allows developers to write simple, robust and repeatable benchmarks to gauge the performance of any arbitrary TopLink code, not just TopLink-enabled EJB code.

We approached this like any other feature of TopLink: requirements gathering, design specifications, resources, timelines, etc. The main goal of the Performance Benchmark feature is to provide quantitative data to objectively measure TopLink's run-time performance.

The Benchmark Infrastructure framework I designed was heavily influenced by JUnit. A "Workload" has a simple lifecycle - setUp(), work() and tearDown() - and is specified in an XML "WorkloadConfiguration" document (analogous to a TestSuite). A WorkloadConfiguration contains information describing the number of times a workload is run as well as the desired number of threads to schedule. The Benchmark Infrastructure also allows Workloads to be run concurrently to simulate real-world scenarios where multiple threads compete within a JVM for the same resource(s). Here is a sample WorkloadConfiguration document:

<?xml version="1.0"?>
<!DOCTYPE workload-configuration SYSTEM "file:///C:/benchmark/infrastructure/lib/workloads.dtd">
<workload-configuration>
    <name>Example Workload Configuration</name>
    ...
    <value><![CDATA[Hello, World!]]></value>
    ...
</workload-configuration>

In general, the Benchmark Infrastructure does not provide assistance for 'micro' tuning; however, it acts as a starting point for investigations that can subsequently "drill down" using micro-tuning tools. Conversely, once a round of micro tuning has been completed, Workloads can be re-run to ensure that the gains are realizable in the real world.
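To make the setUp()/work()/tearDown() lifecycle concrete, here is a minimal sketch of what such a JUnit-style harness might look like. The Workload interface and runner below are my own illustration of the idea, not TopLink's actual Benchmark Infrastructure API:

```java
// Hypothetical lifecycle interface mirroring the names described above.
interface Workload {
    void setUp();
    void work();
    void tearDown();
}

public class WorkloadRunner {
    // Runs a workload the configured number of times, timing only work().
    static long run(Workload w, int iterations) {
        w.setUp();
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            w.work();
        }
        long elapsed = System.nanoTime() - start;
        w.tearDown();
        return elapsed;
    }

    public static void main(String[] args) {
        final long[] counter = new long[1];
        Workload w = new Workload() {
            public void setUp() { counter[0] = 0; }
            public void work() { counter[0]++; }
            public void tearDown() { }
        };
        long ns = run(w, 1000);
        System.out.println("iterations executed: " + counter[0]);
        System.out.println("elapsed non-negative: " + (ns >= 0));
    }
}
```

A real harness would read the iteration and thread counts from the WorkloadConfiguration document and schedule Workloads across threads; this sketch keeps only the single-threaded core of the idea.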

JPT: We all know that performance tuning can be a "Voyage of Discovery"; as such, how did your plans and expectations change as the exercise progressed?

I suppose the biggest change has been the acquisition of TopLink by Oracle. Oracle is very focused on performance throughout its product line and TopLink is now subject to a level of scrutiny unlike any in the past. Specifically, TopLink performance must be considered within the context of the whole Oracle9iAS product suite and in a variety of different configurations (Clustering, High Availability, etc.)

JPT: What was the most interesting performance problem that you found and how did you solve it?

My most significant problem is not specific to TopLink, but generic to all J2EE multi-tiered applications. The interactions between the JVM, the native OS's process scheduler and the JDBC driver (network I/O) make it very difficult to get consistent results. Even separating the tiers onto different machines did not always produce similar results.

In order to minimize the variability, I eliminated the front tier by building a Workload runner that executes solely in the AppServer; additionally, I eliminated the network round-trip cost to the back-end DB tier by using a local process. I was able to create a precise "slice" through the TopLink run-time to get exactly the timing information I required. Of course, future testing will re-introduce those tiers in order to behave like the real world.

JPT: Did you run into any performance issues that were significant enough to warrant reworking portions of TopLink?

What is perhaps more interesting is the opposing question - which portions of TopLink did we think required reworking, but actually showed no performance problems? There are some areas of TopLink that we ourselves have always thought of as expensive, but that turns out not to be the case.

For example, TopLink uses reflection extensively at run-time and "conventional" wisdom says that this is expensive. However, it appears that the cost of reflection is not a large contributor - at run-time, network I/O dominates. Additionally, the improvements to reflection in JDK 1.4 make this even less of an issue.

Similarly, we determined that with respect to threading, TopLink scales linearly with the workload: ask TopLink to do twice as much work, and it will take twice as long.

JPT: Did you run into legacy code or design that was intended to deal with performance issues which no longer hold?

No - performance related code is reviewed regularly so that it does not get "stale". Of course, we are always looking for ways to improve TopLink's performance.

JPT: What do you consider the biggest Java performance issue currently?

The cost of garbage collection is still a significant "weight" on run-time performance. Soon we will see 64-bit computing deployed across the enterprise. When a VM has thousands of megabytes of RAM available to it (remember when 640K was a lot?), garbage collection algorithms must be very sophisticated to handle the enormous number of objects to be collected. Similarly, with such large systems the cost of synchronizing amongst many threads is likely to be a significant challenge.

JPT: What are the most common performance related mistakes that you have seen projects make when developing TopLink applications?

I guess the most common performance related mistake is trying to optimize an area of code that in the end will not result in much overall improvement. I use the following rule-of-thumb when thinking about J2EE middle-tier applications:

  1. The cost of "regular" computation within a JVM is 1;
  2. The cost of invoking against EJBs is 10;
  3. The cost of retrieving information from the database is 100 (with some JDBC drivers it is closer to 1000!).

Thus eliminating even one or two round-trips to the database is far better than all the StringBuffer optimizations one may ever find!

JPT: Do you have any particular performance tips you would like to tell our readers about? Any TopLink performance tips?

Using my rule-of-thumb guide above, the #1 determiner of performance for J2EE applications will be the number of round-trips to the database via JDBC. In many cases, it is immaterial how many rows are returned (well, obviously beyond some point it becomes significant). TopLink has several performance features that optimize interactions with the database, resulting in a dramatic reduction in the number of round-trips.

The primary TopLink performance feature is called "Indirection" (a.k.a. "lazy" or "just-in-time" reading). Consider the following simple Employee model:

Employee has an Address and multiple Phones

When TopLink builds an Employee object, not only does it issue SQL to the EMPLOYEE table, but it must also issue SQL to join to the ADDRESS and PHONE tables:

SQL joins of EMP to ADDR and PHONE tables

If the only thing that you wanted to do with this Employee was to print her name, it is a rather expensive operation - 3 round-trips to the database!

TopLink can "stub out" the "address" and "phones" attributes of the Employee, replacing them with proxies that hold the SQL we would have sent:

    select EMP_ID, F_NAME, L_NAME from EMPLOYEE where (EMP_ID = 558)

row in EMP table, with a proxy (holding the deferred SQL) standing in for the Address row in the ADDR table

The behavior of the proxy is such that if no one asks this Employee object for its Address, then no SQL is sent to the database. Of course, if the Address is already in-memory, then it is returned. The net result is that building this Employee object is now one-third the original cost.
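A minimal sketch of how such a proxy can defer its database read until first access is shown below. The ValueHolder name and the Supplier-based stand-in for the deferred SQL are my own illustration of the indirection idea, not TopLink's implementation:

```java
import java.util.function.Supplier;

// Hypothetical indirection proxy: the query only runs on first access.
class ValueHolder<T> {
    private final Supplier<T> query;  // stands in for the deferred SQL call
    private T value;
    private boolean resolved;

    ValueHolder(Supplier<T> query) {
        this.query = query;
    }

    synchronized T getValue() {
        if (!resolved) {              // first access "triggers" the database read
            value = query.get();
            resolved = true;
        }
        return value;
    }

    synchronized boolean isResolved() {
        return resolved;
    }
}

public class IndirectionDemo {
    public static void main(String[] args) {
        ValueHolder<String> address = new ValueHolder<>(
            () -> "row from: select * from ADDRESS where EMP_ID = 558");
        System.out.println("resolved before access: " + address.isResolved());
        address.getValue();           // proxy triggered here
        System.out.println("resolved after access: " + address.isResolved());
    }
}
```

If no code ever calls getValue(), the "SQL" never runs, which is exactly why building the Employee alone costs a single round-trip.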

The above optimization may be reasonable most times, but there may be some application-specific business logic where you know that as soon as the Employee is retrieved, the proxy will be "triggered" and some attribute of the Address is required. TopLink has a feature called "joined-attribute-querying" that allows the designer to retrieve both the Employee and its Address in a single database round-trip (joined queries can be composed either statically at design time, or dynamically at run-time):

example Java code and SQL to access data
example row of pseudo-table EMP+ADDR

TopLink uses the above "super-row" - essentially the two rows shown above concatenated together - to build both objects in a single database round-trip (Note: the presence of the proxy between the Employee and its Address has no impact on this query).

Another way of dealing with the round-trip cost of the database is to "bulk shop" for your data. For example, suppose that some business operation requires a list of 100 Employees and their Addresses:

100 Employee-proxy-Address objects

Let us suppose that the Employees were retrieved from the database using a single SQL call - this has reasonable efficiency: 100 objects/1 db round-trip. However, a simple loop iterating through the list of Employees to retrieve their Addresses will trigger each proxy. The additional SQL calls drop the efficiency to a miserable level: 200 objects/101 db round-trips.

To solve this, TopLink has a feature called batched-reading; the ability to specify that all the Addresses are to be read in a batch. Regardless of which proxy is triggered, a single SQL call reads in all the Addresses for the 100 Employees:

Any one proxy reads all 100 addresses

We now have a very reasonable level of efficiency: 200 objects/2 db round-trips.
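The round-trip arithmetic can be sketched with a simple counter. The query methods below are hypothetical stand-ins for real JDBC calls, used only to contrast the naive per-proxy loop with a batched read:

```java
import java.util.ArrayList;
import java.util.List;

// Contrast N+1 per-proxy reads with a single batched read.
public class BatchReadDemo {
    static int roundTrips = 0;

    // Stand-in for one SQL call fetching one Address.
    static String queryOne(int empId) {
        roundTrips++;
        return "addr-" + empId;
    }

    // Stand-in for batched reading: one SQL call for every Address.
    static List<String> queryBatch(List<Integer> empIds) {
        roundTrips++;
        List<String> out = new ArrayList<>();
        for (int id : empIds) {
            out.add("addr-" + id);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> empIds = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            empIds.add(i);
        }

        roundTrips = 1;                      // one call fetched the 100 Employees
        for (int id : empIds) {
            queryOne(id);                    // naive loop triggers each proxy
        }
        System.out.println("naive round-trips: " + roundTrips);

        roundTrips = 1;                      // employees fetched again in one call
        queryBatch(empIds);                  // batched-reading style
        System.out.println("batched round-trips: " + roundTrips);
    }
}
```

The counts reproduce the efficiency figures in the text: 101 round-trips for the naive loop against 2 for the batched read.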

JPT: What additional significant thing did you learn from this exercise?

Post-processing the data collected by a benchmarking exercise is yet another area whose complexity people underestimate. You must throw out statistical outliers, even though you really want to keep the extremely fast runs to make your benchmark look good. Additionally, you must plot your data and visually inspect it to see if there is clustering. From here, things get very complicated very quickly; "K-means clustering" is not "everyday" statistics. If there are multiple loci of typical responses, you cannot use simple statistical averages to summarize the results. Whenever I encountered this phenomenon, I was forced to re-design my Workloads so that only a single response cluster would appear in the results.

JPT: Mike, we would like to thank you for taking the time to answer our questions, and we wish you continued success with your effort to tune TopLink.

This interview was conducted by Kirk Pepperdine.

(End of interview).

Question of the month

What does volatile do?

This is probably best explained by comparing the effects that volatile and synchronized have on a method. volatile is a field modifier, while synchronized modifies code blocks and methods. So we can specify three variations of a simple accessor using those two keywords:

         int i1;              int geti1() {return i1;}
volatile int i2;              int geti2() {return i2;}
         int i3; synchronized int geti3() {return i3;}

geti1() accesses the value currently stored in i1 in the current thread. Threads can have local copies of variables, and the data does not have to be the same as the data held in other threads. In particular, another thread may have updated i1, but the value in the current thread could be different from that updated value. In fact, Java has the idea of a "main" memory, and this is the memory that holds the current "correct" value for variables. Threads can have their own copies of data for variables, and a thread's copy can differ from "main" memory. So it is possible for "main" memory to hold a value of 1 for i1, for thread1 to have a value of 2, and for thread2 to have a value of 3, if thread1 and thread2 have both updated i1 but those updated values have not yet been propagated to "main" memory or to other threads.

On the other hand, geti2() effectively accesses the value of i2 from "main" memory. A volatile variable is not allowed to keep a local copy that differs from the value currently held in "main" memory. Effectively, a variable declared volatile must have its data synchronized across all threads, so that whenever you access or update the variable in any thread, all other threads immediately see the same value. Of course, volatile variables are likely to have a higher access and update overhead than "plain" variables, since the whole reason threads can keep their own copies of data is efficiency.
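The classic illustration of this visibility guarantee is a worker thread spinning on a volatile stop flag: because the field is volatile, the main thread's update is guaranteed to become visible to the worker, which then terminates.

```java
public class VolatileStopFlag {
    // volatile guarantees the worker sees the update from the main thread.
    // Without volatile, a JVM is permitted to let the worker keep reading a
    // cached copy of the flag and spin forever (though many JVMs won't).
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long spins = 0;
            while (running) {
                spins++;
            }
            System.out.println("worker stopped after " + spins + " spins");
        });
        worker.start();
        Thread.sleep(100);   // let the worker spin for a while
        running = false;     // update is visible to the worker: field is volatile
        worker.join(5000);
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```

Note that volatile only guarantees visibility of this single field; it does not provide the mutual exclusion that synchronized does, which is the subject of the next paragraph's comparison.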

So if volatile already synchronizes data across threads, what is synchronized for? There are two differences. Firstly, synchronized obtains and releases locks on monitors, which can force only one thread at a time to execute a code block, if both threads use the same monitor (effectively the same object lock). That's the fairly well-known aspect of synchronized. But synchronized also synchronizes memory: in fact, synchronized synchronizes the whole of thread memory with "main" memory. So executing geti3() does the following:

  1. The thread acquires the lock on the monitor for object this (assuming the monitor is unlocked, otherwise the thread waits until the monitor is unlocked).
  2. The thread memory flushes all its variables, i.e. it has all of its variables effectively read from "main" memory (JVMs can use dirty sets to optimize this so that only "dirty" variables are flushed, but conceptually this is the same. See section 17.9 of the Java language specification).
  3. The code block is executed (in this case setting the return value to the current value of i3, which may have just been reset from "main" memory).
  4. (Any changes to variables would normally now be written out to "main" memory, but for geti3() we have no changes.)
  5. The thread releases the lock on the monitor for object this.

So where volatile only synchronizes the value of one variable between thread memory and "main" memory, synchronized synchronizes the value of all variables between thread memory and "main" memory, and locks and releases a monitor to boot. Clearly synchronized is likely to have more overhead than volatile.

The team

Continuous Performance (Page last updated 2003 April, Added 2003-05-22, Author Cody Menard, Publisher TheServerSide). Tips:
Understanding Performance in Web Service Development (Page last updated 2003 April, Added 2003-05-22, Author Peter Varhol, Publisher Web Services Journal). Tips:
Caching in J2EE Architectures (Page last updated 2003 April, Added 2003-05-22, Author Helen Thomas, Publisher JDJ). Tips:
Errant Architectures (Page last updated 2003 April, Added 2003-05-22, Author Martin Fowler, Publisher Software Development Magazine). Tips:
Performance of Lists (Page last updated 2003 April, Added 2003-05-22, Author karschten, Publisher JPTC). Tips:
Select for high-speed networking (Page last updated 2003 April, Added 2003-05-22, Author Greg Travis, Publisher Javaworld). Tips:
Watch your HotSpot compiler go (Page last updated 2003 April, Added 2003-05-22, Author Vladimir Roubtsov, Publisher Javaworld). Tips:
Proactive Application Monitoring (Page last updated 2003 April, Added 2003-05-22, Author Alexandre Polozoff, Publisher IBM). Tips:
Practical examples for improving system responsiveness (Page last updated 2003 April, Added 2003-05-22, Author Cameron Laird, Publisher IBM). Tips:
Minimize Contention (Page last updated 2003 March, Added 2003-05-22, Author Ted Neward, Publisher Neward). Tips:
6 Tips for High-Performance Java Apps (Page last updated 2003 March, Added 2003-05-22, Author Peter Varhol, Publisher ftponline). Tips:

Jack Shirazi

Last Updated: 2017-12-29
Copyright © 2000-2017. All Rights Reserved.
All trademarks and registered trademarks appearing on this site are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. This site is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.