Java Performance Tuning
Tips September 2006
http://blog.xebia.com/2006/09/06/interview-with-kirk-pepperdine-of-javaperformancetuningcom-2/
Interview With Kirk Pepperdine (Page last updated September 2006, Added 2006-09-27, Author Jeroen Borgers, Publisher Xebia). Tips:
- Many developers are ill equipped to deal with performance problems, and really need some training in this area.
- To solve a performance problem you need to characterize the problem, apply some solution, and then re-measure for effect.
- The most typical and most frequently occurring performance problems encountered in companies are database interactions and memory management.
- The first thing to do is understand the source of the problem. Once you see the source, the solutions are typically self-explanatory.
- It is pretty common that people miss obvious tuning opportunities because of stress and the complexity of the environments that they are working in.
- Aside from adding an all important index, the largest performance improvements tend to come from fundamental changes in architecture or design.
- By using the first version of an application as a requirements document for the second, you can often realize some amazing reductions.
- The "Guess, don't Measure" anti-pattern is the biggest and baddest one of all. The refactored solution is unsurprisingly called "Measure, Don't Guess".
- O/R mapping tools introduce some drag but in the grand scheme of things it is nothing compared to a good old hit on the network.
- Benchmarking is a much more complex task than it sounds. Anyone that thinks they are going to get numbers in a day or so hasn't tried this before, or will be reporting bogus numbers, or will need lots of luck.
- The best database call is the one you didn't make, and you can avoid making them if you use caching (see the caching sketch after these tips).
- AJAX breaks the idea that someone is going to use a connection quickly and not so often. This will put more stress on the server.
- If AJAX can keep users busy so that they don't notice unreasonable response times, then it's done its job.
- Most ESBs use XML, which is about the fattest protocol that one can use. So what we have is an application that is using the fattest protocol over the slowest piece of infrastructure, and you want to talk about performance. Obviously a tightly coupled client/server pair using a lightweight protocol is going to run circles around an ESB-architected solution.
- Frameworks provide functionality we need. However, using them doesn't come for free. Used properly, frameworks will offer good enough performance.
- Generally the economic benefits of using a framework outweigh concerns for performance, and in my humble opinion this is the way it should be.
- The most cost-effective performance tuning solution could be to purchase more hardware.
- Sometimes more hardware makes no difference whatsoever. The first thing you need to know is what resource constraint is responsible for the performance problem.
- I tend to rely on what some would regard to be very primitive tools. If I could equate favorite to most useful then I would have to pick vmstat. Now you may think that it is funny that a tool that spits out kstat values would be so useful in performance tuning Java, however you can use it to eliminate entire classes of problems in just a few minutes. If you are looking for a needle in a haystack the best thing to do is to get rid of as much hay as you can up front.
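A minimal sketch of the caching tip above: a read-through cache in front of a data access object. The Customer class, CustomerDao interface and CachingCustomerDao wrapper are hypothetical names used purely for illustration, not anything taken from the interview.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical domain class and DAO standing in for real database access.
    class Customer { /* fields omitted */ }

    interface CustomerDao {
        Customer loadCustomer(String id);   // expensive: one database round trip
    }

    // Read-through cache: the best database call is the one you never make.
    class CachingCustomerDao implements CustomerDao {
        private final CustomerDao delegate;
        private final Map<String, Customer> cache = new ConcurrentHashMap<String, Customer>();

        CachingCustomerDao(CustomerDao delegate) { this.delegate = delegate; }

        public Customer loadCustomer(String id) {
            Customer cached = cache.get(id);
            if (cached != null) {
                return cached;                            // hit: no database call made
            }
            Customer loaded = delegate.loadCustomer(id);  // miss: one database call
            cache.put(id, loaded);                        // later requests served from memory
            return loaded;
        }
    }

A real cache also needs size bounds and an invalidation policy; this sketch grows without limit and never refreshes stale data.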
http://www.javaworld.com/javaworld/jw-09-2006/jw-0904-threads.html
Executing tasks in threads (Page last updated September 2006, Added 2006-09-27, Author Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, Doug Lea, Publisher Javaworld). Tips:
- Independence facilitates concurrency, as independent tasks can be executed in parallel if there are adequate processing resources.
- For most server applications, individual client requests provide a natural choice of task boundary.
- Sequential processing rarely provides either good throughput or good responsiveness.
- Processing requests in parallel with a thread-per-task approach is an improvement over sequential execution. As long as the request arrival rate does not exceed the server's capacity to handle requests, this approach offers better responsiveness and throughput.
- Thread creation and teardown are not free. If requests are frequent and lightweight, as in most server applications, creating a new thread for each request can consume significant computing resources.
- Active threads consume system resources, especially memory. When there are more runnable threads than available processors, threads sit idle tying up a lot of memory, putting pressure on the garbage collector.
- If you have enough threads to keep all the CPUs busy, creating more threads won't help and may even hurt.
- Up to a certain point, more threads can improve throughput, but beyond that point creating more threads just slows down your application.
- Creating one thread too many can cause your entire application to crash horribly.
- Unbounded thread creation may appear to work just fine during prototyping and development, with problems surfacing only when the application is deployed and under heavy load.
- Executor provides a standard means of decoupling task submission from task execution.
- [Article shows code using pre 1.5 style threads for multi-threading, and then reimplemented to use Executors and thread pools].
- An Executor which starts a new thread for each task is new Executor(){ public void execute(Runnable r){ new Thread(r).start(); } }
- An Executor which executes all tasks in the calling thread is new Executor(){ public void execute(Runnable r){ r.run(); } }
- An Executor which executes all tasks in a pool of threads is Executors.newFixedThreadPool(NTHREADS).
- Decoupling submission from execution lets you specify: in which thread tasks will be executed; the execution order of tasks (FIFO, LIFO, priority order); how many tasks may execute concurrently; the number of queued tasks; the policy for rejecting tasks; and actions to be taken before or after executing a task.
- Whenever you see code of the form new Thread(runnable).start() and you think you might at some point want a more flexible execution policy, seriously consider replacing it with the use of an Executor (see the Executor sketch after these tips).
- Reusing an existing thread instead of creating a new one amortizes thread creation and teardown costs over multiple requests, and can eliminate thread startup latency.
- By properly tuning the size of the thread pool, you can have enough threads to keep the processors busy while not having so many that your application runs out of memory or thrashes due to competition among threads for resources.
- Executors.newFixedThreadPool gives a fixed-size thread pool that creates threads as tasks are submitted, up to the maximum pool size, and then attempts to keep the pool size constant (adding new threads if a thread dies due to an unexpected Exception).
- Executors.newCachedThreadPool gives a cached thread pool which can reap idle threads when the current size of the pool exceeds the demand for processing, and add new threads when demand increases, but places no bounds on the size of the pool.
- Executors.newSingleThreadExecutor gives a single-threaded executor that creates a single worker thread to process tasks, replacing it if it dies unexpectedly - tasks are executed sequentially.
- Executors.newScheduledThreadPool provides a fixed-size thread pool that supports delayed and periodic task execution, similar to Timer.
- Switching from a thread-per-task policy to a pool-based policy stabilizes an application under heavy loads.
- Even with a fixed size thread pool it is still possible to run out of memory because of the growing queue of Runnables awaiting execution - use a bounded work queue to eliminate this possibility.
- Using an Executor opens the door to all sorts of additional opportunities for tuning, management, monitoring, logging, error reporting, and other possibilities.
- Failing to shut down an Executor could prevent the JVM from exiting.
- ExecutorService has three states: running, shutting down, and terminated.
- ExecutorService.shutdown initiates a graceful shutdown: no new tasks are accepted but previously submitted tasks are allowed to complete, including those that have not yet begun execution.
- ExecutorService.shutdownNow initiates an abrupt shutdown: it attempts to cancel outstanding tasks and does not start any tasks that are queued but not begun.
- Tasks submitted to an ExecutorService after it has been shut down are handled by the rejected execution handler, which might silently discard the task or might cause execute to throw the unchecked RejectedExecutionException.
- You can wait for an ExecutorService to reach the terminated state with awaitTermination, or poll for whether it has yet terminated with isTerminated.
- Timer manages the execution of deferred and periodic tasks. However, Timer has some drawbacks, and ScheduledThreadPoolExecutor should be thought of as its replacement.
- Timer supports scheduling based on absolute rather than relative time, which makes tasks sensitive to changes in the system clock; ScheduledThreadPoolExecutor supports only relative time.
- Timer creates only a single thread for executing timer tasks which means the timing accuracy of TimerTasks can suffer. Scheduled thread pools address this limitation by letting you provide multiple threads for executing deferred and periodic tasks.
- Timer behaves poorly if a TimerTask throws an unchecked exception - ScheduledThreadPoolExecutor deals properly with ill-behaved tasks.
- Callable is a better abstraction than Runnable; it expects that the main entry point, call, will return a value and anticipates that it might throw an exception.
- The lifecycle of a task executed by an Executor has four phases: created, submitted, started, and completed.
- In the Executor framework, tasks that have been submitted but not yet started can always be cancelled.
- In the Executor framework, tasks that have started can sometimes be cancelled if they are responsive to interruption.
- Future.get returns immediately or throws an Exception if the task has already completed, but if not it blocks until the task completes.
- If you divide tasks A and B between two workers but A takes ten times as long as B, you've only speeded up the total process by about 9 percent (eleven time units of sequential work become ten when B runs in parallel with A).
- The real performance payoff of dividing a program's workload into tasks comes when there are a large number of independent, homogeneous tasks that can be processed concurrently.
- You can submit Callable tasks to a CompletionService for execution and use the queue-like methods take and poll to retrieve completed results, packaged as Futures, as they become available (see the CompletionService sketch after these tips).
- Multiple ExecutorCompletionServices can share a single Executor - a CompletionService acts as a handle for a batch of computations in much the same way that a Future acts as a handle for a single computation.
- The timed version of Future.get returns as soon as the result is ready, but throws TimeoutException if the result is not ready within the timeout period.
- All the timed methods in java.util.concurrent treat negative timeouts as zero, so no extra code is needed to deal with this case.
- The timed version of ExecutorService.invokeAll will return when all the tasks have completed, the calling thread is interrupted, or the timeout expires. Any tasks that are not complete when the timeout expires are cancelled. On return from invokeAll, each task will have either completed normally or been cancelled.
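The Executor sketch referenced above: a minimal example of replacing thread-per-task code with a fixed-size pool and shutting it down gracefully, along the lines the tips describe. The RequestServer class name and the pool size are illustrative assumptions.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class RequestServer {
        private static final int NTHREADS = 100;              // illustrative pool size, needs tuning
        private final ExecutorService exec = Executors.newFixedThreadPool(NTHREADS);

        // Instead of new Thread(handler).start(), hand the task to the pool.
        public void submitRequest(Runnable handler) {
            exec.execute(handler);
        }

        public void stop() throws InterruptedException {
            exec.shutdown();                                   // graceful: no new tasks, queued tasks still run
            if (!exec.awaitTermination(30, TimeUnit.SECONDS)) {
                exec.shutdownNow();                            // abrupt: attempt to cancel outstanding tasks
            }
        }
    }

Note that Executors.newFixedThreadPool uses an unbounded work queue; to bound the number of queued tasks as the tips suggest, construct a ThreadPoolExecutor directly with a bounded BlockingQueue such as an ArrayBlockingQueue.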
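The CompletionService sketch referenced above: a minimal, self-contained example that submits Callable tasks to an ExecutorCompletionService and retrieves the results as they complete. The BatchRunner class, pool size and placeholder computation are assumptions for illustration only.

    import java.util.concurrent.Callable;
    import java.util.concurrent.CompletionService;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorCompletionService;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class BatchRunner {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            ExecutorService exec = Executors.newFixedThreadPool(4);
            CompletionService<Integer> completion = new ExecutorCompletionService<Integer>(exec);

            final int tasks = 10;
            for (int i = 0; i < tasks; i++) {
                final int n = i;
                completion.submit(new Callable<Integer>() {    // Callable can return a value and throw
                    public Integer call() {
                        return n * n;                          // placeholder computation
                    }
                });
            }

            for (int i = 0; i < tasks; i++) {
                Future<Integer> done = completion.take();      // blocks until some task has completed
                System.out.println("result: " + done.get());   // results arrive in completion order
            }
            exec.shutdown();
        }
    }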
http://cretesoft.com/archive/newsletter.do?issue=132
A Thread Dump Bean JSP in Java 5 (Page last updated September 2006, Added 2006-09-27, Author Heinz Kabutz, Publisher Java Specialists' Newsletter). Tips:
- [Article implements a thread dump bean for JSPs in Java 5 - just what it says on the tin!]
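As a rough illustration only (not the article's actual bean), the Java 5 facility such a bean can build on is Thread.getAllStackTraces(), which returns the stack traces of all live threads:

    import java.util.Map;

    public class ThreadDumper {
        // Plain-text dump of all live threads; a JSP bean could expose this as a String property.
        public String dump() {
            StringBuilder sb = new StringBuilder();
            Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
            for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
                Thread t = entry.getKey();
                sb.append('"').append(t.getName()).append("\" state=").append(t.getState()).append('\n');
                for (StackTraceElement frame : entry.getValue()) {
                    sb.append("    at ").append(frame).append('\n');
                }
                sb.append('\n');
            }
            return sb.toString();
        }
    }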
http://www.cs.berkeley.edu/~bodikp/publications/hotac06.pdf
Managing Amazon.com's Systems (Page last updated September 2006, Added 2006-09-27, Author Peter Bodik, Armando Fox, Michael Jordan, David Patterson, Ajit Banerjee, Ramesh Jagannathan, Tina Su, Shivaraj Tenginakai, Ben Turner, Jon Ingalls, Publisher UC Berkeley). Tips:
- Complex dependencies among system components can cause failures to propagate to other components, triggering multiple alarms and complicating root-cause determination.
- In a large complex system no individual understands all the dependencies among different parts of the system, but the more dependencies that are understood, the easier it is to identify and resolve problems.
- Amazon's problems can ultimately be identified in terms of the behavior of a dozen or so metrics, compared to the million or so metrics that are collected.
- Amazon.com's approach distinguishes two major failure types: Severity 1 (sev1) and Severity 2 (sev2). Sev1 problems affect customers directly and need to be resolved immediately, and rarely recur. Sev2 problems do not immediately affect the behavior of the website, but could turn into sev1 problems if not resolved quickly. Sev2s typically affect only a single system component and tend to recur, so the operators learn to recognize and fix them. However, they are about 100 times more frequent than sev1 problems.
- Highest severity problems usually manifest themselves as anomalous traffic levels to parts of a system whose traffic patterns are well known.
- A failure of one component often affects other components, triggering a cascade of alarms and anomalies. Causality is often difficult to identify in these cases unless some mechanism is available to track how anomalies flow between components.
- The actual procedure followed to troubleshoot a problem is useful knowledge which helps to troubleshoot repeats of the problem and similar problems.
http://blogs.sun.com/fatcatair/entry/calling_conventions_part_2
Why -Xbatch works differently in Java SE 6.0 (Page last updated August 2006, Added 2006-09-27, Author Steve Goldman, Publisher Sun). Tips:
- Pre 1.6, -Xbatch always compiled code before running it (waiting for the compile to complete before running the code); in 1.6 it runs the code in interpreted mode while compiling every method it hits in the background, and switches to the compiled version of each method once its compilation has finished.
Jack Shirazi