Java Performance Tuning
Tips August 2010
Back to newsletter 117 contents
Node and Scaling in the Small vs Scaling in the Large (Page last updated July 2010, Added 2010-08-25, Author Alex Payne, Publisher Al3x). Tips:
- In a system of no significant scale, basically anything works.
- The power of today's hardware is such that you can build a web application that supports thousands of users using one of the slowest available programming languages, brutally inefficient datastore access and storage patterns, zero caching, no sensible distribution of work, no attention to locality, etc. You can apply every available anti-pattern and still come out the other end with a workable system, simply because the hardware can move faster than your bad decision-making.
- In a system of significant scale, there is no magic bullet. Scaling is hard.
- Workloads change, and the data you're moving around changes. What was once well handled by an asynchronous solution can suddenly be better served by a threaded solution, or vice versa.
- Threads can achieve all of the strengths of events, including support for high concurrency, low overhead, and a simple concurrency model.
- Events and pipelining perform equally well.
- Blocking sockets can actually increase performance; non-blocking I/O isn't necessarily a better solution than blocking threads.
- There isn?t a one-size-fits-all concurrency solution. A hybrid approach to concurrency seems to be the way forward - a combination of threads and events offers the best of both worlds.
An Interview with Josh Marinacci (Page last updated August 2010, Added 2010-08-25, Author Yolande Poirier, Publisher Oracle). Tips:
- Do initial interface design in pure black and white.
- Think about which features are most important to end users and how users are going to use the features.
- Focus on buttons and navigation workflow from one screen to the other before you work on the look and feel.
- For small screen devices, design principles include fewer controls, making the most important task the easiest to do, and removing unnecessary features. The design also needs to be intuitive and fun for a broad audience.
Java Threads Revisited: Classic vs. New (Page last updated August 2010, Added 2010-08-25, Author Avneet Mangat, Publisher javaboutique). Tips:
- The threading model is not likely to be the bottleneck - the bottleneck is usually the I/O (network or disk I/O), but a poor threading model can lead to deadlocks and race conditions.
- If the application freezes for a few seconds, it is likely garbage collection. To diagnose the problem, enable logging of garbage collection (e.g. with the -verbose:gc flag).
- If possible, use a higher-level API/framework so you don't have to deal with the underlying details of managing concurrency. (E.g. a multithreaded server could use Apache Mina for UDP messages, QuickServer for TCP messages, a webserver for HTTP messages)
- To run several threads in parallel, your object should implement the Runnable interface; its void run() method contains the business logic you want executed. Executor objects take care of the non-business execution mechanics: ThreadedExecutor -- same behaviour as classic threads; ScheduledExecutor -- business logic run by threads at a particular time; ThreadPoolExecutor -- business logic executed by a pool of threads.
- Use java.util.concurrent.atomic classes to synchronize access to data - these provide lock-free and wait-free updates to a resource.
- JDK 1.5+ provides two different kinds of lock: re-entrant locks (you can set a timeout period to wait when acquiring the lock, acquire the lock only if it is free, and specify a 'fair' lock so that the longest-waiting thread gets the lock first) and read-write locks (several threads can acquire read locks when there is no write lock, or a thread can ask for an exclusive write lock).
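The executor, atomic, and lock tips above can be sketched with plain java.util.concurrent code. Note that the standard JDK factory types are ExecutorService and ScheduledExecutorService rather than the article's ThreadedExecutor/ScheduledExecutor names; the task and class names below are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConcurrencySketch {
    public static void main(String[] args) throws Exception {
        // Business logic goes in a Runnable's run() method; executors
        // handle the non-business mechanics of running it.
        AtomicInteger counter = new AtomicInteger();   // lock-free, wait-free updates
        Runnable task = counter::incrementAndGet;

        // Business logic executed by a pool of threads (a ThreadPoolExecutor).
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 1_000; i++) pool.submit(task);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // Business logic run at a particular time (scheduled execution).
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.schedule(task, 50, TimeUnit.MILLISECONDS);
        scheduler.shutdown();
        scheduler.awaitTermination(10, TimeUnit.SECONDS);

        System.out.println(counter.get()); // 1001: every task ran exactly once

        // Fair re-entrant lock, acquired with a timeout instead of blocking forever.
        ReentrantLock lock = new ReentrantLock(true);
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try { /* critical section */ } finally { lock.unlock(); }
        }

        // Read-write lock: many concurrent readers, or one exclusive writer.
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.readLock().lock();
        try { /* shared read access */ } finally { rw.readLock().unlock(); }
    }
}
```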
Introducing NIO.2 (JSR 203) Part 5: Watch Service and Change Notification (Page last updated August 2010, Added 2010-08-25, Author Masoud Kalali, Publisher kalali). Tips:
- Java 7 comes with NIO.2 (JSR 203), which provides a native file system watch service, using the underlying file system (where available, e.g. Windows, MacOS, Linux) to watch for changes; where native support is unavailable, Java falls back to a simple polling mechanism, with degraded performance.
- The native file system watch service uses a Watchable object (e.g. a Path, FileSystems.getDefault().getPath("/here")), events of interest (e.g. creation, deletion), and a Watcher (e.g. FileSystems.getDefault().newWatchService()). You register the watcher with the path, specifying the events you are interested in (e.g. Path.register(WatchService, StandardWatchEventKind.ENTRY_MODIFY, StandardWatchEventKind.ENTRY_DELETE, ...)).
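A minimal sketch of the registration-and-poll cycle against the final Java 7 API (in which the article's pre-release StandardWatchEventKind class was renamed StandardWatchEventKinds); the directory and file names are illustrative:

```java
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.concurrent.TimeUnit;

public class WatchSketch {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("watched");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        // Register the path with the watcher, naming the events of interest.
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);

        Files.createFile(dir.resolve("example.txt"));

        // Wait (up to 10s) for the file system to report the change.
        WatchKey key = watcher.poll(10, TimeUnit.SECONDS);
        if (key != null) {
            for (WatchEvent<?> event : key.pollEvents()) {
                System.out.println(event.kind() + ": " + event.context());
            }
            key.reset();   // re-arm the key for further events
        }
        watcher.close();
    }
}
```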
First Look at Concurrency Support in Commons Lang 3.0 (Page last updated August 2010, Added 2010-08-25, Author Shekhar Gulati, Publisher JavaLobby). Tips:
- org.apache.commons.lang3.concurrent has classes which provide support for multi-threaded programming. The main concurrency feature in this release is thread-safe initialization of objects. Different concurrent initializer strategies, all implementing the ConcurrentInitializer interface, are provided for the creation of managed objects, e.g. lazy initialization or initialization on a background thread.
- LazyInitializer provides an implementation for objects which need to be initialized as late as possible, i.e. when it is first used rather than created, because object creation is expensive or the consumption of memory or system resources is significant.
- AtomicInitializer is a lazily initialized object which may get initialized multiple times if multiple threads access the initializer at the same time - but you are guaranteed that the returned object has been initialized.
- BackgroundInitializer and CallableBackgroundInitializer initialize an object in the background in a separate thread - calling get() will block until the object is initialized, so make sure all other initialization is finished before doing so.
- MultiBackgroundInitializer is a specialized BackgroundInitializer which can manage multiple background initialization tasks concurrently, for parallelizable initializations.
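LazyInitializer encapsulates the double-checked locking idiom so callers don't have to get it right themselves. A plain-Java sketch of the same idea, for readers without the Commons Lang dependency (ExpensiveObject is a made-up placeholder for whatever is costly to create):

```java
// Double-checked locking: the expensive object is built once, on first use.
public class LazySketch {
    private volatile ExpensiveObject instance;   // volatile is essential here

    public ExpensiveObject get() {
        ExpensiveObject result = instance;
        if (result == null) {                    // first check, without the lock
            synchronized (this) {
                result = instance;
                if (result == null) {            // second check, under the lock
                    result = instance = new ExpensiveObject();
                }
            }
        }
        return result;
    }

    static class ExpensiveObject {
        static int constructions = 0;            // counts how many times we built one
        ExpensiveObject() { constructions++; }
    }

    public static void main(String[] args) {
        LazySketch lazy = new LazySketch();
        System.out.println(lazy.get() == lazy.get()); // true: same instance both times
        System.out.println(ExpensiveObject.constructions); // 1
    }
}
```

AtomicInitializer trades this lock for a compare-and-set: it may construct the object more than once under contention, but callers always receive a fully initialized instance.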
Why Many Profilers have Serious Problems (More on Profiling with Signals) (Page last updated August 2010, Added 2010-08-25, Author Jeremy Manson, Publisher jeremymanson). Tips:
- The JVMTI AsyncGetCallTrace call lets you get a stack trace from a running thread, regardless of the state of that thread.
- Stack traces from AsyncGetCallTrace tell you what is actually running on your CPU, not just what might be scheduled.
- Profilers often report time spent in blocking mechanisms, simply because threads frequently become scheduled as they exit blocking functions. However, those threads are all "ready to be scheduled" rather than "actually running". As a result, CPU time is counted for those threads inaccurately.
- AsyncGetCallTrace reports time spent in GC and in other VM-specific activities (like JIT compiling).
- Sampling profilers that take thread dumps from individual threads often do so by injecting stack-trace-gathering code into your code. This introduces overhead, changes the size of the code (which affects the optimizing decisions made by the JIT), and changes the generated code layout. All of these can affect performance.
- The placement of JVM safe points affects the sampling quality of standard sampling profiling techniques - safe points are never profiled by calls to Thread.getAllStackTraces and similar sampling techniques (the JVM pauses everything, including the profiler). Only an asynchronous call to AsyncGetCallTrace or equivalent will see what happens during safe points.
- Different profilers will cause different effects on an application, so it is normal for different profilers to report different results.
- When profiling, you can either have exact information, or you can have your code behave as it is supposed to behave, but not both.
- AsyncGetCallTrace doesn't interfere with or depend on JIT decisions at all, it doesn't wait for a safe point, it doesn't change your code, the JIT doesn't care about it. It will introduce some overhead, of course.
Last Updated: 2018-10-29
Copyright © 2000-2018 Fasterj.com. All Rights Reserved.
All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss