Java Performance Tuning
Tips July 2005
Back to newsletter 056 contents
http://java.sun.com/developer/JDCTechTips/2005/tt0727.html
Swing "Urban Legends" (Page last updated July 2005, Added 2005-07-31, Author John Zukowski, Publisher Sun). Tips:
- If you spawn a thread from the Event Dispatch Thread [e.g. public void actionPerformed(ActionEvent e) {(new Thread() {...}).start();}] then the spawned thread has the priority of the event thread - which is normally a higher priority than normal threads. This could reduce the responsiveness of the GUI.
- When you spawn a thread from the Event Dispatch Thread, set its priority explicitly, e.g. using thread.setPriority(Thread.NORM_PRIORITY); (see the first sketch after this list).
- Thread pools are available from the java.util.concurrent package: Executors.newCachedThreadPool() for a thread pool of unbounded size; Executors.newFixedThreadPool(int size) for a thread pool of fixed size > 1; Executors.newSingleThreadExecutor() for a thread pool of fixed size = 1.
- Instead of synchronizing at the method level (using the synchronized modifier in the method declaration), you can create a lock variable that is shared by the methods that need to be synchronized, thus avoiding synchronizing all the methods on the same lock (see the second sketch after this list).
- It is possible that executed child processes (using Runtime.exec or a ProcessBuilder) will deadlock if the subprocess generates enough output to overflow the I/O buffers. A more robust solution drains the process stdout and stderr in separate threads (see the third sketch after this list).
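Here is a minimal sketch (not code from the article) of the priority tip: the listener runs on the Event Dispatch Thread, so the spawned worker resets its priority explicitly before starting.

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Minimal sketch: actionPerformed runs on the Event Dispatch Thread, so a
// thread spawned here would otherwise inherit the EDT's (higher) priority.
// The long-running work is a placeholder.
public class SpawnFromEDT implements ActionListener {
    public void actionPerformed(ActionEvent e) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                // ... long-running work that must not run on the GUI thread ...
            }
        });
        // Reset the priority explicitly so the background work does not
        // compete with event dispatching for the CPU.
        worker.setPriority(Thread.NORM_PRIORITY);
        worker.start();
    }
}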
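And a sketch of the lock-variable tip; the class and field names are purely illustrative:

// Illustrative only: instead of marking both methods synchronized (which
// would lock on 'this' for every call), each group of related methods
// shares its own lock object, so unrelated operations do not block each other.
public class Counters {
    private final Object hitLock = new Object();
    private final Object missLock = new Object();
    private long hits;
    private long misses;

    public void recordHit() {
        synchronized (hitLock) { hits++; }
    }

    public void recordMiss() {
        synchronized (missLock) { misses++; }
    }
}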
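Finally, a sketch of draining a child process's output in separate threads; the command name is a placeholder:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Sketch: drain the child process's stdout and stderr in separate threads
// so the subprocess can never block on a full output buffer.
public class ProcessDrainer {
    static void drain(final InputStream in) {
        new Thread(new Runnable() {
            public void run() {
                try {
                    BufferedReader r = new BufferedReader(new InputStreamReader(in));
                    while (r.readLine() != null) {
                        // consume (or log) the output so the buffer never fills
                    }
                } catch (IOException ignored) {
                }
            }
        }).start();
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("someCommand").start();   // placeholder command
        drain(p.getInputStream());   // stdout of the child
        drain(p.getErrorStream());   // stderr of the child
        System.out.println("exit code: " + p.waitFor());
    }
}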
http://www.ddj.com/documents/ddj0507f/
On-disk persistent storage, in-memory data storage, & cache management (Page last updated July 2005, Added 2005-07-31, Author Charles Lamb, Publisher Dr. Dobb's). Tips:
- [Article considers On-disk persistent storage, in-memory data storage, & cache management using the implementation of Sleepycat Software's Berkeley DB Java Edition, an open-source, pure-Java, object-based database engine (http://www.sleepycat.com/products/je.shtml)].
- A log-based filesystem can offer superior performance to a standard persistence implementation where each record is stored in its well-ordered location based upon its primary key.
- A log-based filesystem approach is primarily focused on increasing write performance: all write operations only append data to the currently active log file, greatly increasing disk write performance because disk seek latency is minimized and data throughput is maximized.
- In a log-based filesystem, new data for a record is written to the currently active log (similar to data insertion) and the "map" to the old data is changed to point to the new data. A delete operation only needs to update the map to note that there is no data for that key.
- In a log-based filesystem approach a background daemon periodically cleans files that predominantly contain stale data. This cleaner daemon copies all of the nonstale data from that mostly stale log into the currently active log, then deletes the now "empty" log file from the filesystem, similar to a disk defragmenter or a copying garbage collector.
- The downside of the log-based filesystem approach is that reads of the data are from relatively random locations on the disk because any given piece of data may be in any of the existing log files.
- Combine a log-based filesystem approach with in-memory data caching to avoid multiple raw disk reads, thus producing optimal performance from the log-based filesystem approach.
- B-tree variants are usually optimal choices for fast generic data storage, and can be implemented to support concurrent threads.
- B-tree variants need to remain balanced to ensure optimal performance - the main tree maintenance operations are concerned with preserving that balance.
- The key to dealing with memory limitations effectively is to recognize that in any given time frame, only some data will be used a lot while other data may be used little or not at all.
- You can approximate data usage information by keeping track of how recently each node was last used - this is the least-recently-used (LRU) algorithm and is effective for expiring cached data (see the LRU sketch after this list).
- Syncing the filesystem (via fsync()) at the end of each write or transaction guarantees the on-disk durability of the data, but at the cost of update completion time (see the durability sketch after this list).
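A minimal LRU cache sketch (standard library code, not Berkeley DB JE internals):

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap constructed in access order evicts the
// least-recently-used entry once the cache grows past a fixed number of entries.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);   // true = iterate in access order, i.e. LRU
        this.maxEntries = maxEntries;
    }

    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;   // evict when over capacity
    }
}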
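And a sketch of the durability trade-off, assuming an append-only log file (the file name "app.log" is illustrative):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: forcing the channel after each write is the Java equivalent of
// fsync() - the record is guaranteed to be on disk before append() returns,
// at the cost of waiting for the physical write.
public class DurableAppender {
    public static void append(byte[] record) throws IOException {
        FileOutputStream out = new FileOutputStream("app.log", true);   // append mode
        try {
            FileChannel channel = out.getChannel();
            channel.write(ByteBuffer.wrap(record));
            channel.force(true);   // flush data and metadata before returning
        } finally {
            out.close();
        }
    }
}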
http://www.theserverside.com/articles/article.tss?l=JavaOne_Day2
JavaOne Day 2 (Page last updated June 2005, Added 2005-07-31, Author Floyd Marinescu, Frank Cohen, Doug Bateman, Adib Saikali, Nitin Bharti and Joseph Ottinger, Publisher TheServerSide). Tips:
- On the system/OS layer you should track: free memory, pages read per second, CPU time, disk I/O, network bandwidth, network queue length, packets sent/received per second.
- On the JVM layer you should track: heap memory usage, object life cycles, garbage collections, synchronizations and locks, optimizations, and method inlining.
- On the application server layer you should track: queue depths, utilization of thread pools, utilization of database connection pools, number of active, passivated and expired session beans.
- On the JavaEE application layer you should track: workload-specific performance metrics like transactions per second, response time, order cycle times, throughput, and time spent waiting in a blocked state.
- Tools for monitoring the OS/hardware layer: vmstat, iostat, netstat, cpustat (or on Windows: perfmon).
- Tools for monitoring the JVM layer: command-line options like -verbose:gc and -Xprof, and the JSR 174 (java.lang.management) monitoring API (see the sketch after this list).
- Tools for monitoring the appserver layer: JMX interfaces, SNMP, JVMPI, JVMTI, bytecode instrumentation (BCI), CIM/WBEM, etc.
- Custom tools for monitoring the application layer: custom instrumentation (tracking method entry and exit), custom logging, the Log4j API, the ARM API, custom MBeans, JSR 77.
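A small sketch of pulling JVM-layer metrics through the JSR 174 (java.lang.management) MXBeans introduced in J2SE 5.0:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Sketch: poll heap usage and garbage collection counters through the
// standard platform MXBeans.
public class JvmMonitor {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap used: " + memory.getHeapMemoryUsage().getUsed()
                + " of " + memory.getHeapMemoryUsage().getMax() + " bytes");

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms total");
        }
    }
}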
http://www.theserverside.com/articles/article.tss?l=JavaOne_Day4
JavaOne Day 4 (Page last updated June 2005, Added 2005-07-31, Author Floyd Marinescu, Frank Cohen, Doug Bateman, Adib Saikali and Joseph Ottinger, Publisher TheServerSide). Tips:
- Things to think about up front: how to scale with big spikes; how to scale when adding more applications (news sites); how to achieve no single point of failure; how to handle system abnormalities; how application isolation in a clustered environment affects reliability and utilization.
- Throttle the number of connections as early as possible once you know that back-end resources are exhausted (see the throttling sketch after this list).
- Implement timeouts.
- Be careful not to let threads build up indefinitely waiting for back-end resources and bring the system to a standstill.
- Having three VMs on one machine is the best way to utilize CPU power, and if you have good tools to automatically provision the machines and an appserver with a low footprint, then you can do it.
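Not from the session, but a minimal sketch of throttling and timeouts together using java.util.concurrent; the limit of 50 concurrent calls and the 2-second timeout are illustrative:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative throttle: at most MAX_CONCURRENT callers may use the back end
// at once, and waiting callers give up after a timeout instead of piling up.
public class BackEndThrottle {
    private static final int MAX_CONCURRENT = 50;   // illustrative limit
    private final Semaphore permits = new Semaphore(MAX_CONCURRENT, true);

    public boolean callBackEnd(Runnable work) throws InterruptedException {
        // Reject early rather than letting waiting threads build up.
        if (!permits.tryAcquire(2, TimeUnit.SECONDS)) {
            return false;   // back end saturated: fail fast or degrade gracefully
        }
        try {
            work.run();
            return true;
        } finally {
            permits.release();
        }
    }
}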
http://www.onjava.com/pub/a/onjava/2005/07/20/transactions.html
Bean-Managed Transaction Suspension in J2EE (Page last updated July 2005, Added 2005-07-31, Author Dmitri Maximovich, Publisher OnJava). Tips:
- By accessing the TransactionManager, you can suspend and resume transactions from code (see the sketch after this list).
- When a transaction is suspended, the transactional timer continues, i.e. if the transaction timeout was set to 30 seconds and the transaction was suspended for 20 seconds and resumed, the transaction would have only 10 seconds left to run before timeout.
- Transaction suspension simply disassociates a transaction from the running thread, and the resume() call associates it back to that thread.
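A minimal sketch of suspend/resume with the JTA TransactionManager; how you obtain the TransactionManager is server-specific (typically a vendor-dependent JNDI lookup), so it is passed in here:

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

// Sketch: suspend() disassociates the current transaction from this thread,
// the work then runs non-transactionally, and resume() re-associates the
// original transaction (whose timeout has kept ticking all along).
public class TxSuspendExample {
    public static void doNonTransactionalWork(TransactionManager tm, Runnable work)
            throws Exception {
        Transaction suspended = tm.suspend();
        try {
            work.run();   // runs outside the suspended transaction
        } finally {
            if (suspended != null) {
                tm.resume(suspended);
            }
        }
    }
}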
http://www.devx.com/Java/Article/28685
System.currentTimeMillis is fast enough (Page last updated July 2005, Added 2005-07-31, Author Javid Jamae, Publisher DevX). Tips:
- The resolution or granularity of a clock is the smallest time difference that can be measured between two consecutive calls - this varies for different operating systems.
- System.currentTimeMillis is fast enough on any modern enterprise OS/JVM combination, so you can use it without concern.
- The method signature for System.currentTimeMillis is not defined as synchronized, but the underlying operating-system call that a native implementation makes is usually synchronized. Nevertheless the call is fast enough that you can call it from multiple threads without worry.
- System.nanoTime() appears to be faster than System.currentTimeMillis, with better resolution too (see the sketch after this list).
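A rough sketch for observing the resolution difference on your own OS/JVM combination; the numbers it prints will vary by platform:

// Rough illustration: currentTimeMillis can return the same value for many
// consecutive calls within one clock tick, while nanoTime usually changes
// between almost every pair of calls.
public class ClockResolution {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long now = start;
        int callsInOneTick = 0;
        while (now == start) {
            now = System.currentTimeMillis();
            callsInOneTick++;
        }
        System.out.println("calls before currentTimeMillis ticked: " + callsInOneTick);

        long n1 = System.nanoTime();
        long n2 = System.nanoTime();
        System.out.println("two consecutive nanoTime calls differ by " + (n2 - n1) + " ns");
    }
}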
Jack Shirazi
Back to newsletter 056 contents