Java Performance Tuning
Tips March 2010
http://highscalability.com/blog/2010/3/10/how-farmville-scales-the-follow-up.html
How FarmVille Scales - The Follow-up (Page last updated March 2010, Added 2010-03-29, Author Todd Hoff, Publisher highscalability). Tips:
- To scale a high-write app, work primarily with cache components rather than going to the database for every operation.
- Throttle back non-essential calls depending on load, so that the essential processing is maintained (see the sketch after this list).
- Isolate troubled and high-latency services so they cannot cause latency and performance issues elsewhere, using error and timeout throttling and disabling functionality in the application as needed.
- Read-heavy apps can often get by with a caching layer in front of a single database. Write-heavy apps will need to partition so writes are spread out, and/or use an in-memory architecture.
- Verify each action performed in the client asynchronously, queueing the transactions on the client.
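As an illustration of the load-throttling tip, here is a minimal sketch; the class name and the simple in-flight counter used as the load measure are assumptions for illustration, not FarmVille's actual mechanism.

import java.util.concurrent.atomic.AtomicInteger;

/** Sketch: skip non-essential work when the server is under heavy load. */
public class LoadShedder {
    private final AtomicInteger inFlight = new AtomicInteger();
    private final int softLimit;   // above this, non-essential calls are dropped

    public LoadShedder(int softLimit) {
        this.softLimit = softLimit;
    }

    /** Essential work is always admitted. */
    public void runEssential(Runnable task) {
        inFlight.incrementAndGet();
        try {
            task.run();
        } finally {
            inFlight.decrementAndGet();
        }
    }

    /** Non-essential work is dropped once load passes the soft limit. */
    public boolean runNonEssentialIfIdle(Runnable task) {
        if (inFlight.get() >= softLimit) {
            return false;           // throttled: keep capacity for essential processing
        }
        runEssential(task);         // reuse the same accounting
        return true;
    }
}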
http://jeremymanson.blogspot.com/2010/02/garbage-collection-softreferences.html
Garbage Collection, [Soft]References, Finalizers and the Memory Model (Page last updated February 2010, Added 2010-03-29, Author Jeremy Manson, Publisher jeremymanson). Tips:
- The VM can decide that an object reachable only through a SoftReference is no longer used after the point where it last appears in the program, and may clear the SoftReference at any time after that - potentially immediately after it is constructed (see the sketch after this list).
- Finalizers can (and usually do) run in a separate thread, so your finalization code needs to be thread-safe.
- The VM can eliminate a write altogether if it can determine that the thread never uses the written value and that no synchronization would make the write visible to another thread.
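A minimal sketch of the defensive SoftReference use these tips imply (the cache field and loadData() method are illustrative assumptions): the referent must be copied into a strong reference and null-checked on every access, because the VM can clear it at any time.

import java.lang.ref.SoftReference;

/** Sketch: a softly-referenced cache entry must always be re-checked for null. */
public class SoftCacheExample {
    private SoftReference<byte[]> cached;   // the referent may be cleared by the GC at any time

    public byte[] getData() {
        SoftReference<byte[]> ref = cached;
        byte[] data = (ref == null) ? null : ref.get();   // take a strong reference first
        if (data == null) {
            data = loadData();                             // cleared (or never set): recreate it
            cached = new SoftReference<byte[]>(data);
        }
        return data;   // the caller's strong reference keeps the data alive while it is in use
    }

    private byte[] loadData() {
        return new byte[1024];   // stands in for an expensive load or computation
    }
}

(Single-threaded sketch; add synchronization if the cache is shared between threads.)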
http://blog.dynatrace.com/2010/01/18/week-2-the-many-faces-of-end-user-experience-monitoring/
The many faces of end-user experience monitoring (Page last updated January 2010, Added 2010-03-29, Author Alois Reitbauer, Publisher dynatrace). Tips:
- Before deciding what data to monitor, you need to decide which questions you want end-user monitoring to answer, e.g. how long did it take to load the page; were there any problems on the page; how long did certain actions take?
- In order to understand the cause of performance problems, you need to know the data flow and sequence of actions that occurred prior to the issue, so consider what needs to be logged to obtain this data.
- Network times should be split into wait time (the delay until a connection is available), DNS lookup time, transfer time and server processing time (service time) - see the timing sketch after this list.
- Synthetic transactions provide one mechanism for monitoring end-user performance; they cannot really identify client-side issues, but they are useful for showing performance degradation, detecting networking problems and providing notifications in case of errors.
- Network sniffing can verify that all traffic-oriented SLAs are being met (including response times excluding the client processing times).
- Instrumentation in the client is the most accurate way to monitor end-user perceived performance
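A rough client-side sketch of that timing split, assuming java.net.HttpURLConnection and a placeholder host; the wait for a pooled connection is not visible from here, so connection establishment is measured instead.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetAddress;
import java.net.URL;
import java.util.concurrent.TimeUnit;

/** Sketch: roughly split the time for one HTTP request into DNS, connect, server and transfer time. */
public class RequestTiming {
    public static void main(String[] args) throws Exception {
        String host = "www.example.com";                  // placeholder target host

        long t0 = System.nanoTime();
        InetAddress.getByName(host);                      // DNS lookup (result is cached by the JVM)
        long t1 = System.nanoTime();

        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://" + host + "/").openConnection();
        conn.connect();                                   // connection establishment
        long t2 = System.nanoTime();

        InputStream in = conn.getInputStream();
        in.read();                                        // time to first byte ~ server processing
        long t3 = System.nanoTime();

        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) { /* drain the response */ }
        long t4 = System.nanoTime();                      // remaining bytes ~ transfer time
        in.close();

        System.out.println("dns="       + TimeUnit.NANOSECONDS.toMillis(t1 - t0) + "ms"
                         + " connect="  + TimeUnit.NANOSECONDS.toMillis(t2 - t1) + "ms"
                         + " server="   + TimeUnit.NANOSECONDS.toMillis(t3 - t2) + "ms"
                         + " transfer=" + TimeUnit.NANOSECONDS.toMillis(t4 - t3) + "ms");
    }
}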
http://today.java.net/article/2010/03/03/rethinking-multi-threaded-design-priniciples
Rethinking Multi-Threaded Design Principles (Page last updated March 2010, Added 2010-03-29, Author Dibyendu Roy, Publisher java.net). Tips:
- Return a copy of a collection to avoid the collection itself being modified by another class - this keeps the collection thread-safe (as only the owning object can modify it).
- ConcurrentHashMap is a thread-safe alternative to HashMap.
- To ensure one thread at a time has access to something, you can use bounded BlockingQueues that act as a mediator between the accessor and the data source.
- Where two threads try to acquire two or more locks in different orders, a deadlock is a potential outcome.
- Make sure all threads performing an atomic operation acquire all the resource locks in the same order (see the sketch after this list).
- You can collapse multiple locks into one lock to eliminate the deadlock potential - but at the cost of reduced concurrency (only one thread at a time can proceed through all the sequences of operations involving that one lock).
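A minimal sketch of consistent lock ordering, assuming each shared object has a unique id that can be used to order lock acquisition; the Account class and transfer() method are illustrative, not from the article.

/** Sketch: transfer between two accounts, always acquiring the two locks in the same global order. */
public class Account {
    private final int id;          // assumed unique; used purely to order lock acquisition
    private long balance;

    public Account(int id, long openingBalance) {
        this.id = id;
        this.balance = openingBalance;
    }

    public static void transfer(Account from, Account to, long amount) {
        // Lock the account with the smaller id first, in every thread,
        // so two concurrent transfers in opposite directions cannot deadlock.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}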
http://today.java.net/article/2010/02/22/has-jdbc-kept-enterprise-requirements
Has JDBC Kept up with Enterprise Requirements? (Page last updated February 2010, Added 2010-03-29, Author Jesse Davis, Publisher java.net). Tips:
- In order to accurately control the amount of memory Oracle's Thin driver allocates for each parameter in a PreparedStatement, you must use the OraclePreparedStatement.defineColumnType() method (see the sketch after this list).
- [Article discusses what type 5 JDBC drivers should offer, which is primarily providing efficiency options orthogonal to the code.]
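A minimal sketch of a defineColumnType() call, assuming the Oracle Thin driver's oracle.jdbc.OraclePreparedStatement extension interface is available and that the statement is not wrapped by a connection pool; the SQL, column index and size are illustrative, and the call shown limits the buffer the driver allocates for a result column rather than a bind parameter.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

import oracle.jdbc.OraclePreparedStatement;   // Oracle Thin driver extension interface

public class DefineColumnTypeExample {
    public static ResultSet queryNames(Connection con, String region) throws SQLException {
        PreparedStatement ps =
                con.prepareStatement("SELECT name FROM customers WHERE region = ?");
        // Cast to the Oracle extension (assumes ps is the raw driver statement, not a pool wrapper)
        OraclePreparedStatement ops = (OraclePreparedStatement) ps;
        // Ask the driver to allocate at most 100 bytes for column 1 instead of the declared maximum
        ops.defineColumnType(1, Types.VARCHAR, 100);
        ps.setString(1, region);
        return ps.executeQuery();
    }
}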
http://www.infoq.com/articles/memory_barriers_jvm_concurrency
Memory Barriers and JVM Concurrency (Page last updated March 2010, Added 2010-03-29, Author Dennis Byrne, Publisher InfoQ). Tips:
- The reads and writes of a program are not necessarily performed in the order in which they are given to the processor, and thread-local memory operations can be much cheaper than main-memory ones. A thread can write values that become visible to another thread in ways that are inconsistent with the order in which they were written. Memory barriers (such as those implied by synchronized) prevent this problem by forcing the processor to serialize pending memory operations.
- volatile establishes a "happens before" relationship: writes made before a write to a volatile variable cannot be re-ordered past that volatile write, and become visible to any thread that subsequently reads the volatile. The compiler cannot re-order these write operations and must forbid the processor from doing so with a memory barrier.
- The '++' operation (and similar compound operations) is not atomic - it is a read-modify-write sequence of three operations. To guarantee thread-safety when such an operation is applied to a variable updated by multiple threads, a memory barrier is needed, e.g. by placing the operation in a synchronized block (see the sketch after this list).
- Using AtomicInteger can be more efficient than using a synchronized-wrapped int.
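A small sketch contrasting the three approaches; the class and method names are illustrative.

import java.util.concurrent.atomic.AtomicInteger;

/** Sketch: three ways to increment a shared counter. Only the last two are thread-safe. */
public class Counters {
    private int unsafeCount;                     // '++' on this is read-modify-write: not atomic
    private int lockedCount;
    private final AtomicInteger atomicCount = new AtomicInteger();

    public void unsafeIncrement() {
        unsafeCount++;                           // two threads can read the same value and lose an update
    }

    public synchronized void lockedIncrement() {
        lockedCount++;                           // the monitor provides the needed memory barrier
    }

    public void atomicIncrement() {
        atomicCount.incrementAndGet();           // lock-free compare-and-swap, often cheaper than a lock
    }
}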
Jack Shirazi