Java Performance Tuning
http://weblogs.java.net/blog/editor/archive/2010/04/14/thinking-about-how-much-fun-it-develop-multi-threaded-apps
Thinking about How Much Fun It Is to Develop Multi-Threaded Apps (Page last updated April 2010, Added 2010-04-28, Author Kevin Farnham, Publisher java.net). Tips:
- The parallelisable part of a program can be sped up with more cores, but the sequential part cannot - which means that the more parallel you make your code, the more the remaining sequential code dominates execution time (Amdahl's law).
- To improve execution speed of an application, you need to figure out ways or patterns to make more code execute in parallel.
- A multithreaded application can actually run slower than its purely sequential, single-threaded starting point, because you add the overhead of managing the threads, potentially dividing the application's data into chunks that have to be re-assembled after the parallel code has executed, and so on.
- Locks require a lot of care to implement and use properly, but the effort is worth it: when a thread tries to acquire a lock that is already held, it is suspended and only awakened when the lock becomes available, spending that time doing nothing.
- Compare-And-Swap is helpful in cases where there is something else a thread can work on if it finds that a contended resource is presently unavailable (see the sketch after this list).
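A minimal sketch of the compare-and-swap idea using AtomicInteger from java.util.concurrent.atomic; the counter class and the notion of doing other work on a failed CAS are illustrative, not taken from the article:

import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch: a lock-free counter increment using compare-and-swap.
// A thread that loses the CAS race is not suspended (as it would be when
// blocking on a held lock) - it can retry, or do other useful work first.
public class CasCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public int increment() {
        for (;;) {
            int current = count.get();
            int next = current + 1;
            // compareAndSet succeeds only if no other thread changed the
            // value since we read it; otherwise we simply retry.
            if (count.compareAndSet(current, next)) {
                return next;
            }
            // Another thread won the race; we could do other useful work
            // here before retrying.
        }
    }

    public static void main(String[] args) {
        CasCounter c = new CasCounter();
        System.out.println(c.increment()); // prints 1
    }
}
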
http://www.infoq.com/interviews/billy-newport-parallel
Billy Newport Discusses Parallel Programming in Java (Page last updated April 2010, Added 2010-04-28, Author Ryan Slobojan, Billy Newport, Publisher InfoQ). Tips:
- Non-blocking TCP is likely to be more efficient than a reliable transport built on top of UDP that offers the same capabilities.
- The cost of distributing work to another node can be orders of magnitude slower than just doing work on a local machine because of IO overheads. So you need to analyse the cost versus benefits to see if it is worth it.
- One of the reasons for using a data grid instead of something like Hadoop is to keep the data in memory so that it's fast. If you are going to keep all the data on disk you may as well use Hadoop.
- Hadoop is optimized for streaming data: it reads through a very large file without loading the whole file at once - it reads a block of records, processes them, writes the output to disk, then reads the next block. You are not paging, yet you are still working with a much bigger data set than would fit in memory. A naïve programmer might read the whole file into RAM, where it could be paged by the OS, and the difference in performance would be massive (see the sketch after this list).
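A minimal sketch of the block-at-a-time reading pattern described above, in plain Java; the file name, batch size and process() placeholder are illustrative assumptions, not from the interview:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of block-at-a-time processing: read a fixed-size batch of records,
// process it, write the result, then read the next batch. The working set
// stays small instead of the whole file being pulled into RAM, where the OS
// might page it.
public class BlockProcessor {
    private static final int BATCH_SIZE = 10_000;

    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("records.txt"))) {
            List<String> batch = new ArrayList<>(BATCH_SIZE);
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line);
                if (batch.size() == BATCH_SIZE) {
                    process(batch);   // output of one batch would be written out here
                    batch.clear();    // keep memory bounded before the next block
                }
            }
            if (!batch.isEmpty()) {
                process(batch);       // handle the final partial batch
            }
        }
    }

    private static void process(List<String> records) {
        // placeholder: real code would transform the records and write results to disk
    }
}
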
http://www.ibm.com/developerworks/websphere/techjournal/0701_botzum/0701_botzum.html
The top Java EE best practices (Page last updated January 2007, Added 2010-04-28, Author Keys Botzum, Kyle Brown, Ruth Willenborg, Albert Wong, Publisher IBM). Tips:
- Reduce distributed communication in your application by using very large-grained "facade" objects that wrap logical subsystems and can accomplish useful business functions in a single method call. This reduces network overhead, and also reduces the number of database calls by creating a single transaction context for the entire business function. The session facade should be a stateless session bean with remote interfaces.
- EJB local interfaces provide a performance optimization for co-located EJBs (local interfaces must be explicitly called by your application, requiring code changes and preventing you from later distributing the EJB without application changes). If you are certain the EJB call will always be local, take advantage of the local-interface optimization.
- For performance optimization, local interfaces can be added to the session facade.
- Use stateless session beans instead of stateful session beans (stateful solutions are not as scalable as stateless ones).
- Java EE application servers cannot load-balance requests to stateful beans but do load-balance stateless beans.
- Avoid stateful beans - to get stateless session beans user-specific state can be passed in as an argument or be retrieved as part of the EJB transaction from a persistent back-end store.
- Rely on two-phase commit transactions rather than developing your own transaction management - the container will almost always be better at transaction optimization - and can optimize for different deployments with no code changes.
- Store as little state as possible in HttpSessions - what you need for the current business transaction and no more. A good rule of thumb is under 4KB per session.
- A common problem is using HttpSessions to cache information that can easily be recreated - a very expensive decision that forces unnecessary serialization and writing of the data. Instead, cache the data in an in-memory hash table and just keep a key to the data in the session.
- Enable session persistence - the fault tolerance obtained by automatic failover of sessions by the application server is valuable for providing uninterrupted user servicing.
- Log all transition point activity (entering and exiting significant boundaries).
- One of the most common errors is memory leaks, nine times out of ten caused by forgetting to close a connection (JDBC most of the time) or to return an object to its pool (see the sketch after this list).
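A minimal sketch of leak-free JDBC resource handling using try-with-resources (Java 7+); the DataSource, table and query are illustrative placeholders, not from the article:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch: the connection, statement and result set are closed automatically
// even when an exception is thrown, so the connection is always returned to
// the pool and cannot leak.
public class OrderDao {
    private final DataSource dataSource;

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countOrders(long customerId) throws SQLException {
        String sql = "SELECT COUNT(*) FROM orders WHERE customer_id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        } // all three resources are closed here, even on error
    }
}
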
http://blog.dynatrace.com/2010/04/21/how-better-caching-helps-frankfurts-airport-website-to-handle-additional-load-caused-by-the-volcano/
How better Caching helps Frankfurt's Airport Website to handle additional load caused by the Volcano (Page last updated April 2010, Added 2010-04-28, Author Andreas Grabner, Publisher DynaTrace). Tips:
- Too many resources for each page makes for slow page download.
- Browsers limit the number of concurrent connections to a domain - if you have many resources to deliver, deliver them from across several domains to "fool" the browser into using more connections (this technique is called domain sharding). Merge files where possible (e.g. combine CSS files into one, and combine JavaScript files into one) to reduce the number of separate downloads needed.
- Tell the browser to cache static or infrequently changing content.
- Ensure that the Expires header is set correctly (far in the future); otherwise, although the cached content is retrieved from the browser cache, the server is still queried on each retrieval, which is almost as big a delay as not having the content in the cache at all (see the sketch after this list).
- Use HTTP 1.1 and Connection: Keep-Alive.
- Gzip content for delivery.
- Minimize or preferably eliminate any redirects.
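A minimal sketch of setting far-future caching headers from a servlet filter; the one-year lifetime and the idea of mapping the filter to static resources (e.g. *.css, *.js, images) in web.xml are illustrative choices, not from the article:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Sketch of a filter that adds far-future caching headers to static
// resources, so the browser serves them from its cache without re-querying
// the server on every page view.
public class StaticCacheFilter implements Filter {
    private static final long ONE_YEAR_SECONDS = 365L * 24 * 60 * 60;

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse resp = (HttpServletResponse) response;
        resp.setHeader("Cache-Control", "public, max-age=" + ONE_YEAR_SECONDS);
        resp.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_SECONDS * 1000);
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
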
http://highscalability.com/blog/2010/3/4/how-myspace-tested-their-live-site-with-1-million-concurrent.html
How MySpace Tested Their Live Site with 1 Million Concurrent Users (Page last updated March 2010, Added 2010-04-28, Author Dan Bartow, Publisher highscalability). Tips:
- Understand your application breaking points, define your capacity thresholds, and have a plan for when those thresholds are exceeded.
- Testing production infrastructure with actual anticipated load levels is the only way to understand how things will behave when peak traffic arrives.
- [Article describes the Amazon cloud resources acquired for running 1 million concurrent user requests.]
- The more you scale, the more you have to limit the statistics you collect to just the most relevant.
- For the highest load generation, you probably have to stagger virtual user requests so that the load generator can spread its resources across virtual users (see the sketch after this list).
- For load testing across data centres, you need to generate requests from multiple geographically distinct sites so that point-of-presence servers service requests appropriately.
- For high traffic websites, testing in production is the only way to get an accurate picture of capacity and performance.
- Elastic scalability (dynamically moving resources to handle load changes) is becoming an increasingly important part of application architectures.
- Applications should be built so that critical business processes can be independently monitored and scaled.
- Keeping things loosely coupled has many benefits, and capacity and performance are quickly moving to the front of that list.
- Real-time monitoring is critical. In order to react to capacity or performance problems, you need real-time monitoring in place. This monitoring should tie in to your key business processes and functional areas, and needs to be as real time as possible.
- Performance testing online applications is about more than saturation. Threads and sockets that remain open while content is downloading or streaming are what eat up your capacity on a server-by-server basis. Downloading content takes time, and while content is downloading or streaming you have lost capacity to generate load.
- Performance testing isn't just firing off requests to generate load and letting them go; you are recording massive amounts of performance data about every single user - how much time every hit took, bandwidth transferred, errors, and things of that nature.
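A minimal sketch of staggering virtual user start times with a ScheduledExecutorService; the user count, ramp-up period and runVirtualUser() placeholder are illustrative assumptions, not from the article:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of staggered load generation: instead of starting every simulated
// user at once, each one is scheduled with a small offset so the load
// generator spreads its own resources (threads, sockets) across the ramp-up.
public class StaggeredLoad {
    public static void main(String[] args) {
        int virtualUsers = 1_000;
        long rampUpMillis = 60_000;              // spread starts over one minute
        long gap = rampUpMillis / virtualUsers;  // delay between user starts

        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(50);
        for (int i = 0; i < virtualUsers; i++) {
            final int userId = i;
            scheduler.schedule(() -> runVirtualUser(userId), i * gap, TimeUnit.MILLISECONDS);
        }
        scheduler.shutdown(); // no new tasks accepted; already-scheduled ones still run
    }

    private static void runVirtualUser(int userId) {
        // placeholder: real code would open connections, send requests and
        // record per-hit timings, bandwidth and errors for this user
    }
}
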
http://arstechnica.com/business/data-centers/2010/02/what-second-life-can-teach-all-companies-about-scaling-web-apps.ars
What Second Life can teach your datacenter about scaling Web apps (Page last updated February 2010, Added 2010-04-28, Author Ian Wilkes, Publisher ars technica). Tips:
- Make sure all your code assumes that any component can be in any failure state at any time.
- Version all interfaces so that they can safely communicate with both newer and older modules.
- Practice a high degree of automated fault recovery, and auto-provision all resources.
- Implement a working version rather than one that scales hugely, and change the implementation as it scales. A single up-front effort to achieve "right first time" is doomed to fail - and be very expensive too.
- Use the basic constraints to identify expected load (how many users, how many concurrent, how much work per concurrent user).
- Can the system be shut down at regular intervals?
- Developers can misunderstand how their code will affect the rest of the system, especially when centralized resources (e.g. databases) are abstracted (e.g. by ORM). Ask developers which resources their new feature consumes, and how much of them.
- Either load-test the system automatically or add load-testing and/or profiling hooks to internal interface layers.
- Beyond a certain size table, schema changes to MySQL may become impossible in production due to the time it takes. One solution is to create a new table each time the schema changes, and slowly migrate data to it while the system is running.
- When the database does become the bottleneck, one improvement is to partition the database into horizontal slices of the data set (e.g. by user); another is to reduce the data going to and from the database (see the sketch after this list).
- When a system is changing, the more interchangeable the parts are, the more quickly the team can respond to failures or new demands. Standardise as much as possible and eliminate system-specific dependencies.
- Applications can often have silent failures - only an error in the logs shows that something went wrong. Reporting statistics on error rates across the entire system lets you identify where to target developer time; letting error rates climb too high means problems accumulate until at some point the system becomes constantly unusable.
- Keep a close eye on batch jobs - they can easily spiral out of control in terms of their resource requirements, but as they run when staff are at a minimum, the problems may not be discovered until they cause serious outages.
- The frequency of alerts needing human intervention must be low and manageable, or the system becomes unmaintainable (killed by its own success).
- Try to automatically handle failures rather than require human intervention.
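A minimal sketch of horizontal partitioning by user, routing each request to one of N databases by hashing the user id; the fixed shard count and DataSource array are illustrative assumptions, not from the article:

import javax.sql.DataSource;

// Sketch of horizontal partitioning ("sharding") by user: each user's rows
// live in exactly one of N databases, chosen from the user id. Real systems
// also need a strategy for re-sharding when the shard count changes.
public class UserShardRouter {
    private final DataSource[] shards;

    public UserShardRouter(DataSource[] shards) {
        this.shards = shards;
    }

    public DataSource shardFor(long userId) {
        // floorMod keeps the index non-negative for any user id
        int index = (int) Math.floorMod(userId, (long) shards.length);
        return shards[index];
    }
    // usage: router.shardFor(12345L).getConnection() would give a connection
    // to the database holding user 12345's data
}
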
Jack Shirazi