EJAPP Top 10 countdown: #3 - Incorrectly implemented concurrency

Vincent Partington

We've reached the top 3 of the EJAPP Top 10 countdown now, so let's get going...

Incorrectly implemented concurrency can cripple the performance of your application in very unpredictable ways. Applications that perform pretty well under light load may crawl to a halt under heavier load.

A major cause is lock contention, which only becomes an issue when multiple threads are involved. For example, if a request takes 100ms and spends 25ms of that in a critical section, no more than four requests can be handled every 100ms, no matter how many threads serve them. And no matter how much you reduce the other 75ms, Amdahl's law tells us the maximum speedup from parallelization is 4x!
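The arithmetic can be checked with a quick sketch of Amdahl's law (class and method names are illustrative, not from the original post):

```java
// Amdahl's law: with a serial fraction s, the speedup on n threads
// is 1 / (s + (1 - s) / n), which approaches 1/s as n grows.
public class AmdahlDemo {

    static double speedup(double serialFraction, int threads) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / threads);
    }

    public static void main(String[] args) {
        double s = 0.25; // 25ms critical section out of a 100ms request
        System.out.println(speedup(s, 4));    // ~2.29x with 4 threads
        System.out.println(speedup(s, 1000)); // ~3.99x: approaching the 4x ceiling
    }
}
```

Even with 1000 threads the speedup never reaches 4x, because the 25ms critical section is executed by at most one thread at a time.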

Lock contention can be caused by a number of different things:

  • Synchronized methods and blocks that take a long time to complete. See EJAPP Top 10 #6 - Improper Caching for an example.
  • Long-running database transactions. These also hurt performance because they consume large amounts of resources such as transaction logs and memory.
  • Long-running database transactions with high isolation levels that cause lock escalation to occur. Instead of just one row, a whole table gets locked, increasing the chance of multiple threads competing for the same lock.

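The first cause, long-running synchronized methods, is usually fixed by moving the slow work out of the critical section. A minimal sketch (class and method names are hypothetical, not from the original post):

```java
import java.util.HashMap;
import java.util.Map;

public class ReportCache {
    private final Map<String, String> cache = new HashMap<String, String>();

    // Bad: the expensive computation runs while holding the lock,
    // so every other thread queues up behind it for the whole duration.
    public synchronized String renderSlow(String key) {
        String cached = cache.get(key);
        if (cached == null) {
            cached = expensiveRender(key); // slow work inside the lock
            cache.put(key, cached);
        }
        return cached;
    }

    // Better: compute outside the lock; only the brief map accesses are
    // synchronized. This accepts an occasional duplicate computation
    // in exchange for much less contention.
    public String renderFast(String key) {
        synchronized (this) {
            String cached = cache.get(key);
            if (cached != null) {
                return cached;
            }
        }
        String rendered = expensiveRender(key); // slow work, no lock held
        synchronized (this) {
            if (!cache.containsKey(key)) {
                cache.put(key, rendered);
            }
        }
        return rendered;
    }

    private String expensiveRender(String key) {
        return "report:" + key; // stand-in for real, slow work
    }
}
```

In the second version the serial fraction of each request shrinks to the two short map accesses, which is exactly what Amdahl's law rewards.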
While lock contention is only an issue when multiple threads are involved, the overhead associated with managing a lock is incurred even when only one thread is involved. This is why developers have preferred ArrayList over Vector and HashMap over Hashtable since JDK 1.2.

However, just as memory management has become cheaper and cheaper, lock performance has improved in recent JDK releases. For example, JDK 1.6 includes a number of synchronization performance improvements like lock elision, adaptive locking, and lock coarsening. These make StringBuilder obsolete after just one JDK release!
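The StringBuffer/StringBuilder pair illustrates the point: the two classes share the same API, differing only in that StringBuffer's methods are synchronized. With lock elision, the JVM can remove the uncontended lock in single-threaded code, closing most of the gap. A small illustration (the timing difference itself is JVM-dependent, so only the equivalent behavior is shown):

```java
public class AppendDemo {

    // StringBuffer: every append() is synchronized, but the lock never
    // escapes this method, so a JVM with lock elision can remove it.
    static String withBuffer() {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 5; i++) {
            sb.append(i);
        }
        return sb.toString();
    }

    // StringBuilder: same API, no synchronization at all.
    static String withBuilder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5; i++) {
            sb.append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(withBuffer());  // 01234
        System.out.println(withBuilder()); // 01234
    }
}
```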

Of course, simply forgetting about locking altogether may improve the performance of your application, but it could also seriously corrupt its state! So even though concurrency is a complicated subject, it is one that all Enterprise Java developers have to deal with.

A number of simple guidelines can be given though:

  • Minimize the amount of data that needs to be accessed and mutated by multiple threads, thereby limiting the number of locks needed. This includes static variables, singletons, database rows, etc.
  • When shared data needs to be modified, keep the critical section as short as possible.
  • In Java, avoid writing your own synchronization primitives; use java.util.concurrent (or backport-util-concurrent) instead.
  • In the database, consider using optimistic concurrency control (confusingly called optimistic locking by some, even though the point is that it does not use locking) or getting rid of transactions altogether.
  • Finally, check the synchronization performance tips at the Java Performance Tuning site.
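The third guideline can be sketched as follows: instead of hand-rolling synchronized blocks around a HashMap, let java.util.concurrent do the locking with finer granularity. The class and method names here are illustrative; only the Java 5 era API calls (ConcurrentMap.putIfAbsent, AtomicLong) are real:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

// A thread-safe per-page hit counter with no explicit synchronization.
public class HitCounter {
    private final ConcurrentMap<String, AtomicLong> hits =
            new ConcurrentHashMap<String, AtomicLong>();

    public long record(String page) {
        AtomicLong counter = hits.get(page);
        if (counter == null) {
            // putIfAbsent is atomic: if another thread won the race,
            // it returns that thread's counter and we use it instead.
            AtomicLong fresh = new AtomicLong();
            AtomicLong existing = hits.putIfAbsent(page, fresh);
            counter = (existing != null) ? existing : fresh;
        }
        return counter.incrementAndGet();
    }

    public long count(String page) {
        AtomicLong counter = hits.get(page);
        return counter == null ? 0 : counter.get();
    }
}
```

ConcurrentHashMap internally partitions its locking so that threads touching different keys rarely contend, and AtomicLong uses compare-and-swap instead of a lock, so no request ever blocks behind a long critical section.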

Thanks to Peter Veentjer for providing valuable input for this blog.

Comments (1)

  1. Xebia Blog

    December 21, 2007 at 2:35 pm

    [...] We tell about escape analysis not only in the course, but also in our Performance Top-10 blog and podcast, and in my J-Fall presentation. Brian Goetz writes in September 2005: “Escape analysis is an optimization that has been talked about for a long time, and it is finally here — the current builds of Mustang (Java SE 6) can do escape analysis …” Furthermore, Wikipedia states: “Escape analysis is implemented in Java Standard Edition 6.” And several escape analysis JVM switches, like -XX:-DoEscapeAnalysis are available. So, we can assume it works, right? [...]