Web performance in seven steps; Step 6: Tune based on evidence

Last time I blogged about the relevance of monitoring and diagnostics in production for solving incidents quickly and preventing future problems. This time I'll talk about tuning based on evidence.

If an application turns out to be too slow, tuning can provide a solution. Tuning can take place at multiple levels. Adding hardware can be a cheap solution; however, adding hardware at a place where the bottleneck is not located has little effect.

Important steps of tuning are therefore identifying which pages or services do not meet the stated requirements, and isolating the problem: where is it located, in which layer, in which component? This can be made clear by testing and monitoring individual application parts. The next step is diagnosis. In essence, this comes down to forming a hypothesis as to why this component is so slow.
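The cycle of identifying, diagnosing, improving, and verifying can be sketched as a simple loop. This is an illustrative sketch only; the function parameters (`measure`, `form_hypothesis`, `apply_improvement`) are hypothetical names standing in for whatever tooling and process you actually use:

```python
# Hypothetical sketch of the performance tune cycle; the callables are
# placeholders, not a real API.
def tune(component, requirement_ms, measure, form_hypothesis, apply_improvement):
    """Repeat measure -> hypothesize -> improve -> verify until fast enough."""
    while True:
        before = measure(component)            # identify/isolate: measure the slow part
        if before <= requirement_ms:
            return component                   # requirement met: tuning is finished
        hypothesis = form_hypothesis(component, before)
        candidate = apply_improvement(component, hypothesis)
        after = measure(candidate)
        if after < before:                     # verify: keep only proven improvements
            component = candidate              # hypothesis confirmed
        # otherwise the hypothesis is rejected; try an alternative next iteration
```

Note that a candidate change is only kept after measurement confirms it; an unverified "improvement" is discarded.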

Figure 1. Performance tune cycle.

This can, for instance, be a missing or wrong index on a database table, or the invocation of too many small queries. Next, the component is improved based on this hypothesis. Finally, one needs to verify whether the improvement actually brings the expected speedup. If so, the hypothesis was correct and the speedup is the result. If not, something is wrong with the hypothesis and an alternative hypothesis is needed. As soon as the performance of the system meets its requirements, tuning is finished.
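The "too many small queries" hypothesis is a common one. A minimal sketch of the improvement using Python's built-in sqlite3 module (the schema, table, and column names are made up for illustration): instead of issuing one small query per customer, fetch everything in a single aggregated query.

```python
import sqlite3

# Illustrative schema and data; table and column names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 10) for i in range(100)])

customer_ids = range(10)

# Slow pattern: one small query per customer (N round trips to the database).
counts_slow = {}
for cid in customer_ids:
    row = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE customer_id = ?", (cid,)).fetchone()
    counts_slow[cid] = row[0]

# Improved: a single aggregated query (one round trip).
counts_fast = dict(conn.execute(
    "SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id"))

assert counts_slow == counts_fast  # same answer, far fewer queries
```

The same diagnostic style applies to the missing-index hypothesis: add the index, then measure whether the query actually gets faster before keeping the change.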

Figure 2. Finding evidence.

The right tools are indispensable: a performance test tool, an enterprise profiler, a heap monitor, etc. I have seen developers work on assumed performance improvements that turned out not to help at all, or that even slowed down the application while also deteriorating its maintainability and flexibility. This happens because developers are used to moulding functionality from source code, and therefore also work from the source code when improving performance. What is missing here is: measure, don't guess. This is something developers learn in my performance training. Experience has also taught me to judge every proposed improvement separately and to implement it only when we have proven that it really helps. Without this systematic approach, a handful of steps together may take you backwards at the end of the day instead of forwards.
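"Measure, don't guess" can be as simple as timing a candidate change before accepting it. A minimal sketch using only the standard library; the two implementations compared here are made-up examples, and the point is the procedure, not these particular functions:

```python
import timeit

def concat_with_plus(n):
    # Baseline implementation: repeated string concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_with_join(n):
    # Candidate improvement: build the string in one join.
    return "".join(str(i) for i in range(n))

# First verify the behaviour is unchanged, then compare measured timings.
assert concat_with_plus(1000) == concat_with_join(1000)

baseline = timeit.timeit(lambda: concat_with_plus(10_000), number=50)
candidate = timeit.timeit(lambda: concat_with_join(10_000), number=50)
print(f"baseline {baseline:.3f}s, candidate {candidate:.3f}s")
# Keep the change only if the measured numbers prove it helps.
```

Judging each improvement separately means running exactly this kind of comparison per change, rather than bundling several guesses and hoping the sum is positive.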

Next time I'll talk about Sharing responsibility for the whole chain.

Comments (1)

  1. Sjoerd Bakker

    November 12, 2009 at 12:18 pm

    Hi Jeroen,

    When talking about the right tooling, I would also like to point to the (free) AJAX performance measurement tool (http://ajax.dynatrace.com/) and of course the (commercial) dynaTrace tool itself (both tools work perfectly together, giving you a complete picture of the end-to-end transaction and its bottlenecks, even in heterogeneous systems).

    If you look at the dynaTrace site http://www.dynaTrace.com, you will find a 2-minute demo, which is very interesting.

    Although this sounds like a sales story, I have been using dynaTrace in practice for 2 years now, and the whole buzz on the website is really true: measurement of application performance in a depth that is useful for development, testing, and operations (great dashboarding), with overhead so low that you can even run it in (pre-)production systems.
