Web performance in seven steps; step 3: test representatively

Last time I blogged about the importance of benchmarking the architecture and new technology in a Proof of Concept for Performance. This time I’ll deal with the importance of representative performance testing.

Slowness of applications in development environments is often dismissed with the rationale that the faster hardware of the production environment will solve the problem. However, whether this is really true can only be predicted by testing on a representative environment and in a representative way. And in such an environment, more than just the hardware needs to be representative.

I have experienced multiple times that a database query on a test database with 1,000 customers took less than 10 ms, while on the production database with 100,000 customers the same query turned out to take tens of seconds because of missing indexes. So, if the development team does not test with a full-size database, going to production may lead to some surprises.
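Why does the small test database hide this problem? A toy Java sketch (not a real database, just an illustration of the access patterns involved): without an index, every lookup scans the whole table, like `List.contains`; with an index, the database jumps straight to the row, like a `HashMap` lookup. On 1,000 rows both are fast; on 100,000 rows the scan falls behind.

```java
import java.util.*;

public class IndexDemo {

    // Without an index the database must scan every row;
    // List.contains has the same O(n) shape.
    static boolean scanLookup(List<String> table, String key) {
        return table.contains(key);
    }

    // An index lets the database jump straight to the row;
    // a HashMap lookup has the same O(1) shape.
    static boolean indexedLookup(Map<String, Integer> index, String key) {
        return index.containsKey(key);
    }

    public static void main(String[] args) {
        int n = 100_000; // production-size customer table
        List<String> table = new ArrayList<>();
        Map<String, Integer> index = new HashMap<>();
        for (int i = 0; i < n; i++) {
            table.add("customer-" + i);
            index.put("customer-" + i, i);
        }
        String key = "customer-" + (n - 1); // worst case for the scan

        long t0 = System.nanoTime();
        scanLookup(table, key);
        long scan = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        indexedLookup(index, key);
        long idx = System.nanoTime() - t1;

        System.out.println("full scan: " + scan + " ns, indexed: " + idx + " ns");
    }
}
```

The gap between the two numbers grows linearly with the table size, which is exactly why the problem only surfaced in production.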

It is also important that the number of concurrent users and their behavior are well simulated in the test. Furthermore, care should be taken to take caching effects into account: if the test continuously requests the same product for the same customer, that data will be in the database or query cache the second and following times. This speeds up the request considerably, making it much faster than with many different customers and products. Such a test is therefore not representative of the real situation.
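The cache effect is easy to demonstrate with a toy Java sketch (a `HashMap` standing in for the database query cache; the keys are made-up customer/product identifiers): hammering one key produces a single cache miss, while randomized keys, like real traffic, miss almost every time.

```java
import java.util.*;

public class CacheEffectDemo {

    static int misses = 0;
    // Toy stand-in for a database query cache: the first lookup of a key
    // is a miss (the expensive path), every repeat is served from memory.
    static Map<String, String> cache = new HashMap<>();

    static String query(String key) {
        return cache.computeIfAbsent(key, k -> {
            misses++; // only a miss pays the real query cost
            return "row-for-" + k;
        });
    }

    public static void main(String[] args) {
        // Unrepresentative test: the same customer and product every time.
        for (int i = 0; i < 1000; i++) {
            query("customer-42/product-7");
        }
        System.out.println("same-key misses: " + misses); // 1 miss, 999 cache hits

        // Representative test: randomized keys, like real traffic.
        misses = 0;
        cache.clear();
        Random rnd = new Random();
        for (int i = 0; i < 1000; i++) {
            query("customer-" + rnd.nextInt(100_000) + "/product-" + rnd.nextInt(1000));
        }
        System.out.println("random-key misses: " + misses); // nearly 1000 misses
    }
}
```

A representative test script therefore parameterizes its requests with varied customer and product data instead of replaying one recorded request over and over.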

A suitable performance test tool and performance expertise are necessary to create a valuable test. The most popular open source performance test tool is Apache JMeter, see the next figure.

Figure: Screenshot of a run of a performance test in Apache JMeter.

We deal with this tool, and with how to performance test properly, in our Speeding up Java Applications course. JMeter is a tool made by programmers, for programmers. Test scripts can be created from visual elements like an HTTP request, which can be recorded and configured. Many such elements are available, and if you need more, you can always fall back on a BeanShell element in which you can manipulate the request, the response and various JMeter variables. If even that does not meet your needs, you can extend the JMeter source code and develop your own elements. Because of its for-programmers nature, it is less suited for the average tester. Also, the reporting features and the maintainability of the scripts are not so great. Therefore, commercial tools like HP Mercury LoadRunner, Borland SilkPerformer or Neotys’ Neoload may be good alternatives for companies.
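As a sketch of what such a BeanShell element can look like, here is a pre-processor fragment that picks a random customer id for each request, which avoids the cache effects described above. The `vars` object is provided by JMeter at script runtime; the variable name `customerId` is a hypothetical example:

```java
// JMeter BeanShell PreProcessor (sketch): 'vars' is JMeter's variable
// store, injected at runtime; 'customerId' is a made-up variable name.
import java.util.Random;

Random rnd = new Random();
// Pick a different customer for every request, so repeated requests
// do not all hit the same cached row.
vars.put("customerId", String.valueOf(1 + rnd.nextInt(100000)));
```

The HTTP request element can then reference `${customerId}` in its path or parameters.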

Performance testing from the cloud

The emergence of cloud computing adds new possibilities for performance testing. An elastic compute cloud like Amazon EC2 provides the ability to quickly scale up the number of application deployments when load increases. For performance testing, the cloud can be used the other way around: to temporarily run many load-generating test clients that produce the expected and peak loads for your application. This saves you from having to buy many servers to run the load-generating clients, and if you run these performance tests for, say, only a couple of days in a release cycle, this can be an economically attractive solution. Quite some information is available on how to test from the cloud with various performance tools.

Next time I’ll blog about step 4: continuous performance testing.

Comments (1)

  1. phil

    January 8, 2010 at 7:22 pm

    Don't have a lot of extra server capacity available to generate load for performance testing? I recommend using one of the new cloud-based tools.
