Why even Spray-can is Way Too Slow (for my purposes)

Wilco Koorn

In a previous blog I discussed the speed of the Spray-can web server and mentioned some measurements I did. My co-worker Age Mooij, a committer on the Spray project, pointed me at weighttp (see weighttp on GitHub), a tool for benchmarking web servers. Cool! Of course I now had to do more experiments, and so I did. I found out Spray-can is way too slow for my purposes, and here's why.

Recall that I am after handling peak loads on the net. Part of the solution I have in mind (no details here, sorry, maybe later) is handing out an integer. So for now the Spray-can server just issues a number. Age suggested this weighttp command, which fires 100,000 requests over 100 concurrent connections using 4 client threads, with keep-alive enabled:

weighttp -n 100000 -c 100 -t 4 -k 'http://localhost:8080/dispatcher'

So I ran it and got:

finished in 2 sec, 302 millisec and 196 microsec, 43436 req/s, 5594 kbyte/s

That is no typo: a throughput of 43k requests per second. Now that is impressive. With ten times as many requests:

weighttp -n 1000000 -c 100 -t 4 -k 'http://localhost:8080/dispatcher'

I even get:

finished in 13 sec, 797 millisec and 56 microsec, 72479 req/s, 9420 kbyte/s

That's almost a throughput of 73k requests/sec!!! On a laptop! And I'm not even using the latest Spray-can version which is supposedly even faster.
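As a quick sanity check on those numbers: weighttp's req/s figure is simply the request count divided by the elapsed time from its "finished in" line, truncated to an integer:

```python
def reqs_per_sec(requests, sec, ms, us):
    """Reproduce weighttp's req/s figure from its 'finished in' line."""
    return int(requests / (sec + ms / 1e3 + us / 1e6))

print(reqs_per_sec(100_000, 2, 302, 196))    # -> 43436
print(reqs_per_sec(1_000_000, 13, 797, 56))  # -> 72479
```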


Note the use of the '-k' switch: it enables HTTP keep-alive, so each connection is reused for many requests. Here's what happens when you leave it out:

weighttp -n 100000 -c 100 -t 4 'http://localhost:8080/dispatcher'

I get:

finished in 217 sec, 972 millisec and 939 microsec, 458 req/s, 68 kbyte/s

Que? A throughput of only about 450 requests per second? What the F? So I dug into this "keep alive" business some more. My good friend Google came up with persistent HTTP connections, and I read some more here: KeepAlive Nonsense
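To see concretely what keep-alive changes, here is a small stand-in experiment with Python's stdlib (the counter server below is my own illustration, not the Spray-can code): with keep-alive, consecutive requests travel over the same client socket; without it, every request opens and tears down a fresh TCP connection.

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from itertools import count

counter = count(1)

class Dispatcher(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"           # HTTP/1.1 defaults to keep-alive
    def do_GET(self):
        body = str(next(counter)).encode()  # just issue the next number
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *_):              # silence per-request logging
        pass

server = ThreadingHTTPServer(("localhost", 0), Dispatcher)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def client_port(conn):
    return conn.sock.getsockname()[1]       # ephemeral port on the client side

# Keep-alive: one connection, two requests -> the same client socket is reused.
conn = HTTPConnection("localhost", port)
conn.request("GET", "/dispatcher")
conn.getresponse().read()
first = client_port(conn)
conn.request("GET", "/dispatcher")
conn.getresponse().read()
print(client_port(conn) == first)           # True: socket reused
conn.close()

# No keep-alive: a new connection per request -> a new socket every time.
seen = set()
for _ in range(2):
    c = HTTPConnection("localhost", port)
    c.request("GET", "/dispatcher", headers={"Connection": "close"})
    seen.add(client_port(c))                # grab the port before the socket closes
    c.getresponse().read()
    c.close()
print(len(seen))                            # 2: two distinct sockets
server.shutdown()
```

Each of those throwaway sockets also lingers in TIME_WAIT after it is closed, which is exactly the per-connection cost the '-k' switch avoids.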

Where does this leave me? Well, I learned that Spray-can has beyond-excellent performance when the connection is kept open. For my purpose, peak-load handling, this is irrelevant: I want to handle about 200k requests within 10 seconds, but those requests will come in from different clients, each on its own fresh connection. Therefore the experiment without the '-k' switch is closer to reality. Ergo, even Spray-can is way too slow for my purposes. Back to the drawing board. I learned a lot today.
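For the record, the gap works out like this (using the no-keep-alive figure from above):

```python
target = 200_000 / 10            # the stated goal: 200k requests in 10 seconds
measured = 458                   # weighttp result without '-k'
print(target)                    # -> 20000.0 req/s needed
print(round(target / measured))  # -> 44, i.e. roughly 44x too slow
```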

Comments (6)

  1. Age Mooij

    August 21, 2013 at 12:51 pm

    Did you try the latest version of Spray? M8 and later are way faster and might have solved this problem already.

    Did you do any kind of custom configuration? There are a lot of knobs to turn. Have a look at the well-documented reference configuration: https://github.com/spray/spray/blob/master/spray-can/src/main/resources/reference.conf and then have a look at the configuration used for the Spray server-benchmark project: https://github.com/spray/spray/blob/master/examples/spray-can/server-benchmark/src/main/resources/application.conf

    I would strongly urge you to discuss your requirements and your results on the Spray mailing list (https://groups.google.com/forum/#!forum/spray-user) or to create an issue if you believe something is not correct (https://github.com/spray/spray/issues).

  2. Armin Coralic

    August 21, 2013 at 10:00 pm

    I did a test with Tomcat 7 and Caucho Resin. I have a servlet (GET) with the following code [number++; response.getWriter().print(number);]. Here are the results:

    weighttp -n 100000 -c 100 -t 4 'http://localhost:8080/speed/test'

    finished in 4 sec, 748 millisec and 351 microsec, 21059 req/s, 2589 kbyte/s

    finished in 4 sec, 226 millisec and 233 microsec, 23661 req/s, 2793 kbyte/s

    weighttp -n 100000 -c 100 -t 4 -k 'http://localhost:8080/speed/test'

    finished in 2 sec, 600 millisec and 849 microsec, 38448 req/s, 4061 kbyte/s

    finished in 2 sec, 365 millisec and 949 microsec, 42266 req/s, 4205 kbyte/s

    • Johannes

      August 23, 2013 at 3:02 pm

      @Armin: was that on the same machine as the original test? When it comes to connection establishment, so much is already done at the kernel level that the performance you get out of tests like this depends very much on the exact system configuration. Also, testing on localhost may not be comparable to what you get over a real network.

  3. Wilco Koorn

    August 23, 2013 at 12:53 pm

    @Age: I put my findings on the Spray mailing list and added the interesting results of Armin's experiments.

    @Armin: Interesting! And many thanks! The results show a drop in throughput of about 50%, which is way less than the drop I saw with Spray-can.

    • Nthalj

      November 1, 2013 at 4:59 am

      At 100k requests, your OS is waiting on socket cleanup. The limiting factor here is most likely stale sockets that are waiting to be reaped.

      Try doing 10k after waiting 5 minutes.
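One way to check this claim on Linux is to count sockets sitting in TIME_WAIT; a minimal sketch reading the kernel's socket tables (the paths and the "06" state code for TIME_WAIT are Linux-specific):

```python
from pathlib import Path

def time_wait_count():
    """Count TCP sockets in TIME_WAIT by scanning /proc/net/tcp{,6}."""
    total = 0
    for table in ("/proc/net/tcp", "/proc/net/tcp6"):
        path = Path(table)
        if not path.exists():                           # not on Linux
            continue
        for line in path.read_text().splitlines()[1:]:  # skip the header row
            fields = line.split()
            if len(fields) > 3 and fields[3] == "06":   # 'st' column, 06 = TIME_WAIT
                total += 1
    return total

print(time_wait_count())
```

Run this right after a no-keep-alive benchmark and again a few minutes later to watch the stale sockets drain away.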

  4. sheatrevor

    August 12, 2015 at 2:26 am

    Your local client is running out of sockets. If you could execute a test using distributed clients on separate machines you would see much different results.
