
Some HTTP Server Benchmarks

(2005-12-05 10:08:32)
Category: linux OS

Benchmarks

The sites below compare several httpd servers in great detail. Among them are the official sites of various web servers, as well as Apache developers.

Never trust us when we say that lighttpd is fast. Do your own benchmarks and feel for yourself how everything gets faster. If you just want to see some numbers, take a look at our benchmarks:
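The tool behind the numbers below is ApacheBench (ab, see the monkey section). In the spirit of "do your own benchmarks", here is a minimal stand-alone sketch in Python; the server, file name, and request count are all made up for illustration. It serves a 4 KB file from a local stdlib HTTP server and measures requests/sec and KB/sec over a sequential loop — unlike ab, it drives no concurrent connections, so it only illustrates the measurement, not the load pattern.

```python
import http.server, os, tempfile, threading, time, urllib.request

# Create a 4 KB test file in a temporary directory.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "4k.html"), "wb") as f:
    f.write(b"x" * 4096)

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=tmpdir, **kwargs)
    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to port 0 so the OS picks a free port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/4k.html"

n = 200
start = time.time()
for _ in range(n):
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
elapsed = time.time() - start
print(f"{n / elapsed:.1f} req/sec, {n * len(body) / 1024 / elapsed:.1f} KB/sec")
server.shutdown()
```

ab additionally reports user/sys CPU time of the client and supports keep-alive (`-k`), which is what the "keepalive" rows in the tables below measure.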

Here are a few of their conclusions:

Conclusion

boa performs fine, but does a lot of work in user mode when handling 1000 parallel connections. Network saturation for small files could be improved in keep-alive mode.

thttpd can't saturate the 100Mbit network because it still lacks keep-alive support.

lighttpd performs fine in all sections. apache uses far too many system resources but is stable. monkey failed in all tests.

mathopd was tested the wrong way and was removed from the benchmark for now.

Results

lighttpd 1.0.2

          con   user         sys        [#/sec]     [Kbytes/sec]
4k file
           100  0m0.260s     0m0.880s   1488.54     6889.75
keepalive  100  0m0.260s     0m0.870s   1788.59     8290.74
          1000  0m0.350s     0m1.350s   1278.77     5983.60
keepalive 1000  0m0.410s     0m2.380s   1732.20     8333.10

100k files
           100  0m0.490s     0m5.730s     81.67     8207.19
keepalive  100  0m0.250s     0m5.620s     85.16     8573.19
          1000  0m0.920s     0m48.470s    77.04     8096.55
keepalive 1000  0m0.830s     0m41.480s    79.54     8552.96
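The [Kbytes/sec] column can be sanity-checked against [#/sec]: dividing throughput by request rate gives the average size of one response, which for the 4k rows comes out slightly above 4 KB (file body plus HTTP headers). A quick check of the two lighttpd 100-connection rows above:

```python
# Per-response size = throughput / request rate, from the lighttpd table.
rows = {
    "4k, 100 conns":            (1488.54, 6889.75),
    "4k, 100 conns, keepalive": (1788.59, 8290.74),
}
for name, (req_per_sec, kb_per_sec) in rows.items():
    per_response_kb = kb_per_sec / req_per_sec   # ~4.6 KB: 4 KB body + headers
    print(f"{name}: {per_response_kb:.2f} KB per response")
```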

thttpd 2.23b

          con   user         sys        [#/sec]     [Kbytes/sec]
4k file
           100  0m0.280s     0m0.960s   1481.26     6814.43
          1000  0m0.310s     0m1.270s   1300.39     6096.46
100k files
           100  0m0.770s     0m5.870s     80.85     8130.32
          1000  0m1.360s     0m51.270s    76.92     8085.59

Boa/0.94.14rc16

          con   user         sys        [#/sec]     [Kbytes/sec]
4k file
           100  0m0.420s     0m0.940s   1485.00     6778.21
keepalive  100  0m0.300s     0m0.790s   1466.92     6790.01
          1000  0m0.330s     0m1.180s   1305.14     5986.00
keepalive 1000  0m0.900s     0m3.360s   1552.07     7529.29

100k files
           100  0m1.000s     0m6.330s     81.33     8166.78
keepalive  100  0m0.890s     0m6.240s     85.16     8567.05
          1000  0m10.790s    0m47.700s    78.70     8137.15
keepalive 1000  0m10.630s    0m46.600s    82.28     8512.32 [*]
[*] failed requests

Apache 1.3.28

As apache is a (pre-)forking server, it is not easy to measure the time spent by all its children in user and kernel mode. That's why we use an estimate taken from top. This may not be completely accurate, but it should give you the right impression.

100% CPU usage means that the full 7-8 seconds the test runs for the small files are spent handling requests. Just check the servers above to see that the same job can be done in about 1 second.

          con   CPU Usage               [#/sec]     [Kbytes/sec]
4k file
           100       100%               1482.80     6847.20
keepalive  100        80%               1466.92     6790.01
          1000       100%               1318.74     6094.71
keepalive 1000        90%               1887.50     8797.84

100k files
           100        30%                 81.80     8227.11
keepalive  100        30%                 86.71     8741.96 [*]
          1000        33%                 81.80     8258.31
keepalive 1000        33%                 90.87     9225.39 [*]
[*] failed requests
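The CPU-time gap described above can be made concrete from the tables (the ~7.5 s run length is my estimate from the "7-8s" figure quoted above): lighttpd's user+sys for the 4k-file / 100-connection test is about 1.1 CPU-seconds, while 100% CPU over a 7-8 s run means apache spends roughly 7-8 CPU-seconds on the same job.

```python
# Rough comparison from the tables above: CPU-seconds spent serving the
# 4k-file / 100-connection test (without keep-alive).
lighttpd_cpu = 0.260 + 0.880   # user + sys from the lighttpd table
apache_cpu = 1.00 * 7.5        # ~100% CPU over an assumed ~7.5 s run
print(f"lighttpd: {lighttpd_cpu:.2f} CPU-s, apache: ~{apache_cpu:.1f} CPU-s")
print(f"apache uses roughly {apache_cpu / lighttpd_cpu:.0f}x the CPU time")
```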

monkey 0.82

The author (Eduardo Silva) has been contacted and replied:

I know about this problem; it's just a concurrency problem, because monkey creates a thread for every request that arrives... this only happens with a lot of connections, because the system slows down while creating the threads. I'm working on a poll implementation to fix this problem.

          con   user         sys        [#/sec]     [Kbytes/sec]
4k file
           100                                              [*]
keepalive  100                                              [**]
          1000                                              [**]
keepalive 1000                                              [**]

100k files
           100                                              [*]
keepalive  100                                              [*]
          1000                                              [*]
keepalive 1000                                              [*]
[*] > 500 failed requests
[**] ab failed with: Test aborted after 10 failures
 
Conclusions from another site:
******************************************************************

Conclusions

The entire benchmarking process seemed to end up being a battle-royale between thttpd and lighttpd. thttpd is truly impressive in that when serving up a single file, it scales linearly with the number of connections, as long as there's only one request per connection. This limitation is a serious handicap, though. Without support for persistent connections, it can't really spit out the data quite as fast as when that feature is used. Lighttpd seems very powerful, particularly since it can support persistent connections, but I'm surprised at its erratic behavior. I'm at a loss for an explanation.

Anyway, for the moment I'm going to be using lighttpd, in part because it supports PHP, and in part because most clients out there (at least, all the ones I use) support persistent connections, and it is unquestionably the most reliably speedy in that situation. But, if I ever need to distribute a single file (rather than several dozen) to lots and lots of people, I would definitely go with thttpd. While its performance over loopback is not impressive compared to lighttpd, when accessing it from another machine, its performance is extremely good.

I really hope that either thttpd adds support for persistent connections soon, or that Gatling and lighttpd fix their bizarre scaling behavior (or both!).
