Liam wrote the following:
Caucho’s press release references “benchmark tests between Resin and NGINX,” and “numerous and varying tests,” which it says delivered competitive figures, with Resin leading with fewer errors and faster response times. In particular, it says, Resin sustained fast response times under heavy load while NGINX degraded.
The announcement does not describe the testing methodology, however, which raises the obvious question of whether the tests, which appear to have been conducted by Caucho itself, were somehow stacked in favor of the company’s own web server.
Press releases are usually short and light on technical detail. However, all syndication of our press release had a link to the exact testing methodology that we used (linked text was “benchmark tests prove”).
We tried to keep our benchmark simple and fair. We used four files: a 0k file (an almost empty HTML page), a 1k file, an 8k file, and a 64k file. Then we used AutoBench, which drives httperf, to test throughput. The test was straightforward and done with tools that are industry standards for benchmarking, so there should be no question of whether it was stacked in our favor. Benchmarks can be really flawed. We hate flawed benchmarks.
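To make the shape of the test concrete, here is a sketch of what an autobench run over httperf looks like for one file size. The file names, host name, and rate range below are illustrative assumptions, not our exact parameters; the published methodology page has the real settings.

```shell
# Create the four static test files (0k, 1k, 8k, 64k).
# File names are hypothetical examples.
truncate -s 0   0k.html   # effectively empty page
truncate -s 1K  1k.html
truncate -s 8K  8k.html
truncate -s 64K 64k.html

# One autobench run per file size. autobench drives httperf across a
# range of request rates and records throughput, response times, and
# errors at each step. Host and rate values here are placeholders.
autobench --single_host --host1 server.example.com --port1 80 \
          --uri1 /1k.html \
          --low_rate 100 --high_rate 1000 --rate_step 100 \
          --num_call 10 --num_conn 5000 --timeout 5 \
          --file results-1k.tsv
```

Repeating the same run against each server, with the same files and the same rate schedule, is what makes the comparison apples-to-apples.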
We felt like this benchmark was a good demonstration of the file sizes that most web application servers serve, and we based this on a few studies and our own experience. Let us know how we can improve the test to be more fair.
You may think of us as a vendor. We think of ourselves as a bunch of engineers who sit around and ask questions like: “is NginX faster and if so why?”. Then we come up with a plan to match or beat the speed. Then we devise a benchmark. Then we test. Then we make improvements. It is nice to have a worthy competitor like NginX. Resin has always been really fast, but competition from NginX has made us faster. We love not only the outcome, but the journey and craftsmanship it took to get there. The reason we wouldn’t game the benchmark is that would take all of the fun out of it. We would have to start wearing suits and going to meetings with powerpoint slides. Our souls are intact. We are software developers. We are engineers.
Liam wrote the following:
So it is probably not surprising that up-and-coming web server products would intentionally prompt comparisons to NGINX. In a similar way, NGINX was quick to initiate comparisons to Microsoft’s IIS as it approached the popular web server in market share.
Thank you for the compliment. We think that Resin has a lot of promise of continued success as well.
When a lot of people think of up-and-coming, they might think of new. Resin has been around longer than NginX. In two more years, Resin will be old enough to drive a car in California. Resin’s 14-year success is largely attributable to its performance. Resin’s strong growth is a welcome sight for a battle-hardened veteran of the web. Leading companies worldwide with demands for reliable, high-performance web applications, including the Toronto Stock Exchange, Salesforce.com, and CNET, are powered by Resin.
The NginX forum had a post as follows:
What nginx configuration was used during the testing? Did they tune it?
Did Resin use an equivalent level of logging? What build options were used
to build nginx? Why did they test on 1k page? I don’t think that the average
size of typical web-page and its elements are about 1 Kb. Does it mean that
the Resin cannot effectively handle files of more size? What about memory
usage? And after all, why did they use the latest version of Resin and
relatively old version of nginx?
All syndication of our press release had a link to the exact testing methodology that we used, which is on our wiki. This page and this blog post will answer all of your questions except for why we used NginX 1.2.0.
The press release was done in August. NginX 1.2.0 was used for the test; it was released in April. NginX 1.2.2 was released on July 3rd. We began benchmarking in June, and the latest stable release at that time was 1.2.0. There are no speedups documented in the release notes for 1.2.2 or 1.2.3 with regard to HTTP file delivery. We also did benchmarking with an older version of NginX prior to June and did see an NginX speedup over that earlier version. (We used to be further ahead, but we did not publish that benchmark.)
Feel free to benchmark against 1.2.2 or 1.2.3, and send us the results.