Benchmarking Amazon EC2: The wacky world of cloud performance

The performance of Amazon machine instances is sometimes fast, sometimes slow, and sometimes absolutely abysmal


To make matters worse, the Micro instances would sometimes fail to finish. Sometimes there was a 404 error caused by a broken request for a Web page in the Web services simulations (Tradesoap and Tradebeans). Sometimes the bigger jobs just died with a one-line error message: "killed." The deaths, though, weren't predictable. It's not like one benchmark would always crash the machine.
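
The article doesn't show the harness that drove the tests, but a small driver script along the following lines would capture both the timings and the abrupt failures. This is a minimal sketch, assuming the standard DaCapo 9.12 "bach" jar and Python on the instance; the jar name and run count are assumptions, not the setup actually used:

    import subprocess
    import time

    # Minimal sketch of a benchmark driver; the jar name and run count
    # are assumptions, not the article's actual harness.
    BENCHMARKS = ["avrora", "eclipse", "tomcat", "tradebeans", "tradesoap"]
    RUNS = 3

    for bench in BENCHMARKS:
        for run in range(1, RUNS + 1):
            start = time.monotonic()
            proc = subprocess.run(
                ["java", "-jar", "dacapo-9.12-bach.jar", bench],
                capture_output=True, text=True)
            elapsed = time.monotonic() - start
            if proc.returncode == 0:
                print(f"{bench} run {run}: {elapsed:.1f} s")
            else:
                # A job killed by the kernel surfaces here as a negative
                # return code (-9 for SIGKILL), not as normal output.
                print(f"{bench} run {run}: FAILED after {elapsed:.1f} s "
                      f"(exit {proc.returncode}): {proc.stderr.strip()[:200]}")

Checking the exit status matters here: a death like the one-line "killed" leaves nothing useful on stdout, so the return code is often the only evidence of what happened.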

Well, there was one case where I could feel that death was imminent. One instance was a definite lemon, and its sluggishness was apparent from the beginning. The Eclipse test is one of the more demanding DaCapo benchmarks. The other Micro machines would usually finish it in 500 to 600 seconds, but the lemon started off at more than 900 seconds and got worse. By the third run, it took 2,476 seconds, almost five times slower than its brethren.

This wasn't necessarily surprising. The machine started up on a Thursday just after lunch, East Coast time, probably one of the moments when the largest part of America is wide awake and browsing the Web. Some of the faster machines had started up at 6:30 in the morning Eastern.

While I normally shut down the machines after the tests were over, I kept the lemon around to play with it. It didn't get better. By late in the afternoon, it was crashing. I would come back to find messages that the machine had dropped communications, leaving my desktop complaining about a "broken pipe." Several times the lemon couldn't finish more than a few of the different tests.

For grins, I fired up the same tests on the same machine a few days later, on a Sunday morning, East Coast time. The first test went well. The lemon powered through Avrora in 18 seconds, a time comparable to the results often reported by the Medium machines. But after that the lemon slowed down dramatically, taking 3,120 seconds to finish the Eclipse test.

Up from Micro
The Medium machines were much more consistent. They never failed to run a benchmark, and their reported times didn't vary as much. But even these numbers were hardly Swiss-watch precise. One Medium machine reported times of 16.7, 16.3, and 17.5 seconds for the Avrora test, while another reported 14.9, 14.8, and 14.8. Yet another machine ran it in 13.3 seconds.

Some Medium machines were more than 10 percent faster than others, and which one you got seemed to be the luck of the draw. A machine's relative speed was consistent across most of the benchmarks and didn't seem as tied to the time of day.
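
Plugging the Avrora numbers above into a few lines of Python (a hypothetical back-of-the-envelope check, not part of the original test rig) separates the two kinds of variation, run-to-run on one machine versus machine-to-machine:

    from statistics import mean

    # Avrora times in seconds, as reported above for three Medium machines
    machines = {
        "medium-1": [16.7, 16.3, 17.5],
        "medium-2": [14.9, 14.8, 14.8],
        "medium-3": [13.3],
    }

    means = {}
    for name, times in machines.items():
        means[name] = mean(times)
        spread = max(times) - min(times)
        print(f"{name}: mean {means[name]:.1f} s, "
              f"run-to-run spread {spread:.1f} s")

    # Gap between the slowest and fastest machine, as a share of the slowest
    fastest, slowest = min(means.values()), max(means.values())
    print(f"machine-to-machine gap: {(slowest - fastest) / slowest:.0%}")

The run-to-run spread on any one machine stays near a second, while the gap between the fastest and slowest machine works out to about 21 percent, well past the 10 percent figure above.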

The performance of the Medium machines also suggests that RAM may be just as important as the CPU cores, but only if you need it. The Medium has roughly six times the RAM of the Micro; it costs, not surprisingly, six times as much. Yet on the Avrora benchmark, the Micro machine often ran faster than the Medium or, at worst, only a few times slower. On the Tomcat benchmark, it never ran faster and routinely finished four to six times slower.
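
Those two ratios, six times the price against four to six times the Tomcat runtime, make for a quick cost-per-run comparison. Here is a hypothetical sketch using only the multiples cited in the paragraph above:

    # Cost-per-run sketch using only the ratios cited above: a Medium
    # costs about 6x a Micro per hour, and the Micro runs Tomcat 4x-6x
    # slower. The cost of one run scales with hourly price * runtime.
    MEDIUM_PRICE_MULTIPLE = 6.0

    for slowdown in (4.0, 5.0, 6.0):
        micro_cost = 1.0 * slowdown          # cheap machine, long run
        medium_cost = MEDIUM_PRICE_MULTIPLE  # pricey machine, short run
        if micro_cost < medium_cost:
            verdict = "Micro cheaper per run"
        elif micro_cost == medium_cost:
            verdict = "break-even"
        else:
            verdict = "Medium cheaper per run"
        print(f"Micro {slowdown:.0f}x slower: Micro={micro_cost:.0f}, "
              f"Medium={medium_cost:.0f} -> {verdict}")

By that crude measure, the Micro never costs more per completed Tomcat run than the Medium; the catch, as the crashes above show, is whether the run completes at all.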

[Chart: Performance of Amazon's M1 Medium instances was much more consistent. Unlike the Micros, the Mediums never failed to complete a test run. (Download a PDF of the M1 Medium results.)]