The performance of Amazon machine instances is sometimes fast, sometimes slow, and sometimes absolutely abysmal.
To make matters worse, the Micro instances would sometimes fail to finish. Sometimes there was a 404 error caused by a broken request for a Web page in the Web services simulations (Tradesoap and Tradebeans). Sometimes the bigger jobs just died with a one-line error message: "killed." The deaths, though, weren't predictable. It's not like one benchmark would always crash the machine.
Well, there was one case where I could start to feel that death was imminent. One instance was a definite lemon, and its sluggishness was apparent from the beginning. The Eclipse test is one of the more demanding DaCapo benchmarks. The other Micro machines would usually finish it in 500 to 600 seconds, but the lemon started off at more than 900 seconds and got worse. By the third run, it took 2,476 seconds, almost five times slower than its brethren.
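The numbers above came from timing repeated runs of the same benchmark. A minimal sketch of that kind of harness is below; the DaCapo command line shown in the comment is an assumption for illustration, not a verified invocation, so the demonstration uses a trivial command instead.

```python
# Sketch of a repeated-run timing harness like the one the tests imply.
import subprocess
import time

def time_runs(cmd, runs=3):
    """Run `cmd` repeatedly and return wall-clock seconds for each run."""
    timings = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        timings.append(time.monotonic() - start)
    return timings

# A real session might look like this (hypothetical jar name):
#   time_runs(["java", "-jar", "dacapo.jar", "eclipse"], runs=3)
# Demonstrate with a trivial command so the sketch is self-contained:
timings = time_runs(["python3", "-c", "pass"], runs=3)
print([round(t, 3) for t in timings])
```

Wall-clock time, not CPU time, is the right thing to record here: on a shared instance, time stolen by noisy neighbors is exactly what you want to capture.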
This wasn't necessarily surprising. This machine started up on a Thursday just after lunch on the East Coast, probably one of the moments when the largest part of America is wide awake and browsing the Web. Some of the faster machines started up at 6:30 in the morning on the East Coast.
While I normally shut down the machines after the tests were over, I kept the lemon around to play with it. It didn't get better. By late in the afternoon, it was crashing. I would come back to find messages that the machine had dropped communications, leaving my desktop complaining about a "broken pipe." Several times the lemon couldn't finish more than a few of the different tests.
For grins, I fired up the same tests on the same machine a few days later on Sunday morning on the East Coast. The first test went well. The lemon powered through Avrora in 18 seconds, a time comparable to the results often reported by the Medium machines. But after that the lemon slowed down dramatically, taking 3,120 seconds to finish the Eclipse test.
Up from Micro
The Medium machines were much more consistent. They never failed to run a benchmark, and their reported times didn't vary as much. Still, even these numbers were hardly Swiss-watch precise. One Medium machine reported times of 16.7, 16.3, and 17.5 seconds for the Avrora test, while another reported 14.9, 14.8, and 14.8. Yet another machine ran it in 13.3 seconds.
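The machine-to-machine spread in those Avrora numbers can be made concrete with a little arithmetic, using only the times quoted above:

```python
# The Avrora times quoted above, one list per Medium machine.
machine_times = {
    "A": [16.7, 16.3, 17.5],
    "B": [14.9, 14.8, 14.8],
    "C": [13.3],
}

# Average each machine's runs, then compare the best and worst machines.
means = {m: sum(t) / len(t) for m, t in machine_times.items()}
slowest = max(means.values())
fastest = min(means.values())
gap = (slowest - fastest) / slowest  # fraction by which the fastest beats the slowest

print({m: round(v, 2) for m, v in means.items()})
print(f"fastest machine beats the slowest by {gap:.0%}")
```

By that measure the gap between the best and worst Medium machines is roughly 20 percent, which is consistent with the "more than 10 percent" claim below.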
Some Medium machines were more than 10 percent faster than others, and which one you got seemed to be the luck of the draw. The speed of a given Medium machine was consistent across most of the benchmarks and didn't seem as tied to the time of day.
The performance of the Medium machines also suggests that RAM may be just as important as the CPU cores, but only if the workload needs it. The Medium has roughly six times more RAM than the Micro; not surprisingly, it also costs six times as much. But on the Avrora benchmark, the Micro machine often ran just as fast or only a few times slower. On the Tomcat benchmark, it never ran faster and routinely finished between four and six times slower.
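One way to read those ratios is in cost per completed run. This sketch assumes only the figures quoted above, the roughly 6x price gap and the 4x to 6x Tomcat slowdown, and uses no absolute prices:

```python
# Price/performance arithmetic from the ratios in the article:
# the Medium costs roughly 6x the Micro, and on Tomcat the Micro
# finished 4 to 6 times slower.
PRICE_RATIO = 6.0  # Medium hourly price / Micro hourly price

def cost_per_job_ratio(slowdown):
    """Micro's cost per completed run relative to the Medium's.

    Cost per run = hourly price x runtime, so the ratio reduces to
    slowdown / PRICE_RATIO. Below 1.0, the Micro is cheaper per job
    despite being slower.
    """
    return slowdown / PRICE_RATIO

for slowdown in (4.0, 5.0, 6.0):
    r = cost_per_job_ratio(slowdown)
    print(f"Micro {slowdown:.0f}x slower -> {r:.2f}x the Medium's cost per run")
```

By that arithmetic the Micro only stops being the cheaper per-job option when it runs a full six times slower, though in practice wall-clock time, not just cost per job, is what users feel.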