Introduction
The term “commoditized” is often used to describe the future of cloud infrastructure, and that may well prove accurate. The reality, however, is that there is currently so little standardization that businesses have serious trouble selecting the right provider, or even coming up with concrete criteria for doing so. “Commoditized” may describe IaaS in the future; it certainly does not describe it now.
Performance often seems to be an afterthought in the provider selection process. Many think, “if I need better performance, I can conquer that with scale.”
While this is true for some performance bottlenecks, it is not true for all, and the adverse effect on cost is rarely considered. Remember, the market is not commoditized yet; take advantage of this opportunity by seeking out the providers that offer more bang for your buck.
One example of a big problem users experience is disk I/O involving calls to a SAN. In a multitenant environment, the crowded internal network becomes the bottleneck as many tenants try to pull data from storage at once. This is why AWS introduced Provisioned IOPS early on: it recognized a problem that had to be corrected to provide a quality service.
Is your application dependent on CPU or RAM? Are you sure the resources your application needs will be available when you need them? You may be able to get all the CPU you need for your graphics rendering at 12 PM on Thursday, but will it be there at the same time on Friday? With shared, multitenant services, it is difficult to be sure.
The point is that these differences are present and they matter. This is why we monitor the performance of 20 of the largest IaaS providers across 40+ tests that measure CPU, RAM, storage, internal network, and database applications. We measure three times per day, every day, to catch the variability present in multitenant services.
Testing Details
Below is one test each for overall system, CPU, storage, RAM, and internal network. While true performance measurements depend on the unique workload of the user, these synthetic tests can provide an indication to use as a starting point for evaluations.
Providers: Of the 20 providers benchmarked, only the highest and lowest performers in each test are shown, and their names are omitted.
Server Sizes: All servers have 2 vCPUs and 4 GB of RAM, with varying amounts of storage (storage size does not affect the results).
Testing Interval: Tests were run from July 17 to July 23, 2013, with three data points recorded each day. Only the highest and lowest values per day are displayed and used for the calculations in the tables, as illustrated in the sketch below.
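To make the aggregation concrete, here is a minimal sketch of how per-day high and low values can be pulled from three daily data points. The scores below are invented for illustration, not real results.

```python
# Derive per-day high/low values from three daily samples (invented data)
from collections import defaultdict

samples = [  # (date, score) pairs: three synthetic samples per day
    ("2013-07-17", 1480.2), ("2013-07-17", 1512.7), ("2013-07-17", 1390.5),
    ("2013-07-18", 1501.3), ("2013-07-18", 1445.9), ("2013-07-18", 1523.0),
]

by_day = defaultdict(list)
for day, score in samples:
    by_day[day].append(score)

for day, scores in sorted(by_day.items()):
    print(day, "high:", max(scores), "low:", min(scores))
```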
Detailed Results
System Test – UnixBench
UnixBench runs a set of individual benchmark tests and aggregates their scores into a final index that gauges the overall performance of a Linux system. From the UnixBench homepage:
The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple tests are used to test various aspects of the system’s performance. These test results are then compared to the scores from a baseline system to produce an index value, which is generally easier to handle than the raw scores. The entire set of index values is then combined to make an overall index for the system.
Do be aware that this is a system benchmark, not a CPU, RAM or disk benchmark. The results will depend not only on your hardware, but on your operating system, libraries, and even compiler.
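If you want to reproduce this yourself, the sketch below shows one way to automate a UnixBench run and capture the final index. It assumes UnixBench has already been downloaded and built in a local UnixBench directory, and the output parsing is an assumption based on the stock tool’s report format.

```python
# Hedged sketch: run UnixBench and extract the overall index score.
# Assumes the benchmark is built in ./UnixBench (path is an assumption).
import re
import subprocess

result = subprocess.run(
    ["./Run"], cwd="UnixBench", capture_output=True, text=True, check=True
)

# UnixBench reports a line like "System Benchmarks Index Score ... <value>"
match = re.search(r"System Benchmarks Index Score[:\s]+([\d.]+)", result.stdout)
if match:
    print("UnixBench index:", float(match.group(1)))
```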
Provider 1 has 3.7x the average performance of Provider 2 in the UnixBench Test.
CPU Test – 7-Zip Compression
With p7zip’s integrated benchmark feature, we test the performance of the virtual CPU by measuring the millions of instructions per second (MIPS) that it can handle when compressing a file.
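To try this on your own servers, p7zip ships a built-in benchmark invoked as “7za b”. The sketch below runs it and pulls the totals row; the exact output layout varies by version, so the parsing here is an assumption to adapt to your installation.

```python
# Hedged sketch: run p7zip's integrated benchmark and grab the MIPS rating.
import subprocess

result = subprocess.run(["7za", "b"], capture_output=True, text=True, check=True)

for line in result.stdout.splitlines():
    if line.strip().startswith("Tot:"):
        # The last field of the totals row is the overall MIPS rating
        # (layout assumption; check your p7zip version's output)
        print("Total rating (MIPS):", line.split()[-1])
```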
Provider 1 has 2.4x the average performance of Provider 2 in the 7-Zip Compression Test.
Storage Test – Dbench
Dbench can be used to stress a filesystem or a server to find the workload at which it becomes saturated, and it can also be used for predictive analysis to answer, “How many concurrent clients/applications performing this workload can my server handle before response starts to lag?”
It is an open-source benchmark designed by the Samba project as a free alternative to NetBench, but dbench contains only the file-system calls, which makes it well suited to testing disk performance.
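As a starting point, the sketch below drives dbench with 10 simulated clients for 60 seconds and reads back the throughput line. It assumes dbench is installed with its default load file, and that the “-t” runtime flag and output format match common builds.

```python
# Hedged sketch: run dbench (10 clients, 60 s) and parse the throughput.
import re
import subprocess

result = subprocess.run(
    ["dbench", "-t", "60", "10"], capture_output=True, text=True, check=True
)

# dbench typically ends with a line like "Throughput 123.45 MB/sec ..."
match = re.search(r"Throughput\s+([\d.]+)\s+MB/sec", result.stdout)
if match:
    print("dbench throughput:", float(match.group(1)), "MB/sec")
```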
Provider 1 has 6.8x the average performance of Provider 2 in the Dbench Test.

RAM Test – RAMSpeed SMP
The RAMSpeed test is an aggregate of several tests that measure COPY, SCALE, ADD, and TRIAD functions for both integer and floating-point values.
More information on COPY, SCALE, ADD, and TRIAD:
- COPY transfers data from one memory location to another (A = B)
- SCALE multiplies the data by a constant value before writing it (A = n·B)
- ADD reads data from two different locations, adds the values, and writes them to a new location (A = B + C)
- TRIAD merges ADD and SCALE. It reads data from the first memory location, scales it (multiplies it by the constant), then adds data from the second location and writes the result to a new location (A = n·B + C)
Each test results in a score in MB/s. All scores are averaged to come up with the final score.
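To make the four kernels concrete, here is an illustrative re-creation in NumPy. This is not RAMSpeed itself, and Python overhead makes the MB/s figures rough approximations at best; it simply shows what each operation computes and how a bandwidth score can be derived.

```python
# Illustrative COPY/SCALE/ADD/TRIAD kernels (not the real RAMSpeed tool)
import time
import numpy as np

N = 20_000_000   # 20M float64 elements, ~160 MB per array
n = 3.0          # the scalar constant used by SCALE and TRIAD
B = np.ones(N)
C = np.ones(N)

def rate(label, func, bytes_moved):
    start = time.perf_counter()
    func()
    elapsed = time.perf_counter() - start
    print(f"{label}: {bytes_moved / elapsed / 1e6:.0f} MB/s")

rate("COPY ", lambda: B.copy(),  2 * B.nbytes)  # A = B       (read B, write A)
rate("SCALE", lambda: n * B,     2 * B.nbytes)  # A = n*B     (read B, write A)
rate("ADD  ", lambda: B + C,     3 * B.nbytes)  # A = B + C   (read B, C, write A)
rate("TRIAD", lambda: n * B + C, 3 * B.nbytes)  # A = n*B + C (read B, C, write A)
```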
Provider 1 has 4.1x the average performance of Provider 2 in the RAMSpeed Test.
Internal Network Test – Iperf
Running iperf tests the network throughput between two virtual machines (VMs) within the same private network inside the data center. The results are important for understanding possible network bottlenecks. Applications that rely on databases, which may pull from storage located off the local server, need a large pipe to move data efficiently and keep server-side load times fast. This is especially true for big data applications like Hadoop, which push massive amounts of data.
Throughput and bandwidth are often confused; while both describe the size of the pipe, bandwidth is the theoretical capacity, while throughput is what the user actually receives. This is an important distinction in a cloud environment, where a 10 Gbit pipe may be split among many users.
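Measuring this yourself is straightforward with iperf (version 2 here). The sketch below assumes “iperf -s” is already running on the server VM, and the private IP address shown is a placeholder to replace with your own.

```python
# Hedged sketch: 30-second TCP throughput test between two VMs via iperf.
# Assumes "iperf -s" is listening on the server VM; the IP is a placeholder.
import subprocess

SERVER_IP = "10.0.0.2"  # hypothetical private address of the server VM

result = subprocess.run(
    ["iperf", "-c", SERVER_IP, "-t", "30"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # the summary line reports the achieved throughput
```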
Provider 1 has 120.3x the average internal network throughput of Provider 2 in the Iperf Test.
Conclusion
Before migrating to a public cloud, make sure your due diligence includes performance considerations. Fast applications are a necessity in today’s marketplace and in internal business environments. The differences above highlight the fact that not all services are created equal and the industry is not commoditized. Significant performance discrepancies exist among providers, and they can only be revealed through objective, accurate testing. Ensure that your provider will be able to meet your application performance needs.