Cost Effective Testing Environments

I really enjoy using cloud-based virtual servers for testing MariaDB and MySQL deployments. With automation scripts, it's fast and easy to create environments that match what customers are running, which helps us resolve issues more quickly. For example, with just a couple of commands we can bring up a replication cluster with one master and two slaves, running a specific MySQL version, and run tests to track down a memory leak.

There are many virtual server providers, with varying levels of performance and pricing, but are they all created equal? Which one will give us the best performance for the lowest price? Let's find out.

First we need to define our requirements to test against. The top priority is price: we're only using these servers for testing, so we don't want to put too much money into them. We also need hourly billing, so we only pay for what we use. The final requirement is an external API, so we can handle administration with automation scripts.

Most tests are not performance dependent (and when they are, we use dedicated hardware), so we can save money by using low-end configurations. However, we don't want to go too low, or we'll be waiting forever for tests to finish. I prefer a minimum of two CPU cores and 2GB of RAM for my testing environments, which gives MariaDB/MySQL and the operating system enough room to co-exist happily.

The three providers that seem to be the most popular are Amazon EC2, Digital Ocean, and Linode. As of this writing, these are their offerings that meet our requirements:

Provider               CPU  RAM (GB)  Disk (GB)  $/hour
Amazon EC2 - m1.small  1    1.7       160        0.044
Digital Ocean          2    2         40 (SSD)   0.030
Linode - Linode 2GB    2    2         48 (SSD)   0.030

They are similar in specs, except that the Amazon EC2 instance has only one CPU core yet is still more expensive per hour.

For benchmarking, we will be using MariaDB 10 from the RPM repository and the latest sysbench 0.5 compiled from source. Details about installing sysbench from source can be found here: https://blog.mariadb.org/using-lua-enabled-sysbench/

MariaDB is configured the same way as our actual test environments, so the benchmark reflects exactly what we run. We'll set the InnoDB buffer pool to half of system memory, and I prefer to configure my virtual servers to sacrifice data integrity/safety for more performance:

innodb_buffer_pool_size=1G         # half of the 2GB of system memory
innodb_log_file_size=1G
sync_binlog=0                      # let the OS decide when to sync the binlog
innodb_flush_log_at_trx_commit=2   # flush the redo log roughly once per second
skip-innodb_doublewrite            # unsafe on crash, fine for throwaway tests
innodb_file_per_table=1            # separate tablespace file per table
performance_schema=off             # skip instrumentation overhead

The first step for sysbench is the prepare command, which creates and populates a table with the specified number of rows. We want this table to be larger than available memory, so that the test generates some disk I/O. 20 million rows gives us a 5GB table, well above the 2GB of RAM in our virtual servers. We'll use a command similar to this:

sysbench --test=tests/db/oltp.lua --mysql-user=root --mysql-db=test --oltp-table-size=20000000 prepare
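A quick sanity check on that 5GB figure: dividing the observed on-disk size by the row count shows the sysbench OLTP table costs roughly 270 bytes per row once InnoDB row and index overhead is included. This is a back-of-the-envelope estimate, not an exact on-disk measurement:

```python
rows = 20_000_000
table_gib = 5  # observed on-disk size of the prepared table
bytes_per_row = table_gib * 1024**3 / rows
print(f"~{bytes_per_row:.0f} bytes per row")  # ~268 bytes per row
```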

And then to actually run the tests, we’ll use a command like this:

sysbench --test=tests/db/oltp.lua --mysql-user=root --mysql-db=test --mysql-table-engine=innodb --mysql-ignore-duplicates=on --num-threads=4 --oltp-table-size=20000000 --oltp-read-only=off --oltp-test-mode=complex --max-requests=0 --report-interval=5 --max-time=600 run
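The repeated runs can be driven by a small wrapper loop. Here is a hedged sketch (not the exact script we used): nine invocations, 30 seconds apart, each saving its output to a numbered file. The flags mirror the run command above; `DRY_RUN` defaults to printing the commands so you can inspect them before letting it loose on a real server:

```shell
# Driver loop sketch: nine sysbench runs, 30 seconds apart.
# DRY_RUN defaults to "echo" (print only); set DRY_RUN= to run for real.
DRY_RUN="${DRY_RUN-echo}"
for i in 1 2 3 4 5 6 7 8 9; do
  $DRY_RUN sysbench --test=tests/db/oltp.lua --mysql-user=root \
    --mysql-db=test --mysql-table-engine=innodb --mysql-ignore-duplicates=on \
    --num-threads=4 --oltp-table-size=20000000 --oltp-read-only=off \
    --oltp-test-mode=complex --max-requests=0 --report-interval=5 \
    --max-time=600 run > "run-$i.txt"
  [ -n "$DRY_RUN" ] || sleep 30   # skip the pause when dry-running
done
echo "finished $i runs"
```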

We ran the above nine times per provider, with a 30-second delay between runs. Here are the results:

Amazon EC2
EC2/1.txt:    read/write requests:                 625482 (1041.99 per sec.)
EC2/2.txt:    read/write requests:                 809208 (1348.57 per sec.)
EC2/3.txt:    read/write requests:                 785322 (1308.84 per sec.)
EC2/4.txt:    read/write requests:                 916632 (1527.61 per sec.)
EC2/5.txt:    read/write requests:                 910098 (1516.66 per sec.)
EC2/6.txt:    read/write requests:                 888264 (1480.33 per sec.)
EC2/7.txt:    read/write requests:                 805968 (1343.07 per sec.)
EC2/8.txt:    read/write requests:                 656208 (1093.60 per sec.)
EC2/9.txt:    read/write requests:                 778014 (1296.62 per sec.)
Digital Ocean
DO/1.txt:    read/write requests:                 4845942 (8076.49 per sec.)
DO/2.txt:    read/write requests:                 4719258 (7865.35 per sec.)
DO/3.txt:    read/write requests:                 4587426 (7645.64 per sec.)
DO/4.txt:    read/write requests:                 4556646 (7594.36 per sec.)
DO/5.txt:    read/write requests:                 4767426 (7945.67 per sec.)
DO/6.txt:    read/write requests:                 4538988 (7564.84 per sec.)
DO/7.txt:    read/write requests:                 4509198 (7515.15 per sec.)
DO/8.txt:    read/write requests:                 4489776 (7482.90 per sec.)
DO/9.txt:    read/write requests:                 4462794 (7437.81 per sec.)
Linode
Lin/1.txt:    read/write requests:                 5512680 (9187.71 per sec.)
Lin/2.txt:    read/write requests:                 5510700 (9184.45 per sec.)
Lin/3.txt:    read/write requests:                 5572764 (9287.88 per sec.)
Lin/4.txt:    read/write requests:                 5665032 (9441.64 per sec.)
Lin/5.txt:    read/write requests:                 5576904 (9294.74 per sec.)
Lin/6.txt:    read/write requests:                 5528286 (9213.75 per sec.)
Lin/7.txt:    read/write requests:                 5720670 (9534.37 per sec.)
Lin/8.txt:    read/write requests:                 5613336 (9355.01 per sec.)
Lin/9.txt:    read/write requests:                 5579136 (9298.33 per sec.)
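Averaging the nine per-second throughput figures for each provider, and dividing by the hourly price from the table above, makes the gap concrete. The numbers here are copied from the runs listed above, not new measurements:

```python
# Per-second throughput from the nine runs listed above, per provider.
results = {
    "Amazon EC2":    [1041.99, 1348.57, 1308.84, 1527.61, 1516.66,
                      1480.33, 1343.07, 1093.60, 1296.62],
    "Digital Ocean": [8076.49, 7865.35, 7645.64, 7594.36, 7945.67,
                      7564.84, 7515.15, 7482.90, 7437.81],
    "Linode":        [9187.71, 9184.45, 9287.88, 9441.64, 9294.74,
                      9213.75, 9534.37, 9355.01, 9298.33],
}
price_per_hour = {"Amazon EC2": 0.044, "Digital Ocean": 0.030, "Linode": 0.030}

for name, runs in results.items():
    avg = sum(runs) / len(runs)
    per_dollar = avg / price_per_hour[name]  # requests/sec per $/hour
    print(f"{name:14s} avg {avg:8.1f} req/s  ({per_dollar:,.0f} req/s per $/hr)")
```

Linode averages roughly seven times the throughput of the EC2 instance while costing less per hour, so the price/performance ratio is even more lopsided than the raw numbers.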

Linode stood out as the clear leader, though I believe they recently rolled out a new set of servers, so they are running on newer hardware. Digital Ocean has been around for a while, and they may have some upgrades coming soon. The Amazon EC2 instance did the poorest, with the key constraint being I/O speed (as measured with iostat): its average wait time was five times greater than on Digital Ocean or Linode. Amazon does offer provisioned IOPS, but that would increase the cost even further over the other providers.

So, what can we conclude from all this? At the time of this test, Linode gave us the best performance for the price and will be my choice for test environments. But competition is cropping up all the time in the virtual server market, so always be benchmarking.