
Big Data at SoftLayer: The Importance of IOPS

The jet flow gates in the Hoover Dam can release up to 73,000 cubic feet — the equivalent of 546,040 gallons — of water per second at 120 miles per hour. Imagine replacing those jet flow gates with a single garden hose that pushes 25 gallons per minute (or 0.42 gallons per second). Things would get ugly pretty quickly. In the same way, a massive “big data” infrastructure can be crippled by insufficient IOPS.

IOPS — Input/Output Operations Per Second — measure computer storage in terms of the number of read and write operations it can perform in a second. IOPS are a primary concern for database environments where content is being written and queried constantly, and when we take those database environments to the extreme (big data), the importance of IOPS can't be overstated: If you aren't able to perform database reads and writes quickly in a big data environment, it doesn't matter how many gigabytes, terabytes or petabytes you have in your database … You won't be able to efficiently access, add to or modify your data set.
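
To put an IOPS figure in context, it helps to remember that IOPS and throughput are tied together by the block size of each operation: throughput ≈ IOPS × block size. The quick calculation below is purely illustrative (it isn't part of the benchmark in this post); the 8 KB block size just matches the fio tests shown later.

    # Rough throughput implied by an IOPS figure at a given block size.
    # Illustrative numbers only; 8 KB matches the fio block size used below.
    awk 'BEGIN {
        iops = 1000; bs_kb = 8
        printf "%d IOPS at %d KB blocks ~= %.1f MB/s\n", iops, bs_kb, iops * bs_kb / 1024
    }'
    # Output: 1000 IOPS at 8 KB blocks ~= 7.8 MB/s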

As we worked with 10gen to create, test and tweak SoftLayer's MongoDB engineered servers, our primary focus centered on performance. Since the performance of massively scalable databases is dictated by the read and write operations to that database's data set, we invested significant resources into maximizing the IOPS for each engineered server … And that involved a lot more than just swapping hard drives out of servers until we found a configuration that worked best. Yes, "Disk I/O" — the number of input/output operations a given disk can perform — plays a significant role in big data IOPS, but many other factors limit big data performance. How is performance impacted by network-attached storage? At what point will a given CPU become a bottleneck? How much RAM should be included in a base configuration to accommodate the load we expect our users to put on each tier of server? Are there operating system changes that can optimize the performance of a platform like MongoDB?
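
On that last question, this post doesn't spell out the exact OS changes we made, but a couple of Linux tweaks commonly recommended for MongoDB hosts in this era were lowering readahead on the data volume and disabling transparent huge pages. A hedged sketch only — the device name is a placeholder, and none of this is presented as SoftLayer's actual configuration:

    # Commonly recommended MongoDB host tuning on Linux (illustrative).
    # /dev/sdb is a placeholder for the device backing /var/lib/mongo/data.
    blockdev --setra 32 /dev/sdb                               # lower readahead for random-access workloads
    echo never > /sys/kernel/mm/transparent_hugepage/enabled   # avoid THP-induced latency spikes
    # (the path is redhat_transparent_hugepage on some RHEL/CentOS 6 kernels)
    mount -o remount,noatime /var/lib/mongo/data               # skip access-time updates on the data volume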

The resulting engineered servers are a testament to the blood, sweat and tears that were shed in the name of creating a reliable, high-performance big data environment. And I can prove it.

Most shared virtual instances — the scalable infrastructure many users employ for big data — rely on network-attached storage. When data has to be queried over a network connection (rather than from a local disk), you introduce latency and more "moving parts" that have to work together. Disk I/O might be amazing on the enterprise SAN where your data lives, but because that data is not stored on-server with your processor or memory resources, performance can sporadically go from "Amazing" to "I Hate My Life" depending on network traffic. When I tested the IOPS for network-attached storage from a large competitor's virtual instances, I saw an average of around 400 IOPS per mount. It's difficult to say whether that's "not good enough" because every application will have different needs in terms of concurrent reads and writes, but it certainly could be better. We performed some internal testing of the IOPS for the hard drive configurations in our Medium and Large MongoDB engineered servers to give you an apples-to-apples comparison.

Before we get into the tests, here are the specs for the servers we’re using:

Medium (MD) MongoDB Engineered Server
    Dual 6-core Intel 5670 CPUs
    CentOS 6 64-bit
    36GB RAM
    1Gb Network – Bonded

Large (LG) MongoDB Engineered Server
    Dual 8-core Intel E5-2620 CPUs
    CentOS 6 64-bit
    128GB RAM
    1Gb Network – Bonded

The numbers shown in the tables below reflect the average number of IOPS we recorded with a 100% random read/write workload on each of these engineered servers. To measure these IOPS, we used a tool called fio with an 8k block size and an iodepth of 128. Remembering that the virtual instance using network-attached storage was able to get 400 IOPS per mount, let's look at how our "base" configurations perform:
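
If you want to run a comparable test yourself, here's a minimal sketch of the kind of fio invocation described above. The block size and iodepth come from this post; the remaining flags (ioengine, mix ratio, runtime, file size, target directory) are assumptions, not the exact job we ran:

    # Sketch of a 100% random read/write test (8k blocks, iodepth 128) against
    # the data mount; read and write IOPS are reported separately in the results.
    fio --name=mongo-data-randrw \
        --directory=/var/lib/mongo/data \
        --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=50 \
        --bs=8k --iodepth=128 \
        --size=4G --runtime=60 --time_based \
        --group_reporting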

Medium – 2 x 64GB SSD RAID1 (Journal) – 4 x 300GB 15k SAS RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             2937                1306
    /var/lib/mongo/data             1720                772
    /var/lib/mongo/data/journal     19659               8869

Medium – 2 x 64GB SSD RAID1 (Journal) – 4 x 400GB SSD RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             30269               13124
    /var/lib/mongo/data             33757               14168
    /var/lib/mongo/data/journal     19644               8882

Large – 2 x 64GB SSD RAID1 (Journal) – 6 x 600GB 15k SAS RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             4820                2080
    /var/lib/mongo/data             2461                1099
    /var/lib/mongo/data/journal     19639               8772

Large – 2 x 64GB SSD RAID1 (Journal) – 6 x 400GB SSD RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             32403               13928
    /var/lib/mongo/data             34536               15412
    /var/lib/mongo/data/journal     19578               8835

Clearly, the 400 IOPS per mount you'd see from SAN-based storage can't hold a candle to the performance of local physical disks, whether SAS or SSD. As you'd expect, the "Journal" reads and writes show roughly the same IOPS across all four configurations because every configuration uses the same 2 x 64GB SSD drives in RAID1. In both server sizes, SSD drives provide better Data mount read/write performance than the 15K SAS drives, and the results suggest that having more physical drives in a Data mount will provide higher average IOPS. To put that observation to the test, I maxed out the number of hard drives in both servers (10 in the 2U MD server and 34 in the 4U LG server) and recorded the results:

Medium – 2 x 64GB SSD RAID1 (Journal) – 10 x 300GB 15k SAS RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             7175                3481
    /var/lib/mongo/data             6468                1763
    /var/lib/mongo/data/journal     18383               8765

Medium – 2 x 64GB SSD RAID1 (Journal) – 10 x 400GB SSD RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             32160               12181
    /var/lib/mongo/data             34642               14545
    /var/lib/mongo/data/journal     19699               8764

Large – 2 x 64GB SSD RAID1 (Journal) – 34 x 600GB 15k SAS RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             17566               11918
    /var/lib/mongo/data             9978                6526
    /var/lib/mongo/data/journal     18522               8722

Large – 2 x 64GB SSD RAID1 (Journal) – 34 x 400GB SSD RAID10 (Data)
    Mount                           Random Read IOPS    Random Write IOPS
    /var/lib/mongo/logs             34220               15388
    /var/lib/mongo/data             35998               17120
    /var/lib/mongo/data/journal     17998               8822

It should come as no surprise that adding more drives to a configuration yields better IOPS, but you might be wondering why the results aren't "betterer" in the SSD configurations. While the IOPS numbers improve going from four to ten drives in the medium engineered server and from six to thirty-four drives in the large engineered server, they don't increase nearly as dramatically as they do with the SAS drives. This is what I meant when I explained that several factors contribute to and potentially limit IOPS performance. In this case, the limiting factor throttling the (ridiculously high) IOPS is the RAID card we are using in the servers. We've been working with our RAID card vendor to test a new card that would open a little more headroom for SSD IOPS, but that replacement card doesn't yet provide the consistency and reliability we need for these servers (and those are just as important as speed).
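
As a rough sanity check on why the SAS configurations scale so visibly with drive count, the classic RAID10 spindle rule of thumb says random reads scale with the number of drives and mirrored writes cost roughly half that. The per-drive figure below is a commonly quoted estimate for a 15K SAS drive, not a measured SoftLayer number, and controller cache plus deep queue depths push real results well past it:

    # Back-of-the-envelope RAID10 spindle math (rule of thumb only).
    # ~175 random IOPS per 15K SAS drive is an assumed, typical figure.
    per_disk=175
    for n in 4 10 34; do
        echo "$n drives: ~$(( n * per_disk )) read IOPS, ~$(( n * per_disk / 2 )) write IOPS"
    done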

There are probably a dozen other observations I could point out about how each result compares with the others (and why), but I’ll stop here and open the floor for you. Do you notice anything interesting in the results? Does anything surprise you? What kind of IOPS performance have you seen from your server/cloud instance when running a tool like fio?

-Kelly

