What would you do if Samsung gave you only 24 hours of hands-on time with a stack of solid-state drive (SSD) engineering samples to do some viral marketing? For you this is surely just an academic question, but for Paul Curry of The Viral Factory in London, it was a very real challenge. And he took the challenge to limits where only the truly geekiest would go: he custom-built an 8-core, dual-RAID, Windows Vista system using 24 256GB MLC SSDs, for a total of 6TB of storage.
Curry’s system used an Intel Skulltrail D5400XS motherboard, with two Intel 3.2GHz QX9775 Quad-Core processors, 4GB of 800MHz FB-DIMM DDR2 SDRAM, two ATI Radeon HD 4870 X2 graphics cards, an Adaptec 5 Series RAID card, an Areca 1680ix-24 RAID card, and two Corsair HX1000W power supply units. And, of course, 24 Samsung SSDs. If this sounds like it was a difficult system to build, let’s just say he ran into a few problems along the way…
First of all, getting everything inside the case was such a tight fit that Curry had to saw off part of the Zalman coolers to squeeze them in. He also managed to fry a motherboard and a 1,000-watt power supply. He replaced the motherboard and soldered the power leads of two 1,000-watt power supplies together ("connected their power_on line and gave them common ground") to make sure the rig had enough juice: one supply powered the motherboard and CPUs, while the other powered the system's drives and add-in cards. The total draw under load was about 1,400 watts before the drives were added, and around 1,500 watts after all 24 SSDs were installed. Curry also had to remove one of the two Radeon HD 4870 X2 graphics cards when he found it was drawing too much power from the PCI-e bus and preventing the Areca RAID card from initializing.
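As a rough sanity check on the dual-PSU arrangement, the reported load figures can be tallied up; the numbers below come straight from the article, while the per-drive split is a back-of-the-envelope simplification:

```python
# Rough power-budget check for the dual-PSU rig. Load figures are those
# reported in the article; the even per-drive split is a simplification.
PSU_CAPACITY_W = 1000          # each Corsair HX1000W, nominal rating

load_before_drives_w = 1400    # measured total draw before SSDs were added
load_with_drives_w = 1500      # measured total draw with all 24 SSDs

ssd_draw_w = load_with_drives_w - load_before_drives_w
per_ssd_w = ssd_draw_w / 24

print(f"24 SSDs added ~{ssd_draw_w} W total, ~{per_ssd_w:.1f} W each")

# The full load exceeds what a single 1,000 W supply could deliver,
# which is why the build needed two supplies with a shared ground.
assert load_with_drives_w > PSU_CAPACITY_W
```

The ~4W-per-drive figure also illustrates why SSDs were attractive for dense storage: the drives themselves were a small fraction of the rig's total power draw.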
It also took Curry several iterations of setting up the RAID controllers before he arrived at a configuration that didn't saturate them. He finally settled on a configuration with 10 of the SSDs connected to the Areca RAID controller in a RAID 0 array, 8 of the SSDs connected to the Adaptec 5 Series controller in a separate RAID 0 array, and the remaining 6 SSDs connected directly to the motherboard's on-board SATA ports as stand-alone drives. The optical drives were disconnected during testing to maximize available throughput on the on-board SATA ports. All testing was done at stock speeds, although Curry did experiment with some overclocking "for the fun of it": the system remained stable with the CPUs running at up to 3.6GHz, and he even got them to 4GHz, but at that speed the system was "wobbly."
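To see why the controller layout mattered, it helps to compare the drive layout's ideal, linear-scaling bandwidth against what the hardware could actually move. This sketch uses Samsung's quoted 220MB/Sec per-drive read speed; real controllers saturate well below the ideal, which is why the layout took several tries:

```python
# Ideal (linear-scaling) read bandwidth of the final drive layout,
# using Samsung's quoted 220 MB/s per-drive read speed.
PER_DRIVE_READ_MBPS = 220

layout = {
    "Areca 1680ix-24 (RAID 0)": 10,
    "Adaptec 5 Series (RAID 0)": 8,
    "On-board SATA (stand-alone)": 6,
}

for group, drives in layout.items():
    ideal = drives * PER_DRIVE_READ_MBPS
    print(f"{group}: {drives} drives, ideal {ideal} MB/s")

total_drives = sum(layout.values())
total_ideal = total_drives * PER_DRIVE_READ_MBPS
print(f"Ideal aggregate across {total_drives} drives: {total_ideal} MB/s")
# Measured sequential read was ~2121 MB/s, i.e. roughly 40% of ideal:
# the controllers and buses, not the drives, set the ceiling.
```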
The Samsung SSDs’ specifications state write speeds of up to 200MB/Sec and read speeds of up to 220MB/Sec. So how did the system perform? Here are some highlights from the test results:
- 2121.29MB/Sec sequential reading using IOMeter
- 2000.195MB/Sec sequential writing using IOMeter
- Loaded all the Microsoft Office apps in 0.5 seconds
- Opened 53 apps in 18.09 seconds
- Transferred a 700MB DVD rip in 0.8 seconds
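Dividing the IOMeter numbers across the drives puts the results in perspective. Assuming the load spread evenly over all 24 SSDs (a simplification), each drive was running well under its rated speed:

```python
# Effective per-drive throughput implied by the IOMeter results,
# assuming an even spread across all 24 SSDs (a simplification).
DRIVES = 24
seq_read_mbps = 2121.29    # measured sequential read, IOMeter
seq_write_mbps = 2000.195  # measured sequential write, IOMeter

read_per_drive = seq_read_mbps / DRIVES
write_per_drive = seq_write_mbps / DRIVES

print(f"~{read_per_drive:.1f} MB/s read per drive (spec: up to 220 MB/s)")
print(f"~{write_per_drive:.1f} MB/s write per drive (spec: up to 200 MB/s)")
# ~88 MB/s and ~83 MB/s: the controllers, not the SSDs, were the bottleneck.
```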
And all of this was done with “zero sector failures” on any of the drives. With data transfer rates in excess of 2GB/Sec, this puts the throughput of the rig on par with the theoretical limits of Fibre Channel. Curry was even able to get throughput above 1GB/Sec with only 9 drives. As one of the primary uses for SSDs will be in data centers, this exercise shows the potential transfer rates that might be achieved in such an environment.
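For the Fibre Channel comparison, a quick unit conversion (using decimal megabytes, as the article's figures do) turns the rig's throughput into a network-style line rate:

```python
# Converting the rig's measured throughput into a line rate,
# using decimal units (1 GB = 1000 MB, 1 byte = 8 bits).
throughput_mb_s = 2121.29               # measured sequential read, MB/s
throughput_gb_s = throughput_mb_s / 1000
line_rate_gbit_s = throughput_gb_s * 8  # ~17 Gbit/s

print(f"{throughput_gb_s:.2f} GB/s is a line rate of "
      f"~{line_rate_gbit_s:.1f} Gbit/s")
```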