Paying the price

Servers’ hard disk access times have not kept up with the increasing speed of their CPUs, and the resulting lags can be a limiting factor in some database and caching applications, particularly those involving software as a service and cloud computing. As these applications become even more popular, the bottleneck is likely to get worse, analysts predict. And the database appliances designed to target the problem, which have been on the market for years, remain expensive.

Enter flash memory. Until recently, the high cost of flash memory limited it to consumer items with relatively low storage capacities, like digital cameras and MP3 players.
 
But over the last three years, flash prices have declined an average of 60% a year — a rate that’s “faster than has ever happened in the world of semiconductors,” according to In-Stat, a market research firm.
 
This confluence of factors is bringing flash, in the form of solid-state drives (SSDs), into data centres.
While it’s still early in the adoption curve, analysts predict an increase in the use of SSDs in the enterprise. At its annual data centre conference in December 2009, research firm Gartner Inc. called flash-based solid-state storage one of the most important technologies of 2010.
 
Gartner isn’t alone in singing SSDs’ praises. “Anybody that’s managing and buying storage should be taking a look at the flash options on the market and determining whether it’s a good fit for them,” says Andrew Reichman, a storage analyst at Forrester Research.
 
Costing out SSDs
 
To be sure, despite their declining prices, SSDs remain much more expensive than hard drives on a cost-per-gigabyte basis. Depending on performance, SSDs can range in cost from $3 to $20 per gigabyte, says Jim Handy, an SSD analyst with Objective Analysis, a semiconductor market research consultancy.
 
Hard drives range in price from 10 cents per gigabyte on the low end to $2 to $3 per gigabyte for enterprise-level, 15,000-rpm models. “The ratio between SSD and HDD pricing is about 20:1 for the bottom end of both technologies, and this will stay in effect for the next several years,” Handy says.
 
Gartner analyst Joe Unsworth puts the average cost of SSDs at about 10 times that of hard disk drives. But cost-per-gigabyte is not the most important factor in some of these access-heavy data centre applications. Rather, it’s cost per IOPS (input/output operations per second).
 
The average enterprise-class, 15,000-rpm hard drive achieves 350 to 400 IOPS, says Scott Stetzer, director of enterprise SSD products at STEC, an SSD vendor. The average enterprise-class SSD can push 80,000 IOPS. “It’s a world of difference in the level of performance.”
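
Those figures make the cost-per-IOPS argument easy to check with back-of-the-envelope arithmetic. The sketch below uses the per-gigabyte prices and IOPS ratings quoted above; the drive capacities (146GB and 64GB) are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope cost-per-IOPS comparison using the figures quoted
# in the article. The drive capacities are illustrative assumptions, not
# vendor specifications.

drives = {
    # name: (capacity in GB, price per GB in USD, sustained IOPS)
    "15,000-rpm enterprise HDD": (146, 3.00, 400),
    "enterprise SSD": (64, 20.00, 80_000),
}

for name, (capacity_gb, price_per_gb, iops) in drives.items():
    total = capacity_gb * price_per_gb
    print(f"{name}: ${total:,.0f} per drive, ${total / iops:.4f} per IOPS")

# Prints roughly:
#   15,000-rpm enterprise HDD: $438 per drive, $1.0950 per IOPS
#   enterprise SSD: $1,280 per drive, $0.0160 per IOPS
```

On those assumptions, the SSD costs about three times as much per drive but is nearly 70 times cheaper per I/O operation.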
 
Such performance benefits can outweigh the cost differential in certain instances, particularly once you factor in savings from energy costs (SSDs use less energy than hard disks). Applications that require extremely fast location and/or retrieval of data — like credit-card transactions, video on demand or information-heavy Web searches — stand to benefit from solid-state technology.
Although enterprise SSDs will account for only a small share of the overall enterprise storage market, the technology will play an important role in these critical applications, analysts predict.
 
SSDs in the data centre
 
SSDs are making their way into data centres in several ways. First, most server vendors are offering SSDs as options, either as a replacement for a hard drive or in addition to one. Second, most storage vendors are incorporating SSDs into their systems. EMC, for example, buys SSDs from STEC and incorporates them into its Symmetrix and CLARiiON products.
 
And finally, several companies are building general-use data-access appliances that incorporate SSDs. For instance, California-based Schooner Information Technology has developed appliances for the Memcached caching system and for MySQL databases. Rather than targeting specific functions like business analytics or Web caching, as previous data appliances have, Schooner incorporates flash in an appliance that aims to improve the performance of the entire data-access tier of the data centre, explains John Busch, Schooner’s chairman and chief technology officer.
 
Where SSDs make sense
 
SSDs appeared on data centre operators’ radar over the past 12 to 18 months. “It became an increasingly frequent conversation [with customers] about 13 to 14 months ago,” says Steve Merkel, director of solutions engineering at Latisys, a managed service provider. “We started running a few scenarios where it looked like SSDs would be a reasonable technology to use.”
 
Today, there are two prime uses. Houston-based The Planet, which runs eight data centres and hosts 18 million Web sites worldwide, sees customers using SSDs on their host servers to speed Web analytics applications, says Rob Walters, director of product management. Latisys sees a similar uptake. One customer, for example, does analytics and profiling of Web site visitors, then offers up targeted ads. “They have very large data sets,” says Merkel. “They can’t sit around and wait for traditional storage [response times].”
 
Others simply need their databases to run faster, so they use SSDs rather than traditional storage arrays. Using flash memory this way is about 10 times more expensive than storing the data on hard drives, but about 10 times less expensive than holding an entire database in dynamic RAM, says Forrester’s Reichman.
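
Reichman’s two 10x ratios translate directly into a budget estimate. A minimal sketch, assuming a 500GB database and illustrative per-gigabyte prices chosen to follow those ratios:

```python
# Rough media cost for holding a database entirely on each tier. The
# per-gigabyte prices are illustrative assumptions that follow the
# article's roughly 10x steps between hard disk, flash and DRAM.

db_size_gb = 500
price_per_gb = {
    "enterprise hard disk": 2.00,
    "enterprise SSD": 20.00,   # ~10x the hard disk
    "DRAM": 200.00,            # ~10x the SSD
}

for medium, price in price_per_gb.items():
    print(f"{medium}: ${db_size_gb * price:,.0f}")

# Prints:
#   enterprise hard disk: $1,000
#   enterprise SSD: $10,000
#   DRAM: $100,000
```

Those two 10x gaps are the “nice middle ground” that flash occupies between disk and memory.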
 
A third option is to put frequently accessed information in cache memory. To cache or not to cache was a dilemma for NextStop.com, a Web service designed to help people find and recommend things to do in particular locales around the world. Launched in June by two former Google product managers, the service was running into problems with slow page loading. The company had invested in a caching infrastructure at its managed hosting provider, but that wasn’t helping much because there was no way to predict what needed to be in the cache.
 
“Take a page about a particular restaurant in Singapore, for example,” says NextStop co-founder Carl Sjogreen. “That page isn’t accessed every five minutes — it’s only accessed when people are searching for it, and that happens sporadically.” And yet, when someone does want to access that page, it needs to be available quickly, so caching doesn’t solve the problem.
 
Self-professed “dorks,” Sjogreen and NextStop’s other co-founder, Adrian Graham, had been tracking the development of SSD technology for some time and thought it might solve their problem. When Intel announced its X25-M SATA Solid-State Drive in late 2008, NextStop started testing it and pushed its managed service provider to offer it.
 
About four months ago, NextStop replaced the hard drives in its RAID array with SSDs and saw average page load times immediately drop by more than 50%.
 
“We found SSDs to be an order of magnitude cheaper than putting all that content in memory,” says Graham. “SSDs are more expensive than disk drives, but they are a lot cheaper than RAM. It was a nice middle ground for us.”
 
In fact, in certain situations, SSDs can actually be less expensive than hard drives. When trying to squeeze the highest performance out of hard drives, data centre operators sometimes use only a small fraction of each spinning disk, the fast outer tracks, a technique called short-stroking, explains Reichman. It requires more drives, uses only 10% to 15% of each drive’s capacity and wastes the rest.
 
“So if you’re using only 10% of each drive, and [SSDs] are 10 times as expensive, it could be cost-effective to replace 10 hard drives with one SSD, and you’d end up getting better performance and potentially equal or lower cost,” he explains.
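
Under stated assumptions, that break-even works out as follows; the drive price is an illustrative figure, not a quote from the article.

```python
# Sketch of Reichman's short-stroking break-even. If only the outer 10%
# of each hard drive is usable, ten drives are needed to match the usable
# capacity of one SSD of the same raw size; at the article's ~10x price
# multiple, the two configurations cost about the same.

hdd_price = 400                 # assumed price of one 15,000-rpm drive
ssd_price = 10 * hdd_price      # the article's ~10x cost multiple
usable_fraction = 0.10          # short-stroking uses ~10% of each drive

hdds_needed = round(1 / usable_fraction)
print(f"{hdds_needed} short-stroked HDDs: ${hdds_needed * hdd_price:,}")
print(f"1 SSD of equal usable capacity: ${ssd_price:,}")

# Prints:
#   10 short-stroked HDDs: $4,000
#   1 SSD of equal usable capacity: $4,000
```

At price parity, the SSD still wins on IOPS, power draw and rack space, which is Reichman’s point.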
 
Those types of highly specialised uses aside, Reichman isn’t seeing data centres widely adopting SSDs. While storage vendors see SSDs playing a role in tiered storage, most data centre operators haven’t yet figured out which data can really cost-justify the high performance of SSDs. “They often don’t know which volumes are the most latency-sensitive,” he says. Without knowing that, why buy media that’s 10 times more expensive?
 
Another barrier to using SSDs in tiered storage is the inability to accelerate only certain parts of a database. “In most storage systems, you have to provision the whole volume on the same tier,” Reichman explains. “So if anything in that table needs to be accelerated, then the whole thing has to be accelerated.” Storage vendors are starting to add features that allow selective tiering of databases. Such block-level tiering “could be the killer app to allow users to take advantage of solid state economically,” he says.
 
Toward that goal, EMC earlier this year released its FAST (Fully Automated Storage Tiering) technology, which supports automated tiering of database volumes and allows SSDs to be used to their full potential.
 
Managed service providers are starting to experiment with ways of integrating SSDs into a storage hierarchy, using each type of storage in the most economical way. For example, The Planet is using SSDs in a shared infrastructure service that it plans to offer its clients soon. The infrastructure consists of 10 diskless host machines using a centralised storage array that contains both hard disks and SSDs.
 
The Planet will use the SSDs as a caching layer to accelerate the disk performance. The shared infrastructure will be more resilient and offer better performance for the same price that some customers pay to lease low-end servers, says Walters.
 
Game-changing technology?
 
Long term, SSDs may end up changing the architecture of the data centre.
 
“I don’t think we’re seeing all the performance gains from SSDs that we could see,” says NextStop’s Graham. “There are more gains to be made by thinking about how to re-architect some of these systems that have for a long time been optimised to work well with disks.”
 
That’s the focus of some vendors, like Schooner, which emphasises that it isn’t just adding SSDs but revamping the entire data-tier architecture. “The [individual component] technology today is way ahead of what the architectures can utilise,” says Busch, adding that re-architecting could yield a tenfold improvement in performance.
 
No one’s arguing that SSDs are going to replace hard drives. Although market research firms, including In-Stat, expect enterprise SSD shipments to remain minuscule compared with hard disk drive sales for many years to come, those small numbers may not accurately reflect solid-state memory’s true impact.
 
“When these solutions are done right,” notes Gartner’s Unsworth, “SSDs have a transformational capability within the data centre.”
Forecast: Enterprise SSD shipments (units in thousands)

Year    Units
2007     16.7
2008     59.4
2009    281
2010    845
2011  1,979
2012  3,444
2013  5,337

Source: Gartner Inc.
 
7-year forecast: Enterprise HDDs vs. Enterprise SSDs (units in millions)

Year    HDDs    SSDs
2007    27.8     0.0
2008    33.8     0.1
2009    37.0     0.2
2010    40.2     0.4
2011    42.9     1.4
2012    43.7     3.5
2013    42.7     5.4

The seven-year compound annual growth rate is projected to be 4.8% for hard disk drives and 154.8% for solid-state drives.

Source: In-Stat
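
For reference, a compound annual growth rate is derived from a series’ endpoints as (end/start)^(1/n) - 1 over n annual periods. A quick check against the rounded table values:

```python
# How a compound annual growth rate is computed from a forecast's
# endpoints. The rounded values in the table above give somewhat
# different results from In-Stat's published 4.8% and 154.8%, which
# presumably rest on unrounded data or a different base year.

def cagr(start, end, periods):
    """Average annual growth rate that compounds `start` into `end`."""
    return (end / start) ** (1 / periods) - 1

print(f"HDD CAGR, 2007-2013: {cagr(27.8, 42.7, 6):.1%}")  # ~7.4%
print(f"SSD CAGR, 2008-2013: {cagr(0.1, 5.4, 5):.1%}")    # ~122%
```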