Debunking the Top 5 Myths About Flash-Based Storage

By CIOReview | Thursday, October 27, 2016

In IT, the need to deliver time-sensitive information is what drives hyper-converged infrastructures and other disruptive technologies. While many enterprises are still struggling with the transition to storing and archiving ever-growing volumes of data, a few are betting on fundamental changes on which they will build the new digital enterprise. One result of that effort is flash storage: a data repository that promises ultra-low latency, operational efficiency, and cost-effectiveness. Flash storage eliminates storage I/O bottlenecks and thereby speeds business decisions, meets mission-critical data reliability requirements, and ultimately improves customer satisfaction.

Still, when it comes to modern virtualized environments, many enterprises hold on to hybrid storage even after all the ballyhoo around flash. They say they lack clarity about what flash really can and cannot achieve. As a consequence, several misconceptions and half-truths about flash storage persist. This article lays out the most prevalent myths and reframes them with a heavy dose of reality.

Myth 1: It’s all about IOPS

Hybrid and all-flash arrays are the reigning monarchs of the data storage industry. Flash-based storage certainly adds hundreds of thousands to millions of IOPS, but it is not all about IOPS. IOPS are important, yet the real problem is latency: the response time of those IOPS.

The long path a data packet travels adds latency that the raw speed of a flash SSD cannot eliminate. Because the flash solid-state drives (SSDs) inside a storage array are so fast, the latency between the drive media and the storage controller comes to dominate the response time, typically exceeding the media latency by multiple orders of magnitude.

Therefore, when evaluating a flash storage solution for an enterprise, verify the solution's latency characteristics, not just its IOPS rating. Typically, one should expect latency of less than 1 millisecond; consistent sub-millisecond response times are what business-critical applications require.
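The interplay between latency and IOPS follows Little's Law: sustained throughput equals the number of outstanding I/Os divided by the average response time. The sketch below illustrates the point with assumed, illustrative latency figures (0.1 ms for raw media, 1 ms end-to-end through the array path); the numbers are hypothetical, not measurements of any specific product.

```python
def max_iops(queue_depth: int, latency_s: float) -> float:
    """Little's Law: sustained IOPS = outstanding I/Os / average latency."""
    return queue_depth / latency_s

# Same queue depth, very different ceilings once path latency dominates.
# (Both latency figures are assumptions for illustration.)
drive_alone = max_iops(32, 0.0001)   # 0.1 ms media latency
through_array = max_iops(32, 0.001)  # 1 ms end-to-end latency

print(round(drive_alone))    # ~320,000 IOPS
print(round(through_array))  # ~32,000 IOPS
```

A tenfold increase in response time cuts achievable IOPS tenfold at the same queue depth, which is why latency, not the IOPS figure on the datasheet, decides what an application actually sees.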

Myth 2: Flash drives in arrays are more enterprise-ready than those in servers

Array vendors work to minimize drive downtime, data loss, and maintenance costs, so the flash drives in their arrays are tested, burned in, and then certified for production use. But flash SSDs and hard disk drives (HDDs) are quite different in nature; SSDs consume far less energy and wear out differently. Burning in an HDD can weed out a failing drive before it exits in production, whereas burning in an SSD typically yields limited benefit and only consumes write endurance, causing performance to drop sooner.

Admittedly, SSDs can be more convenient to manage when housed in an external array. But those two arguments alone do not push array-based SSDs over the hump as more enterprise-ready than those in servers. That is a subjective opinion, not necessarily backed up by the facts.

Myth 3: SSDs are too expensive compared to HDDs

Yes, maybe. The perception that SSDs are costly has persisted since their introduction to the market. People often assume that anything which makes life easier, especially in technology, must be too costly for their enterprise; it is a human tendency.

However, the price of flash-based drives is falling quickly, while HDD prices are coming down only slowly. Years back, NAND flash manufacturers began shrinking NAND chips, which drove down the acquisition price per gigabyte (GB). SSD performance is far more efficient (lower latency), delivering speeds that spinning disk cannot come close to matching. Power, floor space, cooling, rack space, weight, and serviceability (among other factors) also favor the SSD.

Top-notch players like SanDisk and Toshiba are already moving to produce 3-D NAND for better, faster, larger-capacity solid state drives. This will cut the price per GB considerably in the future.
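The cost argument depends on which metric you divide by. A quick sketch makes this concrete; the prices, capacities, and IOPS figures below are assumed round numbers for illustration, not current market data.

```python
# Hypothetical, illustrative figures -- not real market prices.
ssd = {"price_usd": 400.0, "capacity_gb": 1_000, "iops": 90_000}
hdd = {"price_usd": 100.0, "capacity_gb": 4_000, "iops": 150}

def cost_per_gb(drive: dict) -> float:
    return drive["price_usd"] / drive["capacity_gb"]

def cost_per_iops(drive: dict) -> float:
    return drive["price_usd"] / drive["iops"]

# Per gigabyte of capacity, the HDD still wins...
print(cost_per_gb(hdd), "<", cost_per_gb(ssd))
# ...but per unit of performance, the SSD is orders of magnitude cheaper.
print(cost_per_iops(ssd), "<", cost_per_iops(hdd))
```

Measured in dollars per IOPS, and once power, cooling, and rack space are counted, the gap tilts toward flash long before raw $/GB parity is reached.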

Myth 4: Only a startup company can deliver a flash solution

The central argument propagated by—you guessed it—startup companies is that they have no legacy architecture to deal with. While that is true, the claim has downsides. For a startup, building a newfangled architecture—acquiring equipment and investing the time in training and expertise—can demand significant investment. Further, startups typically lack experience in building proven storage solutions, and scaling those solutions presents them with ongoing challenges.

A similar notion about suitability for SMBs also deserves scrapping, and enterprise leaders should point up the use of SSDs here. An SMB working with big data sets, or trying to run VDI for even 100 machines, can push rotating disks past their limits; SSD storage handles that load.

Myth 5: All-flash solutions are the same regardless of the vendor selected

Flash solutions differ among vendors because their software and architectural approaches differ—some are all-flash, some are hybrid; some scale out, some scale up. Solutions may also be host-based or appliance-based.

Enterprises can reap major benefits with established vendors such as NetApp, as newer companies may not offer the proven, cutting-edge technology needed to run critical applications.

To wrap up the squabble between prevalent myths and the evidence against them: there is good reason for the momentum gathering around SSDs, and the future of storage looks blazing fast.