Essential Cases of Object Storage

By CIOReview | Tuesday, May 9, 2017

With trillions of objects and thousands of petabytes of data held in 'Object Storage' public clouds, it is evident that this architecture is an emerging data storage technology. The two prime reasons to consider object storage a good match for the archive storage tier are: (a) its modest performance requirements and low cost per gigabyte (GB), and (b) its ability to extend data durability, which makes it a natural fit for archiving. As the technology matures, however, the use cases for object storage are expanding. As vendors continue to explore the possibilities, object storage is moving beyond its traditional role as an archiving technology and securing a place in production roles in the datacenter.

Every object has associated metadata and a unique object identifier, which simplifies data retrieval. The challenge is that object storage's rich metadata capabilities add latency and impact the performance of the system as a whole. Rich metadata also requires all the metadata nodes (which handle information embedded in objects, such as an image's original bit depth, width, and height) to be powered on and available; given the scale-out nature of most object storage systems, this makes it difficult for them to use power efficiently.
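The core model described above can be illustrated with a minimal sketch: a flat namespace of objects, each addressed by a unique identifier and carrying its own metadata. The `ObjectStore` class and the content-hash ID scheme here are illustrative assumptions, not any particular vendor's API.

```python
import hashlib

class ObjectStore:
    """Minimal in-memory sketch of an object store: a flat namespace
    of objects addressed by a unique ID, each with rich metadata."""

    def __init__(self):
        self._objects = {}  # object_id -> (data, metadata)

    def put(self, data: bytes, metadata: dict) -> str:
        # Derive a unique object ID from the content itself
        # (one common scheme; real systems may assign IDs differently).
        object_id = hashlib.sha256(data).hexdigest()
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id: str):
        # The ID alone is enough to retrieve both data and metadata.
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"\x89PNG...", {"width": 1024, "height": 768, "bit_depth": 8})
data, meta = store.get(oid)
```

Note that every `get` consults the metadata record alongside the data, which is exactly why metadata lookups sit on the critical path and can add latency at scale.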

In the thick of these issues, the good news is that object storage vendors are addressing these shortcomings. For instance, metadata latency can be minimized by storing metadata in RAM or on flash storage. This also lets vendors curb power consumption by powering off idle nodes and their HDDs. Such improvements prepare object storage systems for bigger responsibilities.

Data Protection with Object Storage

Object storage systems are designed to store large amounts of data cost-effectively for very long periods of time. Using these systems as backup targets with good data ingestion rates was, in the past, a challenge. Today, however, they can ingest data significantly faster than before, because DRAM and flash storage speed up metadata processing.

Still, in comparison with a purpose-built disk backup appliance, an object storage system's ingestion speed may not be as fast. But object storage brings its own advantages, and scalability is one of them. Object storage systems are not only more scalable than most disk-based backup systems but also tend to be more efficient, as they can apply deduplication globally, storing redundant data just once. Finally, as discussed above, object storage systems are often cost-effective, since an organization can use commodity, off-the-shelf servers and HDDs.
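Global deduplication can be sketched with content-addressed chunks: every chunk is keyed by its hash and stored once, no matter how many backups reference it. The `DedupStore` class, its tiny 4-byte chunk size, and the backup names below are illustrative assumptions for the sake of a small example.

```python
import hashlib

class DedupStore:
    """Sketch of global deduplication: chunks are stored once,
    keyed by content hash; each backup is a list of chunk hashes."""

    CHUNK_SIZE = 4  # unrealistically small, to keep the example visible

    def __init__(self):
        self.chunks = {}   # sha256 hex -> chunk bytes (stored once, globally)
        self.backups = {}  # backup name -> ordered list of chunk hashes

    def ingest(self, name: str, data: bytes):
        hashes = []
        for i in range(0, len(data), self.CHUNK_SIZE):
            chunk = data[i:i + self.CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # redundant chunks stored just once
            hashes.append(h)
        self.backups[name] = hashes

    def restore(self, name: str) -> bytes:
        return b"".join(self.chunks[h] for h in self.backups[name])

store = DedupStore()
store.ingest("mon", b"AAAABBBBCCCC")
store.ingest("tue", b"AAAABBBBDDDD")  # shares two chunks with "mon"
# Six logical chunks across both backups, but only four unique chunks stored.
unique_chunks = len(store.chunks)
```

Because the chunk index is global rather than per-backup-job, identical data ingested from any source is collapsed to a single copy, which is the efficiency edge over per-appliance deduplication.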

Now consider snapshots: object storage systems are well suited as targets for snapshot replication jobs from production storage. Integrating production storage with object storage makes it possible to keep an up-to-date copy of production data on the object storage system.
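A snapshot-replication target can be sketched as a store that files each incoming snapshot under a volume-and-timestamp key, so the newest copy is always retrievable. The `SnapshotTarget` class and its key layout are assumptions made for illustration, not a description of any specific product's replication interface.

```python
class SnapshotTarget:
    """Sketch: production storage replicates snapshots to an object
    store, keeping an up-to-date copy keyed by volume and timestamp."""

    def __init__(self):
        self.objects = {}  # "volume/snap-<ts>" -> snapshot bytes

    def replicate(self, volume: str, snapshot: bytes, ts: int) -> str:
        # Zero-pad the timestamp so lexicographic key order matches time order.
        key = f"{volume}/snap-{ts:010d}"
        self.objects[key] = snapshot
        return key

    def latest(self, volume: str) -> bytes:
        keys = sorted(k for k in self.objects if k.startswith(volume + "/"))
        return self.objects[keys[-1]]

target = SnapshotTarget()
target.replicate("vol1", b"state-v1", ts=100)
target.replicate("vol1", b"state-v2", ts=200)
latest = target.latest("vol1")  # most recent copy of production data
```

Keeping every snapshot as a separate immutable object is what gives the object store its value here: older points in time remain available for recovery while the latest key always reflects current production data.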

Object Storage as Public Cloud Replacement

Object storage systems are well suited to replacing public cloud file services, because files are fundamentally a form of object. When combined with file-sync-and-share (FSS) software, these systems can bring file serving into the modern era, one where users want to access data from any location and from any device.

In this use case, data ingestion speed is not the focus: most devices connect via broadband or Wi-Fi, and object storage systems can easily keep up with that range of speeds. What matters in the end is reliability and durability, and object storage delivers both.

Most significantly, the combination of FSS and object storage systems eliminates the problem known as "shadow IT": the unauthorized use of public cloud services for on-demand access to data across multiple devices.

Though justifying object storage for FSS alone is not easy, combining it with traditional archive and backup becomes very compelling, because the FSS problem can then be addressed with almost no additional investment.

Object Storage as a Data Lake

Typically, a data lake storage repository has to support the Common Internet File System (CIFS) and Network File System (NFS) protocols, plus occasionally the Internet Small Computer Systems Interface (iSCSI). The good news is that many object storage vendors have already added these protocols to their systems. With multiprotocol support, object storage, like a true data lake, achieves greater scalability and durability while remaining cost-effective.
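The idea behind a file-protocol front end on an object store can be sketched simply: file paths, as a CIFS/NFS gateway would present them, map onto a flat object namespace, with directories existing only as key prefixes. The `FileGateway` class below is a hypothetical illustration of that mapping, not a real gateway implementation.

```python
class FileGateway:
    """Sketch of a multiprotocol front end: hierarchical file paths
    mapped onto a flat object namespace (key = path without leading '/')."""

    def __init__(self, object_store: dict):
        self.store = object_store  # object key -> bytes

    def write_file(self, path: str, data: bytes):
        # A file path simply becomes an object key.
        self.store[path.lstrip("/")] = data

    def read_file(self, path: str) -> bytes:
        return self.store[path.lstrip("/")]

    def listdir(self, directory: str):
        # "Directories" are synthesized from key prefixes; none exist as objects.
        prefix = directory.strip("/") + "/"
        return sorted({k[len(prefix):].split("/")[0]
                       for k in self.store if k.startswith(prefix)})

objects = {}
gw = FileGateway(objects)
gw.write_file("/data/logs/app.log", b"hello")
entries = gw.listdir("/data")
```

Because the directory tree is derived from key prefixes rather than stored, the same objects remain reachable both through the file interface and through native object access, which is what makes the multiprotocol data lake pattern work.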

In the coming years, object storage is expected to play a key role in data centers beyond archiving. We may see all-flash arrays integrated with object storage, with snapshot replication jobs sent directly to an object store, offering more real-time backup and DR options for the most mission-critical data.