Storage Virtualization's Elusive ROI

By CIOReview | Tuesday, August 23, 2016

Ask most IT organizations and they will agree that the most disruptive causes of downtime are natural disasters and sudden failures, such as a power outage. Yet once downtime is traced back to storage, most enterprises find that the bulk of it comes not from disasters but from the many planned outages required to manage storage resources.

Any change to the storage environment has a direct impact on applications that need to stay up. Outages ranging from a few minutes for a reboot, to several hours for backups, to days for a reconfiguration carry a significant cost in lost opportunities to conduct business efficiently. Planned outages deserve the same attention that unplanned outages currently receive.

When IT administrators complain about storage management, it is not just about intricate cabling, tedious physical reconfiguration of hardware, long backups, or learning multiple management interfaces. Growing markets, rapid data expansion, and the expectation of continuous uptime have changed the reality: administrators must stay alert at all times because routine maintenance tasks demand attention around the clock. There is no doubt that eliminating known storage-related disruptions delivers significant monetary value and contributes substantially to a corporation's ROI.

Storage virtualization may be the answer to the problems and costs of planned downtime. A flexible, highly automated approach to the most fundamental storage management activities, virtualization can vastly reduce the need to shut down servers or networks.

The Root of Storage-Related Downtime

A variety of storage management activities drive business downtime and inefficiency. Some of the most critical tasks enterprises face today are listed below:

•    Allocating storage to servers
•    Removing storage from servers
•    Backing up data
•    Replicating volumes
•    Migrating data
•    Adding and maintaining servers
•    Testing disaster recovery readiness

Some of these items are often overlooked when calculating server downtime because not all of them take a server completely offline. Replicating a volume, for example, can consume around 35 percent of a server's processing power, which sounds like a comparatively small cost. But even a modest reduction in system performance compromises the ability to run the business at full potential: 35 percent less available processing power can translate into 30 percent fewer transactions, and 30 percent fewer customers served.
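To make the arithmetic concrete, consider a rough back-of-the-envelope calculation. In the sketch below, only the 35-percent/30-percent relationship comes from the example above; every business figure is hypothetical:

```python
# Back-of-the-envelope opportunity cost of a "partial" outage such as
# volume replication. All business figures here are hypothetical; only
# the 35% CPU -> ~30% throughput relationship echoes the example above.

peak_tps = 50           # hypothetical transactions/second at full capacity
throughput_loss = 0.30  # ~35% less CPU assumed to cost ~30% of throughput
window_hours = 4        # hypothetical nightly replication window
revenue_per_txn = 2.50  # hypothetical average revenue per transaction ($)

lost_txns = peak_tps * throughput_loss * window_hours * 3600
print(f"Transactions lost per window: {lost_txns:,.0f}")
print(f"Revenue at risk per window:   ${lost_txns * revenue_per_txn:,.2f}")
```

The mapping from lost processing power to lost transactions is rarely this linear in practice, but the order of magnitude is the point: even a "partial" outage, repeated nightly, adds up quickly.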

Direct-Attached Storage: The Prime Victim

The enterprises most vulnerable to storage-related downtime are those that attach storage devices directly to their servers. Modifying, moving, or adding storage in a direct-attached environment means rebooting the server at best, and taking it out of operation for hours or even days in the worst case.

Companies effectively pay a major price every time a server must be taken offline just to add capacity. This illustrates the relationship between technology and business, and the importance of minimizing interruptions. Avoiding such opportunity costs alone can quickly deliver more than the ROI needed to justify a storage virtualization investment.

Give More, Get More: A False Theory

Companies with the financial muscle move to large, consolidated shared disk arrays to reduce the impact of disasters and the risk of breakdowns. Shared arrays clear away the clutter of direct-attached devices: they gather storage resources into one big box and deliver a sophisticated storage subsystem with features such as on-the-fly volume allocation and point-in-time copies of production data. Shared arrays do offer effective relief from several kinds of disruption, and companies that go down this road are generally satisfied, believing their business operations will run smoothly over the long haul.

But the high-end features of consolidated arrays come with a hefty price tag, proprietary technology, and vendor dependence, each with downsides of its own. There was a time when these premiums could be absorbed: the economy was strong, corporate growth demanded storage expansion, and IT budgets were more forgiving.

Today's economic climate pressures the IT department to cut costs, making consolidated arrays a hard investment to justify. Furthermore, the features these storage subsystems provide are confined to individual hardware frames and serve only a limited number of application servers. The old difficulties of direct-attached storage resurface as soon as requirements exceed the capacity or connectivity of a single unit and more frames must be added.

Virtualization to the Rescue

Storage virtualization provides advanced capabilities that shield applications from changes in the storage environment. A virtual pool of storage behaves much like a consolidated array, and in many ways complements one. Its advanced storage management services span several arrays, potentially from different suppliers, rather than confining those functions to a single array.
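Conceptually, the pooling works along these lines. The sketch below is a minimal illustration only; every class and method name in it is invented for the purpose, not taken from any vendor's actual API:

```python
# Minimal conceptual sketch of a virtual storage pool spanning
# heterogeneous arrays. All names here are invented for illustration.

class Array:
    """A physical disk array with a fixed free capacity (in GB)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualPool:
    """Aggregates arrays from different suppliers into one pool, so
    volumes can be allocated without taking any server offline."""
    def __init__(self, arrays):
        self.arrays = arrays

    def allocate_volume(self, size_gb):
        # Place the volume on whichever array has room; the application
        # only ever sees the pool, never the individual array.
        for array in self.arrays:
            if array.free_gb >= size_gb:
                array.free_gb -= size_gb
                return f"{size_gb} GB volume on {array.name}"
        raise RuntimeError("pool exhausted; add another array online")

pool = VirtualPool([Array("vendor-A-frame", 500),
                    Array("vendor-B-frame", 1000)])
print(pool.allocate_volume(600))   # lands on vendor-B-frame
```

The design point is the indirection: because applications address the pool rather than a frame, capacity can grow past a single unit, and frames from different vendors can sit behind the same management layer.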

Depending on the configuration and feature set, it is possible to eliminate disruptions entirely or to substantially reduce their impact on the business. A robust high-availability architecture that removes single points of failure, combined with disaster recovery options such as mirroring, further strengthens the storage investment.
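As a minimal sketch of the mirroring idea (again with invented names, and with real-world concerns such as write ordering, failover, and resynchronization deliberately omitted):

```python
# Hypothetical sketch of synchronous mirroring across two storage
# targets, one way to remove a single point of failure. The Volume
# type and write path are simplified inventions for illustration.

class Volume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data

class MirroredVolume:
    """Completes a write only after both copies are updated, so
    either side can serve reads if the other fails."""
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, block_id, data):
        self.primary.write(block_id, data)
        self.secondary.write(block_id, data)   # synchronous second copy

    def read(self, block_id):
        # Read from the primary, falling back to the mirror.
        return self.primary.blocks.get(
            block_id, self.secondary.blocks.get(block_id))

vol = MirroredVolume(Volume("site-A"), Volume("site-B"))
vol.write(0, b"order-12345")
print(vol.read(0))
```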

With a sound virtualization strategy, enterprises can eliminate storage-related interruptions and run their applications at full potential.