Migrating Critical Applications to the Cloud: Points to Ponder
As cloud offerings have matured over the years, an increasing number of enterprises are moving mission-critical applications to the cloud. In the early days of cloud computing, critical applications remained within the bounds of on-premises infrastructure, primarily because of the complexity of implementing them on a cloud platform and the unpredictability of downtime, which could translate into losses at multiple levels. Security concerns, too, have always been perceived as an innate issue.
Being part of the core business, vital infrastructure applications demand more than Application Programming Interface (API) orchestration and, in many instances, have to be tailor-made on Infrastructure as a Service (IaaS), a layer deeper than the surface-level Software as a Service (SaaS) and Platform as a Service (PaaS) offerings. Yet, quite reassuringly and contrary to popular belief, it is estimated that two out of three enterprises now run mission-critical applications on the cloud, and enterprises have increasingly begun to trust SaaS and licensed enterprise cloud solutions with them.
However, some CIOs still face the dilemma of whether to make the move. When applications are as unique as the business and demand great sophistication, simplifying them in the name of the cloud is no trivial task. While a 'one size fits all' solution has yet to be realized in this realm, thorough evaluation from multiple angles paves a clearer path for enterprises as they prepare to move their mission-critical applications to the cloud.
Importance of Service Level Agreements
You have probably read this in every cloud technology advisory document: an SLA can serve as lockdown chains, as a synergic bond, or as anything in between. Apart from being the proper means to evaluate vendor offerings, SLAs provide insight into the consumer's own position. For example, the downtime compensation stated in an SLA should prompt the consumer to evaluate their cost of downtime, an important prerequisite. The SLA should also answer questions about data backup, disaster recovery, privacy options, and the location, portability, and availability of data, and it should reflect the transparency of the vendor's policies.
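To make the downtime-cost evaluation concrete, the sketch below translates an SLA uptime percentage into the maximum downtime it permits per year and compares the vendor's credit to the business's loss. All figures (cost per hour, credit amount) are illustrative assumptions, not taken from any real SLA.

```python
# Hypothetical illustration: what an SLA uptime percentage actually
# allows, and how much loss the vendor's credit leaves uncovered.
HOURS_PER_YEAR = 24 * 365

def max_annual_downtime_hours(uptime_pct: float) -> float:
    """Maximum downtime the SLA permits per year, in hours."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

def downtime_exposure(uptime_pct: float, cost_per_hour: float,
                      sla_credit: float) -> float:
    """Uncompensated loss if the vendor only just meets the SLA floor."""
    loss = max_annual_downtime_hours(uptime_pct) * cost_per_hour
    return max(loss - sla_credit, 0.0)

# A "three nines" (99.9%) SLA still allows roughly 8.76 hours of
# downtime per year; a flat credit may cover only a fraction of it.
print(round(max_annual_downtime_hours(99.9), 2))
print(round(downtime_exposure(99.9, cost_per_hour=10_000,
                              sla_credit=5_000)))
```

The point of the exercise is that the compensation clause is only meaningful once the organization knows its own cost per hour of downtime.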
Keeping an Eye on the Cost
Hosting a mission-critical application on the cloud does not necessarily translate into savings, though it may reduce the organization's headcount. As many SMBs have found, certain applications may even turn out to have been better off without cloud integration. Knowing the resource requirements for computing power, storage, backup, and networking is an asset in planning for the long run, especially from a budget point of view. This is usually done by comparing in-house expenses and resource-allocation demands against vendor offerings while simultaneously examining the possibilities for expansion.
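The comparison described above can be sketched as a simple annual tally. Every category name and figure here is an illustrative assumption; a real exercise would use the organization's own ledger and the vendor's actual pricing.

```python
# Hypothetical in-house vs. cloud annual cost comparison.
# Categories and amounts are illustrative assumptions only.
def annual_cost(items: dict) -> float:
    """Sum the per-category annual costs."""
    return sum(items.values())

on_prem = {"hardware_amortized": 40_000, "power_cooling": 8_000,
           "storage": 12_000, "network": 6_000, "ops_staff": 90_000}
cloud = {"compute": 55_000, "storage": 15_000,
         "egress": 9_000, "support_plan": 20_000}

delta = annual_cost(on_prem) - annual_cost(cloud)
if delta > 0:
    print(f"Cloud saves {delta:,.0f} per year")
else:
    print(f"On-premises is cheaper by {-delta:,.0f} per year")
```

Running the same tally at projected future scale (the "possibilities for expansion" above) is what reveals whether the savings hold as the application grows.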
The Key Role of Virtualization
There is considerable ambiguity in how virtualization relates to the cloud. In simple terms, cloud computing refers to 'hosting with scalability,' while virtualization deals with running multiple instances (compute, networking, storage) on the same hardware, thereby adding flexibility to the cloud platform. In other words, virtualization enables running non-native workloads on the hardware, as in a virtual machine or container. It also makes it possible to move or duplicate operating systems between different types of hardware without worrying about drivers, providing faster recovery and maintenance options. For the record, virtualization has in part protected Facebook from numerous DDoS attacks.
Despite its usefulness as a security and backup option, virtualization of mission-critical applications in particular was until recently considered taboo, requiring a 'leap of faith' against the 'some things are better left untouched' attitude of CIOs. After all, who would jeopardize a mission in the name of mere experimentation? But attitudes are changing as virtualization improves productivity, helped along by advances in bandwidth and hardware that continue to track Moore's Law.
Options to Enhance Monitoring
Cloud vendors provide monitoring and management tools through dedicated dashboards or APIs, which may narrow the administrator's access to applications compared with running on bare metal. However, several open-source network monitoring tools exist. The absence of license fees, and code that is freely downloadable, allow enhancements and customizations to be incorporated as desired. This does, however, demand broader technical knowledge within the organization, or reliance on paid support from open-source vendors, the typical revenue model for service providers in that ecosystem.
Nagios and Zabbix offer basic network monitoring with unified dashboards, data aggregation, and report generation, to name a few features. Opsview, based on Nagios, offers advanced features, service support, and several different plans, from a free open-source core option to enterprise-level options. For visualizing data logs and reports, products based on RRDtool, an industry-standard, high-performance data logging and graphing system, may be used. Paessler Router Traffic Grapher (PRTG) from Paessler and Cacti from AWS Marketplace partner JumpBox are other popular examples. Tools such as Grok from Numenta are ideal for detecting anomalies in large and seemingly subtle patterns of network traffic.
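One reason these tools interoperate so well is the Nagios plugin convention: a check is any executable that prints one status line and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN), with optional machine-readable performance data after a pipe character. A minimal sketch of such a check, with illustrative latency thresholds chosen here for the example:

```python
# Minimal sketch of a Nagios-compatible check. The exit-code convention
# (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN) is what lets Nagios and
# Nagios-derived tools such as Opsview interpret any plugin's result.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_latency(latency_ms: float, warn: float = 200,
                  crit: float = 500):
    """Return (exit_code, status line) for a measured latency."""
    if latency_ms >= crit:
        state, label = CRITICAL, "CRITICAL"
    elif latency_ms >= warn:
        state, label = WARNING, "WARNING"
    else:
        state, label = OK, "OK"
    # Text after the '|' is performance data for graphing backends.
    line = (f"LATENCY {label} - {latency_ms}ms | "
            f"latency={latency_ms}ms;{warn};{crit}")
    return state, line

code, line = check_latency(250)  # pretend we measured 250 ms
print(line)
# A real plugin would finish with: sys.exit(code)
```

Because the contract is just an exit code and a text line, the same script can be wired into Nagios, Opsview, or a custom dashboard without modification.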
Ensuring Migration Readiness
Migration readiness further stresses the importance of SLAs: provisions regarding contract termination demand detailed investigation. Organizations should not shy away from anticipating a migration to another vendor or back on-premises. The market is witnessing an increase in migrations for reasons such as discontinuation of service, mergers and acquisitions, intense competition among vendors, and compliance and legislation issues. While a vendor's backup options can aid migration, in-house IT should be prepared to execute it seamlessly. Documenting procedures and ensuring preparedness comes in handy not only during migration but also during downtime.