What CIOs Need to Know About Managing OpenStack as Part of Your Software Delivery Pipeline
Knowing how, where and when to leverage public or private cloud computing within the enterprise is crucial to building an effective IT strategy for your organization’s competitiveness and bottom line.
Since its origins in late 2010, OpenStack has piqued the interest of the IT community. Because the industry is always looking to increase technology efficiency and reduce costs, the promise of a free, open source cloud computing platform seems an exciting alternative to proprietary offerings. Emerging Infrastructure-as-a-Service providers looking to offer new public clouds, as well as enterprises looking to reduce data center costs, started investigating OpenStack for two main use cases:
1. Data center virtualization and private cloud initiatives, as a possible alternative to VMware.
2. Public cloud, as a possible alternative to Amazon Web Services.
Yet, despite OpenStack’s growing popularity, recent data shows that less than 10 percent of large enterprises have deployed to the cloud, and those that do usually go with a public cloud option to start. With the current rate of cloud adoption for large-scale enterprise applications (particularly legacy apps) and the current maturity of the OpenStack landscape, many enterprises are finding that OpenStack is just one piece of their overall cloud pie.
The Challenges of OpenStack for Enterprise Use
In the case of OpenStack (as with the adoption of other open source technologies), ‘free’ isn’t entirely free. The OpenStack code base and supporting services are not yet mature enough to offer a plug-and-play experience or full support for enterprise needs.
Eventually, though, we’ll all get there (again, as we have with other open source initiatives). But where does it leave enterprises looking to leverage OpenStack now?
Because OpenStack installation is not trivial, and because legacy apps (prevalent in large organizations) might not fit into the cloud out of the box, many organizations are finding that they currently have difficulty either:
1. Standardizing on OpenStack for their production environments
2. Maintaining and scaling OpenStack adoption throughout their organization for their internal infrastructure, such as development machines, build environments, testing infrastructure, and deployment targets.
It is becoming evident that OpenStack is neither a cheaper replacement for VMware, nor a quick fix for all your cloud deployment needs, nor an answer to AWS lock-in. So what is it good for?
What is the Role of OpenStack in the Enterprise:
In the complex world of enterprise apps, no one stack/technology can be the answer to all of your needs. Different needs require different solutions. Large enterprises, particularly as they increasingly need to balance and integrate legacy systems with new web/cloud services or modern Microservices architectures—find that each application or component in their services catalog might require a different “best of breed” technology for optimal performance and maintainability.
The trend seems to indicate that, eventually, OpenStack will become just one more flavor in your organization’s federated/hybrid infrastructure.
This emerging need, and the opportunity to tailor your stack/technology to best fit the specific needs of your particular use case, mean that organizations will need to learn how to deploy and manage OpenStack as part of a complex matrix involving a myriad of other technologies in their infrastructure. For example, a common enterprise infrastructure may include a mix of VMware for inherited legacy applications, AWS for hosted web apps, Parse for mobile backends, bare metal for high-performance systems, and OpenStack.
How You Should Leverage OpenStack as Part of Your IT Strategy:
Let us review some of the key capabilities CIOs need to ensure in order to enable complex application releases across federated environments in a fast, reliable, predictable and auditable manner.
1. OpenStack is just one flavor: To support future scale, flexibility to accommodate complex software delivery pipelines, and extensibility to different technology stacks, you want to ensure that the processes and tools that power your application releases are agnostic of your cloud/stack. You need to be able to deploy any artifact to any environment, be it OpenStack or not, with no need to reconstruct your processes or code.
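One way to picture this cloud-agnostic principle is a minimal sketch in Python, assuming a hypothetical release pipeline that depends only on an abstract provider interface (the class and method names here are illustrative, not a real product API):

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """Abstract target for a deployment. Real implementations would call
    the OpenStack APIs (e.g. via openstacksdk) or AWS APIs (via boto3);
    these stubs only illustrate the shape of the abstraction."""

    @abstractmethod
    def deploy(self, artifact: str, environment: str) -> str:
        ...


class OpenStackProvider(CloudProvider):
    def deploy(self, artifact: str, environment: str) -> str:
        # A real implementation would provision OpenStack resources here.
        return f"openstack:{environment}:{artifact}"


class AWSProvider(CloudProvider):
    def deploy(self, artifact: str, environment: str) -> str:
        # A real implementation would provision AWS resources here.
        return f"aws:{environment}:{artifact}"


def release(artifact: str, target: CloudProvider, environment: str) -> str:
    """The release process depends only on the interface, so the same
    artifact deploys to any stack without changing process or code."""
    return target.deploy(artifact, environment)
```

The same `release()` call works unchanged whether the target is OpenStack, AWS, or any future provider you plug in.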
2. End-to-end orchestration is crucial: As your application or architecture evolves, the code that your organization produces may find itself deployed to different environments and stacks throughout its lifecycle, whether across Dev, Test, Staging and Prod, or when you migrate your application from one cloud to another.
In addition, remember that a lot happens to your code before it is finally deployed to the ‘Last Mile’ in Production, and that dozens of point tools are involved as part of your software delivery process—from code check-in all the way to Production.
To accelerate your pipeline and support better manageability of the entire process, you want a platform that can serve as a layer above any infrastructure or specific tools/technology and enable centralized management and orchestration of all your tool chain, environments and apps.
3. Visibility and auditability: With the complex releases of today’s enterprise apps, you want visibility into the entire path leading up to the release. This not only speeds up your release process, mitigates risk and eliminates error-prone manual handoffs, but also serves as your audit trail. When you can manage and automatically track the entire path every artifact takes as part of the release, including all related processes and environments (who approved deployment of which bits to which server), you effectively ensure compliance.
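The audit-trail idea can be sketched as an append-only log that records who approved which bits to which server, and can replay the full path any artifact took. This is a toy model, assuming hypothetical record fields; a real platform would persist this to a tamper-evident store:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    artifact: str       # which bits
    server: str         # which server/environment they went to
    approved_by: str    # who approved the deployment
    timestamp: str      # when it happened (UTC, ISO 8601)


class AuditTrail:
    """Illustrative append-only audit log for deployments."""

    def __init__(self):
        self._records = []

    def record_deployment(self, artifact, server, approved_by):
        rec = AuditRecord(artifact, server, approved_by,
                          datetime.now(timezone.utc).isoformat())
        self._records.append(rec)
        return rec

    def history(self, artifact):
        """The full path a given artifact took across environments."""
        return [r for r in self._records if r.artifact == artifact]
```

Because every deployment goes through the same orchestration layer, the trail is produced automatically rather than assembled by hand after the fact.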
Internal self-service Dev/Test cloud:
When looking at appropriate uses, OpenStack has proven to be well-suited for building a private “as-a-Service” shared environment for your internal teams to collaborate on. This has been particularly useful for consolidating build or test environments, such as enabling “CI-as-a-Service” inside your organization, or a “Deployment-as-a-Service” type offering (which allows your QA teams to easily deploy any release candidate to any test environment and start testing). Engineering teams appreciate the speed, easy access and consistency that come with these self-service solutions. The ability to elastically scale up/down helps organizations reduce management overhead, improve resource utilization, and save on OPEX and CAPEX. In this scenario, it’s easy to have your end-to-end orchestration platform manage these internal cloud resources, and allow teams to trigger deployments or test suites to the appropriate environments, which are spun up and torn down depending on demand.
4. Microservices and OpenStack: New development innovations, including Microservices approaches and container technologies, allow for an extensible application architecture and a vendor-agnostic, scalable infrastructure. While Microservices simplify application deployments through a de-coupled approach to introducing new, high-value functionality, they come at a price: because the architecture is so fragmented, it is more difficult to track and manage all the independent, yet inter-connected, components of the application.
A combination of Docker, OpenStack and an end-to-end orchestration layer can help you leverage the benefits of new innovations like Microservices while managing the challenges of the architecture, and supports easy deployments across build, QA and production environments. This is one example of creating a scalable, centrally managed OpenStack infrastructure that helps both developers and IT operations team members.
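Tracking the inter-connected components of a Microservices application often comes down to modeling the dependencies between services. As a small illustration (with made-up service names), Python's standard-library `graphlib` can compute a safe deployment order from such a dependency map, which is the kind of bookkeeping an orchestration layer automates:

```python
from graphlib import TopologicalSorter


def deployment_order(dependencies):
    """Given a map of service -> set of services it depends on,
    return an order in which the components can be deployed so that
    every service comes after its dependencies. Requires Python 3.9+."""
    return list(TopologicalSorter(dependencies).static_order())
```

For example, if `web` depends on `auth` and `catalog`, and both of those depend on `db`, the function deploys `db` first and `web` last, no matter how many services sit in between.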