Container Technologies Adoption Strategies
Application quality declines and governance risk increases when an organization adopts container technologies without a well-defined strategy. Without the proper tools and support structure, adoption of container technologies is likely to fail. Some containers may introduce hidden variables that eventually surface as bugs or outages, ultimately affecting the quality of the product. Without clear oversight of containers, organizations cannot distinguish which containers comply with policy and which do not, threatening governance. There is also the added issue of "change management": the problems felt after an expert leaves the organization. Examining these gaps, the ways to address them, and the options for taking containers beyond testing and development (test/dev) ensures that the adoption of container technologies will sustain and extend the development organization, not hinder it.
Although containers promise that software will run the same regardless of the computing environment, the typical reason for not using containers in production is not the technology itself but how stably it fits into the broader delivery chain. Here are the key gaps to consider when implementing container technology:
Network drawbacks: Container networks make it easy to connect containers on the same host, and with some additional effort, the network features can be overlaid across different hosts. However, manipulation of the network configuration is limited and largely manual. Although container provisioning can be scripted at scale, each newly provisioned instance must also be added to the network definition, and this extra step on every provisioning is prone to error.
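The extra step described above can be made explicit in code. The following is a minimal, hypothetical sketch (the `network_definition` structure and `provision_container` function are illustrative, not any real container API): the network registration is a second step that is easy to forget when provisioning is done by hand.

```python
# Hypothetical sketch: every provisioned container must also be registered
# in a shared network definition; forgetting this second step is the
# common, error-prone failure mode.
network_definition = {"overlay-net": []}  # network name -> member containers

def provision_container(name: str, network: str) -> None:
    """Provision a container and register it in the network definition."""
    # ... the container runtime call would go here ...
    # The extra, easy-to-forget step: update the network definition.
    network_definition[network].append(name)

provision_container("web-1", "overlay-net")
provision_container("web-2", "overlay-net")
print(network_definition["overlay-net"])  # ['web-1', 'web-2']
```

Wrapping both steps in one function is the point: the registration cannot be skipped if it is part of the only provisioning path.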
Limited library control: The public library is prized for its huge volume of contributed prebuilt containers, saving many hours of configuration time, but using it beyond sandboxing is risky. Without knowing who created an image or how it was created, there could be any number of intentional or unintentional stability and security risks. Enterprises are therefore compelled to create and maintain a private library, which is easy to set up but hard to manage.
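One way a private library addresses the provenance problem is by gating images against a curated allowlist of known digests. This is a minimal sketch under assumed names (`TRUSTED_DIGESTS`, the registry hostname, and the digests are all hypothetical):

```python
# Hypothetical sketch: a private library can gate image use by an
# allowlist of curated digests, rejecting images of unknown provenance.
TRUSTED_DIGESTS = {
    "registry.internal/base-python": "sha256:aaa111",
    "registry.internal/base-nginx": "sha256:bbb222",
}

def is_trusted(image: str, digest: str) -> bool:
    """Return True only if the image's digest matches the curated allowlist."""
    return TRUSTED_DIGESTS.get(image) == digest

print(is_trusted("registry.internal/base-python", "sha256:aaa111"))  # True
print(is_trusted("docker.io/random/image", "sha256:ccc333"))         # False
```

Pinning by digest rather than by tag matters here: a tag can be repointed at a different image, while a digest identifies exact content.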
No clear audit trail: Although provisioning a container is easy, identifying the when, who, why, and how of its provisioning is challenging. Post-provisioning, organizations have very little history for auditing purposes.
Low visibility into running instances: Once instances are provisioned, it is hard to reach into the population of running containers and identify which should or should not be there. This can become a serious issue, resulting in rogue containers, wasted resources, an inability to do resource planning, and outdated versions and configurations.
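Identifying what should or should not be there reduces to reconciling two inventories. A minimal sketch, with hypothetical container names and the running set assumed to come from the container runtime:

```python
# Hypothetical sketch: reconcile the expected inventory against what is
# actually running to surface rogue and missing containers.
expected = {"web-1", "web-2", "db-1"}              # the planned population
running = {"web-1", "web-2", "db-1", "debug-shell"}  # reported by the runtime

rogue = running - expected    # running but never planned
missing = expected - running  # planned but not running

print(sorted(rogue))    # ['debug-shell']
print(sorted(missing))  # []
```

Run periodically, a reconciliation like this turns the "rogue instance" problem from an unknown into a reportable set difference.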
Key techniques to address these challenges are:
Planning: Organizations sometimes have to architect not only their application but also their pipeline. Just as organizations would not abandon planning activities around sprints and product features, there needs to be a deliberate, upfront effort to make sure the system of containers is well defined and can function for a sustained period of time. Picking examples of issues that can arise and testing how the team would respond is often a good litmus test of overall quality.
Provisioning: Container provisioning is simple at low volume, but many more variables appear once a team is involved. It is important to make sure that provisioning matches an expected configuration team-wide, such as an expected set of components, and that de-provisioning or replacement of containers is not done ad hoc and without guidance.
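Matching an expected configuration team-wide can be enforced with a simple validation gate before any provisioning request is applied. A minimal sketch under assumed names (the `EXPECTED` baseline, registry name, and field names are hypothetical):

```python
# Hypothetical sketch: validate each provisioning request against a
# team-wide expected configuration before it is applied.
EXPECTED = {"image": "registry.internal/base-python", "memory_mb": 512}

def validate(spec: dict) -> list:
    """Return a list of deviations from the team-wide expected configuration."""
    return [
        f"{key}: expected {want}, got {spec.get(key)}"
        for key, want in EXPECTED.items()
        if spec.get(key) != want
    ]

print(validate({"image": "registry.internal/base-python", "memory_mb": 512}))  # []
print(validate({"image": "someone/custom", "memory_mb": 2048}))
```

An empty deviation list means the request conforms; a non-empty one can block provisioning or trigger a review, which keeps de-provisioning and replacement from being ad hoc.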
Log analysis: Logging from the host machines, rather than from each container individually, is the only way to create full visibility with no additional effort, because it enables easier queries across the entire population of containers to know what is going on.
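The benefit of host-side collection is that one query spans every container. A minimal sketch, with hypothetical container names and log entries tagged by their source container:

```python
# Hypothetical sketch: logs collected on the host and tagged per container
# can be queried across the whole population in a single pass.
host_log = [
    {"container": "web-1", "msg": "request ok"},
    {"container": "web-2", "msg": "ERROR timeout"},
    {"container": "db-1", "msg": "checkpoint complete"},
]

def search(logs: list, needle: str) -> list:
    """Return (container, message) pairs whose message contains the needle."""
    return [(e["container"], e["msg"]) for e in logs if needle in e["msg"]]

print(search(host_log, "ERROR"))  # [('web-2', 'ERROR timeout')]
```

If each container kept its own logs instead, the same question would require reaching into every instance separately, which is exactly the visibility gap described above.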
As the container world evolves quickly, with a focus on tools that make it more mature for the enterprise, the future holds greater adoption of concepts such as microservices, more abstraction of the container pipeline, and more robust container libraries. With the rapid introduction of new tools and functionality, organizations tend either to wait for one key feature that solves all their setbacks or to limit their usage of containers until they are sure an update will not cause a massive interruption.