Containers: Benefits, Vulnerabilities and Security Measures
Traditionally, virtualization was implemented by creating VMs on top of hypervisors that allocate hardware resources. Hypervisor-based virtualization makes it possible to host multiple Operating Systems (OS) on a single server without modifying the OS or applications to run in the virtual environment. Because the guest OS and the host OS each run their own kernel (the program that constitutes the central core of the OS), this approach provides a strong level of isolation and security. However, when organizations do not need to run different types of OSs atop a single platform, the hypervisor approach constrains virtualization performance.
For instance, duplicating 100 identical OSs on the same hardware would mean generating 100 separate copies of the kernel and the rest of the root software stack to run the guest OSs, which evidently translates to wasted time and resources in terms of RAM and CPU cycles. Containerization, aka OS-based virtualization, removes this ‘barrier’ by enabling containers to share the kernel with the host OS, resulting in almost zero performance overhead. Unlike VMs, containers use the OS's system call interface. Miles Ward, Global Head of Solutions for Google's Cloud Platform, says “containers can boot up in one-twentieth of a second.” Containers thus enable packing far more applications into a single physical server than VMs, making application deployment and testing all the easier. Google, for running search operations, launches about 7,000 containers every second, which amounts to about 2 billion every week. Container environments such as Kubernetes, Docker, Rocket (rkt), CoreOS, Joyent and LXC (Linux Containers), to name a few, make it simpler to create and manage containers and are getting better with every release and update.
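The kernel-sharing point is easy to see directly on a Docker host. The commands below are an illustrative sketch, assuming a Linux machine with Docker installed and access to the public alpine image:

```shell
# The container reuses the host kernel: `uname -r` prints the same
# kernel release inside and outside the container.
uname -r
docker run --rm alpine uname -r

# Startup is fast because no guest kernel has to boot; only a new
# process is started in isolated namespaces.
time docker run --rm alpine true
```

A VM running the same workload would instead have to boot its own kernel and init system before the first process could run.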
Comparing Containers and Virtual Machines
Owing to this resource-saving approach, as opposed to hypervisors/VMs, containers are booming in popularity among organizations, and market trends now indicate a large-scale migration from VMs to containers. Interest in container technologies has seen a sharp increase, attributable partly to their genuine value and partly to hype. It is estimated that two out of every three firms that try containers proceed to adopt them, with Docker being the most popular choice of container environment.
The relatively new container technologies are yet to reach the level of maturity of VMs/hypervisors. Like any new entrant on the tech scene, they too have faced limitations in the availability of tools to monitor and manage them, though these have largely been rectified over the years. Containers have not rendered the former approach obsolete (at least not as of now). Choosing between containers and VMs thus depends on the type of process. Take testing, for instance: the hypervisor approach may be suited to checking cross-platform compatibility, whereas a container approach is better suited to testing various versions of an application on the same platform.
Applications offering microservices tend to be more suitable to run on containers, as opposed to hefty monolithic applications. However, Google’s multi-container management system, which now forms the open-source Kubernetes project, helps build a cluster to run containers. It provides networking and a container-naming system for managing many containers at once, allowing big applications to run in multiple containers across many computers without the operator needing to know all the ins and outs of container cluster management. It is up to the enterprise to identify a suitable virtualization approach by experimenting with both.
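As a sketch of what this cluster abstraction looks like in practice, a minimal Kubernetes Deployment manifest asks the cluster to keep several identical container replicas running without the operator placing them on hosts by hand (the name `web` and the image `nginx:1.21` are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder application name
spec:
  replicas: 3               # Kubernetes keeps 3 container replicas running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web            # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.21   # any container image
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, the scheduler spreads the replicas across the cluster’s nodes and restarts them if they fail, which is exactly the cluster management the article describes.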
Containers function as layers created on top of base images, which may be created by anyone and shared across official and unofficial repositories. Layering makes it easy to roll back changes by simply switching to the old layers, with almost no overhead. However, the container architecture makes it possible for malicious code to escape from a container and gain access to the host OS or other containers on the system. Poisoned images, or images tampered with by attackers, are a security concern for containers. If a container image has full user privileges, intruders can break through from the containerized environment to the underlying platform and retain those privileges. According to BanyanOps, a container technology company, more than 30 percent of containers distributed in the official repositories have high-priority security vulnerabilities, and this figure jumps to about 40 percent outside official repositories.
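Layering can be illustrated with a minimal Dockerfile (a sketch; the package and the `app.sh` script are placeholders). Each instruction produces a layer stacked on the base image, and since unchanged layers stay cached, a rollback only means rebuilding from an earlier layer:

```dockerfile
# Base image layer, pulled from a (possibly third-party) repository
FROM alpine:3.18
# Layer 2: installed packages
RUN apk add --no-cache curl
# Layer 3: application files (app.sh is a placeholder script)
COPY app.sh /usr/local/bin/
# Metadata only; adds no filesystem layer
CMD ["/usr/local/bin/app.sh"]
```

This same mechanism is why a poisoned base image is so dangerous: every image built `FROM` it silently inherits whatever the attacker put in those lower layers.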
Another concern rooted in the framework of container technology is Denial of Service (DoS): if one container (whether manipulated by an attacker or otherwise) can monopolize access to certain resources, including memory or user IDs, it can starve out the other containers on the host, leaving legitimate users unable to access part or all of the system.
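Container runtimes expose cgroup-based limits that mitigate this kind of resource monopolization. A sketch using Docker’s resource flags (assuming a Docker host; `myapp:latest` is a placeholder image):

```shell
# Cap memory, CPU share and process count so one container cannot
# starve its neighbours on the same host:
#   --memory      hard memory limit (container is killed if exceeded)
#   --cpus        at most half a CPU core
#   --pids-limit  bounds the process count, guarding against fork bombs
docker run --rm --memory=256m --cpus=0.5 --pids-limit=100 myapp:latest
```

Without such limits, the defaults on most hosts allow any single container to consume all available memory and CPU.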
Containing Container Vulnerabilities
From a security perspective, it is strongly recommended not to grant enhanced privileges to virtualization containers. If containers do require privileges to be raised, it must be ensured that they are reverted afterwards. For instance, the majority of containers do not need root privileges, as the services that require them should already be running outside the container as part of the underlying platform. Running containers with reduced privileges is in turn a protective measure, as it causes the container to deny mount requests, deny file creation or attribute-change activities, and prevent module loading. Enabling abstraction using specialized namespaces is another method of container isolation. Containers should also be given their own network stack, avoiding privileged access from different containers to physical ports. Since APIs are integral to containerized environments, API management and monitoring tools are called for; small errors in API calls could just as well pave the way for intrusion.
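These recommendations map onto concrete runtime options. A hedged sketch using Docker’s standard hardening flags (`myapp:latest` is a placeholder image):

```shell
# Run as an unprivileged user, drop all Linux capabilities, make the
# root filesystem read-only, and forbid privilege escalation through
# setuid binaries inside the container.
docker run --rm \
  --user 1000:1000 \
  --cap-drop=ALL \
  --read-only \
  --security-opt no-new-privileges \
  myapp:latest
```

With all capabilities dropped, the container can no longer mount filesystems, load kernel modules or change file ownership, which is precisely the reduced-privilege behaviour described above; individual capabilities can be re-granted with `--cap-add` only where a workload genuinely needs them.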
Running containers on top of a hypervisor/VM is an option worth considering, as it would establish a secure boundary for containers while avoiding conflicts and enhancing redistribution. But as intriguing as it may sound, it is regarded as a rather complex setup to maintain, and it comes with a minor performance penalty, including a setback to containers’ higher hardware density and auto-scaling features.
Addressing security and policy compliance, Docker has recently added the capability to sign container images using a hardware device, to scan container images for vulnerabilities, and to set up separate user namespaces to isolate environments, features that address the previously mentioned security challenges that container technology poses.
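Image signing and verification, for instance, is toggled through Docker Content Trust. A sketch, assuming a registry that holds signed tags (`myrepo/myapp` is a placeholder repository):

```shell
# With content trust enabled, pulls fail unless the image tag carries
# a valid publisher signature, and pushes sign the image automatically.
export DOCKER_CONTENT_TRUST=1
docker pull alpine:3.18        # rejected if the tag is unsigned or tampered with
docker push myrepo/myapp:1.0   # signs the tag on push (placeholder repo)
```

This directly targets the poisoned-image problem described earlier: a tampered image in a repository no longer matches its signature and is refused at pull time.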
A Note on Container Environments
Although it is only one among similar offerings, Docker has become synonymous with container technology. IBM has partnered with Docker for building and running applications on the IBM Cloud. On June 8th, 2016, Microsoft announced that Docker could now be used natively on Windows 10 with Hyper-V Containers to build, ship and run containers, utilizing the Windows Server 2016 Technical Preview 5 Nano Server container OS image. Google has also added support for Docker containers on its Google Cloud Platform.
On the downside of Docker, a movement to boycott it is underway on the grounds that it masks plenty of loopholes, which can leave unsuspecting clients in situations such as vendor lock-in and skewed networking. The movement argues that much of the Docker platform is hyped and that the reality falls short; details of an ongoing case against Docker are also laid out. In such a state of uncertainty, especially among enterprises looking to ‘try out’ the technology, we encourage thorough research on the other offerings. Choosing between containers and VMs is not an either-or proposition but an ‘if-this-then-that’ strategy; an irony, given the if-else/do-while statements of programming languages.