Two Roads, One Destination: Which Will You Take, Docker Containers or VMs?

By CIOReview | Monday, July 4, 2016

“Containerization” is the new talk of the IT world and a big hit, particularly among software developers. Although “container” refers to a technology and Docker to a company associated with it, in the present market it is not wrong to treat the two as a single entity. Pitched as a faster way to provision, use, and move applications around, containerization has rapidly percolated into business infrastructures across the globe, even though the concept remains nebulous to many. Often viewed as the next big thing and a potential substitute for Virtual Machines (VMs), containers demand some groundwork; here is what businesses need to learn before implementation.

Where it all started

On the surface, VMs and containers seem like different paths to the same destination, namely improving the utilization of computing resources, but underneath they are alike: both are virtualization approaches, and one’s flaws are the other’s strengths. Quoting Wikipedia, “Virtualization began in the 1960s, as a method of logically dividing the system resources provided by mainframe computers between different applications.” Virtualization grants the benefits of decentralized servers, such as security and stability, while harnessing most of the computing power that centralization offers.

Understanding VM

Simply put, a VM is an operating system (OS) or application environment that imitates dedicated hardware, offering end users the same experience on a virtual machine as they would have on the dedicated hardware itself. The backbone supporting a VM is a hypervisor, or virtual machine manager (VMM), which allows multiple OSs to share a single hardware host. Depending on what the hypervisor is installed atop, directly on bare metal or on a host OS, the VM architecture is classified as bare-metal/Type 1 or hosted/Type 2, respectively. On top of the installed hypervisor layer, VM instances are provisioned from the host’s available computing resources, with each VM receiving its own OS, called the guest OS.
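Hypervisor tooling makes these concepts concrete. The sketch below is a minimal illustration, assuming a Linux host running a local QEMU/KVM hypervisor and the libvirt-python bindings (both assumptions, not something prescribed here); it simply lists the guest domains, i.e. the VMs, that the hypervisor is managing.

```python
# Minimal sketch: enumerate VMs on a local hypervisor via libvirt-python
# ("pip install libvirt-python"). The qemu:///system URI is an assumption;
# other hypervisors (Xen, ESXi) use different connection drivers.
import libvirt

conn = libvirt.open("qemu:///system")

# Each "domain" is one VM carrying its own guest OS, provisioned from the
# host's pool of computing resources by the hypervisor.
for domain in conn.listAllDomains():
    state = "running" if domain.isActive() else "stopped"
    print(f"{domain.name()}: {state}")

conn.close()
```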

Understanding Containers

The technical term “operating system-level virtualization” makes containers easier to understand. As the jargon suggests, as opposed to VMs, which virtualize the hardware, containers virtualize the OS. Instead of using a complex hypervisor supporting multiple OSs, this approach uses a lightweight “container layer” installed atop the host OS, which is usually a Linux variant. Over the container layer, container instances are provisioned from the host’s computing resources, and multiple applications are deployed and executed within the single host OS. Layers of isolation built around the applications segregate each one and trick it into thinking it has the server’s resources to itself.
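The kernel primitives behind that isolation are Linux namespaces, paired with control groups for resource limits. The sketch below is a bare-bones illustration, not how Docker is implemented in practice: it calls the kernel’s unshare() syscall through ctypes to give the process its own UTS (hostname) namespace, so a hostname change inside it never reaches the host. It is Linux-only and must run as root.

```python
# Minimal sketch of OS-level isolation using a Linux namespace directly.
# Linux only; run as root. Real container layers combine several namespace
# types (PID, mount, network, ...) with cgroups and filesystem images.
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # new UTS (hostname) namespace, from <sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

print("host hostname:", socket.gethostname())

# Detach this process into its own UTS namespace.
if libc.unshare(CLONE_NEWUTS) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

# The new hostname is visible only inside this namespace: the process is
# "tricked" into thinking the machine's identity belongs to it alone.
socket.sethostname("container-demo")
print("namespace hostname:", socket.gethostname())
```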

What exactly is Docker?

Containers are not new; in fact, they have been a part of the Linux kernel since 2008. In this context, Docker, as Matt Weinberger puts it, “is essentially a wildly popular open source implementation of lightweight Linux containers, putting some secret sauce on top (and standardizing them in the process)”; put simply, Docker is a “container layer” installed atop the host OS. But the way Docker has taken container-based virtualization forward has made it synonymous with the technology. It is the most popular container standard; since its launch in March 2013 it has seen over 100 million downloads, and presently there are over 75,000 Dockerized applications.

Docker containers allow packaging an application together with everything underneath it, such as libraries and other dependencies, and shipping it all out as one package that doesn’t require a full-fledged VM to run. This means that one can run numerous applications on a single host Linux OS, as long as the host’s computing resources permit. It also means that applications can be shipped with only the things not present on the host computer, with the assurance that the application will run on any machine with a Linux kernel. Another advantage is that, since there’s no need to spin up a VM for each and every application, more processing power is freed up for more Docker containers.
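To make that workflow concrete, the hedged sketch below uses the Docker SDK for Python (an assumption; the docker CLI exposes the same operations) to start two packaged applications side by side on one host kernel, with no guest OS booted for either. The image names are illustrative.

```python
# Minimal sketch with the Docker SDK for Python ("pip install docker").
# Assumes a local Docker daemon; image and container names are illustrative.
import docker

client = docker.from_env()

# Each run() starts an isolated container from a self-contained image;
# both share the single host Linux kernel, so no guest OS has to boot.
web = client.containers.run("nginx:alpine", detach=True, name="demo-web")
cache = client.containers.run("redis:alpine", detach=True, name="demo-cache")

for c in client.containers.list():
    print(c.name, c.image.tags, c.status)

# Tear the demo back down.
for c in (web, cache):
    c.stop()
    c.remove()
```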

Containers vs VMs

Through their years-long presence, VMs have proven their potential in IT, but they pack their own limitations: the enormous overhead of running multiple guest OSs, the inability to freely allocate resources to processes, and reduced application performance due to the overhead of calls from the guest OS to the hypervisor. Such deficiencies have prompted the IT community to explore container technology, which, even though it has been used extensively over the years, had until recently failed to find much traction among enterprises.

A full comparison of the two technologies would be vast and exhaustive; this article therefore highlights a few important points that may help readers gauge each:

-    A VM’s dependence on a hypervisor grants it flexibility in choosing the guest OS, but leads to less portability and agility when compared to containers. However, its long existence has matured VM technology, which enjoys the wider back-end application support that Docker containers currently lack.

-    VMs offer greater flexibility in terms of networking capabilities, but are better suited to desktop environments than to application development. Containers can also be networked, but the less mature technology makes configuration complex. Before implementing either technology, management considerations play an important role, and one has to remember that containers and VMs are different and carry unique management burdens. Owing to their smaller size, a typical Docker setup will hold substantially more containers than a typical hypervisor setup has VMs, which results in greater container sprawl (a sketch for taking a sprawl inventory follows this list); owing to their larger size, however, VM sprawl is also a critical issue.

-    Contrary to the perception of Docker as free, ancillary and third-party tools for repositories, management, and auditing may incur unexpected costs. In practice, however, such costs are far lower than those of hypervisor technology, which keeps Docker a highly attractive solution. It also needs to be highlighted that Docker, mostly suited to developers, may not fit every application; legacy applications, for instance, often have too many dependencies to be neatly packaged up. Container-based virtualization is mostly for newer applications designed to run at web scale.

-    Docker containers’ approach of sharing the host OS’s kernel makes better use of the host hardware, but deeper examination reveals limitations underneath. VMs run their own separate OSs, resulting in higher overhead, but this also means greater flexibility in the choice of OS; Docker, on the other hand, is limited to Linux for both host and container. A single shared OS also presents the risk of a single point of failure for all of the containers that use it.
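As referenced in the list above, container sprawl becomes a management concern precisely because containers are so cheap to create. A small hedged sketch, again assuming the Docker SDK for Python and a local daemon, takes a basic sprawl inventory by flagging containers that have exited but were never removed:

```python
# Minimal sprawl-audit sketch with the Docker SDK for Python.
# Assumes a local Docker daemon; what counts as "sprawl" will differ
# per organization, so the exited-container heuristic is illustrative.
import docker

client = docker.from_env()

# all=True includes stopped containers; exited-but-not-removed containers
# are a common, easily-missed source of sprawl.
all_containers = client.containers.list(all=True)
exited = [c for c in all_containers if c.status == "exited"]

print(f"total containers: {len(all_containers)}")
print(f"exited (cleanup candidates): {len(exited)}")
for c in exited:
    print(" -", c.name, c.image.tags)
```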

VMs have so far displayed their potential within IT and are likely to remain dominant in the near future. Organizations currently weighing these two technologies need to understand that the adoption of either depends, to a large extent, on the organizational structure. Containers are better suited to application development, and Docker as a tool is designed to benefit developers and system administrators, making it an important instrument within DevOps. As discussed, Docker’s container software holds advantages over VMs in terms of agility, portability, and speed, but these benefits have their own pitfalls when viewed from another perspective.
 
As said above, one’s flaws are the other’s strengths; judging by what these two virtualization technologies have to offer, declaring either the clear winner would be inappropriate and inaccurate. At least until one of them achieves a significant breakthrough, it is safe to assume that the next best option is to create infrastructures where containers and VMs co-exist and complement each other, expanding the toolset available to today’s application architects and data center administrators and providing unique advantages for the most compatible workloads.