Why Containers Instead of Hypervisors?


Our cloud-based IT world is founded on hypervisors. It doesn’t have to be that way – and, some say, it shouldn’t be. Containers can deliver more services using the same hardware you’re now using for virtual machines, said one speaker at the Linux Collaboration Summit, and that spells more profits for both data centers and cloud services.

I confess that I’ve long been a little confused about the differences between virtual machine (VM) hypervisors and containers. But at the Linux Collaboration Summit in March 2014, James Bottomley, Parallels‘ CTO of server virtualization and a leading Linux kernel developer, finally set me straight.

Before I go further, I should dispel a misconception you might have. Yes, Parallels is best known for Parallels Desktop for Mac, which enables you to run Windows VMs on Macs, and yes, that is a hypervisor-based system. But where Parallels makes its real money is in its Linux server-oriented container business. Windows on Macs is sexier, so it gets the headlines.

So why should you care about hypervisors vs. containers? Bottomley explains that hypervisors, such as Hyper-V, KVM, and Xen, all have one thing in common: “They’re based on emulating virtual hardware.” That means they’re fat in terms of system requirements.

Bottomley also sees hypervisors as ungainly and not terribly efficient. He compares them to a Dalek from Doctor Who. Yes, they’re good at “EXTERMINATE,” but earlier models could be flummoxed by a simple set of stairs and carried far too much extra gear.

Containers, on the other hand, are based on shared operating systems. They are much skinnier and more efficient than hypervisors. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can “leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application,” says Bottomley.

That has implications for application density. According to Bottomley, with a fully tuned container system you can expect to run four to six times as many server instances as you can with Xen or KVM VMs. Even without any extra tuning, he asserts, you can run roughly twice as many instances on the same hardware. Impressive!

Lest you think this sounds like science fiction compared to the hypervisors you’ve been using for years, Bottomley reminds us that “Google invested in containers early on. Anything you do on Google today is done in a container—whether it’s Search, Gmail, Google Docs—you get a container of your own for each service.”

To use containers on Linux, you use the LXC userspace tools. With these, each application can run in its own container. As far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on.
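A minimal LXC session looks something like the following. This is a sketch, not a definitive recipe: it assumes the LXC tools are installed, that you have root (or an unprivileged-container setup), and the container name and image choices are purely illustrative.

```shell
# Create a container from a distribution image (name "mycontainer" is illustrative)
sudo lxc-create -n mycontainer -t download -- -d ubuntu -r trusty -a amd64

sudo lxc-start -n mycontainer               # boot the container
sudo lxc-attach -n mycontainer -- ps aux    # run a command inside it; note how
                                            # few processes it has compared to a VM
sudo lxc-stop -n mycontainer                # shut it down
sudo lxc-destroy -n mycontainer             # delete it when done
```

Notice there is no guest kernel to boot: `lxc-start` is essentially launching an ordinary process tree inside an isolated view of the host.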

So far, that sounds remarkably like how a VM appears to an application. The key difference is that while a hypervisor abstracts an entire machine, containers abstract only the operating system kernel.

LXC’s entire point is to “create an environment as close as possible to a standard Linux installation but without the need for a separate kernel,” says Bottomley. To do this it uses these Linux kernel features:

  • Kernel namespaces (ipc, uts, mount, pid, network, and user)
  • AppArmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • Control groups (cgroups)
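These building blocks aren’t exotic; they’re visible on any modern Linux system. A quick, unprivileged way to see the namespaces and cgroups your own shell already lives in:

```shell
# Each entry under /proc/self/ns is a symlink naming a namespace type and its
# inode ID; two processes in the same namespace show the same inode number.
ls -l /proc/self/ns

# Show which control groups (cgroups) the current process is assigned to.
cat /proc/self/cgroup
```

A container is, in essence, a process tree whose entries in these files differ from the host’s.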

The one thing that hypervisors can do that containers can’t, according to Bottomley, is to use different operating systems or kernels. For example, you can use VMware vSphere to run instances of Linux and Windows at the same time. With LXC, all containers must use the same operating system and kernel. In short, you can’t mix and match containers the way you can VMs.

That said, except for testing purposes, how often in a production environment do you really want to run multiple operating system VMs on a server? I’d say “Not very damn often.”

You might think this all sounds nice, but some developers and devops folks believe there are far too many different kinds of containers to mess with. Bottomley insists that this is not the case. “All containers have the same code at bottom. It only looks like there are lots of containers.” He adds that Google (which uses cgroups for its containers) and Parallels (which uses “bean-counters” in OpenVZ) have merged their codebases, so there’s no practical difference between them.

Programs such as Docker are built on top of LXC. In Docker’s case, its advantage is that its open-source engine can be used to pack, ship, and run any application as a lightweight, portable, self-sufficient LXC container that runs virtually anywhere. It’s a packaging system for applications.

The big win here for application developers, Bottomley notes, is that programs such as Docker enable you to create a containerized app on your laptop and deploy it to the cloud. “Containers give you instant application portability,” he says. “In theory, you can do this with hypervisors, but in reality there’s a lot of time spent getting VMs right. If you’re an application developer and use containers you can leave worrying about all the crap to others.”
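The laptop-to-cloud workflow Bottomley describes can be sketched with Docker’s basic commands. This assumes Docker is installed on both machines and a Dockerfile exists in the current directory; the image name `myapp` and the port are illustrative.

```shell
# On the laptop: package the application and its dependencies into an image.
docker build -t myapp:1.0 .

# Run it locally, exactly as it will run remotely.
docker run -d -p 8080:8080 myapp:1.0

# Export the image as a tarball (or push it to a registry instead).
docker save myapp:1.0 | gzip > myapp.tar.gz

# On the cloud host: import and run the identical image.
docker load < myapp.tar.gz
docker run -d -p 8080:8080 myapp:1.0
```

The image that runs in the cloud is bit-for-bit the one tested on the laptop, which is the portability argument in a nutshell.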

Bottomley thinks “We’re only beginning to touch what this new virtualization and packaging paradigm can mean to us. Eventually, it will make it easier to create true cloud-only applications and server programs that can fit on almost any device.” Indeed, he believes containers will let us move our programs from any platform to any other platform in time and space… sort of like Doctor Who’s TARDIS.

  1. Marnie van der Wel says:

    Interesting article!

  2. AutoDevBot says:

    I have been coming to this conclusion also.

    What do you think the future stacks will look like? Running containers on bare metal or in VMs?

  3. RandyMSFT says:

    Mike Toutonghi was fired as the CTO of Parallels. It is great to see them with a CTO who understands the virtualization and container market.

  4. Manish says:

    How would you use ARM TrustZone with linux containers

  5. Levi Roberts says:

    I feel that this article and it’s relatives are more geared toward the mindset of “one tool fits all”. Forget that!

    It has not nor ever will be a perfect simple world where that’s the case. No product has ever gotten it 100% right. In fact, it’s most definitely NOT the unix way.

    To reiterate – this topic and perhaps it’s robustness is only geared toward several virtualization.

    Heres the bottom line. Use the right tool for the job.
    Containers for server side application virtualization and hypervisors for client side emulation/virtualization. This has worked well for me and will probably continue to do so.

    Real world use: I use docker on the cloud for nodejs instances and KVM for x86 virtualization on ARM devices with low resources as thin clients.

    • Levi Roberts says:

      My fingers got the best of me and I didn’t spellcheck.

      To reiterate – this topic and perhaps it’s robustness is only geared toward server virtualization.

      Additionally, I’d like to add that each tool has their place and I feel that neither should be left in the dark to fend for itself. I’ll continue to use both where it fits properly.

      There’s a motto at my work place that I will stand by till the end. “Theres always a best way to do everything.” Efficiency is key to being productive. Understanding each tool’s efficiencies and which tool does what best will go a long way in understanding which to choose for the job.

  6. I am a developer. My main use of virtualisation is to run on a different machine than the host. So – containers go away. Often I need to set up a number of machines. Then, in principle, those machines could be similar to my host. But – in practice the task demands some difference. So, then also containers go away.

    So, I agree with others here. If you want to set up a server that can implement a cloud service of computing resources, then you can use containers.

    But, if you want to set up entire machines with your own OS in the cloud, then containers also go away.

  7. William Warren says:

The lack of isolation is going to be a huge security issue. VMs give you isolation from each other. With containers, one container can corrupt the host system and/or bring it down, since components are shared at the base level. Sure, a hypervisor crash can do that too, but it is much harder to accomplish from inside a VM than it will be from inside a container.

    If you REALLY are interested in security then you fire up a vm and run your containers inside of that.

  8. Yossi Cohen says:

    What about features like vMotion of VMWare?

  9. Malcolm Badley says:

    The Docker theory sounds great. Try & build lots of containers in a large production environment & suddenly you are faced with a multitude of problems. This is not to say it will not work but you will have to re-think your entire infrastructure. Is the time invested really worth it? Do not base infrastructure decisions on pure theory. My experience is that Docker works well for developers who may want to build a test platform on their laptop. However migrating from VMs to Docker in production is a project that could take months & even then you will have a highly volatile environment.
