Docker is a very popular tool in the world of enterprise software development. However, it can be difficult to understand what it’s really for. Here we will take a brief look at why software engineers, and everyday users, choose Docker to quickly and efficiently manage their computer software.
Containers vs. Virtual Machines
Docker is a tool used to run containers. Containers are sort of like virtual machines: simulations of a computer running inside of your real computer. If you've ever used VirtualBox or VMware, you may be familiar with virtual machines used, for example, to run Windows inside of a Mac. A virtual machine simulates all of the parts of a real computer, including the screen and the hard drive, which on the real computer (often referred to as the host) is just one big file (called a virtual hard drive). On a virtual machine (or VM for short) running Windows, the virtual hard drive contains all of the Windows operating system code, which can be several gigabytes. The Windows in the VM doesn't know that it's running inside of a simulation and not a real computer; it just thinks it's the main operating system. Docker, like VirtualBox, "virtualizes" an operating system inside of a host operating system.
So what's the difference between a virtual machine in VirtualBox and a container in Docker? Well, running a virtual machine can be a heavy task for a processor. In the VM example above, the Mac host is running not only all of the macOS background tasks but also all of the Windows background tasks, which together look like one big, heavy program to the host. The host operating system controls how much processing power each program gets, and VirtualBox asks for a lot of it, so virtual machines often run very slowly. It is especially taxing to run several VMs at once, because that asks one computer to run several entire operating systems simultaneously and to store gigantic virtual hard drives, each containing its own operating system. Running several instances of the same OS is clunky, redundant, and unnecessary, often defeating the purpose of running a virtualized OS in the first place.

Enter Docker containers: the solution to monolithic, slow virtual machines. Containers share redundant resources, such as big operating system files, while keeping separate the resources unique to each individual container, such as the processing power for each running virtualized program.
Containers and Images
Docker builds containers from images, which contain the program code that doesn't change during use. Images are stacked on top of each other to build a complete setup, and stacked images can share the same base image, much like the branches of a tree stem from the same trunk.
Say, for example, you want to test what your new website looks like on different web browsers, but you don’t want to directly install every browser onto your computer. Doing so could cause problems with your personal browser, confuse your OS default browser choice, conflict with your browser extensions and configurations, and display your website in a non-standard way. When testing your website, you would want to do so in a clean, vanilla, isolated test environment, and this is exactly what Docker is good for.
So you make three Docker images, with Chrome, Firefox, and Edge installed, respectively. All three images should be identical except for the browser they have installed, so they will all use the same operating system. Instead of setting up the same operating system three times, you can tell each browser's image to build on top of one shared base image containing the operating system. Each new image then installs just its web browser on top of the OS image. This saves a lot of storage space and processing power.
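The shared-base idea can be sketched with two small Dockerfiles. These are illustrative assumptions, not the article's exact setup: the base image (debian:bookworm) and package names are made up for the example, and Chromium stands in for Chrome since Edge and Chrome aren't in the Debian repositories. Both files start FROM the same base, so Docker stores those base layers only once.

```shell
# Write two hypothetical Dockerfiles that share one base image.
cat > Dockerfile.firefox <<'EOF'
FROM debian:bookworm
RUN apt-get update && apt-get install -y firefox-esr
EOF

cat > Dockerfile.chromium <<'EOF'
FROM debian:bookworm
RUN apt-get update && apt-get install -y chromium
EOF

# With the Docker daemon running, each would be built into its own image:
#   docker build -f Dockerfile.firefox  -t site-test:firefox  .
#   docker build -f Dockerfile.chromium -t site-test:chromium .
```

Because both images declare the same FROM line, the operating system layers are downloaded and stored once and reused by both browser images.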
A container executes the code in an image, but it doesn't change the image. As the name suggests, containers are self-contained, so they can be quickly and easily created or destroyed, even if they crash. You can also run several container instances from the same image. Now you can spin up three containers, one from each image, and test your website in a clean, default Chrome, Firefox, and Edge environment, without messing with your daily driver web browser.
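Here is a minimal sketch of that create-use-destroy cycle, using the tiny public alpine image rather than a browser image so it stays small. It assumes the Docker CLI and daemon are present, and exits quietly if they are not.

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null 2>&1 || { echo "Docker not available"; exit 0; }

# --rm destroys the container as soon as its command finishes.
docker run --rm alpine:3 echo "hello from an isolated container"

# Each run is a fresh, independent instance of the same image:
docker run --rm alpine:3 hostname
docker run --rm alpine:3 hostname
```

The two hostname runs are separate containers built from one image; nothing either of them does can affect the image itself.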
Volumes and Host-Container Interaction
Docker containers are great for quickly spinning up and scrapping programs in an isolated environment, but like any computer, they need some way to store data persistently. That is what Docker volumes are for. You can quickly create and destroy simulated hard drives that you can mount, read, write, and eject from containers with ease. Think of them as virtual USB flash drives.
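A quick volume sketch, with the volume name "demo-data" made up for illustration. It assumes Docker is installed and running, and exits quietly otherwise.

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null 2>&1 || { echo "Docker not available"; exit 0; }

docker volume create demo-data                 # create the virtual "flash drive"
docker run --rm -v demo-data:/data alpine:3 \
  sh -c 'echo remembered > /data/note.txt'     # one container writes to it
docker run --rm -v demo-data:/data alpine:3 \
  cat /data/note.txt                           # a second container reads it back
docker volume rm demo-data                     # eject and destroy it
```

Both containers are destroyed as soon as their commands finish, yet the file survives between them because it lives on the volume, not in either container.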
You can also bind-mount folders on your host machine to a container, allowing you to instantly share files between otherwise isolated environments. This is less like a USB flash drive and more like a direct data bridge between host and container.
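A bind mount looks like this in practice (the file name is illustrative). Again, the sketch assumes Docker is installed and exits quietly if it is not.

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null 2>&1 || { echo "Docker not available"; exit 0; }

echo "written on the host" > shared.txt        # create a file on the host
docker run --rm -v "$PWD":/host alpine:3 \
  cat /host/shared.txt                         # the container sees it instantly
rm shared.txt                                  # clean up on the host side
```

Unlike a named volume, the container is reading the host's own folder directly, so changes on either side are visible to the other immediately.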
Finally, you can also open TCP/UDP ports between host and container, so that, for example, a web server running inside a container can be reached from a browser on the host. This is like a direct web bridge between host and container.
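A port-publishing sketch: the host port (8080), container name, and nginx image are all example choices, and the block assumes Docker is installed, exiting quietly otherwise.

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null 2>&1 || { echo "Docker not available"; exit 0; }

# Map host port 8080 to the container's port 80 and run in the background.
docker run --rm -d --name demo-web -p 8080:80 nginx:alpine
docker ps --filter name=demo-web --format '{{.Ports}}'   # shows the 8080:80 mapping
docker rm -f demo-web                                    # tear the container down
```

While the container is up, visiting http://localhost:8080 on the host reaches the web server running inside it.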
One major difference between virtual machines and containers is that you don't typically interact with Docker containers through a graphical user interface; you usually use a command line interface or a web browser instead, though it is possible to run a GUI for a container. This is why Docker containers are more common in the software development world than in the everyday consumer world.
Docker containers are very useful for creating isolated environments for separate programs to run in without interfering with each other. They make it very easy to tinker in a throwaway environment without risking your main host setup.
If you would like to learn more about Docker, you can read about How to Get Started with Docker, how to write a Dockerfile to build custom images, and how to use Docker Compose to orchestrate several Docker containers to work together.