The Docker Revolution

Mushaffa Huda
5 min read · Apr 5, 2021

Docker is the future of virtualization!

As a software engineer, it's really easy to develop some sort of impostor syndrome somewhere down the line when everybody around you is using phrases like "pull the image", "have you Dockerized it?", "just run docker-compose", and so on.

Well, fret not: everybody felt the same at some point, even me.

In this article, I will try to explain Docker in the simplest way possible, so that we can all understand what it is and why it is so popular nowadays.

What is Docker Anyway?


Docker is a tool designed to make it easier to create, deploy, and run applications by using containers.

Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package.
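To make "package up an application with all of the parts it needs" concrete, the usual starting point is a Dockerfile. Here is a minimal sketch for a hypothetical Python app (the base image, file names, and start command are just assumptions for the example, not from the project in this article):

```dockerfile
# Start from an official, slim Python base image
FROM python:3.9-slim

# Work inside /app in the container
WORKDIR /app

# Copy and install dependencies first, so this layer is cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The command the container runs when it starts
CMD ["python", "app.py"]
```

Building this with `docker build -t my-app .` produces a single image that bundles the app together with all of its dependencies, ready to run anywhere Docker runs.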

For example, two containers running on the same computer might as well be on two completely different computers. They are entirely and effectively isolated from each other.

So what's the big deal?

Well, this isolation has several advantages:

  • Two containerized processes can run side-by-side on the same computer, but they can’t interfere with each other.
  • Two different applications can run containers on the same hardware with confidence that their processes and data are secure.
  • Using shared hardware means using less hardware overall. This means companies and startups can save a lot of time and money that would otherwise go into acquiring thousands of servers just to run their applications.

The main purpose of using Docker is to make the process of application development portable, simple, and robust.

Docker vs VM

So what's the difference between Docker and a virtual machine? They sound pretty similar.

In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they’re running on and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.

Here is a high-level illustration of the difference between VMs and Docker; as you can see, the difference is quite palpable.

Docker images don't carry extra baggage like the hypervisor on VMs, and each VM has a full OS inside it, which makes them rely on heavy disk space usage, typically in the gigabyte (GB) range. Containers, by contrast, are lightweight, often in the megabyte (MB) range, and run directly on the host OS kernel.

In short, the main difference between Docker and a VM is that a virtual machine is virtualization done at the hardware level, whereas Docker is virtualization done at the operating-system level.

Docker Orchestration

Containers can be thought of as necessitating three categories of software:

  • Builder: technology used to build a container.
  • Engine: technology used to run a container.
  • Orchestration: technology used to manage many containers.
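The first two categories map directly onto everyday Docker commands. As a rough sketch (the image and container names here are made up for illustration):

```shell
# Builder: turn a Dockerfile in the current directory into an image
docker build -t my-app:latest .

# Engine: run a container from that image, in the background,
# mapping port 8000 in the container to port 8000 on the host
docker run -d -p 8000:8000 --name my-app-container my-app:latest
```

Orchestration is what takes over once you have many such containers to build, run, connect, and keep alive.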

Docker orchestration is basically all about managing the lifecycle of containers.

Software development teams use orchestration to manage and automate many tasks at different stages of development. It can be used in basically any environment that uses containers, and it can help you deploy, manage, scale, and set up the networking between containers.

Well-known examples of Docker orchestration tools are Kubernetes, Docker Swarm, and Mesos. There are also cloud services available such as Amazon ECS, Google Kubernetes Engine (GKE), and many more. But in this article, I will not go over those and will instead take a look at a much simpler orchestration tool.

Real Project Docker Application

In a project I'm currently working on, we use a simple orchestration tool called Docker Compose. It's a tool that allows us, as developers, to define container-based applications in a single YAML config file.


Yep, take a look at the snapshot I took below.

It really is that easy to configure a docker-compose file. All you need to do is define a set of services and their configuration, such as the build, the ports they broadcast/listen on, the volumes, and the environment of each service, all in a single block of 10 to 20 lines.
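A minimal docker-compose file along those lines might look like the following. This is a generic sketch, not the actual file from the project: the service names, ports, and database image are all assumptions for the sake of the example.

```yaml
version: "3"
services:
  web:
    build: .                # build the image from the local Dockerfile
    ports:
      - "8000:8000"         # host:container port mapping
    volumes:
      - .:/app              # mount the source code into the container
    environment:
      - DEBUG=1
  db:
    image: postgres:13      # pull a ready-made database image
    environment:
      - POSTGRES_PASSWORD=secret
```

Each top-level entry under `services` becomes a container, and Compose wires up a shared network between them so `web` can reach `db` by name.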

Here's another example usage of Docker Compose on our backend system.

By running docker-compose up, you can run a multi-container application on your host computer without the hassle of setting up huge infrastructure and architecture that is both mind-bogglingly difficult and mind-numbingly boring 😫

Here is an example of me running a docker-compose command on the backend system.

After it's finished, you can see the currently running containers using the docker ps command.
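In a terminal, that workflow looks roughly like this. The container name and ID in the `docker ps` output are illustrative only; the real values depend on your project and compose file.

```shell
# Start every service defined in docker-compose.yml, in the background
docker-compose up -d

# List the running containers
docker ps
# CONTAINER ID   IMAGE           ...   PORTS                    NAMES
# a1b2c3d4e5f6   myproject_web   ...   0.0.0.0:8000->8000/tcp   myproject_web_1
```

When you're done, `docker-compose down` stops and removes the containers again.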

Alternatively, you can also use something like Docker Desktop on Windows and Mac to see and manage the running containers.

Docker Desktop

Pretty neat, huh? Now we can just access our container on its specified port.