The magical world of containers — Docker


Anyone in my LinkedIn network knows that last year I became a Certified Kubernetes Application Developer. After about four years of working with Kubernetes day by day, this certification was a great recognition of my studies.
That is why I decided to start a new post series describing what I've learned about containers and their use cases. The topics will be:

* Basic concepts
  * Use cases
* Unix background
  * Namespaces
  * Control Groups
  * chroot
* Implementations
  * Docker
  * containerd
* Orchestration and Docker Swarm
* Kubernetes
  * Architecture
  * Objects
* Cloud Foundry
  * Architecture
  * Application development
* OpenShift
  * Architecture
  * Objects

Before starting, I would add a caveat: this is what I understood from studying books and from working on the job, but it could include misunderstandings, so please use these posts only as a starting point to deepen your knowledge and build your own learning roadmap (and feel free to point out any misunderstandings to me)!

Having described what a container is, we will now illustrate how containers are technologically implemented. The most famous implementation is Docker.

Docker is an open-source project that automates the deployment of applications inside software containers by providing an additional layer of abstraction and automation on top of operating-system-level virtualization. This layer also adds a “language”, with its own syntax and semantics, that allows you to compose your application as multiple microservices without locking into any platform or language, since the language introduced by Docker complies with open standards.

Thanks to Docker, you can design the entire cycle of application development, testing and distribution, and manage it with a consistent user interface. In addition, you can use it to deploy scalable services, securely and reliably, on a wide variety of platforms.

From a historical point of view, Docker was born in 2008 as a language-agnostic PaaS offering, and in 2013 it became a community-driven project. The first version was released in June 2014, and the first European conference was held in December 2014.

Docker architecture

From “Using Docker” by Adrian Mouat

Docker is composed of three different components:

  • the daemon, responsible for creating, running and monitoring containers, as well as building and storing images;
  • the client, which communicates with the daemon via HTTP (by default over a Unix domain socket);
  • the registry, which stores and distributes images.
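The client/daemon split can be observed directly: the daemon listens by default on the Unix domain socket /var/run/docker.sock and speaks plain HTTP. Below is a minimal sketch (not the official Docker SDK) that checks for the socket and, if a daemon is running, issues a raw GET /version request the way the client does; on a host without Docker it simply reports that no daemon was found.

```python
import json
import os
import socket

DOCKER_SOCK = "/var/run/docker.sock"  # default Unix domain socket of the daemon

def daemon_socket_present(path: str = DOCKER_SOCK) -> bool:
    """Return True if the daemon's Unix socket exists on this host."""
    return os.path.exists(path)

def query_daemon_version(path: str = DOCKER_SOCK):
    """Send a raw HTTP GET /version to the daemon over its Unix socket.

    Returns the parsed JSON body, or None if no daemon socket is present.
    """
    if not daemon_socket_present(path):
        return None
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        # HTTP/1.0 so the daemon closes the connection after responding
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk
    finally:
        s.close()
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

if __name__ == "__main__":
    info = query_daemon_version()
    print(info["Version"] if info else "no Docker daemon on this host")
```

In practice you would use the docker CLI or an official SDK rather than raw sockets, but the sketch shows that the "client" is nothing magical: just an HTTP conversation with the daemon.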

CLI command summary

Previously we said Docker offers a “language” to create and manage containers. The primary way to use this language is the CLI.

One of the most used commands is docker pull, which downloads a given image from a registry, selected according to the image name.

When a user performs a docker pull, the following steps occur behind the scenes.

  1. The client sends a request to the index (responsible for maintaining details about user accounts, image checksums, etc.) to download the image.
  2. The index returns the reference of the registry where the image is located, its checksum, and an authorization token (only when the X-Docker-Token header is present in the HTTP request).
  3. The client sends a request to the registry using the token obtained previously.
  4. The registry verifies the token's validity with the index.
  5. The registry sends the image to the client.
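The five steps above can be sketched as a toy simulation. The Index and Registry classes, the token format, and the registry URL here are invented for illustration; they model the handshake, not the real Docker wire protocol.

```python
import hashlib
import secrets

class Index:
    """Toy stand-in for the index: tracks checksums and issued tokens."""
    def __init__(self, registry_url, images):
        self.registry_url = registry_url
        self.checksums = {name: hashlib.sha256(data).hexdigest()
                          for name, data in images.items()}
        self.valid_tokens = set()

    def resolve(self, image, want_token):
        # Steps 1-2: return registry reference, checksum, optional token
        token = secrets.token_hex(8) if want_token else None
        if token:
            self.valid_tokens.add(token)
        return self.registry_url, self.checksums[image], token

    def verify(self, token):
        # Step 4: the registry checks the token back with the index
        return token in self.valid_tokens

class Registry:
    """Toy stand-in for the registry: stores and serves image blobs."""
    def __init__(self, index, images):
        self.index = index
        self.images = images

    def pull(self, image, token):
        # Steps 3-5: validate the token, then send the image
        if not self.index.verify(token):
            raise PermissionError("invalid token")
        return self.images[image]

# A simulated `docker pull alpine`
images = {"alpine": b"...layers of the alpine image..."}
index = Index("registry.example.com", images)
registry = Registry(index, images)

registry_ref, checksum, token = index.resolve("alpine", want_token=True)
blob = registry.pull("alpine", token)
assert hashlib.sha256(blob).hexdigest() == checksum  # client-side integrity check
```

The final assertion mirrors why the index hands out checksums in step 2: the client can verify that what the registry delivered is exactly what the index promised.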

Another important command is docker run, which launches a new container from an image. Drawing a parallel with the object-oriented world, docker run is the equivalent of new Class().
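That image/class, container/instance analogy can be made concrete with a toy sketch (the Image and Container classes and the docker_run helper below are illustrative, not Docker's actual implementation):

```python
class Image:
    """Analogue of a class: an immutable template (name + default command)."""
    def __init__(self, name, default_cmd):
        self.name = name
        self.default_cmd = default_cmd

class Container:
    """Analogue of an instance: a distinct runnable copy created from an image."""
    _next_id = 0

    def __init__(self, image, cmd=None):
        Container._next_id += 1
        self.id = f"c{Container._next_id}"   # each container gets its own identity
        self.image = image
        self.cmd = cmd or image.default_cmd  # override the command, as docker run allows

def docker_run(image, cmd=None):
    """Toy counterpart of `docker run`: instantiate a container from an image."""
    return Container(image, cmd)

alpine = Image("alpine", default_cmd="/bin/sh")
c1 = docker_run(alpine)                # two containers from the same image,
c2 = docker_run(alpine, "echo hello")  # like two instances of the same class
```

Just as two instances of a class share their definition but hold their own state, c1 and c2 share the alpine image but have distinct identities and commands.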



Gabriele de Capoa

Cloud software engineer, wanna-be data scientist, former Scrum Master. Agile, DevOps, Kubernetes and SQL are my top topics.