Docker: Everything you wanna know.
PART 1: Basics of Docker and all the questions related to it.

If you're a programmer or part of an operations team delivering production-level applications, there is a high chance you've at least heard of "Docker", an open source platform for building, deploying, and managing containerized applications. That definition can seem overwhelming. In this article you'll learn Docker from scratch, and after completing it you will be able to use Docker in your own application delivery process.
Nowadays tech giants like Google, Microsoft, and Amazon are also showing their interest in Docker and are lending great support to expand the technology, so it's better to get equipped with knowledge about it.
In this article you will learn all about Docker in detail, but before that let me briefly tell you what we are going to cover, so if you are looking for a particular question or concept you can jump directly to it.
Overview of the article
- What's the need for Docker?
- What problem does it solve?
- What is Docker?
- What are containers?
- What are images?
- What is the difference between an image and a container?
- What is the difference between VMs and containers?
- Why prefer Docker over a hypervisor?
- Docker architecture
- Docker installation
- Dockerizing your application (Dockerfile)
- Development workflow of an application using Docker
- Basic Docker commands
- What is Docker Hub?
- Docker as a lifesaver in some cases
- Future plans and what's next
What's the need for Docker?

Consider the following conversation between a developer and a tester working on a full-stack project:
Developer: Hi, our application is up and running and it's ready to test. The code has been pushed.
Tester: Great, I'll test it.
(after 2 hours)
Tester: Seems like the application has some issue or a few dependencies are missing. I'm trying to run the application but it shows the error XYZ.
Developer: It's running perfectly here; maybe you've missed some dependency installation (I don't see any XYZ error at my end).
Tester: I'll check.
(the tester spent 4 hours trying to find the issue)
Tester: No, it's not running. It says XYZ error while starting.
Developer: But it works fine on my machine.
And the blame game continues…
In the above conversation, who do you think is wrong? The answer is "no one". What is actually wrong is the process of shipping the application: the way both are sharing the application's codebase. Shipping an application this way can cause the following issues:
- There might be some dependencies which were not mentioned by the developer while sending the application, or a dependency version mismatch might occur.
- There might be some OS-level features which were not included in the project and could be the reason for the error XYZ.
- There might be some dependencies which were already installed on the developer's machine while working on some other application, and the developer forgot to mention them in the project.
The different environments all require both software and hardware management. We have to ensure that the installed software and the configured hardware are the same in each. We also need to configure aspects such as network access, data storage, and security per environment in a consistent and easily reproducible manner.
Due to these reasons there is a debate between the teams (when actually it's no one's fault), and this is where Docker comes into play and solves the issue.
What problem does it solve?

"It works on my machine" is the problem which Docker solves by giving us the concept of "share everything your project requires to run": application code, dependencies, OS-level features (optional), documents, configuration, processes, networking, and anything else related to the project. This makes it easy for the other person to use your code without errors while installing and managing the dependencies. If you ship (send/share) your application this way, there are a few benefits:
- It works on all machines, as everything has been shared.
- The other person does not need to care about the whole setup and installation process to start the application.
- It reduces the friction between teams by making the SDLC easier and less error-prone.
- It saves time and resources, as the person only needs to run the container, which contains everything the application needs.
In the next section, let us understand what Docker actually is in technical terms.
What is Docker?

Docker is an open platform for developing, shipping, and running applications. Docker is designed to facilitate and simplify application development. It is a set of platform-as-a-service products that create isolated, virtualized environments for building, deploying, and testing applications. It enables you to pack everything your application needs to build and run, including application code, dependencies, OS-level features (optional), documents, configuration files, processes, networking, and anything else related to the project, into a container and ship (share) it directly to anyone else, so that they can run your project without worrying about its configuration and setup. This reduces the overhead of setting up the environment and also reduces friction between individuals or teams, so that they can focus on their jobs rather than on solving errors related to the setup and configuration of the project.
But how does Docker pack everything into a CONTAINER? And what is a container?
What are containers?
Containers are software units that wrap up the code and all its dependencies into a single deployable unit that can be used on different systems and servers. A container is a virtualized run-time environment in which users can isolate applications from the underlying system. These containers are compact, portable units in which you can start up an application quickly and easily. Containers can be shared with other teams by making an image, so that the other team can directly create an instance and start working on it without worrying about installation and setup.

These images are smart enough to set themselves up on any OS that has Docker installed, and they work smoothly. The installation of all the dependencies and prerequisites for the project and its modules is handled by the container itself.
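As a quick taste of that isolation, and assuming Docker is already installed (installation is covered below), you can drop into a small throwaway container:

```bash
# Start an interactive shell in a minimal Alpine Linux container.
# -it gives an interactive terminal; --rm deletes the container on exit.
docker run -it --rm alpine sh

# Inside, you see Alpine's filesystem and processes, not your host's.
# Type "exit" to leave; the container is then removed.
```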
But how do we share the containers? What are images?
What are images?
An image is a portable package that contains software. It’s this image that, when run, becomes our container. The container is the in-memory instance of an image.
An image is immutable. Once you’ve built an image, the image can’t be changed. The only way to change an image is to create a new image. This feature is our guarantee that the image we use in production is the same image used in development and QA.
A Docker image is an immutable (unchangeable) file that contains the source code, libraries, dependencies, tools, and other files needed for an application to run.
Due to their read-only quality, these images are sometimes referred to as snapshots. They represent an application and its virtual environment at a specific point in time. This consistency is one of the great features of Docker. It allows developers to test and experiment with software in stable, uniform conditions.
Since images are, in a way, just templates, you cannot start or run them. What you can do is use that template as a base to build a container. A container is, ultimately, just a running image. Once you create a container, it adds a writable layer on top of the immutable image, meaning you can now modify it.

You can create an unlimited number of Docker images from one base image. Each time you change the initial state of an image and save the existing state, you create a new template with an additional layer on top of it.
Docker images can, therefore, consist of a series of layers, each differing from, but also originating from, the previous one. Image layers are read-only files; a container layer is added on top once you use the image to start up a virtual environment.
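You can inspect these layers yourself on any image you have locally; a minimal sketch using node:alpine (any image works):

```bash
# Download an image from the configured registry (Docker Hub by default)
docker pull node:alpine

# List the read-only layers the image was built from, newest first
docker history node:alpine
```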
What we need to understand from this is: we share the image of our container, not the container itself.
What is the difference between an image and a container?
When discussing the difference between images and containers, it isn’t fair to contrast them as opposing entities. Both elements are closely related and are part of a system defined by the Docker platform.
If you have read the previous two sections that define Docker containers and Docker images, you may already have some understanding of how the two relate to each other.
Images can exist without containers, whereas a container needs to run an image to exist. Therefore, containers are dependent on images and use them to construct a run-time environment and run an application.
The two concepts exist as essential components (or rather phases) in the process of running a Docker container. Having a running container is the final "phase" of that process, indicating it is dependent on previous steps and components. That is why Docker images essentially govern and shape containers.
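To make the relationship concrete, here is a small sketch: one image can back many independent containers (the nginx image and the container names are just illustrative):

```bash
# Start two independent containers from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Both share the image's read-only layers, but each has its own
# writable layer and its own state
docker ps
```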
What is the difference between VMs and containers?
Before looking at the difference between VMs and containers, let us understand what a hypervisor is.
A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing.
Containers and hypervisors are both involved in making applications faster and more efficient, but they achieve this in different ways.
VMs:
- Allow an operating system to run independently from the underlying hardware through the use of virtual machines.
- Share virtual computing, storage and memory resources.
- Can run multiple operating systems on top of one server (bare-metal hypervisor) or installed on top of one standard operating system and isolated from it (hosted hypervisor).
- Efficiency: VMs are less efficient as they have to manage a full-blown guest operating system. VMs have to access host resources through a hypervisor.
- Portability: VMs aren’t that easily ported with the same settings from one operating system to another.
- Scalability: VMs aren’t very easily scalable as they are heavy in nature.
- Deployment: VMs can be deployed by using PowerShell, a VMM, or cloud services such as AWS or Azure.
Containers:
- Allow applications to run independently of an operating system.
- Can run on any operating system — all they need is a container engine to run.
- Are extremely portable since in a container, an application has everything it needs to run.
- Efficiency: Containers are far more efficient as they only utilize the most necessary parts of the operating system. They act like any other software on the host system.
- Portability: Containers are self-contained environments that can easily be used on different operating systems.
- Scalability: Containers are very easy to scale, they can be easily added and removed based on requirements due to their light weight.
- Deployment: Containers can be deployed easily using the Docker CLI or by making use of cloud services provided by AWS or Azure.

Why prefer Docker over a hypervisor?
The hypervisor was a solution that came before Docker and was successful until Docker came into the picture, for the following reasons:
- Hypervisor-based VMs are heavyweight, as they have a full OS installed on top of the host OS; on the other side, containers are lightweight, as they contain only what the application needs.
- Hypervisors have limited performance as they run on top of the host OS, but Docker can deliver near-native performance and can be very flexible.
- Hypervisors require large amounts of memory and storage as they install a whole OS on the host OS, but containers require less storage as they only install the software.
- The start-up time of hypervisor VMs is in minutes, but the start-up time of containers is in milliseconds, so they are fast.
There is one area where the hypervisor wins over Docker: security. Hypervisors are fully isolated and hence offer a high level of security, while Docker has process-level isolation, which is possibly less secure.
There are still companies which use hypervisors and companies which use Docker, keeping in mind their own preferences!
Docker Architecture

It's okay if at first this image looks overwhelming; let us try to understand it in bits and pieces and finally join everything together. I'm sure it will then make sense.
The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
In easy words, the daemon is a middleman between the client and the registry: it manages all the services, listens to the calls the client makes, and responds to them.
The Docker client
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
Docker registries
A Docker registry stores Docker images. Docker Hub (more on that later) is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
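You can even observe this split yourself. A minimal sketch, assuming a Linux host where dockerd listens on the default UNIX socket:

```bash
# Ask the daemon for its version over the raw REST API...
curl --unix-socket /var/run/docker.sock http://localhost/version

# ...which is essentially what the CLI client does for you here
docker version
```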
From now on we'll talk about more technical and implementation-oriented concepts related to Docker: installation, workflow, and so on.
Docker Installation on Windows, macOS, and Linux

The installation process is very well explained in the Docker documentation, so why reinvent the wheel?
- Steps of installation on Linux: linux installation
- Steps of installation on Windows: windows installation
- Steps of installation on macOS: macOS installation
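Once installed on any of these platforms, a quick sanity check confirms that the client and the daemon can talk to each other:

```bash
# Print client and server versions; if both appear, the daemon is reachable
docker version

# Run a tiny test container that prints a confirmation message and exits
docker run hello-world
```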
Dockerizing your application (Dockerfile)
Dockerizing your application simply means adding a Dockerfile to your application, thus enabling it to build and run inside an isolated container.
A Dockerfile (with no extension) is basically a file containing the instructions for building a container image of the application. The instructions in the Dockerfile determine how the image will get built. The Dockerfile is added to the root directory of the project, so that it has access to the complete project.
Below is the example of a Dockerfile:
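For a simple Node.js app whose entry point is app.js, the file looks like this (it matches the recap further below):

```Dockerfile
# Base image: a small Node.js image, pulled from Docker Hub by default
FROM node:alpine

# Copy everything from the build context (our project) into /app inside the image
COPY . /app

# Make /app the working directory for subsequent instructions and at runtime
WORKDIR /app

# The command the container runs when it starts
CMD node app.js
```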

- FROM: this command sets the base image of the container, i.e. it tells Docker which image our application extends and where to pull it from. By default the pulling registry is set to Docker Hub, but we can also set up our own private registry for pulling in the base image; this service of setting up a private registry is provided by cloud services such as AWS, Azure, or GCP.
syntax: FROM _name_of_base_image_
- COPY: this command copies files from the build context (our project directory) into the image. We can copy all the files by mentioning (.) as the source, or only the required files by giving their paths (node/xyz/pqr.js).
syntax: COPY _what_to_copy_ _where_to_copy_
- WORKDIR: this command sets the given directory as the current working directory of the image, so if someone starts our container, they will land in this directory. The advantage of using this command is that it reduces the load on the other person: they know where to build and run the project from.
syntax: WORKDIR _directory_path_
- CMD: CMD stands for the command, an instruction telling the container what to run when someone starts it. This is the command used to start the project inside our container.
syntax: CMD _command_
So now that you are familiar with the basic commands of a Dockerfile, let's recall what our Dockerfile is doing:
- it takes the base image, which is node:alpine, from Docker Hub (the default registry)
- it copies all the files from our project directory into the /app directory inside the image
- it sets the working directory to /app
- it finally specifies the command to execute: node app.js
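With this Dockerfile in the project root, building and running the application takes two commands (the tag my-app is just an illustrative name):

```bash
# Build an image from the Dockerfile in the current directory (.)
docker build -t my-app .

# Start a container from that image; it runs "node app.js" from CMD
docker run my-app
```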
Development workflow of an application using Docker

The basic workflow of Docker is inspired by the workflow of a VCS.
The developer writes the code on their local machine, containerizes the application with the help of a Dockerfile, and then pushes the image of the container to a repository, from where other folks can take that image and test it or continue working on it.
The other person who wants to run the application pulls the image from the same repository and builds a container from the image, which starts the application on their own local machine flawlessly.
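In commands, that round trip might look like this; a hedged sketch in which username/my-app is a hypothetical Docker Hub repository:

```bash
# Developer: build the image, tag it, and push it (run docker login first)
docker build -t username/my-app:1.0 .
docker push username/my-app:1.0

# Tester, on any machine with Docker: pull and run the exact same image
docker pull username/my-app:1.0
docker run username/my-app:1.0
```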
We can see how the problem raised by the tester and the developer in the earlier section has now been resolved, and now:
“IT WORKS ON EVERY MACHINE”
Basic Docker commands
When starting with Docker there are a few basic commands which are used on a regular basis, like building an image, pulling it from a remote registry, listing the images on the machine, and many more…
Below are the bare-minimum commands, explained simply enough that you can start using them.
How to build an image? How to run a Docker container? How to list images? How to remove an image? How to pull a Docker image?
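Here is one minimal answer to each of these questions, in order (my-app is just an illustrative image name):

```bash
# Build an image from the Dockerfile in the current directory
docker build -t my-app .

# Run a container from an image
docker run my-app

# List the images present on the machine
docker images

# Remove an image
docker rmi my-app

# Pull an image from the configured registry (Docker Hub by default)
docker pull node:alpine
```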

What is Docker Hub?

Docker Hub is a Software as a Service (SaaS) Docker container registry. Docker registries are repositories that we use to store and distribute the container images we create. Docker Hub is the default public registry Docker uses for image management, and a service provided by Docker for finding and sharing container images with your team. It is the world's largest repository of container images, with an array of content sources including community developers, open source projects, and independent software vendors (ISVs) building and distributing their code in containers.
Keep in mind that you can create and use a private Docker registry or use one of the many cloud provider options available. For example, you can use Azure Container Registry to store Docker images for use in several Azure container-enabled services.
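As a hedged sketch, pushing an image to such a private registry looks like this (the registry address myregistry.azurecr.io and the image name are hypothetical):

```bash
# Authenticate against the private registry
docker login myregistry.azurecr.io

# Re-tag a local image with the registry's address, then push it there
docker tag my-app myregistry.azurecr.io/my-app:1.0
docker push myregistry.azurecr.io/my-app:1.0
```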
Docker as a lifesaver in some cases
This is a purely personal experience which I'm going to share with you, about when Docker came out as a lifesaver for me. While starting a project I came across an issue connecting the database to our server-side backend. It was again "It works on my machine": everyone on our team had their own environment setup. A few of us were using macOS and a few were using Windows, so the major problem was to set up a correct deployment and sharing system to share the software as a complete package between us all.
A few problems which we faced at the beginning were:
- The local database which we were using for testing on personal devices became inconsistent as we tested on and on.
- Tracking a bug was nearly impossible due to different data on different machines.
- The internal dependencies which the Windows folks used were useless on macOS and vice versa, so merge conflicts started.
- Dependencies went out of date on a few machines due to pre-existing system settings.
After smashing our heads hard against the wall, we decided to move to an isolated system where we kept our software, deployed a database locally in a container, and Docker did everything else.
We managed to reduce the merge conflicts and the time spent testing and debugging, and the system finally looked in place.
Thanks to Docker!
Future plans and what’s next.
While you are reading this article, I'll be writing part 2, and a few of the topics/questions which will be addressed there are:
- Pushing and pulling an image to/from Docker Hub or private repositories.
- What is Docker Compose?
- Making a Docker Compose file and related commands.
- What is Kubernetes?
- How to use and deploy Docker: a real-life example.
- Details of Docker: managing containers efficiently.
- Private repositories with examples.
- A lot more about Kubernetes.
If there are any other questions or concepts you would like to read about in the next article (part 2), please mention them in the comments or contact me. All suggestions are always welcome. I hope you liked it; thanks for your time :)