Why should you care about Docker?

If you’re a Software Engineer or a Data Scientist, you have probably heard about Docker by now.
It caught my attention while I was browsing the internet for deep learning frameworks: almost every framework offered Docker support, which got me wondering what exactly Docker is.
It is definitely not intuitive at first glance.
But before we dive into Docker, we need to understand what VMs and containers are.
What are “Containers” and “VMs”?

Containers and VMs are similar in their goals: to isolate an application and its dependencies into a self-contained unit that can run anywhere.
Moreover, containers and VMs remove the need for physical hardware, allowing for more efficient use of computing resources, both in terms of energy consumption and cost effectiveness.
The main difference between containers and VMs is in their architectural approach.
VMs and container architecture

As the diagram above makes evident, VMs run a full guest OS on top of the host OS, adding an extra layer that containers eliminate entirely.
If you’re like me, just think of Docker as a better VM where you can do tons of experimentation without having to worry about environment variables.
What is Docker?

What Docker really does is separate the application code from infrastructure requirements.
It does this by running each application in an isolated environment called a ‘container.’ This means developers can concentrate on the code running in the Docker container without worrying about the system it will ultimately run on, while DevOps can focus on ensuring the right programs are installed in the container, reducing the number of systems needed and the complexity of maintaining them after deployment.
Why should you care about it?

Every single Docker container begins as a pure vanilla Linux machine that knows nothing.
Then, we tell the container everything it needs to know — all the dependencies it needs to download and install in order to run the application.
This process is done with a Dockerfile.
For now, suffice it to say that Docker takes the guesswork (and hours spent debugging) out of deploying applications, because every deployment starts from the same fresh, isolated machine with the very same dependencies added to it.
No environments that have different versions of dependencies installed.
No environments missing dependencies entirely.
None of that nonsense with Docker.
Ease of use: Docker has made it much easier for anyone — developers, systems admins, architects, and others — to take advantage of containers in order to quickly build and test portable applications.
It allows anyone to package an application on their laptop, which in turn can run unmodified on any public cloud, private cloud, or even bare metal.
The mantra is: “build once, run anywhere.”
Speed: Docker containers are very lightweight and fast.
Since containers are just sandboxed environments running on the host’s kernel, they take up fewer resources.
You can create and run a Docker container in seconds, compared to VMs which might take longer because they have to boot up a full virtual operating system every time.
Docker Hub: Docker users also benefit from the increasingly rich ecosystem of Docker Hub, which you can think of as an “app store for Docker images.” Docker Hub has tens of thousands of public images created by the community that are readily available for use.
It’s incredibly easy to search for images that meet your needs, ready to pull down and use with little-to-no modification.
Modularity and Scalability: Docker makes it easy to break out your application’s functionality into individual containers.
For example, you might have your Postgres database running in one container and your Redis server in another, while your Node.js app is in a third.
With Docker, it’s become easier to link these containers together to create your application, making it easy to scale or update components independently in the future.
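As a sketch of how such linked containers might be wired together, here is a minimal Docker Compose file for the Postgres/Redis/Node.js example above. The service names, image versions, port number, and entry-point command are illustrative assumptions, not something prescribed by the article:

```yaml
# docker-compose.yml (illustrative sketch; names and versions are assumptions)
version: "3"
services:
  db:
    image: postgres:11        # Postgres running in its own container
  cache:
    image: redis:5            # Redis server in another container
  app:
    image: node:10            # Node.js app in a third container
    command: node server.js   # hypothetical entry point
    ports:
      - "3000:3000"           # expose the app to the host
    depends_on:               # start db and cache before the app
      - db
      - cache
```

With a file like this, `docker-compose up` would start all three containers, and each one can be scaled or updated independently.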
Getting started with Docker

Head to the Docker website and download Docker; if you’re using Windows 10 Home edition, you need Docker Toolbox.
Once you manage to install Docker, let’s try running an Ubuntu image on it (more on that later).
Now Docker allows you to either use an already pre-built image or build upon an existing image.
This building upon an existing image is really exciting.
You get to customize the image to contain only what you need, and work from there.
Before we start looking into Dockerfile, let’s ensure that our installation is complete.
Head to the Docker Quickstart Terminal. To ensure that our setup is correctly configured, let’s run a default image provided by Docker.
docker pull hello-world

To see the image you just pulled, type the command below:

docker image ls

And finally, the moment you were waiting for: Hello, world in Docker.

docker run hello-world

The Dockerfile: where it all begins

Docker is a powerful tool, but its power is harnessed through files called Dockerfiles (as mentioned above).
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Using docker build users can create an automated build that executes several command-line instructions in succession.
– Docker, Dockerfile Reference

A Docker image consists of read-only layers, each of which represents a Dockerfile instruction.
The layers are stacked and each one is a delta of the changes from the previous layer.
When a Docker container starts up, it needs to be told what to do; it has nothing installed and knows how to do nothing.
The first thing the Dockerfile needs is a base image.
A base image tells the container what to install as its OS — Ubuntu, RHEL, SuSE, Node, Java, etc.
Next, you’ll provide setup instructions.
These are all the things the Docker container needs to know about: environment variables, dependencies to install, where files live, etc.
And finally, you have to tell the container what to do.
Typically this means running the application that the setup instructions installed, with a specific command.
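The three stages just described (base image, setup instructions, what to run) can be sketched in a minimal Dockerfile. The Python runtime, environment variable, and app file here are hypothetical placeholders, not part of the original walkthrough:

```dockerfile
# 1. Base image: what OS/runtime the container starts from
FROM ubuntu:18.04

# 2. Setup instructions: dependencies, environment variables, files
RUN apt-get update && apt-get install -y python3
ENV APP_ENV=production
COPY app.py /home/app.py

# 3. What the container should do when it starts
CMD ["python3", "/home/app.py"]
```

Each instruction above becomes one read-only layer in the resulting image.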
I’ll give a quick overview of the most common Dockerfile commands next and then show some examples to help it make sense.
Trying Ubuntu on Docker

Here is a sample Dockerfile, complete with comments explaining each line and what’s happening layer by layer.
# Get the base ubuntu 18.04 from Docker Hub
# Head to https://hub.docker.com/_/ubuntu for other variations
FROM ubuntu:18.04

# Get the necessary updates
RUN apt-get update

# This ensures that the first directory opened once the image is built is /home
WORKDIR /home

Save this file as Dockerfile.
Now head to the Docker Quickstart Terminal and make sure your current directory is the one where the Dockerfile is stored.
docker build .
When you do this, Docker creates an image, but you’ll have to remember the random ID Docker assigns to it, since we have not named it yet.
docker build -t ubuntu1:latest .
This ensures that the image we just built is named ubuntu1.

docker image ls

Notice that the image size is just 111MB, compared to a VM, where we would have to allocate at least 10GB.
Also note that when we build without the -t flag, the repository name and tag show as <none>.
So when we try to use this image, we need to refer to it by its IMAGE ID.
-t basically takes the format repository:tag. If you omit the tag, Docker labels it as latest by default.
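As a tiny illustration (plain shell, not Docker itself), the default-tag rule can be mimicked like this; the image name ubuntu1 is just the example from above:

```shell
# Mimic how Docker fills in the default tag when none is given
name="ubuntu1"                 # what you passed to -t
case "$name" in
  *:*) full="$name" ;;         # a tag was provided, keep it as-is
  *)   full="$name:latest" ;;  # no tag -> Docker assumes :latest
esac
echo "$full"                   # -> ubuntu1:latest
```

So docker build -t ubuntu1 . and docker build -t ubuntu1:latest . produce the same result.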
And finally:

docker run --rm -it ubuntu1:latest

--rm ensures that the container is deleted immediately after it exits.
-it because we want to interact with the container using a terminal.
You can type all the commands that you type on your Ubuntu system in this docker terminal.
Conclusion

I hope you’re now equipped with the knowledge you need to start hacking Docker on your own system, and that you can realize the power of this incredible tool!