xkcd comic (Source: xkcd.com/1988)

Docker and Kubernetes

As developers, we work with several tools and platforms that help us ship our end product. Managing everything we need can become tedious and, most importantly, time consuming. Between setting up the database and the web server and downloading all the required dependencies, a lot of time is wasted before we can even start coding and getting things done.

As a Mac user I lost a few hours at the beginning of this semester because our Capstone project is based on the .NET framework. I had to spend extra time setting up a virtual machine so I could run MSSQL, since it does not run natively on macOS. And that is just the beginning: software engineers around the globe often need to test their applications in different environments to see if they behave as they should. Imagine setting up a virtual machine every time you had to develop an application. Not only does a virtual machine take time to set up, it doubles our work, because we still need to redo all the setup we usually do on our host machine.

That is where containers come into play, especially Docker. Application containerization is nothing new; the technology has been evolving since the 1970s. Docker, on the other hand, has been around since 2013, and since then it has revolutionized how we work with containers. Just as system administrators need software like VMware ESXi to manage the hundreds of virtual machines their users rely on, we need something to do the same thing with our containers. That is where a container-orchestration system like Kubernetes comes into play. In this blog post we will go through the basics of Docker and Kubernetes in order to gain a proper understanding of how these systems work; in the second blog post we will demonstrate how both environments work hand in hand to achieve a scalable, stable, and robust setup.

🐳 What is Docker? 🐳

Software Cycle
As mentioned in the introduction, a project relies on several frameworks and dependencies. Let's assume we are done writing our code and it is time to send it to the QA team so they can test it and point out bugs and errors. Once they receive the code, they might not have the same dependency versions, or they might even run the application on a different OS. After spending some time building an environment identical to the dev team's, they are finally able to test it; the next step is deployment, which can be yet another headache for the DevOps team.

Docker is a tool meant to help us build new projects, test them, and deploy them using containers. We can think of a container as a virtual machine, but rather than creating a whole guest operating system on top of a host OS and a hypervisor, we only need Docker installed, which lets all containers share the same Linux kernel. Docker can save us a huge amount of time and infrastructure cost, and since Docker is open source we won't need to worry about licensing. On top of saving time and money, our resources won't be overused: instead of running multiple resource-heavy virtual machines on our development server, we can simply run multiple Docker containers that host our services without being resource-hungry. Finally, these are some benefits of using Docker containers:
  • Docker runs applications in an isolated environment.
  • Apps are developed, run, and deployed in the same environment.
  • Docker uses less memory.
  • Docker images can be built to contain only the bare minimum our app needs, for example by basing them on Alpine Linux images.
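To make that last point concrete, here is a sketch of a minimal Alpine-based Dockerfile; the application name and files are hypothetical placeholders, not part of this post's project:

```dockerfile
# Start from Alpine Linux, a base image of only a few megabytes.
FROM alpine:3.19

# Install only the runtime our app needs, nothing more.
# --no-cache keeps the package index out of the final image.
RUN apk add --no-cache python3

# Copy the (hypothetical) application into the image.
WORKDIR /app
COPY app.py .

# Command run when a container is created from this image.
CMD ["python3", "app.py"]
```

Every instruction adds a layer to the image, so keeping the list short keeps the image small.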

Docker Architecture

docker architecture

Now that we have a brief understanding of what Docker is, we can start taking a look at its main components:

  • Docker Registries: this is where we store our images; there are public and private registries. Several image registries exist, but the most popular one is Docker Hub, and we can also deploy our own on-premises registry. Just as GitHub stores our code, Docker registries store Docker images.
  • Docker Images: an image is a read-only template that provides instructions on how to create a Docker container. We can think of it as a class or a blueprint. Usually we start our projects by building an image on top of already existing images.
  • Docker Containers: this is the runnable version of our image. We can think of it as our virtual machine.
  • Docker Daemon: listens for API requests and manages objects such as volumes, containers, images, etc.
  • Docker Client: this is how most users interact with Docker, whether through the docker CLI or the Docker Dashboard.

Install Docker

Docker is available for a wide range of operating systems, even for the new Apple M1 chip, which shows how eager the community is to make this technology available to everyone. Before watching the video on how to install Docker, please head to https://docs.docker.com/get-docker/


In the next brief video we will walk through the Docker Dashboard, a user interface that helps us work with Docker. In the following sections we will also look at how to use the command line to achieve the same things while having more control over Docker.

Hands On Tutorial (CLI/Dashboard Tutorial)

Working with Images

So far we have learned about Docker's architecture and its main components; let's take a closer look at how things work. Before creating a container we need an image. To get one we run docker pull name-of-image, and Docker will look for an image with that name on Docker Hub, our default registry. For the purpose of this tutorial we will create a new container for WordPress (the most popular blogging platform): docker pull wordpress will pull the image and store it locally so we can create a new container from it.

Now that we have an image, we can list all the images available locally by running docker images -a. We can also delete an image with docker image rm image-name. If you face any issues or get stuck somewhere, append --help to any command and you will get some guidance; we can try that with the container command, as we are about to create our first container from the CLI.
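Putting the commands above together, a typical image-management session looks like this (assumes a local Docker installation with the daemon running; output omitted):

```shell
# Pull the official WordPress image from Docker Hub.
docker pull wordpress

# List every image stored locally, including intermediate ones.
docker images -a

# Remove an image we no longer need.
docker image rm wordpress

# Ask any command for its documentation.
docker container --help
```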

Working With Containers

We are now ready to create our first container, how awesome is that 😎😎😎 !!! In our case we are creating a WordPress container; following the official documentation, we can create it by simply running docker run --name some-wordpress -p 8686:80 -d wordpress. Let's analyze together the meaning of each part of this command:

  • docker run: we are calling the command responsible for creating and starting a new container for us.
  • --name: this argument gives a name to our container. If we do not include it, Docker generates a random name for us. It is best practice to name your containers, especially when you work with different names and versions of containers, where things can get tricky.
  • -p: this argument maps one or more network ports between the host and the container. If we look at the picture below we can see how port exposure works. In Docker, whenever we expose a port, drive, etc. between the host and the container, the left side of the colon is the host resource and the right side is the container resource. In our case we are exposing port 80 of our container on port 8686 of our host.
docker ports
  • -d: this flag instructs Docker to run the container in detached mode, i.e. in the background, as we won't be giving any inputs or reading outputs from the Docker CLI.
  • wordpress: finally, we give the name of the image we would like to use.
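The command explained above can be run and verified like this (assumes Docker is running locally; the name and port are the ones chosen above):

```shell
# Create and start a detached WordPress container,
# mapping host port 8686 to container port 80.
docker run --name some-wordpress -p 8686:80 -d wordpress

# Confirm the container is running and inspect the port mapping.
docker ps

# The site should now answer on the host port.
curl -I http://localhost:8686
```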

☸️ What is Kubernetes? ☸️

Kubernetes
Source: Level Up Coding
As our projects grow we might end up with hundreds of containers, and in order to keep everything working properly without losing our sanity we need a tool that helps us manage it all. In other words, we need an orchestrator that makes sure everything is running as it should. Kubernetes is an open-source platform, originally built by Google, that is responsible for managing containerized services.

Why use Kubernetes?

Let's say we would like to maintain more than one WordPress website. After a few months we have more than thirty WordPress containers and their databases, and even though our server is powerful we experience some downtime from time to time for different reasons. Some team members came up with a few scripts to monitor and recover from the downtime, but that is not enough: we need to forward requests from one container to another and balance the load, and our team cannot keep up with everything going on. With Kubernetes things are different, as the system takes care of load balancing, recovery, and making sure we do not experience any downtime. Kubernetes allows us to scale our applications and services by managing them properly. In addition, we can deploy some containers on-premises and some on the cloud; as long as they are all part of the same cluster, we will be able to manage them via Kubernetes.
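As a sketch of what "the system will take care of it" looks like in practice, this is roughly the manifest we would hand Kubernetes to keep three WordPress replicas alive; the labels are illustrative assumptions, and the image is the same one we pulled with Docker:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 3              # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: wordpress       # which pods this deployment manages
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress     # same image we pulled with Docker
          ports:
            - containerPort: 80
```

If a pod crashes or a node goes down, Kubernetes notices the replica count dropped below three and starts a replacement, which is exactly the recovery work the monitoring scripts were trying to do by hand.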

Kubernetes Architecture

kubernetes architecture
Kubernetes is a huge system that cannot be summarized in one section, so for the purpose of this part we will only cover the components most important to us. Just as with Docker's architecture, we cover only the most relevant parts, enough to understand both systems and build our own mini project in the next blog post.

Let’s take a look at a few major components.

  • API Server: similar to the Docker API, this service is responsible for receiving requests related to pods, nodes, services, ingress, etc. It is the heart of Kubernetes.
  • Scheduler: this process is responsible for assigning pods to the worker nodes.
  • Node: these are our worker machines; they can be either virtual or physical machines.
  • Pod: this is a single instance of a running application, for example a Docker container. Pods are not limited to Docker containers; they can wrap any type of container.
  • Networking
    • Service: we can look at it as our network layer; a service has a permanent IP address. We don’t want networking to break when a pod terminates, which is why the lifecycles of pods and services are decoupled. Services also behave as load balancers, routing requests to the available pods and workers.
      • External Services: this type of service is used when we would like to expose our application to external inbound connections.
      • Internal Services: we use this type of service when we only want to allow internal connections, for example between our application and the database.
    • Ingress: if we would like to allow HTTP connections to our application, the ingress receives the request and forwards it to our service.
  • ConfigMap: external configuration for our containerized applications.
  • Secret: this component stores sensitive configuration such as database usernames and passwords; everything is base64-encoded.
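Because Secret values are only base64-encoded, not encrypted, they are easy to produce and just as easy to decode; the value 'admin' below is a made-up example:

```shell
# Encode a value for use in a Secret manifest
# (-n stops echo from appending a newline to the value).
echo -n 'admin' | base64
# prints: YWRtaW4=

# Decoding is just as simple, which is why base64 in a Secret
# should never be mistaken for real encryption.
echo -n 'YWRtaW4=' | base64 --decode
# prints: admin
```

Anyone with read access to the manifest can recover the plain value, so access to Secrets should be restricted.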

Kubernetes Locally

Whenever we need to test a cluster configuration, we need a way to imitate a real cluster. Fortunately, we are not required to provision several machines to do so. All we need for testing purposes is Minikube, which runs both the worker and the master node in a local virtual machine. And just like the Docker CLI, we have kubectl to help us interact with our cluster; we can use the CLI on its own or one of the several open-source dashboards.
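A first session with a local cluster might look like this (assumes Minikube and kubectl are installed; output omitted):

```shell
# Start a single-node cluster in a local VM or container.
minikube start

# Verify the node came up.
kubectl get nodes

# Run the same WordPress image, this time as a deployment.
kubectl create deployment wordpress --image=wordpress

# Watch the pod being scheduled and started.
kubectl get pods
```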

What’s Next?

In this post we solidified our knowledge of Docker and Kubernetes. We learned how to containerize an application and why that is more efficient and faster than deploying the same application on an actual virtual machine. As usual, the best software is the software that can scale to thousands of users around the globe: containers let us save time and resources when running our services, but we still needed an orchestrator to manage the hundreds of containers we have, until Google came up with Kubernetes. In the next post we will create a cluster of nodes to showcase our knowledge of both systems and, mainly, to prove that Docker and Kubernetes are not rivals: when combined, they make the best team. I hope this post taught you something new, and I am looking forward to having you read the next part.
