Demystifying Kubernetes (K8s): The Office Version.

As much as I would love to write a 3000-word explanation of every word, concept, and technical term in a single blog post, we will take a shortcut with a bunch of links, videos, and diagrams, trusting that you will Google anything you didn't understand or ask about it in the comments.

Kubernetes is Greek for helmsman, or pilot.

Considering it's a container-orchestration system for automating application deployment, scaling, and management, it's quite an apt choice. Kubernetes was created for the sole purpose of automating the management of containers. So let's first go over what containers are.

Containers are basically lightweight, standalone packages, a bit like stripped-down virtual machines that share the host's operating system kernel. They pack everything needed to run an application: code, runtime, system tools, system libraries, settings. You name it, and it's there.

Jello is the container; the stapler is the code, libraries, and files, all packed together in one place.

Let's say Aastha is building an application using containers, and it's working quite nicely. The container holds the OS libraries, the runtime needed by the application, and the application code itself (Why Containers). She wants to deploy her application for others to use and modify. Her application starts getting more users, and serving those users requires more resources. More users means more problems: instead of dealing with just a handful of containers, Aastha now has to work with hundreds. That becomes too cumbersome too fast. She needs a way to automate the process. Our humble pilot comes to the rescue. Over to part 2.
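To make "packing everything together" concrete, here is a minimal sketch of what a container image definition could look like for a small Python app. The file names and base image are hypothetical, just for illustration:

```dockerfile
# Base image bundles the OS layer and the Python runtime
FROM python:3.11-slim

# Copy the application's library list and install dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself
COPY app.py .

# Everything the app needs now lives inside this one image
CMD ["python", "app.py"]
```

Building this file produces a single image that runs the same way on Aastha's laptop or on any server, which is exactly the portability that makes containers attractive.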

Diving Deeper into the architecture!

In part 1, containers were totally getting the job done, except when it came to wide-scale deployment, where they became a bit too hard to manage. Here is a recap of the reasons why.

  1. They need to be managed manually, one by one
  2. Networking between containers is hard
  3. Containers need to be scheduled, load balanced, and distributed
  4. Data needs to be stored somewhere persistent
Pretty complex at first glance. Source: X-Team

This architecture is amazingly described here. If you don't want to read, watch the video below first and then read the article; it will work some serious wonders. You will need it for part 2.

Just for the sake of completeness: pods are nothing but a container, or a group of containers, running together. Multiple pods can run on a single worker node. Worker nodes contain all the services necessary to manage networking between the containers, communicate with the master node, and assign resources to the scheduled containers. The master node is what we tell how our application needs to be deployed: which container image will be used in which pod, and how many replicas of that container to run. This is the "Desired State Management" of Kubernetes.
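To give a feel for how a pod is described to the master node, here is a minimal sketch of a pod definition in YAML. The pod name and image are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aastha-app            # hypothetical pod name
spec:
  containers:
    - name: web               # one container inside this pod
      image: aastha/app:1.0   # hypothetical container image
      ports:
        - containerPort: 8080 # port the app listens on
```

In practice, you rarely create bare pods like this; you describe a desired state (such as a Deployment) and let Kubernetes create and replace the pods for you.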

Kubectl is nothing but a command-line tool for interfacing with the master node. Kubelet is the agent running on each worker node through which the master node issues its commands. If a worker node stops working, it is Kubernetes's job to reschedule, manage, and scale the application deployment however possible, taking a lot of overhead off development teams and helping them focus on fixing real issues. Kubernetes can kill containers as easily as it manages them.

Kubernetes thrills, but it can also kill containers just as easily.

Kubernetes to the rescue!

Aastha does her research and finds that Kubernetes might be the best option for her application. For folks reading this blog: story or not, do your due diligence to figure out whether K8s is the best option for you. The hype is real, but the solution is not for everybody.

Aastha starts designing her application around pods, where containers come together to work as a unit. Once the application is ready, she gives the master node her pod definitions: how many she wants, and exactly how she wants them deployed. From then on, Kubernetes takes over completely.
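Aastha's instructions to the master node could be sketched as a Deployment manifest like the one below. The names and image are hypothetical; the `replicas` field is the "desired state" that Kubernetes will keep enforcing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aastha-app
spec:
  replicas: 3                 # desired state: keep 3 pods running at all times
  selector:
    matchLabels:
      app: aastha-app
  template:                   # pod template Kubernetes stamps out for each replica
    metadata:
      labels:
        app: aastha-app
    spec:
      containers:
        - name: web
          image: aastha/app:1.0   # hypothetical container image
```

She would hand this to the master node with `kubectl apply -f deployment.yaml`. If a pod (or the node under it) dies, Kubernetes notices that the actual state no longer matches the desired three replicas and starts a replacement.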

Kubernetes starts the pods on worker nodes. If one of the worker nodes goes down, Kubernetes automatically detects it and deploys the pods on another worker node. And that's the beauty of it. From the architecture, we can see that kube-proxy provides networking and routes traffic to our pods, while etcd persistently stores the state of our K8s cluster. Kubernetes is quite a complex, large-scale technology for automating, deploying, and scaling application containers. Aastha is happy: she can now focus on the features of her application rather than the complexities of deployment, running as many pods and replicas as she needs.

A pod with 3 different containers working as a unit, for the Assistant to the Regional Manager.

Well, that’s it for today!

There is a lot to learn and implement when it comes to containerisation and orchestration at a large scale. Luckily, there are lots of resources provided by the Cloud Native Computing Foundation, as well as by many experts in the field. Let me know down in the comments if you would like links to any of these.

Hope you liked this. It took a lot of time to compile, as I had to keep a lot of topics short for the sake of brevity. But as always folks, live in the mix.

I will be leaving you with one last video for summing this all up.

By far, the best video on the topic for beginners!
