As soon as one hears about Kubernetes, or K8s, some people's minds run off to faraway lands wondering what this complex piece of technology really is. With this post, I will do my best to bring some clarity to the subject with the help of my favorite sitcom, The Office. This is for people who know nothing, know very little, or probably should know nothing about the technology but still want to know what the hype is about. It's for everyone. Also, a bit of a disclaimer.
As much as I would love to go the route of a 3000-word complete explanation of every word, concept, and technical term related to Kubernetes in one blog post, we will take a shortcut instead. We won't reinvent the wheel; we will discover this technology for ourselves with help from a bunch of other blogs, links, and videos that will help you understand the concept once and for all. Let's dig in.
Kubernetes is actually Greek for “pilot” (or “helmsman”)
Considering it's a container-orchestration system for automating application deployment, scaling, and management, that's quite an apt name. But that Wikipedia definition probably went over your head, so we will break it down into word-by-word bits that will be easier for your brain to consume. Kubernetes was created for the sole purpose of automating the management of containers. What are containers? Glad you asked.
Containers are lightweight, standalone, executable packages of software (not full virtual machines; they share the host's kernel). They pack everything that is needed to run an application: code, runtime, system tools, libraries, settings files, you name it and it's there. Containers have become a staple in application development and have been helping countless teams all over the world deploy their applications better since Docker popularized them back in 2013.
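To make "everything the application needs, packed together" concrete, here is a minimal sketch of a Dockerfile, the recipe Docker uses to build a container image. The app, file names, and base image here are hypothetical, just for illustration:

```dockerfile
# Hypothetical Dockerfile for a small Python web app.
# Start from a slim base image that already contains the Python runtime.
FROM python:3.11-slim

# Copy the dependency list and install the libraries the app needs.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself.
COPY app.py .

# Runtime, libraries, and code now travel together as one image.
CMD ["python", "app.py"]
```

Anyone who runs this image gets the exact same runtime, libraries, and code, which is why containers deploy so predictably across machines.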
Explain it to me with an example!
Let's say Aastha is building an application using containers, and it's working quite nicely. The container holds the runtime, the libraries needed by the application, and the application code itself (Why Containers). She wants to deploy her application for others to use and modify. She deploys it, and some time passes by.
Her application starts getting more users, and serving those users requires more resources. So instead of dealing with just a handful of containers, Aastha now has to work with hundreds of them to handle the additional load. This becomes too cumbersome, too fast. She needs a way to automate the process. Our humble pilot, Kubernetes, comes to the rescue. But how? Let's check that out in part 2 of this story.
Diving Deeper into the architecture!
In part 1, containers were getting the job done right up until wide-scale deployment, where they became a bit too hard to manage. Here is a recap of the reasons why.
- They need to be managed manually, one by one
- Networking between containers is hard to set up by hand
- Containers need to be scheduled, load-balanced, and distributed across machines
- Data needs to be stored somewhere persistent, and containers don't handle that well on their own
If these drawbacks could be handled by a system that automates application deployment, scaling, and management, life wouldn't be so hard. Here comes our main man, Kubernetes, with the architecture mentioned below. The architecture looks complex, but it is described amazingly well on the X-Team blog, and I have mentioned a video below as well.
If you don't want to read, then watch the video mentioned below first and then read the article. It will work some serious wonders for you, and you will need it to understand part 2 of Aastha's story.
Let me boil it down for you!
Just for the sake of completeness, I will recap the architecture, assuming you read the article or watched the video. Pods are nothing but a container, or a group of containers, running together. Multiple pods can run on a single worker node. Each worker node contains all the services necessary to manage networking between the containers, communicate with the master node, and assign resources to the containers scheduled on it.
And yes, the master node is the one we tell how our application should be deployed: which container image will be used in which pod, and how many replicas of that container we want. This is the “desired state management” of Kubernetes that you keep hearing about.
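To make "desired state" concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The app name, labels, image, and port are all hypothetical; the point is that we declare what we want, and Kubernetes works to make it so:

```yaml
# deployment.yaml — a hypothetical desired state we hand to the master node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aastha-app
spec:
  replicas: 3                 # "I want three copies of this pod running."
  selector:
    matchLabels:
      app: aastha-app
  template:                   # Pod template: what goes inside each pod.
    metadata:
      labels:
        app: aastha-app
    spec:
      containers:
        - name: web
          image: aastha/web-app:1.0   # hypothetical container image
          ports:
            - containerPort: 8080
```

Once this is applied (typically with `kubectl apply -f deployment.yaml`), Kubernetes continuously compares the actual state of the cluster against these three replicas and restarts or reschedules pods whenever reality drifts from the declaration.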
Kubectl is nothing but a command-line tool for talking to the master node. Kubelet is the agent that runs on each worker node and carries out the master node's instructions. If a worker node stops working, it is Kubernetes' job to reschedule its pods and keep the application deployed however possible. That takes a lot of overhead work off development teams, helping them focus more on fixing issues. Kubernetes can kill containers as easily as it creates them.
Back to our story: Kubernetes to the rescue!
Aastha does her research and finds that Kubernetes might be the best option for her application. For the folks reading this blog: story or not, do your due diligence and figure out whether K8s is actually the best option for you. The hype is real, but the solution is not for everybody.
Aastha starts designing her application around pods, where containers are grouped together to work as a unit. Once the application is ready, she gives the master node her pod definitions: how many pods, and exactly how she wants the application deployed. From then on, Kubernetes takes over completely, like a “pilot” (now you get it).
Kubernetes starts pods on the worker nodes. If one of the worker nodes goes down, Kubernetes automatically detects it and deploys the pods on another worker node. That's the beauty of it: it's all automated, and Aastha can focus on just her application. From the architecture, we can also see that kube-proxy handles the networking that routes traffic to our pods, whereas etcd is the key-value store where the K8s cluster keeps its state.
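To show what the kube-proxy side of networking looks like in practice, here is a minimal sketch of a Service manifest, the object that gives a stable address to an ever-changing set of pods. The names, label, and ports are hypothetical:

```yaml
# service.yaml — a hypothetical Service exposing Aastha's pods.
apiVersion: v1
kind: Service
metadata:
  name: aastha-app
spec:
  selector:
    app: aastha-app       # send traffic to any pod carrying this label
  ports:
    - port: 80            # port clients connect to
      targetPort: 8080    # port the container actually listens on
  type: LoadBalancer      # on cloud providers, also provisions an external load balancer
```

kube-proxy on each node keeps the routing rules for this Service up to date, so even as pods die and get rescheduled on other nodes, traffic to the Service keeps reaching healthy pods.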
Kubernetes is quite a complex, large-scale technology for automating the deployment, scaling, and management of application containers. Aastha is happy: she can now focus more on the features of her application, not the complexities of deployment, and deploy as many pods and replicas as she needs.
Well, that’s it for today!
There is a lot to learn and implement when it comes to containerization and orchestration at a large scale. Luckily, there are lots of resources provided by the great folks of the Cloud Native Computing Foundation, as well as by many experts in the field. Let me know down in the comments if you would like links to any of them, or if I missed out on something. I'm new to the community, so take it easy on me, you all.
Hope you like this post; it's something new I tried. It took a lot of time to compile, as I had to cut a lot of content for the sake of brevity. But as always folks, live in the mix.
I will be leaving you with one last video for summing this all up.