Kubernetes Beginner’s Guide Part 1: Introduction

In the past few installments, we looked at how we can run our containerized applications in Azure. If you missed those, you can check them out here:

We ended that series by figuring out how we can run our app in AKS, but it assumed that you were already familiar with the basics of Kubernetes. In this new series, we’ll take a step back and learn what Kubernetes is all about. In this introductory post, we’ll answer the what and why questions and end with a glossary of some of the common concepts and terms associated with Kubernetes. In future posts, we’ll apply those learnings by running our own cluster and creating those artifacts ourselves to run our containerized app within it.

What is Kubernetes?

Kubernetes, often abbreviated as K8s, was originally developed by Google and is now maintained by the Cloud Native Computing Foundation. It helps you manage containerized applications (such as those built with Docker) more efficiently. Containers are a lightweight, portable, and self-sufficient way of running applications, ensuring that they work seamlessly across different computing environments.

Why Kubernetes?

While Kubernetes, or any other orchestration engine, is not a prerequisite for containerized applications, it is certainly worth considering for any sizable application. When you compose multiple container pieces, as in a microservices architecture, the health and availability of those pieces are integral to the successful running of your app. While you could care for and feed each of them individually, a more prudent approach is to let Kubernetes or another orchestrator do that work for you. With Kubernetes, you define the desired state of your application declaratively, in the form of YAML files. When you apply that configuration, Kubernetes works to maintain the desired state. If a pod becomes unresponsive, the Kubernetes engine does the necessary work to have another pod take its place. With Kubernetes, you get:

  • High Availability: Keeps your application available even when individual components fail.
  • Scalability: Easily scale your application up or down based on demand.
  • Portability: Run your applications on public, private, hybrid, or multi-cloud environments.
  • Self-healing: Automatically restarts containers that fail, replaces and reschedules containers when nodes die.
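To make the idea of declaring desired state concrete, here’s a minimal sketch of a Deployment manifest. The names and image are placeholders I’ve made up for illustration; we’ll build real ones later in the series. The key point is that you declare *what* you want (three replicas of this container), and Kubernetes figures out *how* to get and keep it there.

```yaml
# A hypothetical Deployment declaring a desired state of three replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 3               # desired state: always keep 3 pods running
  selector:
    matchLabels:
      app: my-app
  template:                 # the pod template Kubernetes stamps out
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry/my-app:1.0   # placeholder image
          ports:
            - containerPort: 80
```

If a pod crashes or a node dies, Kubernetes notices the actual state no longer matches the declared three replicas and schedules a replacement automatically.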

Kubernetes Components

  • Pods: The smallest deployable units in Kubernetes, which can contain one or more containers. Generally, you’ll see a 1-to-1 correlation between a pod and a container. However, you can certainly put more than one in a pod. One common use-case for this is running a sidecar container within a pod to take care of logging and other ancillary functions of a container.
  • Nodes: Worker machines in Kubernetes where pods are deployed. Nodes can be virtual or physical machines. A three-node configuration is common in basic setups.
  • Cluster: A set of Nodes that run containerized applications. The Kubernetes cluster coordinates all its parts.
  • Deployment: Manages the creation and updating of Pods.
  • Service: An abstract way to expose an application running on a set of Pods as a network service.
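As a sketch of how the Service piece fits with the pods above, here’s a minimal Service manifest. Again, the names are placeholders: the `selector` matches pods by their labels, so traffic to the Service is load-balanced across whichever pods currently carry the `app: my-app` label.

```yaml
# A hypothetical Service exposing pods labeled app: my-app inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service      # placeholder name
spec:
  type: ClusterIP           # internal-only access; other types expose externally
  selector:
    app: my-app             # routes to any pod with this label
  ports:
    - port: 80              # port the Service listens on
      targetPort: 80        # port on the container to forward to
```

Because the Service targets labels rather than specific pods, it keeps working as pods are replaced or rescheduled, which is exactly the self-healing behavior described above.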

Conclusion

Kubernetes is a powerful tool for managing containerized applications, but it can be complex and overwhelming when starting out. In the coming weeks, we’ll dive a bit deeper and create each of these components ourselves. Stay tuned.
