Kubernetes and its use cases in industry
Understanding Kubernetes Architecture and Its Use Cases

Since its release in 2014, Kubernetes has grown immensely in popularity, and adoption of this container orchestration tool continues to spread among IT professionals. As with any tool, though, understanding its architecture makes it far easier to use well.
Let’s go over the fundamentals of Kubernetes architecture, from what it is and why it is important to a deep dive into what it’s made of.
Kubernetes is a flexible container management system developed by Google that’s used to manage containerized applications in multiple environments. Initially introduced as a project at Google (as a successor to Google Borg), Kubernetes was released in 2014 to manage applications running in the cloud. The Cloud Native Computing Foundation currently maintains Kubernetes.
Kubernetes is often chosen for the following reasons:
- Kubernetes offers more mature orchestration infrastructure than many other DevOps tools
- Kubernetes breaks applications down into smaller containerized modules, enabling more granular management
- Kubernetes rolls out software updates frequently and seamlessly
- Kubernetes lays the foundation for cloud-native apps
Let us now begin with the introduction to the Kubernetes architecture.
Introduction to Kubernetes Architecture
Kubernetes architecture comprises the following components.
Cluster
- A collection of servers that combines available resources
- Includes RAM, CPU, disk, and devices
Master
- A collection of components that make up the control plane of Kubernetes
- Responsible for scheduling workloads and responding to cluster events
Node
- A single host, which can be a physical or a virtual machine
- Runs kube-proxy and kubelet, which are part of the cluster; a quick way to inspect these pieces on a live cluster is sketched below
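On a running cluster, these components can be inspected with kubectl. The commands below are a minimal sketch, assuming kubectl is already configured to talk to a cluster; `kubectl top nodes` additionally requires the metrics-server add-on.

```
kubectl get nodes -o wide          # every node in the cluster, with addresses and versions
kubectl get pods -n kube-system    # control-plane and system pods (scheduler, controller manager, kube-proxy, ...)
kubectl top nodes                  # CPU and RAM usage per node (requires metrics-server)
kubectl cluster-info               # address of the API server and core cluster services
```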
After going through the introduction to Kubernetes architecture, let us next understand the need for containers.
Need for Containers
With the ever-expanding presence of technology in our lives, downtime on the internet has become unacceptable. Developers therefore need ways to maintain and update the infrastructure behind the applications we rely on without interrupting the other services that people depend on.
The solution is container deployment. Containers work in isolated environments, making it easy for developers to build and deploy apps.
Docker Swarm vs. Kubernetes
| Category | Docker Swarm | Kubernetes |
| --- | --- | --- |
| Scaling | No auto-scaling | Auto-scaling |
| Load balancing | Automatic load balancing | Load balancing must be configured manually |
| Installation | Easy and fast | Long and time-consuming |
| Scalability | Cluster strength is weaker compared to Kubernetes | Cluster strength is strong |
| Storage volume sharing | Shares storage volumes with any other container | Shares storage volumes between multiple containers inside the same pod |
| GUI | Not available | Available |
Hardware Components
Nodes
A node is a worker machine in Kubernetes. It can be a virtual machine or a physical machine, depending on the cluster. The master manages the cluster, and each node contains the components required to run pods as part of the Kubernetes cluster.
In Kubernetes, there are two types of nodes: the master node and the slave (worker) node.

Cluster
Kubernetes does not work with individual nodes; it works with the cluster as a whole. A Kubernetes cluster is made up of the master and slave nodes and is managed as a single unit. There can be more than one cluster in Kubernetes.

Persistent Volumes
Kubernetes persistent volumes are administrator-provisioned volumes with the following characteristics:
- Allocated either dynamically or by an administrator
- Created with a particular file system
- Have a specific size
- Have identifying characteristics such as a volume ID and a name
A persistent volume’s lifecycle is independent of any pod that uses it, so the data remains available even after the pod is deleted. This makes persistent volumes suitable for data that must outlive a pod, unlike ephemeral volumes, which are intended for temporary storage.
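As a minimal sketch of how such a volume is declared and then claimed by a workload, the manifests below define an administrator-provisioned PersistentVolume and a matching PersistentVolumeClaim. The names, size, storage class, and hostPath location are illustrative assumptions, not values from this article.

```yaml
# Illustrative administrator-provisioned volume and a claim that requests it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                      # hypothetical name
spec:
  capacity:
    storage: 5Gi                     # the volume's specific size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual           # illustrative class used to match claim and volume
  hostPath:
    path: /mnt/data                  # illustrative backing store for a single-node test cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo                     # hypothetical name
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```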

Software Components
Containers
Containers are used everywhere because they create self-contained environments in which applications execute. Programs and their dependencies are bundled into images (from which containers are started) that can be shared and run anywhere. Multiple programs can technically be packed into one container, but the best practice is to limit each container to a single process. Containers run as isolated processes on the host’s Linux kernel.

Pods
A Kubernetes pod is a group of one or more containers deployed together on the same host. Pods operate one level of abstraction above individual containers, and the containers in a pod work together to serve a single application. Pods provide two types of shared resources, networking and storage, and are the unit of replication in Kubernetes.
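To make this concrete, here is a minimal sketch of a pod with two containers that share the pod’s network namespace and a common volume. The names, images, and paths are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                      # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                   # scratch volume shared by both containers
  containers:
    - name: web
      image: nginx:1.25              # illustrative image and tag
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-generator
      image: busybox:1.36            # illustrative image and tag
      command: ["sh", "-c", "echo 'hello from the pod' > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```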
Deployment
A deployment is a set of identical pods. It runs multiple replicas of an application, and if an instance fails, the deployment replaces it. Pods are not usually launched on a cluster directly; instead, they are managed through this additional layer of abstraction, which eliminates the manual management of pods.
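A minimal sketch of a deployment that keeps three identical replicas of the pod from the previous example running (again, names and images are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment               # hypothetical name
spec:
  replicas: 3                        # the deployment replaces any replica that fails
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25          # illustrative image and tag
          ports:
            - containerPort: 80
```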

Ingress
Ingress is a collection of routing rules that decide how external traffic reaches the services running inside a Kubernetes cluster. Ingress provides load balancing, SSL termination, and name-based virtual hosting.
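As a sketch, the manifest below routes requests for one hostname to a backing service and terminates TLS. The hostname, secret, and service names are illustrative assumptions, and an ingress controller must be installed in the cluster for the rules to take effect.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                  # hypothetical name
spec:
  tls:
    - hosts:
        - demo.example.com           # illustrative hostname
      secretName: demo-tls           # assumes a TLS secret of this name already exists
  rules:
    - host: demo.example.com         # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service    # assumes a Service of this name exists
                port:
                  number: 80
```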

Kubernetes Architecture
Kubernetes architecture is built around two kinds of nodes: the master node and the slave (worker) node.

Master
The master node is the most vital component of the Kubernetes architecture. It is the entry point for all administrative tasks. There is always at least one master node, and additional master nodes can be added for fault tolerance.
The master node has various components, such as:
- ETCD
- Controller Manager
- Scheduler
- API Server
- Kubectl
1. ETCD
- A distributed key-value store that holds the cluster’s configuration details and state
- Accessed through the API server, which reads and writes cluster state on behalf of the other components
- Serves as the source of truth the control plane uses to reconcile the cluster toward its desired state
2. Controller Manager
- A daemon (server) that runs in a continuous loop and is responsible for gathering information and sending it to the API Server
- Works to bring the current shared state of the cluster to the desired state
- The key controllers are the replication controllers, endpoint controller, namespace controllers, and service account controllers
- The controller manager runs controllers to administer nodes and endpoints
3. Scheduler
- The scheduler assigns the tasks to the slave nodes
- It is responsible for distributing the workload and tracks resource usage on every node
- It watches how resources are being used across the cluster and places workloads on nodes with available capacity.
4. API Server
- Kubernetes uses the API server to perform all operations on the cluster
- It is a central management entity that receives all REST requests for modifications, serving as a frontend to the cluster
- Implements an interface, which enables different tools and libraries to communicate effectively
5. Kubectl
- Kubectl is the command-line tool for controlling the Kubernetes cluster manager; a few example invocations are sketched below
Syntax: kubectl [command] [TYPE] [NAME] [flags]
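A minimal sketch of common kubectl invocations, assuming kubectl is configured against a cluster (the object and file names are illustrative):

```
kubectl get pods                         # list pods in the current namespace
kubectl apply -f web-deployment.yaml     # create or update objects from a manifest file
kubectl get deployments                  # list deployments and their replica status
kubectl describe pod web-pod             # detailed state and recent events for one pod
kubectl logs web-pod                     # print a pod's container logs
kubectl delete -f web-deployment.yaml    # remove the objects defined in the manifest
```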
Slave
The slave node has the following components:
1. Pod
- A pod is one or more containers controlled as a single application
- It encapsulates application containers and storage resources, is assigned a unique network IP, and carries the other configuration options that govern how its containers run
2. Docker
- One of the basic requirements of nodes is Docker
- It helps run the applications in an isolated, but lightweight operating environment. It runs the configured pods
- It is responsible for pulling down and running containers from Docker images
3. Kubelet
- A service responsible for conveying information to and from the control plane
- It gets the configuration of a pod from the API server and ensures that the containers are working efficiently
- The kubelet process is responsible for maintaining the working state of the node and reporting that status back to the master
4. Kubernetes Proxy
- Acts as a network proxy and load balancer for services on a single worker node
- Maintains network rules on the node and forwards requests to the correct pods (the kubelet, described above, is what manages pods, volumes, secrets, container creation, and health checks)
- A proxy service that runs on every node and makes services reachable from outside the cluster; a sketch of a Service manifest whose traffic kube-proxy routes follows this list
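kube-proxy is what implements Kubernetes Services at the node level: it programs the rules that route Service traffic to healthy pod endpoints. The sketch below exposes the pods from the earlier deployment example; the names and the node port are illustrative assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service                  # hypothetical name
spec:
  type: NodePort                     # reachable on every node's IP at the port below
  selector:
    app: web                         # matches the pod labels from the deployment sketch
  ports:
    - port: 80                       # port exposed inside the cluster
      targetPort: 80                 # container port on the backing pods
      nodePort: 30080                # illustrative external port (30000-32767 range)
```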
After going through the Kubernetes architecture, let us next understand its uses in the enterprise.
How is Kubernetes Being Used in the Enterprise?
Some companies integrate Kubernetes with their existing systems for better performance. Take BlackRock, for example. BlackRock needed more dynamic access to its resources because managing complex Python installations on users’ desktops was extremely difficult. Its existing systems worked, but the company wanted them to work better and to scale seamlessly. The core components of Kubernetes were hooked into the existing systems, which gave the support team better, more granular control of clusters.
While Kubernetes gives enterprise IT administrators better control over their infrastructure and, ultimately, application performance, there is a lot to learn to get the most out of the technology. If you would like to start a career or want to build upon your existing expertise in cloud container administration, Simplilearn offers several ways for aspiring professionals to upskill. If you want to go all-in and are already familiar with container technology, you can take our Certified Kubernetes Administrator (CKA) Training to prepare for the CKA exam. You can even check out the DevOps Engineer Master’s Program, which can help prepare you for a career in DevOps.
Case Study: Spotify
An Early Adopter of Containers, Spotify Is Migrating from Homegrown Orchestration to Kubernetes

Challenge
Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. “Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community,” he says.
Solution
“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.
Impact
The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. “A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we’ve heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
“We saw the amazing community that’s grown up around Kubernetes, and we wanted to be part of that. We wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” — JAI CHAKRABARTI, DIRECTOR OF ENGINEERING, INFRASTRUCTURE AND OPERATIONS, SPOTIFY
“Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations at Spotify. Since the audio-streaming platform launched in 2008, it has already grown to over 200 million monthly active users around the world, and for Chakrabarti’s team, the goal is solidifying Spotify’s infrastructure to support all those future consumers too.
An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs since 2014. The company used an open source, homegrown container orchestration system called Helios, and in 2016–17 completed a migration from on premise data centers to Google Cloud. Underpinning these decisions, “We have a culture around autonomous teams, over 200 autonomous engineering squads who are working on different pieces of the pie, and they need to be able to iterate quickly,” Chakrabarti says. “So for us to have developer velocity tools that allow squads to move quickly is really important.”
But by late 2017, it became clear that “having a small team working on the Helios features was just not as efficient as adopting something that was supported by a much bigger community,” says Chakrabarti. “We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that. We wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community.
“The community has been extremely helpful in getting us to work through all the technology much faster and much easier. And it’s helped us validate all the things we’re doing.” — DAVE ZOLOTUSKY, SOFTWARE ENGINEER, INFRASTRUCTURE AND OPERATIONS, SPOTIFY
Another plus: “Kubernetes fit very nicely as a complement and now as a replacement to Helios, so we could have it running alongside Helios to mitigate the risks,” says Chakrabarti. “During the migration, the services run on both, so we’re not having to put all of our eggs in one basket until we can validate Kubernetes under a variety of load circumstances and stress circumstances.”
The team spent much of 2018 addressing the core technology issues required for the migration. “We were able to use a lot of the Kubernetes APIs and extensibility features of Kubernetes to support and interface with our legacy infrastructure, so the integration was straightforward and easy,” says Site Reliability Engineer James Wen.
Migration started late that year and has accelerated in 2019. “Our focus is really on stateless services, and once we address our last remaining technology blocker, that’s where we hope that the uptick will come from,” says Chakrabarti. “For stateful services there’s more work that we need to do.”
A small percentage of Spotify’s fleet, containing over 150 services, has been migrated to Kubernetes so far. “We’ve heard from our customers that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes over 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Wen. Plus, Wen adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
“We were able to use a lot of the Kubernetes APIs and extensibility features to support and interface with our legacy infrastructure, so the integration was straightforward and easy.” — JAMES WEN, SITE RELIABILITY ENGINEER, SPOTIFY
Chakrabarti points out that for all four of the top-level metrics that Spotify looks at — lead time, deployment frequency, time to resolution, and operational load — “there is impact that Kubernetes is having.”
One success story that’s come out of the early days of Kubernetes is a tool called Slingshot that a Spotify team built on Kubernetes. “With a pull request, it creates a temporary staging environment that self destructs after 24 hours,” says Chakrabarti. “It’s all facilitated by Kubernetes, so that’s kind of an exciting example of how, once the technology is out there and ready to use, people start to build on top of it and craft their own solutions, even beyond what we might have envisioned as the initial purpose of it.”
Spotify has also started to use gRPC and Envoy, replacing existing homegrown solutions, just as it had with Kubernetes. “We created things because of the scale we were at, and there was no other solution existing,” says Dave Zolotusky, Software Engineer, Infrastructure and Operations. “But then the community kind of caught up and surpassed us, even for tools that work at that scale.”
“It’s been surprisingly easy to get in touch with anybody we wanted to, to get expertise on any of the things we’re working with. And it’s helped us validate all the things we’re doing.” — JAMES WEN, SITE RELIABILITY ENGINEER, SPOTIFY
Both of those technologies are in early stages of adoption, but already “we have reason to believe that gRPC will have a more drastic impact during early development by helping with a lot of issues like schema management, API design, weird backward compatibility issues, things like that,” says Zolotusky. “So we’re leaning heavily on gRPC to help us in that space.”
As the team continues to fill out Spotify’s cloud native stack — tracing is up next — it is using the CNCF landscape as a helpful guide. “We look at things we need to solve, and if there are a bunch of projects, we evaluate them equivalently, but there is definitely value to the project being a CNCF project,” says Zolotusky.
Spotify’s experiences so far with Kubernetes bear this out. “The community has been extremely helpful in getting us to work through all the technology much faster and much easier,” Zolotusky says. “It’s been surprisingly easy to get in touch with anybody we wanted to, to get expertise on any of the things we’re working with. And it’s helped us validate all the things we’re doing.”