Configuring A Multi-Node Kubernetes Cluster On AWS Cloud Using Ansible

Himanshi Kabra
6 min read · Apr 2, 2021

Configuring a multi-node Kubernetes cluster by hand is one of the more painful tasks out there. In this article, I am going to show you how to configure a multi-node Kubernetes cluster on the AWS cloud using Ansible.

With Ansible, the entire task becomes one of the easiest things to do, as the whole setup is just one command away.

But before automating anything with Ansible, it is essential to know how to set up the desired environment manually. So let us start with how the Kubernetes (k8s) cluster is configured on AWS, and then I will show you how to automate the same process with Ansible.

Also before starting, let’s have a quick overview of the technologies used in this article:

Kubernetes:

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Know More About Kubernetes: https://kubernetes.io/

Ansible:

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems and can configure both Unix-like systems as well as Microsoft Windows.

Know More About Ansible: https://www.ansible.com/

AWS:

Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.

Know More About AWS: https://aws.amazon.com/

We will configure everything using Ansible Roles.

We need to follow a step-by-step procedure to achieve this:

1. Launch three EC2 instances on the AWS cloud: one as the Master node and two as Slave/Worker nodes.

2. Configure the K8s Master.

3. Configure the K8s Slaves.

👨🏻‍💻 Ansible Configuration file “/etc/ansible/ansible.cfg”
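Before writing any role, the Ansible configuration file needs to point at the dynamic inventory and the SSH key used for the EC2 instances. A minimal sketch, assuming the inventory scripts live in an `inventory/` directory next to the playbooks (the key-pair path and remote user are illustrative and depend on your AMI):

```ini
[defaults]
inventory         = ./inventory          ; directory holding ec2.py / ec2.ini
host_key_checking = False
remote_user       = ec2-user             ; default user for Amazon Linux AMIs
private_key_file  = ~/keys/mykey.pem     ; hypothetical key-pair file
roles_path        = ./

[privilege_escalation]
become        = true
become_method = sudo
become_user   = root
```

Disabling host-key checking avoids interactive prompts for freshly launched instances that Ansible has never seen before.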

K8s manages our containers for us, but setting up the k8s cluster itself is a long and complicated task. So, in this article, we will create Ansible roles that provision a k8s cluster on AWS.

As the cluster grows, we may need to add more slave nodes, and typing in the IP of every new slave by hand quickly becomes tedious. So, we will use a dynamic inventory instead (I have covered the approach in an earlier article):

https://himanshikabra-cse22.medium.com/dynamic-configuration-of-target-nodes-using-variable-files-named-same-as-os-using-ansible-cb89b77edc2
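The dynamic inventory here relies on the classic `ec2.py`/`ec2.ini` scripts, which at the time of writing were archived in the Ansible 2.9 tree. A sketch of fetching them, assuming they go into the `inventory/` directory referenced by `ansible.cfg`:

```shell
# Download the EC2 dynamic-inventory script and its configuration file
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py -P inventory/
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini -P inventory/
chmod +x inventory/ec2.py    # the script must be executable to act as an inventory
```

The script reads AWS credentials from the environment (or boto profiles) and groups hosts by their EC2 tags, which is exactly why the tags set at launch time matter.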

Pre-requisites

  • Install the boto and boto3 Python libraries
# pip3 install boto boto3

Creating Roles

Roles are like an empty skeleton of an architecture: you add the actual steps by writing code into the YAML files present inside each role.

  • Ansible playbooks tend to be very similar: code used in one playbook is often useful in other playbooks as well.
  • Roles exist to make that code easy to re-use.
  • An Ansible role is composed of multiple directories, each of which contains one or more YAML files.
  • By default each directory holds a main.yml file, but a role can have more files when needed.
  • This standardized structure allows Ansible playbooks to automatically load predefined variables, tasks, handlers, templates, and default values located in separate YAML files.
  • Each Ansible role should contain at least some of the standard directories (tasks, handlers, defaults, vars, files, templates, meta), if not all of them.

Here we will create four roles: one for launching the EC2 instances, one for the configuration common to all nodes, one for the master configuration, and one for the slave configuration.
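The four role skeletons can be generated with `ansible-galaxy`, which creates the standard directory layout shown later in this article (role names match the ones used here):

```shell
ansible-galaxy init ec2_provisioning
ansible-galaxy init common_setup
ansible-galaxy init master_setup
ansible-galaxy init slave_setup
```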

Once the role skeletons are successfully created, we write the tasks for each respective role.

ec2_provisioning

After creating the role, we write the task for launching the EC2 instances.

./tasks/main.yml

We launch 3 instances: 1 as master and 2 as slaves. The tags are very important, as they are what the dynamic inventory groups the hosts by.
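A sketch of what `ec2_provisioning/tasks/main.yml` might contain, using the classic `ec2` module from Ansible 2.9 (replaced by `amazon.aws.ec2_instance` in newer collections). The AMI ID, subnet, key name, and region are placeholders, and the access keys are assumed to come from variables stored in an Ansible Vault file:

```yaml
- name: Launch the Kubernetes master node
  ec2:
    key_name: "mykey"                    # hypothetical key pair
    instance_type: t2.micro
    image: "ami-0123456789abcdef0"       # placeholder AMI ID
    wait: yes
    count: 1
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    region: ap-south-1
    aws_access_key: "{{ access_key }}"   # stored in an Ansible Vault file
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      Name: master                       # this tag drives the dynamic inventory group

- name: Launch the Kubernetes slave nodes
  ec2:
    key_name: "mykey"
    instance_type: t2.micro
    image: "ami-0123456789abcdef0"
    wait: yes
    count: 2
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    region: ap-south-1
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      Name: slave
```

With `ec2.ini` defaults, the `Name: master` and `Name: slave` tags produce the inventory groups `tag_Name_master` and `tag_Name_slave`, which the later plays target.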

common_setup

After creating the role, we write the tasks that install the software common to both the master and the slaves.

./tasks/main.yml
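A sketch of `common_setup/tasks/main.yml` for a RHEL-family image such as Amazon Linux 2; the yum repository URL is the upstream Kubernetes repo as it existed at the time of writing, and package versions are left unpinned for brevity:

```yaml
- name: Install Docker
  package:
    name: docker
    state: present

- name: Configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Install kubeadm, kubelet and kubectl
  package:
    name:
      - kubeadm
      - kubelet
      - kubectl
    state: present

- name: Copy the Docker daemon config (sets the systemd cgroup driver)
  copy:
    src: docker_daemon.json
    dest: /etc/docker/daemon.json
  notify: restart docker

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet
```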

./handlers/main.yml
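The handler notified by the daemon-config task can be as simple as restarting Docker so the new cgroup driver takes effect:

```yaml
- name: restart docker
  service:
    name: docker
    state: restarted
```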

./files/docker_daemon.json
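kubeadm expects the container runtime to use the systemd cgroup driver, so `docker_daemon.json` typically contains just this override:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```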

master_setup

After creating the role, we write the tasks that provision the master node.

./tasks/main.yml
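A sketch of `master_setup/tasks/main.yml`. The preflight checks are ignored because t2.micro instances have fewer CPUs and less RAM than kubeadm requires by default; the Flannel manifest URL is the one commonly used at the time of writing:

```yaml
- name: Initialize the cluster on the master
  shell: >
    kubeadm init --pod-network-cidr=10.244.0.0/16
    --ignore-preflight-errors=NumCPU
    --ignore-preflight-errors=Mem

- name: Set up kubeconfig for the login user
  shell: |
    mkdir -p $HOME/.kube
    cp -f /etc/kubernetes/admin.conf $HOME/.kube/config

- name: Deploy the Flannel pod network
  shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the slaves
  shell: kubeadm token create --print-join-command
  register: join_command      # read later from the slave play via hostvars
```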

slave_setup

After creating the role, we write the tasks that provision the slave nodes.

./tasks/main.yml
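A sketch of `slave_setup/tasks/main.yml`, assuming the join command was registered on the master in the same playbook run and that the dynamic inventory created a `tag_Name_master` group from the EC2 tags:

```yaml
- name: Join this node to the cluster using the command generated on the master
  shell: "{{ hostvars[groups['tag_Name_master'][0]]['join_command']['stdout'] }}"
  args:
    creates: /etc/kubernetes/kubelet.conf   # skip if the node has already joined
```

The `creates` argument makes the task idempotent, so re-running the playbook does not try to join an already-joined node.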

The structure of roles and playbooks will be like this:

.
├── ansible.cfg
├── common_setup
│   ├── defaults
│   │   └── main.yml
│   ├── files
│   │   └── docker_daemon.json
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
├── config.yml
├── ec2_provisioning
│   ├── defaults
│   │   └── main.yml
│   ├── files
│   │   └── main.yml
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
├── ec2.yml
├── inventory
│   ├── ec2.ini
│   └── ec2.py
├── master_setup
│   ├── defaults
│   │   └── main.yml
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
├── README.md
└── slave_setup
    ├── defaults
    │   └── main.yml
    ├── handlers
    │   └── main.yml
    ├── meta
    │   └── main.yml
    ├── README.md
    ├── tasks
    │   └── main.yml
    ├── tests
    │   ├── inventory
    │   └── test.yml
    └── vars
        └── main.yml

Now, we just need to run the playbook and see the magic.

First, we will run the EC2 playbook. As the AWS credentials are stored in an Ansible Vault, we have to provide the vault password as well.
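Assuming `ec2.yml` is the playbook that applies the `ec2_provisioning` role, the run looks like:

```shell
ansible-playbook ec2.yml --ask-vault-pass
```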

Now, we will run the playbook to provision the master and slave node.
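Assuming `config.yml` applies the common, master, and slave roles against the groups discovered by the dynamic inventory, this run needs no vault password:

```shell
ansible-playbook config.yml
```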

output

The playbook has run successfully. Now, let us check whether everything has been configured properly.

slave_node

First, we will check whether the required services are running on the slave nodes.
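On a slave node, the two services configured by the common role can be checked with systemd (both should report active (running)):

```shell
systemctl status docker
systemctl status kubelet
```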

As all the services are running successfully, we will log in to the master node for further confirmation.

master_node

As we can see, all the nodes are connected and in the Ready state. So, let's launch a pod for further confirmation.
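The checks on the master look roughly like this; the deployment name `myweb` and the `httpd` image are illustrative choices, not something fixed by the setup:

```shell
kubectl get nodes                            # master plus two slaves, all Ready
kubectl create deployment myweb --image=httpd
kubectl get pods                             # the myweb pod should reach Running
```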

We can see all the system pods are running, and the pod launched by us is running too. So, let's expose it using a Service and check the result.
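Exposing the hypothetical `myweb` deployment with a NodePort Service makes it reachable on every node's public IP:

```shell
kubectl expose deployment myweb --port=80 --type=NodePort
kubectl get svc myweb    # note the assigned NodePort, e.g. 80:3XXXX/TCP
# then browse to http://<any-node-public-ip>:<nodeport>/
```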

website

As we can see, everything is working perfectly. Hence, we can say the configuration was successful.

Thank You
