Configure a K8s Multi-Node Cluster on AWS Cloud using Ansible with Dynamic Inventory
Hello connections!
In this blog, I will give a complete overview of configuring a Kubernetes multi-node cluster on AWS Cloud using Ansible with a dynamic inventory.
Preconfiguration
Install Ansible, set up the dynamic inventory, and set the paths to the roles, the private key, and the inventory, along with privilege escalation. You can configure the entire setup by using the following article link.
What is Kubernetes?
Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.
Kubernetes does not launch applications itself; it only helps the containers that launch the application.
How does Kubernetes work?
A user requests services or resources through the kubectl program, which contacts the Kubernetes API server. The API server then contacts the scheduler, which schedules pods onto the slave nodes, i.e. the scheduler decides on which slave node a pod is to be launched.
The scheduler also works with the Kube Controller Manager (KCM), which manages all the controllers of Kubernetes. To launch a pod on a slave node, the master needs to contact a program running on that node: the kubelet, which sits between the master node and the slave node.
To reach the containers on the slave node from the master, the kubelet contacts the Container Runtime Interface (CRI), which provides a standard interface to the container runtime.
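For example, this entire flow is triggered behind the scenes by a simple request like the following (illustrative commands, not part of the setup below):
#kubectl run mypod --image=nginx
#kubectl get pods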
Why do we need a multi-node cluster?
On a single node, all the programs and resources that monitor the pods run together; if that node goes down, the entire service is lost and we face a single point of failure. So it is recommended to use a multi-node cluster.
After that, we have to update Ansible's configuration file (ansible.cfg) in Ansible's root directory, /etc/ansible:
#vim /etc/ansible/ansible.cfg
Update the roles path to where your Ansible roles are located, and update the inventory path to where your downloaded dynamic inventory script is located.
Ansible works over the SSH protocol: it logs in to the instance to configure it, so we have to provide a username via remote_user. While doing SSH we normally have to accept the host key to log in to the OS; Ansible won't answer that prompt for us, so we have to disable host key checking.
You can see that I provided a private key to log in to the instance, along with privilege escalation.
In Linux, the root user is the one who can install packages, but the instance has only a normal user, ec2-user. To get the power of root we have to give it sudo power; the concept of giving root power to a normal user with sudo is called privilege escalation.
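For reference, here is a minimal sketch of what the relevant parts of ansible.cfg might look like; the exact paths and the key file name are assumptions for illustration:
[defaults]
inventory = /etc/ansible/inventory          # directory containing the dynamic inventory script
roles_path = /etc/ansible/roles             # where the Ansible roles live
remote_user = ec2-user                      # default login user on Amazon Linux instances
private_key_file = /etc/ansible/mykey.pem   # private key used for SSH login (assumed name)
host_key_checking = False                   # skip the interactive host key prompt

[privilege_escalation]
become = true                               # escalate ec2-user to root via sudo
become_method = sudo
become_user = root
become_ask_pass = false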
Ansible Role
In Ansible, a role helps you manage playbook code. Sometimes we need templates for dynamic configuration, var files so the values of variables are easy to change, plus tasks and handlers; written in a single playbook, all of this makes the playbook very large. To manage everything we create roles, where each piece lives in its own place, we can run a group of roles at a time from a setup playbook, and sharing them with other users becomes easy, since they can reuse the roles again.
You can create an ansible role with the following command
#ansible-galaxy init <role_name>
List all the Ansible roles available in the roles path with the following command:
#ansible-galaxy role list
In the ec2-instance-master role, update all the task code in main.yml inside the tasks folder:
#vim /ec2-instance-master/tasks/main.yml
In the vars folder we have a main.yml file; update it with the values of the variables declared in the task file:
#vim /ec2-instance-master/vars/main.yml
Everything related to launching the EC2 instance is written in the tasks main.yml file, and the related variables are written in the vars main.yml file. You can see that the variables access_key and secret_key are not kept in the vars main.yml file, because anyone who had those values could directly access my account; for that reason I created an Ansible vault.
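To make the structure concrete, here is a minimal sketch of what the launch task might look like, using the classic ec2 module; the variable names are assumptions for illustration, and only access_key and secret_key come from the vault:
- name: launching ec2 instance for master
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"    # t2.micro in this setup
    image: "{{ image_id }}"                 # AMI id, e.g. Amazon Linux 2
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    wait: yes
    count: 1
    instance_tags:
      Name: kubernetes_master               # the dynamic inventory groups hosts by this tag
    aws_access_key: "{{ access_key }}"      # comes from the ansible vault (secure.yml)
    aws_secret_key: "{{ secret_key }}"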
The Ansible vault is created with the following command; it encrypts the values so no one can access them without the credentials:
#ansible-vault create <filename>.yml
Variables are stored in the vault file in <variable>: <value> format.
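For example, secure.yml would hold just the two credential variables (placeholder values shown here; never write real keys in a blog or repo):
access_key: XXXXXXXXXXXXXXXXXXXX
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx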
Update the code in the ec2-instance-slave role the same way; essentially only the instance tag changes (Name: kubernetes_slave instead of kubernetes_master).
Refer to this article for a full explanation of configuring the K8s multi-node cluster manually.
On the master node we also need to configure the container runtime; for that I am installing Docker.
- name: installing docker
  package:
    name: docker
    state: present
Starting and enabling the Docker service:
- name: starting docker services
  service:
    name: "docker"
    state: started
    enabled: yes
Kubeadm is the official program that helps install Kubernetes; for that we have to configure the yum repository.
- name: configuring yum for kubeadm
  yum_repository:
    name: "kubernetes"
    description: "repo for kubernetes"
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: yes
    repo_gpgcheck: yes
    gpgcheck: yes
    gpgkey:
      - "https://packages.cloud.google.com/yum/doc/yum-key.gpg"
      - "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"
Install kubelet, kubeadm and kubectl simultaneously; as in the official installation guide, we disable the excludes for the kubernetes repo during installation.
- name: Installing kubeadm, kubectl and kubelet
  yum:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present
    disable_excludes: "kubernetes"
The master can also be used to launch pods. kubelet is the program you need to install on both the master and the slaves; for that we have to start and enable the kubelet service.
- name: starting and enabling kubelet services
  service:
    name: "kubelet"
    state: started
    enabled: yes
After enabling the kubelet service, it sits in a waiting state, because all the control plane programs run inside containers; kubeadm is the one that provides the required images, and you can pull them:
- name: Pulling Images using kubeadm
  shell: "kubeadm config images pull"
  changed_when: false
Kubernetes expects the systemd cgroup driver, while Docker uses the cgroupfs driver by default; control groups (cgroups) are what control the resource usage of a container. To switch Docker to the systemd driver we have to update Docker's internals in the /etc/docker/daemon.json file.
- name: copy daemon file
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json
The daemon.json source file, placed inside the files folder of the role, contains the following code:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Since we changed Docker's internals, we have to restart the Docker service for the new configuration to take effect.
- name: Restarting Docker
  service:
    name: docker
    state: restarted
Now, Kubernetes needs the tc program to do routing; it uses this program internally.
- name: Installing iproute-tc
  package:
    name: iproute-tc
Update the config file to enable the kernel settings for bridging, then reload sysctl:
- name: updating k8s config file
  copy:
    dest: /etc/sysctl.d/k8s.conf
    src: k8s.conf

- name: restarting sysctl
  command: sysctl --system
The k8s.conf source file, placed inside the files folder, contains the following code:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
kubeadm init helps initialize the cluster. Every pod gets an IP address; the master is the one who allocates these IPs, and the CNI plays a vital role in allocating the range of IPs to pods. Setting up Kubernetes officially requires 2 CPUs and 2 GB of RAM, but I launched the instances with 1 CPU and 1 GB of RAM (t2.micro), so the ignore options skip the preflight errors caused by the compute requirements.
- name: initialising kubeadm
  shell: "kubeadm init --pod-network-cidr={{ cidr }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
  ignore_errors: yes
The cidr variable contains cidr: "10.244.0.0/16".
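Following the same pattern as the other roles, this sits in the role's vars file:
#vim /kubernetes-master/vars/main.yml
cidr: "10.244.0.0/16"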
Create the .kube directory in the home directory, copy the config file from Kubernetes' root directory into the newly created .kube directory, and change its owner:
- name: Creating .kube directory
  file:
    name: "$HOME/.kube"
    state: directory

- name: Copy kube config
  copy:
    src: /etc/kubernetes/admin.conf
    dest: $HOME/.kube/config
    remote_src: yes

- name: changing owner permission
  shell: "chown $(id -u):$(id -g) $HOME/.kube/config"
When Docker launches a container, it gives it a virtual Ethernet card that connects the container with the base OS. Every container in Docker gets its own virtual Ethernet card, and the connection between containers is made by a switch.
When the master node connects with a worker node, a cni0 switch is created on the worker: one end of each virtual Ethernet card connects to the cni0 switch and the other end to the base OS. By default we only have connectivity between containers on the same node, not with containers on another node; for that we create an overlay network using a tunneling approach, Virtual Extensible LAN (VxLAN), which Flannel provides here:
- name: Flannel Command
  shell: "kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
  ignore_errors: yes
Generate a token to connect the worker node with the master node:
- name: Generating Token
  shell: "kubeadm token create --print-join-command"
  register: tokens
  ignore_errors: yes
Print the output of the above task with the debug module:
- debug:
    var: tokens.stdout_lines
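The printed join command looks roughly like the following; the IP, token, and hash here are placeholders, yours will differ:
kubeadm join 172.31.x.x:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>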
In the kubernetes-master role, update all the task code in main.yml inside the tasks folder:
#vim /kubernetes-master/tasks/main.yml

---
# tasks file for kubernetes-master
- name: installing docker
  package:
    name: docker
    state: present

- name: starting docker services
  service:
    name: "docker"
    state: started
    enabled: yes

- name: configuring yum for kubeadm
  yum_repository:
    name: "kubernetes"
    description: "repo for kubernetes"
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: yes
    repo_gpgcheck: yes
    gpgcheck: yes
    gpgkey:
      - "https://packages.cloud.google.com/yum/doc/yum-key.gpg"
      - "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"

- name: Installing kubeadm, kubectl and kubelet
  yum:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present
    disable_excludes: "kubernetes"

- name: starting and enabling kubelet services
  service:
    name: "kubelet"
    state: started
    enabled: yes

- name: Pulling Images using kubeadm
  shell: "kubeadm config images pull"
  changed_when: false

- name: Installing iproute-tc
  package:
    name: iproute-tc

- name: copy daemon file
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: Restarting Docker
  service:
    name: docker
    state: restarted

- name: updating k8s config file
  copy:
    dest: /etc/sysctl.d/k8s.conf
    src: k8s.conf

- name: restarting sysctl
  command: sysctl --system

- name: initialising kubeadm
  shell: "kubeadm init --pod-network-cidr={{ cidr }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
  ignore_errors: yes

- name: Creating .kube directory
  file:
    name: "$HOME/.kube"
    state: directory

- name: Copy kube config
  copy:
    src: /etc/kubernetes/admin.conf
    dest: $HOME/.kube/config
    remote_src: yes

- name: changing owner permission
  shell: "chown $(id -u):$(id -g) $HOME/.kube/config"

- name: Flannel Command
  shell: "kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
  ignore_errors: yes

- name: Generating Token
  shell: "kubeadm token create --print-join-command"
  register: tokens
  ignore_errors: yes

- debug:
    var: tokens.stdout_lines
The same flow applies on the slave node; the token taken as input is run as a command while the role runs:
- name: "joining slave with master"
shell: "{{ token }}"
ignore_errors: yes
In the kubernetes-slave role, update all the task code in main.yml inside the tasks folder:
#vim /kubernetes-slave/tasks/main.yml

---
# tasks file for kubernetes-slave
- name: installing docker
  package:
    name: docker
    state: present

- name: starting docker services
  service:
    name: "docker"
    state: started
    enabled: yes

- name: configuring yum for kubeadm
  yum_repository:
    name: "kubernetes"
    description: "repo for kubernetes"
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: yes
    repo_gpgcheck: yes
    gpgcheck: yes
    gpgkey:
      - "https://packages.cloud.google.com/yum/doc/yum-key.gpg"
      - "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"

- name: Installing kubeadm, kubectl and kubelet
  yum:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present
    disable_excludes: "kubernetes"

- name: starting and enabling kubelet services
  service:
    name: "kubelet"
    state: started
    enabled: yes

- name: Pulling Images using kubeadm
  shell: "kubeadm config images pull"
  changed_when: false

- name: Installing iproute-tc
  package:
    name: iproute-tc

- name: copy daemon file
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: Restarting Docker
  service:
    name: docker
    state: restarted

- name: updating k8s config file
  copy:
    dest: /etc/sysctl.d/k8s.conf
    src: k8s.conf

- name: restarting sysctl
  command: sysctl --system

- name: "joining slave with master"
  shell: "{{ token }}"
  ignore_errors: yes
Finally, here is the main playbook that runs all the roles at once:
- hosts: localhost
  vars_files:
    - secure.yml
  roles:
    - name: "Launching ec2 instance for master"
      role: ec2-instance-master
    - name: "Launching ec2 instance for slave"
      role: ec2-instance-slave
  tasks:
    - name: Wait to Completely Provision Instances
      pause:
        minutes: 2
    - name: Refresh Inventory
      meta: refresh_inventory

- hosts: tag_Name_kubernetes_master
  roles:
    - name: "configuring Master node"
      role: kubernetes-master

# takes the token printed on the screen as input to join the worker node with the master
- hosts: tag_Name_kubernetes_slave
  vars_prompt:
    - name: "token"
      prompt: "Enter token to join with master: "
  roles:
    - name: "configuring Slave node"
      role: kubernetes-slave
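Run the whole setup by supplying the vault password; the playbook file name main.yml is an assumption here:
#ansible-playbook main.yml --ask-vault-pass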
The playbook ran successfully without any errors, and the multi-node cluster was configured.
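As a quick manual check (not part of the playbook), you can verify on the master node that the workers joined:
#kubectl get nodes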
GitHub source code link