Configuring a Kubernetes Multinode Cluster on AWS Cloud Manually

Kalla kruparaju
8 min read · Apr 13, 2021


What is Kubernetes?

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.

Kubernetes does not launch applications by itself; it only helps the containers that launch the application.

How does Kubernetes work?
A user requests services or resources through the kubectl program, which contacts the Kubernetes API server. The API server passes the request to the scheduler, and the scheduler is the one that schedules pods onto the slave nodes, i.e. it decides on which slave node each pod should be launched.

The scheduler then coordinates with the Kubernetes Controller Manager (KCM), which runs the controllers that manage the cluster. To launch a pod on a slave node, the master needs to contact a program running on that node: the kubelet, which sits between the master node and the slave node.
To reach the containers on the slave node, the kubelet in turn talks to the container runtime through the Container Runtime Interface (CRI), which provides an interface to the runtime.
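Once the cluster is configured (later in this article), you can watch this flow in action. As a small illustration (the pod name nginx-test here is just an example, not part of the setup), the scheduler's decision shows up in the pod's events:

#kubectl run nginx-test --image=nginx
#kubectl describe pod nginx-test

The Events section of the describe output shows the default-scheduler assigning the pod to a node, and the kubelet on that node pulling the image and starting the container.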

Why do we need a Multinode Cluster?
In a single-node setup, all the programs and resources that monitor the pods run on one node; if that node goes down, the entire service is lost and we face a single point of failure. So it is recommended to use a multinode cluster.

Configuring Multinode cluster

I launched two Amazon Linux instances, one for the master node and another for the slave node, with the t2.micro instance type.

Setup of Master node

On the master node we also need to configure the Container Runtime Interface (container runtime); for that I am installing Docker:

#yum install docker -y

and enabling the Docker service:


#systemctl enable docker --now

kubeadm is the official tool that helps install and bootstrap Kubernetes. To install it, we have to configure the yum repository with the following command:

#cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Install kubelet, kubeadm and kubectl together; the --disableexcludes option lifts the exclude line we added to the repo file above.

#yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
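As an optional sanity check, you can confirm the three packages installed correctly by printing their versions:

#kubeadm version
#kubelet --version
#kubectl version --client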

The master can also be used to launch pods. The kubelet is a program you need to install on both the master and the slave. After installing it, we have to enable the kubelet service:

#systemctl enable --now kubelet

After enabling the kubelet service it stays in a waiting state, because all the control-plane programs run inside containers. kubeadm is the one that provides the required images; you can pull them with the following command:

#kubeadm config images pull
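If you want to preview which images kubeadm pulls (or confirm what it pulled), it can also simply list them:

#kubeadm config images list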

Kubernetes prefers the systemd cgroup driver, whereas Docker by default works with the cgroupfs driver (control group fs, cgroupfs, is what controls and limits the resources of a container). To switch Docker to the systemd driver we have to update Docker's /etc/docker/daemon.json file:

#cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"]
}
EOF

Since we changed Docker's configuration, we have to restart the Docker service for the change to take effect:

#systemctl restart docker
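To confirm that Docker picked up the new driver, you can check its info output; it should now report systemd instead of cgroupfs:

#docker info | grep -i cgroup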

Now, Kubernetes needs the tc (traffic control) tool to do routing; it uses this program internally:

#yum install iproute-tc -y

Here we got a warning that bridge-nf-call-iptables is disabled. We can fix that with the following command, which enables the kernel settings for bridged traffic:

# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Reloading the sysctl settings:

#sysctl --system
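You can verify that the bridge settings took effect by reading the values back; both should print 1 (if the keys are not found, the br_netfilter kernel module may not be loaded yet):

#sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables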

kubeadm init initializes the cluster (control plane). Every pod gets an IP address, and the master is the one that allocates these IPs to pods, with the CNI plugin playing a vital role in assigning the range of IPs. Kubernetes officially requires 2 CPUs and 2 GB of RAM, but I launched instances with 1 CPU and 1 GB of RAM (t2.micro), so the --ignore-preflight-errors options tell kubeadm to ignore the errors raised by these compute requirements.

#kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

In the output of the command above you will see a join token; copy it, since it is needed to join the slave to the master. The output also gives the procedure to create a .kube directory in the home directory, copy the admin config file from the Kubernetes configuration directory into it, and change its owner:

creating the kube directory
#mkdir -p $HOME/.kube
copying the kube config from the Kubernetes configuration directory
#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
changing the owner permission
#sudo chown $(id -u):$(id -g) $HOME/.kube/config
clearing the cache in memory
#echo 3 > /proc/sys/vm/drop_caches
listing the nodes of the cluster
#kubectl get nodes
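The output will look roughly like the following (the node name, age and version depend on your setup); note the NotReady status:

NAME             STATUS     ROLES                  AGE   VERSION
ip-172-31-x-x    NotReady   control-plane,master   2m    v1.2x.x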

You can see that one master node has been created, but it is not yet in the Ready state; we will discuss why later.

Setup of Slave node

Installing Docker as the container runtime (CRI):

#yum install docker -y

enabling the Docker service

#systemctl enable docker --now

configuring yum for kubeadm

#cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Installing kubeadm, kubectl and kubelet, again disabling the exclude line from the repo file:

#yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Enabling kubelet services

#systemctl enable --now kubelet

Pulling Images using kubeadm

#kubeadm config images pull

Installing iproute-tc

#yum install iproute-tc -y

enabling the kernel bridge settings for Kubernetes (k8s.conf)

# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

reloading the sysctl settings

#sysctl --system

configuring Docker's daemon.json file

#cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"]
}
EOF

Restarting Docker

#systemctl restart docker

When you ran the kubeadm init command on the master node (shown again below), its output included a token to join the slave with the master:

#kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

If you forgot to save the token, you can run the following command on the master node; it will generate a new token and print the full join command:

#kubeadm token create --print-join-command

Copy the output and run it on the worker node to join the worker node with the master.
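The join command printed by kubeadm init (or by the token create command above) looks roughly like the following; the IP, token and hash here are placeholders, so use the values from your own output:

#kubeadm join <master-private-ip>:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>

After joining, list the nodes from the master node: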

#kubectl get nodes

Two nodes are now listed, but neither is in the Ready state, because of an overlay networking issue that we will discuss in detail now.

In Docker, when we launch a container it is given a virtual Ethernet card, which connects the container to the base OS. Every container in Docker gets its own separate virtual Ethernet card, and the connection between containers is made through a software switch.

When a worker node joins the master, a cni0 switch (bridge) is created on the worker node. One end of each container's virtual Ethernet card connects to the cni0 switch and the other end to the base OS. By default this only gives connectivity between containers on the same node; we have no connectivity to containers on another node. For that we have to create an overlay network, using a tunneling approach with Virtual Extensible LAN (VXLAN):

#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This command creates the overlay network.
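To see the overlay in place, you can inspect the VXLAN interface that flannel creates on each node (flannel.1 is the interface name flannel's VXLAN backend typically uses):

#ip -d link show flannel.1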

Now you can see that the nodes are in the Ready state. Next, check the pods in all namespaces:

#kubectl get pods --all-namespaces

Listing all the namespaces, we still have a problem: the CoreDNS pods are not running.

#kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

When we initialized the master with the above command, we assigned the pod network range 10.240.0.0/16; the master is the one that gives IPs to the containers, while the flannel agent running on each node manages the IPs it hands out on that node.

# cat /var/run/flannel/subnet.env
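For reference, the file typically contains entries like the following (the per-node subnet and MTU values will differ on your nodes); note FLANNEL_NETWORK still showing flannel's default range:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=true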

If you look at flannel's internal config file, the IP address range is different from ours. We have to update flannel's config, changing the flannel network from 10.244.0.0/16 to 10.240.0.0/16:

# kubectl edit configmap kube-flannel-cfg -n kube-system
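Inside the editor, the part to change is the net-conf.json entry in the ConfigMap data. It looks roughly like the snippet below (taken from the default flannel manifest); set the Network value to 10.240.0.0/16 so it matches the CIDR given to kubeadm init:

net-conf.json: |
  {
    "Network": "10.240.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }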

After updating the ConfigMap, the running flannel pods were still not updated internally, so we have to delete the flannel pods; they will be relaunched automatically with the new configuration.

# kubectl delete pods -l app=flannel -n kube-system
#kubectl get pods --all-namespaces

Because of the mismatch between the flannel network and the cluster's pod CIDR, the CoreDNS pods were not working; after updating the flannel config file, the CoreDNS pods were restarted and are now working fine. Our multinode cluster is now successfully configured.
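As a final check (the deployment name web-test below is just a hypothetical example, not part of the setup), you can launch a test workload and confirm it gets scheduled onto the worker node:

#kubectl create deployment web-test --image=httpd
#kubectl get pods -o wide

The NODE column of the second command should show the pod running on the worker node.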

GitHub link for all commands
