Table of Contents
- Benefits of Kubernetes
- Prerequisites
- Step 1 - Disable Swap and Enable IP Forwarding
- Step 2 - Install Docker CE
- Step 3 - Add Kubernetes Repository
- Step 4 - Install Kubernetes Components (kubectl, kubelet, and kubeadm)
- Step 5 - Initialize Kubernetes Master Node
- Step 6 - Deploy a Pod Network
- Step 7 - Join Worker Nodes to the Kubernetes Cluster
- Step 8 - Verify the Kubernetes Cluster
- Conclusion
Kubernetes is an open-source system originally designed by Google for deploying, managing, scaling, and automating containerized applications. The platform works with various container runtimes and provides a framework for running containers in clusters, commonly using images built with Docker. Also known as K8s, the system handles application failovers, stores and manages sensitive information, supports load balancing, and more. Nowadays, many businesses are turning to Kubernetes to meet their container orchestration needs.
Today, many services are delivered over the network through exposed APIs and distributed systems, so those systems must be highly reliable, scalable, and resistant to downtime. Kubernetes provides services covering all of these needs and helps deploy cloud-native applications anywhere, which has made it one of the fastest-growing solutions for modern development.
Let's discuss some of the essential benefits of using a container orchestration platform like Kubernetes for deployment, management, and scaling.
Benefits of Kubernetes
Highly Portable and Flexible
No matter how complex your needs, whether you require local testing or run a global enterprise, Kubernetes' flexible features help deliver your applications consistently. Kubernetes works with virtually any infrastructure, including public cloud, private cloud, hybrid, and on-premises servers. You can create portable deployments with Kubernetes and run them consistently across environments (development, staging, and production), a feature many other orchestrators lack. As a trusted open-source platform, Kubernetes helps you move workloads effortlessly to wherever they need to run.
Using Kubernetes Can Improve Developer Productivity
Proper integration of Kubernetes into engineering workflows can be highly advantageous and lead to enhanced productivity. With its huge ecosystem and ops-friendly approach, Kubernetes allows developers to rely on existing tools to reduce general complexity, scale further, and deploy faster (multiple times a day). As a result, developers gain access to various solutions that they could not build themselves.
Developers can also run distributed databases on Kubernetes and scale stateful applications using features such as StatefulSets.
Affordable Solution
Compared with other orchestrators, Kubernetes can at times be the more affordable solution. It automatically scales resources up or down based on an application's requirements, traffic, and load. As a result, when demand for an application drops, the company or organization pays less.
Self-Healing Service
Kubernetes continuously monitors the health of your workloads. It can restart containers that fail, replace or reschedule containers when it notices a dead node, and kill containers that do not respond to a user-defined health check. Enterprise-grade clusters are complex, so best-practice configuration is still required to get the most out of these self-healing features.
Multi-Cloud Capability
Nowadays, many businesses are switching to multi-cloud strategies, and various orchestrators on the market work with multi-cloud infrastructure. Kubernetes is very flexible here: it can host workloads running on a single cloud or spread across multiple clouds, and users can easily scale their environment.
Apart from the benefits listed above, Kubernetes also supports frequent container image builds and deployments, high resource efficiency, and more.
In this post, we will explain how to install and deploy a three-node Kubernetes cluster on Ubuntu 20.04.
Prerequisites
- Three servers running an Ubuntu 20.04 operating system
- A minimum of 2 GB RAM and 2 CPU cores on each node
- A root password configured on each server
We will use the following setup to demonstrate a three-node Kubernetes cluster:
Kubernetes Node | IP Address     | Operating System
Master-Node     | 69.28.88.236   | Ubuntu 20.04
Worker1         | 104.219.55.103 | Ubuntu 20.04
Worker2         | 104.245.34.163 | Ubuntu 20.04
Step 1 – Disable Swap and Enable IP Forwarding
Memory swapping causes performance and stability issues within Kubernetes (by default, the kubelet will not start while Swap is enabled), so you must disable Swap and enable IP forwarding on all nodes.
First, verify whether Swap is enabled or not using the following command:
swapon --show
If Swap is enabled, you will get the following output:
NAME      TYPE SIZE   USED PRIO
/swapfile file 472.5M 0B   -2
Next, run the following command to disable Swap:
swapoff -a
To disable Swap permanently, edit the /etc/fstab file and comment out the line containing the swap file:
nano /etc/fstab
Comment or remove the following line:
#/swapfile none swap sw 0 0
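If you are scripting node preparation, the same edit can be made non-interactively with sed. The following is a sketch, demonstrated against a temporary file rather than the real /etc/fstab; on an actual node you would back up /etc/fstab and target it directly:

```shell
# Sketch: comment out swap entries in an fstab-style file non-interactively.
# Demonstrated on a temporary file; on a real node, back up and target /etc/fstab.
fstab=$(mktemp)
printf '/swapfile none swap sw 0 0\nUUID=abcd / ext4 defaults 0 1\n' > "$fstab"
# Prefix any not-yet-commented line mentioning swap with '#'.
sed -i '/swap/ s/^[^#]/#&/' "$fstab"
cat "$fstab"
```

Non-swap entries (the ext4 line above) are left untouched.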
Next, edit the /etc/sysctl.conf file to enable IP forwarding:
nano /etc/sysctl.conf
Un-comment the following line:
net.ipv4.ip_forward = 1
Save and close the file, then run the following command to apply the configuration changes:
sysctl -p
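You can confirm that the change took effect by reading the value back from the kernel; a quick sketch:

```shell
# Read the live kernel value; it should print 1 once IP forwarding is enabled.
cat /proc/sys/net/ipv4/ip_forward
```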
Step 2 – Install Docker CE
Kubernetes needs a container runtime to run containers; in this guide we use Docker, so you will need to install Docker CE on all nodes. The latest version of Docker CE is not included in Ubuntu's default repository, so you will need to add Docker's official repository to APT.
First, install the required dependencies to access Docker repositories over HTTPS:
apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Next, run the curl command to download and add Docker’s GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Next, add Docker’s official repository to the APT:
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Once the repository is added, run the following command to install Docker CE:
apt-get install docker-ce -y
After the installation, verify the Docker installation using the following command:
docker --version
Sample output:
Docker version 20.10.10, build b485636
Step 3 – Add Kubernetes Repository
The Kubernetes packages are not included in the Ubuntu 20.04 default repository, so you will need to add the Kubernetes repository on all nodes.
First, add the Kubernetes GPG key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Next, add the Kubernetes repository to APT (the kubernetes-xenial repository is the correct one for recent Ubuntu releases, not just Xenial):
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Once the repository is added, update the APT cache using the command below:
apt-get update -y
Step 4 – Install Kubernetes Components (kubectl, kubelet, and kubeadm)
A Kubernetes node depends on three major components: kubeadm (which bootstraps the cluster and joins nodes to it), kubelet (the agent that runs pods on each node), and kubectl (the command-line client for the cluster). All three components must be installed on each node.
Let’s run the following command on all nodes to install all Kubernetes components:
apt-get install kubelet kubeadm kubectl -y
Next, you will need to update the cgroup driver on all nodes so that Docker and the kubelet both use systemd. You can do this by creating the following file:
nano /etc/docker/daemon.json
Add the following lines:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
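A malformed daemon.json will prevent the Docker daemon from starting, so it is worth validating the JSON before restarting Docker. A small sketch, assuming python3 is available; it writes the snippet to a temporary file so it can run anywhere (on a real node, point json.tool at /etc/docker/daemon.json instead):

```shell
# Write the snippet to a temporary file and check that it parses as JSON.
# On a real node: python3 -m json.tool /etc/docker/daemon.json
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json is valid JSON"
```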
Save and close the file, then reload the systemd daemon, restart Docker, and enable it at boot with the following commands:
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
At this point, all Kubernetes components are installed. Now, you can proceed to the next step.
Step 5 – Initialize Kubernetes Master Node
The Kubernetes Master node is responsible for managing the state of the Kubernetes cluster. In this section, we will show you how to initialize the Kubernetes Master node.
On the Master node, run the kubeadm init command to initialize the Kubernetes cluster. The --pod-network-cidr value (10.244.0.0/16) matches the default subnet used by the Flannel pod network that we will deploy in Step 6.
kubeadm init --pod-network-cidr=10.244.0.0/16
Once the Kubernetes cluster has been initialized successfully, you will get the following output:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 69.28.88.236:6443 --token alfisa.guuc5t2f66cpqz8e \
        --discovery-token-ca-cert-hash sha256:1db0bb5317ae1007c1f7774d5281d22b2189b239ffabecaedcd605613a9b10cd
From the above output, copy or note down the full kubeadm join command. You will need to run it on each worker node to join it to the Kubernetes cluster.
If you are logged in as a regular user, run the following commands to start using your cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you are the root user, you can run the following command:
export KUBECONFIG=/etc/kubernetes/admin.conf
At this point, the Kubernetes cluster is initialized. You can now proceed to add a pod network.
Step 6 – Deploy a Pod Network
The pod network is used for communication between all nodes and pods within the Kubernetes cluster and is necessary for the cluster to function properly. In this section, we will add a Flannel pod network to the cluster. Flannel is a simple overlay network that assigns an IP address to each container.
Run the following commands on the Master node to deploy the Flannel pod network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
Next, wait for the pods to reach the Running state, then run the following command to see the status of all pods:
kubectl get pods --all-namespaces
If everything is fine, you will get the following output:
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-7ggpg         1/1     Running   0          127m
kube-system   coredns-78fcd69978-wm7wq         1/1     Running   0          127m
kube-system   etcd-master                      1/1     Running   0          127m
kube-system   kube-apiserver-master            1/1     Running   0          127m
kube-system   kube-controller-manager-master   1/1     Running   0          127m
kube-system   kube-flannel-ds-5v5ll            1/1     Running   0          66m
kube-system   kube-flannel-ds-rws9b            1/1     Running   0          66m
kube-system   kube-flannel-ds-tkc8p            1/1     Running   0          66m
Step 7 – Join Worker Nodes to the Kubernetes Cluster
After the pod network is deployed successfully, the Kubernetes cluster is ready for the worker nodes to join. In this section, we will show you how to add both worker nodes to the cluster.
Run the kubeadm join command printed by kubeadm init on each worker node to join it to the Kubernetes cluster:
kubeadm join 69.28.88.236:6443 --token alfisa.guuc5t2f66cpqz8e --discovery-token-ca-cert-hash sha256:1db0bb5317ae1007c1f7774d5281d22b2189b239ffabecaedcd605613a9b10cd
Once the worker node is joined to the cluster, you will get the following output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If you forget the cluster joining command, you can retrieve it at any time by running the following command on the Master node:
kubeadm token create --print-join-command
You will get the Kubernetes Cluster joining command in the following output:
kubeadm join 69.28.88.236:6443 --token alfisa.guuc5t2f66cpqz8e --discovery-token-ca-cert-hash sha256:1db0bb5317ae1007c1f7774d5281d22b2189b239ffabecaedcd605613a9b10cd
Next, go to the master node and run the following command to verify that both worker nodes have joined the cluster:
kubectl get nodes
If everything is set up correctly, you will get the following output:
NAME      STATUS   ROLES                  AGE    VERSION
master    Ready    control-plane,master   18m    v1.22.3
worker1   Ready    <none>                 101s   v1.22.3
worker2   Ready    <none>                 2m1s   v1.22.3
You can also get the cluster information using the following command:
kubectl cluster-info
You will get the following output:
Kubernetes control plane is running at https://69.28.88.236:6443
CoreDNS is running at https://69.28.88.236:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

At this point, the Kubernetes cluster is deployed and running fine. You can now proceed to the next step.
Step 8 – Verify the Kubernetes Cluster
After setting up the Kubernetes cluster, you can deploy any containerized application to your cluster. In this section, we will deploy an Nginx service on the cluster and see how it works.
To test the Kubernetes cluster, we will use the Nginx image and create a deployment called nginx-web:
kubectl create deployment nginx-web --image=nginx
Wait for some time, then run the following command to verify the status of the deployment:
kubectl get deployments.apps
If the deployment is in a ready state, you will get the following output:
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   1/1     1            1           6s
Next, scale the Nginx deployment to 4 replicas using the following command:
kubectl scale --replicas=4 deployment nginx-web
Wait for some time, then run the following command to verify the status of Nginx replicas:
kubectl get deployments.apps nginx-web
You will get the following output:
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   4/4     4            4           40m
To see the detailed information of your deployment, run:
kubectl describe deployments.apps nginx-web
Sample output:
Name:                   nginx-web
Namespace:              default
CreationTimestamp:      Thu, 06 Jan 2022 05:49:41 +0000
Labels:                 app=nginx-web
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx-web
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx-web
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
Volumes:          <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-web-5855c9859f (4/4 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  52s   deployment-controller  Scaled up replica set nginx-web-5855c9859f to 1
  Normal  ScalingReplicaSet  30s   deployment-controller  Scaled up replica set nginx-web-5855c9859f to 4
As you can see, the Nginx deployment has been scaled up successfully.
Now, let's create another pod named http-web and expose it through a service named http-service on port 80, using NodePort as the service type.
First, create a pod using the command below:
kubectl run http-web --image=httpd --port=80
Next, run the following command to expose the above pod on port 80:
kubectl expose pod http-web --name=http-service --port=80 --type=NodePort
Wait for some time for the pod to come up, then run the following command to check the status of the http-service:
kubectl get service http-service
You will get the following output:
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
http-service   NodePort   10.109.210.63   <none>        80:31415/TCP   8s
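The PORT(S) column shows that service port 80 has been mapped to NodePort 31415, chosen from Kubernetes' default NodePort range (30000-32767). If you need the node port in a script, it can be pulled out of the kubectl output with awk; a sketch, demonstrated on the captured output line above since it assumes no live cluster:

```shell
# Extract the NodePort from a 'kubectl get service' output line.
# On a live cluster: kubectl get service http-service --no-headers | awk -F'[:/]' '{print $2}'
line='http-service   NodePort   10.109.210.63   <none>   80:31415/TCP   8s'
echo "$line" | awk -F'[:/]' '{print $2}'
```

On a live cluster, a more robust alternative is to ask the API server directly: kubectl get service http-service -o jsonpath='{.spec.ports[0].nodePort}'.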
To get the detailed information of the service, run:
kubectl describe service http-service
You will get the following output:
Name:                     http-service
Namespace:                default
Labels:                   run=http-web
Annotations:              <none>
Selector:                 run=http-web
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.210.63
IPs:                      10.109.210.63
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31415/TCP
Endpoints:                10.244.1.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Next, run the following command to retrieve the IP address and the node on which the http-web pod is deployed:
kubectl get pods http-web -o wide
You will get all information in the following output:
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
http-web   1/1     Running   0          62s   10.244.1.4   worker1   <none>           <none>
As you can see, the http-web pod is deployed on the worker1 node, and its IP address is 10.244.1.4.
You can now use the curl command to verify the webserver using port 80:
curl http://10.244.1.4:80
If everything is set up correctly, you will get the following output:
<html><body><h1>It works!</h1></body></html>
Conclusion
In this guide, we explained how to install and deploy a three-node Kubernetes cluster on Ubuntu 20.04 servers. You can now add more worker nodes to scale the cluster as needed. For more information, read the Kubernetes documentation. Try deploying a Kubernetes cluster today on your dedicated servers from Atlantic.Net!