K3s is a lightweight, easy-to-install Kubernetes distribution designed for resource-constrained environments such as edge computing, IoT devices, and development/testing setups. It was developed by Rancher Labs (now part of SUSE), the company behind the Rancher Kubernetes management platform.

In this tutorial, we will show you how to install Kubernetes using K3s on Ubuntu 22.04.

Step 1 – Install K3s

First, update the package index and install curl, which is required to download the K3s installation script.

apt update
apt install curl -y

Next, use the curl command to download and run the K3s installation script.

curl -sfL https://get.k3s.io | sh -
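
The installation script also honors several environment variables. For example, you can pin a specific K3s release with INSTALL_K3S_VERSION, or make the generated kubeconfig readable by non-root users with K3S_KUBECONFIG_MODE (the version shown here is only an example):

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.7+k3s1" sh -
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -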

Once K3s has been installed, verify that the K3s service is active using the following command.

systemctl status k3s

Output:

● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-03-23 03:46:43 UTC; 7s ago
       Docs: https://k3s.io
    Process: 1860 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null (code=exited, status=0/SUCCESS)
    Process: 1862 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 1863 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1864 (k3s-server)
      Tasks: 29
     Memory: 522.4M
        CPU: 47.856s
     CGroup: /system.slice/k3s.service
             ├─1864 "/usr/local/bin/k3s server"
             └─1894 "containerd"

Mar 23 03:46:49 ubuntu22 k3s[1864]: time="2024-03-23T03:46:49Z" level=error msg="error syncing 'kube-system/traefik-crd': handler helm-controller-chart-registration: h>
Mar 23 03:46:49 ubuntu22 k3s[1864]: I0323 03:46:49.689460    1864 event.go:307] "Event occurred" object="kube-system/traefik-crd" fieldPath="" kind="HelmChart" apiVers>
Mar 23 03:46:49 ubuntu22 k3s[1864]: I0323 03:46:49.694891    1864 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="HelmChart" apiVersion=>
Mar 23 03:46:49 ubuntu22 k3s[1864]: I0323 03:46:49.698154    1864 event.go:307] "Event occurred" object="kube-system/traefik" fieldPath="" kind="HelmChart" apiVersion=>
Mar 23 03:46:49 ubuntu22 k3s[1864]: I0323 03:46:49.699433    1864 event.go:307] "Event occurred" object="kube-system/traefik-crd" fieldPath="" kind="HelmChart" apiVers>
Mar 23 03:46:50 ubuntu22 k3s[1864]: time="2024-03-23T03:46:50Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
Mar 23 03:46:50 ubuntu22 k3s[1864]: time="2024-03-23T03:46:50Z" level=info msg="Connecting to proxy" url="wss://209.23.13.14:6443/v1-k3s/connect"
Mar 23 03:46:50 ubuntu22 k3s[1864]: time="2024-03-23T03:46:50Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
Mar 23 03:46:50 ubuntu22 k3s[1864]: time="2024-03-23T03:46:50Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpecte>
Mar 23 03:46:50 ubuntu22 k3s[1864]: time="2024-03-23T03:46:50Z" level=info msg="Handling backend connection request [ubuntu22]"
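
You can also confirm the installed K3s version and make sure the service is enabled to start at boot:

k3s --version
systemctl is-enabled k3s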

Step 2 – Access the Kubernetes Cluster

You will need to configure kubectl to access and interact with the Kubernetes cluster via the command line.

First, create a directory to store the kubectl configuration.

mkdir -p ~/.kube

Next, copy the K3s-generated kubeconfig file into the .kube directory and restrict its permissions so that only your user can read it.

cp /etc/rancher/k3s/k3s.yaml ~/.kube/config 
chmod 600 ~/.kube/config 

Next, set the KUBECONFIG environment variable so that kubectl can find the configuration file.

export KUBECONFIG=~/.kube/config
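
Note that the export only applies to the current shell session. To make it persistent, you can append the same line to your shell profile:

echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc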

Next, verify the Kubernetes cluster nodes using the kubectl command.

kubectl get nodes

Output:

NAME       STATUS   ROLES                  AGE   VERSION
ubuntu22   Ready    control-plane,master   51s   v1.28.7+k3s1
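
To confirm that the components bundled with K3s (such as CoreDNS, Traefik, and the metrics server) are running, you can also list the pods in all namespaces:

kubectl get pods -A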

You can also see the Kubernetes cluster information using the following command.

kubectl cluster-info

Output:

Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

Step 3 – Create an Nginx Deployment

To validate the Kubernetes installation, we will deploy an Nginx-based application and expose it via NodePort.

First, create an Nginx deployment using the following command.

kubectl create deployment nginx-app --image nginx --replicas 2
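
Alternatively, the same deployment can be created declaratively. The following is a minimal sketch of an equivalent manifest, piped directly to kubectl apply (the label app: nginx-app matches what kubectl create deployment sets by default):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF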

Next, verify the Nginx deployment using the following command.

kubectl get deployment nginx-app

Output:

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app   2/2     2            2           10s

Next, verify the Nginx pods using the following command.

kubectl get pods

Output:

NAME                        READY   STATUS    RESTARTS   AGE
nginx-app-5777b5f95-lhcpp   1/1     Running   0          22s
nginx-app-5777b5f95-2j6ld   1/1     Running   0          22s

Next, expose the nginx-app deployment using a NodePort service.

kubectl expose deployment nginx-app --type NodePort --port 80

Next, get the service details, including the cluster IP and the assigned NodePort, using the following command:

kubectl get svc nginx-app

Output:

NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-app   NodePort   10.43.162.54   <none>        80:31588/TCP   5s

Now, use the CLUSTER-IP from the above output and run the curl command on the node to access the nginx-app.

curl http://10.43.162.54

If everything is fine, you will see the following output:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
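
Because the service is of type NodePort, the application is also reachable from outside the cluster on the node's own IP address and the assigned port (31588 in the output above), provided your firewall allows traffic to that port. Replace <node-ip> with your server's IP address:

curl http://<node-ip>:31588

Once you have finished testing, you can remove the test resources:

kubectl delete service nginx-app
kubectl delete deployment nginx-app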

Conclusion

Overall, K3s offers a compelling solution for scenarios where a full-fledged Kubernetes deployment may be impractical or overly resource-intensive. Its lightweight design, ease of installation, and focus on security make it an attractive choice for edge computing, IoT deployments, and development/testing environments. You can now easily deploy a Kubernetes cluster using K3s on dedicated server hosting from Atlantic.Net!