K3S Kubernetes cluster on Raspberry Pi

Set up a K3S Kubernetes cluster on Raspberry Pi micro-computers

Overview

A single-node server installation is a fully functional Kubernetes cluster, including all the datastore, control-plane, kubelet, and container runtime components necessary to host workload pods. It is not necessary to add additional server or agent nodes, but you may want to do so to add capacity or redundancy to your cluster.

Requirements

  • At least one Raspberry Pi dedicated to running K3S
  • CPU: 1 core (minimum), 2 cores (recommended)
  • RAM: 512 MiB (minimum), 1 GiB (recommended)

Set up the Raspberry Pi(s)

You can set up one or many Raspberry Pi computers for this project; Raspbian Lite is sufficient, since a desktop environment is not needed for a headless cluster node. Follow the Raspbian getting started guide for details. The key things are to make sure each installation:

  • Has a static IP address (an example is shown below)
  • Is up to date (sudo apt-get update && sudo apt-get upgrade)
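
One way to assign a static address on Raspbian releases that use dhcpcd is to add a block like the following to /etc/dhcpcd.conf. The interface name and the 10.0.0.x addresses are only examples and must match your own network; newer Raspberry Pi OS (Bookworm) images use NetworkManager instead, where nmtui or nmcli is the equivalent tool.

# /etc/dhcpcd.conf -- example static address for the wired interface
interface eth0
static ip_address=10.0.0.3/24
static routers=10.0.0.1
static domain_name_servers=10.0.0.1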

Disable SWAP

It’s recommended to disable swap because of how Kubernetes manages resources.

Memory Management: Kubernetes efficiently manages and allocates resources, including memory. Allowing an operating system to swap can interrupt Kubernetes’ memory management process.

Performance Issues: Swapping can lead to performance degradation. When Kubernetes needs to access something that has been swapped to disk, it must wait for it to be loaded back into memory, causing delays.

Predictability: Disabling swap helps ensure predictable performance, as it removes the chance of the system swapping out Kubernetes’ processes.

Kubernetes Design: Kubernetes is designed to work with no swapping activity. It assumes that applications are memory-resident, which means it expects them to stay in memory all the time.

To disable swap on Raspberry Pi OS, edit the dphys-swapfile configuration:

sudo nano /etc/dphys-swapfile

Update the CONF_SWAPSIZE value in the file to 0.
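
The new setting takes effect after a reboot. If you prefer to turn swap off immediately and keep it from coming back, the standard dphys-swapfile commands below should do it; afterwards, free -h should report 0 for swap:

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo systemctl disable dphys-swapfile
free -h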

Cgroup configuration

Standard Raspberry Pi OS installations do not start with cgroups enabled, and K3S needs cgroups to start the systemd service. cgroups can be enabled by appending cgroup_memory=1 cgroup_enable=memory to the kernel command line in cmdline.txt (located at /boot/firmware/cmdline.txt on current Raspberry Pi OS releases, or /boot/cmdline.txt on older ones). Below are the steps:

Open the cmdline.txt file:

sudo nano /boot/firmware/cmdline.txt

Add the following to the end of the existing line (everything must stay on a single line):

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

Save the file and reboot:

sudo reboot
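
After the reboot, you can verify that the kernel picked up the new parameters; the output of the command below should include the cgroup entries added above:

cat /proc/cmdline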

Install K3S on the main node

Execute the following command to install K3s on the main node. Replace the IP address (10.0.0.3 in this example) with the IP of the main node you are installing onto.

curl -sfL https://get.k3s.io | \
    INSTALL_K3S_EXEC="server \
    --disable=traefik \
    --flannel-backend=host-gw \
    --tls-san=10.0.0.3 \
    --bind-address=10.0.0.3 \
    --advertise-address=10.0.0.3 \
    --node-ip=10.0.0.3 \
    --cluster-init" \
    sh -s -

K3s parameters examination:

server: This is telling k3s to run in server mode (as opposed to agent mode). In server mode, k3s will start up and manage Kubernetes master components.

disable=traefik: This is instructing k3s to disable the Traefik ingress controller. By default, k3s includes and enables Traefik; this flag will prevent that from happening.

flannel-backend=host-gw: This flag is setting the backend for Flannel (k3s’s default network provider) to use. The host-gw option provides high-performance networking by creating a route for each node in the cluster.

tls-san=10.0.0.3: The --tls-san flag allows you to specify additional IP addresses or DNS names that should be included in the TLS certificate automatically generated for the Kubernetes API server. You can repeat this flag to add more than one SAN. The value 10.0.0.3 is an additional Subject Alternative Name (SAN) for the Kubernetes API server’s certificate.

bind-address=10.0.0.3: This is the IP address that the k3s API server will listen on.

advertise-address=10.0.0.3: This is the IP address that the k3s API server will advertise to other nodes in the cluster. They will use this IP to connect to the API server.

node-ip=10.0.0.3: This defines the IP address that is advertised for the node and used for cluster traffic to it.

cluster-init: This flag instructs k3s to initialize a new cluster using the embedded etcd datastore instead of the default SQLite database. Additional server nodes can later join this cluster for a highly available control plane.
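
Once the script finishes, the k3s systemd service should be running and the node should report Ready within a minute or so; you can check both directly on the Pi:

sudo systemctl status k3s
sudo k3s kubectl get nodes

If you later want to add capacity with an agent node, the join token is stored on the server at /var/lib/rancher/k3s/server/node-token. A minimal sketch of the agent install, assuming the server address 10.0.0.3 used above and replacing the placeholder with the token value, looks like this:

curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.3:6443 K3S_TOKEN=<contents of node-token> sh -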

Log on to the K3S main node from a workstation

To log in to your K3s cluster using the k3s.yaml configuration file, follow these steps:

  1. Copy the contents of the K3S configuration file on the server at /etc/rancher/k3s/k3s.yaml
  2. Open the Kubernetes config file on your workstation at ~/.kube/config
  3. Paste the contents
  4. Update the server field, replacing the default 127.0.0.1 with the IP address of your K3s server.
  5. Access the Cluster with kubectl:
    • Now you can use kubectl to manage your K3s cluster from your local machine.
    • For example:

kubectl get pods --all-namespaces
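
As an alternative to copying the file by hand, you can pull it straight from the server and rewrite the address in one step. This is a sketch, assuming SSH access as the default pi user and the server address 10.0.0.3 used earlier; it overwrites any existing ~/.kube/config, and if k3s.yaml is not readable by the pi user you will need to adjust its permissions or copy it with sudo first:

scp pi@10.0.0.3:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/10.0.0.3/g' ~/.kube/config
kubectl get nodes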

Deploy an NGINX ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
kubectl create ingress demo-localhost --class=nginx --rule="demo.localdev.me/*=demo:80"
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
kubectl create ingress demo --class=nginx --rule="www.demo.io/*=demo:80"
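
Taken together, these commands install the ingress-nginx controller, wait for it to become ready, deploy a demo httpd application, expose it as a Service, and publish it through two Ingress rules: one on demo.localdev.me for local testing and one on www.demo.io. With the kubectl port-forward command above still running, the demo application should answer locally, since demo.localdev.me resolves to 127.0.0.1 and needs no DNS changes:

curl http://demo.localdev.me:8080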

Simple pod deployment

Create a namespace

kubectl create namespace ktest

Create a deployment

kubectl --namespace ktest create -f https://k8s.io/examples/application/deployment.yaml

Confirm the deployment has generated new pods.

kubectl --namespace ktest get all

Create a Service

This will connect the NGINX deployment to a Kubernetes Service.

  • Create a file named nginx-service.yaml with the following contents:

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
  labels:
    run: nginx-deployment
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx

  • Run this command against that file:

kubectl --namespace ktest create -f nginx-service.yaml

  • Verify the service is created and running:

kubectl --namespace ktest get svc nginx-deployment
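
The Service only has a cluster-internal address at this point. One quick way to check that it routes to the NGINX pods is to run a throwaway client pod in the same namespace; this is a sketch using the public curlimages/curl image:

kubectl --namespace ktest run curl-test --image=curlimages/curl --rm -it --restart=Never -- curl -s http://nginx-deployment

The command prints the default NGINX welcome page and then removes the temporary pod.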

Expose the deployment outside of the cluster

MetalLB provides LoadBalancer support on bare-metal clusters such as this one. Since MetalLB v0.13, a single manifest installs the metallb-system namespace and all of the components, and the memberlist secret is generated automatically, so it no longer needs to be created by hand:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
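
MetalLB still needs to be told which addresses it may hand out. Since v0.13 this is done with IPAddressPool and L2Advertisement resources; the range below is only an example and must be an unused block on the same subnet as the Raspberry Pi nodes. Note also that K3s ships its own ServiceLB load balancer by default, and the K3s documentation recommends installing with --disable=servicelb when MetalLB is used.

# metallb-pool.yaml -- example address pool and L2 advertisement (adjust the range)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.200-10.0.0.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool

Apply it with kubectl apply -f metallb-pool.yaml. Services of type LoadBalancer (for example, kubectl --namespace ktest expose deployment nginx-deployment --type=LoadBalancer --name=nginx-lb --port=80) will then receive an address from this pool.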

References

Raspberry Pi: Getting Started

K3S

Step-By-Step Guide: Installing K3s on a Raspberry Pi 4 Cluster
