K3S on Raspberry Pi 4 - initial setup
December 18, 2023
Now that the supply pressures have eased, I wanted to get a few Raspberry Pis and start messing around with a Kubernetes cluster. This post covers the initial setup of the cluster; hopefully I'll get to deploying something on it in due course.
I bought three Raspberry Pi 4 Model Bs, each with 4GB of memory, along with a small set of cluster shelves to mount them on. This lets me set up a master node and two worker nodes. Each node runs directly off an SD card, starting with the Raspberry Pi Imager to write a 64-bit operating system onto the card.
Master Node
The first stage was to set up the master node. The Pi was booted from the card and plugged into a monitor and keyboard. Config changes were made to ensure the hostname was master1 and that ssh was running. I gave this node a static IP address of 192.168.1.20 so that it would be easy to supply to future worker nodes and wouldn't change.
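One way to pin that address: on Raspberry Pi OS releases that still use dhcpcd for networking (newer Bookworm images use NetworkManager instead), a block like the following at the end of /etc/dhcpcd.conf does the job. The interface name, router, and DNS addresses here are assumptions; adjust them for your network. A DHCP reservation on your router works just as well.

```
# /etc/dhcpcd.conf - pin the master node's address
# (eth0 and 192.168.1.1 are assumptions; adjust for your network)
interface eth0
static ip_address=192.168.1.20/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```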
I also had to edit the /boot/cmdline.txt file, appending cgroup_memory=1 cgroup_enable=memory to the end of the existing line, ending up with something like this:
console=serial0,115200 console=tty1 root=PARTUUID=<UNIQUE_RASPI_ID> rootfstype=ext4 fsck.repair=yes rootwait cgroup_memory=1 cgroup_enable=memory
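If you'd rather script that edit than open an editor, something like this works - a sketch only, assuming the flags just need appending to the single kernel command line (note that on newer Raspberry Pi OS images the file lives at /boot/firmware/cmdline.txt instead):

```shell
# enable_cgroups FILE - append the cgroup flags to a cmdline.txt-style file.
# cmdline.txt must stay a single line, so we append to line 1 rather than
# adding a new line; the grep guard makes the edit safe to run twice.
enable_cgroups() {
    grep -q 'cgroup_enable=memory' "$1" || \
        sed -i '1 s/$/ cgroup_memory=1 cgroup_enable=memory/' "$1"
}
```

Run it as root against the real file, then reboot for the change to take effect.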
After this was done, the Pi was rebooted and we were ready to install k3s, which can be done with:
$ curl -sfL https://get.k3s.io | sh -
Once complete, you should be able to run kubectl get nodes and see the master node listed.
$ sudo kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   15h   v1.28.4+k3s2
Worker Nodes
I had hardware to set up two worker nodes. Their initial configuration involved making the same /boot/cmdline.txt fix as above and giving them hostnames of knode1 and knode2. They were left with dynamic IP addresses assigned by DHCP, and ssh was confirmed to be running.
We need to find a token from the master node that we can then supply to the worker nodes to allow them to connect. This can be found by sshing into the master node and running
$ sudo cat /var/lib/rancher/k3s/server/node-token
K1067013bd3f6468a123cd19d234ac35457c00082424b67e4f7bbdfdfe7da6f83d::server:f1235ade92e93588941b12cd1230de9df
Make a note of this long token - you'll need to copy and paste it to a command line on each worker node.
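As an aside, the token isn't opaque: as I understand it, the part before the :: is a hash of the server's CA certificate (so an agent can verify who it's talking to) and the part after server: is the shared secret. A quick sketch splitting the sample token from above with shell parameter expansion (this is the example from this post, not a live secret):

```shell
# Sample node-token from this post - format: K10<CA hash>::server:<secret>
token='K1067013bd3f6468a123cd19d234ac35457c00082424b67e4f7bbdfdfe7da6f83d::server:f1235ade92e93588941b12cd1230de9df'
ca_hash=${token%%::*}   # everything before the first '::'
secret=${token##*:}     # everything after the last ':'
echo "ca hash: $ca_hash"
echo "secret:  $secret"
```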
On each of the worker nodes, after a clean reboot following the cgroup fix, run the following command to install k3s and connect to the master node. We set the master node's static IP address to 192.168.1.20 earlier; substitute your own if you used something different.
$ curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.20:6443 K3S_TOKEN=<long_token_from_above> sh -
Once complete, kubectl get nodes on the master should now show the worker nodes too:
$ sudo kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
knode2    Ready    <none>                 13h   v1.28.4+k3s2
master1   Ready    control-plane,master   15h   v1.28.4+k3s2
knode1    Ready    <none>                 13h   v1.28.4+k3s2
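Once you start scripting against the cluster, that tabular output is easy to screen-scrape. Here's a small helper of my own (not part of k3s or kubectl) that prints the name of any node not in the Ready state - you'd pipe the output of kubectl get nodes into it:

```shell
# not_ready - read 'kubectl get nodes' output on stdin and print the
# name of any node whose STATUS column is not "Ready", skipping the
# header row. Usage: sudo kubectl get nodes | not_ready
not_ready() {
    awk 'NR > 1 && $2 != "Ready" { print $1 }'
}
```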