Currently three mini-PCs (Intel N100, 16GB RAM, 512GB SSD) running K3s on FCOS.
The primary purpose of my homelab is to play around with Kubernetes and BGP EVPN to create VPCs for KubeVirt.
Several resources played a major role in putting together this experimental setup:
- Vincent Bernat's amazing blog posts
- MetalLB's FRR daemon
- Will Daly's flawless blog post illustrating how to run K3s on FCOS
An environment with `docker`, `direnv`, `nix` and `nix-direnv` available.
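If you use `direnv`, a minimal `.envrc` could look like this (a sketch, assuming `nix-direnv` is installed and the repo provides a Nix shell):

# .envrc: load the Nix environment via nix-direnv
use nix   # or `use flake` if the repo ships a flake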
Make sure to update the `justfile` recipes for your environment.
My homelab does not support PXE boot and I wanted to keep things simple, so for every node a new ISO is generated with the configuration embedded.
just controller-1 # equivalent to `just node-iso "controller-1" "/dev/sda" "server"`
NOTE: Behind the scenes `just node-iso` is used, which takes a hostname, a device (to install on) and a K3s role (`"server"` or `"agent"`).
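For example, generating the ISO for an agent node looks like this (the device name is an assumption; adjust it for your hardware):

just node-iso "compute-1" "/dev/sda" "agent"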
Let's find the USB stick and unmount it if it's mounted:
df # or lsblk or dmesg
sudo umount /run/media/...
Let's copy the data to the USB stick:
just dd controller-1 /dev/sdX
The above commands can also be run for `compute-1` and `compute-2` to set up the non-control-plane nodes. However, only do so once `controller-1` is up and running, as an update to the `justfile` is required first (more info follows in the next sections).
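For example (assuming shortcut recipes analogous to `just controller-1` exist for the compute nodes):

just compute-1               # build the ISO with the embedded config
just dd compute-1 /dev/sdX   # write it to the USB stick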
Change the boot order of the mini-PCs to boot from USB first, and make sure neither Secure Boot nor Fast Boot is enabled. (We will install to SSD, but a reinstall is required for config changes.)
Start with the controller node before continuing with the workers.
After the controller is installed, make sure to update `k3s_server_ip` in the `justfile`!
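In `just` this is a plain variable assignment (the variable name is from above; the value is a placeholder, use your controller's address):

k3s_server_ip := "192.168.1.101"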
CoreOS will be installed with the embedded config automatically. Make sure to remove the USB stick after the install so the node boots into the new installation.
You should be able to SSH now:
ssh core@${NODE_IP}
On the node you can switch to root and test `kubectl`:
sudo -i
KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl get po --all-namespaces
Retrieve and patch the kubeconfig from the controller:
just kubeconfig
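# Roughly what this does (a sketch; the actual recipe in the justfile is
# authoritative, and ${K3S_SERVER_IP} stands in for the k3s_server_ip value):
mkdir -p .build
ssh core@${K3S_SERVER_IP} 'sudo cat /etc/rancher/k3s/k3s.yaml' > .build/kubeconfig  # root-only file, hence sudo
sed -i "s/127.0.0.1/${K3S_SERVER_IP}/" .build/kubeconfig  # patch the API server address (defaults to 127.0.0.1)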
You can then either invoke kubectl via:
just k get po --all-namespaces
Or alternatively simply export `KUBECONFIG` pointing to the config and use `kubectl` and related tools:
export KUBECONFIG=.build/kubeconfig
kubectl get po --all-namespaces
We want a route reflector and a DaemonSet of VTEPs, as well as Multus and custom CNI components, to be installed. With the kubeconfig available, run:
just install
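# Sanity-check the rollout (exact names and namespaces depend on the manifests):
just k get ds --all-namespaces
just k get po --all-namespaces -o wide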
To get quick access to `vtysh` (FRR) on a particular node, run:
just vtysh $nodename # e.g. controller-1, compute-1, ...
You can then run commands to check the status of the BGP setup, e.g.:
show bgp neighbors # bgp sessions
show interface vxlan100 # inspect interface
show evpn mac vni 100 # MAC addresses known for VNI 100
show bgp l2vpn evpn route # BGP EVPN routes (sent and received)
show evpn vni # overview of the configured VNIs
Make sure FRR provides the correct information to the kernel, e.g.:
bridge fdb show dev vxlan100 | grep dst
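# The interface parameters themselves can be inspected with iproute2:
ip -d link show vxlan100   # VNI, dstport, learning flags, etc.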
To manually set up VXLAN and veth interfaces on `compute-1` and `compute-2` and test pinging each other, run:
just vxlan-test-setup
just vxlan-test
just vxlan-test-clean
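# For reference, a manual setup along those lines could look roughly like this
# (a sketch with assumed names, IDs and addresses; the justfile recipes are authoritative):
sudo ip link add vxlan100 type vxlan id 100 dstport 4789 nolearning
sudo ip link add br100 type bridge
sudo ip link set vxlan100 master br100 up
sudo ip link set br100 up
# veth pair with one end in a network namespace, simulating a workload
sudo ip netns add vxlan-test
sudo ip link add veth0 type veth peer name veth1
sudo ip link set veth1 netns vxlan-test
sudo ip link set veth0 master br100 up
sudo ip netns exec vxlan-test ip link set veth1 up
sudo ip netns exec vxlan-test ip -6 addr add fd00:100::1/64 dev veth1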
# install test pods with network attachments
k apply -f test.yaml
# check if status field is properly set
k describe po vxlan-a
# on one host
k exec -it vxlan-a -- sh
ping6 ${IP_B}%net1
nc -6 -l -p 3000
# on the other
k exec -it vxlan-b -- sh
nc -6 ${IP_A}%net1 3000 # followed by input
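# when done, remove the test pods again
k delete -f test.yaml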