homelab

The lab currently consists of three mini-PCs (Intel N100, 16 GB RAM, 512 GB SSD) running K3s on Fedora CoreOS (FCOS).

Its primary purpose is to play around with Kubernetes and BGP EVPN to create VPCs for KubeVirt.

Several external resources played a major role in putting this experimental setup together.

Prerequisites

An environment with docker, direnv, nix and nix-direnv available. Make sure to update the justfile recipes for your environment.
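With nix-direnv the environment can be loaded automatically on entering the repository. A minimal sketch, assuming the repository provides a flake.nix (use `use nix` instead if it ships a shell.nix):

# .envrc -- let nix-direnv load the dev shell on cd
use flake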

Setup

Generate node-specific ISOs

The homelab does not support PXE boot and I wanted to keep things simple, so for every node a new ISO is generated with its configuration embedded.

just controller-1 # equivalent to `just node-iso "controller-1" "/dev/sda" "server"`

NOTE: Behind the scenes `just node-iso` is used, which takes a hostname, a device (to install to) and a K3s role ("server" or "agent").
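The recipe itself is not reproduced here, but conceptually it renders the node's Butane config to Ignition and embeds the result into a stock FCOS ISO. A hypothetical sketch (the recipe body, file names and paths are illustrative, not the repository's exact contents):

# hypothetical justfile recipe: render Butane -> Ignition, embed into the ISO
node-iso hostname device role:
    butane --strict {{hostname}}.bu > .build/{{hostname}}.ign
    coreos-installer iso ignition embed -i .build/{{hostname}}.ign -o .build/{{hostname}}.iso fedora-coreos.iso

In practice the device and role parameters would be templated into the Butane config before rendering.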

Find the USB stick and unmount it if it is mounted:

df # or lsblk or dmesg
sudo umount /run/media/...

Let's copy the data to the USB stick:

just dd controller-1 /dev/sdX
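The recipe is roughly a thin wrapper around dd; assuming the generated ISO lands in .build/, the equivalent would be something like:

# hypothetical equivalent of `just dd controller-1 /dev/sdX`
sudo dd if=.build/controller-1.iso of=/dev/sdX bs=4M status=progress conv=fsync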

The above commands can also be run for compute-1 and compute-2 to set up the non-control-plane nodes. However, only do so once controller-1 is up and running, as an update to the justfile is required first (more details in the next sections).

Install FCOS

Change the boot order of the mini-PCs to boot from USB first and make sure neither Secure Boot nor Fast Boot is enabled. (We install to the SSD, but a reinstall is required for config changes.)

Start with the controller node before continuing to the workers.

After the controller is installed, make sure to update k3s_server_ip in the justfile!
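Assuming the justfile keeps the address in a variable (the name matches the text above, the IP is illustrative):

# in the justfile: point the remaining nodes at the freshly installed controller
k3s_server_ip := "192.168.1.101"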

Fedora CoreOS will be installed with the embedded config automatically. Make sure to remove the USB stick after the install to boot into the new installation.

Testing FCOS

You should be able to SSH now:

ssh core@${NODE_IP}

On the node you can switch to root and test kubectl:

sudo -i
KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl get po --all-namespaces

Connect to the K3s cluster

Retrieve and patch the kubeconfig from the controller:

just kubeconfig
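Under the hood this roughly amounts to copying /etc/rancher/k3s/k3s.yaml off the controller and rewriting the server address; a sketch, not the exact recipe:

# hypothetical equivalent of `just kubeconfig`
ssh core@${K3S_SERVER_IP} "sudo cat /etc/rancher/k3s/k3s.yaml" > .build/kubeconfig
sed -i "s/127.0.0.1/${K3S_SERVER_IP}/" .build/kubeconfig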

You can then either invoke kubectl via:

just k get po --all-namespaces

Alternatively, export KUBECONFIG pointing to the config and use kubectl and related tools directly:

export KUBECONFIG=.build/kubeconfig
kubectl get po --all-namespaces

Install cluster components

We want a route reflector and a DaemonSet of VTEPs, as well as Multus and custom CNI components, to be installed. With the kubeconfig available, run:

just install
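Afterwards you can check that everything came up; the namespace below is an assumption, adjust it to the actual manifests:

# verify the route-reflector, VTEP and Multus pods are running
just k get po -n kube-system -o wide
just k get ds -n kube-system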

Testing / Debugging

vtysh

To get quick access to vtysh (FRR) on a particular node, run:

just vtysh $nodename # e.g. controller-1, compute-1, ...
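If you want to do the same by hand, the recipe is roughly equivalent to exec'ing into the FRR pod scheduled on that node (the label and namespace are assumptions):

# hypothetical equivalent of `just vtysh controller-1`
pod=$(kubectl get po -n kube-system -l app=frr --field-selector spec.nodeName=controller-1 -o name | head -n1)
kubectl exec -it -n kube-system ${pod} -- vtysh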

You can then run commands to check the status of the BGP setup, e.g.

show bgp neighbors # BGP sessions
show interface vxlan100 # inspect the interface
show evpn mac vni 100 # MAC addresses learned for VNI 100
show bgp l2vpn evpn route # EVPN routes in the BGP table
show evpn vni # overview of known VNIs

FDB

Make sure FRR installs the correct forwarding entries into the kernel, e.g.:

bridge fdb show dev vxlan100 | grep dst
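It can also help to double-check the VXLAN parameters the kernel actually ended up with:

# show VXLAN details (VNI, dstport, local address) as seen by the kernel
ip -d link show vxlan100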

VXLAN

To manually set up VXLAN and veth interfaces on compute-1 and compute-2 and test that they can ping each other, run:

just vxlan-test-setup
just vxlan-test
just vxlan-test-clean
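For reference, the setup recipe conceptually does something along these lines on each node (interface names, VNI and addresses are illustrative, not the repository's exact values):

# hypothetical sketch: bridge a VXLAN interface and a veth pair together
sudo ip link add br100 type bridge
sudo ip link add vxlan100 type vxlan id 100 dstport 4789 local ${NODE_IP} nolearning
sudo ip link set vxlan100 master br100
sudo ip link add veth0 type veth peer name veth1
sudo ip link set veth0 master br100
sudo ip addr add fd00:100::1/64 dev veth1 # use ::2 on the other node
for i in br100 vxlan100 veth0 veth1; do sudo ip link set $i up; done

Pinging fd00:100::2 from compute-1 then exercises the EVPN-learned forwarding entries.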

K8s with Multus

# install test pods with network attachments
k apply -f test.yaml
# check if the status field is properly set
k describe po vxlan-a
# on one host
k exec -it vxlan-a -- sh
ping6 ${IP_B}%net1
nc -6 -l -p 3000
# on the other
k exec -it vxlan-b -- sh
nc -6 ${IP_A}%net1 3000 # followed by input
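The net1 addresses used above can be read off the pods directly, e.g.:

# discover the IPv6 address on the Multus-attached interface
k exec vxlan-a -- ip -6 addr show dev net1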
