Kubefarm combines everything needed to spawn multiple Kubernetes-in-Kubernetes clusters, together with the network booting configuration to simply bootstrap physical servers from scratch.
The project's goal is to provide a simple and unified way to deploy Kubernetes on bare metal.
There is no installation process as such: you just boot your physical servers from scratch; during boot they download the system image over the network and run it, much like Docker containers, with an overlayfs root.
You don't have to think about redundancy or OS updates anymore; a simple reboot is enough to apply a new image.
You can spawn new Kubernetes clusters and PXE servers very quickly using Helm, just by providing all the parameters in a simple YAML form.
You can build your own image for the physical servers simply by using a Dockerfile. The default image is based on Ubuntu. You can put anything you need into it: simply add any additional packages and custom kernel modules.
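For example, a minimal sketch of a custom image build might look like this (the base image reference, registry, and packages are placeholders rather than the actual names used by kubefarm; point FROM at the default image referenced by your chart values):

cat > Dockerfile <<'EOF'
# Placeholder: use the default kubefarm/LTSP image referenced in your chart values
FROM registry.example.org/kubefarm-ltsp:latest
# Add extra packages or kernel modules your hardware needs
RUN apt-get update \
 && apt-get install -y --no-install-recommends nfs-common lm-sensors \
 && rm -rf /var/lib/apt/lists/*
EOF
docker build -t registry.example.org/my-ltsp-image:latest .
docker push registry.example.org/my-ltsp-image:latest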
The whole setup consists of a few well-known components:
- Kubernetes-in-Kubernetes - the Kubernetes control plane packed into a Helm chart. It is based on the official Kubernetes static pod manifests and uses the official Kubernetes Docker images.
- Dnsmasq-controller - a simple wrapper for Dnsmasq that automates its configuration via Kubernetes CRDs and performs leader election for DHCP high availability.
- LTSP - a network booting server and boot-time configuration framework for the clients, written in shell. It allows booting the OS over the network directly into RAM and performing initial configuration of each server.
There are a number of dependencies needed to make kubefarm work:
- Kubernetes
The parent Kubernetes cluster is required to host the Kubernetes-in-Kubernetes control planes and the network booting servers. Deploy a new Kubernetes cluster using your favorite installation method; you can use kubeadm or kubespray, for example.
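For instance, a small single-node parent cluster could be bootstrapped with kubeadm roughly like this (a sketch assuming kubeadm and a container runtime are already installed; the pod CIDR is an arbitrary example, and you still need to install a CNI plugin afterwards):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config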
You might want to untaint the master nodes to allow running workloads on them:
kubectl taint nodes --all node-role.kubernetes.io/master-
- Cert-manager
Cert-manager performs certificate issuing for Kubernetes-in-Kubernetes and its etcd cluster.
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.2/cert-manager.yaml
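Optionally, you can wait until cert-manager is ready before continuing (a convenience check, not part of the upstream instructions):

kubectl -n cert-manager wait deployment --all --for=condition=Available --timeout=300s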
- Local-path-provisioner
You need automated persistent volume management in your cluster; local-path-provisioner is the simplest way to achieve that.
kubectl apply -f https://github.com/rancher/local-path-provisioner/raw/master/deploy/local-path-storage.yaml
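If you go with local-path-provisioner, you may also want to mark its StorageClass as the default one (an optional step; skip it if your values specify a storage class explicitly):

kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'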
Optionally, any other CSI driver can be used instead.
- MetalLB
You also need automated external IP address management; MetalLB provides this.
kubectl apply -f https://github.com/metallb/metallb/raw/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://github.com/metallb/metallb/raw/v0.9.3/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
After the installation, also configure a MetalLB Layer 2 address range.
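For MetalLB v0.9.x this is done with a ConfigMap; a minimal Layer 2 sketch looks as follows (the address range is a placeholder, use a free range from your own network):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.100-192.168.0.150
EOF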
These IP addresses will be used for the child Kubernetes clusters and the network booting servers.
- Dnsmasq-controller
A highly available DHCP server wrapper that allows configuring DHCP leases through Kubernetes. An additional DNS server mode is also available.
kubectl create namespace dnsmasq
kubectl create -n dnsmasq clusterrolebinding dnsmasq-controller --clusterrole dnsmasq-controller --serviceaccount dnsmasq:dnsmasq-controller
kubectl create -n dnsmasq rolebinding dnsmasq-controller-leader-election --role dnsmasq-controller-leader-election --serviceaccount dnsmasq:dnsmasq-controller
kubectl apply -n dnsmasq \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/crd/bases/dnsmasq.kvaps.cf_dhcphosts.yaml \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/crd/bases/dnsmasq.kvaps.cf_dhcpoptions.yaml \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/crd/bases/dnsmasq.kvaps.cf_dnshosts.yaml \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/crd/bases/dnsmasq.kvaps.cf_dnsmasqoptions.yaml \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/rbac/service_account.yaml \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/rbac/role.yaml \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/rbac/leader_election_role.yaml \
  -f https://github.com/kvaps/dnsmasq-controller/raw/master/config/controller/dhcp-server.yaml
kubectl label node --all node-role.kubernetes.io/dnsmasq=
You also need to deploy basic platform matchers for DHCP; they detect the client architecture (BIOS PC or EFI) so the proper bootloader binary can be sent.
kubectl apply -n dnsmasq -f https://github.com/kvaps/kubefarm/raw/master/deploy/dhcp-platform-matchers.yaml
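You can verify that the dnsmasq setup is in place (an optional check):

kubectl -n dnsmasq get pods
kubectl get crd | grep dnsmasq.kvaps.cf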
Spawn a new cluster:
git clone --recurse-submodules https://github.com/kvaps/kubefarm
cp kubefarm/deploy/helm/kubefarm/values.yaml .
vim values.yaml
helm upgrade --install cluster1 kubefarm/deploy/helm/kubefarm -f values.yaml --wait
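Once the release is installed, you can check which external IP addresses MetalLB assigned to it (the exact service names depend on the chart; filtering by the release name is just a convenient assumption):

kubectl get svc | grep cluster1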
You can access your newly deployed cluster very quickly:
kubectl exec -ti `kubectl get pod -l app=cluster1-kubernetes-admin -o name` -- sh
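The admin container normally ships with kubectl already configured for the child cluster (adjust if your image differs), so once inside you can, for example:

kubectl get nodes       # will be empty until the first physical servers netboot and join
kubectl cluster-info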
To access the cluster by its external address, you need to specify the correct hostname or IP address in kubernetes.apiserver.certSANs in your values.yaml file.
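A minimal values.yaml fragment for this could look as follows (only the kubernetes.apiserver.certSANs path comes from the text above; the address is a placeholder, and it is assumed that certSANs takes a list of names or addresses as in upstream Kubernetes):

kubernetes:
  apiserver:
    certSANs:
    - 192.168.0.1          # external IP or hostname that clients will use to reach the API server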
Now you can get the kubeconfig for your cluster:
kubectl get secret cluster1-kubernetes-admin-conf -o go-template='{{index .data "admin.conf" | base64decode }}'
You only need to correct the server address in it.
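For example, you could save the kubeconfig to a file and point it at the external address (the address and port below are placeholders; use the LoadBalancer IP or the hostname you listed in certSANs):

kubectl get secret cluster1-kubernetes-admin-conf -o go-template='{{index .data "admin.conf" | base64decode }}' > admin.conf
sed -i 's|server: https://.*|server: https://192.168.0.1:6443|' admin.conf
KUBECONFIG=./admin.conf kubectl get nodes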