This repository contains the declarative state of my home Kubernetes cluster. Flux watches the cluster directory and applies changes to my cluster based on the YAML manifests it finds there.
Feel free to open a GitHub issue or join the k8s@home Discord if you have any questions.
This repository is built off the k8s-at-home/template-cluster-k3s repository.
My cluster is k3s provisioned on top of Ubuntu 20.04 using the Ansible Galaxy role ansible-role-k3s. This is a semi-hyper-converged cluster: workloads and block storage share the same available resources on my nodes.
See my server/ansible directory for my playbooks and roles.
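As a rough sketch, a playbook using that role can be very small. The group name, release version, and file name below are illustrative placeholders, not copied from my repo, and the block assumes the role is installed from Ansible Galaxy as xanmanning.k3s:

```yaml
# provision-k3s.yaml - illustrative sketch only
- hosts: k3s_cluster                    # placeholder inventory group
  become: true
  vars:
    k3s_release_version: v1.21.5+k3s1   # example release, pin to whatever you actually run
  roles:
    - role: xanmanning.k3s              # ansible-role-k3s from Ansible Galaxy
```

Run it with something like `ansible-galaxy install xanmanning.k3s` followed by `ansible-playbook -i inventory provision-k3s.yaml`; the role's variables control control-plane vs. worker behaviour per host.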
- calico: For internal cluster networking using BGP configured on Opnsense.
- rook-ceph: Provides persistent volumes, allowing any application to consume RBD block storage.
- SOPS: Encrypts secrets, which are then safe to store even in a public repository.
- external-dns: Creates DNS entries in a separate CoreDNS deployment, which is backed by my cluster's etcd deployment.
- cert-manager: Configured to automatically create TLS certificates for all ingress services using Let's Encrypt (a sketch of the issuer is shown after this list).
- kube-vip: HA solution for the Kubernetes control plane.
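For example, the cert-manager piece boils down to a ClusterIssuer pointing at Let's Encrypt. This is a generic sketch rather than a copy of my manifest; the email, secret names, and the choice of a Cloudflare DNS-01 solver are assumptions:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: me@example.com                # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-production       # ACME account key stored in this Secret
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:           # assumes a Cloudflare API token Secret exists
              name: cloudflare-api-token
              key: api-token
```

With an issuer like this in place, an Ingress only needs a tls section (or the relevant annotation) and cert-manager requests and renews the certificate automatically.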
The Git repository contains the following directories under cluster, ordered below by how Flux applies them.
- base directory is the entrypoint for Flux
- crds directory contains custom resource definitions (CRDs) that need to exist globally in the cluster before anything else is deployed
- core directory (depends on crds) contains important infrastructure applications (grouped by namespace) that should never be pruned by Flux
- apps directory (depends on core) is where common applications (grouped by namespace) are placed; Flux will prune resources here if they are no longer tracked by Git (see the Kustomization sketch below the directory tree)
```
./cluster
├── ./apps
├── ./base
├── ./core
└── ./crds
```
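A minimal sketch of how that ordering is expressed with Flux v2 Kustomization objects is shown below. It assumes a GitRepository source named flux-system; the exact names, paths, and API version depend on the Flux release installed and may differ from my actual manifests:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: core
  namespace: flux-system
spec:
  interval: 10m
  path: ./cluster/core
  prune: false                   # core infrastructure is never pruned
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: crds                 # reconcile only after the CRDs Kustomization is ready
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./cluster/apps
  prune: true                    # apps are pruned when removed from Git
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: core                 # apps reconcile only after core is ready
```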
- GitHub Actions for checking code formatting
- Rancher System Upgrade Controller to apply updates to k3s (an example Plan is sketched after this list)
- Renovate, with the help of the k8s-at-home/renovate-helm-releases GitHub Action, keeps my application charts and container images up-to-date
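As an illustration, a control-plane Plan for the System Upgrade Controller looks roughly like this; the channel, concurrency, and node selector values are examples rather than a copy of my configuration:

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1                           # upgrade one control-plane node at a time
  cordon: true                             # cordon the node while it upgrades
  serviceAccountName: system-upgrade
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/master
        operator: In
        values: ["true"]
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable   # follow the stable k3s channel
```

The controller watches Plans like this and rolls the new k3s version across the matching nodes; a second Plan with a different node selector typically handles the workers.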
Currently, when using BGP on Opnsense, services are not properly load balanced because Opnsense does not support multipath in the BSD kernel.
In my network, Calico is configured with BGP peering to my Opnsense router. With BGP enabled, I advertise load balancer addresses using externalIPs on my Kubernetes services.
Name | CIDR |
---|---|
Management | 192.168.1.0/24 |
Servers | 192.168.42.0/24 |
k8s external services (BGP) | 192.168.69.0/24 |
k8s pods | 10.69.0.0/16 |
k8s services | 10.96.0.0/16 |
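Putting the table above together with the BGP setup, the Calico side amounts to a BGPPeer for the router and a BGPConfiguration that advertises the external-service CIDR. This is a sketch in calicoctl's projectcalico.org/v3 format; the router IP and AS numbers are placeholders, not my real values:

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: opnsense
spec:
  peerIP: 192.168.42.1           # placeholder router address
  asNumber: 64500                # placeholder AS number for Opnsense
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  asNumber: 64512                # placeholder AS number for the cluster nodes
  serviceExternalIPs:
    - cidr: 192.168.69.0/24      # "k8s external services (BGP)" range from the table above
```

A Kubernetes Service then just sets spec.externalIPs to an address inside 192.168.69.0/24 and Calico advertises that route to the router.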
To preface this: I use a single domain name for both internal and externally facing applications. This is also the most complicated part of my setup to explain, but I will try to sum it up.
On Opnsense, under Services: Unbound DNS: Overrides, I have a Domain Override set to my domain, with the address pointing to the load balancer IP of the separate CoreDNS deployment mentioned above (it runs in the cluster but is not the cluster's own DNS service). This gives me split-horizon DNS. external-dns reads my cluster's Ingresses and inserts DNS records containing the sub-domain and the load balancer IP (of Traefik) into that separate CoreDNS service and, depending on whether an annotation is present on the ingress, into Cloudflare. See the diagram below for a visual representation.
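For illustration, an Ingress in this scheme looks something like the sketch below. The hostname, service name, and ingress class are placeholders, and the annotation key that marks an ingress as externally published (synced to Cloudflare) is purely hypothetical, since the real key is specific to my setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: default
  annotations:
    # hypothetical annotation - the real key used to decide whether the record
    # is also pushed to Cloudflare differs per setup
    external-dns/is-public: "true"
spec:
  ingressClassName: traefik              # assumes Traefik is the ingress controller
  rules:
    - host: myapp.example.com            # sub-domain picked up by external-dns
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts: ["myapp.example.com"]
      secretName: myapp-tls              # certificate issued automatically by cert-manager
```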
Device | Count | OS Disk Size | Data Disk Size | RAM | Purpose |
---|---|---|---|---|---|
Intel NUC8i3BEK | 3 | 256GB NVMe | N/A | 16GB | k3s Masters (embedded etcd) |
Intel NUC8i5BEH | 3 | 120GB SSD | 1TB NVMe (rook-ceph) | 32GB | k3s Workers |
Intel NUC8i7BEH | 2 | 750GB SSD | 1TB NVMe (rook-ceph) | 64GB | k3s Workers |
Qnap NAS (rocinante) | 1 | N/A | 8x12TB RAID6 | 16GB | Media and shared file storage |
Synology NAS (serenity) | 1 | N/A | 8x12TB RAID6 | 4GB | Media and shared file storage |
Tool | Purpose |
---|---|
direnv | Sets KUBECONFIG environment variable based on present working directory |
go-task | Alternative to makefiles (who honestly likes those?) |
pre-commit | Enforces code consistency and verifies no secrets are pushed |
kubetail / stern | Tail logs in Kubernetes |
A lot of inspiration for my cluster came from the people who have shared their clusters over at awesome-home-kubernetes.