This project sets up a Kubernetes cluster on Proxmox VMs and deploys a complete observability stack including Prometheus, Grafana, Loki, Tempo, and more.
The project is organized into Terraform modules, with supporting Kubernetes manifests and configuration:
.
├── modules/                             # Terraform modules
│   ├── proxmox/                         # Proxmox VM provisioning
│   │   ├── main.tf                      # VM creation logic
│   │   ├── variables/                   # Module input variables
│   │   ├── outputs/                     # Module outputs
│   │   └── snippets/                    # Helper code snippets
│   ├── kubernetes/                      # Kubernetes infrastructure
│   │   ├── main.tf                      # K8s resources (MetalLB, NFS)
│   │   ├── variables/                   # Module input variables
│   │   └── outputs/                     # Module outputs
│   ├── monitoring/                      # Observability stack
│   │   ├── main.tf                      # Monitoring components
│   │   ├── variables/                   # Module input variables
│   │   ├── outputs/                     # Module outputs
│   │   └── templates/                   # Configuration templates
│   ├── cert-manager/                    # Certificate management
│   │   ├── templates/                   # Configuration templates
│   │   └── variables/                   # Module input variables
│   └── ingress/                         # Ingress controllers
├── kubernetes/                          # Kubernetes manifests
│   ├── alertmanager-config.yaml         # Alertmanager email notifications
│   ├── apply-alertmanager-config.sh     # Script to apply alerting config
│   ├── grafana/                         # Grafana-related manifests
│   │   └── grafana-ingress-tls.yaml     # TLS-enabled ingress
│   ├── k3s-cleanup-servicemonitors.sh   # Fix k3s monitoring
│   ├── n8n/                             # n8n automation platform manifests
│   │   ├── deployment.yaml              # n8n deployment configuration
│   │   └── ingress-tls.yaml             # TLS-enabled ingress for n8n
│   ├── obsidian/                        # Obsidian sync manifests
│   │   ├── couchdb-deployment.yaml      # CouchDB for Obsidian sync
│   │   └── obsidian-deployment.yaml     # Obsidian server
│   ├── prometheus-rule-suppress.yaml    # Alert suppression rules
│   ├── README-monitoring.md             # Monitoring documentation
│   └── traefik/                         # Traefik ingress controller manifests
│       ├── current-traefik.yaml         # Current Traefik configuration
│       └── traefik-deployment-acme.yaml # ACME/Let's Encrypt enabled
├── config/                              # Configuration files
│   └── k3s/                             # k3s config files
├── templates/                           # General templates
├── snippets/                            # Helper code snippets
├── main.tf                              # Root Terraform configuration
├── variables.tf                         # Root variables
├── outputs.tf                           # Root outputs
├── terraform.tfvars                     # Variable values (not in git)
├── nok8s.tfvars                         # Infrastructure-only variables
├── kubeconfig.yaml                      # Kubernetes configuration
└── cloud-init-userdata.tftpl            # Template for cloud-init configuration
- Proxmox server with API access
- SSH keypair for VM access
- Terraform installed locally
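A quick way to sanity-check these prerequisites before the first apply (the Proxmox hostname and key path below are placeholders for your own values):
# Terraform is installed
terraform version
# The Proxmox API endpoint is reachable (replace with your host)
curl -k https://proxmox.example.com:8006
# The SSH keypair used for VM access exists
ls ~/.ssh/id_ed25519 ~/.ssh/id_ed25519.pub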
The deployment is split into two phases:
# Initialize Terraform
terraform init
# Create the VMs and basic infrastructure
terraform apply -var="deploy_kubernetes=false"
# After VMs are created:
# 1. SSH to the control node
ssh -F ssh_config gimli
# 2. Verify k3s is running on the control node
sudo systemctl status k3s
# 3. Copy the kubeconfig from the control node
scp -F ssh_config gimli:/etc/rancher/k3s/k3s.yaml ./kubeconfig.yaml
# 4. Update the server address in the kubeconfig (BSD/macOS sed shown; on Linux drop the empty '' after -i)
sed -i '' 's/127.0.0.1/CONTROL_NODE_PRIVATE_IP/g' kubeconfig.yaml
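Before moving on, it helps to confirm the copied kubeconfig actually works; this assumes kubectl is installed on your workstation:
export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl get nodes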
# Deploy Kubernetes resources and monitoring stack
terraform apply -var="deploy_kubernetes=true"
This diagram illustrates the complete architecture of the project, showing the relationships and dependencies between all components:
- Infrastructure Layer: Proxmox VMs, k3s Kubernetes, MetalLB, and NFS storage
- Core Services: Traefik Ingress, Cert-Manager, and HashiCorp Vault
- Observability Stack: Prometheus, Grafana, Loki, Tempo, Mimir, and Alertmanager
- Applications: WordPress, Obsidian Sync with CouchDB, n8n, and their respective monitoring components
The diagram visualizes key relationships including:
- Service dependencies
- Monitoring data flow (metrics and logs)
- Ingress routing paths
- Security integrations
- Dashboard connections
The Proxmox module creates VMs with the following features:
- Public and private networking
- Cloud-init for initial configuration
- K3s installation
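If you only want to preview or re-apply the VM layer, Terraform resource targeting works; this assumes the root configuration instantiates the module as module.proxmox (check main.tf for the actual name):
terraform plan -target=module.proxmox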
The Kubernetes module sets up core cluster infrastructure:
- MetalLB for LoadBalancer services
- NFS storage for persistent volumes
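A rough way to confirm MetalLB and the NFS-backed storage came up; the metallb-system namespace and the presence of a default StorageClass are common conventions, not guaranteed by this module:
kubectl get pods -n metallb-system
kubectl get storageclass
kubectl get pv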
The monitoring module deploys a comprehensive observability stack:
- Prometheus for metrics
- Grafana for visualization
- Loki for logs
- Tempo for tracing
- Mimir for long-term metrics storage
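Prometheus itself is not exposed through an ingress in this README, but it can be reached with a port-forward; the service name below assumes the kube-prometheus-stack chart naming that the Grafana service uses later in this document:
kubectl port-forward svc/kube-prometheus-stack-prometheus 9090:9090 -n monitoring
# Open http://localhost:9090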
The project also sets up these additional services (via Kubernetes manifests):
- Traefik ingress controller with automatic TLS
- ACME/Let's Encrypt integration
- n8n workflow automation tool
- Prometheus metrics integration
- Configurable with authentication
- Integration with Grafana dashboards
- Self-hosted Obsidian sync server
- CouchDB backend for data storage
- Monitoring with Prometheus
- Visualization with Grafana dashboards
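To see which of these services are actually exposed, and whether their TLS certificates were issued, the following works once the cert-manager CRDs are installed (as the cert-manager module is expected to do):
kubectl get ingress -A
kubectl get certificates -A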
The project includes a comprehensive monitoring stack:
- Alertmanager email notifications for alerts
- Appropriate alert inhibition rules
- Monitoring customized for k3s environments
- Dashboards for Kubernetes system resources
- Dashboards for node resources
- Custom dashboards for all services (n8n, CouchDB, Obsidian)
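The alerting rules and scrape targets that are actually loaded can be listed through the Prometheus Operator CRDs; this assumes the stack runs in the monitoring namespace as in the rest of this README:
kubectl get prometheusrules -n monitoring
kubectl get servicemonitors -A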
Grafana is accessible via ingress at https://grafana.your-domain.com, or via port-forward:
kubectl port-forward svc/kube-prometheus-stack-grafana 3000:80 -n monitoring
# Open http://localhost:3000
# Username: admin, Password: from terraform.tfvars
# Traefik dashboard
kubectl port-forward svc/traefik 9000:9000 -n kube-system
# Open http://localhost:9000/dashboard/
n8n is accessible via the configured ingress at https://automate.your-domain.com
# CouchDB (Obsidian sync backend)
kubectl port-forward svc/couchdb 9984:5984 -n obsidian
# Open http://localhost:9984
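With the port-forward running, CouchDB's built-in health endpoint gives a quick liveness check (it does not require credentials on a default configuration):
curl http://localhost:9984/_up
# Expect {"status":"ok"}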
- Keep terraform.tfvars and secrets secure
- The node token file should not be committed to version control
- For k3s-specific monitoring configuration, see kubernetes/README-monitoring.md
- Alert notifications are configured to use email via Alertmanager
This project uses HashiCorp Vault for secure credential management. All sensitive information is stored in Vault and retrieved by applications at runtime, rather than being stored in manifest files.
To deploy and configure the Vault server:
# Deploy Vault with secure credentials
cd kubernetes/vault
VAULT_PASSWORD="your-secure-password" SMTP_PASSWORD="your-email-app-password" ./deploy-vault.sh
# Deploy the Vault Secrets Operator to sync credentials to Kubernetes
./deploy-secrets-operator.sh
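After the script completes, the Vault pod can be checked directly; the vault namespace and vault-0 pod name are the usual Helm chart defaults and may differ from what deploy-vault.sh creates:
kubectl get pods -n vault
kubectl exec -n vault vault-0 -- vault status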
- Web UI Access:
  - The Vault UI is available at https://vault.xalg.im
  - Use the initial root password: ******** (refer to the deployment script)
- CLI Access:
  - Source the credentials file to load environment variables: source ~/.vault/credentials
  - Access Vault using the CLI:
    export VAULT_ADDR=https://vault.xalg.im
    vault login -method=token "$VAULT_ROOT_TOKEN"
The following credentials are securely stored in Vault:
- AlertManager: Email SMTP configuration
- n8n: Admin username and password
- CouchDB: Database credentials for Obsidian sync
- K3s: Cluster token
To update a secret:
# Export variables from the credentials file
source ~/.vault/credentials
# Update a secret using the Vault CLI
export VAULT_ADDR=https://vault.xalg.im
vault kv put secret/n8n admin_password="********" admin_user="admin"
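To confirm the change (or simply to inspect what is stored), the same path can be read back; note that vault kv get prints the values in clear text, so treat the output as sensitive:
vault kv get secret/n8n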
- After first login, change the root token and initial password
- Back up the ~/.vault/credentials file to a secure location
- Avoid committing any plain-text credentials to version control
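One way to keep that backup encrypted at rest, assuming GnuPG is available, is a symmetric copy; this is a suggestion rather than something the project scripts do:
gpg --symmetric --cipher-algo AES256 ~/.vault/credentials
# Produces ~/.vault/credentials.gpg; store it elsewhere and keep the passphrase separate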