I couldn't find any example Terraform projects for building an EKS cluster that I was happy with, so I cobbled this one together. This project spins up a decent EKS cluster for demos, development, or testing. In theory you could scale it up to production too, if your apps are stateless and can tolerate running on spot instances - but it's really meant for short- to medium-term environments that you spin up and tear down at need.
It currently features:
- Custom VPC Setup.
- Kubernetes 1.21.
- Secrets encryption via a rotating customer-managed KMS key.
- CloudWatch encryption via a rotating customer-managed KMS key.
- Control plane logging to CloudWatch.
- Common tagging across all created resources for easy billing resolution.
- Calico networking instead of `aws-node`.
- EC2 worker nodes with encrypted root volumes.
- 2 Helm charts at a minimum:
  - Cluster Autoscaler for autoscaling.
  - AWS's Node Termination Handler to watch for spot instances being terminated and drain them, handle rebalance recommendations, and drain on scheduled events.
- Configurable autoscaling EC2 pools. By default it runs:
  - 1 t3.small instance for safety. The autoscaler pod runs here.
  - 1 to 5 t3.medium spot instances. Ideally, most of the workload should run on these. The spot price is set to the on-demand price.
- Configurable mapping of accounts, IAM roles, and IAM users to the aws-auth ConfigMap.
- (Occasionally) bleeding edge compatibility with Terraform 1.0.7
- Generation of the Kubeconfig needed for kubectl, helm, etc.
- Cost to remain as low as possible.
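As a rough illustration of the encryption features above, a rotating customer-managed KMS key for secrets encryption might look like the sketch below. The resource names mirror this project's `aws_kms_key.eks` / `aws_kms_alias.eks`, but the arguments shown are illustrative assumptions, not the project's exact code:

```hcl
# Sketch: rotating customer-managed KMS key for EKS secrets encryption.
# Arguments here are assumptions for illustration.
resource "aws_kms_key" "eks" {
  description         = "EKS secrets encryption"
  enable_key_rotation = true # AWS rotates the key material automatically
}

resource "aws_kms_alias" "eks" {
  name          = "alias/eks-secrets"
  target_key_id = aws_kms_key.eks.key_id
}

# The key's ARN is then handed to the EKS module so Kubernetes secrets
# are encrypted at rest, e.g.:
#
#   cluster_encryption_config = [{
#     provider_key_arn = aws_kms_key.eks.arn
#     resources        = ["secrets"]
#   }]
```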
- Ideally, I want this project to always run with the latest Terraform - though this requires compatibility with the public AWS terraform modules.
- Helm is the tool of choice for installing into the cluster - Convince me otherwise.
- This was last run with Terraform 1.0.7
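As an example of the Helm-from-Terraform approach, installing the Cluster Autoscaler chart might look roughly like this. The repository URL, chart values, and the `module.eks` / `var.region` references are assumptions for illustration, not this project's exact code:

```hcl
# Sketch: installing the Cluster Autoscaler chart via the Terraform
# helm provider. Values shown are assumptions.
resource "helm_release" "autoscaler" {
  name       = "cluster-autoscaler"
  namespace  = "kube-system"
  repository = "https://kubernetes.github.io/autoscaler"
  chart      = "cluster-autoscaler"

  # Let the autoscaler discover its node groups by cluster-name tag.
  set {
    name  = "autoDiscovery.clusterName"
    value = module.eks.cluster_id
  }

  set {
    name  = "awsRegion"
    value = var.region
  }
}
```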
- Just edit provider.tf as needed so you can connect, and put your settings into local.tf.
- Run `terraform apply`.
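For example, a minimal provider.tf and local.tf might look like the sketch below - the profile, region, and tag values are placeholders to replace with your own, and the variable names are assumptions rather than this project's exact ones:

```hcl
# provider.tf - how Terraform authenticates to your AWS account.
provider "aws" {
  region  = "us-east-1"      # pick your region
  profile = "my-aws-profile" # or rely on environment credentials
}

# local.tf - values shared across the configuration.
locals {
  cluster_name = "demo-eks"
  common_tags = {
    Project   = "eks-demo"
    ManagedBy = "terraform"
  }
}
```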
This is what ends up running after your first install:
```
╰─❯ kubectl get all -A
NAMESPACE     NAME                                                            READY   STATUS    RESTARTS   AGE
kube-system   pod/aws-load-balancer-controller-54cf85b446-c8244               1/1     Running   1          43m
kube-system   pod/aws-load-balancer-controller-54cf85b446-vbpfz               1/1     Running   1          43m
kube-system   pod/aws-node-termination-handler-5jcfs                          1/1     Running   0          43m
kube-system   pod/aws-node-termination-handler-nqv7q                          1/1     Running   0          43m
kube-system   pod/calico-kube-controllers-784b4f4c9-qpfs8                     1/1     Running   0          43m
kube-system   pod/calico-node-r7lkj                                           1/1     Running   0          43m
kube-system   pod/calico-node-rmw6m                                           1/1     Running   0          43m
kube-system   pod/cluster-autoscaler-aws-cluster-autoscaler-d49c449d5-vlzt5   1/1     Running   0          43m
kube-system   pod/coredns-65ccb76b7c-g7cbz                                    1/1     Running   0          46m
kube-system   pod/coredns-65ccb76b7c-nzcd4                                    1/1     Running   0          46m
kube-system   pod/kube-proxy-qs2r5                                            1/1     Running   0          43m
kube-system   pod/kube-proxy-svnh5                                            1/1     Running   0          43m

NAMESPACE     NAME                                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes                                  ClusterIP   10.100.0.1      <none>        443/TCP         46m
kube-system   service/aws-load-balancer-webhook-service           ClusterIP   10.100.25.81    <none>        443/TCP         43m
kube-system   service/cluster-autoscaler-aws-cluster-autoscaler   ClusterIP   10.100.115.16   <none>        8085/TCP        43m
kube-system   service/kube-dns                                    ClusterIP   10.100.0.10     <none>        53/UDP,53/TCP   46m

NAMESPACE     NAME                                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/aws-node-termination-handler   2         2         2       2            2           kubernetes.io/os=linux   43m
kube-system   daemonset.apps/calico-node                    2         2         2       2            2           kubernetes.io/os=linux   43m
kube-system   daemonset.apps/kube-proxy                     2         2         2       2            2           <none>                   46m

NAMESPACE     NAME                                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/aws-load-balancer-controller                2/2     2            2           43m
kube-system   deployment.apps/calico-kube-controllers                     1/1     1            1           43m
kube-system   deployment.apps/cluster-autoscaler-aws-cluster-autoscaler   1/1     1            1           43m
kube-system   deployment.apps/coredns                                     2/2     2            2           46m

NAMESPACE     NAME                                                                  DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/aws-load-balancer-controller-54cf85b446               2         2         2       43m
kube-system   replicaset.apps/calico-kube-controllers-784b4f4c9                     1         1         1       43m
kube-system   replicaset.apps/cluster-autoscaler-aws-cluster-autoscaler-d49c449d5   1         1         1       43m
kube-system   replicaset.apps/coredns-65ccb76b7c
```
- Set up pre-commit tooling, including a Checkov security scan.
- I wanted to use Launch Templates instead of Launch Configurations - but there seems to be a bug in the EKS Terraform module where the Spot configuration is ignored.
- Testing Framework?
- Build a list of must-have Helm charts you'd tend to put into an EKS/K8s cluster. I'm thinking it would start with:
  - Vault
  - Consul?
  - Prometheus (via its Operator)
  - Cert-Manager
  - Keycloak
- How can this integrate with Route53? Should it?

**Requirements**

Name | Version |
---|---|
terraform | >= 0.13.1 |
aws | >= 3.42.0 |
cloudinit | ~> 2.2.0 |
kubernetes | ~> 2.2.0 |
local | >= 2.1.0 |
null | ~> 3.1.0 |
random | >= 3.1.0 |
template | ~> 2.2.0 |

**Providers**

Name | Version |
---|---|
aws | >= 3.42.0 |
helm | n/a |
null | ~> 3.1.0 |

**Modules**

Name | Source | Version |
---|---|---|
alb_controller | git::github.com/GSA/terraform-kubernetes-aws-load-balancer-controller?ref=v4.1.0 | |
eks | terraform-aws-modules/eks/aws | |
iam_assumable_role_admin | terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc | |
vpc | terraform-aws-modules/vpc/aws | |

**Resources**

Name | Type |
---|---|
aws_iam_policy.cluster_autoscaler | resource |
aws_kms_alias.eks | resource |
aws_kms_alias.ekslogs | resource |
aws_kms_key.eks | resource |
aws_kms_key.ekslogs | resource |
helm_release.autoscaler | resource |
helm_release.aws_node_termination_handler | resource |
null_resource.install_calico_plugin | resource |
null_resource.kube_config | resource |
aws_availability_zones.available | data source |
aws_caller_identity.current | data source |
aws_eks_cluster.cluster | data source |
aws_eks_cluster_auth.cluster | data source |
aws_iam_policy_document.cluster_autoscaler | data source |
aws_iam_policy_document.logging | data source |

**Inputs**

No inputs.

**Outputs**

Name | Description |
---|---|
cloudwatch_log_group_name | Cloudwatch Log Group Name for this Cluster |
cluster_endpoint | Endpoint for EKS control plane. |
cluster_security_group_id | Security group ids attached to the cluster control plane. |
config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
kubectl_config | kubectl config as generated by the module. |
region | AWS region. |