Add Helm Chart (#38)
* add helm chart

* added pre-install hook to the helm chart, added namespace handling

* feat: use existing secret

Only add AWS AK/SK to secrets. Use non-secret ENV directly from values.
Namespace secret-related keys under "auth:" to mimic Bitnami values.yaml
Add a ref to the helm chart repo (if github pages are activated, see https://medium.com/@mattiaperi/create-a-public-helm-chart-repository-with-github-pages-49b180dbb417 and https://docs.github.com/en/pages)

* chore: reuse labels defined in the template

* fix: add annotations to ensure service accounts are created before the job runs

* chore: rename overrideNamespaces to targetNamespace

To make it consistent with the ENV name

* fix: allow actions across namespaces with ClusterRole

* Tweak the readme

* Simplify title

* Another tweak

* Start refactoring and simplifying the Helm charts

* Fill in secret definition

* Continue trimming

* Split into files, remove hooks, clean up values.

* Make the most common parameters easier to edit

* Seems to be working now

* Fix version of the image

* Try adding a helm chart releaser

* Move some files around

* remove custom charts dir name

* remove charts

* Undo everything

* Try rolling my own helm release

* Rename to index

* Add a timezone

* Keep trying to get the timezone right

* Try to update the date again

* Try creating simple GH pages

* Rename to docs

* - Merge all template yaml into one file
- delete example directory because we'll be changing that
- delete docs directory because the helm index is now in a different
  repo

* Major cleanup of readme

* Wrap up final version of things

* Bump chart to version 1.0.0

* Add contributors

* Improve the readme

* Fix file encoding

Co-authored-by: James Hebden <[email protected]>
Co-authored-by: Cyril Duchon-Doris <[email protected]>
Co-authored-by: James "ec0" Hebden <[email protected]>
Co-authored-by: Nabeel Sulieman <[email protected]>
5 people authored Sep 11, 2022
1 parent bbc08ac commit f36fcbf
Showing 18 changed files with 399 additions and 171 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@
.idea
k8s-ecr-login-renew.exe
k8s-ecr-login-renew
*~
164 changes: 87 additions & 77 deletions README.md
@@ -8,24 +8,35 @@ To work around this, I created this small tool to automatically refresh the secret.
It deploys as a cron job and ensures that your Kubernetes cluster
will always be able to pull Docker images from ECR.

## Quick Start

Prerequisite: AWS IAM credentials with permission to read ECR data.

Installation with Helm:

```sh
helm repo add nabsul https://nabsul.github.io/helm
helm install k8s-ecr-login-renew nabsul/k8s-ecr-login-renew --set awsRegion=[REGION],awsAccessKeyId=[ACCESS_KEY_ID],awsSecretAccessKey=[SECRET_KEY]
```

## Docker Images

The tool is built for and supports the following architectures:
- `linux/amd64`
- `linux/arm64`
- `linux/arm/v7`

If there is an architecture that isn't supported you can request it [here](https://github.com/nabsul/k8s-ecr-login-renew/issues).

The Docker image for running this tool in Kubernetes is published here: https://hub.docker.com/r/nabsul/k8s-ecr-login-renew

Note: Although a `latest` tag is currently being published, I highly recommend using a specific version.
With the `latest` tag you run the risk of using an outdated version of the tool, or getting upgraded to a newer version before you're ready.
I will eventually deprecate the `latest` tag.

## Environment Variables

The tool is configured using the following environment variables:

- AWS_ACCESS_KEY_ID (required): AWS access key used to create the Docker credentials.
- AWS_SECRET_ACCESS_KEY (required): AWS secret needed to fetch Docker credentials from AWS.
@@ -38,33 +49,28 @@
If none is provided, the default URL returned from AWS is used.
- Example: `DOCKER_REGISTRIES=https://321321.dkr.ecr.us-west-2.amazonaws.com,https://123123.dkr.ecr.us-east-2.amazonaws.com`
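For illustration, here is roughly how these variables might be wired into the cron job's container spec. This is a sketch only: the secret name and keys below match the `kubectl create secret` example in this README, and must match whatever secret you actually create:

```yaml
# Illustrative sketch: env wiring for the cron job container.
# The secret name/keys must match the secret you create.
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: ecr-renew-cred-demo
        key: aws-access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: ecr-renew-cred-demo
        key: aws-secret-access-key
  - name: DOCKER_REGISTRIES
    value: https://321321.dkr.ecr.us-west-2.amazonaws.com,https://123123.dkr.ecr.us-east-2.amazonaws.com
```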

## Prerequisites

The following sections describe step-by-step how to set up the prerequisites needed to deploy this tool.

### Create an ECR Instance

I'm not going to describe this in too much detail because there is
[plenty of documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html)
that describes how to do this. Here are a few pointers:

- Create an AWS ECR instance
- Create a repository inside that instance

### Push a Test Image to ECR

If you are not already using ECR and pushing images to it,
you'll need to create and upload a test image to ECR.
There's plenty of good documentation out there, but basically:

- Install the AWS CLI tool and Docker on your machine
- Log into the registry: `aws ecr get-login-password --region [AWS_REGION] | docker login --username AWS --password-stdin [ECR_URL]`
- Build your image: `docker build -t [ECR_URL]:latest .`
- Push your docker image to the registry: `docker push [ECR_URL]:latest`

### Setup AWS Permissions
@@ -79,99 +85,103 @@ Ideally, you should set up a policy that:
I find IAM to be rather tricky, but here are the steps that I followed:

- Select "Add User", select the "Programmatic Access" option
- (Optionally) Create a group for that user
- Authorize the group or user to pull images from ECR. Either:
  a. Use the existing AWS policy "AmazonEC2ContainerRegistryReadOnly", or
  b. Create a policy for that group or user with the following configuration:
    - Service: Elastic Container Registry
    - Access Level: List & Read
    - Resources: Select the specific ECR instance that you'll be using
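If you prefer option b, a minimal custom policy looks roughly like the sketch below. Note this is illustrative: the managed "AmazonEC2ContainerRegistryReadOnly" policy grants these actions plus a few more, `ecr:GetAuthorizationToken` only works with `Resource: "*"`, and you can scope the image-pull actions down to your specific repository ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```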

Once that's created, you'll want to copy and save the Access Key ID and Secret Access Key of the user for the next step.
I recommend storing these secrets in some kind of secret store, such as:
[Doppler](https://www.doppler.com/),
[Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault),
[AWS Secrets Manager](https://aws.amazon.com/secrets-manager/),
[1Password](https://1password.com/),
[LastPass](https://www.lastpass.com/)

### Deploy AWS Credentials to Kubernetes as a secret

Note: If you want to use Helm to create this secret automatically, you can skip this section.

You will need to create a secret in Kubernetes with the IAM user's credentials.
The secret can be created from the command line using `kubectl` as follows:

```sh
kubectl create secret -n ns-ecr-renew-demo generic ecr-renew-cred-demo \
  --from-literal=REGION=[AWS_REGION] \
  --from-literal=aws-access-key-id=[AWS_KEY_ID] \
  --from-literal=aws-secret-access-key=[AWS_SECRET]
```
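The same secret can be expressed declaratively. This is a sketch of the equivalent manifest; substitute real values for the placeholders before applying it:

```yaml
# Declarative equivalent of the kubectl command above (illustrative).
apiVersion: v1
kind: Secret
metadata:
  name: ecr-renew-cred-demo
  namespace: ns-ecr-renew-demo
type: Opaque
stringData:
  REGION: "[AWS_REGION]"
  aws-access-key-id: "[AWS_KEY_ID]"
  aws-secret-access-key: "[AWS_SECRET]"
```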

## Deploy to Kubernetes

There are two ways to deploy this tool, and you only need to use one of them:

- Helm chart
- Plain YAML files

### Deploy to Kubernetes with Helm 3

Add the repository:

```sh
helm repo add nabsul https://nabsul.github.io/helm
```

Deploy to your Kubernetes cluster with:

```sh
helm install k8s-ecr-login-renew nabsul/k8s-ecr-login-renew --set awsRegion=[REGION],awsAccessKeyId=[ACCESS_KEY_ID],awsSecretAccessKey=[SECRET_KEY]
```

Note: If you have already created a secret with your IAM credentials, you only need to provide a region parameter to Helm.

You can uninstall the tool with:

```sh
helm uninstall k8s-ecr-login-renew
```

### Deploy to Kubernetes with plain YAML

If you don't want to use Helm to manage installing this tool, you can use [`deploy.yaml`](https://github.com/nabsul/k8s-ecr-login-renew/blob/main/deploy.yaml) and `kubectl apply`.
Note that this file is generated from the Helm template by running `helm template .\chart --set forHelm=false,awsRegion=us-west-2 > deploy.yaml`.
You will likely need to review and edit this YAML file to fit your needs, and then you can deploy with:

```sh
kubectl apply -f deploy.yaml
```

You can also uninstall from your Kubernetes cluster with:

```sh
kubectl delete -f deploy.yaml
```

## Test the Cron Job

To check that the cron job is correctly configured, you can wait for it to run on schedule.
However, you can also manually trigger a job with the following command:

```sh
kubectl create job --from=cronjob/k8s-ecr-login-renew-cron k8s-ecr-login-renew-cron-manual-1
```

You can view the status and logs of the job with the following commands:

```sh
kubectl describe job k8s-ecr-login-renew-cron-manual-1
kubectl logs job/k8s-ecr-login-renew-cron-manual-1
```

### Deploying ECR Images

You should now be able to deploy images from your ECR registry to a pod.
Note that you will need to reference the Docker secret in your Pod definition by adding an `imagePullSecrets`
field pointing to the created Docker secret (named `k8s-ecr-login-renew-docker-secret` by default).

You can find more information about this here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
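For example, a minimal Pod that pulls from ECR might look like the sketch below. `[ECR_URL]` is a placeholder for your registry URI, and the secret name assumes the chart's default:

```yaml
# Minimal sketch of a Pod pulling a private ECR image via the managed secret.
apiVersion: v1
kind: Pod
metadata:
  name: ecr-image-pull-test
spec:
  imagePullSecrets:
    - name: k8s-ecr-login-renew-docker-secret
  containers:
    - name: app
      image: "[ECR_URL]:latest"
```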

### Running in a namespace other than the default namespace

19 changes: 19 additions & 0 deletions changelog.md
@@ -1,5 +1,24 @@
# Changelog

## V1.7.2 (2022-09-10) - Helm Charts!

- Added support for deploying via Helm chart. Thanks to:
- [devec0](https://github.com/devec0): v1.7.1
- [PawelLipski](https://github.com/PawelLipski): v1.7.1
- [xavidop](https://github.com/xavidop): v1.7.1
- [armenr](https://github.com/armenr): v1.7.1


## V1.7.1 (2022-06-05) - V1.7.0 for Real

I forgot to merge the change that stops using root user in the container. Thanks to [PawelLipski](https://github.com/PawelLipski) for spotting this.

## V1.7.0 (2022-06-05) - Security and Updates

- The job now runs as a user in the container instead of root (#30)
- Updated to latest version of Go
- Updated dependencies to latest versions

## v1.6 (2021-10-31) - Spooky Separators!

- Support multi-line and whitespace in Namespace list
5 changes: 5 additions & 0 deletions chart/.helmignore
@@ -0,0 +1,5 @@
.vs
.idea
k8s-ecr-login-renew.exe
k8s-ecr-login-renew
*~
16 changes: 16 additions & 0 deletions chart/Chart.yaml
@@ -0,0 +1,16 @@
apiVersion: v2
appVersion: 1.7.1
description: Deploys a cronJob which will renew ECR imagePullSecrets automatically
name: k8s-ecr-login-renew
version: 1.0.0
maintainers:
- name: Nabeel Sulieman
url: https://nabeel.dev
- name: James "ec0" Hebden
email: [email protected]
keywords:
- aws
- ecr
- imagePullSecrets
sources:
- https://github.com/nabsul/k8s-ecr-login-renew
13 changes: 13 additions & 0 deletions chart/templates/001-ServiceAccount.yaml
@@ -0,0 +1,13 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ required "A service account name is required" .Values.names.serviceAcount }}
namespace: {{ .Release.Namespace | default "default" }}
{{- if .Values.forHelm }}
labels:
app.kubernetes.io/name: {{ .Chart.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
28 changes: 28 additions & 0 deletions chart/templates/002-ClusterRole.yaml
@@ -0,0 +1,28 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ required "A cluster role name is required" .Values.names.clusterRole }}
{{- if .Values.forHelm }}
labels:
app.kubernetes.io/name: {{ .Chart.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
rules:
- apiGroups: [""]
resources:
- namespaces
verbs:
- list
- apiGroups: [""]
resources:
- secrets
- serviceaccounts
- serviceaccounts/token
verbs:
- 'delete'
- 'create'
- 'patch'
- 'get'
20 changes: 20 additions & 0 deletions chart/templates/003-ClusterRoleBinding.yaml
@@ -0,0 +1,20 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ required "A cluster role binding name is required" .Values.names.clusterRoleBinding }}
{{- if .Values.forHelm }}
labels:
app.kubernetes.io/name: {{ .Chart.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ required "A cluster role name is required" .Values.names.clusterRole }}
subjects:
- kind: ServiceAccount
name: {{ required "A service account name is required" .Values.names.serviceAcount }}
namespace: {{ .Release.Namespace | default "default" }}
19 changes: 19 additions & 0 deletions chart/templates/004-Secret.yaml
@@ -0,0 +1,19 @@
{{- if .Values.awsAccessKeyId }}
apiVersion: v1
kind: Secret
metadata:
name: {{ required "A secret name must be defined" .Values.aws.secretName }}
namespace: {{ .Release.Namespace | default "default" }}
{{- if .Values.forHelm }}
labels:
app.kubernetes.io/name: {{ .Chart.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
type: Opaque
stringData:
{{ required "Secret key for access key id must be defined" .Values.aws.secretKeys.accessKeyId }}: {{ required "Value for access key id must be defined" .Values.awsAccessKeyId }}
{{ required "Secret key for secret access key must be defined" .Values.aws.secretKeys.secretAccessKey }}: {{ required "Value for secret access key must be defined" .Values.awsSecretAccessKey }}
{{- end }}
