From ea879c36697f33410290feebe84c94161f80cc87 Mon Sep 17 00:00:00 2001 From: Aleksandar Grbic Date: Tue, 5 Nov 2024 16:31:40 +0100 Subject: [PATCH 1/6] saving work --- .gitignore | 2 + COMMON_COMMANDS.md | 3 + MOUNT_AND_FORMAT_THE_DRIVE.md | 134 +++++++++++++++++ RAID_1_SETUP.md | 91 ++++++++++++ README.md | 30 +--- SETTING_UP_ANSIBLE.md | 125 ++++++++++++++++ SETTING_UP_CLOUDFLARE_DNS.md | 100 +++++++++++++ SETTING_UP_CLOUDFLARE_SSL_VIA_API.md | 60 ++++++++ ansible/README.md | 3 + ansible/host_vars/rp_1.yml | 3 + ansible/host_vars/rp_2.yml | 2 + ansible/host_vars/rp_3.yml | 2 + ansible/host_vars/rp_4.yml | 2 + ansible/inventory.yml | 16 +++ ansible/library/my_test.py | 134 +++++++++++++++++ ansible/playbooks/apt-update.yml | 12 ++ ansible/playbooks/disable-bluetooth.yml | 50 +++++++ ansible/playbooks/disable-swap.yml | 55 +++++++ ansible/playbooks/disable-wifi.yml | 38 +++++ ansible/playbooks/enable-memory-groups.yml | 18 +++ ansible/playbooks/fan-control/fan-control.py | 33 +++++ ansible/playbooks/fan-control/fan-control.yml | 63 ++++++++ ansible/playbooks/join-worker-nodes.yml | 37 +++++ ansible/playbooks/partition-and-format.yml | 65 +++++++++ ansible/playbooks/setup-postgres.yml | 70 +++++++++ ansible/playbooks/setup-redis.yml | 41 ++++++ assets/diagrams/plan.drawio | 135 ++++++++++++++++++ .../3.expose-deployment.yaml | 7 +- 28 files changed, 1298 insertions(+), 33 deletions(-) create mode 100644 .gitignore create mode 100644 COMMON_COMMANDS.md create mode 100644 MOUNT_AND_FORMAT_THE_DRIVE.md create mode 100644 RAID_1_SETUP.md create mode 100644 SETTING_UP_ANSIBLE.md create mode 100644 SETTING_UP_CLOUDFLARE_DNS.md create mode 100644 SETTING_UP_CLOUDFLARE_SSL_VIA_API.md create mode 100644 ansible/README.md create mode 100644 ansible/host_vars/rp_1.yml create mode 100644 ansible/host_vars/rp_2.yml create mode 100644 ansible/host_vars/rp_3.yml create mode 100644 ansible/host_vars/rp_4.yml create mode 100644 ansible/inventory.yml create mode 100644 ansible/library/my_test.py create mode 100644 ansible/playbooks/apt-update.yml create mode 100644 ansible/playbooks/disable-bluetooth.yml create mode 100644 ansible/playbooks/disable-swap.yml create mode 100644 ansible/playbooks/disable-wifi.yml create mode 100644 ansible/playbooks/enable-memory-groups.yml create mode 100644 ansible/playbooks/fan-control/fan-control.py create mode 100644 ansible/playbooks/fan-control/fan-control.yml create mode 100644 ansible/playbooks/join-worker-nodes.yml create mode 100644 ansible/playbooks/partition-and-format.yml create mode 100644 ansible/playbooks/setup-postgres.yml create mode 100644 ansible/playbooks/setup-redis.yml create mode 100644 assets/diagrams/plan.drawio diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..d11bc1e --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ +ansible/secrets.yml +.TODO \ No newline at end of file diff --git a/COMMON_COMMANDS.md b/COMMON_COMMANDS.md new file mode 100644 index 0000000..6f7a54b --- /dev/null +++ b/COMMON_COMMANDS.md @@ -0,0 +1,3 @@ +df -h - Check mount points +utop - system activity +lsblk diff --git a/MOUNT_AND_FORMAT_THE_DRIVE.md b/MOUNT_AND_FORMAT_THE_DRIVE.md new file mode 100644 index 0000000..1dbadf6 --- /dev/null +++ b/MOUNT_AND_FORMAT_THE_DRIVE.md @@ -0,0 +1,134 @@ +To **reformat and recreate a partition** (in your case, **`sda1`**) using **`fdisk`** on Linux, you can follow the steps outlined below. These instructions will guide you through deleting the existing partition, creating a new one, and then formatting it properly. 
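Before deleting anything, it helps to double-check that `/dev/sda` really is the external drive you intend to wipe, since device names can change between boots. A quick read-only check (assuming the drive shows up as `/dev/sda`, as in the rest of this guide) might look like this:

```bash
# List the device with its size, filesystem, and mount points (read-only)
lsblk -f /dev/sda

# Print the current partition table without changing it
sudo fdisk -l /dev/sda
```

If the reported size or existing partitions do not match what you expect, stop and re-check the device name before continuing with the steps below.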
+ +Since you're working with an external device and the partition appears to be large (**931.5GB**), I assume it's an external hard drive or SSD attached via USB. + +### ⚠️ **Important!** Before proceeding: +1. **Backup Data**: Formatting and deleting the partition will erase all data on it, so ensure you have backed up any important data currently stored on **`sda1`**. +2. **Unmount the Partition**: If the partition is currently mounted, you’ll need to unmount it before working on it. + +--- + +### Steps to Format and Recreate **`sda1`** Using `fdisk`: + +#### **1. Unmount the Partition** +Before modifying the partition, unmount it if it's mounted: +```bash +sudo umount /dev/sda1 +``` + +#### **2. Launch `fdisk` to Edit the Partition Table** +Run `fdisk` for the target device (`/dev/sda` in your case): +```bash +sudo fdisk /dev/sda +``` + +This will start the interactive **`fdisk`** utility on the entire **`/dev/sda`** disk. + +#### **3. Delete the Existing Partition** +Once inside the `fdisk` tool, list the partitions to verify: +```bash +p +``` + +This should display your existing partition table, showing the **`sda1`** partition (`/dev/sda1`). + +To delete the existing partition **`sda1`**: +1. Press `d` (to delete a partition). +2. If there is only one partition (`sda1`), it will automatically choose `1`. Otherwise, you may be asked to specify the partition number (enter `1` to select **`sda1`**). + +Confirm that **`/dev/sda1`** has been deleted by pressing `p` again to view the partition table—it should now list no partitions. + +#### **4. Create a New Partition** +Now, create a new partition by doing the following: + +- Type `n` (to create a new partition). + - When asked for the partition type, press `p` to create a **primary partition**. + - When asked for the partition number, press `1` (to recreate it as **`sda1`**). + - Choose the **default starting sector** by just pressing `Enter` (this will typically start at sector 2048 if you're using a GPT or MBR partition scheme). + - You will be asked for the last sector — press `Enter` to choose the default and use the rest of the available space from the starting sector, effectively recreating a partition that spans the entire disk. + +#### **5. Set the Partition's Filesystem (Optional)** +If you're partitioning for normal use (e.g., formatting to **ext4**), you can skip this step. But if you want to set a specific partition type (like Linux filesystem (`83`)), you'll be prompted to choose it. By default, **`fdisk`** will set it to **`83` (Linux Filesystem)** for most Linux machines. + +To explicitly set it: +- Press `t` to change the partition type. +- Type `83` for Linux filesystem. + +#### **6. Write the Partition Table** +Once satisfied with the changes, write the new partition table to the disk by typing: +```bash +w +``` + +This will save the changes and exit `fdisk`. + +--- + +### 7. Format the New Partition (**ext4**) + +Now that **`sda1`** has been recreated, you will want to format it with a filesystem. In your case, I recommend **ext4** unless you have a specific reason to use another filesystem type. + +To format **`/dev/sda1`** as **ext4**: +```bash +sudo mkfs.ext4 /dev/sda1 +``` + +This will begin formatting the newly created partition `sda1` as an **ext4** volume. The process will take some time, depending on the size of the partition. + +--- + +### 8. Mount the New Partition & Check + +Once the partition is formatted, you can mount it back for use: + +1. 
Create a mount point (if it doesn’t exist yet): + ```bash + sudo mkdir -p /mnt/mydisk + ``` + +2. Mount the partition: + ```bash + sudo mount /dev/sda1 /mnt/mydisk + ``` + +3. Verify the mount: + ```bash + df -h + ``` + +You should now see the newly mounted **`sda1`** partition, and it should be available in **`/mnt/mydisk`**. + +--- + +### 9. Add to `/etc/fstab` for Persistent Mounting (Optional) + +If you want this disk to mount automatically at boot, add an entry to **`/etc/fstab`**: + +1. Find the **UUID** of the partition: + ```bash + sudo blkid /dev/sda1 + ``` + + You will see an output that looks something like this: + ```bash + /dev/sda1: UUID="xxxx-xxxx-xxxx-xxxx" TYPE="ext4" + ``` + +2. Open `/etc/fstab` in an editor: + ```bash + sudo nano /etc/fstab + ``` + +3. Add the following line to the end of the file to make the partition auto-mount at `/mnt/mydisk` on boot: + ```bash + UUID=xxxx-xxxx-xxxx-xxxx /mnt/mydisk ext4 defaults 0 0 + ``` + +4. Save (`Ctrl + O`) and exit (`Ctrl + X`). + +--- + +### **Conclusion:** +You’ve now successfully reformatted and recreated **`sda1`** using `fdisk` and formatted it as **ext4**. This partition is mounted and ready to store data. By adding the entry to **`/etc/fstab`**, it will automatically mount on boot. + +Let me know if you run into any issues or need clarification! \ No newline at end of file diff --git a/RAID_1_SETUP.md b/RAID_1_SETUP.md new file mode 100644 index 0000000..b09f255 --- /dev/null +++ b/RAID_1_SETUP.md @@ -0,0 +1,91 @@ +Using RAID (Redundant Array of Independent Disks) can help you achieve redundancy for data protection. For your requirements, you would use RAID 1, which mirrors data across both disks. + +Here's how you can configure RAID 1 on your Raspberry Pi using `mdadm`, a software RAID utility: + +### **1. Install mdadm** +First, you need to install `mdadm` if it's not already present. + +```bash +sudo apt update +sudo apt install mdadm +``` + +### **2. Create the RAID Array** +Assuming your disks are `/dev/sda1` and `/dev/sdb1`, you can create a RAID 1 array like this: + +```bash +sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 +``` + +- `/dev/md0` is the virtual disk for the RAID array. +- `--level=1` specifies RAID 1. +- `--raid-devices=2` means you'll be using two devices for the array. + +### **3. Verify RAID Array Creation** +Check the RAID status with the following command: + +```bash +cat /proc/mdstat +``` + +This should show the RAID array's status, indicating that it's syncing, initializing, or active. + +### **4. Create a Filesystem on the RAID Array** +Format the new RAID array with a filesystem, for instance, ext4: + +```bash +sudo mkfs.ext4 /dev/md0 +``` + +### **5. Mount the RAID Array** +Create a directory to mount the RAID array and mount it: + +```bash +sudo mkdir -p /mnt/raid1 +sudo mount /dev/md0 /mnt/raid1 +``` + +### **6. Configure Auto-Mounting on Boot** +Edit the `/etc/fstab` file to auto-mount the RAID array on boot: + +1. **Get the UUID of RAID Array:** + ```bash + sudo blkid /dev/md0 + ``` + +2. **Add to fstab:** + - Open fstab: + ```bash + sudo nano /etc/fstab + ``` + - Add the entry (replace `UUID=xxxx` with your actual UUID from the blkid command): + ``` + UUID=xxxx /mnt/raid1 ext4 defaults 0 0 + ``` + +### **7. Postgres Configuration** +You'll need to point PostgreSQL to use this RAID array for its data directory. Here's a basic outline: + +1. **Stop PostgreSQL Service:** + ```bash + sudo systemctl stop postgresql + ``` + +2. 
**Move Data Directory:** + ```bash + sudo rsync -av /var/lib/postgresql /mnt/raid1 + ``` + +3. **Adjust PostgreSQL Configuration:** + Open the PostgreSQL configuration file, usually located at `/etc/postgresql/*/main/postgresql.conf`, and update the data directory to `/mnt/raid1/postgresql`. + + ```bash + sudo nano /etc/postgresql/*/main/postgresql.conf + ``` + +4. **Start PostgreSQL Service:** + ```bash + sudo systemctl start postgresql + ``` + +With these steps, your PostgreSQL database should be storing its data on the RAID 1 array, ensuring redundancy. Make sure to test and validate the setup to ensure everything is working correctly. Let me know if you need further assistance! \ No newline at end of file diff --git a/README.md b/README.md index d0f701e..d747307 100644 --- a/README.md +++ b/README.md @@ -225,36 +225,9 @@ sudo reboot - Locate the `Leases` tab and identify the MAC addresses of your Raspberry Pi units. - Click on the entry for each Raspberry Pi and change it from "dynamic" to "static". -##### Rasperry Pi - -SSH into each Rasperry Pi to configure static IP by editing the `dhcpcd.conf` file: - -```bash -sudo vi /etc/dhcpcd.conf -``` - -Add the following, adapting to your network configuration: - -```bash -interface eth0 -static ip_address=192.168.1.XX/24 -static routers=192.168.1.1 -static domain_name_servers=192.168.1.1 -``` - -* static `ip_address`: The static IP you want to assign to the Raspberry Pi. -* static `router`: The IP address of the default gateway (usually your router). -* static `domain_name_servers`: The IP address of the DNS server (can be the same as the gateway). - -Save the file and exit, then restart the networking service: - -```bash -sudo service dhcpcd restart -``` - ## Set SSH Aliases -Once you have assigned static IPs to your Raspberry Pis, you can simplify the SSH process by setting up SSH aliases. Here's how to do it: +Once you have assigned static IPs on your router, you can simplify the SSH process by setting up SSH aliases. Here's how to do it: 1. **Open the SSH config file on your local machine:** @@ -549,6 +522,7 @@ kubectl apply -f service.yaml 4. **Verify Using Port-Forward**: ```bash +# This is only needed if service type is ClusterIP kubectl port-forward deployment/hello-world 8081:80 --namespace=my-apps ``` diff --git a/SETTING_UP_ANSIBLE.md b/SETTING_UP_ANSIBLE.md new file mode 100644 index 0000000..5711c32 --- /dev/null +++ b/SETTING_UP_ANSIBLE.md @@ -0,0 +1,125 @@ +# Starting with Ansible + +To get started with Ansible, check out the official [Getting Started](https://docs.ansible.com/ansible/latest/getting_started/index.html) guide. + +## Installing Ansible + +In order to install Ansible, in case you don't already have it, you will need to install Python. + +After that run + +```python +pip install ansible +``` + +You might get a warning like + +```bash +ansible-doc, ansible-galaxy, ansible-inventory, ansible-playbook, ansible-pull and ansible-vault are installed in '/home/YOUR_USER/.local/bin' which is not on PATH +``` + +To add `/home/YOUR_USE/.local/bin` to your PATH, follow these steps: + +1. **Open your shell profile file** (e.g., `.bashrc`, `.zshrc`, or `.profile`): + ```bash + nano ~/.bashrc + ``` + Or, if you’re using `zsh`, open `.zshrc`: + ```bash + nano ~/.zshrc + ``` + +2. **Add the directory to the PATH** by appending the following line at the end of the file: + ```bash + export PATH="$HOME/.local/bin:$PATH" + ``` + +3. 
**Save and close the file**, then reload the profile with: + ```bash + source ~/.bashrc + ``` + Or, for `zsh`: + ```bash + source ~/.zshrc + ``` + +After this, the directory `/home/YOUR_USER/.local/bin` will be in your PATH, and you should be able to run the Ansible commands without seeing the warning. + +### Create a project folder + +```bash +mkdir ansible_quickstart && cd ansible_quickstart +``` + + +### TODO + +- Create an inventory +- Create a playbook +- Explain relations between Control Node, Mannaged Nodes, Playbook, Tasks, Roles, etc + + +## Setting up Ansible Vault + +Ansible Vault is a great way to securely store sensitive information, like IP addresses, passwords, and other secrets. Here’s a step-by-step guide to setting it up and using it for sensitive inventory data: + +### Step 1: Initialize Ansible Vault +1. To create a new encrypted file, run: + +```bash +ansible-vault create secrets.yml +``` + +2. You’ll be prompted to set a password. This password will be required to access the encrypted file. + +3. Inside `secrets.yml`, you can store sensitive data in YAML format, such as IP addresses or inventory details. Here’s an example format: + +```yaml + all: + hosts: + raspberry_pi_1: + ansible_host: 192.168.1.10 + raspberry_pi_2: + ansible_host: 192.168.1.11 + raspberry_pi_3: + ansible_host: 192.168.1.12 + raspberry_pi_4: + ansible_host: 192.168.1.13 + vars: + ansible_user: pi + ansible_password: "your_password_here" +``` + +### Step 2: Encrypt the Existing Inventory File (Optional) +If you already have an inventory file and want to encrypt it, run: + ```bash + ansible-vault encrypt inventory.yml + ``` + +### Step 3: Use the Encrypted Inventory File +1. When running a playbook, provide the vault password with `--ask-vault-pass`: + ```bash + ansible-playbook -i secrets.yml --ask-vault-pass playbook.yml + ``` + +2. Alternatively, create a file to store the vault password (for automation purposes): + - Save the password in a file, e.g., `vault_pass.txt`, and protect it with permissions: + ```bash + chmod 600 vault_pass.txt + ``` + - Run the playbook with the password file: + ```bash + ansible-playbook -i secrets.yml --vault-password-file vault_pass.txt playbook.yml + ``` + +### Step 4: Editing the Encrypted File +To make changes to the encrypted file, use: + ```bash + ansible-vault edit secrets.yml + ``` + +### Additional Tips +- **For multiple environments**: You can create separate encrypted inventory files (e.g., `prod_secrets.yml`, `dev_secrets.yml`) to manage environments. +- **Organizing secrets**: Use `group_vars` and `host_vars` directories for organizing secrets by groups or hosts, and encrypt files within those directories as needed. + +This setup will keep your IP addresses, credentials, and other sensitive details secure while enabling Ansible to use them when needed. \ No newline at end of file diff --git a/SETTING_UP_CLOUDFLARE_DNS.md b/SETTING_UP_CLOUDFLARE_DNS.md new file mode 100644 index 0000000..286c6b8 --- /dev/null +++ b/SETTING_UP_CLOUDFLARE_DNS.md @@ -0,0 +1,100 @@ +To dynamically create a DNS record (such as a CNAME or A record) in Cloudflare when provisioning an API in a Kubernetes (K3s) cluster, you can use **ExternalDNS** along with Cloudflare's API. ExternalDNS is a tool designed to manage DNS records dynamically for Kubernetes resources like services and ingresses. + +### Setup Overview +1. **Install ExternalDNS**: Configure ExternalDNS in your K3s cluster to watch for services or ingress resources and create/update DNS records in Cloudflare. +2. 
**Configure Cloudflare API Access**: Provide the necessary permissions and API tokens to ExternalDNS to interact with Cloudflare’s DNS. +3. **Create Kubernetes Resources**: Set up your API service/ingress with annotations that ExternalDNS will detect to create the necessary DNS records. + +### Step 1: Configure Cloudflare API Token +Create an API Token in Cloudflare with permissions to manage DNS records: +1. Go to [Cloudflare API Tokens](https://dash.cloudflare.com/profile/api-tokens). +2. Create a custom token with permissions: + - Zone → DNS → Edit + - Zone → Zone → Read +3. Copy the generated API token for later use. + +### Step 2: Install ExternalDNS in Your K3s Cluster +You can deploy ExternalDNS with Helm or using a Kubernetes YAML manifest. + +#### Using Helm: +If you have Helm installed, you can deploy ExternalDNS like this: + +```bash +helm repo add bitnami https://charts.bitnami.com/bitnami +helm install externaldns bitnami/external-dns \ + --set provider=cloudflare \ + --set cloudflare.apiToken="YOUR_CLOUDFLARE_API_TOKEN" \ + --set txtOwnerId="my-k3s-cluster" \ + --set policy=sync \ + --set source=service +``` + +Replace `YOUR_CLOUDFLARE_API_TOKEN` with your actual API token from Cloudflare. + +#### Using a YAML Manifest: +If you prefer, you can use a YAML configuration file. Here’s a basic example for deploying ExternalDNS: + +```yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + name: external-dns + namespace: default + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: external-dns + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app: external-dns + template: + metadata: + labels: + app: external-dns + spec: + serviceAccountName: external-dns + containers: + - name: external-dns + image: registry.k8s.io/external-dns/external-dns:v0.12.2 + args: + - --source=service + - --provider=cloudflare + - --cloudflare-proxied + - --cloudflare-api-token=YOUR_CLOUDFLARE_API_TOKEN + - --txt-owner-id=my-k3s-cluster + env: + - name: CF_API_TOKEN + value: "YOUR_CLOUDFLARE_API_TOKEN" +``` + +Replace `YOUR_CLOUDFLARE_API_TOKEN` with the Cloudflare API token you created earlier. + +### Step 3: Annotate Services or Ingresses +To automatically create a DNS record for your API when provisioning, annotate the Kubernetes Service or Ingress resource. For example, to create a CNAME record for a Service, you might define the following YAML: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-api-service + annotations: + external-dns.alpha.kubernetes.io/hostname: "api.example.com" # Replace with your desired hostname +spec: + selector: + app: my-api + ports: + - protocol: TCP + port: 80 + targetPort: 8080 + type: LoadBalancer +``` + +### Step 4: Verify the DNS Record Creation +ExternalDNS will detect the annotated service or ingress and automatically create a DNS record in Cloudflare using the specified hostname. You can check Cloudflare’s dashboard or use a DNS lookup tool to verify the new DNS entry. + +This setup will dynamically manage DNS records in Cloudflare, creating and updating them based on changes in your Kubernetes cluster. \ No newline at end of file diff --git a/SETTING_UP_CLOUDFLARE_SSL_VIA_API.md b/SETTING_UP_CLOUDFLARE_SSL_VIA_API.md new file mode 100644 index 0000000..2e824c9 --- /dev/null +++ b/SETTING_UP_CLOUDFLARE_SSL_VIA_API.md @@ -0,0 +1,60 @@ +To enable **Full (Strict) SSL** for a DNS record in Cloudflare using Cloudflare's API, you can follow these steps. 
This involves making authenticated API requests to set SSL/TLS settings for a specified domain. + +### Prerequisites +1. **API Token**: You’ll need a Cloudflare API Token with the necessary permissions for DNS and SSL settings. +2. **Zone ID**: The unique identifier for the domain (zone) you’re working with in Cloudflare. + +### Step 1: Retrieve Zone ID (if not already known) +If you don’t already have the Zone ID, you can get it with a `GET` request. + +```bash +curl -X GET "https://api.cloudflare.com/client/v4/zones?name=yourdomain.com" \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" +``` + +This will return a list of zones. Find the `id` associated with your domain (`yourdomain.com`) and save it for the next steps. + +### Step 2: Enable Full (Strict) SSL + +Now, update the SSL/TLS mode for the specified domain to "Full (Strict)" mode. + +```bash +curl -X PATCH "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/settings/ssl" \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" \ + --data '{"value":"strict"}' +``` + +### Explanation +- `https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/settings/ssl`: This endpoint allows you to modify the SSL settings for the zone. +- Replace `YOUR_ZONE_ID` with the actual Zone ID. +- Replace `YOUR_API_TOKEN` with your API Token. + +### Step 3: Verify SSL Mode (Optional) +To verify the SSL mode, you can retrieve the current SSL setting for the zone: + +```bash +curl -X GET "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/settings/ssl" \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" +``` + +This should confirm that the `value` is set to `"strict"`, meaning **Full (Strict)** SSL is enabled. + +### Example Response for Verification +A successful response will look like this: +```json +{ + "result": { + "id": "ssl", + "value": "strict", + ... + }, + "success": true, + "errors": [], + "messages": [] +} +``` + +This setup will enforce Full (Strict) SSL for the specified domain in Cloudflare, meaning all traffic must be end-to-end encrypted with a valid certificate on both Cloudflare’s and your server's side. 
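If you want to script this, the two API calls above can be combined into a small helper. This is only a sketch built from the same endpoints shown in this guide; the domain is a placeholder, the token is assumed to be exported as the environment variable `CF_API_TOKEN`, and `jq` is assumed to be installed for JSON parsing:

```bash
#!/usr/bin/env bash
# Sketch: look up the Zone ID for a domain, then switch its SSL mode to Full (Strict).
set -euo pipefail

DOMAIN="yourdomain.com"   # replace with your domain

# Step 1: fetch the Zone ID for the domain
ZONE_ID=$(curl -s "https://api.cloudflare.com/client/v4/zones?name=${DOMAIN}" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" | jq -r '.result[0].id')
echo "Zone ID for ${DOMAIN}: ${ZONE_ID}"

# Step 2: set the zone's SSL/TLS mode to "strict" (Full (Strict))
curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/settings/ssl" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"value":"strict"}' | jq '{success: .success, ssl_mode: .result.value}'
```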
\ No newline at end of file diff --git a/ansible/README.md b/ansible/README.md new file mode 100644 index 0000000..7599c68 --- /dev/null +++ b/ansible/README.md @@ -0,0 +1,3 @@ +# Programmer Network Ansible + +[Popular Ansible Modules](https://opensource.com/article/19/9/must-know-ansible-modules) \ No newline at end of file diff --git a/ansible/host_vars/rp_1.yml b/ansible/host_vars/rp_1.yml new file mode 100644 index 0000000..de99138 --- /dev/null +++ b/ansible/host_vars/rp_1.yml @@ -0,0 +1,3 @@ +ansible_host: 192.168.88.242 +ansible_user: aleksandar +k3s_master_node: true \ No newline at end of file diff --git a/ansible/host_vars/rp_2.yml b/ansible/host_vars/rp_2.yml new file mode 100644 index 0000000..15f17ca --- /dev/null +++ b/ansible/host_vars/rp_2.yml @@ -0,0 +1,2 @@ +ansible_host: 192.168.88.243 +ansible_user: aleksandar \ No newline at end of file diff --git a/ansible/host_vars/rp_3.yml b/ansible/host_vars/rp_3.yml new file mode 100644 index 0000000..43084db --- /dev/null +++ b/ansible/host_vars/rp_3.yml @@ -0,0 +1,2 @@ +ansible_host: 192.168.88.241 +ansible_user: aleksandar \ No newline at end of file diff --git a/ansible/host_vars/rp_4.yml b/ansible/host_vars/rp_4.yml new file mode 100644 index 0000000..d3aabc7 --- /dev/null +++ b/ansible/host_vars/rp_4.yml @@ -0,0 +1,2 @@ +ansible_host: 192.168.88.240 +ansible_user: aleksandar \ No newline at end of file diff --git a/ansible/inventory.yml b/ansible/inventory.yml new file mode 100644 index 0000000..77b2a0a --- /dev/null +++ b/ansible/inventory.yml @@ -0,0 +1,16 @@ +all_nodes: + hosts: + rp_1: + rp_2: + rp_3: + rp_4: + +worker_nodes: # A group for all worker nodes + hosts: + rp_2: + rp_3: + rp_4: + +postgres_and_redis: # A group for the PostgreSQL and Redis servers + hosts: + rp_2: \ No newline at end of file diff --git a/ansible/library/my_test.py b/ansible/library/my_test.py new file mode 100644 index 0000000..c925129 --- /dev/null +++ b/ansible/library/my_test.py @@ -0,0 +1,134 @@ +#!/usr/bin/python + +# Copyright: (c) 2018, Terry Jones +# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) +from __future__ import (absolute_import, division, print_function) +__metaclass__ = type + +DOCUMENTATION = r''' +--- +module: my_test + +short_description: This is my test module + +# If this is part of a collection, you need to use semantic versioning, +# i.e. the version is of the form "2.5.0" and not "2.4". +version_added: "1.0.0" + +description: This is my longer description explaining my test module. + +options: + name: + description: This is the message to send to the test module. + required: true + type: str + new: + description: + - Control to demo if the result of this module is changed or not. + - Parameter description can be a list as well. 
+ required: false + type: bool +# Specify this value according to your collection +# in format of namespace.collection.doc_fragment_name +# extends_documentation_fragment: +# - my_namespace.my_collection.my_doc_fragment_name + +author: + - Your Name (@yourGitHubHandle) +''' + +EXAMPLES = r''' +# Pass in a message +- name: Test with a message + my_namespace.my_collection.my_test: + name: hello world + +# pass in a message and have changed true +- name: Test with a message and changed output + my_namespace.my_collection.my_test: + name: hello world + new: true + +# fail the module +- name: Test failure of the module + my_namespace.my_collection.my_test: + name: fail me +''' + +RETURN = r''' +# These are examples of possible return values, and in general should use other names for return values. +original_message: + description: The original name param that was passed in. + type: str + returned: always + sample: 'hello world' +message: + description: The output message that the test module generates. + type: str + returned: always + sample: 'goodbye' +''' + +from ansible.module_utils.basic import AnsibleModule + + +def run_module(): + # define available arguments/parameters a user can pass to the module + module_args = dict( + name=dict(type='str', required=True), + new=dict(type='bool', required=False, default=False) + ) + + # seed the result dict in the object + # we primarily care about changed and state + # changed is if this module effectively modified the target + # state will include any data that you want your module to pass back + # for consumption, for example, in a subsequent task + result = dict( + changed=False, + original_message='', + message='' + ) + + # the AnsibleModule object will be our abstraction working with Ansible + # this includes instantiation, a couple of common attr would be the + # args/params passed to the execution, as well as if the module + # supports check mode + module = AnsibleModule( + argument_spec=module_args, + supports_check_mode=True + ) + + # if the user is working with this module in only check mode we do not + # want to make any changes to the environment, just return the current + # state with no modifications + if module.check_mode: + module.exit_json(**result) + + # manipulate or modify the state as needed (this is going to be the + # part where your module will do what it needs to do) + result['original_message'] = module.params['name'] + result['message'] = 'goodbye' + + # use whatever logic you need to determine whether or not this module + # made any modifications to your target + if module.params['new']: + result['changed'] = True + + # during the execution of the module, if there is an exception or a + # conditional state that effectively causes a failure, run + # AnsibleModule.fail_json() to pass in the message and the result + if module.params['name'] == 'fail me': + module.fail_json(msg='You requested this to fail', **result) + + # in the event of a successful module execution, you will want to + # simple AnsibleModule.exit_json(), passing the key/value results + module.exit_json(**result) + + +def main(): + run_module() + + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/ansible/playbooks/apt-update.yml b/ansible/playbooks/apt-update.yml new file mode 100644 index 0000000..7f86334 --- /dev/null +++ b/ansible/playbooks/apt-update.yml @@ -0,0 +1,12 @@ +--- +- name: Update all Raspberry Pi hosts + hosts: all_nodes + become: yes # Enables privilege escalation for a task or playbook, allowing it to run as 
another user (by default, root). + tasks: + - name: Update package cache + ansible.builtin.apt: + update_cache: yes + + - name: Upgrade all packages + ansible.builtin.apt: + upgrade: dist diff --git a/ansible/playbooks/disable-bluetooth.yml b/ansible/playbooks/disable-bluetooth.yml new file mode 100644 index 0000000..714efd5 --- /dev/null +++ b/ansible/playbooks/disable-bluetooth.yml @@ -0,0 +1,50 @@ +--- +- name: Disable and turn off Bluetooth on Raspberry Pi + hosts: all_nodes + become: yes + + tasks: + + # 1. Stop the Bluetooth service immediately + - name: Stop Bluetooth service + systemd: + name: bluetooth + state: stopped + enabled: false + + # 2. Disable Bluetooth modules in the configuration + - name: Disable Bluetooth by blacklisting the module + ansible.builtin.lineinfile: + path: /etc/modprobe.d/raspi-blacklist.conf # Create or modify blacklist file + line: "blacklist btbcm" + create: yes + state: present + + - name: Add blacklist for the hci_uart module (Raspberry Pi specific Bluetooth module) + ansible.builtin.lineinfile: + path: /etc/modprobe.d/raspi-blacklist.conf + line: "blacklist hci_uart" + create: yes + state: present + + # 3. Disable Bluetooth services in system configuration (optional) + - name: Disable Bluetooth in /boot/config.txt (Raspberry Pi specific) + ansible.builtin.lineinfile: + path: /boot/config.txt + regexp: "^#?dtoverlay=disable-bt" + line: "dtoverlay=disable-bt" + state: present + + - name: Ensure no Bluetooth devices can wake up the Raspberry Pi + ansible.builtin.lineinfile: + path: /boot/config.txt + regexp: "^#?dtoverlay=disable-bt" + line: "dtoverlay=disable-bt" + state: present + + # 4. Reboot the system to apply the changes (recommended) + - name: Reboot the system to apply the disabling of Bluetooth modules + ansible.builtin.reboot: + msg: "Rebooting to fully apply Bluetooth disable changes." 
+ connect_timeout: 5 + reboot_timeout: 300 diff --git a/ansible/playbooks/disable-swap.yml b/ansible/playbooks/disable-swap.yml new file mode 100644 index 0000000..13efbaf --- /dev/null +++ b/ansible/playbooks/disable-swap.yml @@ -0,0 +1,55 @@ +--- +- name: Disable swap temporarily and configure permanently + hosts: all_nodes # Target all nodes by default + become: yes # Ensure the tasks are executed with sudo privileges + + tasks: + + # Step 1: Turn off swap temporarily (disable it for the current session) + - name: Disable swap temporarily + ansible.builtin.command: swapoff -a + ignore_errors: true # Continue even if swap is already off + + # Step 2: Set CONF_SWAPSIZE to 0 in /etc/dphys-swapfile to disable swap permanently + - name: Set CONF_SWAPSIZE to 0 in /etc/dphys-swapfile + ansible.builtin.lineinfile: + path: /etc/dphys-swapfile + regexp: '^CONF_SWAPSIZE=' + line: 'CONF_SWAPSIZE=0' + state: present # Ensure the line is present + + # Step 3: Delete the existing swap file as it is no longer needed + - name: Remove existing /var/swap file + ansible.builtin.file: + path: /var/swap + state: absent # Remove the file if it exists + + # Step 4: Stop the dphys-swapfile service immediately + - name: Stop dphys-swapfile service + ansible.builtin.service: + name: dphys-swapfile + state: stopped + + # Step 5: Disable dphys-swapfile service to prevent it from running on boot + - name: Disable dphys-swapfile service + ansible.builtin.service: + name: dphys-swapfile + enabled: no + + # Step 6: Verify swap is turned off and the removal has been successful + - name: Verify swap is turned off + ansible.builtin.command: free -m + register: memory_status + changed_when: false # This won't change any system state, just checking command output + + # Step 7: Display the memory status to confirm swap is turned off + - name: Display memory status (to verify swap is disabled) + ansible.builtin.debug: + var: memory_status.stdout_lines + + # Step 8: Reboot the machine to apply changes fully + - name: Reboot the machines to complete swap disabling + ansible.builtin.reboot: + reboot_timeout: 600 # Give the node 10 minutes to reboot and come back online + msg: "Rebooting the node to apply permanent swap configuration changes" + pre_reboot_delay: 5 # Delay 5 seconds before issuing the reboot command diff --git a/ansible/playbooks/disable-wifi.yml b/ansible/playbooks/disable-wifi.yml new file mode 100644 index 0000000..b562c0f --- /dev/null +++ b/ansible/playbooks/disable-wifi.yml @@ -0,0 +1,38 @@ +--- +- name: Disable Wi-Fi on Raspberry Pi + hosts: all_nodes + become: true + tasks: + - name: Disable Wi-Fi network in wpa_supplicant.conf + lineinfile: + path: /etc/wpa_supplicant/wpa_supplicant.conf + line: | + network={ + ssid="" + key_mgmt=NONE + } + state: present + + - name: Bring down the wlan0 interface + command: ifconfig wlan0 down + ignore_errors: true # Ignore errors if wlan0 does not exist + + - name: Block Wi-Fi using rfkill + command: rfkill block wifi + ignore_errors: true # Ignore errors if rfkill is unavailable + + - name: Create raspi-blacklist.conf if it does not exist + file: + path: /etc/modprobe.d/raspi-blacklist.conf + state: touch + + - name: Blacklist Wi-Fi module in raspi-blacklist.conf + lineinfile: + path: /etc/modprobe.d/raspi-blacklist.conf + line: "blacklist brcmfmac" + state: present + + - name: Reboot the Raspberry Pi to apply changes + reboot: + msg: "Rebooting to apply Wi-Fi disable settings." 
+ connect_timeout: 5 diff --git a/ansible/playbooks/enable-memory-groups.yml b/ansible/playbooks/enable-memory-groups.yml new file mode 100644 index 0000000..bbf4159 --- /dev/null +++ b/ansible/playbooks/enable-memory-groups.yml @@ -0,0 +1,18 @@ +--- +- name: Enable Memory Cgroups on Raspberry Pi + hosts: all_nodes + become: true + tasks: + - name: Ensure memory cgroups are enabled in /boot/firmware/cmdline.txt + lineinfile: + path: /boot/firmware/cmdline.txt + regexp: '(^.*$)' # Match the entire existing line (everything in the file) + line: '\1 cgroup_memory=1 cgroup_enable=memory' # Append these parameters to the matched line + backrefs: yes # Use backreference to ensure existing content is preserved + notify: Reboot Raspberry Pi + + handlers: + - name: Reboot Raspberry Pi + reboot: + msg: "Rebooting to enable memory cgroups." + connect_timeout: 5 diff --git a/ansible/playbooks/fan-control/fan-control.py b/ansible/playbooks/fan-control/fan-control.py new file mode 100644 index 0000000..8b4a331 --- /dev/null +++ b/ansible/playbooks/fan-control/fan-control.py @@ -0,0 +1,33 @@ +import time +from gpiozero import OutputDevice +import psutil + +FAN_PIN = 18 +TEMP_ON = 60 +TEMP_OFF = 50 + +fan = OutputDevice(FAN_PIN) + +def get_cpu_temperature(): + temp = psutil.sensors_temperatures()['cpu_thermal'][0].current + return temp + +def control_fan(): + current_temp = get_cpu_temperature() + if current_temp >= TEMP_ON: + if not fan.value: + print(f"Temperature is {current_temp}°C — Turning fan ON") + fan.on() + elif current_temp <= TEMP_OFF: + if fan.value: + print(f"Temperature is {current_temp}°C — Turning fan OFF") + fan.off() + +if __name__ == '__main__': + try: + while True: + control_fan() + time.sleep(5) + except KeyboardInterrupt: + fan.off() + print("Fan control stopped.") diff --git a/ansible/playbooks/fan-control/fan-control.yml b/ansible/playbooks/fan-control/fan-control.yml new file mode 100644 index 0000000..6bfc271 --- /dev/null +++ b/ansible/playbooks/fan-control/fan-control.yml @@ -0,0 +1,63 @@ +--- +- name: Setup fan control on Raspberry Pis + hosts: rpi-cluster + become: yes + gather_facts: yes + + tasks: + - name: Ensure Python3 and required packages are installed + apt: + name: + - python3 + - python3-gpiozero + - python3-psutil + state: present + update_cache: yes + + - name: Create directory for fan control script + file: + path: /home/{{ ansible_user }}/fan_control + state: directory + mode: '0755' + owner: "{{ ansible_user }}" + group: "{{ ansible_user }}" + + - name: Deploy fan control Python script + copy: + src: fan_control.py + dest: /home/{{ ansible_user }}/fan_control/fan_control.py + mode: '0755' + owner: "{{ ansible_user }}" + group: "{{ ansible_user }}" + + - name: Create systemd service file for fan control + copy: + content: | + [Unit] + Description=Fan Control Service + After=multi-user.target + + [Service] + ExecStart=/usr/bin/python3 /home/{{ ansible_user }}/fan_control/fan_control.py + Restart=always + User={{ ansible_user }} + + [Install] + WantedBy=multi-user.target + dest: /etc/systemd/system/fan_control.service + mode: '0644' + + - name: Reload systemd to apply new service + systemd: + daemon_reload: true + + - name: Enable and start fan control service + systemd: + name: fan_control.service + enabled: yes + state: started + + - name: Ensure fan control service is running + systemd: + name: fan_control.service + state: started diff --git a/ansible/playbooks/join-worker-nodes.yml b/ansible/playbooks/join-worker-nodes.yml new file mode 100644 index 
0000000..fb5bef6 --- /dev/null +++ b/ansible/playbooks/join-worker-nodes.yml @@ -0,0 +1,37 @@ +--- +- name: Join Worker Nodes to K3s Cluster + hosts: all_nodes + become: true + vars: + k3s_token: "" + # Identify your master node + k3s_master_node: rp_1 + + tasks: + + - name: Retrieve join token from the master node + shell: cat /var/lib/rancher/k3s/server/token + register: join_token + delegate_to: "{{ k3s_master_node }}" + run_once: true # Retrieve the token only once on the master node + + - name: Set K3S_TOKEN variable with the join token + set_fact: + k3s_token: "{{ join_token.stdout }}" # Directly access stdout of token retrieval + + - name: Install K3s and join the cluster + shell: | + curl -sfL https://get.k3s.io | K3S_URL=https://192.168.88.242:6443 K3S_TOKEN={{ k3s_token }} sh - + args: + executable: /bin/bash + + - name: Verify that the node has joined the cluster + command: kubectl get nodes + register: node_status + retries: 5 + delay: 10 + until: node_status.stdout is search(ansible_hostname) + + - name: Show the status of the nodes + debug: + var: node_status.stdout diff --git a/ansible/playbooks/partition-and-format.yml b/ansible/playbooks/partition-and-format.yml new file mode 100644 index 0000000..7ef2bf0 --- /dev/null +++ b/ansible/playbooks/partition-and-format.yml @@ -0,0 +1,65 @@ +# Use - ansible-playbook -e "disk_device=/dev/sda mount_point=/mnt/storage filesystem_type=ext4" -l rp_2 -i ansible/inventory.yml ansible/playbooks/partition-and-format.yml +--- +- hosts: all + become: yes + vars: + disk_device: "{{ disk_device }}" # The disk device (e.g., /dev/sda) + mount_point: "{{ mount_point }}" # The mount point (e.g., /mnt/mydisk) + filesystem_type: "{{ filesystem_type }}" # File system type (e.g., ext4) + + tasks: + + # 1. Unmount any existing partition associated with the disk (ignore errors if unmounted) + - name: Unmount any existing partitions if mounted + ansible.builtin.command: + cmd: "umount {{ disk_device }}1" # Using the default partition number 1 for the single partition + ignore_errors: true # Avoid fail if it's already unmounted + + # 2. Recreate partition table with a single partition using fdisk + - name: Recreate partition table and create new single partition using fdisk + ansible.builtin.shell: | + (echo o; echo n; echo p; echo 1; echo ; echo; echo w) | fdisk {{ disk_device }} + args: + warn: false + + # 3. Create and format the partition using the defined filesystem (e.g., ext4) + - name: Create ext4 (or chosen) filesystem on the new partition + ansible.builtin.filesystem: + fstype: "{{ filesystem_type }}" + dev: "{{ disk_device }}1" # Format the single partition (e.g., /dev/sda1) + + # 4. Create a mount point if it doesn't exist + - name: Ensure mount point directory exists + ansible.builtin.file: + path: "{{ mount_point }}" + state: directory + owner: root + group: root + mode: '0755' + + # 5. Mount the newly created partition to the mount point + - name: Mount the partition + ansible.builtin.mount: + path: "{{ mount_point }}" + src: "{{ disk_device }}1" + fstype: "{{ filesystem_type }}" + state: mounted + + # 6. Fetch UUID of the newly created partition (not PARTUUID) using blkid + - name: Fetch UUID of the partition + ansible.builtin.command: "blkid -s UUID -o value {{ disk_device }}1" + register: blkid_output + + # 7. 
Add /etc/fstab entry to ensure the partition is automatically mounted on reboot + - name: Add entry to /etc/fstab for auto-mounting using UUID + ansible.builtin.lineinfile: + path: /etc/fstab + insertafter: EOF + line: "UUID={{ blkid_output.stdout }} {{ mount_point }} {{ filesystem_type }} defaults 0 0" + state: present + notify: Remount partitions + + + handlers: + - name: Remount partitions + ansible.builtin.command: mount -a diff --git a/ansible/playbooks/setup-postgres.yml b/ansible/playbooks/setup-postgres.yml new file mode 100644 index 0000000..a995cd5 --- /dev/null +++ b/ansible/playbooks/setup-postgres.yml @@ -0,0 +1,70 @@ +--- +- name: Setup PostgreSQL Docker Container with Python Virtual Environment + hosts: postgres_and_redis + become: true + vars: + postgres_db: test_db # Replace with your database name + postgres_user: test_user # Replace with your username + postgres_password: test_password # Replace with your password + docker_network: test-pg-network # Name of the Docker network + postgres_container_name: test-postgres # Name of the PostgreSQL container + mount_point: /mnt/storage # External storage location for PostgreSQL data + pgdata_directory: "{{ mount_point }}/pgdata" # Directory on disk to bind mount + venv_path: /opt/venv_ansible_docker # Path to the Python virtual environment + + tasks: + - name: Ensure Python 3, venv, and Docker are installed + apt: + name: + - python3 + - python3-pip + - python3-venv # Ensure venv is installed for creating virtual environments + - docker.io # Install Docker + state: present + + - name: Create a Python Virtual Environment + command: python3 -m venv {{ venv_path }} + args: + creates: "{{ venv_path }}/bin/activate" # Idempotent task, only create if not exists + + - name: Install Docker SDK in the virtual environment (via pip) + command: "{{ venv_path }}/bin/pip install docker" + environment: + PATH: "{{ venv_path }}/bin:{{ ansible_env.PATH }}" # Use venv's pip + + - name: Start Docker service + systemd: + name: docker + state: started + enabled: true + + - name: Ensure Docker network exists + docker_network: + name: "{{ docker_network }}" + state: present + + - name: Ensure the pgdata directory exists on the mounted drive + ansible.builtin.file: + path: "{{ pgdata_directory }}" + state: directory + owner: root + group: root + mode: '0755' + + - name: Run PostgreSQL container + docker_container: + name: "{{ postgres_container_name }}" + image: postgres + restart_policy: always + network_mode: "{{ docker_network }}" + ports: + - "5432:5432" + volumes: + - "{{ pgdata_directory }}:/var/lib/postgresql/data" # Bind mount storage + env: + POSTGRES_DB: "{{ postgres_db }}" + POSTGRES_USER: "{{ postgres_user }}" + POSTGRES_PASSWORD: "{{ postgres_password }}" + state: started + environment: + PATH: "{{ venv_path }}/bin:{{ ansible_env.PATH }}" # Use venv's path for Docker SDK diff --git a/ansible/playbooks/setup-redis.yml b/ansible/playbooks/setup-redis.yml new file mode 100644 index 0000000..ce64944 --- /dev/null +++ b/ansible/playbooks/setup-redis.yml @@ -0,0 +1,41 @@ +--- +- name: Setup Redis Docker Container + hosts: postgres_and_redis + become: true + vars: + redis_container_name: test-redis + docker_network: test-redis-network + redis_password: test_password + + tasks: + - name: Ensure Docker is installed + apt: + name: docker.io + state: present + when: ansible_os_family == "Debian" + + - name: Start and enable Docker + systemd: + name: docker + state: started + enabled: true + + - name: Ensure Docker network exists for Redis + 
docker_network: + name: "{{ docker_network }}" + state: present # This ensures the network is created if it doesn't exist + + - name: Remove existing Redis container (if present) + docker_container: + name: "{{ redis_container_name }}" + state: absent + ignore_errors: true # Ignore if the container doesn't exist + + - name: Run Redis container + command: > + docker run --restart always --name {{ redis_container_name }} -d + --network {{ docker_network }} -p 6379:6379 + -e REDIS_PASSWORD={{ redis_password }} + redis:alpine redis-server --requirepass {{ redis_password }} + args: + creates: "/var/lib/docker/containers/{{ redis_container_name }}" diff --git a/assets/diagrams/plan.drawio b/assets/diagrams/plan.drawio new file mode 100644 index 0000000..e751e8f --- /dev/null +++ b/assets/diagrams/plan.drawio @@ -0,0 +1,135 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/exercises/simple-deployments/3.expose-deployment.yaml b/exercises/simple-deployments/3.expose-deployment.yaml index a1f81bb..67a015f 100644 --- a/exercises/simple-deployments/3.expose-deployment.yaml +++ b/exercises/simple-deployments/3.expose-deployment.yaml @@ -7,14 +7,11 @@ metadata: # Namespace to create the service in namespace: my-apps spec: - # Select Pods with this label to expose via the Service selector: app: hello-world ports: - protocol: TCP - # Expose the Service on this port port: 80 - # Map the Service port to the target Port on the Pod targetPort: 80 - # The type of Service; ClusterIP makes it reachable only within the cluster - type: ClusterIP \ No newline at end of file + nodePort: 30000 # 30000-32767 + type: NodePort \ No newline at end of file From 4e56dc338a357939b70a118b41a8354c9a99a3f3 Mon Sep 17 00:00:00 2001 From: Aleksandar Grbic Date: Tue, 5 Nov 2024 20:24:39 +0100 Subject: [PATCH 2/6] updates --- README.md | 426 +++++++++++++++++++++++++++++++++------------- ansible/README.md | 3 - 2 files changed, 306 insertions(+), 123 deletions(-) delete mode 100644 ansible/README.md diff --git a/README.md b/README.md index d747307..6116724 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,20 @@ **Goal**: By the end of this journey, aim to have the capability to rapidly instantiate new development and production environments and expose them to the external world with equal ease. 
## Table of Contents -- [Introduction & Theoretical Foundations](#introduction--theoretical-foundations) +- [Hardware](#hardware) + - [Hardware Components](#hardware-components) + - [Why These Choices?](#why-these-choices) +- [Raspberry Pi's Setup](#raspberry-pis-setup) + - [Flash SD Cards with Raspberry Pi OS](#flash-sd-cards-with-raspberry-pi-os-using-pi-imager) + - [Initial Boot and Setup](#initial-boot-and-setup) + - [Update and Upgrade](#update-and-upgrade---ansible-playbook) + - [Disable Wi-Fi](#disable-wi-fi-ansible-playbook) + - [Disable Swap](#disable-swap-ansible-playbook) + - [Disable Bluetooth](#disable-bluetooth) + - [Assign Static IP Addresses](#assign-static-ip-addresses) + - [Set SSH Aliases](#set-ssh-aliases) + +- [Kubernetes](#kubernetes) - [What is Kubernetes](#1-what-is-kubernetes-) - [Kubernetes Components Explained](#kubernetes-components-explained) - [Control Plane Components](#control-plane-components) @@ -21,95 +34,23 @@ - [Read and Research](#5-read-and-research) - [Architecture Overview](#4-architecture-overview) - [Community and Ecosystem](#6-community-and-ecosystem) -- [Hardware](#hardware) - - [Hardware Components](#hardware-components) - - [Why These Choices?](#why-these-choices) -- [Setup](#setup) - - [Raspberry Pi](#raspberry-pi) - - [1. Flash SD Cards with Raspberry Pi OS](#1-flash-sd-cards-with-raspberry-pi-os) - - [2. Initial Boot and Setup](#2-initial-boot-and-setup) - - [3. Update and Upgrade](#3-update-and-upgrade) - - [4. Disable Wi-Fi](#4-disable-wi-fi) - - [5. Assign Static IP Addresses](#5-assign-static-ip-addresses) - - [5. Set SSH Aliases](#ssh-aliases) - - [6. K3S Setup](#k3s-setup) - - [Master Node](#master-node) - - [Worker Nodes](#worker-nodes) - - [Kubectl on local machine](#setup-kubectl-on-your-local-machine) -- [Basic Kubernetes Deployments](#basic-kubernetes-deployments) + +- [K3S Setup](#k3s-setup) + - [Enable Memory CGroups](#enable-memory-cgroups-ansible-playbook) + - [Master Node](#setup-the-master-node) + - [Worker Nodes](#setup-worker-nodes) + - [Kubectl on local machine](#setup-kubectl-on-your-local-machine) + +- [Getting Started with Kubernetes](#gettting-started-with-kubernetes) - [Namespace Setup](#namespace-setup) - [Basic Deployment](#basic-deployment) - [Service Exposure](#service-exposure) - [Verify Deployment](#verify-deployment) - [Cleanup](#cleanup-wiping-everything-and-starting-over) + - [Basic Kubernetes Deployments](#basic-kubernetes-deployments) --- -## Introduction & Theoretical Foundations - -#### 1. What is Kubernetes? 🎥 -- [Kubernetes Explained in 6 Minutes | k8s Architecture](https://www.youtube.com/watch?v=TlHvYWVUZyc&ab_channel=ByteByteGo) -- [Kubernetes Explained in 15 Minutes](https://www.youtube.com/watch?v=r2zuL9MW6wc) -- Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. - -## Kubernetes Components Explained - -### Control Plane Components - -- **API Server**: - - Acts as the front-end for the Kubernetes control plane. - -- **etcd**: - - Consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data. - -- **Scheduler**: - - Responsible for scheduling pods onto nodes. - -- **Controller Manager**: - - Runs controllers, which are background threads that handle routine tasks in the cluster. - -### Worker Node Components - -- **Worker Node**: - - Machines, VMs, or physical computers that run your applications. 
- -- **Pods**: - - The smallest deployable units of computing that can be created and managed in Kubernetes. - -- **kubelet**: - - An agent that runs on each worker node in the cluster and ensures that containers are running in a pod. - -- **kube-proxy**: - - Maintains network rules on nodes, allowing network communication to your Pods from network sessions inside or outside of your cluster. - - - -#### 2. Why Use Kubernetes? -- **Scaling**: Easily scale applications up or down as needed. -- **High Availability**: Ensure that your applications are fault-tolerant and highly available. -- **Portability**: Move workloads across different cloud providers or on-premises environments. -- **Declarative Configuration**: Describe what you want, and Kubernetes makes it happen. - -#### 3. Core Components and Concepts -- **Control Plane**: The set of components that manage the overall state of the cluster. -- **Nodes**: The worker machines that run containers. -- **Pods**: The smallest deployable units that can contain one or more containers. -- **Services**: A way to expose Pods to the network. -- **Ingress**: Manages external access to services within a cluster. -- **ConfigMaps and Secrets**: Manage configuration data and secrets separately from container images. - -#### 4. Architecture Overview -- **Bottom-Up View**: Understand Kubernetes from the infrastructure (Nodes) to Pods, to Services, and upwards. -- **Top-Down View**: Start from the user's perspective, breaking down what you want to deploy into services, pods, and the underlying infrastructure. - -#### 5. Read and Research -- Go through [Kubernetes' official documentation](https://kubernetes.io/docs/home/). -- Watch [beginner-friendly YouTube tutorials](https://www.youtube.com/watch?v=d6WC5n9G_sM&ab_channel=freeCodeCamp.org) or online courses. - -#### 6. Community and Ecosystem -- Get familiar with the wider Kubernetes ecosystem, including tooling, forums, and meetups. - ---- ## Hardware ### Hardware Components @@ -134,24 +75,31 @@ The setup illustrated here is not mandatory but reflects my personal choices bas - **[CSL CAT.8 Network Cable 40 Gigabit](https://www.amazon.de/-/en/gp/product/B08FCLHTH5/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1)**: CSL CAT.8 Network Cable 40 Gigabit +- **[2x Verbatim Vi550 S3 SSD](https://www.amazon.de/dp/B07LGKQLT5?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1)** + +- **[2x JSAUX USB 3.0 to SATA Adapter](https://www.amazon.de/dp/B086W944YT?ref=ppx_yo2ov_dt_b_fed_asin_title)** + + + ### Why These Choices? -- **Mobility**: The 4U Rack allows me to move the entire setup easily, making it convenient for different scenarios, from a home office to a small business environment. +**Mobility**: The 4U Rack allows me to move the entire setup easily, making it convenient for different scenarios, from a home office to a small business environment. -- **Professional-Grade Networking**: The Mikrotik router provides a rich feature set generally found in enterprise-grade hardware, offering me a sandbox to experiment with advanced networking configurations. +**Professional-Grade Networking**: The Mikrotik router provides a rich feature set generally found in enterprise-grade hardware, offering me a sandbox to experiment with advanced networking configurations. -- **Scalability**: The Raspberry Pi units and the Rack setup are easily scalable. I can effortlessly add more Pis to the cluster, enhancing its capabilities. +**Scalability**: The Raspberry Pi units and the Rack setup are easily scalable. 
I can effortlessly add more Pis to the cluster, enhancing its capabilities. -- **Affordability**: This setup provides a balance between cost and performance, giving me a powerful Kubernetes cluster without breaking the bank. +**Affordability**: This setup provides a balance between cost and performance, giving me a powerful Kubernetes cluster without breaking the bank. --- -# Setup -## Raspberry Pi +# Raspberry Pi's Setup + +For most steps, an [Ansible playbook](./ansible/playbooks/) is available. However, I strongly recommend that you initially set up the first Raspberry Pi manually. This hands-on approach will help you understand each step more deeply and gain practical experience. Once you've completed the manual setup, you can then use the [Ansible playbook](./ansible/playbooks/) to automate the same tasks across the other devices. -#### 1. Flash SD Cards with Raspberry Pi OS Using Pi Imager +#### Flash SD Cards with Raspberry Pi OS Using Pi Imager - Open [Raspberry Pi Imager](https://www.raspberrypi.com/software/). - Choose the 'OS' you want to install from the list. The tool will download the selected OS image for you. - Insert your SD card and select it in the 'Storage' section. @@ -160,18 +108,19 @@ The setup illustrated here is not mandatory but reflects my personal choices bas - Enable SSH and select the "allow public-key authorization only" option. - Click on 'Write' to begin the flashing process. -#### 2. Initial Boot and Setup +#### Initial Boot and Setup - Insert the flashed SD card into the Raspberry Pi and power it on. - On the first boot, ssh into the Pi to perform initial configuration -#### 3. Update and Upgrade +#### Update and Upgrade - ([Ansible Playbook](./ansible/playbooks/apt-update.yml)) - Run the following commands to update the package list and upgrade the installed packages: + ```bash sudo apt update sudo apt upgrade ``` -#### 4. Disable Wi-Fi +#### Disable Wi-Fi ([Ansible Playbook](./ansible/playbooks/disable-wifi.yml)) ```sh sudo vi /etc/wpa_supplicant/wpa_supplicant.conf @@ -216,8 +165,177 @@ Reboot your Raspberry Pi: sudo reboot ``` +#### Disable Swap ([Ansible Playbook](./ansible/playbooks/disable-swap.yml)) + +Disabling swap in a K3s cluster is crucial because Kubernetes relies on precise memory management to allocate resources, schedule workloads, and handle potential memory limits. When swap is enabled, it introduces unpredictability in how memory is used. The Linux kernel may move inactive memory to disk (swap), giving the impression that there is available memory when, in reality, the node might be under significant memory pressure. This can lead to performance degradation for applications, as accessing memory from the swap space (on disk) is significantly slower than accessing it from RAM. In addition, Kubernetes, by default, expects swap to be off and prevents the kubelet from running unless explicitly overridden, as swap complicates memory monitoring and scheduling. + +Beyond performance, swap interferes with Kubernetes' ability to react to out-of-memory (OOM) conditions. With swap enabled, a node might avoid crashing but at the cost of drastically reduced performance, disk I/O bottlenecks, and inconsistent resource allocation. In contrast, with swap disabled, Kubernetes can correctly identify memory shortages and kill misbehaving pods in a controlled way, allowing the system to recover predictably. 
For edge cases like K3s, which often operate on lightweight and resource-constrained systems (e.g., Raspberry Pis or IoT devices), disabling swap ensures efficient and stable operation without unnecessary disk wear and performance hits. + +- Open a terminal. +- Run the following command to turn off swap for the current session: + +```bash +sudo swapoff -a +``` + +This command disables the swap immediately, but it will be re-enabled after a reboot unless further steps are taken. + +##### Modify `/etc/dphys-swapfile` to Disable Swap Permanently + +Open the swap configuration file `/etc/dphys-swapfile` in a text editor: + +```bash +sudo nano /etc/dphys-swapfile +``` + +Search for the line starting with `CONF_SWAPSIZE=`. +Modify that line to read: + +```bash +CONF_SWAPSIZE=0 +``` + +Save (Ctrl+O in `nano`) and exit the editor (Ctrl+X in `nano`). + +##### Remove the Existing Swap File + +Run the following command to remove the current swap file (`/var/swap`): + +```bash +sudo rm /var/swap +``` + +##### Stop the `dphys-swapfile` service immediately + +Stop the `dphys-swapfile` service, which manages swap: +```bash +sudo systemctl stop dphys-swapfile +``` + +##### Disable the `dphys-swapfile` service to prevent it from running on boot + +Prevent the `dphys-swapfile` service from starting during system boot by disabling it: + +```bash +sudo systemctl disable dphys-swapfile +``` + +--- + +##### Verify swap is turned off + +Run the following command to verify that swap is no longer in use: + +```bash +free -m +``` + +In the output, ensure that the "Swap" line shows `0` for total, used, and free space: + +``` +total used free shared buffers cached +Mem: 2003 322 1681 18 12 129 +-/+ buffers/cache: 180 1822 +Swap: 0 0 0 +``` + +--- + +##### Reboot the system + +Finally, reboot the system in order to apply all changes fully and ensure swap remains permanently disabled: + +```bash +sudo reboot +``` + +After the system comes back online, run `free -m` again to confirm that swap is still disabled. + + +#### Disable Bluetooth + +When using Raspberry Pi devices in a Kubernetes-based environment like K3s, any unused hardware features, such as Bluetooth, can consume system resources or introduce potential security risks. Disabling Bluetooth on each Raspberry Pi optimizes performance by reducing background services and freeing up resources like CPU and memory. Additionally, by disabling an unused service, you reduce the attack surface of your Raspberry Pi-based K3s cluster, providing a more secure and streamlined operating environment. + + +##### Stop and disable the bluetooth service + +**Stop the Bluetooth service** that might be currently running on your Raspberry Pi: + +```bash +sudo systemctl stop bluetooth +``` + +**Disable the service** so it doesn't start automatically during system boot: + +```bash +sudo systemctl disable bluetooth +``` + +This ensures that the Bluetooth service is not running in the background, conserving system resources. + +##### Blacklist bluetooth kernel modules + +To prevent the operating system from loading Bluetooth modules at boot time, you'll need to blacklist specific modules. 
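If you are scripting this step rather than working in an editor, the same two entries can be appended in a single command. This is a minimal sketch that assumes the same file path and module names used in the editor-based steps below:

```bash
# Append the Bluetooth blacklist entries non-interactively (same effect as the editor steps below)
cat <<'EOF' | sudo tee -a /etc/modprobe.d/raspi-blacklist.conf
blacklist btbcm      # Broadcom Bluetooth module
blacklist hci_uart   # Raspberry Pi Bluetooth UART module
EOF
```

Either way, you can verify the entries afterwards with `cat /etc/modprobe.d/raspi-blacklist.conf`.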
+ +**Open the blacklist configuration file for editing (or create it)**: + +```bash +sudo nano /etc/modprobe.d/raspi-blacklist.conf +``` + +**Add the following lines to disable Bluetooth modules**: + +```bash +blacklist btbcm # Disables Broadcom Bluetooth module +blacklist hci_uart # Disables hci_uart module specific to Raspberry Pi Bluetooth +``` + +**Save the file** (Ctrl+O in `nano`) and **exit** the editor (Ctrl+X in `nano`). + +By blacklisting these modules, they won’t be loaded during boot, effectively preventing Bluetooth from running. + +##### Disable bluetooth in the system configuration -#### 5. Assign Static IP Addresses +Bluetooth can be disabled directly at the device level by editing specific Raspberry Pi system configurations. + +**Open the boot configuration file for editing**: + +```bash +sudo nano /boot/config.txt +``` + +**Add the following line to disable Bluetooth**: + +```bash +dtoverlay=disable-bt +``` + +Ensure no Bluetooth device can wake up your Raspberry Pi by ensuring the line is not commented out. + +**Save the changes** (Ctrl+O in `nano`) and **exit** the editor (Ctrl+X in `nano`). + +This command ensures that the Raspberry Pi doesn’t enable Bluetooth at boot by making system-wide firmware adjustments. + +**Reboot the Raspberry Pi** + +To fully apply the changes (stopping the service, blacklisting modules, and adjusting system configuration), it’s recommended to reboot the system. + +**Reboot the Raspberry Pi**: + +```bash +sudo reboot +``` + +After rebooting, you can verify that Bluetooth has been disabled by checking the status of the service: + +```bash +sudo systemctl status bluetooth +``` + +It should indicate that the Bluetooth service is inactive or dead. + + +#### Assign Static IP Addresses ##### MikroTik Router @@ -269,13 +387,77 @@ ssh rp1 That's it! You've set up SSH aliases for your Raspberry Pi cluster. -## K3S Setup +# Kubernetes -### Master Node +## What is Kubernetes? 🎥 +- [Kubernetes Explained in 6 Minutes | k8s Architecture](https://www.youtube.com/watch?v=TlHvYWVUZyc&ab_channel=ByteByteGo) +- [Kubernetes Explained in 15 Minutes](https://www.youtube.com/watch?v=r2zuL9MW6wc) +- Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. -1. **Enable Memory Cgroups**: +### Kubernetes Components Explained -``` +#### Control Plane Components + +- **API Server**: + - Acts as the front-end for the Kubernetes control plane. + +- **etcd**: + - Consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data. + +- **Scheduler**: + - Responsible for scheduling pods onto nodes. + +- **Controller Manager**: + - Runs controllers, which are background threads that handle routine tasks in the cluster. + +#### Worker Node Components + +- **Worker Node**: + - Machines, VMs, or physical computers that run your applications. + +- **Pods**: + - The smallest deployable units of computing that can be created and managed in Kubernetes. + +- **kubelet**: + - An agent that runs on each worker node in the cluster and ensures that containers are running in a pod. + +- **kube-proxy**: + - Maintains network rules on nodes, allowing network communication to your Pods from network sessions inside or outside of your cluster. + + + +#### 2. Why Use Kubernetes? +- **Scaling**: Easily scale applications up or down as needed. +- **High Availability**: Ensure that your applications are fault-tolerant and highly available. 
+- **Portability**: Move workloads across different cloud providers or on-premises environments. +- **Declarative Configuration**: Describe what you want, and Kubernetes makes it happen. + +#### 3. Core Components and Concepts +- **Control Plane**: The set of components that manage the overall state of the cluster. +- **Nodes**: The worker machines that run containers. +- **Pods**: The smallest deployable units that can contain one or more containers. +- **Services**: A way to expose Pods to the network. +- **Ingress**: Manages external access to services within a cluster. +- **ConfigMaps and Secrets**: Manage configuration data and secrets separately from container images. + +#### 4. Architecture Overview +- **Bottom-Up View**: Understand Kubernetes from the infrastructure (Nodes) to Pods, to Services, and upwards. +- **Top-Down View**: Start from the user's perspective, breaking down what you want to deploy into services, pods, and the underlying infrastructure. + +#### 5. Read and Research +- Go through [Kubernetes' official documentation](https://kubernetes.io/docs/home/). +- Watch [beginner-friendly YouTube tutorials](https://www.youtube.com/watch?v=d6WC5n9G_sM&ab_channel=freeCodeCamp.org) or online courses. + +#### 6. Community and Ecosystem +- Get familiar with the wider Kubernetes ecosystem, including tooling, forums, and meetups. + +--- + +## K3S Setup + +### Enable Memory Cgroups ([Ansible Playbook](./ansible/playbooks/enable-memory-groups.yml)) + +```txt Control Groups (Cgroups) are a Linux kernel feature that allows you to allocate resources such as CPU time, system memory, and more among user-defined groups of tasks (processes). K3s requires memory cgroups to be enabled to better manage and restrict the resources that each container can use. This is crucial in a multi-container environment where resource allocation needs to be as efficient as possible. Simple Analogy: Imagine you live in a house with multiple people (processes), and there are limited resources like time (CPU), space (memory), and tools (I/O). Without a system in place, one person might hog the vacuum cleaner all day (CPU time), while someone else fills the fridge with their stuff (memory). @@ -285,33 +467,33 @@ With a `"chore schedule"` (cgroups), you ensure everyone gets an allocated time Before installing K3s, it's essential to enable memory cgroups on the Raspberry Pi for effective container resource management. -1. Edit the `/boot/cmdline.txt` file on your Raspberry Pi. +Edit the `/boot/firmware/cmdline.txt` file on your Raspberry Pi. ```bash -sudo vi /boot/cmdline.txt +sudo vi /boot/firmware/cmdline.txt ``` -2. Append the following to enable memory cgroups. +Append the following to enable memory cgroups. ```text cgroup_memory=1 cgroup_enable=memory ``` -3. Save the file and reboot your Raspberry Pi. +Save the file and reboot your Raspberry Pi. ```bash sudo reboot ``` -2. **Choose a Master Node**: Select one Raspberry Pi to act as the master node. +### Setup the Master Node -3. **Install K3s**: Use the following command to install K3s on the master node. +Select one Raspberry Pi to act as the master node, and install K3S: ```bash curl -sfL https://get.k3s.io | sh - ``` -3. **Copy and Set Permissions for Kubeconfig**: To avoid permission issues when using kubectl, copy the generated Kubeconfig to your home directory and update its ownership. 
+**Copy and Set Permissions for Kubeconfig**: To avoid permission issues when using kubectl, copy the generated Kubeconfig to your home directory and update its ownership. ```bash # Create the .kube directory in the user's home directory if it doesn't already exist @@ -324,17 +506,18 @@ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config sudo chown $(id -u):$(id -g) ~/.kube/config ``` -4. **Verify Cluster**: Ensure that `/etc/rancher/k3s/k3s.yaml` was created and the cluster is accessible. +**Verify Cluster**: Ensure that `/etc/rancher/k3s/k3s.yaml` was created and the cluster is accessible. ```bash kubectl --kubeconfig ~/.kube/config get nodes ``` -5. **Set KUBECONFIG Environment Variable**: To make it more convenient to run `kubectl` commands without having to specify the `--kubeconfig` flag every time, you can set an environment variable to automatically point to the kubeconfig file. +**Set KUBECONFIG Environment Variable**: To make it more convenient to run `kubectl` commands without having to specify the `--kubeconfig` flag every time, you can set an environment variable to automatically point to the kubeconfig file. ```bash export KUBECONFIG=~/.kube/config ``` + To make this setting permanent across shell sessions, add it to your shell profile: ```bash @@ -346,20 +529,21 @@ By doing this, you streamline your workflow, allowing you to simply run `kubectl --- -### Worker Nodes +### Setup Worker Nodes -1. **Join Tokens**: On the master node, retrieve the join token from `/var/lib/rancher/k3s/server/token`. +**Join Tokens**: On the master node, retrieve the join token from `/var/lib/rancher/k3s/server/token`. ```bash vi /var/lib/rancher/k3s/server/token ``` -2. **Worker Installation**: Use this token to join each worker node to the master. + +**Worker Installation**: Use this token to join each worker node to the master. ```bash curl -sfL https://get.k3s.io | K3S_URL=https://:6443 K3S_TOKEN= sh - ``` -3. **Node Verification**: Check that all worker nodes have joined the cluster. On your master node, run: +**Node Verification**: Check that all worker nodes have joined the cluster. On your master node, run: ```bash kubectl get nodes @@ -371,15 +555,17 @@ kubectl get nodes #### Kubeconfig -After setting up your cluster, it's more convenient to manage it remotely from your local machine. Here's how to do that: +After setting up your cluster, it's more convenient to manage it remotely from your local machine. + +Here's how to do that: -1. **Create the `.kube` directory on your local machine if it doesn't already exist.** +**Create the `.kube` directory on your local machine if it doesn't already exist.** ```bash mkdir -p ~/.kube ``` -2. **Copy the kubeconfig from the master node to your local `.kube` directory.** +**Copy the kubeconfig from the master node to your local `.kube` directory.** ```bash scp @:~/.kube/config ~/.kube/config @@ -388,7 +574,7 @@ Replace `` with your username and `` with the IP address o **Note**: If you encounter a permissions issue while copying, ensure that the `~/.kube/config` on your master node is owned by your user and is accessible. You might have to adjust file permissions or ownership on the master node accordingly. -3. **Update the kubeconfig server details (Optional)** +**Update the kubeconfig server details (Optional)** Open your local `~/.kube/config` and make sure the `server` IP matches your master node's IP. If it's set to `127.0.0.1`, you'll need to update it. 
@@ -402,9 +588,9 @@ After completing these steps, you should be able to run `kubectl` commands from --- -## Basic Kubernetes Deployments +# Gettting Started with Kubernetes -### Namespace Setup +## Namespace Setup 1. **Create a new Kubernetes Namespace**: @@ -429,7 +615,7 @@ metadata: kubectl apply -f namespace.yaml ``` -### Basic Deployment +## Basic Deployment 2. **Deploy a Simple App**: @@ -477,7 +663,7 @@ spec: kubectl apply -f deployment.yaml ``` -### Service Exposure +## Service Exposure 3. **Expose the Deployment**: @@ -517,7 +703,7 @@ spec: kubectl apply -f service.yaml ``` -### Verify Deployment +## Verify Deployment 4. **Verify Using Port-Forward**: @@ -526,7 +712,7 @@ kubectl apply -f service.yaml kubectl port-forward deployment/hello-world 8081:80 --namespace=my-apps ``` -### Cleanup: Wiping Everything and Starting Over +## Cleanup: Wiping Everything and Starting Over **Remove All Resources**: diff --git a/ansible/README.md b/ansible/README.md deleted file mode 100644 index 7599c68..0000000 --- a/ansible/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Programmer Network Ansible - -[Popular Ansible Modules](https://opensource.com/article/19/9/must-know-ansible-modules) \ No newline at end of file From c09c72c6de317af84c34e924f686869e285bc6b5 Mon Sep 17 00:00:00 2001 From: Aleksandar Grbic Date: Wed, 6 Nov 2024 22:13:04 +0100 Subject: [PATCH 3/6] updates --- .vscode/settings.json | 7 +++ ansible/playbooks/fan-control/fan-control.py | 33 ---------- ansible/playbooks/fan-control/fan-control.yml | 63 ------------------- 3 files changed, 7 insertions(+), 96 deletions(-) create mode 100644 .vscode/settings.json delete mode 100644 ansible/playbooks/fan-control/fan-control.py delete mode 100644 ansible/playbooks/fan-control/fan-control.yml diff --git a/.vscode/settings.json b/.vscode/settings.json new file mode 100644 index 0000000..7893d48 --- /dev/null +++ b/.vscode/settings.json @@ -0,0 +1,7 @@ +{ + "background.windowBackgrounds": [ + "https://i.ibb.co/xYkFskM/space-colorful-waves-abstract-4k-36.jpg" + ], + "background.autoInstall": true, + "background.smoothImageRendering": true +} \ No newline at end of file diff --git a/ansible/playbooks/fan-control/fan-control.py b/ansible/playbooks/fan-control/fan-control.py deleted file mode 100644 index 8b4a331..0000000 --- a/ansible/playbooks/fan-control/fan-control.py +++ /dev/null @@ -1,33 +0,0 @@ -import time -from gpiozero import OutputDevice -import psutil - -FAN_PIN = 18 -TEMP_ON = 60 -TEMP_OFF = 50 - -fan = OutputDevice(FAN_PIN) - -def get_cpu_temperature(): - temp = psutil.sensors_temperatures()['cpu_thermal'][0].current - return temp - -def control_fan(): - current_temp = get_cpu_temperature() - if current_temp >= TEMP_ON: - if not fan.value: - print(f"Temperature is {current_temp}°C — Turning fan ON") - fan.on() - elif current_temp <= TEMP_OFF: - if fan.value: - print(f"Temperature is {current_temp}°C — Turning fan OFF") - fan.off() - -if __name__ == '__main__': - try: - while True: - control_fan() - time.sleep(5) - except KeyboardInterrupt: - fan.off() - print("Fan control stopped.") diff --git a/ansible/playbooks/fan-control/fan-control.yml b/ansible/playbooks/fan-control/fan-control.yml deleted file mode 100644 index 6bfc271..0000000 --- a/ansible/playbooks/fan-control/fan-control.yml +++ /dev/null @@ -1,63 +0,0 @@ ---- -- name: Setup fan control on Raspberry Pis - hosts: rpi-cluster - become: yes - gather_facts: yes - - tasks: - - name: Ensure Python3 and required packages are installed - apt: - name: - - python3 - - python3-gpiozero 
- - python3-psutil - state: present - update_cache: yes - - - name: Create directory for fan control script - file: - path: /home/{{ ansible_user }}/fan_control - state: directory - mode: '0755' - owner: "{{ ansible_user }}" - group: "{{ ansible_user }}" - - - name: Deploy fan control Python script - copy: - src: fan_control.py - dest: /home/{{ ansible_user }}/fan_control/fan_control.py - mode: '0755' - owner: "{{ ansible_user }}" - group: "{{ ansible_user }}" - - - name: Create systemd service file for fan control - copy: - content: | - [Unit] - Description=Fan Control Service - After=multi-user.target - - [Service] - ExecStart=/usr/bin/python3 /home/{{ ansible_user }}/fan_control/fan_control.py - Restart=always - User={{ ansible_user }} - - [Install] - WantedBy=multi-user.target - dest: /etc/systemd/system/fan_control.service - mode: '0644' - - - name: Reload systemd to apply new service - systemd: - daemon_reload: true - - - name: Enable and start fan control service - systemd: - name: fan_control.service - enabled: yes - state: started - - - name: Ensure fan control service is running - systemd: - name: fan_control.service - state: started From 9266ba51da42f67a1ffd33340b1f2ef04a0d3812 Mon Sep 17 00:00:00 2001 From: Aleksandar Grbic Date: Wed, 6 Nov 2024 22:13:16 +0100 Subject: [PATCH 4/6] updates --- MAINTENANCE.md | 36 ++++++++++++++++++------------------ 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/MAINTENANCE.md b/MAINTENANCE.md index f0de574..4f4121a 100644 --- a/MAINTENANCE.md +++ b/MAINTENANCE.md @@ -12,39 +12,39 @@ To update k3s on your Raspberry Pis, you can follow these steps: 2. **Drain the node**: If you're updating one node at a time in a cluster, drain the node to safely remove it from the cluster during the update. - ```bash - kubectl drain --ignore-daemonsets --delete-emptydir-data - ``` +```bash +kubectl drain --ignore-daemonsets --delete-emptydir-data +``` 3. **Stop k3s service**: Before updating, stop the k3s service on the node. - ```bash - sudo systemctl stop k3s - ``` +```bash +sudo systemctl stop k3s +``` 4. **Update k3s**: Download and install the latest version of k3s on the Raspberry Pi. You can use the installation script provided by k3s for updating it as well. - ```bash - curl -sfL https://get.k3s.io | sh - - ``` +```bash +curl -sfL https://get.k3s.io | sh - +``` 5. **Start k3s service**: After the update, start the k3s service again. - ```bash - sudo systemctl start k3s - ``` +```bash +sudo systemctl start k3s +``` 6. **Uncordon the node**: If you drained the node earlier, make it schedulable again by uncordoning it. - ```bash - kubectl uncordon - ``` +```bash +kubectl uncordon +``` 7. **Verify the update**: Check the version of k3s to confirm the update was successful. - ```bash - k3s --version - ``` +```bash +k3s --version +``` 8. **Repeat for other nodes**: If you have multiple Raspberry Pis, repeat these steps for each node. 
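Once you are comfortable with the manual procedure, the same drain/update/uncordon cycle can be wrapped in a rough script. The sketch below is only an illustration under a few assumptions: `rp2`–`rp4` are the worker SSH aliases configured earlier, the Kubernetes node names match those aliases (verify with `kubectl get nodes`), and `MASTER_IP` and `K3S_TOKEN` (the token from `/var/lib/rancher/k3s/server/token` on the master) are exported in your shell before running it:

```bash
#!/usr/bin/env bash
# Rolling k3s update for the worker nodes, one at a time (sketch -- adjust names to your cluster).
set -euo pipefail

for node in rp2 rp3 rp4; do
  echo ">>> Updating ${node}"

  # Drain the node so workloads are rescheduled elsewhere first.
  kubectl drain "${node}" --ignore-daemonsets --delete-emptydir-data

  # Stop the k3s service (worker nodes usually run it as k3s-agent).
  ssh "${node}" "sudo systemctl stop k3s-agent || sudo systemctl stop k3s"

  # Re-run the installer with the join parameters, as in the worker setup section.
  ssh "${node}" "curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${K3S_TOKEN} sh -"

  # Bring the node back into scheduling rotation.
  ssh "${node}" "sudo systemctl start k3s-agent || sudo systemctl start k3s"
  kubectl uncordon "${node}"
done

kubectl get nodes
```

Run it from a machine that already has `kubectl` access to the cluster, and update the master node separately using the manual steps above.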
From a1b935dc3dbaa9f587beaee9c013a4d74c603984 Mon Sep 17 00:00:00 2001 From: Aleksandar Grbic Date: Wed, 6 Nov 2024 22:26:50 +0100 Subject: [PATCH 5/6] Add Ansible automation section and update README --- README.md | 2 + SETTING_UP_ANSIBLE.md | 148 ++++++++++++++++-------------------------- 2 files changed, 58 insertions(+), 92 deletions(-) diff --git a/README.md b/README.md index 6116724..24c2b08 100644 --- a/README.md +++ b/README.md @@ -24,6 +24,8 @@ - [Assign Static IP Addresses](#assign-static-ip-addresses) - [Set SSH Aliases](#set-ssh-aliases) +- [Automation with Ansible](./SETTING_UP_ANSIBLE.md) + - [Kubernetes](#kubernetes) - [What is Kubernetes](#1-what-is-kubernetes-) - [Kubernetes Components Explained](#kubernetes-components-explained) diff --git a/SETTING_UP_ANSIBLE.md b/SETTING_UP_ANSIBLE.md index 5711c32..ce101af 100644 --- a/SETTING_UP_ANSIBLE.md +++ b/SETTING_UP_ANSIBLE.md @@ -1,125 +1,89 @@ -# Starting with Ansible +# Getting Started with Ansible -To get started with Ansible, check out the official [Getting Started](https://docs.ansible.com/ansible/latest/getting_started/index.html) guide. +After setting up one of our Raspberry Pi devices, it's easy to see how tedious it would be to SSH into the other three devices and manually repeat each step. This process is not only time-consuming but also error-prone, given that each step is done manually. -## Installing Ansible +To make things more efficient, we can turn to **Ansible**—a tool that allows us to automate tasks across multiple machines. To get started, refer to the official [Getting Started](https://docs.ansible.com/ansible/latest/getting_started/index.html) guide. -In order to install Ansible, in case you don't already have it, you will need to install Python. +## Installation and PATH Configuration -After that run +Once Ansible has been installed, you *might** encounter a warning indicating that some Ansible executables (like `ansible-doc`, `ansible-galaxy`, and others) are installed in `/home/YOUR_USER/.local/bin`, which is not included in your system’s PATH. -```python -pip install ansible -``` +To resolve this, you will need to edit your shell profile. If you’re using Bash, open the `.bashrc` file with `nano ~/.bashrc`. For Zsh users, you should open `.zshrc` by running `nano ~/.zshrc`. -You might get a warning like +At the end of the file, you should add this line: ```bash -ansible-doc, ansible-galaxy, ansible-inventory, ansible-playbook, ansible-pull and ansible-vault are installed in '/home/YOUR_USER/.local/bin' which is not on PATH +export PATH="$HOME/.local/bin:$PATH" ``` -To add `/home/YOUR_USE/.local/bin` to your PATH, follow these steps: +Once you’ve saved and closed the file, reload your shell profile so that the new PATH takes effect. For Bash, you can run `source ~/.bashrc`, and for Zsh users, run `source ~/.zshrc`. After performing these steps, you should no longer see warnings related to the Ansible executables. -1. **Open your shell profile file** (e.g., `.bashrc`, `.zshrc`, or `.profile`): - ```bash - nano ~/.bashrc - ``` - Or, if you’re using `zsh`, open `.zshrc`: - ```bash - nano ~/.zshrc - ``` +## Creating a Project Directory -2. **Add the directory to the PATH** by appending the following line at the end of the file: - ```bash - export PATH="$HOME/.local/bin:$PATH" - ``` +With the setup completed, it's a good idea to create a dedicated directory to organize all your Ansible files. You can create a new directory called `ansible` and navigate into it using: -3. 
**Save and close the file**, then reload the profile with: - ```bash - source ~/.bashrc - ``` - Or, for `zsh`: - ```bash - source ~/.zshrc - ``` +```bash +mkdir ansible && cd ansible +``` -After this, the directory `/home/YOUR_USER/.local/bin` will be in your PATH, and you should be able to run the Ansible commands without seeing the warning. +In this folder, you’ll store your playbooks, inventory files, and any other Ansible configurations. -### Create a project folder +## Setting Up Ansible Vault + +Ansible Vault is a tool that allows you to securely store sensitive information such as passwords, IP addresses, or other secrets. To initialize a new encrypted vault file, use the following command: ```bash -mkdir ansible_quickstart && cd ansible_quickstart +ansible-vault create secrets.yml ``` +When prompted, set a password—this password will be required every time you access or modify the vault file. After you’ve set the password, you can include sensitive data in the `secrets.yml` file using YAML format. For example, you might include the IP addresses and credentials for each Raspberry Pi: -### TODO +```yaml +all: + hosts: + raspberry_pi_1: + ansible_host: 192.168.1.10 + raspberry_pi_2: + ansible_host: 192.168.1.11 + raspberry_pi_3: + ansible_host: 192.168.1.12 + raspberry_pi_4: + ansible_host: 192.168.1.13 + vars: + ansible_user: pi + ansible_password: "your_password_here" +``` -- Create an inventory -- Create a playbook -- Explain relations between Control Node, Mannaged Nodes, Playbook, Tasks, Roles, etc +If you already have an unencrypted inventory file and want to encrypt it for security, you can do so by running: +```bash +ansible-vault encrypt inventory.yml +``` -## Setting up Ansible Vault +To use an encrypted inventory file when running a playbook, you’ll need to provide the vault password with the `--ask-vault-pass` option, like so: -Ansible Vault is a great way to securely store sensitive information, like IP addresses, passwords, and other secrets. Here’s a step-by-step guide to setting it up and using it for sensitive inventory data: +```bash +ansible-playbook -i secrets.yml --ask-vault-pass playbook.yml +``` -### Step 1: Initialize Ansible Vault -1. To create a new encrypted file, run: +If you prefer not to manually enter the password every time, you can store the password in a text file such as `vault_pass.txt`. Ensure that the file is protected using the following command: ```bash -ansible-vault create secrets.yml +chmod 600 vault_pass.txt ``` -2. You’ll be prompted to set a password. This password will be required to access the encrypted file. +You can then run your playbook using that password file: -3. Inside `secrets.yml`, you can store sensitive data in YAML format, such as IP addresses or inventory details. Here’s an example format: +```bash +ansible-playbook -i secrets.yml --vault-password-file vault_pass.txt playbook.yml +``` -```yaml - all: - hosts: - raspberry_pi_1: - ansible_host: 192.168.1.10 - raspberry_pi_2: - ansible_host: 192.168.1.11 - raspberry_pi_3: - ansible_host: 192.168.1.12 - raspberry_pi_4: - ansible_host: 192.168.1.13 - vars: - ansible_user: pi - ansible_password: "your_password_here" +If you need to make changes to the vault file, you can use the command: + +```bash +ansible-vault edit secrets.yml ``` -### Step 2: Encrypt the Existing Inventory File (Optional) -If you already have an inventory file and want to encrypt it, run: - ```bash - ansible-vault encrypt inventory.yml - ``` - -### Step 3: Use the Encrypted Inventory File -1. 
When running a playbook, provide the vault password with `--ask-vault-pass`: - ```bash - ansible-playbook -i secrets.yml --ask-vault-pass playbook.yml - ``` - -2. Alternatively, create a file to store the vault password (for automation purposes): - - Save the password in a file, e.g., `vault_pass.txt`, and protect it with permissions: - ```bash - chmod 600 vault_pass.txt - ``` - - Run the playbook with the password file: - ```bash - ansible-playbook -i secrets.yml --vault-password-file vault_pass.txt playbook.yml - ``` - -### Step 4: Editing the Encrypted File -To make changes to the encrypted file, use: - ```bash - ansible-vault edit secrets.yml - ``` - -### Additional Tips -- **For multiple environments**: You can create separate encrypted inventory files (e.g., `prod_secrets.yml`, `dev_secrets.yml`) to manage environments. -- **Organizing secrets**: Use `group_vars` and `host_vars` directories for organizing secrets by groups or hosts, and encrypt files within those directories as needed. - -This setup will keep your IP addresses, credentials, and other sensitive details secure while enabling Ansible to use them when needed. \ No newline at end of file +For more complex setups, such as managing different environments, you can create separate encrypted inventory files, like `prod_secrets.yml` and `dev_secrets.yml`. You can also organize secrets by groups or hosts by creating encrypted files for each, stored in the `group_vars` and `host_vars` directories. This approach allows for fine-grained control over your environments while keeping sensitive data secure. + +By following these steps, you can ensure both automation and security when working with multiple Raspberry Pi devices through Ansible. With the help of Ansible Vault, sensitive credentials like passwords and IP addresses are encrypted and protected from unauthorized access, while still being usable whenever Ansible tasks need them. 
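As a final sanity check of this setup, you might run a tiny playbook against the encrypted inventory to confirm that the vault password works and that every Raspberry Pi is reachable. The example below is a hypothetical `ping.yml`; the other file names follow the ones used above:

```yaml
# ping.yml -- minimal connectivity check (hypothetical example)
- name: Verify Ansible can reach every Raspberry Pi
  hosts: all
  gather_facts: false
  tasks:
    - name: Ping each host over SSH
      ansible.builtin.ping:
```

Run it with the encrypted inventory and your vault password file:

```bash
ansible-playbook -i secrets.yml --vault-password-file vault_pass.txt ping.yml
```

Note that password-based SSH (the `ansible_password` variable) requires `sshpass` on the control node; key-based authentication avoids that dependency.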
\ No newline at end of file From 8ed46d46fc24d8d633740fed3198f445f1229920 Mon Sep 17 00:00:00 2001 From: Aleksandar Grbic Date: Wed, 6 Nov 2024 22:37:41 +0100 Subject: [PATCH 6/6] Add Kubernetes documentation: theory, hardware components, and getting started guide --- README.md | 835 +----------------------- docs/getting-started-with-kubernetes.md | 225 +++++++ docs/hardware-components.md | 38 ++ docs/k3s-setup.md | 132 ++++ docs/kubernetes-theory.md | 63 ++ docs/raspberry-pi-setup.md | 291 +++++++++ 6 files changed, 784 insertions(+), 800 deletions(-) create mode 100644 docs/getting-started-with-kubernetes.md create mode 100644 docs/hardware-components.md create mode 100644 docs/k3s-setup.md create mode 100644 docs/kubernetes-theory.md create mode 100644 docs/raspberry-pi-setup.md diff --git a/README.md b/README.md index 24c2b08..0b7abe9 100644 --- a/README.md +++ b/README.md @@ -12,806 +12,41 @@ ## Table of Contents - [Hardware](#hardware) - - [Hardware Components](#hardware-components) - - [Why These Choices?](#why-these-choices) -- [Raspberry Pi's Setup](#raspberry-pis-setup) - - [Flash SD Cards with Raspberry Pi OS](#flash-sd-cards-with-raspberry-pi-os-using-pi-imager) - - [Initial Boot and Setup](#initial-boot-and-setup) - - [Update and Upgrade](#update-and-upgrade---ansible-playbook) - - [Disable Wi-Fi](#disable-wi-fi-ansible-playbook) - - [Disable Swap](#disable-swap-ansible-playbook) - - [Disable Bluetooth](#disable-bluetooth) - - [Assign Static IP Addresses](#assign-static-ip-addresses) - - [Set SSH Aliases](#set-ssh-aliases) + - [Hardware Components](./docs/hardware-components.md#hardware) + - [Why These Choices?](./docs/hardware-components.md#why-these-choices) +- [Raspberry Pi's Setup](./docs/raspberry-pi-setup.md#raspberry-pis-setup) + - [Flash SD Cards with Raspberry Pi OS](./docs/raspberry-pi-setup.md#flash-sd-cards-with-raspberry-pi-os-using-pi-imager) + - [Initial Boot and Setup](./docs/raspberry-pi-setup.md#initial-boot-and-setup) + - [Update and Upgrade](./docs/raspberry-pi-setup.md#update-and-upgrade---ansible-playbook) + - [Disable Wi-Fi](./docs/raspberry-pi-setup.md#disable-wi-fi-ansible-playbook) + - [Disable Swap](./docs/raspberry-pi-setup.md#disable-swap-ansible-playbook) + - [Disable Bluetooth](./docs/raspberry-pi-setup.md#disable-bluetooth) + - [Assign Static IP Addresses](./docs/raspberry-pi-setup.md#assign-static-ip-addresses) + - [Set SSH Aliases](./docs/raspberry-pi-setup.md#set-ssh-aliases) - [Automation with Ansible](./SETTING_UP_ANSIBLE.md) -- [Kubernetes](#kubernetes) - - [What is Kubernetes](#1-what-is-kubernetes-) - - [Kubernetes Components Explained](#kubernetes-components-explained) - - [Control Plane Components](#control-plane-components) - - [Worker Node Components](#worker-node-components) - - [Why Use Kubernetes](#2-why-use-kubernetes) - - [Core Components and Concepts](#3-core-components-and-concepts) - - [Read and Research](#5-read-and-research) - - [Architecture Overview](#4-architecture-overview) - - [Community and Ecosystem](#6-community-and-ecosystem) - -- [K3S Setup](#k3s-setup) - - [Enable Memory CGroups](#enable-memory-cgroups-ansible-playbook) - - [Master Node](#setup-the-master-node) - - [Worker Nodes](#setup-worker-nodes) - - [Kubectl on local machine](#setup-kubectl-on-your-local-machine) - -- [Getting Started with Kubernetes](#gettting-started-with-kubernetes) - - [Namespace Setup](#namespace-setup) - - [Basic Deployment](#basic-deployment) - - [Service Exposure](#service-exposure) - - [Verify Deployment](#verify-deployment) - - 
[Cleanup](#cleanup-wiping-everything-and-starting-over) - - [Basic Kubernetes Deployments](#basic-kubernetes-deployments) - ---- - - -## Hardware -### Hardware Components - -The setup illustrated here is not mandatory but reflects my personal choices based on both experience and specific requirements. I aimed for a setup that is not only robust but also relatively mobile. Therefore, I opted for a 4U Rack where all the components are neatly encapsulated, making it easy to plug and play. I plan to expand this cluster by adding another four Raspberry Pis once the prices are more accommodating. - -- **[Mikrotik RB3011UiAS-RM](https://mikrotik.com/product/RB3011UiAS-RM)**: I chose Mikrotik's router as it offers a professional-grade, feature-rich solution at an affordable price. This router allows for a myriad of configurations and functionalities that you'd typically find in higher-end solutions like Cisco. Its features like robust firewall options, VPN support, and advanced routing capabilities made it a compelling choice. - -- **[4x Raspberry Pi 4 B 8GB](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/)**: I opted for the 8GB variant of the Raspberry Pi 4 B for its performance capabilities. The 8GB RAM provides ample room for running multiple containers and allows for future scalability. - -- **[4U Rack Cabinet](https://www.compumail.dk/en/p/lanberg-rack-gra-993865294)**: A 4U Rack to encapsulate all components cleanly. It provides the benefit of space efficiency and easy access for any hardware changes or additions. - -- **[Rack Power Supply](https://www.compumail.dk/en/p/lanberg-pdu-09f-0300-bk-stromstodsbeskytter-9-stik-16a-sort-3m-996106700)**: A centralized power supply solution for the entire rack. Ensures consistent and reliable power distribution to all the components. - -- **[GeeekPi 1U Rack Kit for Raspberry Pi 4B, 19" 1U Rack Mount](https://www.amazon.de/-/en/gp/product/B0972928CN/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1)**: This 19 inch rack mount kit is specially designed for recording Raspberry Pi 4B boards and supports up to 4 units. - -- **[SanDisk Extreme microSDHC 3 Rescue Pro Deluxe Memory Card, Red/Gold 64GB](https://www.amazon.de/-/en/gp/product/B07FCMBLV6/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1)**: Up to 160MB/s Read speed and 60 MB/s. Write speed for fast recording and transferring - -- **[Vanja SD/Micro SD Card Reader](https://www.amazon.de/-/en/gp/product/B00W02VHM6/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1)**: Micro USB OTG Adapter and USB 2.0 Memory Card Reader - -- **[deleyCON 5 x 0.25 m CAT8.1](https://www.amazon.de/-/en/gp/product/B08WPJVGHR/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1)**: deleyCON CAT 8.1 patch cable network cable as set // 2x RJ45 plug // S/FTP PIMF shielding - -- **[CSL CAT.8 Network Cable 40 Gigabit](https://www.amazon.de/-/en/gp/product/B08FCLHTH5/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1)**: CSL CAT.8 Network Cable 40 Gigabit - -- **[2x Verbatim Vi550 S3 SSD](https://www.amazon.de/dp/B07LGKQLT5?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1)** - -- **[2x JSAUX USB 3.0 to SATA Adapter](https://www.amazon.de/dp/B086W944YT?ref=ppx_yo2ov_dt_b_fed_asin_title)** - - - -### Why These Choices? - -**Mobility**: The 4U Rack allows me to move the entire setup easily, making it convenient for different scenarios, from a home office to a small business environment. 
- -**Professional-Grade Networking**: The Mikrotik router provides a rich feature set generally found in enterprise-grade hardware, offering me a sandbox to experiment with advanced networking configurations. - -**Scalability**: The Raspberry Pi units and the Rack setup are easily scalable. I can effortlessly add more Pis to the cluster, enhancing its capabilities. - -**Affordability**: This setup provides a balance between cost and performance, giving me a powerful Kubernetes cluster without breaking the bank. - - ---- - - -# Raspberry Pi's Setup - -For most steps, an [Ansible playbook](./ansible/playbooks/) is available. However, I strongly recommend that you initially set up the first Raspberry Pi manually. This hands-on approach will help you understand each step more deeply and gain practical experience. Once you've completed the manual setup, you can then use the [Ansible playbook](./ansible/playbooks/) to automate the same tasks across the other devices. - -#### Flash SD Cards with Raspberry Pi OS Using Pi Imager -- Open [Raspberry Pi Imager](https://www.raspberrypi.com/software/). - - Choose the 'OS' you want to install from the list. The tool will download the selected OS image for you. - - Insert your SD card and select it in the 'Storage' section. - - Before writing, click on the cog icon for advanced settings. - - Set the hostname to your desired value, e.g., `RP1`. - - Enable SSH and select the "allow public-key authorization only" option. - - Click on 'Write' to begin the flashing process. - -#### Initial Boot and Setup -- Insert the flashed SD card into the Raspberry Pi and power it on. -- On the first boot, ssh into the Pi to perform initial configuration - -#### Update and Upgrade - ([Ansible Playbook](./ansible/playbooks/apt-update.yml)) -- Run the following commands to update the package list and upgrade the installed packages: - -```bash -sudo apt update -sudo apt upgrade -``` - -#### Disable Wi-Fi ([Ansible Playbook](./ansible/playbooks/disable-wifi.yml)) - -```sh -sudo vi /etc/wpa_supplicant/wpa_supplicant.conf -``` - -Add the following lines to the file: - -```sh -network={ - ssid="" - key_mgmt=NONE -} -``` - -Disable the Wi-Fi interface: - -```sh -sudo ifconfig wlan0 down -``` - -Block the Wi-Fi module using `rfkill`: - -```sh -sudo rfkill block wifi -``` - -Prevent the Wi-Fi module from loading at boot: - -```sh -sudo nano /etc/modprobe.d/raspi-blacklist.conf -``` - -Add the following line: - -```sh -blacklist brcmfmac -``` - -Reboot your Raspberry Pi: - -```sh -sudo reboot -``` - -#### Disable Swap ([Ansible Playbook](./ansible/playbooks/disable-swap.yml)) - -Disabling swap in a K3s cluster is crucial because Kubernetes relies on precise memory management to allocate resources, schedule workloads, and handle potential memory limits. When swap is enabled, it introduces unpredictability in how memory is used. The Linux kernel may move inactive memory to disk (swap), giving the impression that there is available memory when, in reality, the node might be under significant memory pressure. This can lead to performance degradation for applications, as accessing memory from the swap space (on disk) is significantly slower than accessing it from RAM. In addition, Kubernetes, by default, expects swap to be off and prevents the kubelet from running unless explicitly overridden, as swap complicates memory monitoring and scheduling. - -Beyond performance, swap interferes with Kubernetes' ability to react to out-of-memory (OOM) conditions. 
With swap enabled, a node might avoid crashing but at the cost of drastically reduced performance, disk I/O bottlenecks, and inconsistent resource allocation. In contrast, with swap disabled, Kubernetes can correctly identify memory shortages and kill misbehaving pods in a controlled way, allowing the system to recover predictably. For edge cases like K3s, which often operate on lightweight and resource-constrained systems (e.g., Raspberry Pis or IoT devices), disabling swap ensures efficient and stable operation without unnecessary disk wear and performance hits. - -- Open a terminal. -- Run the following command to turn off swap for the current session: - -```bash -sudo swapoff -a -``` - -This command disables the swap immediately, but it will be re-enabled after a reboot unless further steps are taken. - -##### Modify `/etc/dphys-swapfile` to Disable Swap Permanently - -Open the swap configuration file `/etc/dphys-swapfile` in a text editor: - -```bash -sudo nano /etc/dphys-swapfile -``` - -Search for the line starting with `CONF_SWAPSIZE=`. -Modify that line to read: - -```bash -CONF_SWAPSIZE=0 -``` - -Save (Ctrl+O in `nano`) and exit the editor (Ctrl+X in `nano`). - -##### Remove the Existing Swap File - -Run the following command to remove the current swap file (`/var/swap`): - -```bash -sudo rm /var/swap -``` - -##### Stop the `dphys-swapfile` service immediately - -Stop the `dphys-swapfile` service, which manages swap: -```bash -sudo systemctl stop dphys-swapfile -``` - -##### Disable the `dphys-swapfile` service to prevent it from running on boot - -Prevent the `dphys-swapfile` service from starting during system boot by disabling it: - -```bash -sudo systemctl disable dphys-swapfile -``` - ---- - -##### Verify swap is turned off - -Run the following command to verify that swap is no longer in use: - -```bash -free -m -``` - -In the output, ensure that the "Swap" line shows `0` for total, used, and free space: - -``` -total used free shared buffers cached -Mem: 2003 322 1681 18 12 129 --/+ buffers/cache: 180 1822 -Swap: 0 0 0 -``` - ---- - -##### Reboot the system - -Finally, reboot the system in order to apply all changes fully and ensure swap remains permanently disabled: - -```bash -sudo reboot -``` - -After the system comes back online, run `free -m` again to confirm that swap is still disabled. - - -#### Disable Bluetooth - -When using Raspberry Pi devices in a Kubernetes-based environment like K3s, any unused hardware features, such as Bluetooth, can consume system resources or introduce potential security risks. Disabling Bluetooth on each Raspberry Pi optimizes performance by reducing background services and freeing up resources like CPU and memory. Additionally, by disabling an unused service, you reduce the attack surface of your Raspberry Pi-based K3s cluster, providing a more secure and streamlined operating environment. - - -##### Stop and disable the bluetooth service - -**Stop the Bluetooth service** that might be currently running on your Raspberry Pi: - -```bash -sudo systemctl stop bluetooth -``` - -**Disable the service** so it doesn't start automatically during system boot: - -```bash -sudo systemctl disable bluetooth -``` - -This ensures that the Bluetooth service is not running in the background, conserving system resources. - -##### Blacklist bluetooth kernel modules - -To prevent the operating system from loading Bluetooth modules at boot time, you'll need to blacklist specific modules. 
- -**Open the blacklist configuration file for editing (or create it)**: - -```bash -sudo nano /etc/modprobe.d/raspi-blacklist.conf -``` - -**Add the following lines to disable Bluetooth modules**: - -```bash -blacklist btbcm # Disables Broadcom Bluetooth module -blacklist hci_uart # Disables hci_uart module specific to Raspberry Pi Bluetooth -``` - -**Save the file** (Ctrl+O in `nano`) and **exit** the editor (Ctrl+X in `nano`). - -By blacklisting these modules, they won’t be loaded during boot, effectively preventing Bluetooth from running. - -##### Disable bluetooth in the system configuration - -Bluetooth can be disabled directly at the device level by editing specific Raspberry Pi system configurations. - -**Open the boot configuration file for editing**: - -```bash -sudo nano /boot/config.txt -``` - -**Add the following line to disable Bluetooth**: - -```bash -dtoverlay=disable-bt -``` - -Ensure no Bluetooth device can wake up your Raspberry Pi by ensuring the line is not commented out. - -**Save the changes** (Ctrl+O in `nano`) and **exit** the editor (Ctrl+X in `nano`). - -This command ensures that the Raspberry Pi doesn’t enable Bluetooth at boot by making system-wide firmware adjustments. - -**Reboot the Raspberry Pi** - -To fully apply the changes (stopping the service, blacklisting modules, and adjusting system configuration), it’s recommended to reboot the system. - -**Reboot the Raspberry Pi**: - -```bash -sudo reboot -``` - -After rebooting, you can verify that Bluetooth has been disabled by checking the status of the service: - -```bash -sudo systemctl status bluetooth -``` - -It should indicate that the Bluetooth service is inactive or dead. - - -#### Assign Static IP Addresses - -##### MikroTik Router - -- Open the MikroTik Web UI and navigate to `IP > DHCP Server`. -- Locate the `Leases` tab and identify the MAC addresses of your Raspberry Pi units. -- Click on the entry for each Raspberry Pi and change it from "dynamic" to "static". - -## Set SSH Aliases - -Once you have assigned static IPs on your router, you can simplify the SSH process by setting up SSH aliases. Here's how to do it: - -1. **Open the SSH config file on your local machine:** - -```bash -vi ~/.ssh/config -``` - -2. **Add the following entries for each Raspberry Pi:** - -```bash -Host rp1 - HostName - User YOUR_USERNAME - -Host rp2 - HostName - User YOUR_USERNAME - -Host rp3 - HostName - User YOUR_USERNAME - -Host rp4 - HostName - User YOUR_USERNAME -``` - -Replace ``, ``, ``, and `` with the actual static IP addresses of your Raspberry Pis. - -3. **Save and Close the File** - -5. **Test Your Aliases** - -You should now be able to SSH into each Raspberry Pi using the alias: - -```bash -ssh rp1 -``` - -That's it! You've set up SSH aliases for your Raspberry Pi cluster. - -# Kubernetes - -## What is Kubernetes? 🎥 -- [Kubernetes Explained in 6 Minutes | k8s Architecture](https://www.youtube.com/watch?v=TlHvYWVUZyc&ab_channel=ByteByteGo) -- [Kubernetes Explained in 15 Minutes](https://www.youtube.com/watch?v=r2zuL9MW6wc) -- Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. - -### Kubernetes Components Explained - -#### Control Plane Components - -- **API Server**: - - Acts as the front-end for the Kubernetes control plane. - -- **etcd**: - - Consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data. - -- **Scheduler**: - - Responsible for scheduling pods onto nodes. 
- -- **Controller Manager**: - - Runs controllers, which are background threads that handle routine tasks in the cluster. - -#### Worker Node Components - -- **Worker Node**: - - Machines, VMs, or physical computers that run your applications. - -- **Pods**: - - The smallest deployable units of computing that can be created and managed in Kubernetes. - -- **kubelet**: - - An agent that runs on each worker node in the cluster and ensures that containers are running in a pod. - -- **kube-proxy**: - - Maintains network rules on nodes, allowing network communication to your Pods from network sessions inside or outside of your cluster. - - - -#### 2. Why Use Kubernetes? -- **Scaling**: Easily scale applications up or down as needed. -- **High Availability**: Ensure that your applications are fault-tolerant and highly available. -- **Portability**: Move workloads across different cloud providers or on-premises environments. -- **Declarative Configuration**: Describe what you want, and Kubernetes makes it happen. - -#### 3. Core Components and Concepts -- **Control Plane**: The set of components that manage the overall state of the cluster. -- **Nodes**: The worker machines that run containers. -- **Pods**: The smallest deployable units that can contain one or more containers. -- **Services**: A way to expose Pods to the network. -- **Ingress**: Manages external access to services within a cluster. -- **ConfigMaps and Secrets**: Manage configuration data and secrets separately from container images. - -#### 4. Architecture Overview -- **Bottom-Up View**: Understand Kubernetes from the infrastructure (Nodes) to Pods, to Services, and upwards. -- **Top-Down View**: Start from the user's perspective, breaking down what you want to deploy into services, pods, and the underlying infrastructure. - -#### 5. Read and Research -- Go through [Kubernetes' official documentation](https://kubernetes.io/docs/home/). -- Watch [beginner-friendly YouTube tutorials](https://www.youtube.com/watch?v=d6WC5n9G_sM&ab_channel=freeCodeCamp.org) or online courses. - -#### 6. Community and Ecosystem -- Get familiar with the wider Kubernetes ecosystem, including tooling, forums, and meetups. - ---- - -## K3S Setup - -### Enable Memory Cgroups ([Ansible Playbook](./ansible/playbooks/enable-memory-groups.yml)) - -```txt -Control Groups (Cgroups) are a Linux kernel feature that allows you to allocate resources such as CPU time, system memory, and more among user-defined groups of tasks (processes). K3s requires memory cgroups to be enabled to better manage and restrict the resources that each container can use. This is crucial in a multi-container environment where resource allocation needs to be as efficient as possible. - -Simple Analogy: Imagine you live in a house with multiple people (processes), and there are limited resources like time (CPU), space (memory), and tools (I/O). Without a system in place, one person might hog the vacuum cleaner all day (CPU time), while someone else fills the fridge with their stuff (memory). - -With a `"chore schedule"` (cgroups), you ensure everyone gets an allocated time with the vacuum cleaner, some space in the fridge, and so on. This schedule ensures that everyone can do their chores without stepping on each other's toes, much like how cgroups allocate system resources to multiple processes. -``` - -Before installing K3s, it's essential to enable memory cgroups on the Raspberry Pi for effective container resource management. 
- -Edit the `/boot/firmware/cmdline.txt` file on your Raspberry Pi. - -```bash -sudo vi /boot/firmware/cmdline.txt -``` - -Append the following to enable memory cgroups. - -```text -cgroup_memory=1 cgroup_enable=memory -``` - -Save the file and reboot your Raspberry Pi. - -```bash -sudo reboot -``` - -### Setup the Master Node - -Select one Raspberry Pi to act as the master node, and install K3S: - -```bash -curl -sfL https://get.k3s.io | sh - -``` - -**Copy and Set Permissions for Kubeconfig**: To avoid permission issues when using kubectl, copy the generated Kubeconfig to your home directory and update its ownership. - -```bash -# Create the .kube directory in the user's home directory if it doesn't already exist -mkdir -p ~/.kube - -# Copy the k3s.yaml file from its default location to the user's .kube directory as the default kubectl config file -sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config - -# Change the ownership of the copied config file to the current user and group, so kubectl can access it without requiring sudo -sudo chown $(id -u):$(id -g) ~/.kube/config -``` - -**Verify Cluster**: Ensure that `/etc/rancher/k3s/k3s.yaml` was created and the cluster is accessible. - -```bash -kubectl --kubeconfig ~/.kube/config get nodes -``` - -**Set KUBECONFIG Environment Variable**: To make it more convenient to run `kubectl` commands without having to specify the `--kubeconfig` flag every time, you can set an environment variable to automatically point to the kubeconfig file. - -```bash -export KUBECONFIG=~/.kube/config -``` - -To make this setting permanent across shell sessions, add it to your shell profile: - -```bash -echo "export KUBECONFIG=~/.kube/config" >> ~/.bashrc -source ~/.bashrc -``` - -By doing this, you streamline your workflow, allowing you to simply run `kubectl get nodes` instead of specifying the kubeconfig path each time. - ---- - -### Setup Worker Nodes - -**Join Tokens**: On the master node, retrieve the join token from `/var/lib/rancher/k3s/server/token`. - -```bash -vi /var/lib/rancher/k3s/server/token -``` - -**Worker Installation**: Use this token to join each worker node to the master. - -```bash -curl -sfL https://get.k3s.io | K3S_URL=https://:6443 K3S_TOKEN= sh - -``` - -**Node Verification**: Check that all worker nodes have joined the cluster. On your master node, run: - -```bash -kubectl get nodes -``` - ---- - -### Setup kubectl on your local machine - -#### Kubeconfig - -After setting up your cluster, it's more convenient to manage it remotely from your local machine. - -Here's how to do that: - -**Create the `.kube` directory on your local machine if it doesn't already exist.** - -```bash -mkdir -p ~/.kube -``` - -**Copy the kubeconfig from the master node to your local `.kube` directory.** - -```bash -scp @:~/.kube/config ~/.kube/config -``` -Replace `` with your username and `` with the IP address of your master node. - -**Note**: If you encounter a permissions issue while copying, ensure that the `~/.kube/config` on your master node is owned by your user and is accessible. You might have to adjust file permissions or ownership on the master node accordingly. - -**Update the kubeconfig server details (Optional)** - -Open your local `~/.kube/config` and make sure the `server` IP matches your master node's IP. If it's set to `127.0.0.1`, you'll need to update it. - -```yaml -server: https://:6443 -``` - -Replace `` with the IP address of your master node. 
- -After completing these steps, you should be able to run `kubectl` commands from your local machine to interact with your Kubernetes cluster. This avoids the need to SSH into the master node for cluster management tasks. - ---- - -# Gettting Started with Kubernetes - -## Namespace Setup - -1. **Create a new Kubernetes Namespace**: - -**Command:** -```bash -kubectl create namespace my-apps -``` - -**YAML Version**: `namespace.yaml` - -```yaml -# Define the API version and the kind of resource -apiVersion: v1 -kind: Namespace -metadata: - # The name of the Namespace - name: my-apps - ``` -**Apply with:** - -```bash -kubectl apply -f namespace.yaml -``` - -## Basic Deployment - -2. **Deploy a Simple App**: - -**Command:** - -```bash -kubectl create deployment hello-world --image=nginx --namespace=my-apps -``` - -**YAML Version**: `deployment.yaml` - -```yaml -# Define the API version and the kind of resource -apiVersion: apps/v1 -kind: Deployment -metadata: - # The name of the Deployment - name: hello-world - # Namespace to deploy into - namespace: my-apps -spec: - # Number of replica Pods to maintain - replicas: 1 - selector: - # Labels to match against when selecting Pods for this Deployment - matchLabels: - app: hello-world - template: - metadata: - # Labels to assign to the Pods spawned by this Deployment - labels: - app: hello-world - spec: - containers: - - name: nginx - image: nginx - ports: - # Container port that needs to be exposed - - containerPort: 80 - -``` -**Apply with:** - -```bash -kubectl apply -f deployment.yaml -``` - -## Service Exposure - -3. **Expose the Deployment**: - -**Command:** - -```bash -kubectl expose deployment hello-world --type=ClusterIP --port=80 --namespace=my-apps -``` - -**YAML Version**: `service.yaml` - -```yaml -# Define the API version and the kind of resource -apiVersion: v1 -kind: Service -metadata: - # Name of the Service - name: hello-world - # Namespace to create the service in - namespace: my-apps -spec: - # Select Pods with this label to expose via the Service - selector: - app: hello-world - ports: - - protocol: TCP - # Expose the Service on this port - port: 80 - # Map the Service port to the target Port on the Pod - targetPort: 80 - # The type of Service; ClusterIP makes it reachable only within the cluster - type: ClusterIP - -``` -**Apply with:** -```bash -kubectl apply -f service.yaml -``` - -## Verify Deployment - -4. **Verify Using Port-Forward**: - -```bash -# This is only needed if service type is ClusterIP -kubectl port-forward deployment/hello-world 8081:80 --namespace=my-apps -``` - -## Cleanup: Wiping Everything and Starting Over - -**Remove All Resources**: - -```bash -kubectl delete namespace my-apps -``` -**Or remove individual resources with:** - -```bash -kubectl delete -f .yaml -``` - -**Warning**: Deleting the namespace will remove all resources in that namespace. Ensure you're okay with that before running the command. - ---- - -## Exercises - -### Exercise 1: Create and Examine a Pod - -1. Create a simple Pod running Nginx. - -```bash -kubectl run nginx-pod --image=nginx --restart=Never -``` - -2. Examine the Pod. - -```bash -kubectl describe pod nginx-pod -``` - -3. Delete the Pod. - -```bash -kubectl delete pod nginx-pod -``` - -**Objective**: Familiarize yourself with the Pod lifecycle. - ---- - -### Exercise 2: Create a Deployment - -1. Create a Deployment for a simple Node.js app (You can use a Docker image like `node:20`). - -```bash -kubectl create deployment node-app --image=node:20 -``` - -2. 
Scale the Deployment. - -```bash -kubectl scale deployment node-app --replicas=3 -``` - -3. Rollback the Deployment. - -```bash -kubectl rollout undo deployment node-app -``` - -**Objective**: Learn how to manage application instances declaratively using Deployments. - ---- - -### Exercise 3: Expose the Deployment as a Service - -1. Expose the Deployment as a ClusterIP service. - -```bash -kubectl expose deployment node-app --type=ClusterIP --port=80 -``` - -2. Access the service within the cluster. - -```bash -kubectl get svc -``` - -Use `kubectl port-forward` to test the service. - -```bash -kubectl port-forward svc/node-app 8080:80 -``` - -**Objective**: Learn how Services allow you to abstract and access your Pods. - ---- - -### Exercise 4: Cleanup - -1. Remove the service and deployment. - -```bash -kubectl delete svc node-app -kubectl delete deployment node-app -``` - -**Objective**: Understand cleanup and resource management. \ No newline at end of file +- [K3S Setup](./docs/k3s-setup.md#k3s-setup) + - [Enable Memory CGroups](./docs/k3s-setup.md#enable-memory-cgroups-ansible-playbook) + - [Master Node](./docs/k3s-setup.md#setup-the-master-node) + - [Worker Nodes](./docs/k3s-setup.md#setup-worker-nodes) + - [Kubectl on local machine](./docs/k3s-setup.md#setup-kubectl-on-your-local-machine) + +- [Kubernetes Theory](./docs/kubernetes-theory.md#kubernetes) + - [What is Kubernetes](./docs/kubernetes-theory.md#1-what-is-kubernetes-) + - [Kubernetes Components Explained](./docs/kubernetes-theory.md#kubernetes-components-explained) + - [Control Plane Components](./docs/kubernetes-theory.md#control-plane-components) + - [Worker Node Components](./docs/kubernetes-theory.md#worker-node-components) + - [Why Use Kubernetes](./docs/kubernetes-theory.md#2-why-use-kubernetes) + - [Core Components and Concepts](./docs/kubernetes-theory.md#3-core-components-and-concepts) + - [Read and Research](./docs/kubernetes-theory.md#5-read-and-research) + - [Architecture Overview](./docs/kubernetes-theory.md#4-architecture-overview) + - [Community and Ecosystem](./docs/kubernetes-theory.md#6-community-and-ecosystem) + +- [Getting Started with Kubernetes](./docs/getting-started-with-kubernetes.md#gettting-started-with-kubernetes) + - [Namespace Setup](./docs/getting-started-with-kubernetes.md#namespace-setup) + - [Basic Deployment](./docs/getting-started-with-kubernetes.md#basic-deployment) + - [Service Exposure](./docs/getting-started-with-kubernetes.md#service-exposure) + - [Verify Deployment](./docs/getting-started-with-kubernetes.md#verify-deployment) + - [Cleanup](./docs/getting-started-with-kubernetes.md#cleanup-wiping-everything-and-starting-over) + - [Basic Kubernetes Deployments](./docs/getting-started-with-kubernetes.md#basic-kubernetes-deployments) \ No newline at end of file diff --git a/docs/getting-started-with-kubernetes.md b/docs/getting-started-with-kubernetes.md new file mode 100644 index 0000000..14df13b --- /dev/null +++ b/docs/getting-started-with-kubernetes.md @@ -0,0 +1,225 @@ +# Gettting Started with Kubernetes + +## Namespace Setup + +1. **Create a new Kubernetes Namespace**: + +**Command:** +```bash +kubectl create namespace my-apps +``` + +**YAML Version**: `namespace.yaml` + +```yaml +# Define the API version and the kind of resource +apiVersion: v1 +kind: Namespace +metadata: + # The name of the Namespace + name: my-apps + ``` +**Apply with:** + +```bash +kubectl apply -f namespace.yaml +``` + +## Basic Deployment + +2. 
**Deploy a Simple App**: + +**Command:** + +```bash +kubectl create deployment hello-world --image=nginx --namespace=my-apps +``` + +**YAML Version**: `deployment.yaml` + +```yaml +# Define the API version and the kind of resource +apiVersion: apps/v1 +kind: Deployment +metadata: + # The name of the Deployment + name: hello-world + # Namespace to deploy into + namespace: my-apps +spec: + # Number of replica Pods to maintain + replicas: 1 + selector: + # Labels to match against when selecting Pods for this Deployment + matchLabels: + app: hello-world + template: + metadata: + # Labels to assign to the Pods spawned by this Deployment + labels: + app: hello-world + spec: + containers: + - name: nginx + image: nginx + ports: + # Container port that needs to be exposed + - containerPort: 80 + +``` +**Apply with:** + +```bash +kubectl apply -f deployment.yaml +``` + +## Service Exposure + +3. **Expose the Deployment**: + +**Command:** + +```bash +kubectl expose deployment hello-world --type=ClusterIP --port=80 --namespace=my-apps +``` + +**YAML Version**: `service.yaml` + +```yaml +# Define the API version and the kind of resource +apiVersion: v1 +kind: Service +metadata: + # Name of the Service + name: hello-world + # Namespace to create the service in + namespace: my-apps +spec: + # Select Pods with this label to expose via the Service + selector: + app: hello-world + ports: + - protocol: TCP + # Expose the Service on this port + port: 80 + # Map the Service port to the target Port on the Pod + targetPort: 80 + # The type of Service; ClusterIP makes it reachable only within the cluster + type: ClusterIP + +``` +**Apply with:** +```bash +kubectl apply -f service.yaml +``` + +## Verify Deployment + +4. **Verify Using Port-Forward**: + +```bash +# This is only needed if service type is ClusterIP +kubectl port-forward deployment/hello-world 8081:80 --namespace=my-apps +``` + +## Cleanup: Wiping Everything and Starting Over + +**Remove All Resources**: + +```bash +kubectl delete namespace my-apps +``` +**Or remove individual resources with:** + +```bash +kubectl delete -f .yaml +``` + +**Warning**: Deleting the namespace will remove all resources in that namespace. Ensure you're okay with that before running the command. + +--- + +## Exercises + +### Exercise 1: Create and Examine a Pod + +1. Create a simple Pod running Nginx. + +```bash +kubectl run nginx-pod --image=nginx --restart=Never +``` + +2. Examine the Pod. + +```bash +kubectl describe pod nginx-pod +``` + +3. Delete the Pod. + +```bash +kubectl delete pod nginx-pod +``` + +**Objective**: Familiarize yourself with the Pod lifecycle. + +--- + +### Exercise 2: Create a Deployment + +1. Create a Deployment for a simple Node.js app (You can use a Docker image like `node:20`). + +```bash +kubectl create deployment node-app --image=node:20 +``` + +2. Scale the Deployment. + +```bash +kubectl scale deployment node-app --replicas=3 +``` + +3. Rollback the Deployment. + +```bash +kubectl rollout undo deployment node-app +``` + +**Objective**: Learn how to manage application instances declaratively using Deployments. + +--- + +### Exercise 3: Expose the Deployment as a Service + +1. Expose the Deployment as a ClusterIP service. + +```bash +kubectl expose deployment node-app --type=ClusterIP --port=80 +``` + +2. Access the service within the cluster. + +```bash +kubectl get svc +``` + +Use `kubectl port-forward` to test the service. 
+ +```bash +kubectl port-forward svc/node-app 8080:80 +``` + +**Objective**: Learn how Services allow you to abstract and access your Pods. + +--- + +### Exercise 4: Cleanup + +1. Remove the service and deployment. + +```bash +kubectl delete svc node-app +kubectl delete deployment node-app +``` + +**Objective**: Understand cleanup and resource management. \ No newline at end of file diff --git a/docs/hardware-components.md b/docs/hardware-components.md new file mode 100644 index 0000000..fd70caf --- /dev/null +++ b/docs/hardware-components.md @@ -0,0 +1,38 @@ +## Hardware +### Hardware Components + +The setup illustrated here is not mandatory but reflects my personal choices based on both experience and specific requirements. I aimed for a setup that is not only robust but also relatively mobile. Therefore, I opted for a 4U Rack where all the components are neatly encapsulated, making it easy to plug and play. I plan to expand this cluster by adding another four Raspberry Pis once the prices are more accommodating. + +- **[Mikrotik RB3011UiAS-RM](https://mikrotik.com/product/RB3011UiAS-RM)**: I chose Mikrotik's router as it offers a professional-grade, feature-rich solution at an affordable price. This router allows for a myriad of configurations and functionalities that you'd typically find in higher-end solutions like Cisco. Its features like robust firewall options, VPN support, and advanced routing capabilities made it a compelling choice. + +- **[4x Raspberry Pi 4 B 8GB](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/)**: I opted for the 8GB variant of the Raspberry Pi 4 B for its performance capabilities. The 8GB RAM provides ample room for running multiple containers and allows for future scalability. + +- **[4U Rack Cabinet](https://www.compumail.dk/en/p/lanberg-rack-gra-993865294)**: A 4U Rack to encapsulate all components cleanly. It provides the benefit of space efficiency and easy access for any hardware changes or additions. + +- **[Rack Power Supply](https://www.compumail.dk/en/p/lanberg-pdu-09f-0300-bk-stromstodsbeskytter-9-stik-16a-sort-3m-996106700)**: A centralized power supply solution for the entire rack. Ensures consistent and reliable power distribution to all the components. + +- **[GeeekPi 1U Rack Kit for Raspberry Pi 4B, 19" 1U Rack Mount](https://www.amazon.de/-/en/gp/product/B0972928CN/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1)**: This 19 inch rack mount kit is specially designed for recording Raspberry Pi 4B boards and supports up to 4 units. + +- **[SanDisk Extreme microSDHC 3 Rescue Pro Deluxe Memory Card, Red/Gold 64GB](https://www.amazon.de/-/en/gp/product/B07FCMBLV6/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1)**: Up to 160MB/s Read speed and 60 MB/s. 
Write speed for fast recording and transferring + +- **[Vanja SD/Micro SD Card Reader](https://www.amazon.de/-/en/gp/product/B00W02VHM6/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1)**: Micro USB OTG Adapter and USB 2.0 Memory Card Reader + +- **[deleyCON 5 x 0.25 m CAT8.1](https://www.amazon.de/-/en/gp/product/B08WPJVGHR/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1)**: deleyCON CAT 8.1 patch cable network cable as set // 2x RJ45 plug // S/FTP PIMF shielding + +- **[CSL CAT.8 Network Cable 40 Gigabit](https://www.amazon.de/-/en/gp/product/B08FCLHTH5/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1)**: CSL CAT.8 Network Cable 40 Gigabit + +- **[2x Verbatim Vi550 S3 SSD](https://www.amazon.de/dp/B07LGKQLT5?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1)** + +- **[2x JSAUX USB 3.0 to SATA Adapter](https://www.amazon.de/dp/B086W944YT?ref=ppx_yo2ov_dt_b_fed_asin_title)** + + + +### Why These Choices? + +**Mobility**: The 4U Rack allows me to move the entire setup easily, making it convenient for different scenarios, from a home office to a small business environment. + +**Professional-Grade Networking**: The Mikrotik router provides a rich feature set generally found in enterprise-grade hardware, offering me a sandbox to experiment with advanced networking configurations. + +**Scalability**: The Raspberry Pi units and the Rack setup are easily scalable. I can effortlessly add more Pis to the cluster, enhancing its capabilities. + +**Affordability**: This setup provides a balance between cost and performance, giving me a powerful Kubernetes cluster without breaking the bank. diff --git a/docs/k3s-setup.md b/docs/k3s-setup.md new file mode 100644 index 0000000..2377625 --- /dev/null +++ b/docs/k3s-setup.md @@ -0,0 +1,132 @@ +## K3S Setup + +### Enable Memory Cgroups ([Ansible Playbook](./ansible/playbooks/enable-memory-groups.yml)) + +```txt +Control Groups (Cgroups) are a Linux kernel feature that allows you to allocate resources such as CPU time, system memory, and more among user-defined groups of tasks (processes). K3s requires memory cgroups to be enabled to better manage and restrict the resources that each container can use. This is crucial in a multi-container environment where resource allocation needs to be as efficient as possible. + +Simple Analogy: Imagine you live in a house with multiple people (processes), and there are limited resources like time (CPU), space (memory), and tools (I/O). Without a system in place, one person might hog the vacuum cleaner all day (CPU time), while someone else fills the fridge with their stuff (memory). + +With a `"chore schedule"` (cgroups), you ensure everyone gets an allocated time with the vacuum cleaner, some space in the fridge, and so on. This schedule ensures that everyone can do their chores without stepping on each other's toes, much like how cgroups allocate system resources to multiple processes. +``` + +Before installing K3s, it's essential to enable memory cgroups on the Raspberry Pi for effective container resource management. + +Edit the `/boot/firmware/cmdline.txt` file on your Raspberry Pi. + +```bash +sudo vi /boot/firmware/cmdline.txt +``` + +Append the following to enable memory cgroups. + +```text +cgroup_memory=1 cgroup_enable=memory +``` + +Save the file and reboot your Raspberry Pi. 
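Before rebooting, it can help to sanity-check the edit, and after the reboot, to confirm the kernel actually picked up the memory controller. A minimal sketch (paths are the current Raspberry Pi OS defaults; older images keep the file at `/boot/cmdline.txt`, and whether you inspect `/proc/cgroups` or `cgroup.controllers` depends on whether the image runs cgroup v1 or v2):

```bash
# cmdline.txt is a single line; the appended flags should appear at its end
cat /boot/firmware/cmdline.txt

# After rebooting:
# cgroup v1 - the memory subsystem should be listed with "enabled" set to 1
grep memory /proc/cgroups

# cgroup v2 - "memory" should appear among the available controllers
cat /sys/fs/cgroup/cgroup.controllers
```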
+ +```bash +sudo reboot +``` + +### Setup the Master Node + +Select one Raspberry Pi to act as the master node, and install K3S: + +```bash +curl -sfL https://get.k3s.io | sh - +``` + +**Copy and Set Permissions for Kubeconfig**: To avoid permission issues when using kubectl, copy the generated Kubeconfig to your home directory and update its ownership. + +```bash +# Create the .kube directory in the user's home directory if it doesn't already exist +mkdir -p ~/.kube + +# Copy the k3s.yaml file from its default location to the user's .kube directory as the default kubectl config file +sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config + +# Change the ownership of the copied config file to the current user and group, so kubectl can access it without requiring sudo +sudo chown $(id -u):$(id -g) ~/.kube/config +``` + +**Verify Cluster**: Ensure that `/etc/rancher/k3s/k3s.yaml` was created and the cluster is accessible. + +```bash +kubectl --kubeconfig ~/.kube/config get nodes +``` + +**Set KUBECONFIG Environment Variable**: To make it more convenient to run `kubectl` commands without having to specify the `--kubeconfig` flag every time, you can set an environment variable to automatically point to the kubeconfig file. + +```bash +export KUBECONFIG=~/.kube/config +``` + +To make this setting permanent across shell sessions, add it to your shell profile: + +```bash +echo "export KUBECONFIG=~/.kube/config" >> ~/.bashrc +source ~/.bashrc +``` + +By doing this, you streamline your workflow, allowing you to simply run `kubectl get nodes` instead of specifying the kubeconfig path each time. + +--- + +### Setup Worker Nodes + +**Join Tokens**: On the master node, retrieve the join token from `/var/lib/rancher/k3s/server/token`. + +```bash +vi /var/lib/rancher/k3s/server/token +``` + +**Worker Installation**: Use this token to join each worker node to the master. + +```bash +curl -sfL https://get.k3s.io | K3S_URL=https://:6443 K3S_TOKEN= sh - +``` + +**Node Verification**: Check that all worker nodes have joined the cluster. On your master node, run: + +```bash +kubectl get nodes +``` + +--- + +### Setup kubectl on your local machine + +#### Kubeconfig + +After setting up your cluster, it's more convenient to manage it remotely from your local machine. + +Here's how to do that: + +**Create the `.kube` directory on your local machine if it doesn't already exist.** + +```bash +mkdir -p ~/.kube +``` + +**Copy the kubeconfig from the master node to your local `.kube` directory.** + +```bash +scp @:~/.kube/config ~/.kube/config +``` +Replace `` with your username and `` with the IP address of your master node. + +**Note**: If you encounter a permissions issue while copying, ensure that the `~/.kube/config` on your master node is owned by your user and is accessible. You might have to adjust file permissions or ownership on the master node accordingly. + +**Update the kubeconfig server details (Optional)** + +Open your local `~/.kube/config` and make sure the `server` IP matches your master node's IP. If it's set to `127.0.0.1`, you'll need to update it. + +```yaml +server: https://:6443 +``` + +Replace `` with the IP address of your master node. + +After completing these steps, you should be able to run `kubectl` commands from your local machine to interact with your Kubernetes cluster. This avoids the need to SSH into the master node for cluster management tasks. 
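If you prefer not to edit the file by hand, the same change can be made with `kubectl config`. A small sketch, assuming the master node's address is `192.168.1.101` (substitute your own IP) and that the cluster entry is named `default`, which is what a k3s-generated kubeconfig normally uses:

```bash
# Point the local kubeconfig at the master node instead of 127.0.0.1
kubectl config set-cluster default \
  --server=https://192.168.1.101:6443 \
  --kubeconfig ~/.kube/config

# Confirm the cluster answers from the local machine
kubectl --kubeconfig ~/.kube/config get nodes -o wide
```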
\ No newline at end of file diff --git a/docs/kubernetes-theory.md b/docs/kubernetes-theory.md new file mode 100644 index 0000000..4d2d054 --- /dev/null +++ b/docs/kubernetes-theory.md @@ -0,0 +1,63 @@ +# Kubernetes Theory + +## What is Kubernetes? 🎥 +- [Kubernetes Explained in 6 Minutes | k8s Architecture](https://www.youtube.com/watch?v=TlHvYWVUZyc&ab_channel=ByteByteGo) +- [Kubernetes Explained in 15 Minutes](https://www.youtube.com/watch?v=r2zuL9MW6wc) +- Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. + +### Kubernetes Components Explained + +#### Control Plane Components + +- **API Server**: + - Acts as the front-end for the Kubernetes control plane. + +- **etcd**: + - Consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data. + +- **Scheduler**: + - Responsible for scheduling pods onto nodes. + +- **Controller Manager**: + - Runs controllers, which are background threads that handle routine tasks in the cluster. + +#### Worker Node Components + +- **Worker Node**: + - Machines, VMs, or physical computers that run your applications. + +- **Pods**: + - The smallest deployable units of computing that can be created and managed in Kubernetes. + +- **kubelet**: + - An agent that runs on each worker node in the cluster and ensures that containers are running in a pod. + +- **kube-proxy**: + - Maintains network rules on nodes, allowing network communication to your Pods from network sessions inside or outside of your cluster. + + + +#### 2. Why Use Kubernetes? +- **Scaling**: Easily scale applications up or down as needed. +- **High Availability**: Ensure that your applications are fault-tolerant and highly available. +- **Portability**: Move workloads across different cloud providers or on-premises environments. +- **Declarative Configuration**: Describe what you want, and Kubernetes makes it happen. + +#### 3. Core Components and Concepts +- **Control Plane**: The set of components that manage the overall state of the cluster. +- **Nodes**: The worker machines that run containers. +- **Pods**: The smallest deployable units that can contain one or more containers. +- **Services**: A way to expose Pods to the network. +- **Ingress**: Manages external access to services within a cluster. +- **ConfigMaps and Secrets**: Manage configuration data and secrets separately from container images. + +#### 4. Architecture Overview +- **Bottom-Up View**: Understand Kubernetes from the infrastructure (Nodes) to Pods, to Services, and upwards. +- **Top-Down View**: Start from the user's perspective, breaking down what you want to deploy into services, pods, and the underlying infrastructure. + +#### 5. Read and Research +- Go through [Kubernetes' official documentation](https://kubernetes.io/docs/home/). +- Watch [beginner-friendly YouTube tutorials](https://www.youtube.com/watch?v=d6WC5n9G_sM&ab_channel=freeCodeCamp.org) or online courses. + +#### 6. Community and Ecosystem +- Get familiar with the wider Kubernetes ecosystem, including tooling, forums, and meetups. diff --git a/docs/raspberry-pi-setup.md b/docs/raspberry-pi-setup.md new file mode 100644 index 0000000..2b53cf6 --- /dev/null +++ b/docs/raspberry-pi-setup.md @@ -0,0 +1,291 @@ +# Raspberry Pi's Setup + +For most steps, an [Ansible playbook](./ansible/playbooks/) is available. However, I strongly recommend that you initially set up the first Raspberry Pi manually. 
This hands-on approach will help you understand each step more deeply and gain practical experience. Once you've completed the manual setup, you can then use the [Ansible playbook](./ansible/playbooks/) to automate the same tasks across the other devices. + +#### Flash SD Cards with Raspberry Pi OS Using Pi Imager +- Open [Raspberry Pi Imager](https://www.raspberrypi.com/software/). + - Choose the 'OS' you want to install from the list. The tool will download the selected OS image for you. + - Insert your SD card and select it in the 'Storage' section. + - Before writing, click on the cog icon for advanced settings. + - Set the hostname to your desired value, e.g., `RP1`. + - Enable SSH and select the "allow public-key authorization only" option. + - Click on 'Write' to begin the flashing process. + +#### Initial Boot and Setup +- Insert the flashed SD card into the Raspberry Pi and power it on. +- On the first boot, ssh into the Pi to perform initial configuration + +#### Update and Upgrade - ([Ansible Playbook](./ansible/playbooks/apt-update.yml)) +- Run the following commands to update the package list and upgrade the installed packages: + +```bash +sudo apt update +sudo apt upgrade +``` + +#### Disable Wi-Fi ([Ansible Playbook](./ansible/playbooks/disable-wifi.yml)) + +```sh +sudo vi /etc/wpa_supplicant/wpa_supplicant.conf +``` + +Add the following lines to the file: + +```sh +network={ + ssid="" + key_mgmt=NONE +} +``` + +Disable the Wi-Fi interface: + +```sh +sudo ifconfig wlan0 down +``` + +Block the Wi-Fi module using `rfkill`: + +```sh +sudo rfkill block wifi +``` + +Prevent the Wi-Fi module from loading at boot: + +```sh +sudo nano /etc/modprobe.d/raspi-blacklist.conf +``` + +Add the following line: + +```sh +blacklist brcmfmac +``` + +Reboot your Raspberry Pi: + +```sh +sudo reboot +``` + +#### Disable Swap ([Ansible Playbook](./ansible/playbooks/disable-swap.yml)) + +Disabling swap in a K3s cluster is crucial because Kubernetes relies on precise memory management to allocate resources, schedule workloads, and handle potential memory limits. When swap is enabled, it introduces unpredictability in how memory is used. The Linux kernel may move inactive memory to disk (swap), giving the impression that there is available memory when, in reality, the node might be under significant memory pressure. This can lead to performance degradation for applications, as accessing memory from the swap space (on disk) is significantly slower than accessing it from RAM. In addition, Kubernetes, by default, expects swap to be off and prevents the kubelet from running unless explicitly overridden, as swap complicates memory monitoring and scheduling. + +Beyond performance, swap interferes with Kubernetes' ability to react to out-of-memory (OOM) conditions. With swap enabled, a node might avoid crashing but at the cost of drastically reduced performance, disk I/O bottlenecks, and inconsistent resource allocation. In contrast, with swap disabled, Kubernetes can correctly identify memory shortages and kill misbehaving pods in a controlled way, allowing the system to recover predictably. For edge cases like K3s, which often operate on lightweight and resource-constrained systems (e.g., Raspberry Pis or IoT devices), disabling swap ensures efficient and stable operation without unnecessary disk wear and performance hits. + +- Open a terminal. 
+- Run the following command to turn off swap for the current session: + +```bash +sudo swapoff -a +``` + +This command disables the swap immediately, but it will be re-enabled after a reboot unless further steps are taken. + +##### Modify `/etc/dphys-swapfile` to Disable Swap Permanently + +Open the swap configuration file `/etc/dphys-swapfile` in a text editor: + +```bash +sudo nano /etc/dphys-swapfile +``` + +Search for the line starting with `CONF_SWAPSIZE=`. +Modify that line to read: + +```bash +CONF_SWAPSIZE=0 +``` + +Save (Ctrl+O in `nano`) and exit the editor (Ctrl+X in `nano`). + +##### Remove the Existing Swap File + +Run the following command to remove the current swap file (`/var/swap`): + +```bash +sudo rm /var/swap +``` + +##### Stop the `dphys-swapfile` service immediately + +Stop the `dphys-swapfile` service, which manages swap: +```bash +sudo systemctl stop dphys-swapfile +``` + +##### Disable the `dphys-swapfile` service to prevent it from running on boot + +Prevent the `dphys-swapfile` service from starting during system boot by disabling it: + +```bash +sudo systemctl disable dphys-swapfile +``` + +--- + +##### Verify swap is turned off + +Run the following command to verify that swap is no longer in use: + +```bash +free -m +``` + +In the output, ensure that the "Swap" line shows `0` for total, used, and free space: + +``` +total used free shared buffers cached +Mem: 2003 322 1681 18 12 129 +-/+ buffers/cache: 180 1822 +Swap: 0 0 0 +``` + +--- + +##### Reboot the system + +Finally, reboot the system in order to apply all changes fully and ensure swap remains permanently disabled: + +```bash +sudo reboot +``` + +After the system comes back online, run `free -m` again to confirm that swap is still disabled. + + +#### Disable Bluetooth + +When using Raspberry Pi devices in a Kubernetes-based environment like K3s, any unused hardware features, such as Bluetooth, can consume system resources or introduce potential security risks. Disabling Bluetooth on each Raspberry Pi optimizes performance by reducing background services and freeing up resources like CPU and memory. Additionally, by disabling an unused service, you reduce the attack surface of your Raspberry Pi-based K3s cluster, providing a more secure and streamlined operating environment. + + +##### Stop and disable the bluetooth service + +**Stop the Bluetooth service** that might be currently running on your Raspberry Pi: + +```bash +sudo systemctl stop bluetooth +``` + +**Disable the service** so it doesn't start automatically during system boot: + +```bash +sudo systemctl disable bluetooth +``` + +This ensures that the Bluetooth service is not running in the background, conserving system resources. + +##### Blacklist bluetooth kernel modules + +To prevent the operating system from loading Bluetooth modules at boot time, you'll need to blacklist specific modules. + +**Open the blacklist configuration file for editing (or create it)**: + +```bash +sudo nano /etc/modprobe.d/raspi-blacklist.conf +``` + +**Add the following lines to disable Bluetooth modules**: + +```bash +blacklist btbcm # Disables Broadcom Bluetooth module +blacklist hci_uart # Disables hci_uart module specific to Raspberry Pi Bluetooth +``` + +**Save the file** (Ctrl+O in `nano`) and **exit** the editor (Ctrl+X in `nano`). + +By blacklisting these modules, they won’t be loaded during boot, effectively preventing Bluetooth from running. 
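After the next reboot you can confirm the blacklist took effect. A quick check, assuming the module names above (they may differ slightly between Raspberry Pi OS releases):

```bash
# Should print nothing if the Bluetooth modules were not loaded
lsmod | grep -E 'btbcm|hci_uart'

# rfkill should list no bluetooth adapter, or show it as blocked
rfkill list bluetooth
```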
+ +##### Disable bluetooth in the system configuration + +Bluetooth can be disabled directly at the device level by editing specific Raspberry Pi system configurations. + +**Open the boot configuration file for editing**: + +```bash +sudo nano /boot/config.txt +``` + +**Add the following line to disable Bluetooth**: + +```bash +dtoverlay=disable-bt +``` + +Ensure no Bluetooth device can wake up your Raspberry Pi by ensuring the line is not commented out. + +**Save the changes** (Ctrl+O in `nano`) and **exit** the editor (Ctrl+X in `nano`). + +This command ensures that the Raspberry Pi doesn’t enable Bluetooth at boot by making system-wide firmware adjustments. + +**Reboot the Raspberry Pi** + +To fully apply the changes (stopping the service, blacklisting modules, and adjusting system configuration), it’s recommended to reboot the system. + +**Reboot the Raspberry Pi**: + +```bash +sudo reboot +``` + +After rebooting, you can verify that Bluetooth has been disabled by checking the status of the service: + +```bash +sudo systemctl status bluetooth +``` + +It should indicate that the Bluetooth service is inactive or dead. + + +#### Assign Static IP Addresses + +##### MikroTik Router + +- Open the MikroTik Web UI and navigate to `IP > DHCP Server`. +- Locate the `Leases` tab and identify the MAC addresses of your Raspberry Pi units. +- Click on the entry for each Raspberry Pi and change it from "dynamic" to "static". + +## Set SSH Aliases + +Once you have assigned static IPs on your router, you can simplify the SSH process by setting up SSH aliases. Here's how to do it: + +1. **Open the SSH config file on your local machine:** + +```bash +vi ~/.ssh/config +``` + +2. **Add the following entries for each Raspberry Pi:** + +```bash +Host rp1 + HostName + User YOUR_USERNAME + +Host rp2 + HostName + User YOUR_USERNAME + +Host rp3 + HostName + User YOUR_USERNAME + +Host rp4 + HostName + User YOUR_USERNAME +``` + +Replace ``, ``, ``, and `` with the actual static IP addresses of your Raspberry Pis. + +3. **Save and Close the File** + +5. **Test Your Aliases** + +You should now be able to SSH into each Raspberry Pi using the alias: + +```bash +ssh rp1 +``` + +That's it! You've set up SSH aliases for your Raspberry Pi cluster. \ No newline at end of file
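The aliases also carry over to any tool that reads `~/.ssh/config`, which keeps later steps short. A small sketch (the host names must match the aliases you defined above; the file path is only an example):

```bash
# Copy a file to the first Pi without typing the IP or username
scp ./playbook-vars.yml rp1:/tmp/

# Run a quick health check across all four nodes
for host in rp1 rp2 rp3 rp4; do
  echo "--- $host ---"
  ssh "$host" 'hostname && uptime'
done
```

Because Ansible's default SSH connection type honours `~/.ssh/config`, the same aliases can also be used as inventory hostnames.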