

Remove network infra VM
Since all this VM runs now is the image cache, it can be removed;
instead we'll use the podman container cache running on the
hypervisor.
hardys authored and Kristian-ZH committed Dec 13, 2023
1 parent b8c6ab8 commit 4e0fb66
Showing 29 changed files with 74 additions and 449 deletions.
17 changes: 17 additions & 0 deletions 03_launch_mgmt_cluster.sh
@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -eux

PROJECT_DIR=$(dirname -- "$(readlink -e -- "${BASH_SOURCE[0]}")")
EXTRA_VARS_FILE=${EXTRA_VARS_FILE:-$PROJECT_DIR/extra_vars.yml}

if [[ "$(id -u)" -eq 0 ]]; then
  echo "Please run as a non-root user"
  exit 1
fi

# Run ansible configure host playbook
export ANSIBLE_ROLES_PATH=$PROJECT_DIR/roles
ANSIBLE_FORCE_COLOR=true ansible-playbook \
  -i "${PROJECT_DIR}/inventories/localhost_inventory.yml" \
  -e "@${EXTRA_VARS_FILE}" \
  "$PROJECT_DIR/playbooks/setup_metal3_core.yml" "$@"
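The `PROJECT_DIR` line in the script above resolves the script's real directory even when it is invoked through a symlink. A standalone sketch of the same pattern (the function name and temp paths are mine, not from the repository):

```shell
#!/usr/bin/env bash
set -eu

# Resolve the directory containing a script path, following symlinks,
# mirroring the PROJECT_DIR line in 03_launch_mgmt_cluster.sh.
# Note: readlink -e is GNU coreutils and fails if the path does not exist.
resolve_dir() {
  dirname -- "$(readlink -e -- "$1")"
}

# Example: a symlink resolves to the real file's directory.
tmp=$(mktemp -d)
mkdir "$tmp/real" "$tmp/alias"
touch "$tmp/real/script.sh"
ln -s "$tmp/real/script.sh" "$tmp/alias/script.sh"
resolve_dir "$tmp/alias/script.sh"   # prints the ".../real" directory, not ".../alias"
rm -rf "$tmp"
```

Using `readlink -e` rather than plain `dirname "$0"` means the roles and playbooks are found relative to the checkout even if the script is symlinked onto `$PATH`.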
16 changes: 1 addition & 15 deletions README.md
@@ -1,7 +1,6 @@
# Table Of Contents

- [Overview](#overview)
- [Networking](#networking)
- [Prerequisites](#prerequisites)
- [How To Setup Metal3 Demo Environment](#how_to_setup_metal3_demo)

@@ -14,24 +13,12 @@ of [Kubernetes Cluster API][CAPI], for Kubernetes workload cluster
life cycle management. The demo environment consists of two VMs,
Metal3 Network Infra and Metal3 Core.
![Metal3 Demo Overview](images/Metal3-Demo-Overview.png)
As depicted by the diagram above, the Metal3 Network Infra VM is designed
to emulate the infrastructure pieces, namely DNS, DHCP, and media server,
which are required by Metal3 and typically expected to be deployed outside
of the management cluster in a production environment. Metal3 Core VM has
all the pieces found in a typical production Metal3 management cluster,
namely the CAPI (Cluster API) controller, RKE2 bootstrap provider
(CABPR), RKE2 control plane provider (CACPPR), Metal3 infrastructure
provider (CAPM3), Baremetal Operator, and OpenStack Ironic.

## Networking <a name="networking" />

For security purposes, network segmentation is expected in a production
environment, which usually consists of an internal provisioning network
for bare metal provisioning and a public network that is routable to
the internet. The demo environment uses only one network, and the host
where the VMs are running is expected to have a network bridge for the
public network (i.e. tagged VLAN).

# Prerequisites <a name="prerequisites" />

* Host is expected to have one network bridge for the public network (i.e. tagged VLAN).
@@ -46,5 +33,4 @@ public network (i.e. tagged VLAN).
- For the automation of this deployment, [click here](./scripts/README.md).

[CAPI]: https://cluster-api.sigs.k8s.io/introduction.html
[cloud_init_network_config]: https://cloudinit.readthedocs.io/en/latest/reference/network-config.html
[metal3]: https://github.com/metal3-io
42 changes: 0 additions & 42 deletions common_setup.sh

This file was deleted.

6 changes: 3 additions & 3 deletions docs/example-manifests/dhcp/rke2-agent.yaml
@@ -58,10 +58,10 @@ spec:
matchLabels:
cluster-role: worker
image:
checksum: http://media.suse.baremetal/openSUSE-Leap-15.5.x86_64-NoCloud.qcow2.md5
checksumType: md5
checksum: http://imagecache.local:8080/openSUSE-Leap-15.5.x86_64-NoCloud.metal3.qcow2.sha256
checksumType: sha256
format: qcow2
url: http://media.suse.baremetal/openSUSE-Leap-15.5.x86_64-NoCloud.qcow2
url: http://imagecache.local:8080/openSUSE-Leap-15.5.x86_64-NoCloud.metal3.qcow2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3DataTemplate
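Both manifests now fetch a `.sha256` file next to the image and set `checksumType: sha256`. Assuming the image cache simply serves static files, the checksum file can be produced with a one-liner like this (the helper name and cache path are mine, not from the repository):

```shell
#!/usr/bin/env bash
set -eu

# Write <image>.sha256 next to the image, containing only the hex digest —
# a plain-digest file that the manifest's image.checksum URL can point at.
make_sha256() {
  sha256sum "$1" | awk '{print $1}' > "$1.sha256"
}

# e.g. in the hypervisor's cache directory (path hypothetical):
# make_sha256 /var/lib/imagecache/openSUSE-Leap-15.5.x86_64-NoCloud.metal3.qcow2
```

The digest file must be regenerated whenever the image is rebuilt, or provisioning will fail checksum validation.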
6 changes: 3 additions & 3 deletions docs/example-manifests/dhcp/rke2-control-plane.yaml
@@ -63,10 +63,10 @@ spec:
matchLabels:
cluster-role: control-plane
image:
checksum: http://media.suse.baremetal/openSUSE-Leap-15.5.x86_64-NoCloud.qcow2.md5
checksumType: md5
checksum: http://imagecache.local:8080/openSUSE-Leap-15.5.x86_64-NoCloud.metal3.qcow2.sha256
checksumType: sha256
format: qcow2
url: http://media.suse.baremetal/openSUSE-Leap-15.5.x86_64-NoCloud.qcow2
url: http://imagecache.local:8080/openSUSE-Leap-15.5.x86_64-NoCloud.metal3.qcow2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3DataTemplate
22 changes: 10 additions & 12 deletions docs/setup/metal3-setup.md
@@ -46,26 +46,24 @@ If desired the defaults from `extra_vars.yml` can be customized, copy the file a
```
- If you do not plan to use the virsh networks, you will need to set up your own network bridges.

3. Create the Network Infra VM
3. Configure the host

- In the main directory of the repository, execute the script to create the network-infra VM
- In the main directory of the repository, execute the script to configure the host:

```shell
./setup_metal3_network_infra.sh
./02_configure_host.sh
```

- You may pass `-vvv` at the end of the script to see more verbose output
- The network-infra script must have completed without any errors before creating the core VM in step 8

4. Create the core VM
4. Create management cluster

```shell
./setup_metal3_core.sh
./03_launch_mgmt_cluster.sh
```

- You may pass `-vvv` at the end of the script to see the output

5. Assuming you are using the default configuration you can ssh into each of the VMs using the IPs below:
5. Assuming you are using the default configuration you can ssh into the management cluster VM as follows:

- Core VM Running Metal3: `ssh [email protected]` or `virsh console metal3-core`
- Network Infra VM Running with public internet access: `ssh [email protected]` or `virsh console metal3-network-infra`

## Development Notes

- You may pass `-vvv` at the end of the scripts for more verbose output, or pass arbitrary additional arguments to ansible-playbook
28 changes: 6 additions & 22 deletions docs/setup/rke2-cluster.md
@@ -37,28 +37,12 @@ EOF
virsh net-update egress add-last ip-dhcp-host host.xml --live
```

4. Create an XML file containing the following

```shell
cat << EOF > ~/vbmc/dns.xml
<host ip='192.168.125.100'>
<hostname>media.suse.baremetal</hostname>
</host>
EOF
```

5. Live update the egress network once again

```shell
virsh net-update egress add-last dns-host dns.xml --live
```

6. SSH into the metal3-core VM
4. SSH into the metal3-core VM
```shell
ssh [email protected]
```

7. Download the example manifests
5. Download the example manifests

```shell
curl https://raw.githubusercontent.com/suse-edge/metal3-demo/main/docs/example-manifests/dhcp/rke2-control-plane.yaml > rke2-control-plane.yaml
@@ -69,13 +53,13 @@ curl https://raw.githubusercontent.com/suse-edge/metal3-demo/main/docs/example-m
If you have made your own changes or have differences in your setup, you may need to update the manifests.
- This configuration assumes DHCP-only network setup.

8. Deploy the control plane
6. Deploy the control plane

```shell
kubectl apply -f rke2-control-plane.yaml
```

9. Verify that the control plane is properly provisioned
7. Verify that the control plane is properly provisioned

```shell
$ clusterctl describe cluster sample-cluster
@@ -86,13 +70,13 @@ $ clusterctl describe cluster sample-cluster
│ └─Machine/sample-cluster-chflc True 23m
```

10. Deploy the agent
8. Deploy the agent

```shell
kubectl apply -f rke2-agent.yaml
```

11. Verify that the agent is properly provisioned and has successfully joined the cluster
9. Verify that the agent is properly provisioned and has successfully joined the cluster

```shell
$ clusterctl describe cluster sample-cluster
4 changes: 2 additions & 2 deletions docs/setup/vbmh-setup.md
@@ -85,7 +85,7 @@ EOF
Add this line to /etc/hosts

```text
192.168.125.100 media.suse.baremetal
192.168.125.1 imagecache.local
```

- This is necessary for DNS resolution for Metal3 in the metal3-demo environment.
@@ -229,4 +229,4 @@ kubectl apply -f node2.yaml
- Using `baremetal node list` may show `manageable` immediately after creating the nodes,
but this is only temporary; we want to wait for it to say `manageable` after it has been inspected.
- If there is an issue during the provisioning process, take the UUID of the baremetal node
and run `baremetal node show UUID` for a detailed output on what might have gone wrong.
13 changes: 0 additions & 13 deletions extra_vars.yml
@@ -23,7 +23,6 @@ rke2_channel_version: v1.24

metal3_vm_libvirt_network_params: '--network bridge=m3-egress,model=virtio'

metal3_network_infra_public_ip: 192.168.125.100
vm_egress_gw: 192.168.125.1


@@ -32,18 +31,6 @@ enable_dhcp: true
dhcp_router: 192.168.124.1
dhcp_range: 192.168.124.150,192.168.124.180
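The `dhcp_range` must fall inside the subnet that `dhcp_router` serves. A quick sanity-check sketch using the values above (the helper function is mine, and a /24 mask is assumed):

```shell
#!/usr/bin/env bash
set -eu

# Convert a dotted-quad IPv4 address to an integer for comparisons.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Check that the extra_vars.yml DHCP range lies in the router's /24.
router=$(ip_to_int 192.168.124.1)
lo=$(ip_to_int 192.168.124.150)
hi=$(ip_to_int 192.168.124.180)
net=$(( router & 0xFFFFFF00 ))
if (( net <= lo && lo <= hi && hi <= net + 255 )); then
  echo "dhcp_range OK"
fi
```

A range outside the router's subnet would hand out leases the provisioned nodes cannot route, so this check is worth running after editing the defaults.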

metal3_network_infra_vm_network:
version: 2
ethernets:
eth0:
dhcp4: false
addresses: ["{{ metal3_network_infra_public_ip }}/24"]
nameservers:
addresses: "{{ vm_egress_gw }}"
routes:
- to: default
via: "{{ vm_egress_gw }}"

#
# Public IPs
#
82 changes: 0 additions & 82 deletions playbooks/setup_metal3_network_infra.yml

This file was deleted.

29 changes: 0 additions & 29 deletions roles/diskimage_builder/.travis.yml

This file was deleted.

10 changes: 0 additions & 10 deletions roles/diskimage_builder/defaults/main.yml

This file was deleted.

