Followed "Installing to a local machine" steps to deploy Charmed kubernetes on local LXD, can't delete pods #871

Open
106106 opened this issue Dec 20, 2024 · 7 comments

106106 commented Dec 20, 2024

Everything seems to deploy fine:

Model  Controller           Cloud/Region         Version  SLA          Timestamp
ck8s   localhost-localhost  localhost/localhost  3.6.1    unsupported  12:14:13-08:00

App                       Version  Status  Scale  Charm                     Channel      Rev  Exposed  Message
calico                    v3.27.3  active      5  calico                    1.30/stable  108  no       Ready
containerd                1.7.12   active      5  containerd                1.30/stable   78  no       Container runtime available
easyrsa                   3.0.1    active      1  easyrsa                   1.30/stable   59  no       Certificate Authority connected.
etcd                      3.4.22   active      3  etcd                      1.30/stable  766  no       Healthy with 3 known peers
kubeapi-load-balancer     1.18.0   active      1  kubeapi-load-balancer     1.30/stable  145  yes      Ready
kubernetes-control-plane  1.30.8   active      2  kubernetes-control-plane  1.30/stable  503  no       Ready
kubernetes-worker         1.30.8   active      3  kubernetes-worker         1.30/stable  237  yes      Ready

Unit                         Workload  Agent  Machine  Public address  Ports         Message
easyrsa/0*                   active    idle   0        10.201.235.146                Certificate Authority connected.
etcd/0*                      active    idle   1        10.201.235.243  2379/tcp      Healthy with 3 known peers
etcd/1                       active    idle   2        10.201.235.70   2379/tcp      Healthy with 3 known peers
etcd/2                       active    idle   3        10.201.235.132  2379/tcp      Healthy with 3 known peers
kubeapi-load-balancer/0*     active    idle   4        10.201.235.116  443,6443/tcp  Ready
kubernetes-control-plane/0   active    idle   5        10.201.235.31   6443/tcp      Ready
  calico/4                   active    idle            10.201.235.31                 Ready
  containerd/4               active    idle            10.201.235.31                 Container runtime available
kubernetes-control-plane/1*  active    idle   6        10.201.235.67   6443/tcp      Ready
  calico/3                   active    idle            10.201.235.67                 Ready
  containerd/3               active    idle            10.201.235.67                 Container runtime available
kubernetes-worker/0          active    idle   7        10.201.235.127  80,443/tcp    Ready
  calico/1                   active    idle            10.201.235.127                Ready
  containerd/1               active    idle            10.201.235.127                Container runtime available
kubernetes-worker/1          active    idle   8        10.201.235.20   80,443/tcp    Ready
  calico/2                   active    idle            10.201.235.20                 Ready
  containerd/2               active    idle            10.201.235.20                 Container runtime available
kubernetes-worker/2*         active    idle   9        10.201.235.61   80,443/tcp    Ready
  calico/0*                  active    idle            10.201.235.61                 Ready
  containerd/0*              active    idle            10.201.235.61                 Container runtime available

Machine  State    Address         Inst id        Base          AZ  Message
0        started  10.201.235.146  juju-e5a38c-0  [email protected]      Running
1        started  10.201.235.243  juju-e5a38c-1  [email protected]      Running
2        started  10.201.235.70   juju-e5a38c-2  [email protected]      Running
3        started  10.201.235.132  juju-e5a38c-3  [email protected]      Running
4        started  10.201.235.116  juju-e5a38c-4  [email protected]      Running
5        started  10.201.235.31   juju-e5a38c-5  [email protected]      Running
6        started  10.201.235.67   juju-e5a38c-6  [email protected]      Running
7        started  10.201.235.127  juju-e5a38c-7  [email protected]      Running
8        started  10.201.235.20   juju-e5a38c-8  [email protected]      Running
9        started  10.201.235.61   juju-e5a38c-9  [email protected]      Running

Creating a pod works as well, but deleting pods (or deployments) hangs.

ubuntu@charmed-kubernetes:~$ kubectl run test --image=nginx
pod/test created
ubuntu@charmed-kubernetes:~$ kubectl get pods
NAME    READY   STATUS        RESTARTS   AGE
nginx   1/1     Terminating   0          5m1s
test    1/1     Running       0          10s
ubuntu@charmed-kubernetes:~$ kubectl delete pod test -v=8
I1220 12:15:35.098892  397787 loader.go:395] Config loaded from file:  /home/ubuntu/.kube/config
I1220 12:15:35.103487  397787 request.go:1212] Request Body: {"propagationPolicy":"Background"}
I1220 12:15:35.103566  397787 round_trippers.go:463] DELETE https://10.201.235.116:443/api/v1/namespaces/default/pods/test
I1220 12:15:35.103571  397787 round_trippers.go:469] Request Headers:
I1220 12:15:35.103580  397787 round_trippers.go:473]     Accept: application/json
I1220 12:15:35.103585  397787 round_trippers.go:473]     User-Agent: kubectl/v1.30.8 (linux/amd64) kubernetes/354eac7
I1220 12:15:35.103588  397787 round_trippers.go:473]     Content-Type: application/json
I1220 12:15:35.103592  397787 round_trippers.go:473]     Authorization: Bearer <masked>
I1220 12:15:35.128075  397787 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
I1220 12:15:35.128124  397787 round_trippers.go:577] Response Headers:
I1220 12:15:35.128131  397787 round_trippers.go:580]     Date: Fri, 20 Dec 2024 20:15:35 GMT
I1220 12:15:35.128136  397787 round_trippers.go:580]     Content-Type: application/json
I1220 12:15:35.128139  397787 round_trippers.go:580]     Audit-Id: 5670d856-f2bb-45a1-bade-6cb48f7612d4
I1220 12:15:35.128143  397787 round_trippers.go:580]     Cache-Control: no-cache, private
I1220 12:15:35.128146  397787 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 807eea7b-00c7-43c6-9078-a59a6af0bc2b
I1220 12:15:35.128150  397787 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55e926fb-8485-4a19-9f46-5adf06bc57bf
I1220 12:15:35.128154  397787 round_trippers.go:580]     Server: nginx/1.18.0 (Ubuntu)
I1220 12:15:35.128668  397787 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"test","namespace":"default","uid":"6cab65a3-66d6-4aed-91bd-69d654a10297","resourceVersion":"124169","creationTimestamp":"2024-12-20T20:14:55Z","deletionTimestamp":"2024-12-20T20:16:05Z","deletionGracePeriodSeconds":30,"labels":{"run":"test"},"managedFields":[{"manager":"kubectl-run","operation":"Update","apiVersion":"v1","time":"2024-12-20T20:14:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:run":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-12-20T20:14:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:la [truncated 2963 chars]
pod "test" deleted
I1220 12:15:35.128953  397787 round_trippers.go:463] GET https://10.201.235.116:443/api/v1/namespaces/default/pods/test
I1220 12:15:35.128962  397787 round_trippers.go:469] Request Headers:
I1220 12:15:35.128968  397787 round_trippers.go:473]     User-Agent: kubectl/v1.30.8 (linux/amd64) kubernetes/354eac7
I1220 12:15:35.128972  397787 round_trippers.go:473]     Accept: application/json
I1220 12:15:35.128976  397787 round_trippers.go:473]     Authorization: Bearer <masked>
I1220 12:15:35.136079  397787 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I1220 12:15:35.136122  397787 round_trippers.go:577] Response Headers:
I1220 12:15:35.136129  397787 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55e926fb-8485-4a19-9f46-5adf06bc57bf
I1220 12:15:35.136134  397787 round_trippers.go:580]     Server: nginx/1.18.0 (Ubuntu)
I1220 12:15:35.136137  397787 round_trippers.go:580]     Date: Fri, 20 Dec 2024 20:15:35 GMT
I1220 12:15:35.136141  397787 round_trippers.go:580]     Content-Type: application/json
I1220 12:15:35.136145  397787 round_trippers.go:580]     Audit-Id: e9e8bb92-0565-48d3-88d1-7ca8d853388d
I1220 12:15:35.136148  397787 round_trippers.go:580]     Cache-Control: no-cache, private
I1220 12:15:35.136151  397787 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 807eea7b-00c7-43c6-9078-a59a6af0bc2b
I1220 12:15:35.136416  397787 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"test","namespace":"default","uid":"6cab65a3-66d6-4aed-91bd-69d654a10297","resourceVersion":"124169","creationTimestamp":"2024-12-20T20:14:55Z","deletionTimestamp":"2024-12-20T20:16:05Z","deletionGracePeriodSeconds":30,"labels":{"run":"test"},"managedFields":[{"manager":"kubectl-run","operation":"Update","apiVersion":"v1","time":"2024-12-20T20:14:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:run":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-12-20T20:14:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:la [truncated 2963 chars]
I1220 12:15:35.137156  397787 reflector.go:296] Starting reflector *unstructured.Unstructured (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I1220 12:15:35.137191  397787 reflector.go:332] Listing and watching *unstructured.Unstructured from k8s.io/client-go/tools/watch/informerwatcher.go:146
I1220 12:15:35.137340  397787 round_trippers.go:463] GET https://10.201.235.116:443/api/v1/namespaces/default/pods?fieldSelector=metadata.name%3Dtest&limit=500&resourceVersion=0
I1220 12:15:35.137362  397787 round_trippers.go:469] Request Headers:
I1220 12:15:35.137369  397787 round_trippers.go:473]     Accept: application/json
I1220 12:15:35.137372  397787 round_trippers.go:473]     User-Agent: kubectl/v1.30.8 (linux/amd64) kubernetes/354eac7
I1220 12:15:35.137378  397787 round_trippers.go:473]     Authorization: Bearer <masked>
I1220 12:15:35.144877  397787 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I1220 12:15:35.144946  397787 round_trippers.go:577] Response Headers:
I1220 12:15:35.144956  397787 round_trippers.go:580]     Server: nginx/1.18.0 (Ubuntu)
I1220 12:15:35.144963  397787 round_trippers.go:580]     Date: Fri, 20 Dec 2024 20:15:35 GMT
I1220 12:15:35.144968  397787 round_trippers.go:580]     Content-Type: application/json
I1220 12:15:35.144973  397787 round_trippers.go:580]     Audit-Id: 5d20710f-3678-4dda-b318-904bf43ff0d6
I1220 12:15:35.144978  397787 round_trippers.go:580]     Cache-Control: no-cache, private
I1220 12:15:35.144984  397787 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 807eea7b-00c7-43c6-9078-a59a6af0bc2b
I1220 12:15:35.144988  397787 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55e926fb-8485-4a19-9f46-5adf06bc57bf
I1220 12:15:35.145955  397787 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"124169"},"items":[{"metadata":{"name":"test","namespace":"default","uid":"6cab65a3-66d6-4aed-91bd-69d654a10297","resourceVersion":"124169","creationTimestamp":"2024-12-20T20:14:55Z","deletionTimestamp":"2024-12-20T20:16:05Z","deletionGracePeriodSeconds":30,"labels":{"run":"test"},"managedFields":[{"manager":"kubectl-run","operation":"Update","apiVersion":"v1","time":"2024-12-20T20:14:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:run":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-12-20T20:14:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditi [truncated 3019 chars]
I1220 12:15:35.146497  397787 reflector.go:359] Caches populated for *unstructured.Unstructured from k8s.io/client-go/tools/watch/informerwatcher.go:146
I1220 12:15:35.146689  397787 round_trippers.go:463] GET https://10.201.235.116:443/api/v1/namespaces/default/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest&resourceVersion=124169&timeoutSeconds=396&watch=true
I1220 12:15:35.146785  397787 round_trippers.go:469] Request Headers:
I1220 12:15:35.146913  397787 round_trippers.go:473]     Accept: application/json
I1220 12:15:35.146995  397787 round_trippers.go:473]     User-Agent: kubectl/v1.30.8 (linux/amd64) kubernetes/354eac7
I1220 12:15:35.147101  397787 round_trippers.go:473]     Authorization: Bearer <masked>
I1220 12:16:05.146379  397787 round_trippers.go:574] Response Status: 200 OK in 29999 milliseconds
I1220 12:16:05.146420  397787 round_trippers.go:577] Response Headers:
I1220 12:16:05.146432  397787 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 807eea7b-00c7-43c6-9078-a59a6af0bc2b
I1220 12:16:05.146442  397787 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55e926fb-8485-4a19-9f46-5adf06bc57bf
I1220 12:16:05.146451  397787 round_trippers.go:580]     Server: nginx/1.18.0 (Ubuntu)
I1220 12:16:05.146460  397787 round_trippers.go:580]     Date: Fri, 20 Dec 2024 20:15:35 GMT
I1220 12:16:05.146467  397787 round_trippers.go:580]     Content-Type: application/json
I1220 12:16:05.146480  397787 round_trippers.go:580]     Audit-Id: abafb333-40ef-4d17-95dd-f8bc398cdff0
I1220 12:16:05.146492  397787 round_trippers.go:580]     Cache-Control: no-cache, private

106106 (Author) commented Dec 20, 2024

I suspect some sort of permissions issue in the Kubernetes interaction with LXD, but I'm not sure where to look. Any ideas?
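
For a pod stuck in Terminating, the usual suspects are the kubelet and the container runtime on the node that owns it. A rough sketch of checks that might narrow it down (the unit index and service names are assumptions; on Charmed Kubernetes the kubelet ships as a snap, so the exact unit name may differ):

# find which worker the stuck pod is scheduled on
kubectl get pod test -o wide

# tail kubelet and containerd logs on that worker (swap in the right unit index)
juju ssh kubernetes-worker/0 -- sudo journalctl -u snap.kubelet.daemon -n 200 --no-pager
juju ssh kubernetes-worker/0 -- sudo journalctl -u containerd -n 200 --no-pager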

addyess (Member) commented Dec 20, 2024

I presume your deployment went somewhat like this:
https://ubuntu.com/kubernetes/docs/install-local

In recent months I've heard advice from the LXD community encouraging the use of VMs rather than containers for hosting Kubernetes environments, because Kubernetes requires a privileged container to allow for "container on container". Privileged containers expose a security risk to the host OS, and that support is therefore being scaled back.

Did you deploy with virtual machines on LXD, or with containers? You might have a nicer go of things with VMs.
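
A quick way to confirm what LXD actually launched, and what the model's profile grants, is something like this sketch (the profile name juju-ck8s is an assumption, based on Juju naming its LXD profile after the model):

# TYPE column shows CONTAINER vs VIRTUAL-MACHINE
lxc list -c nst4

# inspect the profile Juju applies to the model's machines
lxc profile show juju-ck8s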

106106 (Author) commented Dec 20, 2024

Hi @addyess,

Yes, I followed https://ubuntu.com/kubernetes/docs/install-local.

And the deployment used containers:

+---------------+---------+-------------------------------+------+-----------+-----------+
|     NAME      |  STATE  |             IPV4              | IPV6 |   TYPE    | SNAPSHOTS |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-5c30f3-0 | RUNNING | 10.201.235.193 (eth0)         |      | CONTAINER | 0         |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-0 | RUNNING | 10.201.235.146 (eth0)         |      | CONTAINER | 0         |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-1 | RUNNING | 10.201.235.243 (eth0)         |      | CONTAINER | 0         |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-2 | RUNNING | 10.201.235.70 (eth0)          |      | CONTAINER | 0         |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-3 | RUNNING | 10.201.235.132 (eth0)         |      | CONTAINER | 0         |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-4 | RUNNING | 10.201.235.116 (eth0)         |      | CONTAINER | 0         |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-5 | RUNNING | 192.168.59.0 (vxlan.calico)   |      | CONTAINER | 0         |
|               |         | 10.201.235.31 (eth0)          |      |           |           |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-6 | RUNNING | 192.168.31.64 (vxlan.calico)  |      | CONTAINER | 0         |
|               |         | 10.201.235.67 (eth0)          |      |           |           |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-7 | RUNNING | 192.168.155.0 (vxlan.calico)  |      | CONTAINER | 0         |
|               |         | 10.201.235.127 (eth0)         |      |           |           |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-8 | RUNNING | 192.168.224.0 (vxlan.calico)  |      | CONTAINER | 0         |
|               |         | 10.201.235.20 (eth0)          |      |           |           |
+---------------+---------+-------------------------------+------+-----------+-----------+
| juju-e5a38c-9 | RUNNING | 192.168.93.128 (vxlan.calico) |      | CONTAINER | 0         |
|               |         | 10.201.235.61 (eth0)          |      |           |           |
+---------------+---------+-------------------------------+------+-----------+-----------+

I'll see how to use VMs instead, and give that a try.

addyess (Member) commented Dec 20, 2024

You can create an overlay bundle for your machines and specify a constraint for them:

https://juju.is/docs/juju/constraint#heading--virt-type (virt-type=virtual-machine)

addyess (Member) commented Dec 20, 2024

The overlay could look like:

applications:
    kubernetes-control-plane:
        constraints: "cores=2 mem=8G root-disk=16G virt-type=virtual-machine"
    kubernetes-worker:
        constraints: "cores=2 mem=8G root-disk=16G virt-type=virtual-machine"
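
If it helps, a minimal sketch of applying such an overlay at deploy time (the filename vm-overlay.yaml and the channel are illustrative):

# deploy the bundle with the VM constraints overlaid
juju deploy charmed-kubernetes --channel 1.30/stable --overlay vm-overlay.yaml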

106106 (Author) commented Jan 7, 2025

Thanks @addyess, sorry for the delay in getting back to you. I tried it with the overlay, but now LXD/Juju is failing to create the aadisable4 device that was specified in the profile.yaml file of the tutorial.

controller-0: 15:25:57 WARNING juju.worker.provisioner machine 5 failed to start in availability zone charmed-kubernetes: Failed setting up device via monitor: Failed adding block device for disk device "aadisable4": Failed adding block device: aio=native was specified, but it requires cache.direct=on, which was not specified.
controller-0: 15:25:57 WARNING juju.worker.provisioner failed to start machine 5 (Failed setting up device via monitor: Failed adding block device for disk device "aadisable4": Failed adding block device: aio=native was specified, but it requires cache.direct=on, which was not specified.), retrying in 10s (3 more attempts)

Here's the full juju status:

Model  Controller           Cloud/Region         Version  SLA          Timestamp
ck8s   localhost-localhost  localhost/localhost  3.6.1    unsupported  15:51:27-08:00

App                       Version  Status   Scale  Charm                     Channel      Rev  Exposed  Message
calico                             unknown      0  calico                    1.30/stable  108  no       
containerd                         unknown      0  containerd                1.30/stable   78  no       
easyrsa                   3.0.1    active       1  easyrsa                   1.30/stable   59  no       Certificate Authority connected.
etcd                      3.4.22   active       3  etcd                      1.30/stable  766  no       Healthy with 3 known peers
kubeapi-load-balancer     1.18.0   waiting      1  kubeapi-load-balancer     1.30/stable  145  yes      Load Balancer request not ready
kubernetes-control-plane           waiting    0/2  kubernetes-control-plane  1.30/stable  503  no       waiting for machine
kubernetes-worker                  waiting    0/3  kubernetes-worker         1.30/stable  237  yes      waiting for machine

Unit                        Workload  Agent       Machine  Public address  Ports     Message
easyrsa/0*                  active    idle        0        10.35.157.253             Certificate Authority connected.
etcd/0                      active    idle        1        10.35.157.213   2379/tcp  Healthy with 3 known peers
etcd/1*                     active    idle        2        10.35.157.50    2379/tcp  Healthy with 3 known peers
etcd/2                      active    idle        3        10.35.157.150   2379/tcp  Healthy with 3 known peers
kubeapi-load-balancer/0*    waiting   idle        4        10.35.157.141             Load Balancer request not ready
kubernetes-control-plane/0  waiting   allocating  5                                  waiting for machine
kubernetes-control-plane/1  waiting   allocating  6                                  waiting for machine
kubernetes-worker/0         waiting   allocating  7                                  waiting for machine
kubernetes-worker/1         waiting   allocating  8                                  waiting for machine
kubernetes-worker/2         waiting   allocating  9                                  waiting for machine

Machine  State    Address        Inst id        Base          AZ  Message
0        started  10.35.157.253  juju-a0777e-0  [email protected]      Running
1        started  10.35.157.213  juju-a0777e-1  [email protected]      Running
2        started  10.35.157.50   juju-a0777e-2  [email protected]      Running
3        started  10.35.157.150  juju-a0777e-3  [email protected]      Running
4        started  10.35.157.141  juju-a0777e-4  [email protected]      Running
5        down                    pending        [email protected]      Failed setting up device via monitor: Failed adding block device for disk device "aadisable4": Failed adding block de...
6        down                    pending        [email protected]      Failed setting up device via monitor: Failed adding block device for disk device "aadisable4": Failed adding block de...
7        down                    pending        [email protected]      Failed setting up device via monitor: Failed adding block device for disk device "aadisable4": Failed adding block de...
8        down                    pending        [email protected]      Creating container
9        down                    pending        [email protected]      Failed setting up device via monitor: Failed adding block device for disk device "aadisable4": Failed adding block de...

addyess (Member) commented Jan 8, 2025

Hmm... it still seems that LXD is starting containers and not using QEMU to start VMs. The LXD container profile shouldn't be used with VMs, as far as I understand. So let's set VMs aside, since it's clear we're going the container route here.

I'm afraid this goes pretty deep into something being wrong between the version of LXD and the profile in the docs. I've not spent much time looking into running Charmed Kubernetes on LXD lately.
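
For reference, the aadisable* entries come from the disk devices in the tutorial's profile, so inspecting the model's profile directly might show what the VM launch is tripping over; a sketch, assuming Juju named the profile juju-ck8s after the model:

# list the devices the profile defines, including the aadisable* entries
lxc profile show juju-ck8s

# hypothetical experiment, not a verified fix: drop the offending device
lxc profile device remove juju-ck8s aadisable4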
