[0.21] Upgraded to 0.22 from 0.20, getting storageClass name not found issue #2384
Comments
Can you provide the complete vcluster configuration yaml file?

@JayShane: on top of the OOB-provided file, we have this customization at our end.

Thank you for your time @JayShane: I just want to add that the same configuration works fine with version 0.20. We rolled the vcluster back to 0.20 today and the issue disappeared, but we would really like to upgrade to the latest version.

@surabhinir @mfabbri good news, I found the issue and it should be fixed via #2402; we will backport that to v0.21 and v0.22 as well.

Hi @FabianKramm, great news! We will monitor #2402 then; when do you think it could be available? Thank you.

Hi @FabianKramm, I upgraded to 0.22.3 and I see the bug is still there. I also see this issue has been reopened.
What happened?
We upgraded from 0.20 to 0.22. After upgrading, PVCs are not coming up.
Warning VolumeMismatch 7s (x22 over 5m13s) persistentvolume-controller Cannot bind to requested volume "vcluster-search-master-pv-auth-x-test1-orntd-x-vclus-b839fd130d": storageClassName does not match
In the PV, the storage class name appears as: azureblob-fuse-premium
In the PVC: vcluster-azureblob-fuse-premium-x-test1-orntd-x-vclu-bde7a601ce
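To make the mismatch concrete, here is a minimal sketch of the two conflicting objects. Only the object names and storageClassName values are taken from the event above; every other field (capacity, access modes, CSI driver, volume handle, claim name) is an assumption for illustration:

```yaml
# Host-side PV. metadata.name and spec.storageClassName are from the
# report above; all other fields are assumed placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vcluster-search-master-pv-auth-x-test1-orntd-x-vclus-b839fd130d
spec:
  storageClassName: azureblob-fuse-premium
  capacity:
    storage: 10Gi                  # assumed size
  accessModes:
    - ReadWriteMany                # assumed; typical for blobfuse volumes
  csi:
    driver: blob.csi.azure.com     # assumed Azure Blob CSI driver
    volumeHandle: example-handle   # placeholder
---
# PVC as it lands on the host after syncing: the storageClassName has been
# rewritten to the vcluster-translated name, so the bind is rejected with
# "storageClassName does not match".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim              # placeholder name
spec:
  storageClassName: vcluster-azureblob-fuse-premium-x-test1-orntd-x-vclu-bde7a601ce
  accessModes:
    - ReadWriteMany                # assumed
  resources:
    requests:
      storage: 10Gi                # assumed
  volumeName: vcluster-search-master-pv-auth-x-test1-orntd-x-vclus-b839fd130d
```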
What did you expect to happen?
There should be no storage class mismatch, so the PVC can bind to the PV as it did on 0.20.
How can we reproduce it (as minimally and precisely as possible)?
Upgrade from 0.20 to 0.22 with a custom PV and PVC whose storage class is azureblob-fuse-premium; see the sketch below.
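For a minimal reproduction, a PV/PVC pair along these lines can be applied inside the vcluster before the upgrade. The storage class name is the one from the report; the object names and all other fields are assumptions:

```yaml
# Custom PV using the azureblob-fuse-premium storage class; the names and
# sizes here are assumed examples.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: search-master-pv-auth      # assumed name
spec:
  storageClassName: azureblob-fuse-premium
  capacity:
    storage: 10Gi                  # assumed
  accessModes:
    - ReadWriteMany                # assumed
  csi:
    driver: blob.csi.azure.com     # assumed Azure Blob CSI driver
    volumeHandle: example-handle   # placeholder
---
# Matching PVC that binds explicitly to the PV above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: search-master-pvc-auth     # assumed name
spec:
  storageClassName: azureblob-fuse-premium
  accessModes:
    - ReadWriteMany                # assumed
  resources:
    requests:
      storage: 10Gi                # assumed
  volumeName: search-master-pv-auth
```

After the upgrade to 0.22, the synced PVC's storageClassName should (per this report) no longer match the PV's, producing the VolumeMismatch event shown above.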
Anything else we need to know?
No response
Host cluster Kubernetes version
v1.30.5
vcluster version
VCluster Config
```yaml
controlPlane:
  proxy:
    extraSANs:
      - '{{ .Release.Name }}.{{ .Release.Namespace }}'
  ingress:
    spec:
      ingressClassName: nginx
      tls: []
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/ssl-passthrough: "false"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady
    persistence:
      volumeClaim:
        storageClass: standard-disk-zrs
integrations:
  metricsServer:
    enabled: true
networking:
  replicateServices:
    fromHost:
      - from: test1-orntd/vcluster-commerce-dev-eastus2-001
        to: test1-orntd/vcluster-commerce-dev-eastus2-001
      - from: test2-orntd/vcluster-commerce-dev-eastus2-002
        to: test2-orntd/vcluster-commerce-dev-eastus2-002
      - from: test4-orntd/vcluster-commerce-dev-eastus2-004
        to: test4-orntd/vcluster-commerce-dev-eastus2-004
      - from: development/vcluster-commerce-dev-eastus2-003
        to: development/vcluster-commerce-dev-eastus2-003
      - from: test3-orntd/vcluster-database-dev-eastus2-001
        to: test3-orntd/vcluster-database-dev-eastus2-001
sync:
  fromHost:
    ingressClasses:
      enabled: true
    storageClasses:
      enabled: true
  toHost:
    ingresses:
      enabled: true
    persistentVolumes:
      enabled: true
rbac:
  clusterRole:
    enabled: false
```