
Adding PVC's annotation to CreateVolumeRequest.Parameters #86

Open
orainxiong opened this issue May 14, 2018 · 67 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@orainxiong

As the title says, it is quite likely that an application-specific Operator would leverage a PVC's annotations to store custom data. That custom data would tell the storage provisioner how the CSI driver should create the volume correctly.

Here is the already existing logic for provisioning a volume:

	// Create a CSI CreateVolumeRequest and Response
	req := csi.CreateVolumeRequest{

		Name:       pvName,
		Parameters: options.Parameters,
		VolumeCapabilities: []*csi.VolumeCapability{
			{
				AccessType: accessType,
				AccessMode: accessMode,
			},
		},
		CapacityRange: &csi.CapacityRange{
			RequiredBytes: int64(volSizeBytes),
		},
	}

Note: more details

The CSI spec specifies that the CreateVolumeRequest field Parameters is OPTIONAL and opaque.
What I am suggesting is that we could pass in the PVC's annotations in addition to StorageClass.Parameters.
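
Concretely, the idea amounts to something like the following sketch (illustrative only; options and pvName are the identifiers used in the snippet above, and how key collisions with StorageClass parameters are handled is deliberately left open):

// Sketch of the proposed behaviour (not an existing feature): merge the PVC's
// annotations into the opaque Parameters map before building the request.
params := map[string]string{}
for k, v := range options.Parameters { // StorageClass parameters
	params[k] = v
}
for k, v := range options.PVC.Annotations { // proposed addition
	params[k] = v // a collision policy would be needed here
}

req := csi.CreateVolumeRequest{
	Name:       pvName,
	Parameters: params,
	// ... remaining fields as in the existing snippet above
}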

@clintonk

I second this request, and we discussed this idea with @saad-ali at this week's Storage SIG F2F. The chief concern is name collisions between storage class parameters and PVC annotations. Saad suggested this capability be gated via a startup flag. @orainxiong, are you planning to submit a PR for this?

@orainxiong
Author

orainxiong commented May 21, 2018

@clintonk Thanks for your reply.
In order to solve this issue quickly, I have handled it in a temporary way.

  • A new function, called appendPVCAnnotation, is introduced to encode the PVC annotations as a JSON string and add them to options.Parameters.
// appendPVCAnnotation marshals the PVC annotations to JSON and stores them in
// options.Parameters under the reserved key "annotationsPVC".
// It assumes options.Parameters has already been initialized.
func appendPVCAnnotation(options controller.VolumeOptions) error {
	annotationsPVC, err := json.Marshal(options.PVC.Annotations)
	if err != nil {
		return err
	}

	options.Parameters["annotationsPVC"] = string(annotationsPVC)
	return nil
}

Note: the key name annotationsPVC is reserved to avoid name conflicts.

  • A new flag, called annotationPVCNeeded, is added to determine whether the PVC annotations should be appended to options.Parameters.

The advantage of this implementation is that it barely modifies the existing external-provisioner logic. Honestly, it is not a preferable approach.
Looking forward to your suggestions.

@jsafrane
Contributor

While I agree that there should be a way to pass parameters from the user to CreateVolume, I don't like the suggested solution:

  • It makes the CSI driver Kubernetes-aware. The whole point of CSI is to have independent plugins that can be used in Kubernetes, Mesos, Cloud Foundry, Docker Swarm and others. The plugins should not know anything about PVCs; these are pure Kubernetes objects.

  • The Kubernetes admin needs a way to restrict which parameters can be set by users. Kubernetes is (or can be) a multi-tenant system, and not all users should be able to use all parameters that a CSI plugin exposes.

  • Users should be allowed to list the possible parameters together with some description so they can choose the right values. Note that there may be a GUI between users and Kubernetes, so the GUI must at least be able to enumerate these parameters. We tried this in Proposal for user configuration of dynamic provisioning kubernetes/community#543; however, nobody has stepped in to finish the proposal and implement it.

@fatih

fatih commented May 21, 2018

After creating the csi-digitalocean driver, I also received feedback requesting this feature: digitalocean/csi-digitalocean#30

I'm using the name of the request to create the volumes, which ends up being the PV name, so it's in the form kubernetes-dynamic-pv-<randomid>. However, on the UI (at DigitalOcean.com) this is not useful because it doesn't give the person an easy way to track back which entity it belongs to.

I thought maybe I could use the StorageClass parameters and add a new field called volumePrefix, such as:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: do-block-storage
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: com.digitalocean.csi.dobs
parameters:
  volumePrefix: "csi-digitalocean"

and then in the Controller plugin, I'll change the name in CreateVolume according to this information:

volumeName := req.Name
// change the name if the storage-class has a prefix option
prefix, ok := req.Parameters["volumePrefix"]
if ok {
	volumeName = prefix + "-" + volumeName
}

This allows me to create volumes based on each user's CSI deployment. However, this is not scalable because:

  • you only create a single StorageClass, so it's not easy to update it every single time
  • you're not allowed to update the parameters field of a StorageClass
  • a StorageClass is global, whereas we need local information from a local resource

A better idea would be to use the annotations of a PVC. This would allow us to create volumes directly based on the information bound to a specific PVC.

We're already passing information from one Kubernetes resource (i.e. StorageClass) down to the plugin; I think there is not much that prevents us from passing the information of another resource (i.e. PersistentVolumeClaim) to the plugin as well.

@sbezverk
Contributor

check this out:
https://github.com/kubernetes-csi/external-provisioner/blob/master/cmd/csi-provisioner/csi-provisioner.go#L46
I think you can leverage this parameter.

@fatih

fatih commented May 22, 2018

@sbezverk thanks for the tip, I didn't know about it.

Though the prefix was just an example; it could also be something else, such as tagging the resource on the cloud provider side based on the created PVC's labels.

@orainxiong
Author

@jsafrane
Thanks for your reply.

If I understand correctly, external-provisioner already exposes the StorageClass parameters to storage drivers through VolumeOptions.

	options := VolumeOptions{
		PersistentVolumeReclaimPolicy: reclaimPolicy,
		PVName:     pvName,
		PVC:        claim,
		Parameters: parameters, // the StorageClass parameters are stored here
	}

	volume, err = ctrl.provisioner.Provision(options)

Storage drivers can then consume the StorageClass parameters according to their own logic. However, the PVC's annotations are still missing from VolumeOptions.

I very much agree with your opinion. It's a temporary solution which makes the storage driver Kubernetes-aware, and it is a very time-consuming job to figure out a CO-neutral solution.
As for parameters and descriptions, it is very likely to be exhausting to create a common naming vocabulary across COs.

Looking forward to your suggestions.

@jsafrane
Contributor

@fatih, for tagging we could perhaps extend CSI's CreateVolume call with optional metadata such as Kubernetes PVC namespace + name, PV name, storage class name (?) or PVC labels (which one? all of them?). It would be used only to tag volumes in the storage backend so users can map these volumes to CO objects. It could not be used to configure the provisioned volume (see #86 (comment) above).

The CO would provide a generic map[string]string, opaque to the CSI driver, and the CSI driver would blindly "attach" it to the volume via AWS tags, GCE labels, OpenStack metadata, or whatever the storage supports (or throw it away if the storage does not support tagging).
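
As a rough sketch of that contract (hypothetical; the Tagger interface and its methods are made-up names for illustration, not an existing API):

// Hypothetical sketch: the CO hands the driver an opaque metadata map and the
// driver either forwards it to the storage backend as tags/labels or drops it.
type Tagger interface {
	SupportsTagging() bool
	TagVolume(volumeID string, tags map[string]string) error // AWS tags, GCE labels, OpenStack metadata, ...
}

func applyCOMetadata(backend Tagger, volumeID string, metadata map[string]string) error {
	if len(metadata) == 0 || !backend.SupportsTagging() {
		return nil // nothing to attach, or the backend cannot tag: silently drop, as suggested above
	}
	return backend.TagVolume(volumeID, metadata)
}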

However, the PVC's annotations are still missing from VolumeOptions.

@orainxiong, that's intentional.

I could imagine our CSI provisioner passing the content of some alpha annotations from the PVC as CreateVolumeRequest.parameters in the CreateVolume call, but these should really be alpha. To get them to GA, we need policy, discovery, and whatnot.

@fatih

fatih commented May 24, 2018

for tagging we could perhaps extend CSI's CreateVolume call with optional metadata such as Kubernetes PVC namespace + name, PV name, storage class name (?) or PVC labels (which one? all of them?).

I think passing the PVC metadata would be great, because it already includes the following important bits:

  • name
  • namespace
  • labels
  • annotations

I'm not sure how to pass them via the CO though. A map[string]string is the easiest way, but then how are we going to represent nested maps? We could pass them as JSON maybe:

map[string]string{
 "pvc" : "{\"name\": ...}",
 "storageclass": " { ... }",
}

I know that Kubernetes annotations mostly store data like this and it's a common thing there.

The CO would provide a generic map[string]string, opaque to the CSI driver, and the CSI driver would blindly "attach" it to the volume via AWS tags, GCE labels, OpenStack metadata, or whatever the storage supports (or throw it away if the storage does not support tagging).

Yep. Attaching is only one of the things the CSI driver can accomplish; changing the name is another action it should be allowed to do.

@orainxiong
Author

@jsafrane Thanks. I got it. I will close this issue.

@fatih

fatih commented May 28, 2018

Why is this closed? Did we have a PR that fixes this issue or a solution that is applicable right now?

@orainxiong
Author

@fatih

At least in my opinion, there is a temporary way to handle this issue, but it isn't applicable to all COs that support CSI.

As you suggested, a new function, injectCOParameter, is introduced into external-provisioner:

func injectCOParameter(options controller.VolumeOptions) (map[string]string, error) {

	annotationsPVC, err := json.Marshal(options.PVC.Annotations)
	if err != nil {
		return nil, err
	}

	labelsPVC, err := json.Marshal(options.PVC.Labels)
	if err != nil {
		return nil, err
	}

	storageClass, err := json.Marshal(options.Parameters)
	if err != nil {
		return nil, err
	}

	parameters := map[string]string{
		"annotationsPVC": string(annotationsPVC),
		"labelsPVC":      string(labelsPVC),
		"storageClass":   string(storageClass),
	}
	return parameters, nil
}

injectCOParameter is in charge of encoding the PVC labels and annotations and the StorageClass parameters into
CreateVolumeRequest.Parameters. Specifically, CreateVolumeRequest.Parameters is defined by CSI to store opaque parameters, and COs will ignore it.

On the storage driver side, the parameters can be decoded from CreateVolumeRequest.Parameters according to the driver's own operational logic, e.g. along the lines of the sketch below.
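
A minimal decoding sketch (assuming the reserved keys written by injectCOParameter above and the standard encoding/json package; error handling kept short):

// Sketch of the decoding counterpart inside a driver's CreateVolume handler,
// assuming the reserved keys written by injectCOParameter above.
func decodeCOParameters(params map[string]string) (annotationsPVC, labelsPVC, storageClass map[string]string, err error) {
	if raw, ok := params["annotationsPVC"]; ok {
		if err = json.Unmarshal([]byte(raw), &annotationsPVC); err != nil {
			return nil, nil, nil, err
		}
	}
	if raw, ok := params["labelsPVC"]; ok {
		if err = json.Unmarshal([]byte(raw), &labelsPVC); err != nil {
			return nil, nil, nil, err
		}
	}
	if raw, ok := params["storageClass"]; ok {
		if err = json.Unmarshal([]byte(raw), &storageClass); err != nil {
			return nil, nil, nil, err
		}
	}
	return annotationsPVC, labelsPVC, storageClass, nil
}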

@fatih

fatih commented May 29, 2018

@orainxiong Thanks for the reply. The title says Adding PVC's annotation to CreateVolumeRequest.Parameters and, unless I'm missing something, there is no such feature and no one has contributed to the provisioner/attacher to add it. Hence my question about why we have closed this issue. Nothing is fixed yet and it's not possible to pass any PVC/PV data to the plugin from the CO. This issue should be re-opened, and closed when there is an actionable task we can work on together.

@orainxiong
Author

@fatih ok, you're right. : - )

@orainxiong orainxiong reopened this May 30, 2018
@mkimuram
Contributor

mkimuram commented Jun 6, 2018

@orainxiong

Has injectCOParameter() already been merged into external-provisioner?
I couldn't find it in the code or in any PRs, so I would appreciate it if you could share a link to it.

@orainxiong
Author

@mkimuram
Contributor

@orainxiong
Thank you for providing code. Great work!

I tested your code as shown below and confirmed that annotationsPVC and labelsPVC were added to the parameters that are passed to the driver.
The concept looks good to me and will solve my concern (#93).

In my opinion, there is still room for discussion before this code can be merged:

  1. What parameters need to be passed
    (I found that the StorageClass annotations are not passed, and there might be other fields that would be useful to pass.)
  2. How they should be passed
    (Should the StorageClass parameters be passed directly to keep compatibility, and so on?)
  3. Whether they should be passed in the first place
    (Although I personally believe they should, this doesn't seem to be agreed upon in the community.)

I found a related discussion below for item 3, and this will need to be discussed first.
container-storage-interface/spec#248

Any comments on this issue? @jsafrane @princerachit @vladimirvivien
In particular, I would like to know whether k8s can add this code as a temporary workaround before we reach a conclusion in the CSI discussion above.

Summary of test results:
[Without patch]

# kubectl logs csi-pod hostpath-driver 
I0612 18:57:50.339548       1 utils.go:97] GRPC request: name:"pvc-80ee79686e7211e8" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"foo21" value:"bar21" > parameters:<key:"foo22" value:"bar22" > 

[With patch]

# kubectl logs csi-pod hostpath-driver 
I0612 20:37:56.738211       1 utils.go:97] GRPC request: name:"pvc-7d0658ed6e8011e8" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"annotationsPVC" value:"{\"foo31\":\"bar31\",\"foo32\":\"bar32\",\"volume.beta.kubernetes.io/storage-provisioner\":\"csi-hostpath\"}" > parameters:<key:"labelsPVC" value:"{\"foo41\":\"bar41\",\"foo42\":\"bar42\"}" > parameters:<key:"storageClass" value:"{\"foo21\":\"bar21\",\"foo22\":\"bar22\"}" > 

Please see below for how I tested:

[Without patch] (Also see [files] below for csi-sc2.yaml and csi-pvc2.yaml.)

# FEATURE_GATES=CSIPersistentVolume=true ALLOW_PRIVILEGED=true ALLOW_SECURITY_CONTEXT=true hack/local-up-cluster.sh
# export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
# kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/csi-setup.yaml
# kubectl create -f csi-sc2.yaml
# kubectl create -f csi-pvc2.yaml
# kubectl logs csi-pod hostpath-driver | grep -A 3 "CreateVolume" | tail -4
I0612 18:57:50.339545       1 utils.go:96] GRPC call: /csi.v0.Controller/CreateVolume
I0612 18:57:50.339548       1 utils.go:97] GRPC request: name:"pvc-80ee79686e7211e8" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"foo21" value:"bar21" > parameters:<key:"foo22" value:"bar22" > 
I0612 18:57:50.339719       1 controllerserver.go:87] create volume /tmp/80f0db83-6e72-11e8-b5be-0242ac110003
I0612 18:57:50.339724       1 utils.go:102] GRPC response: volume:<capacity_bytes:1073741824 id:"80f0db83-6e72-11e8-b5be-0242ac110003" attributes:<key:"foo21" value:"bar21" > attributes:<key:"foo22" value:"bar22" > > 

[With patch] (Also see [files] below for csi-sc2.yaml and csi-pvc2.yaml.)

# git clone https://github.com/orainxiong/external-provisioner.git
# cd external-provisioner
# git checkout issue-86
# sed -i "s/canary/hack/" Makefile
# make container
# curl https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/csi-setup.yaml | sed "s/quay.io\/k8scsi\/csi-provisioner:v0.2.1/quay.io\/k8scsi\/csi-provisioner:hack/" > csi-setup2.yaml
# sed -i "s/imagePullPolicy: Always/imagePullPolicy: Never/" csi-setup2.yaml

# FEATURE_GATES=CSIPersistentVolume=true ALLOW_PRIVILEGED=true ALLOW_SECURITY_CONTEXT=true hack/local-up-cluster.sh
# export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
# kubectl create -f csi-setup2.yaml
# kubectl create -f csi-sc2.yaml  
# kubectl create -f csi-pvc2.yaml
# kubectl logs csi-pod hostpath-driver | grep -A 3 "CreateVolume" | tail -4
I0612 20:37:56.738207       1 utils.go:96] GRPC call: /csi.v0.Controller/CreateVolume
I0612 20:37:56.738211       1 utils.go:97] GRPC request: name:"pvc-7d0658ed6e8011e8" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"annotationsPVC" value:"{\"foo31\":\"bar31\",\"foo32\":\"bar32\",\"volume.beta.kubernetes.io/storage-provisioner\":\"csi-hostpath\"}" > parameters:<key:"labelsPVC" value:"{\"foo41\":\"bar41\",\"foo42\":\"bar42\"}" > parameters:<key:"storageClass" value:"{\"foo21\":\"bar21\",\"foo22\":\"bar22\"}" > 
I0612 20:37:56.738409       1 controllerserver.go:87] create volume /tmp/7d088fc3-6e80-11e8-84a2-0242ac110002
I0612 20:37:56.738416       1 utils.go:102] GRPC response: volume:<capacity_bytes:1073741824 id:"7d088fc3-6e80-11e8-84a2-0242ac110002" attributes:<key:"annotationsPVC" value:"{\"foo31\":\"bar31\",\"foo32\":\"bar32\",\"volume.beta.kubernetes.io/storage-provisioner\":\"csi-hostpath\"}" > attributes:<key:"labelsPVC" value:"{\"foo41\":\"bar41\",\"foo42\":\"bar42\"}" > attributes:<key:"storageClass" value:"{\"foo21\":\"bar21\",\"foo22\":\"bar22\"}" > > 
# kubectl logs csi-pod external-provisioner | grep -A 3 "CreateVolume" | tail -4
I0612 20:37:56.738068       1 controller.go:104] GRPC call: /csi.v0.Controller/CreateVolume
I0612 20:37:56.738072       1 controller.go:105] GRPC request: name:"pvc-7d0658ed6e8011e8" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"annotationsPVC" value:"{\"foo31\":\"bar31\",\"foo32\":\"bar32\",\"volume.beta.kubernetes.io/storage-provisioner\":\"csi-hostpath\"}" > parameters:<key:"labelsPVC" value:"{\"foo41\":\"bar41\",\"foo42\":\"bar42\"}" > parameters:<key:"storageClass" value:"{\"foo21\":\"bar21\",\"foo22\":\"bar22\"}" > 
I0612 20:37:56.738515       1 controller.go:107] GRPC response: volume:<capacity_bytes:1073741824 id:"7d088fc3-6e80-11e8-84a2-0242ac110002" attributes:<key:"annotationsPVC" value:"{\"foo31\":\"bar31\",\"foo32\":\"bar32\",\"volume.beta.kubernetes.io/storage-provisioner\":\"csi-hostpath\"}" > attributes:<key:"labelsPVC" value:"{\"foo41\":\"bar41\",\"foo42\":\"bar42\"}" > attributes:<key:"storageClass" value:"{\"foo21\":\"bar21\",\"foo22\":\"bar22\"}" > > 
I0612 20:37:56.738538       1 controller.go:108] GRPC error: <nil>

[Files]

# cat csi-sc2.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc2
  annotations: 
    foo11: bar11
    foo12: bar12
provisioner: csi-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  foo21: bar21
  foo22: bar22
# cat csi-pvc2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc2
  annotations:
    foo31: bar31
    foo32: bar32
  labels:
    foo41: bar41
    foo42: bar42
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc2

@princerachit

princerachit commented Jun 13, 2018

@mkimuram today's discussion concluded that we are not going to immediately touch the spec for CreateVolumeRequest to pass CO-specific metadata (PVC name, PVC id, etc.). A proposed workaround is to write your own external provisioner which knows what params to pass for your specific driver.

However, as PVC annotations are also used to add volume-related information, adding them to external-provisioner would be possible. We need someone from k8s wg-csi to nod on this. @saad-ali

@orainxiong
Author

@clintonk You're welcome. : - )

@kfox1111

We've talked about this a bit in kubernetes/community#2273 as well.

I'd like a way to have VolumeAttributes passed in PVCs, with the external provisioner deciding whether it's OK to merge none/some/all of them with the StorageClass's VolumeAttributes when calling the CSI driver's Create/DeleteVolume, and also whether none/some/all of them should be copied to the resulting PV for the CSI driver's mount/unmount.

This wouldn't be specific to Kubernetes, as they would be CSI attributes, not Kubernetes annotations.
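
A minimal sketch of that merge policy (hypothetical; the allow-list and function name are illustrative, not an existing external-provisioner feature):

// Hypothetical sketch of the merge described above: the external provisioner
// decides which PVC-supplied volume attributes may be combined with the
// StorageClass attributes, based on an admin-controlled allow-list.
func mergeVolumeAttributes(scAttrs, pvcAttrs map[string]string, allowed map[string]bool) map[string]string {
	merged := make(map[string]string, len(scAttrs)+len(pvcAttrs))
	for k, v := range scAttrs {
		merged[k] = v
	}
	for k, v := range pvcAttrs {
		if allowed[k] { // only keys the admin explicitly permitted may come from the PVC
			merged[k] = v
		}
	}
	return merged
}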

@mkimuram
Contributor

@kfox1111

I'd like a way to have VolumeAttributes passed in PVCs, with the external provisioner deciding whether it's OK to merge none/some/all of them with the StorageClass's VolumeAttributes when calling the CSI driver's Create/DeleteVolume.

Is it the same logic as @orainxiong's code above, or a different one?

According to @princerachit's comment below, at least PVC annotations could be allowed to be passed in the above way, if k8s wg-csi agrees.

However, as PVC annotations are also used to add volume-related information, adding them to external-provisioner would be possible. We need someone from k8s wg-csi to nod on this. @saad-ali

@saad-ali
Member

My current standing on this issue: Annotations on PVCs MUST NOT be passed to CSI drivers. The Kubernetes PVC object is intended for application portability. When we start leaking cluster/implementation-specific details into it, we are violating that principle, and explicitly passing PVC annotations to CSI drivers encourages that pattern. Workload portability is a cornerstone of Kubernetes. It is one of the major reasons for the success of the project. We must be very careful when we violate it.

Instead, let's focus on specific use cases, and let's see if we can come up with better solutions for each of those use cases rather than opening up a hole in the API.

@mlmhl
Contributor

mlmhl commented Mar 17, 2019

@saad-ali There is a similar use case in my team:

We support multiple storage types in a cluster, such as Ceph RBD and CephFS. What's more, we also support multiple file system types for Ceph RBD, such as ext4, xfs, etc. The only solution at present is creating a StorageClass for each file system type, for example an ext4-sc StorageClass for the ext4 fs type and an xfs-sc StorageClass for the xfs fs type.

However, we also need to set quotas for PVCs, so we would have to set quotas for these two StorageClasses (ext4-sc and xfs-sc) separately if we adopt this approach. This is strange for users because the two StorageClasses are the same storage type (Ceph RBD), just with different file systems.

@pohly
Contributor

pohly commented Nov 30, 2020

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Nov 30, 2020
@silentred

This argument, extra-create-metadata, passes the PVC namespace/name to CreateVolumeRequest.Parameters. This is really useful.
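
With that flag enabled on external-provisioner, a driver can pick the extra keys out of the incoming request, roughly like this sketch (the helper name is illustrative):

// Sketch: reading the PVC reference that external-provisioner adds to
// CreateVolumeRequest.Parameters when --extra-create-metadata is enabled.
func pvcRefFromParameters(params map[string]string) (namespace, name string, ok bool) {
	namespace = params["csi.storage.k8s.io/pvc/namespace"]
	name = params["csi.storage.k8s.io/pvc/name"]
	return namespace, name, namespace != "" && name != ""
}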

@ppenzo

ppenzo commented Apr 1, 2022

I need to map existing shares on NAS filers to PVs without the cluster/storage admins' intervention (kubernetes-csi/csi-driver-smb#398), hence placing the share name in the PVC would be the way to achieve this.

If using an annotation is not suitable, since it would break the CSI spec, would it be possible to use an extra parameter as described in #714 to pass parameters directly to the driver?

@mloskot

mloskot commented Apr 7, 2023

This would also help with tagging Azure file shares/disks by overriding parameters.tags of the StorageClass for the Azure CSI drivers.
Interestingly, there are CSI drivers which apparently provide support for PVC annotations to override SC parameters:
https://scod.hpedev.io/csi_driver/using.html#using_pvc_overrides

@andyzhangx
Member

Since passing PVC annotations to CreateVolumeRequest.Parameters is not supported by external-provisioner, I see that other projects (e.g. the HPE CSI Driver) cache all PVCs and then pass the PVC annotations to the CSI driver. Not a graceful implementation, but it looks like this is the only way for now:
https://github.com/hpe-storage/csi-driver/blob/abf94a40b242477e51afac75bb9a68772aede0e7/pkg/flavor/kubernetes/flavor.go#L163-L172
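
A much-simplified sketch of that workaround (assuming the PVC namespace/name are already known, e.g. from --extra-create-metadata; the real HPE driver uses an informer/cache rather than a direct API call):

// Simplified sketch: the driver itself asks the Kubernetes API for the PVC
// and reads its annotations, since the provisioner does not pass them along.
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func pvcAnnotations(ctx context.Context, client kubernetes.Interface, namespace, name string) (map[string]string, error) {
	pvc, err := client.CoreV1().PersistentVolumeClaims(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	return pvc.Annotations, nil
}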

@egoldblum

We're also in need of a feature like this to be able to associate disks with applications to better track costs.

@sathieu

sathieu commented Aug 24, 2024

This PR: #1239

@Garagoth

Garagoth commented Sep 19, 2024

Hi!

I came here from #808, but since it is closed and several people there advised commenting further here rather than on a closed PR, I am writing here.

Firstly, I would like to address the main concerns that were raised in the PR:

  • "it makes PVCs not portable"
    Well, yes, but this already happened. You can pass secret names as annotations from PVC to StorageClass (https://kubernetes-csi.github.io/docs/secrets-and-credentials-storage-class.html#per-volume-secrets). Also, maybe this prompts to extend PVC/PV/SC specification to accommodate such parameters in new resource version?
    Changing StorageClass driver to different CSI driver would not be made difficult by that as well - either it would use annotation in subDir (as administrator permits and configures explicitly) or not and user would get share with subDir that simply ignores annotation.
    There are other resources (like ingresses) that are not portable in same way because they use implementation-specific annotations to pass extra info. I am aware that this is a very poor excuse :-).
  • "It opens doors for security issues (...) anyone who can create a PVC can actually read data of other PVs that are dynamically provisioned from the same StorageClass, just by guessing the subdirectory name."
    It is right now possible to start guessing secret name and secret namespace that are passed as annotations on PVC and possibly elevate share privileges from RO to RW.
    Or, in situation where already supported subDir: '${pvc.metadata.name}' can make PVC name collide with PVC from another namespace and effectively mount their share.
    Configuration wise there are plenty of places that one can already create a security hole. But configuring SC subDir: '/prefix/${namespace}/${pvc.annotations['csi.storage.k8s.io/subDir']} does not allow much wiggle room (as opposed to subDir: '${pvc.annotations['subDir']}'). Kubernetes allows a lot of freedom and opportunity to shoot own foot in plenty other places by incorrect configuration, but as with every tool some wisdom in configuration/usage is required.
    There is a security concern here that was not addressed - path traversal using .. but it is easy to fix.

This issue has been open for 6 years already, and I think most PRs that tried to implement this were rejected over portability; meanwhile several other drivers struggle to work around this and implement their own PVC-annotation-based mechanisms (HPE, NetApp, Linode, DigitalOcean, ...).
Several CSI users have voiced their need for a configurable/template-able subDir in specific drivers (smb and nfs among others), and it is really hard (or impossible in some cases) to meet an externally enforced directory structure on shared storage.
There are other use cases out there that could benefit from this (like a different uid/gid for each PVC, among other parameters), which could at least temporarily be implemented using annotations.
Even in kubernetes-sigs there is a project that works around this: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner with pathPattern...

To sum up:
What can be done to move this issue forward, fully or partially (for example, passing only annotations prefixed with csi.storage.k8s.io/ to avoid accidentally leaking everything to the driver; that would satisfy the need for tags and could allow more subDir templating options; a minimal filtering sketch follows below)?
Are there any possibilities at all, or any other ideas for how to support the use cases that people have?
Is it possible to change the PVC to add new, portable fields in a new version of the resource, like a parameters section (even though that would still be free-form and driver-specific, same as parameters on a StorageClass)? There are plenty of examples where annotation-based config migrated to proper resource fields in subsequent resource versions.
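
For the prefix-filtering idea above, a minimal sketch (hypothetical; the prefix constant and helper name are illustrative, not an existing external-provisioner feature, and the standard strings package is assumed):

// Hypothetical sketch of prefix-filtered annotation passing: only annotations
// with a dedicated prefix are copied into the CSI parameters, so arbitrary
// PVC annotations are never leaked to the driver.
const passthroughPrefix = "csi.storage.k8s.io/"

func filteredAnnotations(pvcAnnotations map[string]string) map[string]string {
	out := map[string]string{}
	for key, value := range pvcAnnotations {
		if strings.HasPrefix(key, passthroughPrefix) {
			out[key] = value
		}
	}
	return out
}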

Cheers!
