This is the official Container Storage Interface driver for QNAP NAS devices.
| Driver Version | Supported Kubernetes Versions | Supported QNAP NAS |
|---|---|---|
| v1.2.0 | 1.21 to 1.27 | NAS running QTS 5.0.0 or later |
Supported host operating systems:
- Debian 8 or later
- Ubuntu 16.04 or later
- CentOS 7.0 or later
- RHEL 7.0 or later
- CoreOS 1353.8.0 or later

Supported host architectures:
- AMD64
- ARM64 and ARMv7

Supported features:
- Add StorageClasses
- Add, resize, clone, and import Persistent Volume Claims (PVCs)
- Take snapshots
Run the following command on both the master and worker nodes to install the iSCSI initiator.
apt install open-iscsi
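If your nodes run CentOS or RHEL (listed in the supported operating systems above), the equivalent initiator package is iscsi-initiator-utils; a minimal sketch, assuming a yum-based system (this step is not part of the original instructions):
yum install iscsi-initiator-utils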
Note: Minikube is not supported.
- Make sure kubectl is installed and working.
- Run the following commands one at a time.
kubectl get pods
kubectl version
- Verify that you are logged in as a Kubernetes cluster administrator.
kubectl auth can-i '*' '*' --all-namespaces
- The result should be "yes".
- Verify that you can launch a pod that uses an image from Docker Hub and can reach your storage system over the pod network.
kubectl run -i --tty ping --image=busybox --restart=Never --rm -- ping <NAS management IP>
- For example:
kubectl run -i --tty ping --image=busybox --restart=Never --rm -- ping 10.64.118.157
- Verify that your NAS has a storage pool and that the iSCSI service is enabled.
- To check storage pools on your NAS, open Storage & Snapshots and go to Storage > Storage/Snapshots.
- To check iSCSI service on your NAS, open iSCSI & Fibre Channel and verify that the toggle button is on.
- Clone the git repository.
git clone https://github.com/qnap-dev/QNAP-CSI-PlugIn.git
- Enter the directory.
cd QNAP-CSI-PlugIn
- Select one of the following installation methods.
Installation by kubectl: run the following commands one at a time in order.
kubectl apply -f Deploy/Trident/namespace.yaml
kubectl apply -f Deploy/crds/tridentorchestrator_crd.yaml
kubectl apply -f Deploy/Trident/bundle.yaml
kubectl apply -f Deploy/Trident/tridentorchestrator.yaml
Installation by kustomize: run the following commands one at a time in order.
kubectl apply -k Deploy/crds
kubectl apply -k Deploy/Trident
Installation by Helm:
- Install Helm (for Ubuntu).
- Run the following commands one at a time in order.
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
- Install the CSI plugin.
helm install qnap-trident ./Helm/trident -n trident --create-namespace
- To upgrade the plugin later, run the following command.
helm upgrade qnap-trident Helm/trident/ -n trident
Note: You need VolumeSnapshot support to take snapshots. Install it with the following command.
kubectl apply -k VolumeSnapshot
Verify the installation by running the following commands one at a time.
kubectl get deployment -n trident
- The result should include trident-controller and trident-operator.
kubectl get service -n trident
- The result should include trident-csi.
Edit the file Samples/backend-config-sample.yaml or create a new one as shown below.
You must configure this file before you create a volume. Each field is required.
apiVersion: v1
kind: Secret
metadata:
  name: backend-qts-sample-secret
  namespace: trident
type: Opaque
stringData:
  username: david
  password: abcd1234
  storageAddress: 10.20.91.69
---
apiVersion: trident.qnap.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-qts-sample-config
  namespace: trident
spec:
  version: 1
  storageDriverName: qnap-iscsi
  backendName: qts-david
  networkInterfaces: ["K8s-ISCSI"]  # optional
  credentials:
    name: backend-qts-sample-secret
  debugTraceFlags:
    method: false
  storage:
    - labels:
        storage: qts-david
      serviceLevel: Any
    - labels:
        performance: premium
      features:
        tiering: Enable
        tierType: SSD
        ssdCache: "true"
      serviceLevel: SSD-Cache
    - labels:
        performance: standard
      features:
        tiering: Enable
      serviceLevel: Tiering
    - labels:
        performance: basic
      features:
        tiering: Disable
      serviceLevel: Non-Tiering
Add a backend to your orchestrator.
Edit the file Samples/backend-qts1.json or create a new one as shown below.
You must configure this file before you create a volume. Each field is required.
{
  "version": 1,
  "storageDriverName": "qnap-iscsi",
  "backendName": "qts-david",
  "storageAddress": "10.20.91.69",
  "username": "david",
  "password": "abcd1234",
  "networkInterfaces": ["K8s-ISCSI"],
  "debugTraceFlags": {"method": true},
  "storage": [
    {
      "labels": {"storage": "qts-david"},
      "serviceLevel": "Any"
    },
    {
      "labels": {"performance": "premium"},
      "features": {
        "tiering": "Enable",
        "ssdCache": "true"
      },
      "serviceLevel": "SSD-Cache"
    },
    {
      "labels": {"performance": "standard"},
      "features": {
        "tiering": "Enable"
      },
      "serviceLevel": "Tiering"
    },
    {
      "labels": {"performance": "basic"},
      "features": {
        "tiering": "Disable"
      },
      "serviceLevel": "Non-Tiering"
    }
  ]
}
Edit the file Samples/storage-class-qnap-qos.yaml or create a new one as shown below.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium
provisioner: csi.trident.qnap.io  # k8s CSI provisioner
parameters:
  selector: "performance=premium"
allowVolumeExpansion: true
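The PVC sample below uses storageClassName: basic, while the StorageClass above is named premium. A minimal sketch of a matching StorageClass, assuming the performance=basic label defined in the backend configuration above (the name and selector are illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: basic
provisioner: csi.trident.qnap.io
parameters:
  selector: "performance=basic"  # assumed to match the basic label in the backend storage section
allowVolumeExpansion: true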
Edit the file Samples/pvc-basic.yaml or create a new one as shown below.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-basic
  annotations:
    trident.qnap.io/ThinAllocate: "false"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: basic
- Make sure the NAS has a corresponding storage pool.
- Add a backend based on the YAML file you configured earlier.
kubectl apply -f <backend yaml file path>
- Check the result by running the following command.
kubectl get tridentbackendconfigs.trident.qnap.io -n trident
- Make sure the NAS has a corresponding storage pool.
- Ensure you have permission to execute tridentctl.
chmod u+x tridentctl
- Add a backend based on the JSON file you configured earlier.
./bin/tridentctl create backend -f <backend.json> -n trident
- For example:
./bin/tridentctl create backend -f Samples/backend-qts1.json -n trident
- This should take around 30 seconds or less.
- If it takes over 30 seconds and shows the error "Command terminated with exit code 1" due to timeout, check your network connection and try again.
- Check the result by running the following command.
kubectl get pods -n trident
Apply the StorageClass.
kubectl apply -f <StorageClass.yaml>
- For example:
kubectl apply -f Samples/storage-class-qnap-qos.yaml
Apply the PVC.
kubectl apply -f <pvc.yaml>
- For example:
kubectl apply -f Samples/pvc-basic.yaml
- This example creates a thick LUN. If you want to create a thin LUN, refer to the Samples/pvc-standard.yaml file.
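To confirm that the claim was provisioned and bound (not an explicit step in the original instructions, but a standard check):
kubectl get pvc pvc-basic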
To expand a PVC, edit its requested storage size.
kubectl edit pvc <pvc name>
- For example:
kubectl edit pvc pvc-basic
- After a few seconds, the capacity increases on the NAS.
- After the pod restarts, the reported capacity is updated.
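As an alternative to interactive editing, the size can be changed with kubectl patch; a minimal sketch, assuming the claim is named pvc-basic and is being expanded to an illustrative 10Gi:
kubectl patch pvc pvc-basic -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'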
- To clone a PVC, apply a clone PVC manifest (a sample manifest is sketched below).
kubectl apply -f <pvc-clone.yaml>
- To import a PVC, apply an import PVC manifest.
kubectl apply -f <pvc-import.yaml>
- To take a snapshot, apply a VolumeSnapshot manifest (see the snapshot steps below).
kubectl apply -f <vol-snapshot.yaml>
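A minimal sketch of what a clone manifest can look like, using the standard CSI dataSource mechanism; the names, size, and StorageClass are illustrative assumptions and may differ from the repository sample:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-basic-clone            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                 # must be at least the size of the source PVC
  storageClassName: basic
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-basic                # source claim to clone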
- Deploy a pod.
kubectl apply -f <pod yaml file>
- For example:
kubectl apply -f Samples/pod.yaml
- Check the result.
kubectl get pods
- Check that the connection has been mapped on the NAS by opening iSCSI & Fibre Channel.
- Use the logs to check whether the pod is running normally.
You can now mount a PVC to the pod.
Note: After deploying the pod, the only thing you can do with the sample pod is print out its timestamp (a sketch of such a pod follows).
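For reference, a minimal sketch of a pod that mounts the pvc-basic claim created earlier and prints timestamps; this is an illustrative stand-in and may differ from Samples/pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-basic                  # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      # Append a timestamp to the mounted volume every 10 seconds.
      command: ["/bin/sh", "-c", "while true; do date >> /mnt/qnap/timestamp.txt; sleep 10; done"]
      volumeMounts:
        - name: qnap-volume
          mountPath: /mnt/qnap
  volumes:
    - name: qnap-volume
      persistentVolumeClaim:
        claimName: pvc-basic       # PVC created earlier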
- Create a VolumeSnapshot from a PVC.
- Run the following commands one at a time.
kubectl apply -f <VolumeSnapshotClass.yaml>
kubectl apply -f <VolumeSnapshot.yaml>
- Check the result by running the following command; sample manifests for the two files above are sketched after this step.
kubectl get volumesnapshot
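For reference, a minimal sketch of the two manifests applied above, assuming the driver name csi.trident.qnap.io used as the StorageClass provisioner earlier; the class name, snapshot name, and source claim are illustrative:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: qnap-snapshot-class        # illustrative name
driver: csi.trident.qnap.io        # assumed to match the CSI provisioner above
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-basic-snapshot         # illustrative name
spec:
  volumeSnapshotClassName: qnap-snapshot-class
  source:
    persistentVolumeClaimName: pvc-basic   # PVC created earlier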
- Create a PVC from a snapshot.
- Run the following commands one at a time (a sample manifest for the restored claim is sketched after these steps).
kubectl apply -f <pvc-from-snapshot.yaml>
kubectl apply -f <pod2.yaml>
- Verify the PVC has been successfully created from the snapshot.
- Create a new pod and mount the snapshot-pvc in the pod.
- Access the pod and check if the PVC contains the directory you created before.
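A minimal sketch of a claim restored from a snapshot, using the standard dataSource reference; names and size are illustrative assumptions and may differ from the repository sample:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-from-snapshot          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                 # must be at least the size of the snapshotted PVC
  storageClassName: basic
  dataSource:
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
    name: pvc-basic-snapshot       # snapshot created above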
To uninstall the plugin, run the following commands one at a time in order.
kubectl delete deployment trident-operator -n trident
./bin/tridentctl uninstall -n trident
kubectl delete tridentorchestrator trident
If you installed the plugin with Helm, run the following command instead.
helm delete qnap-trident -n trident
To redeploy Trident, run the following commands one at a time in order.
kubectl delete tridentorchestrator trident
kubectl apply -f Deploy/Trident/tridentorchestrator.yaml