Edge Local Storage

1. Check the local storage resources on the node

Check the relationship between the existing block devices and the nodes they belong to, so you know which devices on which nodes can be used for local storage.
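For example, list the block devices on the edge node and the node names in the cluster so you can map devices to nodes. The device and node names used throughout this document (/dev/vdb, /dev/vdc, cn-zhangjiakou.192.168.3.114, cn-beijing.192.168.3.35) come from the test cluster and will differ in your environment:

  # On the edge node: list available block devices
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

  # From a machine with cluster access: list node names
  kubectl get nodes -o wide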

2. Create Configmap

Create a ConfigMap in the cluster. Below is a relatively generic ConfigMap that configures local storage resources. For details, see the node-resource-manager documentation.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: node-resource-topo
    namespace: kube-system
  data:
    volumegroup: |-
      volumegroup:
      - name: volumegroup1
        key: kubernetes.io/hostname
        operator: In
        value: cn-zhangjiakou.192.168.3.114
        topology:
          type: device
          devices:
          - /dev/vdb
          - /dev/vdc
    quotapath: |-
      quotapath:
      - name: /mnt/path1
        key: kubernetes.io/hostname
        operator: In
        value: cn-beijing.192.168.3.35
        topology:
          type: device
          options: prjquota
          fstype: ext4
          devices:
          - /dev/vdb

The previous configuration provides the following functions:

  • In the test cluster, we use two block devices, /dev/vdb and /dev/vdc, to create an LVM VolumeGroup on the worker node cn-zhangjiakou.192.168.3.114. The device list may include paths that do not exist on the node, because the plugin automatically ignores such paths during node initialization.

  • Meanwhile, we format the block device /dev/vdb on the worker node cn-beijing.192.168.3.35 with the prjquota option and mount it at /mnt/path1. Subdirectories created under this path can then each be given a maximum quota. The device list here may also include paths that do not exist; the component automatically selects the first existing block device for formatting and mounting (see the verification sketch after this list).
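After node-resource-manager is installed (step 3 below) and has reconciled this ConfigMap, you can verify the result directly on the nodes. A minimal check run on the respective nodes; the VolumeGroup name and mount path come from the ConfigMap above:

  # On cn-zhangjiakou.192.168.3.114: the LVM VolumeGroup built from /dev/vdb and /dev/vdc
  vgs volumegroup1
  pvs

  # On cn-beijing.192.168.3.35: /dev/vdb should be mounted at /mnt/path1 with project quota enabled
  findmnt /mnt/path1
  mount | grep prjquota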

3. Install node-resource-manager

  kubectl apply -f https://raw.githubusercontent.com/openyurtio/node-resource-manager/main/deploy/nrm.yaml
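To confirm the component is running before moving on, check its pods. The namespace and pod name prefix below are assumptions about the nrm.yaml manifest; adjust them if your deployment differs:

  # Assumes nrm.yaml deploys node-resource-manager into kube-system
  kubectl get pods -n kube-system | grep node-resource-manager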

4. Deploy an application in the cluster (with LVM)

Create storageclass

  cat <<EOF | kubectl apply -f -
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-local
  provisioner: localplugin.csi.alibabacloud.com
  parameters:
    volumeType: LVM
    vgName: volumegroup1
    fsType: ext4
    lvmType: "striping"
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
  allowVolumeExpansion: true
  EOF

Parameters: vgName refers to the VolumeGroup defined in the node-resource-topo ConfigMap, named volumegroup1.
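You can confirm that the StorageClass exists and that it uses the WaitForFirstConsumer binding mode, which delays volume provisioning until a pod using the PVC is scheduled to a node:

  # VOLUMEBINDINGMODE should show WaitForFirstConsumer
  kubectl get storageclass csi-local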

Create PVC

  cat << EOF | kubectl apply -f -
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: lvm-pvc
    annotations:
      volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.3.114
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
    storageClassName: csi-local
  EOF

You need to specify the node where the storage is located in the PVC's volume.kubernetes.io/selected-node annotation.
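Because the StorageClass uses WaitForFirstConsumer, the PVC will stay in the Pending state until a pod that uses it is scheduled in the next step:

  # Expect STATUS to be Pending until the workload below is created
  kubectl get pvc lvm-pvc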

Create application

  cat << EOF | kubectl apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: deployment-lvm
    labels:
      app: nginx
  spec:
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:1.7.9
          volumeMounts:
          - name: lvm-pvc
            mountPath: "/data"
        volumes:
        - name: lvm-pvc
          persistentVolumeClaim:
            claimName: lvm-pvc
  EOF
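Once the Deployment is running, the PVC should become Bound and the LVM volume should be mounted at /data inside the container. A quick verification (replace the pod name placeholder with the actual pod name):

  kubectl get pvc lvm-pvc                        # STATUS should now be Bound
  kubectl get pods -l app=nginx                  # pod created by deployment-lvm
  kubectl exec <nginx-pod-name> -- df -h /data   # the 2Gi LVM volume mounted at /data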

Above, we have completed the basic use of local storage with LVM. The QuotaPath mode works in much the same way; only the StorageClass needs to be changed.
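For reference, a QuotaPath StorageClass might look like the sketch below. The parameter names volumeType: QuotaPath and rootPath are assumptions here, not confirmed by this document; check the localplugin.csi.alibabacloud.com documentation for the exact keys before using it:

  cat <<EOF | kubectl apply -f -
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-local-quota
  provisioner: localplugin.csi.alibabacloud.com
  parameters:
    volumeType: QuotaPath    # assumption: volume type name for quota-path volumes
    rootPath: /mnt/path1     # assumption: parameter key; value is the quota path from the ConfigMap
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
  allowVolumeExpansion: true
  EOF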