EdgeX Foundry

This document demonstrates how to install Yurt-Device-Controller and Yurt-EdgeX-Manager, and how to manage edge leaf devices in a cloud-native fashion, using virtual devices as an example.

For more details about these two components, please refer to Yurt-Device-Controller and Yurt-EdgeX-Manager.

If you don’t have an OpenYurt cluster on hand, you can use yurtctl to create one or to convert an existing Kubernetes cluster.
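
For example, a minimal conversion sketch. The provider value and flag names below are assumptions based on yurtctl of that era; check yurtctl convert --help for the exact flags of your version:

  # convert an existing kubeadm cluster to OpenYurt,
  # keeping the control-plane node as a cloud node (the node name is an example)
  $ yurtctl convert --provider kubeadm --cloud-nodes master-node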

Environment

  • OpenYurt v0.5.0+

  • You should first install Yurt-App-Manager, which provides the NodePool and UnitedDeployment APIs used below.

  • Deploy CoreDNS to every edge node.

  • Set the service topology of the CoreDNS service to kubernetes.io/hostname. For details, please refer to ServiceTopology; a minimal sketch follows.
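
A minimal sketch of that setting, assuming the cluster DNS service is named kube-dns in the kube-system namespace (the name may differ in your cluster):

  # make DNS queries from edge pods resolve via the CoreDNS instance on the same node
  $ kubectl annotate service kube-dns -n kube-system openyurt.io/topologyKeys='kubernetes.io/hostname'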

1. Install yurt-edgex-manager and create an EdgeX instance

Install yurt-edgex-manager:

  $ kubectl apply -f https://github.com/openyurtio/yurt-edgex-manager/releases/download/v0.2.0/yurt-edgex-manager.yaml
  # check status of yurt-edgex-manager
  $ kubectl get pods -n edgex-system | grep edgex
  edgex-controller-manager-6c99fd9f9f-b9nnk   2/2   Running   0   6d22h
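
If you prefer to block until the controller is available instead of polling, you can wait on its deployment (the deployment name is inferred from the pod name above):

  $ kubectl wait --for=condition=Available deployment/edgex-controller-manager -n edgex-system --timeout=300s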

Create a NodePool named hangzhou and join an edge node into it.

  $ export WORKER_NODEPOOL="hangzhou"
  $ export EDGE_NODE="node1"
  # create nodepool hangzhou
  $ cat <<EOF | kubectl apply -f -
  apiVersion: apps.openyurt.io/v1alpha1
  kind: NodePool
  metadata:
    name: $WORKER_NODEPOOL
  spec:
    type: Edge
  EOF
  # join edge node into nodepool hangzhou
  $ kubectl label node $EDGE_NODE apps.openyurt.io/desired-nodepool=hangzhou
  # check nodepool status
  $ kubectl get nodepool
  NAME       TYPE   READYNODES   NOTREADYNODES   AGE
  hangzhou   Edge   0            1               6d22h
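
To double-check that the node was admitted into the pool, list nodes by the apps.openyurt.io/nodepool label, which OpenYurt sets on member nodes (the deployments below also select on it):

  $ kubectl get nodes -l apps.openyurt.io/nodepool=$WORKER_NODEPOOL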

Create an EdgeX Foundry instance in the hangzhou nodepool and deploy edgex-device-virtual:

  apiVersion: device.openyurt.io/v1alpha1
  kind: EdgeX
  metadata:
    name: edgex-sample-hangzhou
  spec:
    version: jakarta
    poolname: hangzhou
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      org.edgexfoundry.service: edgex-device-virtual
    name: edgex-device-virtual
  spec:
    replicas: 1
    selector:
      matchLabels:
        org.edgexfoundry.service: edgex-device-virtual
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          org.edgexfoundry.service: edgex-device-virtual
      spec:
        hostname: edgex-device-virtual
        nodeSelector:
          apps.openyurt.io/nodepool: hangzhou
        containers:
        - name: edgex-device-virtual
          image: openyurt/device-virtual:2.1.0
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 59900
            name: "tcp-59900"
            protocol: TCP
          env:
          - name: MESSAGEQUEUE_HOST
            value: edgex-redis
          - name: SERVICE_HOST
            value: edgex-device-virtual
          envFrom:
          - configMapRef:
              name: common-variables
          startupProbe:
            tcpSocket:
              port: 59900
            periodSeconds: 1
            failureThreshold: 120
          livenessProbe:
            tcpSocket:
              port: 59900
        restartPolicy: Always
  ---
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      org.edgexfoundry.service: edgex-device-virtual
    name: edgex-device-virtual
  spec:
    ports:
    - name: "tcp-59900"
      port: 59900
      protocol: TCP
      targetPort: 59900
    selector:
      org.edgexfoundry.service: edgex-device-virtual
    type: NodePort
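
Save the manifest above to a file and apply it (the file name is just an example):

  $ kubectl apply -f edgex-hangzhou.yaml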

Check the EdgeX instance status:

  $ kubectl get edgex
  NAME                    READY   SERVICE   READYSERVICE   DEPLOYMENT   READYDEPLOYMENT
  edgex-sample-hangzhou   true    9         9              9            9
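
You can also ask EdgeX itself whether the virtual device service has registered. A sketch against the core-metadata v2 API, assuming you run it on the edge node and core-metadata listens on its default port 59881 (how the service is exposed depends on your deployment):

  # list all device services known to EdgeX core-metadata
  $ curl -s http://localhost:59881/api/v2/deviceservice/all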

2. Install yurt-device-controller

Install the CRDs of yurt-device-controller:

  $ kubectl apply -f https://raw.githubusercontent.com/openyurtio/yurt-device-controller/main/config/setup/crd.yaml
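
You can verify the CRDs are in place; expect Device, DeviceProfile and DeviceService entries, alongside the EdgeX CRD installed earlier, since they share the device.openyurt.io API group:

  $ kubectl get crd | grep device.openyurt.io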

Use a UnitedDeployment to deploy a yurt-device-controller instance in the hangzhou nodepool. It should be pointed out that we use the cluster-admin ClusterRole just for demo purposes; in production, bind a role limited to the permissions the controller actually needs.

  apiVersion: apps.openyurt.io/v1alpha1
  kind: UnitedDeployment
  metadata:
    labels:
      controller-tools.k8s.io: "1.0"
    name: ud-device
    namespace: default
  spec:
    selector:
      matchLabels:
        app: ud-device
    topology:
      pools:
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
          - key: apps.openyurt.io/nodepool
            operator: In
            values:
            - hangzhou
        replicas: 1
        tolerations:
        - operator: Exists
    workloadTemplate:
      deploymentTemplate:
        metadata:
          creationTimestamp: null
          labels:
            app: ud-device
        spec:
          selector:
            matchLabels:
              app: ud-device
          strategy: {}
          template:
            metadata:
              creationTimestamp: null
              labels:
                app: ud-device
                control-plane: controller-manager
            spec:
              containers:
              - args:
                - --health-probe-bind-address=:8081
                - --metrics-bind-address=127.0.0.1:8080
                - --leader-elect=false
                - --namespace=default
                - --v=5
                command:
                - /yurt-device-controller
                image: openyurt/yurt-device-controller:v0.2.0
                imagePullPolicy: IfNotPresent
                livenessProbe:
                  failureThreshold: 3
                  httpGet:
                    path: /healthz
                    port: 8081
                    scheme: HTTP
                  initialDelaySeconds: 15
                  periodSeconds: 20
                  successThreshold: 1
                  timeoutSeconds: 1
                name: manager
                readinessProbe:
                  failureThreshold: 3
                  httpGet:
                    path: /readyz
                    port: 8081
                    scheme: HTTP
                  initialDelaySeconds: 5
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                resources:
                  limits:
                    cpu: 100m
                    memory: 512Mi
                  requests:
                    cpu: 100m
                    memory: 512Mi
                securityContext:
                  allowPrivilegeEscalation: false
              dnsPolicy: ClusterFirst
              restartPolicy: Always
              securityContext:
                runAsUser: 65532
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: ud-rolebinding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
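
After you apply this manifest, the UnitedDeployment controller renders one Deployment per pool; you can inspect it via the app label from the template:

  # one deployment should be created for the hangzhou pool
  $ kubectl get deployment -l app=ud-device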

Check whether yurt-device-controller has been deployed successfully:

  $ kubectl get pod | grep ud-device
  ud-device-xxxxxx-sf7xz-79c9cbf4b7-mbfds   1/1   Running   0   6d22h

3. Check virtual devices synced from EdgeX

Upon start, the device-virtual-go driver automatically creates and registers 5 virtual devices of different kinds, and yurt-device-controller then syncs them to OpenYurt as Device resources. You can use kubectl to check:

  $ kubectl get device
  NAME                                     NODEPOOL   SYNCED   AGE
  hangzhou-random-binary-device            hangzhou   true     19h
  hangzhou-random-boolean-device           hangzhou   true     19h
  hangzhou-random-float-device             hangzhou   true     19h
  hangzhou-random-integer-device           hangzhou   true     19h
  hangzhou-random-unsignedinteger-device   hangzhou   true     19h
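
To see what was synced for a single device (its properties and the status reported from EdgeX), inspect the corresponding Device resource:

  $ kubectl get device hangzhou-random-boolean-device -o yaml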

4. Uninstall and cleanup

  $ export WORKER_NODEPOOL="hangzhou"
  $ export EDGE_NODE="node1"
  # 1.1 delete all device, deviceservice and deviceprofile resources
  $ kubectl delete device --all
  $ kubectl delete deviceprofile --all
  $ kubectl delete deviceservice --all
  # 1.2 uninstall yurt-device-controller
  $ kubectl delete uniteddeployment ud-device
  $ kubectl delete clusterrolebinding ud-rolebinding
  # 1.3 delete CRDs of yurt-device-controller
  $ kubectl delete -f https://raw.githubusercontent.com/openyurtio/yurt-device-controller/main/config/setup/crd.yaml
  # 2.1 delete EdgeX instance
  $ kubectl delete edgex --all
  # 2.2 uninstall yurt-edgex-manager
  $ kubectl delete -f https://github.com/openyurtio/yurt-edgex-manager/releases/download/v0.2.0/yurt-edgex-manager.yaml
  # (optional)
  # 3.1 remove node from nodepool
  $ kubectl label node $EDGE_NODE apps.openyurt.io/desired-nodepool-
  # 3.2 delete nodepool
  $ kubectl delete nodepool $WORKER_NODEPOOL
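
Finally, a quick sanity check that the cleanup went through (the exact output depends on whether the uninstall manifest also removed the edgex-system namespace):

  # should list no pods, or report that the namespace is gone
  $ kubectl get pods -n edgex-system
  # should report that no nodepools are found
  $ kubectl get nodepool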