Resource Adoption

Sometimes you may want a KubeVela application to take over existing resources, or to adopt resources that come from other sources such as Helm releases. In these cases, you can use the resource adoption capabilities in KubeVela.

By default, when a KubeVela application tries to dispatch (create or update) a resource, it first checks whether the resource belongs to itself. The check compares the values of the app.oam.dev/name and app.oam.dev/namespace labels on the resource with the application's name and namespace.
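For illustration, a resource managed by an application typically carries labels like the following (a minimal sketch; the application name website and the namespace default are made-up values for this example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    # KubeVela compares these labels with the dispatching application's
    # name and namespace to decide whether the resource belongs to it.
    app.oam.dev/name: website        # hypothetical owning application
    app.oam.dev/namespace: default   # namespace of that application
```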

If the resource does not belong to the current application (it belongs to another application or was created by someone else), the application stops the dispatch operation and reports an error. This mechanism is designed to prevent unintended edits to resources managed by other operators or systems.

If the resource is currently managed by another application, refer to the resource sharing policy and learn how to share resources across multiple applications.
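As a quick illustration (a minimal sketch, not taken from this guide; the component and ConfigMap names are placeholders), sharing is declared by adding the shared-resource policy to each application that needs the resource:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-one            # a second application with the same policy could reuse the ConfigMap
spec:
  components:
    - name: shared-config  # placeholder component name
      type: k8s-objects
      properties:
        objects:
          - apiVersion: v1
            kind: ConfigMap
            metadata:
              name: shared # placeholder resource name
  policies:
    - type: shared-resource
      name: shared-resource
      properties:
        rules:
          - selector:
              resourceTypes: ["ConfigMap"]
```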

If the resource is not managed by anyone, you can use the read-only policy or the take-over policy to let a KubeVela application adopt it.

With the read-only policy, you select which resources can be adopted by the current application. For example, in the application below, resources of the Deployment type are treated as read-only and can be adopted by the given application.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: read-only
spec:
  components:
    - name: nginx
      type: webservice
      properties:
        image: nginx
  policies:
    - type: read-only
      name: read-only
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
```

The read-only policy allows the application to read the selected resources but skips all write operations on them. If a target resource does not exist, an error is reported.

Target resources are not labeled with the application's ownership labels, so multiple applications can use the same resources with the read-only policy at the same time. Deleting the application does not affect the target resources either; the recycle process for them is skipped.

Although resources selected by the read-only policy cannot be edited through the application, health checks and the resource topology graph still work. You can therefore build "monitoring groups" for underlying resources with read-only KubeVela applications and observe them with tools like vela top or VelaUX, without making any modifications.

Hands-on Example

1. Create an nginx Deployment in Kubernetes.

```bash
kubectl create deploy nginx --image=nginx
```
2. Deploy an application with the read-only policy that selects the nginx Deployment.

```bash
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: read-only
spec:
  components:
    - name: nginx
      type: webservice
      properties:
        image: nginx
  policies:
    - type: read-only
      name: read-only
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
EOF
```
3. Check the application status.

```bash
vela status read-only
```

4. Use vela top to view resource status and topology. (screenshot: read-only-vela-top)

5. Use VelaUX to view resource status and topology. (screenshot: read-only-velaux)

If you want the KubeVela application not only to observe the underlying resources but also to manage their lifecycle, you can replace the read-only policy with the take-over policy.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: take-over
spec:
  components:
    - name: nginx-take-over
      type: k8s-objects
      properties:
        objects:
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nginx
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - type: take-over
      name: take-over
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
```

In the application above, the nginx Deployment is labeled with KubeVela's identifying labels and marked as owned by the current application. The attached scaler trait sets the replica count of the target Deployment to 3 while leaving all other fields unchanged.

Once a resource is adopted, the application controls its upgrade and deletion. Therefore, unlike the read-only policy, each resource can be managed by only one application using the take-over policy.

The take-over policy is useful when you want an application to take full control of the given resources.
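Besides resourceTypes, the policy's rule selector also accepts other fields such as resourceNames. The fragment below is an illustrative sketch (an assumption, not shown in the example above) that narrows the take-over policy of the application above to a single Deployment named nginx:

```yaml
policies:
  - type: take-over
    name: take-over
    properties:
      rules:
        - selector:
            resourceTypes: ["Deployment"]
            # resourceNames narrows the rule to specific resource names (illustrative)
            resourceNames: ["nginx"]
```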

Hands-on Example

1. Create an nginx Deployment in Kubernetes.

```bash
kubectl create deploy nginx --image=nginx
```
2. Deploy an application with the take-over policy to adopt this Deployment.

```bash
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: take-over
spec:
  components:
    - name: nginx-take-over
      type: k8s-objects
      properties:
        objects:
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nginx
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - type: take-over
      name: take-over
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
EOF
```
3. Check the application status. Other operations are the same as with the read-only policy.

```bash
vela status take-over
```

The read-only and take-over policies give you a way to adopt resources directly through the KubeVela Application API. If you prefer to build a KubeVela application from existing resources from scratch, you can use the vela adopt CLI command.

Given a set of native Kubernetes resources, the vela adopt command automatically assembles them into an application. You can try it with the following steps:

1. Create native resources to adopt.

```bash
kubectl create deploy example --image=nginx
kubectl create service clusterip example --tcp=80:80
kubectl create configmap example
kubectl create secret generic example
```

2. Run the vela adopt command to generate the adoption application automatically.

```bash
vela adopt deployment/example service/example configmap/example secret/example
```

Expected output

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.oam.dev/adopt: native
  name: example
  namespace: default
spec:
  components:
  - name: example.Deployment.example
    properties:
      objects:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: example
          namespace: default
        spec:
          progressDeadlineSeconds: 600
          replicas: 1
          revisionHistoryLimit: 10
          selector:
            matchLabels:
              app: example
          strategy:
            rollingUpdate:
              maxSurge: 25%
              maxUnavailable: 25%
            type: RollingUpdate
          template:
            metadata:
              creationTimestamp: null
              labels:
                app: example
            spec:
              containers:
              - image: nginx
                imagePullPolicy: Always
                name: nginx
                resources: {}
                terminationMessagePath: /dev/termination-log
                terminationMessagePolicy: File
              dnsPolicy: ClusterFirst
              restartPolicy: Always
              schedulerName: default-scheduler
              securityContext: {}
              terminationGracePeriodSeconds: 30
    type: k8s-objects
  - name: example.Service.example
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: example
          namespace: default
        spec:
          clusterIP: 10.43.65.46
          clusterIPs:
          - 10.43.65.46
          internalTrafficPolicy: Cluster
          ipFamilies:
          - IPv4
          ipFamilyPolicy: SingleStack
          ports:
          - name: 80-80
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: example
          sessionAffinity: None
          type: ClusterIP
    type: k8s-objects
  - name: example.config
    properties:
      objects:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example
          namespace: default
      - apiVersion: v1
        kind: Secret
        metadata:
          name: example
          namespace: default
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - example.Deployment.example
          - example.Service.example
          - example.config
    type: read-only
status: {}
```

By default, the generated application embeds all given resources into its components and then attaches a read-only policy. You can edit the returned configuration to create your own adoption application, or apply it directly with the --apply flag.

```bash
vela adopt deployment/example service/example configmap/example secret/example --apply
```

You can also set the application name you want to use.

```bash
vela adopt deployment/example service/example configmap/example secret/example --apply --app-name=adopt-example
```

Now you can use the vela status or vela status -t -d commands to show the status of the applied application.

```bash
vela status adopt-example
```

Expected output

```
About:
  Name:         adopt-example
  Namespace:    default
  Created at:   2023-01-11 14:21:21 +0800 CST
  Status:       running

Workflow:
  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: 8d8capzw7e
    name: adopt-example.Deployment.example
    type: apply-component
    phase: succeeded
  - id: 6u6c6ai1gu
    name: adopt-example.Service.example
    type: apply-component
    phase: succeeded
  - id: r847uymujz
    name: adopt-example.config
    type: apply-component
    phase: succeeded

Services:
  - Name: adopt-example.Deployment.example
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
  - Name: adopt-example.Service.example
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
  - Name: adopt-example.config
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
```
```bash
vela status adopt-example -t -d
```

```
CLUSTER       NAMESPACE     RESOURCE             STATUS    APPLY_TIME          DETAIL
local     ─── default   ─┬─ ConfigMap/example    updated   2023-01-11 14:15:34 Data: 0  Age: 6m1s
                         ├─ Secret/example       updated   2023-01-11 14:15:52 Type: Opaque  Data: 0  Age: 5m43s
                         ├─ Service/example      updated   2023-01-11 14:12:00 Type: ClusterIP  Cluster-IP: 10.43.65.46  External-IP: <none>  Port(s): 80/TCP  Age: 9m35s
                         └─ Deployment/example   updated   2023-01-11 14:11:06 Ready: 1/1  Up-to-date: 1  Available: 1  Age: 10m
```

The read-only policy only allows the application to observe resources and disallows any edits to them. If you want to make modifications, use --mode=take-over so that the adoption application uses the take-over policy instead.
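For example, the native adoption above could be generated and applied in take-over mode by switching the mode flag:

```bash
# same resources as before, but the generated application uses the take-over policy
vela adopt deployment/example service/example configmap/example secret/example --mode=take-over --apply
```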

vela adopt also supports reading native resources directly from an existing Helm release. This is helpful if you previously deployed resources through Helm.

1. For example, first deploy a mysql instance through Helm.

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install mysql bitnami/mysql
```
2. You can validate the installation through helm ls.

```bash
helm ls
```

```
NAME    NAMESPACE   REVISION    UPDATED                                 STATUS      CHART           APP VERSION
mysql   default     1           2023-01-11 14:34:36.653778 +0800 CST    deployed    mysql-9.4.6     8.0.31
```
3. Run the vela adopt command to adopt resources from the existing release. Similar to native resource adoption, you get a KubeVela application with the read-only policy.

```bash
vela adopt mysql --type helm
```

Expected output

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.oam.dev/adopt: helm
  name: mysql
  namespace: default
spec:
  components:
  - name: mysql.StatefulSet.mysql
    properties:
      objects:
      - apiVersion: apps/v1
        kind: StatefulSet
        metadata:
          name: mysql
          namespace: default
        spec:
          podManagementPolicy: ""
          replicas: 1
          selector:
            matchLabels:
              app.kubernetes.io/component: primary
              app.kubernetes.io/instance: mysql
              app.kubernetes.io/name: mysql
          serviceName: mysql
          template:
            metadata:
              annotations:
                checksum/configuration: f8f3ad4a6e3ad93ae6ed28fdb7f7b4ff9585e08fa730e4e5845db5ebe5601e4d
              labels:
                app.kubernetes.io/component: primary
                app.kubernetes.io/instance: mysql
                app.kubernetes.io/managed-by: Helm
                app.kubernetes.io/name: mysql
                helm.sh/chart: mysql-9.4.6
            spec:
              affinity:
                nodeAffinity: null
                podAffinity: null
                podAntiAffinity:
                  preferredDuringSchedulingIgnoredDuringExecution:
                  - podAffinityTerm:
                      labelSelector:
                        matchLabels:
                          app.kubernetes.io/instance: mysql
                          app.kubernetes.io/name: mysql
                      topologyKey: kubernetes.io/hostname
                    weight: 1
              containers:
              - env:
                - name: BITNAMI_DEBUG
                  value: "false"
                - name: MYSQL_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: mysql-root-password
                      name: mysql
                - name: MYSQL_DATABASE
                  value: my_database
                envFrom: null
                image: docker.io/bitnami/mysql:8.0.31-debian-11-r30
                imagePullPolicy: IfNotPresent
                livenessProbe:
                  exec:
                    command:
                    - /bin/bash
                    - -ec
                    - |
                      password_aux="${MYSQL_ROOT_PASSWORD:-}"
                      if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                          password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                      fi
                      mysqladmin status -uroot -p"${password_aux}"
                  failureThreshold: 3
                  initialDelaySeconds: 5
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                name: mysql
                ports:
                - containerPort: 3306
                  name: mysql
                readinessProbe:
                  exec:
                    command:
                    - /bin/bash
                    - -ec
                    - |
                      password_aux="${MYSQL_ROOT_PASSWORD:-}"
                      if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                          password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                      fi
                      mysqladmin status -uroot -p"${password_aux}"
                  failureThreshold: 3
                  initialDelaySeconds: 5
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                resources:
                  limits: {}
                  requests: {}
                securityContext:
                  runAsNonRoot: true
                  runAsUser: 1001
                startupProbe:
                  exec:
                    command:
                    - /bin/bash
                    - -ec
                    - |
                      password_aux="${MYSQL_ROOT_PASSWORD:-}"
                      if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                          password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                      fi
                      mysqladmin status -uroot -p"${password_aux}"
                  failureThreshold: 10
                  initialDelaySeconds: 15
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                volumeMounts:
                - mountPath: /bitnami/mysql
                  name: data
                - mountPath: /opt/bitnami/mysql/conf/my.cnf
                  name: config
                  subPath: my.cnf
              initContainers: null
              securityContext:
                fsGroup: 1001
              serviceAccountName: mysql
              volumes:
              - configMap:
                  name: mysql
                name: config
          updateStrategy:
            type: RollingUpdate
          volumeClaimTemplates:
          - metadata:
              annotations: null
              labels:
                app.kubernetes.io/component: primary
                app.kubernetes.io/instance: mysql
                app.kubernetes.io/name: mysql
              name: data
            spec:
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 8Gi
    type: k8s-objects
  - name: mysql.Service.mysql
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: mysql
          namespace: default
        spec:
          ports:
          - name: mysql
            nodePort: null
            port: 3306
            protocol: TCP
            targetPort: mysql
          selector:
            app.kubernetes.io/component: primary
            app.kubernetes.io/instance: mysql
            app.kubernetes.io/name: mysql
          sessionAffinity: None
          type: ClusterIP
    type: k8s-objects
  - name: mysql.Service.mysql-headless
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: mysql-headless
          namespace: default
        spec:
          clusterIP: None
          ports:
          - name: mysql
            port: 3306
            targetPort: mysql
          publishNotReadyAddresses: true
          selector:
            app.kubernetes.io/component: primary
            app.kubernetes.io/instance: mysql
            app.kubernetes.io/name: mysql
          type: ClusterIP
    type: k8s-objects
  - name: mysql.config
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: mysql
          namespace: default
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: mysql
          namespace: default
    type: k8s-objects
  - name: mysql.sa
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: mysql
          namespace: default
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: mysql
          namespace: default
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - mysql.StatefulSet.mysql
          - mysql.Service.mysql
          - mysql.Service.mysql-headless
          - mysql.config
          - mysql.sa
    type: read-only
status: {}
```
4. Similarly, you can apply the application to the cluster with the --apply flag, and use --mode=take-over to enforce the take-over policy so that modifications are allowed. In addition, if you want to fully adopt the resources of the Helm chart into the KubeVela application and disable Helm's management of them (to prevent multiple sources of control), you can add the --recycle flag to delete the Helm release records after the application reaches the running state.

```bash
vela adopt mysql --type helm --mode take-over --apply --recycle
```

```
resources adopted in app default/mysql
successfully clean up old helm release
```

5. You can check the application status with vela status or vela status -t -d.

```bash
vela status mysql
```

Expected output

```
About:
  Name:         mysql
  Namespace:    default
  Created at:   2023-01-11 14:40:16 +0800 CST
  Status:       running

Workflow:
  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: orq8dnqbyv
    name: mysql.StatefulSet.mysql
    type: apply-component
    phase: succeeded
  - id: k5kwoc49jv
    name: mysql.Service.mysql-headless
    type: apply-component
    phase: succeeded
  - id: p5qe1drkoh
    name: mysql.Service.mysql
    type: apply-component
    phase: succeeded
  - id: odicbhtf9a
    name: mysql.config
    type: apply-component
    phase: succeeded
  - id: o36adyqqal
    name: mysql.sa
    type: apply-component
    phase: succeeded

Services:
  - Name: mysql.StatefulSet.mysql
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
  - Name: mysql.Service.mysql-headless
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
  - Name: mysql.Service.mysql
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
  - Name: mysql.config
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
  - Name: mysql.sa
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
```
```bash
vela status mysql -t -d
```

```
CLUSTER       NAMESPACE     RESOURCE                  STATUS    APPLY_TIME          DETAIL
local     ─── default   ─┬─ ConfigMap/mysql           updated   2023-01-11 14:40:16 Data: 1  Age: 7m41s
                         ├─ Secret/mysql              updated   2023-01-11 14:40:16 Type: Opaque  Data: 2  Age: 7m41s
                         ├─ Service/mysql             updated   2023-01-11 14:40:16 Type: ClusterIP  Cluster-IP: 10.43.154.7  External-IP: <none>  Port(s): 3306/TCP  Age: 7m41s
                         ├─ Service/mysql-headless    updated   2023-01-11 14:40:16 Type: ClusterIP  Cluster-IP: None  External-IP: <none>  Port(s): 3306/TCP  Age: 7m41s
                         └─ StatefulSet/mysql         updated   2023-01-11 14:40:16 Ready: 1/1  Age: 7m41s
```
6. If you run the helm ls command, you can no longer find the original mysql Helm release, because the record has been recycled.

```bash
helm ls
```

```
NAME    NAMESPACE   REVISION    UPDATED     STATUS      CHART       APP VERSION
```

Tip

There are several ways to combine KubeVela with Helm.

If you want to keep using the Helm chart's release process and only monitor those resources with KubeVela, use the default mode (read-only) and do not recycle the Helm release secret. In this case, you can observe the resources dispatched by the Helm chart with KubeVela tools and its ecosystem (for example, viewing them on Grafana).

If you want to migrate existing resources from a Helm chart to the KubeVela application model and manage their full lifecycle there, use the take-over mode and recycle the Helm release records when applying with the --apply flag.
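Expressed as commands (reusing the mysql release from the example above), the two workflows in this tip look roughly like this:

```bash
# Keep releasing with Helm; only observe the resources from KubeVela (default read-only mode).
vela adopt mysql --type helm --apply

# Migrate to KubeVela entirely: take over the resources and recycle the Helm release records.
vela adopt mysql --type helm --mode take-over --apply --recycle
```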

By default, vela adopt reads resources from the given source (a list of native resources or a Helm chart) and groups them into different components. For resources like Deployment or StatefulSet, the original spec fields are kept. For other resources such as ConfigMap or Secret, the application does not store their data when adopting them (which also means the application will not enforce a desired state for their contents). For the special CustomResourceDefinition resource, the garbage-collect and apply-once policies are attached to the application.
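A hedged sketch of what such attached policies might look like on the generated application is shown below; the rule selectors and values are illustrative assumptions, and the exact output of vela adopt may differ:

```yaml
policies:
  - name: garbage-collect
    type: garbage-collect
    properties:
      rules:
        - selector:
            resourceTypes: ["CustomResourceDefinition"]
          strategy: never          # illustrative: keep the CRD when the application is recycled
  - name: apply-once
    type: apply-once
    properties:
      enable: true                 # illustrative: do not enforce a desired state after the first apply
```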

The conversion of resources into an application is implemented with CUE templates. You can refer to GitHub for the default template.

You can also build your own adoption rules in CUE and pass them to the vela adopt command with --adopt-template.

1. For example, create a sample Deployment.

```bash
kubectl create deploy custom-adopt --image=nginx
```
2. Create a custom rule file named my-adopt-rule.cue.
```cue
import "list"

#Resource: {
    apiVersion: string
    kind:       string
    metadata: {
        name:       string
        namespace?: string
        ...
    }
    ...
}

#Component: {
    type: string
    name: string
    properties: {...}
    dependsOn?: [...string]
    traits?: [...#Trait]
}

#Trait: {
    type: string
    properties: {...}
}

#Policy: {
    type: string
    name: string
    properties?: {...}
}

#Application: {
    apiVersion: "core.oam.dev/v1beta1"
    kind:       "Application"
    metadata: {
        name:       string
        namespace?: string
        labels?: [string]:      string
        annotations?: [string]: string
    }
    spec: {
        components: [...#Component]
        policies?: [...#Policy]
        workflow?: {...}
    }
}

#AdoptOptions: {
    mode:         *"read-only" | "take-over"
    type:         *"helm" | string
    appName:      string
    appNamespace: string
    resources: [...#Resource]
    ...
}

#Adopt: {
    $args:    #AdoptOptions
    $returns: #Application

    // adopt logics
    $returns: #Application & {
        metadata: {
            name: $args.appName
            labels: "app.oam.dev/adopt": $args.type
        }
        spec: components: [for r in $args.resources if r.kind == "Deployment" {
            type: "webservice"
            name: r.metadata.name
            properties: image: r.spec.template.spec.containers[0].image
            traits: [{
                type: "scaler"
                properties: replicas: r.spec.replicas
            }]
        }]
        spec: policies: [
            {
                type: $args.mode
                name: $args.mode
                properties: rules: [{
                    selector: componentNames: [for comp in spec.components {comp.name}]
                }]
            }]
    }
}
```

This custom adoption rule automatically recognizes Deployment resources and converts them into webservice components of a KubeVela application. It detects the replica count of the given Deployment and attaches a scaler trait to the component.

3. Run vela adopt deployment/custom-adopt --adopt-template=my-adopt-rule.cue and you will see the converted application:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.oam.dev/adopt: native
  name: custom-adopt
spec:
  components:
  - name: custom-adopt
    properties:
      image: nginx
    traits:
    - properties:
        replicas: 1
      type: scaler
    type: webservice
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - custom-adopt
    type: read-only
status: {}
```

With this capability, you can create your own rules for building applications from existing resources or Helm charts.

If you want to adopt all resources in a Kubernetes namespace in bulk, you can use the --all flag.

```bash
vela adopt --all
```

By default, this adopts all Deployment/StatefulSet/DaemonSet resources in the namespace. You can also specify a custom resource to adopt.

```bash
vela adopt mycrd --all
```

This command first lists all resources of the specified types in the namespace, then finds related resources (such as ConfigMap, Secret and Service) according to the resource topology rules and adopts them together.

Resource topology rules are written with CUE templates; the default template can be found on GitHub. With the default rules, the resources related to a Deployment/StatefulSet/DaemonSet (ConfigMap, Secret, Service, Ingress) are adopted together with it.

For example, if the cluster contains the following resources:

Resources (Deployment, ConfigMap, Service, Ingress)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test1
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      myapp: test1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        myapp: test1
    spec:
      containers:
      - image: crccheck/hello-world
        imagePullPolicy: Always
        name: test1
        ports:
        - containerPort: 8000
          name: port-8000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /test
          name: configmap-my-test
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: my-test1
        name: configmap-my-test
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-test1
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  name: test1
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    myapp: test1
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: test1
  namespace: default
spec:
  rules:
  - host: testsvc.example.com
    http:
      paths:
      - backend:
          service:
            name: test1
            port:
              number: 8000
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer: {}
```

With the vela adopt --all command, these resources are automatically adopted into an application like the following:

The adopted application

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: test1
  namespace: default
spec:
  components:
  - name: test1.Deployment.test1
    properties:
      objects:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: test1
          namespace: default
        spec: ...
    type: k8s-objects
  - name: test1.Service.test1
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: test1
          namespace: default
        spec: ...
    type: k8s-objects
  - name: test1.Ingress.test1
    properties:
      objects:
      - apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: test1
          namespace: default
        spec: ...
    type: k8s-objects
  - name: test1.config
    properties:
      objects:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: record-event
          namespace: default
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - test1.Deployment.test1
          - test1.Service.test1
          - test1.Ingress.test1
          - test1.config
    type: read-only
```

You can also build your own resource topology rules in CUE to discover custom resource relationships, and pass them to the vela adopt command with --resource-topology-rule.

```bash
vela adopt --all --resource-topology-rule=my-rule.cue
```

After adopting all the resources and applying them to the cluster, you can view the adopted applications with vela ls or on the dashboard.
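For example (the -n flag is optional and defaults to the current namespace):

```bash
# list the applications in the default namespace, including the adopted ones
vela ls -n default
```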

(Screenshot: adopted applications shown in the dashboard)

If you want to adopt Helm releases in a namespace in bulk, use the --all flag together with --type=helm.

```bash
vela adopt --all --type helm
```

That covers the resource adoption features. We hope KubeVela helps safeguard your application delivery.
