Distribute Reference Objects

This section requires you to know the basics of how to deploy multi-cluster applications with policies and workflows.

You can reference and distribute existing Kubernetes objects with KubeVela in the following scenarios:

  • Copying secrets from the hub cluster into managed clusters.
  • Promoting deployments from canary clusters into production clusters.
  • Using the Kubernetes apiserver as the control plane and storing all Kubernetes object data in external databases, then dispatching that data into the actual managed clusters.

Besides, you can also refer to Kubernetes objects from remote URL links.

To use existing Kubernetes objects in a component, use the ref-objects typed component and declare which resources you want to refer to. In the following example, the secret image-credential-to-copy in the examples namespace is taken as the source object for the component. The topology policy then dispatches it into the hangzhou clusters.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-objects-example
  namespace: examples
spec:
  components:
    - name: image-pull-secrets
      type: ref-objects
      properties:
        objects:
          - resource: secret
            name: image-credential-to-copy
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusterLabelSelector:
          region: hangzhou
```

If your source Kubernetes objects come from remote URLs, you can refer to them in the component properties as follows. A remote URL file can include multiple resources as well.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app
  namespace: default
spec:
  components:
    - name: busybox
      type: ref-objects
      properties:
        urls: ["https://gist.githubusercontent.com/Somefive/b189219a9222eaa70b8908cf4379402b/raw/e603987b3e0989e01e50f69ebb1e8bb436461326/example-busybox-deployment.yaml"]
```

The simplest way to specify resources is to directly use resource: secret or resource: deployment to describe the kind of resource. If no name or labelSelector is set, the application will try to find a resource with the same name as the component name in the application’s namespace. You can also explicitly specify name and namespace for the target resource.
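For instance, a sketch of a component that names its source object explicitly, reusing the secret and namespace from the example above (the surrounding Application fields are omitted for brevity):

```yaml
components:
  - name: image-pull-secrets
    type: ref-objects
    properties:
      objects:
        - resource: secret
          name: image-credential-to-copy # explicit name of the source object
          namespace: examples            # explicit source namespace
```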

In addition to name and namespace, you can specify the cluster field to let the application component refer to resources in managed clusters. You can also use labelSelector to select resources, instead of matching them by name.

In the following example, the application selects all deployments in the hangzhou-1 cluster, inside the examples namespace, that match the desired labels. The application then copies these deployments into the hangzhou-2 cluster.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-objects-duplicate-deployments
  namespace: examples
spec:
  components:
    - name: duplicate-deployment
      type: ref-objects
      properties:
        objects:
          # select all deployments in the `examples` namespace in cluster
          # `hangzhou-1` that match the labelSelector
          - resource: deployment
            cluster: hangzhou-1
            labelSelector:
              need-duplicate: "true"
  policies:
    - name: topology-hangzhou-2
      type: topology
      properties:
        clusters: ["hangzhou-2"]
```

In some cases, you might want to restrict the scope of resources an application can access. You can set --ref-objects-available-scope to namespace or cluster in the KubeVela controller’s bootstrap parameters, to restrict applications to referencing only resources inside the same namespace or the same cluster.
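As a sketch, assuming an installation where controller flags are passed as container args, the flag could be set on the KubeVela controller Deployment like this (the field layout below is illustrative, not the full Deployment):

```yaml
# Fragment of the KubeVela controller Deployment: restrict ref-objects
# components to resources in the application's own namespace.
spec:
  template:
    spec:
      containers:
        - name: kubevela
          args:
            - "--ref-objects-available-scope=namespace"
```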

The override policy can normally be used to override properties defined in components and traits, but ref-objects components do not carry such properties themselves.

If you want to override the configuration of a ref-objects typed component, you can use traits. The implicit main workload is the first referenced object, and trait patches are applied to it. The following example demonstrates how to set the replica number for the referenced deployment when deploying it into the hangzhou clusters.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-objects-multiple-resources
  namespace: examples
spec:
  components:
    - name: nginx-ref-multiple-resources
      type: ref-objects
      properties:
        objects:
          - resource: deployment
          - resource: service
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusterLabelSelector:
          region: hangzhou
```

There are several commonly used traits that can be used together with ref-objects, particularly for Deployments.

The container-image trait can be used to change the default image settings declared in the original deployment.

By default, container-image replaces the image in the main container (the container that uses the name of the component).

```yaml
traits:
  - type: container-image
    properties:
      image: busybox:1.34.0
```

You can modify other containers by setting the containerName field.

```yaml
traits:
  - type: container-image
    properties:
      image: busybox:1.34.0
      containerName: sidecar-nginx
```

You can modify the imagePullPolicy as well.

```yaml
traits:
  - type: container-image
    properties:
      image: busybox:1.34.0
      containerName: sidecar-nginx
      imagePullPolicy: IfNotPresent
```

Patching multiple containers is also supported.

```yaml
traits:
  - type: container-image
    properties:
      containers:
        - containerName: busybox
          image: busybox:1.34.0
          imagePullPolicy: IfNotPresent
        - containerName: sidecar-nginx
          image: nginx:1.20
```

The command trait can be used to modify the original command run in the deployment’s pods.

```yaml
traits:
  - type: command
    properties:
      command: ["sleep", "8640000"]
```

The above configuration patches the main container (the container that uses the name of the component). If you would like to modify another container, use the containerName field.

```yaml
traits:
  - type: command
    properties:
      command: ["sleep", "8640000"]
      containerName: sidecar-nginx
```

If you want to replace the existing args in the container, instead of the command, use the args parameter.

```yaml
traits:
  - type: command
    properties:
      args: ["86400"]
```

If you want to append args to or delete args from the existing args, use the addArgs/delArgs parameters. This can be useful if you have many args to manage.

```yaml
traits:
  - type: command
    properties:
      addArgs: ["86400"]
```

```yaml
traits:
  - type: command
    properties:
      delArgs: ["86400"]
```

You can also configure commands in multiple containers.

```yaml
traits:
  - type: command
    properties:
      containers:
        - containerName: busybox
          command: ["sleep", "8640000"]
        - containerName: sidecar-nginx
          args: ["-q"]
```

With the env trait, you can easily manipulate the declared environment variables.

For example, the following usage shows how to set multiple environment variables in the main container (the container that uses the component’s name). If an environment variable does not exist, it will be added; if it exists, it will be updated.

```yaml
traits:
  - type: env
    properties:
      env:
        key_first: value_first
        key_second: value_second
```

You can remove existing environment variables by setting the unset field.

```yaml
traits:
  - type: env
    properties:
      unset: ["key_existing_first", "key_existing_second"]
```

If you would like to clear all the existing environment variables first, and then add new variables, use replace: true.

```yaml
traits:
  - type: env
    properties:
      env:
        key_first: value_first
        key_second: value_second
      replace: true
```

If you want to modify the environment variable in other containers, use the containerName field.

```yaml
traits:
  - type: env
    properties:
      env:
        key_first: value_first
        key_second: value_second
      containerName: sidecar-nginx
```

You can set environment variables in multiple containers as well.

```yaml
traits:
  - type: env
    properties:
      containers:
        - containerName: busybox
          env:
            key_for_busybox_first: value_first
            key_for_busybox_second: value_second
        - containerName: sidecar-nginx
          env:
            key_for_nginx_first: value_first
            key_for_nginx_second: value_second
```

To add, update, or remove labels or annotations on the workload (such as a Kubernetes Deployment), use the labels or annotations trait.

```yaml
traits:
  # the `labels` trait will add/delete label key/value pairs on the
  # labels of the workload and the template inside the spec of the workload (if it exists)
  # 1. if the original labels contain the key, the value will be overridden
  # 2. if the original labels do not contain the key, the value will be added
  # 3. if the original labels contain the key and the value is null, the key will be removed
  - type: labels
    properties:
      added-label-key: added-label-value
      label-key: modified-label-value
      to-delete-label-key: null
```
```yaml
traits:
  # the `annotations` trait will add/delete annotation key/value pairs on the
  # annotations of the workload and the template inside the spec of the workload (if it exists)
  # 1. if the original annotations contain the key, the value will be overridden
  # 2. if the original annotations do not contain the key, the value will be added
  # 3. if the original annotations contain the key and the value is null, the key will be removed
  - type: annotations
    properties:
      added-annotation-key: added-annotation-value
      annotation-key: modified-annotation-value
      to-delete-annotation-key: null
```

Beyond the traits above, a more powerful but more complex way to modify the original resources is to use the json-patch or json-merge-patch trait. They follow RFC 6902 and RFC 7386, respectively. Usage examples are shown below.

```yaml
traits:
  # the json patch can be used to add, replace and delete fields
  # the following patch will
  # 1. set deployment replicas to 3
  # 2. set `pod-label-key` to `pod-label-modified-value` in pod labels
  # 3. delete `to-delete-label-key` in pod labels
  # 4. add a sidecar container to the pod
  - type: json-patch
    properties:
      operations:
        - op: add
          path: "/spec/replicas"
          value: 3
        - op: replace
          path: "/spec/template/metadata/labels/pod-label-key"
          value: pod-label-modified-value
        - op: remove
          path: "/spec/template/metadata/labels/to-delete-label-key"
        - op: add
          path: "/spec/template/spec/containers/1"
          value:
            name: busybox-sidecar
            image: busybox:1.34
            command: ["sleep", "864000"]
```
```yaml
traits:
  # the json merge patch can be used to add, replace and delete fields
  # the following patch will
  # 1. add `deploy-label-key` to deployment labels
  # 2. set deployment replicas to 3
  # 3. set `pod-label-key` to `pod-label-modified-value` in pod labels
  # 4. delete `to-delete-label-key` in pod labels
  # 5. reset `containers` for the pod
  - type: json-merge-patch
    properties:
      metadata:
        labels:
          deploy-label-key: deploy-label-added-value
      spec:
        replicas: 3
        template:
          metadata:
            labels:
              pod-label-key: pod-label-modified-value
              to-delete-label-key: null
          spec:
            containers:
              - name: busybox-new
                image: busybox:1.34
                command: ["sleep", "864000"]
```

The general idea is to use override policies to override traits, so that you can distribute the referenced objects with different traits for different clusters.

Assume we’re distributing the following Deployment YAML to multiple clusters:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo
  name: demo
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - image: oamdev/testapp:v1
          name: demo
```

We can specify the following topology policies.

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: cluster-beijing
  namespace: demo
type: topology
properties:
  clusters: ["<clusterid1>"]
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: cluster-hangzhou
  namespace: demo
type: topology
properties:
  clusters: ["<clusterid2>"]
```

Then we can use override policies to apply different traits to the referenced objects.

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-replic-beijing
  namespace: demo
type: override
properties:
  components:
    - name: "demo"
      traits:
        - type: scaler
          properties:
            replicas: 3
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-replic-hangzhou
  namespace: demo
type: override
properties:
  components:
    - name: "demo"
      traits:
        - type: scaler
          properties:
            replicas: 5
```

The workflow can be defined like:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: Workflow
metadata:
  name: deploy-demo
  namespace: demo
steps:
  - type: deploy
    name: deploy-beijing
    properties:
      policies: ["override-replic-beijing", "cluster-beijing"]
  - type: deploy
    name: deploy-hangzhou
    properties:
      policies: ["override-replic-hangzhou", "cluster-hangzhou"]
```

As a result, we can combine them and trigger the final deployment with the following application:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo
  namespace: demo
  annotations:
    app.oam.dev/publishVersion: version1
spec:
  components:
    - name: demo
      type: ref-objects
      properties:
        objects:
          - apiVersion: apps/v1
            kind: Deployment
            name: demo
  workflow:
    ref: deploy-demo
```

With the help of KubeVela, you can reference and distribute any Kubernetes resources across multiple clusters.

Last updated on Aug 4, 2023 by Daniel Higuero