
Patch in Definitions

When writing definitions, we sometimes need to patch the corresponding components or traits. You can use the patch capability when writing trait definitions or workflow step definitions.

By default, KubeVela merges patched values using CUE's merge semantics. However, CUE cannot currently handle conflicting fields.

For example, if replicas=5 is already set on a component instance, a trait that then attempts to patch the replicas field will fail. We therefore recommend planning ahead and avoiding duplicate fields between components and traits.
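For illustration, here is a minimal sketch of a trait that would run into this conflict (the trait name set-replicas and its parameter are hypothetical, not a built-in capability):

```yaml
# Hypothetical trait for illustration: it patches spec.replicas directly.
# If the component instance has already set replicas (e.g. replicas=5),
# CUE unification sees two different concrete values and the patch fails.
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: set-replicas
spec:
  appliesToWorkloads:
    - deployments.apps
  schematic:
    cue:
      template: |
        patch: spec: replicas: parameter.replicas
        parameter: replicas: int
```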

But in some cases, we do need to overwrite fields that have already been assigned a value. For example, when setting up resources across multiple environments, we may want the envs to differ per environment: i.e., the default env is MODE=prod, and in the test environment it needs to be changed to MODE=test DEBUG=true.

In this case, we can apply the following application:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: deploy-with-override
spec:
  components:
    - name: nginx-with-override
      type: webservice
      properties:
        image: nginx
        env:
          - name: MODE
            value: prod
  policies:
    - name: test
      type: topology
      properties:
        clusters: ["local"]
        namespace: test
    - name: prod
      type: topology
      properties:
        clusters: ["local"]
        namespace: prod
    - name: override-env
      type: override
      properties:
        components:
          - name: nginx-with-override
            traits:
              - type: env
                properties:
                  env:
                    MODE: test
                    DEBUG: "true"
  workflow:
    steps:
      - type: deploy
        name: deploy-test
        properties:
          policies: ["test", "override-env"]
      - type: deploy
        name: deploy-prod
        properties:
          policies: ["prod"]
```

After deploying the application, you can see that in the test namespace, the envs of the nginx application are as follows:

```yaml
spec:
  containers:
    - env:
        - name: MODE
          value: test
        - name: DEBUG
          value: "true"
```

At the same time, in the prod namespace, the envs are as follows:

```yaml
spec:
  containers:
    - env:
        - name: MODE
          value: prod
```

The deploy-test step deploys nginx to the test namespace. At the same time, the env trait overwrites the matching envs using its patch strategy, setting MODE=test and DEBUG=true in the test namespace, while nginx in the prod namespace retains the original MODE=prod env.

KubeVela provides a series of patch strategies to help resolve such conflicts. When writing patch traits and workflow steps, you can use these strategies to handle conflicting values. Note that patch strategies are not an official CUE feature, but an extension developed by KubeVela.

For the usage of all patch strategies, please refer to Patch Strategy.
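As a quick taste, one commonly used strategy is +patchKey, which merges list elements by a key field instead of by index. Below is a minimal sketch (the trait name add-env is illustrative; KubeVela also ships a built-in env trait with more features):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: add-env
spec:
  appliesToWorkloads:
    - deployments.apps
  podDisruptive: true
  schematic:
    cue:
      template: |
        patch: spec: template: spec: {
          // merge containers by their name instead of by list index
          // +patchKey=name
          containers: [{
            name: context.name
            // merge env entries by name: an entry with the same name is
            // overwritten instead of causing a conflict
            // +patchKey=name
            env: [{
              name:  parameter.name
              value: parameter.value
            }]
          }]
        }
        parameter: {
          name:  string
          value: string
        }
```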

Patching is a very common pattern in trait definitions: app operators can amend/patch attributes of the component instance or its traits to enable certain operational features such as sidecars or node affinity rules (and this is done before the resources are applied to the target cluster).

This pattern is extremely useful when the component definition is provided by a third-party component provider (e.g., a software distributor), so app operators do not have the privilege to change its template.

Below is an example for node-affinity trait:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "affinity specify node affinity and toleration"
  name: node-affinity
spec:
  appliesToWorkloads:
    - deployments.apps
  podDisruptive: true
  schematic:
    cue:
      template: |
        patch: {
          spec: template: spec: {
            if parameter.affinity != _|_ {
              affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
                // +patchStrategy=retainKeys
                matchExpressions: [
                  for k, v in parameter.affinity {
                    key:      k
                    operator: "In"
                    values:   v
                  },
                ]
              }]
            }
            if parameter.tolerations != _|_ {
              // +patchStrategy=retainKeys
              tolerations: [
                for k, v in parameter.tolerations {
                  effect:   "NoSchedule"
                  key:      k
                  operator: "Equal"
                  value:    v
                },
              ]
            }
          }
        }
        parameter: {
          affinity?: [string]: [...string]
          tolerations?: [string]: string
        }
```

In patch, we declare the fields of the component object that this trait will patch.

The patch trait above assumes the target component instance has a spec.template.spec.affinity field, so we use appliesToWorkloads to ensure the trait only applies to workload types that have this field. Meanwhile, we use // +patchStrategy=retainKeys to override conflicting fields in the original component instance.

Another important field is podDisruptive. This trait patches the pod template, so a change to any of its fields will cause the pods to restart. We should set podDisruptive to true to tell users that applying this trait will restart the pods.

Now users can declare that they want to add node affinity rules to the component instance as below:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "gateway"
          properties:
            domain: testsvc.example.com
            http:
              "/": 8000
        - type: "node-affinity"
          properties:
            affinity:
              server-owner: ["owner1", "owner2"]
              resource-pool: ["pool1", "pool2", "pool3"]
            tolerations:
              resource-pool: "broken-pool1"
              server-owner: "old-owner"
```

Note: the patchOutputs capability described below is available since KubeVela v1.4.

You can also patch other traits by using patchOutputs in the definition. For example:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: patch-annotation
spec:
  schematic:
    cue:
      template: |
        patchOutputs: {
          ingress: {
            metadata: annotations: {
              "kubernetes.io/ingress.class": "istio"
            }
          }
        }
```

The patch trait above assumes that the component it is attached to has other traits that produce an ingress resource. The patch trait will patch an Istio annotation onto that ingress resource.

We can deploy the following application:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "gateway"
          properties:
            domain: testsvc.example.com
            http:
              "/": 8000
        - type: "patch-annotation"
          properties:
            name: "patch-annotation-trait"
```

The ingress resource now looks like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
    - host: testsvc.example.com
      http:
        paths:
          - backend:
              service:
                name: express-server
                port:
                  number: 8000
            path: /
            pathType: ImplementationSpecific
```

Note: a trait of this kind must come after the traits it patches, so place it last in the trait list.

You can even write a for-loop in a patch trait. Below is an example that patches specific annotations onto all of a component's resources:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: patch-for-argo
spec:
  schematic:
    cue:
      template: |
        patch: {
          metadata: annotations: {
            "argocd.argoproj.io/compare-options": "IgnoreExtraneous"
            "argocd.argoproj.io/sync-options":    "Prune=false"
          }
        }
        patchOutputs: {
          for k, v in context.outputs {
            "\(k)": {
              metadata: annotations: {
                "argocd.argoproj.io/compare-options": "IgnoreExtraneous"
                "argocd.argoproj.io/sync-options":    "Prune=false"
              }
            }
          }
        }
```

This example comes from a real-world case.

When you use op.#ApplyComponent in a custom workflow step definition, you can patch the component or its traits in the patch field.

For example, when using Istio for a canary release, you can add release-name annotations to the component in the patch: workload field of op.#ApplyComponent; meanwhile, you can change the traffic rules and destination rules in patch: traits: <trait-name>.
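Schematically, the patch field sits next to value in op.#ApplyComponent. Here is a minimal sketch (the component name "my-component", the trait name "ingress", and the annotation key are illustrative, and this assumes it runs inside a workflow step's CUE template):

```cue
import ("vela/op")

// load all components of the application
comps: op.#Load

apply: op.#ApplyComponent & {
	// "my-component" is an illustrative component name
	value: comps.value["my-component"]
	patch: {
		// patch the component's own workload
		workload: metadata: annotations: "example.com/applied-by": context.name
		// patch one of the component's traits, addressed by trait name
		traits: "ingress": metadata: annotations: "example.com/applied-by": context.name
	}
}
```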

Following is a real example of canary rollout in a custom workflow step:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
  name: canary-rollout
  namespace: vela-system
spec:
  schematic:
    cue:
      template: |-
        import ("vela/op")
        parameter: {
          batchPartition: int
          traffic: weightedTargets: [...{
            revision: string
            weight:   int
          }]
        }
        comps__: op.#Load
        compNames__: [ for name, c in comps__.value {name}]
        comp__: compNames__[0]
        apply: op.#ApplyComponent & {
          value: comps__.value[comp__]
          patch: {
            workload: {
              // +patchStrategy=retainKeys
              metadata: annotations: {
                "rollout": context.name
              }
            }
            traits: "rollout": {
              spec: rolloutPlan: batchPartition: parameter.batchPartition
            }
            traits: "virtualService": {
              spec:
                // +patchStrategy=retainKeys
                http: [
                  {
                    route: [
                      for i, t in parameter.traffic.weightedTargets {
                        destination: {
                          host:   comp__
                          subset: t.revision
                        }
                        weight: t.weight
                      },
                    ]
                  },
                ]
            }
            traits: "destinationRule": {
              // +patchStrategy=retainKeys
              spec: {
                host: comp__
                subsets: [
                  for i, t in parameter.traffic.weightedTargets {
                    name:   t.revision
                    labels: {"app.oam.dev/revision": t.revision}
                  },
                ]
              }
            }
          }
        }
        applyRemaining: op.#ApplyRemaining & {
          exceptions: [comp__]
        }
```

After deploying the above definition, you can apply the following workflow to control the canary rollout:

```yaml
...
  workflow:
    steps:
      - name: rollout-1st-batch
        type: canary-rollout
        properties:
          batchPartition: 0
          traffic:
            weightedTargets:
              - revision: reviews-v1
                weight: 90
              - revision: reviews-v2
                weight: 10
      - name: manual-approval
        type: suspend
      - name: rollout-rest
        type: canary-rollout
        properties:
          batchPartition: 1
          traffic:
            weightedTargets:
              - revision: reviews-v2
                weight: 100
...
```

In the first and third steps, we declare different revisions and weights in traffic. The canary-rollout step definition patches these user-declared revisions and weights onto the traits, thereby controlling the progressive rollout through the workflow.

For more details on progressive releases with KubeVela and Istio, please refer to Progressive Rollout with Istio.
