Version: v1.0

Debug, Test and Dry-run

With the flexibility of defining abstractions in CUE, it's important to be able to debug, test, and dry-run the CUE-based definitions. This tutorial shows how, step by step.

Prerequisites

Please make sure the following CLIs are present in your environment:

Define Definition and Template

We recommend defining the Definition Object in two separate parts: the CRD part and the CUE template. This enables us to debug, test, and dry-run the CUE template.

Let’s name the CRD part as def.yaml.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: microservice
  annotations:
    definition.oam.dev/description: "Describes a microservice combo Deployment with Service."
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
```

And name the CUE template part def.cue. We can then use CUE commands such as `cue fmt` / `cue vet` to format and validate the CUE file.

```cue
output: {
	// Deployment
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: {
		name:      context.name
		namespace: "default"
	}
	spec: {
		selector: matchLabels: {
			"app": context.name
		}
		template: {
			metadata: {
				labels: {
					"app":     context.name
					"version": parameter.version
				}
			}
			spec: {
				serviceAccountName:            "default"
				terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
				containers: [{
					name:  context.name
					image: parameter.image
					ports: [{
						if parameter.containerPort != _|_ {
							containerPort: parameter.containerPort
						}
						if parameter.containerPort == _|_ {
							containerPort: parameter.servicePort
						}
					}]
					if parameter.env != _|_ {
						env: [
							for k, v in parameter.env {
								name:  k
								value: v
							},
						]
					}
					resources: {
						requests: {
							if parameter.cpu != _|_ {
								cpu: parameter.cpu
							}
							if parameter.memory != _|_ {
								memory: parameter.memory
							}
						}
					}
				}]
			}
		}
	}
}
// Service
outputs: service: {
	apiVersion: "v1"
	kind:       "Service"
	metadata: {
		name: context.name
		labels: {
			"app": context.name
		}
	}
	spec: {
		type: "ClusterIP"
		selector: {
			"app": context.name
		}
		ports: [{
			port: parameter.servicePort
			if parameter.containerPort != _|_ {
				targetPort: parameter.containerPort
			}
			if parameter.containerPort == _|_ {
				targetPort: parameter.servicePort
			}
		}]
	}
}
parameter: {
	version:        *"v1" | string
	image:          string
	servicePort:    int
	containerPort?: int
	// +usage=Optional duration in seconds the pod needs to terminate gracefully
	podShutdownGraceSeconds: *30 | int
	env: [string]: string
	cpu?:    string
	memory?: string
}
```
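The formatting and validation commands mentioned above can be run directly against this file. A minimal sketch (assuming the `cue` CLI is installed and on your PATH):

```shell
cue fmt def.cue   # rewrites def.cue in canonical CUE style
cue vet def.cue   # checks that the file is well-formed CUE
```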

After everything is done, the script hack/vela-templates/mergedef.sh merges def.yaml and def.cue into a complete Definition Object.

```shell
$ ./hack/vela-templates/mergedef.sh def.yaml def.cue > microservice-def.yaml
```

Debug CUE template

Use cue vet to Validate

```
$ cue vet def.cue
output.metadata.name: reference "context" not found:
    ./def.cue:6:14
output.spec.selector.matchLabels.app: reference "context" not found:
    ./def.cue:11:11
output.spec.template.metadata.labels.app: reference "context" not found:
    ./def.cue:16:17
output.spec.template.spec.containers.name: reference "context" not found:
    ./def.cue:24:13
outputs.service.metadata.name: reference "context" not found:
    ./def.cue:62:9
outputs.service.metadata.labels.app: reference "context" not found:
    ./def.cue:64:11
outputs.service.spec.selector.app: reference "context" not found:
    ./def.cue:70:11
```

The `reference "context" not found` error is common in this step, as `context` is runtime information that only exists in the KubeVela controllers. To validate the CUE template end-to-end, we can add a mock `context` in def.cue.

Note that you need to remove all mock data once you finish the validation.

```cue
... // existing template data
context: {
	name: string
}
```

Then execute the command:

```
$ cue vet def.cue
some instances are incomplete; use the -c flag to show errors or suppress this message
```

The `reference "context" not found` errors are gone, but `cue vet` only validates data types, which is not enough to ensure the logic in the template is correct. Hence we need to use `cue vet -c` for complete validation:

```
$ cue vet def.cue -c
context.name: incomplete value string
output.metadata.name: incomplete value string
output.spec.selector.matchLabels.app: incomplete value string
output.spec.template.metadata.labels.app: incomplete value string
output.spec.template.spec.containers.0.image: incomplete value string
output.spec.template.spec.containers.0.name: incomplete value string
output.spec.template.spec.containers.0.ports.0.containerPort: incomplete value int
outputs.service.metadata.labels.app: incomplete value string
outputs.service.metadata.name: incomplete value string
outputs.service.spec.ports.0.port: incomplete value int
outputs.service.spec.ports.0.targetPort: incomplete value int
outputs.service.spec.selector.app: incomplete value string
parameter.image: incomplete value string
parameter.servicePort: incomplete value int
```
It now complains that some runtime data is incomplete (because `context` and `parameter` have no values yet). Let's fill in more mock data in the def.cue file:

```cue
context: {
	name: "test-app"
}
parameter: {
	version:       "v2"
	image:         "image-address"
	servicePort:   80
	containerPort: 8000
	env: {"PORT": "8000"}
	cpu:    "500m"
	memory: "128Mi"
}
```

The command now reports no errors, which means the validation has passed:

```shell
cue vet def.cue -c
```
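As an aside, the difference between the two modes can be illustrated with a minimal standalone file (a hypothetical example, not part of this tutorial's files):

```cue
// demo.cue
name: string      // type only: accepted by `cue vet`, reported as incomplete by `cue vet -c`
port: *8080 | int // carries a default, so it is concrete even under `cue vet -c`
```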

Use cue export to Check the Rendered Resources

The `cue export` command can export the rendered result in YAML format:

```
$ cue export -e output def.cue --out yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  namespace: default
spec:
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
        version: v2
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      containers:
      - name: test-app
        image: image-address
```
```
$ cue export -e outputs.service def.cue --out yaml
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  selector:
    app: test-app
  type: ClusterIP
```

Test CUE Template with Kube package

KubeVela automatically generates internal CUE packages for all built-in Kubernetes API resources, including CRDs. You can import them in the CUE template to simplify your templates and help with validation.

There are two ways to import the internal kube packages.

  1. Import them with the fixed style `kube/<apiVersion>` and use them by Kind.

```cue
import (
	apps "kube/apps/v1"
	corev1 "kube/v1"
)

// output is validated by Deployment.
output: apps.#Deployment
outputs: service: corev1.#Service
```

    This style is easy to remember and use because it aligns with the K8s Object usage: you only need to add the prefix `kube/` before the apiVersion. However, this style is only supported inside KubeVela, so you can only debug and test it with `vela system dry-run`.

  2. Import them in the third-party-packages style. Run `vela system cue-packages` to list all built-in kube packages and see the third-party-style import paths currently supported.

```
$ vela system cue-packages
DEFINITION-NAME     IMPORT-PATH       USAGE
#Deployment         k8s.io/apps/v1    Kube Object for apps/v1.Deployment
#Service            k8s.io/core/v1    Kube Object for v1.Service
#Secret             k8s.io/core/v1    Kube Object for v1.Secret
#Node               k8s.io/core/v1    Kube Object for v1.Node
#PersistentVolume   k8s.io/core/v1    Kube Object for v1.PersistentVolume
#Endpoints          k8s.io/core/v1    Kube Object for v1.Endpoints
#Pod                k8s.io/core/v1    Kube Object for v1.Pod
```

    In fact, these are all built-in packages, but you can import them with an import path that looks like a third-party package. This way, you can debug with the `cue` CLI.

A workflow to debug with kube packages

Here's a workflow that lets you debug and test the CUE template with the `cue` CLI and then use exactly the same template in KubeVela.

  1. Create a test directory and initialize the CUE and Go modules.

```shell
mkdir cue-debug && cd cue-debug/
cue mod init oam.dev
go mod init oam.dev
touch def.cue
```
  2. Download the third-party packages by using the `cue` CLI.

In KubeVela, we don't need to download these packages as they're automatically generated from the K8s API. But for a local test, we need to use `cue get go` to fetch the Go packages and convert them to CUE-format files.

Since we use the K8s Deployment and Service, we need to download the core and apps Kubernetes modules and convert them to CUE definitions, like below:

```shell
cue get go k8s.io/api/core/v1
cue get go k8s.io/api/apps/v1
```

After that, the module directory will show the following contents:

```
├── cue.mod
│   ├── gen
│   │   └── k8s.io
│   │       ├── api
│   │       │   ├── apps
│   │       │   └── core
│   │       └── apimachinery
│   │           └── pkg
│   ├── module.cue
│   ├── pkg
│   └── usr
├── def.cue
├── go.mod
└── go.sum
```

The package import path in the CUE template would then be:

```cue
import (
	apps "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)
```
  3. Refactor the directory hierarchy.

Our goal is to test the template locally and use the same template in KubeVela. So we need to refactor our local CUE module directories a bit to align with the import paths provided by KubeVela.

Copy apps and core from cue.mod/gen/k8s.io/api to cue.mod/gen/k8s.io. (Note that we should keep the source directories apps and core in gen/k8s.io/api to avoid package dependency issues.)

```shell
cp -r cue.mod/gen/k8s.io/api/apps cue.mod/gen/k8s.io
cp -r cue.mod/gen/k8s.io/api/core cue.mod/gen/k8s.io
```

The modified module directory should look like:

```
├── cue.mod
│   ├── gen
│   │   └── k8s.io
│   │       ├── api
│   │       │   ├── apps
│   │       │   └── core
│   │       ├── apimachinery
│   │       │   └── pkg
│   │       ├── apps
│   │       └── core
│   ├── module.cue
│   ├── pkg
│   └── usr
├── def.cue
├── go.mod
└── go.sum
```

Now you can import the packages using the following paths, which align with KubeVela:

```cue
import (
	apps "k8s.io/apps/v1"
	corev1 "k8s.io/core/v1"
)
```
  4. Test and run.

Finally, we can test the CUE template that uses the kube packages.

```cue
import (
	apps "k8s.io/apps/v1"
	corev1 "k8s.io/core/v1"
)

// output is validated by Deployment.
output: apps.#Deployment
output: {
	metadata: {
		name:      context.name
		namespace: "default"
	}
	spec: {
		selector: matchLabels: {
			"app": context.name
		}
		template: {
			metadata: {
				labels: {
					"app":     context.name
					"version": parameter.version
				}
			}
			spec: {
				terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
				containers: [{
					name:  context.name
					image: parameter.image
					ports: [{
						if parameter.containerPort != _|_ {
							containerPort: parameter.containerPort
						}
						if parameter.containerPort == _|_ {
							containerPort: parameter.servicePort
						}
					}]
					if parameter.env != _|_ {
						env: [
							for k, v in parameter.env {
								name:  k
								value: v
							},
						]
					}
					resources: {
						requests: {
							if parameter.cpu != _|_ {
								cpu: parameter.cpu
							}
							if parameter.memory != _|_ {
								memory: parameter.memory
							}
						}
					}
				}]
			}
		}
	}
}

outputs: {
	service: corev1.#Service
}

// Service
outputs: service: {
	metadata: {
		name: context.name
		labels: {
			"app": context.name
		}
	}
	spec: {
		//type: "ClusterIP"
		selector: {
			"app": context.name
		}
		ports: [{
			port: parameter.servicePort
			if parameter.containerPort != _|_ {
				targetPort: parameter.containerPort
			}
			if parameter.containerPort == _|_ {
				targetPort: parameter.servicePort
			}
		}]
	}
}

parameter: {
	version:        *"v1" | string
	image:          string
	servicePort:    int
	containerPort?: int
	// +usage=Optional duration in seconds the pod needs to terminate gracefully
	podShutdownGraceSeconds: *30 | int
	env: [string]: string
	cpu?:    string
	memory?: string
}

// mock context data
context: {
	name: "test"
}

// mock parameter data
parameter: {
	image:       "test-image"
	servicePort: 8000
	env: {
		"HELLO": "WORLD"
	}
}
```

Use `cue export` to see the rendered result:

```
$ cue export def.cue --out yaml
output:
  metadata:
    name: test
    namespace: default
  spec:
    selector:
      matchLabels:
        app: test
    template:
      metadata:
        labels:
          app: test
          version: v1
      spec:
        terminationGracePeriodSeconds: 30
        containers:
        - name: test
          image: test-image
          ports:
          - containerPort: 8000
          env:
          - name: HELLO
            value: WORLD
          resources:
            requests: {}
outputs:
  service:
    metadata:
      name: test
      labels:
        app: test
    spec:
      selector:
        app: test
      ports:
      - port: 8000
        targetPort: 8000
parameter:
  version: v1
  image: test-image
  servicePort: 8000
  podShutdownGraceSeconds: 30
  env:
    HELLO: WORLD
context:
  name: test
```

Dry-Run the Application

Once the CUE template is good, we can use `vela system dry-run` to dry-run the application and check the rendered resources against a real Kubernetes cluster. This command executes exactly the same render logic as KubeVela's Application Controller and outputs the result for you.

First, we use mergedef.sh to merge the definition and CUE files:

```shell
$ mergedef.sh def.yaml def.cue > componentdef.yaml
```

Then, let’s create an Application named test-app.yaml.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: boutique
  namespace: default
spec:
  components:
    - name: frontend
      type: microservice
      properties:
        image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
        servicePort: 80
        containerPort: 8080
        env:
          PORT: "8080"
        cpu: "100m"
        memory: "64Mi"
```

Dry run the application by using vela system dry-run.

```
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Component(frontend)
---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.oam.dev/component: frontend
    app.oam.dev/name: boutique
    workload.oam.dev/type: microservice
  name: frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      containers:
      - env:
        - name: PORT
          value: "8080"
        image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
        name: frontend
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
      serviceAccountName: default
      terminationGracePeriodSeconds: 30

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
    app.oam.dev/component: frontend
    app.oam.dev/name: boutique
    trait.oam.dev/resource: service
    trait.oam.dev/type: AuxiliaryWorkload
  name: frontend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: frontend
  type: ClusterIP

---
```

`-d` (or `--definitions`) is a useful flag that lets users provide the capability definitions used in the application from local files. `dry-run` will prioritize the provided capabilities over the ones living in the cluster. If a capability is found in neither the local files nor the cluster, it will raise an error.

Live-Diff the Application

`vela system live-diff` allows users to preview what would change if they upgrade an application. It basically generates a diff between a specific revision of a running application and the result of `vela system dry-run`. The result shows the changes (added/modified/removed/no_change) of the application as well as its sub-resources, such as components and traits. `live-diff` will not make any changes to the living cluster, so it's very helpful if you want to update an application but worry about unforeseen results.

Let’s prepare an application and deploy it.

ComponentDefinitions and TraitDefinitions used in this sample are stored in ./doc/examples/live-diff/definitions.

```yaml
# app.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: livediff-demo
spec:
  components:
    - name: myweb-1
      type: myworker
      properties:
        image: "busybox"
        cmd:
          - sleep
          - "1000"
        lives: "3"
        enemies: "alien"
      traits:
        - type: myingress
          properties:
            domain: "www.example.com"
            http:
              "/": 80
        - type: myscaler
          properties:
            replicas: 2
    - name: myweb-2
      type: myworker
      properties:
        image: "busybox"
        cmd:
          - sleep
          - "1000"
        lives: "3"
        enemies: "alien"
```
```shell
kubectl apply -f ./doc/examples/live-diff/definitions
kubectl apply -f ./doc/examples/live-diff/app.yaml
```

Then, assume we want to update the application with the configuration below. To preview the changes this update would bring without actually applying the updated configuration to the cluster, we can use `live-diff`.

```yaml
# app-updated.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: livediff-demo
spec:
  components:
    - name: myweb-1
      type: myworker
      properties:
        image: "busybox"
        cmd:
          - sleep
          - "2000" # change a component property
        lives: "3"
        enemies: "alien"
      traits:
        - type: myingress
          properties:
            domain: "www.example.com"
            http:
              "/": 90 # change a trait
        # - type: myscaler # remove a trait
        #   properties:
        #     replicas: 2
    - name: myweb-2
      type: myworker
      properties: # no change on component property
        image: "busybox"
        cmd:
          - sleep
          - "1000"
        lives: "3"
        enemies: "alien"
      traits:
        - type: myingress # add a trait
          properties:
            domain: "www.example.com"
            http:
              "/": 90
    - name: myweb-3 # add a component
      type: myworker
      properties:
        image: "busybox"
        cmd:
          - sleep
          - "1000"
        lives: "3"
        enemies: "alien"
      traits:
        - type: myingress
          properties:
            domain: "www.example.com"
            http:
              "/": 90
```
```shell
vela system live-diff -f ./doc/examples/live-diff/app-updated.yaml -r livediff-demo-v1
```

`-r` (or `--revision`) specifies the name of a living ApplicationRevision with which you want to compare the updated application.

`-c` (or `--context`) specifies the number of lines shown around a change. Unchanged lines outside the context of a change are omitted. This is useful if the diff result contains a lot of unchanged content and you just want to focus on the changes.
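For example, to show only four context lines around each change (a sketch; the revision name assumes the first deployment above produced `livediff-demo-v1`):

```shell
vela system live-diff -f ./doc/examples/live-diff/app-updated.yaml \
  -r livediff-demo-v1 -c 4
```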

The diff result looks like this:

```diff
---
# Application (application-sample) has been modified(*)
---
  apiVersion: core.oam.dev/v1beta1
  kind: Application
  metadata:
    creationTimestamp: null
-   name: application-sample
+   name: livediff-demo
    namespace: default
  spec:
    components:
    - name: myweb-1
+     properties:
+       cmd:
+       - sleep
+       - "2000"
+       enemies: alien
+       image: busybox
+       lives: "3"
+     traits:
+     - properties:
+         domain: www.example.com
+         http:
+           /: 90
+       type: myingress
+     type: myworker
+   - name: myweb-2
      properties:
        cmd:
        - sleep
        - "1000"
        enemies: alien
        image: busybox
        lives: "3"
      traits:
      - properties:
          domain: www.example.com
          http:
-           /: 80
+           /: 90
        type: myingress
-     - properties:
-         replicas: 2
-       type: myscaler
      type: myworker
-   - name: myweb-2
+   - name: myweb-3
      properties:
        cmd:
        - sleep
        - "1000"
        enemies: alien
        image: busybox
        lives: "3"
+     traits:
+     - properties:
+         domain: www.example.com
+         http:
+           /: 90
+       type: myingress
      type: myworker
  status:
    batchRollingState: ""
    currentBatch: 0
    rollingState: ""
    upgradedReadyReplicas: 0
    upgradedReplicas: 0
---
## Component (myweb-1) has been modified(*)
---
  apiVersion: core.oam.dev/v1alpha2
  kind: Component
  metadata:
    creationTimestamp: null
    labels:
-     app.oam.dev/name: application-sample
+     app.oam.dev/name: livediff-demo
    name: myweb-1
  spec:
    workload:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app.oam.dev/appRevision: ""
          app.oam.dev/component: myweb-1
-         app.oam.dev/name: application-sample
+         app.oam.dev/name: livediff-demo
          workload.oam.dev/type: myworker
      spec:
        selector:
          matchLabels:
            app.oam.dev/component: myweb-1
        template:
          metadata:
            labels:
              app.oam.dev/component: myweb-1
          spec:
            containers:
            - command:
              - sleep
-             - "1000"
+             - "2000"
              image: busybox
              name: myweb-1
  status:
    observedGeneration: 0
---
### Component (myweb-1) / Trait (myingress/ingress) has been modified(*)
---
  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    labels:
      app.oam.dev/appRevision: ""
      app.oam.dev/component: myweb-1
-     app.oam.dev/name: application-sample
+     app.oam.dev/name: livediff-demo
      trait.oam.dev/resource: ingress
      trait.oam.dev/type: myingress
    name: myweb-1
  spec:
    rules:
    - host: www.example.com
      http:
        paths:
        - backend:
            serviceName: myweb-1
-           servicePort: 80
+           servicePort: 90
          path: /
---
### Component (myweb-1) / Trait (myingress/service) has been modified(*)
---
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app.oam.dev/appRevision: ""
      app.oam.dev/component: myweb-1
-     app.oam.dev/name: application-sample
+     app.oam.dev/name: livediff-demo
      trait.oam.dev/resource: service
      trait.oam.dev/type: myingress
    name: myweb-1
  spec:
    ports:
-   - port: 80
-     targetPort: 80
+   - port: 90
+     targetPort: 90
    selector:
      app.oam.dev/component: myweb-1
---
### Component (myweb-1) / Trait (myscaler/scaler) has been removed(-)
---
- apiVersion: core.oam.dev/v1alpha2
- kind: ManualScalerTrait
- metadata:
-   labels:
-     app.oam.dev/appRevision: ""
-     app.oam.dev/component: myweb-1
-     app.oam.dev/name: application-sample
-     trait.oam.dev/resource: scaler
-     trait.oam.dev/type: myscaler
- spec:
-   replicaCount: 2
---
## Component (myweb-2) has been modified(*)
---
  apiVersion: core.oam.dev/v1alpha2
  kind: Component
  metadata:
    creationTimestamp: null
    labels:
-     app.oam.dev/name: application-sample
+     app.oam.dev/name: livediff-demo
    name: myweb-2
  spec:
    workload:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app.oam.dev/appRevision: ""
          app.oam.dev/component: myweb-2
-         app.oam.dev/name: application-sample
+         app.oam.dev/name: livediff-demo
          workload.oam.dev/type: myworker
      spec:
        selector:
          matchLabels:
            app.oam.dev/component: myweb-2
        template:
          metadata:
            labels:
              app.oam.dev/component: myweb-2
          spec:
            containers:
            - command:
              - sleep
              - "1000"
              image: busybox
              name: myweb-2
  status:
    observedGeneration: 0
---
### Component (myweb-2) / Trait (myingress/ingress) has been added(+)
---
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ metadata:
+   labels:
+     app.oam.dev/appRevision: ""
+     app.oam.dev/component: myweb-2
+     app.oam.dev/name: livediff-demo
+     trait.oam.dev/resource: ingress
+     trait.oam.dev/type: myingress
+   name: myweb-2
+ spec:
+   rules:
+   - host: www.example.com
+     http:
+       paths:
+       - backend:
+           serviceName: myweb-2
+           servicePort: 90
+         path: /
---
### Component (myweb-2) / Trait (myingress/service) has been added(+)
---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   labels:
+     app.oam.dev/appRevision: ""
+     app.oam.dev/component: myweb-2
+     app.oam.dev/name: livediff-demo
+     trait.oam.dev/resource: service
+     trait.oam.dev/type: myingress
+   name: myweb-2
+ spec:
+   ports:
+   - port: 90
+     targetPort: 90
+   selector:
+     app.oam.dev/component: myweb-2
---
## Component (myweb-3) has been added(+)
---
+ apiVersion: core.oam.dev/v1alpha2
+ kind: Component
+ metadata:
+   creationTimestamp: null
+   labels:
+     app.oam.dev/name: livediff-demo
+   name: myweb-3
+ spec:
+   workload:
+     apiVersion: apps/v1
+     kind: Deployment
+     metadata:
+       labels:
+         app.oam.dev/appRevision: ""
+         app.oam.dev/component: myweb-3
+         app.oam.dev/name: livediff-demo
+         workload.oam.dev/type: myworker
+     spec:
+       selector:
+         matchLabels:
+           app.oam.dev/component: myweb-3
+       template:
+         metadata:
+           labels:
+             app.oam.dev/component: myweb-3
+         spec:
+           containers:
+           - command:
+             - sleep
+             - "1000"
+             image: busybox
+             name: myweb-3
+ status:
+   observedGeneration: 0
---
### Component (myweb-3) / Trait (myingress/ingress) has been added(+)
---
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ metadata:
+   labels:
+     app.oam.dev/appRevision: ""
+     app.oam.dev/component: myweb-3
+     app.oam.dev/name: livediff-demo
+     trait.oam.dev/resource: ingress
+     trait.oam.dev/type: myingress
+   name: myweb-3
+ spec:
+   rules:
+   - host: www.example.com
+     http:
+       paths:
+       - backend:
+           serviceName: myweb-3
+           servicePort: 90
+         path: /
---
### Component (myweb-3) / Trait (myingress/service) has been added(+)
---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   labels:
+     app.oam.dev/appRevision: ""
+     app.oam.dev/component: myweb-3
+     app.oam.dev/name: livediff-demo
+     trait.oam.dev/resource: service
+     trait.oam.dev/type: myingress
+   name: myweb-3
+ spec:
+   ports:
+   - port: 90
+     targetPort: 90
+   selector:
+     app.oam.dev/component: myweb-3
```