Version: v1.1

FAQ

Compare to X

What is the difference between KubeVela and Helm?

KubeVela is a platform builder for creating easy-to-use, extensible application delivery and management systems on Kubernetes. KubeVela adopts Helm as a templating engine and the standard for application packaging, but Helm is not the only templating module KubeVela supports: CUE is another first-class option.

Also, KubeVela is designed as a Kubernetes controller (i.e., it works on the server side); even for its Helm part, it installs a Helm operator.
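To illustrate CUE as a templating module, below is a minimal sketch of a ComponentDefinition that templates a Deployment with CUE. The name my-worker and the single image parameter are made up for this example, not taken from the original text.

  apiVersion: core.oam.dev/v1beta1
  kind: ComponentDefinition
  metadata:
    name: my-worker              # hypothetical name for this sketch
  spec:
    workload:
      definition:
        apiVersion: apps/v1
        kind: Deployment
    schematic:
      cue:
        template: |
          // the user-facing parameter of this component
          parameter: {
            image: string
          }
          // the workload rendered from the parameter
          output: {
            apiVersion: "apps/v1"
            kind:       "Deployment"
            spec: {
              selector: matchLabels: "app.oam.dev/component": context.name
              template: {
                metadata: labels: "app.oam.dev/component": context.name
                spec: containers: [{
                  name:  context.name
                  image: parameter.image
                }]
              }
            }
          }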

Common Questions

The number of Crossplane cloud resources currently supported in KubeVela is relatively limited. Is there a plan to speed up development and add more cloud resources?

KubeVela currently supports two ways to provision cloud resources: Crossplane and the Terraform Controller. The Terraform Controller can directly consume existing Terraform modules, so the breadth and number of cloud resources it supports are relatively large; for cloud resources that Crossplane does not support, consider using the Terraform Controller. KubeVela is adding best-practice examples for commonly used cloud resources, and by version 1.2 the common cloud resources will be available out of the box.

On the other hand, Alibaba Cloud resource support in Crossplane is also maintained by the KubeVela maintainer team. We are glad to provide more fine-grained support for commonly used cloud resources in the Crossplane project, so users who prefer Crossplane can file issues in the community to express their needs, and we will set the development plan according to user demand.
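As a reference for the Terraform path, provisioning a cloud resource looks roughly like the sketch below. The component type alibaba-rds and its properties are assumptions: they require the corresponding Terraform definition/addon to be installed in your cluster, so treat this as an illustration rather than a copy-paste recipe.

  apiVersion: core.oam.dev/v1beta1
  kind: Application
  metadata:
    name: provision-cloud-resource-sample
  spec:
    components:
      - name: sample-db
        type: alibaba-rds            # assumed Terraform-backed component type
        properties:
          instance_name: sample-db
          account_name: oam
          password: Example123       # example only; store real credentials in a Secret
          writeConnectionSecretToRef:
            name: db-conn            # connection info is written to this Secret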

Which direction will KubeVela take in the future: a GitOps-based CD tool, a workflow-based pipeline like Tekton or Argo Workflows, or a focus on the implementation of OAM?

KubeVela and OAM are inseparable: OAM is the model behind KubeVela, and as KubeVela evolves, the OAM model iterates along with it.

From the initial proposal of the OAM model, which aimed to reduce the complexity of cloud-native application management through separation of concerns, to the out-of-the-box application management engine KubeVela, and then to the hybrid-cloud application delivery features and workflow engine released in v1.1, the problem KubeVela and OAM address has always been the same: making cloud-native application delivery and management simpler.

To achieve that, we need a standardized, application-centric model that lowers the entry barrier and cognitive load for users; support for workflows, multi-cluster deployment, and similar technologies likewise exists to make application delivery smoother, more efficient, and cheaper. The whole philosophy and direction are consistent.

Overall, KubeVela is evolving into an application-centric, integrated release-pipeline platform natively designed for hybrid-cloud environments.

Error: unable to create new content in namespace cert-manager because it is being terminated

You may occasionally run into the following issue. It happens when the previous KubeVela release has not been fully deleted.

  $ vela install
  - Installing Vela Core Chart:
  install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
  Failed to install the chart with error: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
  failed to create resource
  helm.sh/helm/v3/pkg/kube.(*Client).Update.func1
  /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/kube/client.go:190
  ...
  Error: failed to create resource: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
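Before retrying, you can confirm that the namespace has finished terminating with plain kubectl (the AGE value below is illustrative):

  $ kubectl get namespace cert-manager
  NAME           STATUS        AGE
  cert-manager   Terminating   2m

Once the command returns NotFound, the deletion has completed.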

Wait a moment, then retry in a few seconds.

  $ vela install
  - Installing Vela Core Chart:
  Vela system along with OAM runtime already exist.
  Automatically discover capabilities successfully Add(0) Update(0) Delete(8)
  TYPE         CATEGORY    DESCRIPTION
  -task        workload    One-off task to run a piece of code or script to completion
  -webservice  workload    Long-running scalable service with stable endpoint to receive external traffic
  -worker      workload    Long-running scalable backend worker without network endpoint
  -autoscale   trait       Automatically scale the app following certain triggers or metrics
  -metrics     trait       Configure metrics targets to be monitored for the app
  -rollout     trait       Configure canary deployment strategy to release the app
  -route       trait       Configure route policy to the app
  -scaler      trait       Manually scale the app
  - Finished successfully.

Manually apply all WorkloadDefinition and TraitDefinition manifests to restore all capabilities.

  $ kubectl apply -f charts/vela-core/templates/defwithtemplate
  traitdefinition.core.oam.dev/autoscale created
  traitdefinition.core.oam.dev/scaler created
  traitdefinition.core.oam.dev/metrics created
  traitdefinition.core.oam.dev/rollout created
  traitdefinition.core.oam.dev/route created
  workloaddefinition.core.oam.dev/task created
  workloaddefinition.core.oam.dev/webservice created
  workloaddefinition.core.oam.dev/worker created
  $ vela workloads
  Automatically discover capabilities successfully Add(8) Update(0) Delete(0)
  TYPE         CATEGORY    DESCRIPTION
  +task        workload    One-off task to run a piece of code or script to completion
  +webservice  workload    Long-running scalable service with stable endpoint to receive external traffic
  +worker      workload    Long-running scalable backend worker without network endpoint
  +autoscale   trait       Automatically scale the app following certain triggers or metrics
  +metrics     trait       Configure metrics targets to be monitored for the app
  +rollout     trait       Configure canary deployment strategy to release the app
  +route       trait       Configure route policy to the app
  +scaler      trait       Manually scale the app
  NAME         DESCRIPTION
  task         One-off task to run a piece of code or script to completion
  webservice   Long-running scalable service with stable endpoint to receive external traffic
  worker       Long-running scalable backend worker without network endpoint

Error: ScopeDefinition exists

You may occasionally run into the following issue. It happens when there is a legacy OAM Kubernetes Runtime release, or you have deployed a ScopeDefinition before.

  $ vela install
  - Installing Vela Core Chart:
  install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
  Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
  rendered manifests contain a resource that already exists. Unable to continue with install
  helm.sh/helm/v3/pkg/action.(*Install).Run
  /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
  ...
  Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"

Delete the ScopeDefinition "healthscopes.core.oam.dev" and retry.

  $ kubectl delete ScopeDefinition "healthscopes.core.oam.dev"
  scopedefinition.core.oam.dev "healthscopes.core.oam.dev" deleted
  $ vela install
  - Installing Vela Core Chart:
  install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
  Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 16:26:41.491426 +0800 CST m=+4.026069452
  WARN: handle workload template `containerizedworkloads.core.oam.dev` failed: no template found, you will unable to use this workload capability
  WARN: handle trait template `manualscalertraits.core.oam.dev` failed: no template found, you will unable to use this trait capability
  Automatically discover capabilities successfully Add(8) Update(0) Delete(0)
  TYPE         CATEGORY    DESCRIPTION
  +task        workload    One-off task to run a piece of code or script to completion
  +webservice  workload    Long-running scalable service with stable endpoint to receive external traffic
  +worker      workload    Long-running scalable backend worker without network endpoint
  +autoscale   trait       Automatically scale the app following certain triggers or metrics
  +metrics     trait       Configure metrics targets to be monitored for the app
  +rollout     trait       Configure canary deployment strategy to release the app
  +route       trait       Configure route policy to the app
  +scaler      trait       Manually scale the app
  - Finished successfully.

You have reached your pull rate limit

This happens when you check the logs of the kubevela-vela-core Pod and find the following issue.

  $ kubectl get pod -n vela-system -l app.kubernetes.io/name=vela-core
  NAME                                 READY   STATUS   RESTARTS   AGE
  kubevela-vela-core-f8b987775-wjg25   0/1     -        0          35m

Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

You can switch to the GitHub container registry instead.

  $ docker pull ghcr.io/oam-dev/kubevela/vela-core:latest
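If the kubevela-vela-core Deployment is already stuck in ImagePullBackOff, one way to repoint it at the GitHub registry is sketched below. The container name kubevela-vela-core is an assumption, so verify it first with kubectl get deploy -n vela-system kubevela-vela-core -o jsonpath='{.spec.template.spec.containers[*].name}'.

  # hypothetical container name; verify before running
  $ kubectl set image deployment/kubevela-vela-core \
      kubevela-vela-core=ghcr.io/oam-dev/kubevela/vela-core:latest \
      -n vela-system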

Warning: Namespace cert-manager exists

If you run into the issue below, an existing cert-manager release may conflict with KubeVela over its namespace and RBAC-related resources.

  $ vela install
  - Installing Vela Core Chart:
  install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
  Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
  rendered manifests contain a resource that already exists. Unable to continue with install
  helm.sh/helm/v3/pkg/action.(*Install).Run
  /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
  ...
  /opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373
  Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"

Try the following steps to fix the problem.

  • Delete the cert-manager release
  • Delete the cert-manager namespace
  • Reinstall KubeVela

  $ helm delete cert-manager -n cert-manager
  release "cert-manager" uninstalled
  $ kubectl delete ns cert-manager
  namespace "cert-manager" deleted
  $ vela install
  - Installing Vela Core Chart:
  install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
  Successfully installed the chart, status: deployed, last deployed time = 2020-12-04 10:46:46.782617 +0800 CST m=+4.248889379
  Automatically discover capabilities successfully (no changes)
  TYPE         CATEGORY    DESCRIPTION
  task         workload    One-off task to run a piece of code or script to completion
  webservice   workload    Long-running scalable service with stable endpoint to receive external traffic
  worker       workload    Long-running scalable backend worker without network endpoint
  autoscale    trait       Automatically scale the app following certain triggers or metrics
  metrics      trait       Configure metrics targets to be monitored for the app
  rollout      trait       Configure canary deployment strategy to release the app
  route        trait       Configure route policy to the app
  scaler       trait       Manually scale the app
  - Finished successfully.

How to fix the error: MutatingWebhookConfiguration mutating-webhook-configuration exists?

If another service you have deployed installs a MutatingWebhookConfiguration named mutating-webhook-configuration, you will hit the following issue when installing KubeVela.

  - Installing Vela Core Chart:
  install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file
  Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
  rendered manifests contain a resource that already exists. Unable to continue with install
  helm.sh/helm/v3/pkg/action.(*Install).Run
  /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
  github.com/oam-dev/kubevela/pkg/commands.InstallOamRuntime
  /home/runner/work/kubevela/kubevela/pkg/commands/system.go:259
  github.com/oam-dev/kubevela/pkg/commands.(*initCmd).run
  /home/runner/work/kubevela/kubevela/pkg/commands/system.go:162
  github.com/oam-dev/kubevela/pkg/commands.NewInstallCommand.func2
  /home/runner/work/kubevela/kubevela/pkg/commands/system.go:119
  github.com/spf13/cobra.(*Command).execute
  /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
  github.com/spf13/cobra.(*Command).ExecuteC
  /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
  github.com/spf13/cobra.(*Command).Execute
  /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
  main.main
  /home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16
  runtime.main
  /opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203
  runtime.goexit
  /opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373
  Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"

To fix this issue, upgrade the KubeVela CLI vela to a version higher than v0.2.2 from the KubeVela releases page.
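If you first want to see which release owns the conflicting object, a quick check with plain kubectl (the annotation keys come straight from the error message above) is:

  $ kubectl get mutatingwebhookconfiguration mutating-webhook-configuration \
      -o jsonpath='{.metadata.annotations}'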

Operations

Autoscale: how to enable the metrics server in various Kubernetes clusters?

The autoscale trait relies on the metrics server, so it has to be enabled in your cluster. Check whether the metrics server is enabled with the commands kubectl top nodes and kubectl top pods.

If the output looks similar to the following, the metrics server is enabled.

  $ kubectl top nodes
  NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
  cn-hongkong.10.0.1.237   288m         7%     5378Mi          78%
  cn-hongkong.10.0.1.238   351m         8%     5113Mi          74%
  $ kubectl top pods
  NAME                          CPU(cores)   MEMORY(bytes)
  php-apache-65f444bf84-cjbs5   0m           1Mi
  wordpress-55c59ccdd5-lf59d    1m           66Mi

Otherwise, you need to enable the metrics server in your Kubernetes cluster manually.

  • ACK (Alibaba Cloud Container Service for Kubernetes)

The metrics server is already enabled.

  • ASK (Alibaba Cloud Serverless Kubernetes)

The metrics server can be enabled in the Operations/Add-ons section of the Alibaba Cloud console.


If you have further questions, refer to the metrics server troubleshooting guide.

  • Kind

Install the metrics server with the following command, or install the latest version.

  $ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml

Then, in the YAML opened by kubectl edit deploy -n kube-system metrics-server, add the following section under .spec.template.spec.containers.

Note: this is only an example and not intended for production use.

  command:
  - /metrics-server
  - --kubelet-insecure-tls

  • MiniKube

Enable it with the following command.

  $ minikube addons enable metrics-server

Enjoy setting up autoscale in your applications.
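As a closing sketch, attaching the autoscale trait in an Appfile might look like the snippet below. The field names min, max, and cpuPercent are assumptions that may differ across versions, so check the real schema with vela show autoscale before use.

  # vela.yaml -- field names are assumed; verify with `vela show autoscale`
  name: example-app
  services:
    express-server:
      type: webservice
      image: oamdev/testapp:v1
      autoscale:
        min: 1                 # minimum number of replicas
        max: 4                 # maximum number of replicas
        cpuPercent: 5          # target CPU utilization that triggers scaling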