
FAQ

Refer to https://github.com/kubevela/kubevela/issues/1662 for the detailed comparison.

KubeVela natively supports Crossplane, since Crossplane resources are already CRDs, while Terraform is not a CRD controller, so the KubeVela community authored a Terraform controller for the integration. You can choose either of them as you wish.

  • OAM (Open Application Model) is the model behind KubeVela. It provides a platform-agnostic application model, including best practices and methodology, for different vendors to follow. The evolution of the model currently depends primarily on the practices of KubeVela.
  • KubeVela is the control plane running on Kubernetes. It works as a CRD controller and brings the OAM model into your cloud-native PaaS, along with many addon capabilities. KubeVela focuses mainly on application delivery; the goal is to make deploying and operating applications across today’s hybrid, multi-cloud environments easier, faster, and more reliable.

You can use https://kubevela.net/ as a faster alternative.

By default, the community uses images from the Docker registry (Docker Hub) for installation. You can use the following alternatives; a sketch of pointing the installation at a mirror follows the list:

  1. You can use the GitHub Container Registry; check the list of official images for more details. There are two kinds of format:
  • Before v1.4.1, the image format is ghcr.io/<git-repo>/vela-core:<version>, e.g. ghcr.io/kubevela/kubevela/vela-core:latest.
  • Since v1.4.1, the image format is ghcr.io/kubevela/<aligned with Docker Hub>, e.g. ghcr.io/kubevela/oamdev/vela-core:latest.
  2. Alibaba Container Registry also sponsors the KubeVela community; you can use acr.kubevela.net/ as the registry prefix, since ACR keeps a sync of every official KubeVela image. Use it like acr.kubevela.net/oamdev/vela-core:latest.
  3. If you insist on using the Docker registry, you may increase the rate limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit .
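A minimal sketch of installing vela-core from a mirror with Helm, assuming the chart exposes image.repository and image.tag values (verify the exact names with helm show values kubevela/vela-core before relying on them):

```shell
# Add the KubeVela Helm repository and install vela-core,
# pulling the controller image from the ACR mirror instead of Docker Hub.
helm repo add kubevela https://kubevela.github.io/charts
helm repo update
helm install --create-namespace -n vela-system kubevela kubevela/vela-core \
  --set image.repository=acr.kubevela.net/oamdev/vela-core \
  --set image.tag=latest
```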

Installation fails with an error like:

```
SchemaError(dev.oam.core.v1beta1.ApplicationRevision.spec.referredObjects): array should have exactly one sub-item
```

See https://github.com/kubevela/kubevela/issues/3874 for the background.

This could be a potential problem on Kubernetes 1.19. Try running the following command and see if it helps:

```shell
kubectl patch crd applicationrevisions.core.oam.dev --type json -p='[{"op": "remove", "path": "/spec/versions/1/schema/openAPIV3Schema/properties/spec/properties/referredObjects/x-kubernetes-preserve-unknown-fields"}]'
```
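To confirm the patch took effect, you can inspect the patched field afterwards; this verification step is a suggestion, not part of the fix from the issue:

```shell
# Inspect the referredObjects schema; the x-kubernetes-preserve-unknown-fields
# marker should no longer appear under it after a successful patch.
kubectl get crd applicationrevisions.core.oam.dev -o yaml \
  | grep -A3 'referredObjects:'
```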

If you’re using v1.3, the default password is in the secret:

```shell
kubectl get secret admin -n vela-system
```
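A quick sketch of decoding the plaintext value, assuming the password sits under a password key in the secret’s data (list the secret’s keys first if yours differs):

```shell
# Decode the default VelaUX password; the 'password' key name is an assumption.
kubectl get secret admin -n vela-system -o jsonpath='{.data.password}' | base64 -d
```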

If you’re using v1.4+, the default password is VelaUX12345. You must change the password after you first log in. If you forget the password, you can delete the admin user from the database and then restart the API server; the admin user will be regenerated.

```shell
# Delete the admin user. If you use MongoDB, delete the user from MongoDB instead.
kubectl delete cm usr-admin -n kubevela
# Restart the API server to regenerate the admin user.
kubectl delete pod -l app.oam.dev/component=apiserver -n vela-system
```

Refer to the addon registry documentation: https://kubevela.io/docs/platform-engineers/addon/addon-registry. A sketch of registering a custom registry with the CLI follows.
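For example, assuming the vela addon registry add subcommand in your CLI version (run vela addon registry add -h to confirm the exact flags):

```shell
# Register a Helm-repository-backed addon registry named 'my-registry'.
# 'https://charts.example.com' is a placeholder endpoint; verify the
# --type and --endpoint flags against your CLI version.
vela addon registry add my-registry --type=helm --endpoint=https://charts.example.com
# List registries to confirm it was added.
vela addon registry list
```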

We recommend using a CRD operator for stateful workloads. If you just want to use a StatefulSet, you can refer to this blog to build your own component definition easily; a sketch follows.
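A minimal sketch of scaffolding a StatefulSet-based component with the CLI, assuming vela def init supports generating a definition from a YAML template via --template-yaml (check vela def init -h for your version):

```shell
# Generate a ComponentDefinition skeleton from an existing StatefulSet manifest.
# 'my-stateful.yaml' is a hypothetical StatefulSet manifest you provide.
vela def init my-stateful -t component \
  --desc "StatefulSet-based component" \
  --template-yaml ./my-stateful.yaml -o my-stateful.cue
# Review the generated CUE, then apply it to the cluster.
vela def apply my-stateful.cue
```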

There are several common reasons for slow multi-cluster requests:

  1. Your managed cluster is far from your hub cluster, for example, one in Beijing and another in London. This is hard to speed up; you might need to find a way to build a stable network connection between them, and KubeVela currently cannot help with it.

  2. Your cluster-gateway version is <= 1.3.2. Old versions of cluster-gateway contain a rate limiter, so if you issue lots of multi-cluster requests, they will be blocked at cluster-gateway. Upgrading cluster-gateway to a newer version solves this (see the sketch after this list).

  3. Your cluster-gateway hits its resource limits, for example, CPU/memory usage is high. This can happen if you have lots of clusters. You can raise the resource configuration for cluster-gateway.
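A sketch of upgrading cluster-gateway and raising its resources through a vela-core Helm upgrade; the multicluster.clusterGateway value paths are assumptions based on the chart layout, so verify them with helm show values kubevela/vela-core:

```shell
# Upgrade vela-core (which bundles cluster-gateway) to a newer release and
# raise the cluster-gateway resource limits; tune the values to your scale.
helm repo update
helm upgrade -n vela-system kubevela kubevela/vela-core \
  --set multicluster.clusterGateway.resources.limits.cpu=1 \
  --set multicluster.clusterGateway.resources.limits.memory=2Gi
```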
