Cloud Controller Manager Administration

FEATURE STATE: Kubernetes v1.11 [beta]

Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the [cloud-controller-manager]($d05139e88cb9c77e.md "Control plane component that integrates Kubernetes with third-party cloud providers.") binary allows cloud vendors to evolve independently from the core Kubernetes code.

The cloud-controller-manager can be linked to any cloud provider that satisfies cloudprovider.Interface. For backwards compatibility, the cloud-controller-manager provided in the core Kubernetes project uses the same cloud libraries as kube-controller-manager. Cloud providers already supported in Kubernetes core are expected to use the in-tree cloud-controller-manager to transition out of the Kubernetes core.
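As a simplified sketch of what "satisfies cloudprovider.Interface" means: a provider implements an identity method plus capability probes whose boolean return tells the cloud controller manager which control loops to enable. The real interface lives in the k8s.io/cloud-provider package and has more methods and richer signatures; the types below are illustrative, not the actual API.

```go
package main

import "fmt"

// CloudInterface is an illustrative, cut-down stand-in for
// cloudprovider.Interface. The real interface exposes more
// capabilities (load balancers, instances, zones, ...).
type CloudInterface interface {
	// ProviderName returns the cloud provider ID.
	ProviderName() string
	// Routes returns a routes implementation, plus whether
	// the provider supports managing routes at all.
	Routes() (RouteProvider, bool)
}

// RouteProvider is an illustrative capability interface.
type RouteProvider interface {
	ListRoutes(clusterName string) ([]string, error)
}

// fakeCloud is a toy provider that does not support routes.
type fakeCloud struct{}

func (f *fakeCloud) ProviderName() string          { return "fake" }
func (f *fakeCloud) Routes() (RouteProvider, bool) { return nil, false }

func main() {
	var cloud CloudInterface = &fakeCloud{}
	fmt.Println("provider:", cloud.ProviderName())
	// The cloud controller manager probes optional capabilities like
	// this and skips the matching controller when unsupported.
	if _, ok := cloud.Routes(); !ok {
		fmt.Println("routes unsupported; route controller disabled")
	}
}
```

When a capability probe returns false, the corresponding controller (here, the route controller) is simply not run against that cloud.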

Administration

Requirements

Every cloud has its own set of requirements for running its cloud provider integration; these should not be too different from the requirements for running kube-controller-manager. As a general rule of thumb you’ll need:

  • cloud authentication/authorization: your cloud may require a token or IAM rules to allow access to its APIs
  • Kubernetes authentication/authorization: the cloud-controller-manager may need RBAC rules set up so it can speak to the Kubernetes API server
  • high availability: like kube-controller-manager, you may want a highly available setup for the cloud controller manager using leader election (on by default).

Running cloud-controller-manager

Successfully running cloud-controller-manager requires some changes to your cluster configuration.

  • kube-apiserver and kube-controller-manager MUST NOT specify the --cloud-provider flag. This ensures that they do not run any cloud-specific loops that would otherwise be run by the cloud controller manager. In the future, this flag will be deprecated and removed.
  • kubelet must run with --cloud-provider=external. This ensures that the kubelet knows it must be initialized by the cloud controller manager before it is scheduled any work.
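For example, on a systemd-based node the flag can be added through a drop-in file. The path and environment variable name below are illustrative; they depend on how your kubelet service is configured:

```
# /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf  (path is illustrative)
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
```

After reloading systemd and restarting the kubelet, verify that --cloud-provider=external appears on the kubelet's command line.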

Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways:

  • kubelets specifying --cloud-provider=external will add a taint node.cloudprovider.kubernetes.io/uninitialized with the effect NoSchedule during initialization. This marks the node as needing a second initialization from an external controller before it can be scheduled work. Note that if the cloud controller manager is not available, new nodes in the cluster will be left unschedulable. The taint is important since the scheduler may require cloud-specific information about nodes, such as their region or type (high CPU, GPU, high memory, spot instance, etc).
  • cloud information about nodes in the cluster will no longer be retrieved using local metadata; instead, all API calls to retrieve node information will go through the cloud controller manager. This may mean you can restrict access to your cloud API on the kubelets for better security. For larger clusters, consider whether the cloud controller manager will hit your cloud provider's API rate limits, since it is now responsible for almost all API calls to your cloud from within the cluster.
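To see whether a new node is still waiting on this second initialization, you can inspect its taints (the node name is a placeholder):

```
kubectl get node <node-name> -o jsonpath='{.spec.taints}'
```

An uninitialized node will list node.cloudprovider.kubernetes.io/uninitialized with effect NoSchedule; once the cloud controller manager has initialized the node, it removes the taint.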

The cloud controller manager can implement:

  • Node controller - responsible for updating Kubernetes nodes using cloud APIs and deleting Kubernetes nodes that were deleted on your cloud.
  • Service controller - responsible for provisioning load balancers on your cloud for Services of type LoadBalancer.
  • Route controller - responsible for setting up network routes on your cloud.
  • any other features you would like to implement if you are running an out-of-tree provider.

Examples

If you are using a cloud that is currently supported in Kubernetes core and would like to adopt cloud controller manager, see the cloud controller manager in Kubernetes core.

For cloud controller managers not in Kubernetes core, you can find the respective projects in repositories maintained by cloud vendors or by SIGs.

For providers already in Kubernetes core, you can run the in-tree cloud controller manager as a DaemonSet in your cluster. Use the following as a guideline:

admin/cloud/ccm-example.yaml

```yaml
# This is an example of how to set up cloud-controller-manager as a DaemonSet in your cluster.
# It assumes that your masters can run pods and have the role node-role.kubernetes.io/master.
# Note that this DaemonSet will not work straight out of the box for your cloud; it is
# meant to be a guideline.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cloud-controller-manager
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager
      containers:
      - name: cloud-controller-manager
        # for in-tree providers we use k8s.gcr.io/cloud-controller-manager
        # this can be replaced with any other image for out-of-tree providers
        image: k8s.gcr.io/cloud-controller-manager:v1.8.0
        command:
        - /usr/local/bin/cloud-controller-manager
        - --cloud-provider=[YOUR_CLOUD_PROVIDER]  # Add your own cloud provider here!
        - --leader-elect=true
        - --use-service-account-credentials
        # these flags will vary for every cloud provider
        - --allocate-node-cidrs=true
        - --configure-cloud-routes=true
        - --cluster-cidr=172.17.0.0/16
      tolerations:
      # this is required so CCM can bootstrap itself
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      # this is to have the daemonset runnable on master nodes
      # the taint may vary depending on your cluster setup
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # this is to restrict CCM to only run on master nodes
      # the node selector may vary depending on your cluster setup
      nodeSelector:
        node-role.kubernetes.io/master: ""
```
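Once you have adapted the manifest for your provider, it can be applied and checked like any other DaemonSet (the commands assume the filename shown above):

```
kubectl apply -f admin/cloud/ccm-example.yaml
kubectl -n kube-system get pods -l k8s-app=cloud-controller-manager
```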

Limitations

Running cloud controller manager comes with a few possible limitations. Although these are being addressed in upcoming releases, it’s important that you are aware of them for production workloads.

Support for Volumes

Cloud controller manager does not implement any of the volume controllers found in kube-controller-manager, as the volume integrations also require coordination with kubelets. As CSI (the Container Storage Interface) evolves and stronger support is added for flex volume plugins, the necessary support will be added to cloud controller manager so that clouds can fully integrate with volumes. Learn more about out-of-tree CSI volume plugins here.

Scalability

The cloud-controller-manager queries your cloud provider’s APIs to retrieve information for all nodes. For very large clusters, consider possible bottlenecks such as resource requirements and API rate limiting.

Chicken and Egg

The goal of the cloud controller manager project is to decouple development of cloud features from the core Kubernetes project. Unfortunately, many aspects of the Kubernetes project have assumptions that cloud provider features are tightly integrated into the project. As a result, adopting this new architecture can create several situations where a request is being made for information from a cloud provider, but the cloud controller manager may not be able to return that information until the original request is complete.

A good example of this is the TLS bootstrapping feature in the kubelet. TLS bootstrapping assumes that the kubelet can ask the cloud provider (or a local metadata service) for all its address types (private, public, etc.), but the cloud controller manager cannot set a node’s address types without first being initialized, which in turn requires that the kubelet has TLS certificates to communicate with the API server.

As this initiative evolves, changes will be made to address these issues in upcoming releases.

What’s next

To build and develop your own cloud controller manager, read Developing Cloud Controller Manager.