Harvester Cloud Provider

RKE1 and RKE2 clusters can be provisioned in Rancher using the built-in Harvester node driver. The Harvester cloud provider provides load balancer support and Harvester cluster storage passthrough to the guest Kubernetes cluster.

In this page you will learn how to deploy the Harvester cloud provider, how to upgrade it, and how to use its load balancer support.

Deploying

Prerequisites

  • The Kubernetes cluster is built on top of Harvester virtual machines.
  • The Harvester virtual machines that run as guest Kubernetes nodes are in the same namespace.

Deploying to the RKE1 Cluster with Harvester Node Driver

When spinning up an RKE cluster using the Harvester node driver, perform the following two steps to deploy the Harvester cloud provider:

  1. Select the Harvester (Out-of-tree) option.

  2. Install Harvester Cloud Provider from the Rancher marketplace.

Deploying to the RKE2 Cluster with Harvester Node Driver

When spinning up an RKE2 cluster using the Harvester node driver, select the Harvester cloud provider. The node driver will then deploy both the Harvester CSI driver and the cloud controller manager (CCM) automatically.
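
To verify the deployment, you can check that the cloud provider and CSI driver pods are running in the guest cluster. A minimal check, assuming the default workload names harvester-cloud-provider and harvester-csi-driver (pod names may vary by chart version):

    # Run with the guest cluster's kubeconfig
    kubectl -n kube-system get pods | grep harvester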

Deploying to the K3s Cluster with Harvester Node Driver [Experimental]

When spinning up a K3s cluster using the Harvester node driver, you can perform the following steps to deploy the Harvester cloud provider:

  1. Generate and inject the cloud config for harvester-cloud-provider.

The cloud provider needs a kubeconfig file to work. A limited-scope one can be generated using the generate_addon.sh script available in the harvester/cloud-provider-harvester repo.

Note:

  • The script depends on kubectl and jq to operate the Harvester cluster.
  • The script needs access to the Harvester cluster kubeconfig to work.
  • The namespace needs to be the namespace in which the guest cluster will be created.

    ./deploy/generate_addon.sh <serviceaccount name> <namespace>

The output will look as follows:

    # ./deploy/generate_addon.sh harvester-cloud-provider default
    Creating target directory to hold files in ./tmp/kube...done
    Creating a service account in default namespace: harvester-cloud-provider
    W1104 16:10:21.234417 4319 helpers.go:663] --dry-run is deprecated and can be replaced with --dry-run=client.
    serviceaccount/harvester-cloud-provider configured
    Creating a role in default namespace: harvester-cloud-provider
    role.rbac.authorization.k8s.io/harvester-cloud-provider unchanged
    Creating a rolebinding in default namespace: harvester-cloud-provider
    W1104 16:10:21.986771 4369 helpers.go:663] --dry-run is deprecated and can be replaced with --dry-run=client.
    rolebinding.rbac.authorization.k8s.io/harvester-cloud-provider configured
    Getting uid of service account harvester-cloud-provider on default
    Service Account uid: ea951643-53d2-4ea8-a4aa-e1e72a9edc91
    Creating a user token secret in default namespace: harvester-cloud-provider-token
    Secret name: harvester-cloud-provider-token
    Extracting ca.crt from secret...done
    Getting user token from secret...done
    Setting current context to: local
    Cluster name: local
    Endpoint: https://HARVESTER_ENDPOINT/k8s/clusters/local
    Preparing k8s-harvester-cloud-provider-default-conf
    Setting a cluster entry in kubeconfig...Cluster "local" set.
    Setting token credentials entry in kubeconfig...User "harvester-cloud-provider-default-local" set.
    Setting a context entry in kubeconfig...Context "harvester-cloud-provider-default-local" created.
    Setting the current-context in the kubeconfig file...Switched to context "harvester-cloud-provider-default-local".
    ########## cloud config ############
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <CACERT>
        server: https://HARVESTER-ENDPOINT/k8s/clusters/local
      name: local
    contexts:
    - context:
        cluster: local
        namespace: default
        user: harvester-cloud-provider-default-local
      name: harvester-cloud-provider-default-local
    current-context: harvester-cloud-provider-default-local
    kind: Config
    preferences: {}
    users:
    - name: harvester-cloud-provider-default-local
      user:
        token: <TOKEN>
    ########## cloud-init user data ############
    write_files:
    - encoding: b64
      content: <CONTENT>
      owner: root:root
      path: /etc/kubernetes/cloud-config
      permissions: '0644'

Copy and paste the part of the output below the ########## cloud-init user data ############ header to Machine Pools > Show Advanced > User Data.

  2. Add the following HelmChart YAML of harvester-cloud-provider to Cluster Configuration > Add-On Config > Additional Manifest:

    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: harvester-cloud-provider
      namespace: kube-system
    spec:
      targetNamespace: kube-system
      bootstrap: true
      repo: https://charts.harvesterhci.io/
      chart: harvester-cloud-provider
      version: 0.1.13
      helmVersion: v3

  3. Disable the in-tree cloud provider:

  • Click the Edit as YAML button.

  • Disable servicelb and set disable-cloud-controller: true to disable the default K3s cloud controller.

    machineGlobalConfig:
      disable:
        - servicelb
      disable-cloud-controller: true
  • Add cloud-provider=external to use the Harvester cloud provider.

    machineSelectorConfig:
      - config:
          kubelet-arg:
            - cloud-provider=external
          protect-kernel-defaults: false

With these settings in place, a K3s cluster should provision successfully using the external cloud provider.

Upgrade Cloud Provider

Upgrade RKE2

The cloud provider can be upgraded by upgrading the RKE2 version. You can upgrade the RKE2 cluster via the Rancher UI as follows:

  1. Click ☰ > Cluster Management.
  2. Find the guest cluster that you want to upgrade and select ⋮ > Edit Config.
  3. Select Kubernetes Version.
  4. Click Save.

Upgrade RKE/K3s

For RKE and K3s clusters, you can upgrade the cloud provider via the Rancher UI as follows:

  1. Click ☰ > RKE/K3s Cluster > Apps > Installed Apps.
  2. Find the cloud provider chart and select ⋮ > Edit/Upgrade.
  3. Select Version.
  4. Click Next > Update.

Load Balancer Support

After deploying the Harvester cloud provider, you can use the Kubernetes LoadBalancer service to expose a microservice inside the guest cluster to the external world. When you create a Kubernetes LoadBalancer service, a Harvester load balancer is assigned to the service, and you can edit it through the Add-on Config in the Rancher UI.
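
As a minimal sketch, a LoadBalancer service might look as follows; the service name, selector, and ports are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-lb        # illustrative name
      namespace: default
    spec:
      type: LoadBalancer    # triggers the Harvester load balancer
      selector:
        app: nginx          # illustrative backend selector
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP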

IPAM

Harvester’s built-in load balancer supports both pool and dhcp modes. You can select the mode in the Rancher UI. Behind the scenes, Harvester adds the annotation cloudprovider.harvesterhci.io/ipam to the service (see the example after the list below).

  • pool: You should configure an IP address pool in Harvester’s Settings in advance. The Harvester LoadBalancer controller will allocate an IP address from the IP address pool for the load balancer.

  • dhcp: A DHCP server is required. The Harvester LoadBalancer controller will request an IP address from the DHCP server.
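
A sketch of setting the IPAM mode explicitly via the annotation; the service name, selector, and port are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-lb        # illustrative name
      annotations:
        cloudprovider.harvesterhci.io/ipam: dhcp   # or pool
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
        - port: 80
          targetPort: 80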

Note:

The IPAM mode of an existing service cannot be modified. You need to create a new service if you want to change the IPAM mode.

Health Checks

The Harvester load balancer supports TCP health checks. You can specify the parameters in the Rancher UI if you enable the Health Check option.

Alternatively, you can specify the parameters by adding annotations to the service manually. The following annotations are supported:

| Annotation Key | Value Type | Required | Description |
| --- | --- | --- | --- |
| cloudprovider.harvesterhci.io/healthcheck-port | string | true | Specifies the port. The prober will access the address composed of the backend server IP and the port. |
| cloudprovider.harvesterhci.io/healthcheck-success-threshold | string | false | Specifies the health check success threshold. The default value is 1. The backend server will start forwarding traffic if the number of times the prober continuously detects an address successfully reaches the threshold. |
| cloudprovider.harvesterhci.io/healthcheck-failure-threshold | string | false | Specifies the health check failure threshold. The default value is 3. The backend server will stop forwarding traffic if the number of health check failures reaches the threshold. |
| cloudprovider.harvesterhci.io/healthcheck-periodseconds | string | false | Specifies the health check period. The default value is 5 seconds. |
| cloudprovider.harvesterhci.io/healthcheck-timeoutseconds | string | false | Specifies the timeout of every health check. The default value is 3 seconds. |
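
For example, a sketch of a service that sets these annotations, using the default thresholds from the table above; the service name, selector, and port values are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-lb        # illustrative name
      annotations:
        cloudprovider.harvesterhci.io/healthcheck-port: "80"
        cloudprovider.harvesterhci.io/healthcheck-success-threshold: "1"
        cloudprovider.harvesterhci.io/healthcheck-failure-threshold: "3"
        cloudprovider.harvesterhci.io/healthcheck-periodseconds: "5"
        cloudprovider.harvesterhci.io/healthcheck-timeoutseconds: "3"
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
        - port: 80
          targetPort: 80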