Kubernetes Addons and Addon Manager

Addons

With kops you manage addons by using kubectl.

(For a description of the addon manager, please see the Addon Management section below.)

Addons in Kubernetes are traditionally done by copying files to /etc/kubernetes/addons on the master. But this doesn't really make sense in HA master configurations. We also have kubectl available, and addons are just a thin wrapper over calling kubectl.

The command kops create cluster does not support specifying addons to be added to the cluster when it is created. Instead, they can be added after cluster creation using kubectl. Alternatively, when creating a cluster from a yaml manifest, addons can be specified using spec.addons.

  spec:
    addons:
    - manifest: kubernetes-dashboard
    - manifest: s3://kops-addons/addon.yaml

This document describes how to install some common addons and how to create your own custom ones.

Custom addons

The Addon Management section below describes in more detail how to define an addon resource with regard to versioning. Here is a minimal example of an addon manifest that would install two different addons.

  kind: Addons
  metadata:
    name: example
  spec:
    addons:
    - name: foo.addons.org.io
      version: 0.0.1
      selector:
        k8s-addon: foo.addons.org.io
      manifest: foo.addons.org.io/v0.0.1.yaml
    - name: bar.addons.org.io
      version: 0.0.1
      selector:
        k8s-addon: bar.addons.org.io
      manifest: bar.addons.org.io/v0.0.1.yaml

In this example the folder structure should look like this:

  addon.yaml
  foo.addons.org.io/
    v0.0.1.yaml
  bar.addons.org.io/
    v0.0.1.yaml

The yaml files in the foo/bar folders can be any Kubernetes resource. Typically this file structure would be pushed to S3 or another of the supported backends and then referenced as above in spec.addons. In order for master nodes to be able to access the S3 bucket containing the addon manifests, one might have to add additional IAM policies to the master nodes using spec.additionalPolicies, like so:

  spec:
    additionalPolicies:
      master: |
        [
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject"
            ],
            "Resource": ["arn:aws:s3:::kops-addons/*"]
          },
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetBucketLocation",
              "s3:ListBucket"
            ],
            "Resource": ["arn:aws:s3:::kops-addons"]
          }
        ]

The masters will poll for changes in the bucket and keep the addons up to date.
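
For example, assuming the bucket and folder layout above, publishing or updating the addon files is just a copy to S3; a sketch using the AWS CLI (the bucket name kops-addons comes from the earlier example):

  # Push the addon channel and manifests; the masters pick up changes on their next poll
  aws s3 sync . s3://kops-addons/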

Dashboard

The dashboard project provides a nice administrative UI.

Install using:

  kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml

And then follow the instructions in the dashboard documentation to access the dashboard.

The login credentials are:

  • Username: admin
  • Password: get by running kops get secrets kube --type secret -oplaintext or kubectl config view --minify
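
If you haven't exposed the dashboard some other way, one common access path is through kubectl proxy; the URL below assumes the service name and namespace used by the v1.10.1 manifest above:

  # Start a local proxy to the API server
  kubectl proxy

  # Then open:
  # http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/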

RBAC

It's necessary to add your own RBAC permissions to the dashboard. Please read the RBAC docs before applying permissions.

Below is an example giving cluster-admin access to the dashboard.

  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRoleBinding
  metadata:
    name: kubernetes-dashboard
    labels:
      k8s-app: kubernetes-dashboard
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

Monitoring with Heapster - Standalone

Monitoring supports the horizontal pod autoscaler.

Install using:

  kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.11.0.yaml
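
With these metrics available, the horizontal pod autoscaler can scale workloads on CPU utilization; a minimal sketch for a hypothetical deployment named my-app:

  # Keep my-app between 2 and 5 replicas, targeting 80% average CPU utilization
  kubectl autoscale deployment my-app --min=2 --max=5 --cpu-percent=80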

Please note that heapster is retired. Consider using metrics-server and a third-party metrics pipeline to gather Prometheus-format metrics instead.

Monitoring with Prometheus Operator + kube-prometheus

The Prometheus Operator makes the Prometheus configuration Kubernetes native and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.

kube-prometheus combines the Prometheus Operator with a collection of manifests to help you get started monitoring Kubernetes itself and applications running on top of it.

Install using:

  kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/prometheus-operator/v0.26.0.yaml

Route53 Mapper

This addon is deprecated. Please use external-dns instead.

Please note that kops installs a Route53 DNS controller automatically (it is required for cluster discovery). The functionality of the route53-mapper overlaps with the dns-controller, but some users will prefer to use one or the other; see the README for the included dns-controller.

route53-mapper automates creation and updating of entries on Route53 with A records pointing to ELB-backed LoadBalancer services created by Kubernetes.

The project was created by wearemolecule and is maintained at wearemolecule/route53-kubernetes, where usage instructions can be found.

Install using:

  kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/route53-mapper/v1.3.0.yml

Addon Management

kops incorporates management of some addons; we have to manage some addons which are needed before the Kubernetes API is functional.

In addition, kops offers end-user management of addons via the channels tool (which is still experimental, but we are working on making it a recommended part of Kubernetes addon management). We ship some curated addons in the addons directory; more information is in the addons document.

kops uses the channels tool for system addon management as well. Because kops uses the same tool for system addon management as it does for user addon management, addons installed by kops as part of cluster bringup can be managed alongside additional addons. (Though note that bootstrap addons are much more likely to be replaced during a kops upgrade.)

The general kops philosophy is to try to make the set of bootstrap addons minimal, and to make installation of subsequent addons easy.

Thus, kube-dns and the networking overlay (if any) are the canonical bootstrap addons. But addons such as the dashboard or the EFK stack are easily installed after kops bootstrap, with a kubectl apply -f https://… or with the channels tool.

In the future, we may as a convenience make it easy to add optional addons to the kops manifest, though this will just be a convenience wrapper around doing it manually.

Update Bootstrap Addons

If you want to update the bootstrap addons, you can run the following command to show you which addons need updating. Add --yes to actually apply the updates.

  channels apply channel s3://KOPS_S3_BUCKET/CLUSTER_NAME/addons/bootstrap-channel.yaml
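
Once the reported updates look right, the same command with --yes applies them:

  channels apply channel s3://KOPS_S3_BUCKET/CLUSTER_NAME/addons/bootstrap-channel.yaml --yes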

Versioning

The channels tool adds a manifest-of-manifests file, of Kind: Addons, which allows for a description of the various manifest versions that are available. In this way kops can manage updates as new versions of the addon are released. For example, the dashboard addon lists multiple versions.

For example, a typical addons declaration might look like this:

  - version: 1.4.0
    selector:
      k8s-addon: kubernetes-dashboard.addons.k8s.io
    manifest: v1.4.0.yaml
  - version: 1.5.0
    selector:
      k8s-addon: kubernetes-dashboard.addons.k8s.io
    manifest: v1.5.0.yaml

That declares two versions of an addon, with manifests at v1.4.0.yaml and at v1.5.0.yaml. These are evaluated as paths relative to the Addons file itself: for example, if the Addons file lives at s3://kops-addons/addon.yaml, the manifest v1.5.0.yaml resolves to s3://kops-addons/v1.5.0.yaml. (The channels tool supports a few more protocols than kubectl - for example s3://… for S3-hosted manifests.)

The version field gives meaning to the alternative manifests. It is interpreted as a semver. The channels tool keeps track of the currently installed version (currently by means of an annotation on the kube-system namespace).
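
To see what the tool has recorded, you can dump those annotations; this is just an inspection sketch, and the exact annotation keys are an implementation detail that may change:

  # Show the kube-system annotations where installed addon versions are tracked
  kubectl get namespace kube-system -o jsonpath='{.metadata.annotations}'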

The channels tool updates the installed version when any of the following conditions apply:

  • The version declared in the addon manifest is greater than the currently installed version.
  • The version numbers match, but the ids are different.
  • The version numbers and ids match, but the hash of the addon's manifest has changed since it was installed.

This means that a user can edit a deployed addon, and those changes will not be replaced until a new version of the addon is installed. The long-term direction here is that addons will mostly be configured through a ConfigMap or Secret object, and that the addon manager will (TODO) not replace the ConfigMap.

The selector determines the objects which make up the addon. This will be used to construct a --prune argument (TODO), so that objects that existed in the previous version but not the new one will be removed as part of an upgrade.
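
Once implemented, the constructed invocation would presumably look something like the following sketch (not the tool's actual command line):

  # Apply the new manifest, deleting objects that carry the addon's label
  # but are absent from the new version
  kubectl apply -f v1.5.0.yaml --prune -l k8s-addon=kubernetes-dashboard.addons.k8s.io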

Kubernetes Version Selection

The addon manager now supports a kubernetesVersion field, which is a semver range specifier on the kubernetes version. If the targeted version of kubernetes does not match the semver specified, the addon version will be ignored.

This allows you to have different versions of the manifest for significant changes to the Kubernetes API. For example, 1.6 changed taints & tolerations to a field, and RBAC moved to beta. As such, it is easier to have two separate manifests.

For example:

  - version: 1.5.0
    selector:
      k8s-addon: kube-dashboard.addons.k8s.io
    manifest: v1.5.0.yaml
    kubernetesVersion: "<1.6.0"
    id: "pre-k8s-16"
  - version: 1.6.0
    selector:
      k8s-addon: kube-dashboard.addons.k8s.io
    manifest: v1.6.0.yaml
    kubernetesVersion: ">=1.6.0"
    id: "k8s-16"

On kubernetes versions before 1.6, we will install v1.5.0.yaml, whereas from kubernetes versions 1.6 on we will install v1.6.0.yaml.

Note that we remove the pre-release field of the kubernetes semver, so that 1.6.0-beta.1 will match >=1.6.0. This matches the way kubernetes does pre-releases.

Semver is not enough: id

However, semver alone is insufficient for the kubernetes version selection. The problem arises in the following scenario:

  • Install k8s 1.5, 1.5 version of manifest is installed
  • Upgrade to k8s 1.6, 1.6 version of manifest is installed
  • Downgrade to k8s 1.5; we want the 1.5 version of the manifest to be installed, but the 1.6 version will have a semver that is greater than or equal to the 1.5 semver.

We need a way to break the ties between the semvers, and thus we introduce the id field.

Thus a manifest will actually look like this:

  - version: 1.6.0
    selector:
      k8s-addon: kube-dns.addons.k8s.io
    manifest: pre-k8s-16.yaml
    kubernetesVersion: "<1.6.0"
    id: "pre-k8s-16"
  - version: 1.6.0
    selector:
      k8s-addon: kube-dns.addons.k8s.io
    manifest: k8s-16.yaml
    kubernetesVersion: ">=1.6.0"
    id: "k8s-16"

Note that the two addons have the same version, but a different kubernetesVersion selector. They also have different id values; addons with matching semvers but different ids will be upgraded. (We will never downgrade to an older semver, though, regardless of id.)

So now in the above scenario, after the downgrade to 1.5, although the semver is the same, the id will not match, and pre-k8s-16 will be installed. (And when we upgrade back to 1.6, the k8s-16 version will be installed.)

A few tips:

  • The version can now more closely mirror the upstream version.
  • The manifest names should probably incorporate the id, for maintainability.