Role-based Access Control

In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified. Read more about service account permissions in the official Kubernetes docs.

Bitnami also has a fantastic guide for configuring RBAC in your cluster that takes you through RBAC basics.

This guide is for users who want to restrict Tiller's capabilities to installing resources in certain namespaces, or to grant a Helm client running in a pod access to a Tiller instance.

Tiller and Role-based Access Control

You can add a service account to Tiller using the --service-account <NAME> flag while you’re configuring Helm. As a prerequisite, you’ll have to create a role binding which specifies a role and a service account name that have been set up in advance.

Once you have satisfied the prerequisite and have a service account with the correct permissions, you'll run a command like this: helm init --service-account <NAME>

Example: Service account with cluster-admin role

In rbac-config.yaml:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don’t have to define it explicitly.
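If you want to double-check that the role exists on your cluster before binding to it, you can inspect it; the exact rules shown will vary by Kubernetes version:

```shell
# Confirm the built-in cluster-admin ClusterRole exists
$ kubectl get clusterrole cluster-admin

# Inspect its rules (typically a single wildcard rule)
$ kubectl describe clusterrole cluster-admin
```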

```console
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created

$ helm init --service-account tiller --history-max 200
```

Example: Deploy Tiller in a namespace, restricted to deploying resources only in that namespace

In the example above, we gave Tiller admin access to the entire cluster. However, Tiller does not need cluster-admin access to work. Instead of specifying a ClusterRole and a ClusterRoleBinding, you can specify a Role and RoleBinding to limit Tiller's scope to a particular namespace.

```console
$ kubectl create namespace tiller-world
namespace "tiller-world" created

$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created
```

Define a Role that allows Tiller to manage all resources in tiller-world, as in role-tiller.yaml:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
```

```console
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
```

In rolebinding-tiller.yaml,

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

```console
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
```

Afterwards you can run helm init to install Tiller in the tiller-world namespace.

```console
$ helm init --service-account tiller --tiller-namespace tiller-world
$HELM_HOME has been configured at /Users/awesome-user/.helm.

Tiller (the Helm server side component) has been installed into your Kubernetes Cluster.

$ helm install stable/lamp --tiller-namespace tiller-world --namespace tiller-world
NAME:   wayfaring-yak
LAST DEPLOYED: Mon Aug  7 16:00:16 2017
NAMESPACE: tiller-world
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod
NAME                  READY  STATUS             RESTARTS  AGE
wayfaring-yak-alpine  0/1    ContainerCreating  0         0s
```
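If you want to confirm that the Role really limits Tiller, you can try installing a chart into a namespace the Role does not cover. Tiller's service account has no permissions there, so the release should fail; this is a sketch, not captured output, and the exact error text varies by Kubernetes version:

```shell
# Attempting to deploy outside tiller-world should be denied by RBAC,
# since the tiller-manager Role only covers the tiller-world namespace.
$ helm install stable/lamp --tiller-namespace tiller-world --namespace default
```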

Example: Deploy Tiller in a namespace, restricted to deploying resources in another namespace

In the example above, we gave Tiller admin access to the namespace it was deployed inside. Now, let’s limit Tiller’s scope to deploy resources in a different namespace!

For example, let’s install Tiller in the namespace myorg-system and allow Tiller to deploy resources in the namespace myorg-users.

```console
$ kubectl create namespace myorg-system
namespace "myorg-system" created

$ kubectl create serviceaccount tiller --namespace myorg-system
serviceaccount "tiller" created
```

Define a Role that allows Tiller to manage all resources in myorg-users, as in role-tiller.yaml:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: myorg-users
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
```

```console
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
```

Bind the service account to that role. In rolebinding-tiller.yaml,

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: myorg-users
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

```console
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
```

We’ll also need to grant Tiller access to read and write configmaps in myorg-system so it can store release information there. In role-tiller-myorg-system.yaml:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: myorg-system
  name: tiller-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["configmaps"]
  verbs: ["*"]
```

```console
$ kubectl create -f role-tiller-myorg-system.yaml
role "tiller-manager" created
```

And the respective role binding. In rolebinding-tiller-myorg-system.yaml:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: myorg-system
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

```console
$ kubectl create -f rolebinding-tiller-myorg-system.yaml
rolebinding "tiller-binding" created
```
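With both Roles bound, you can initialize Tiller in myorg-system and deploy into myorg-users. The chart below (stable/lamp) is just an illustration; any chart works:

```shell
# Install Tiller into myorg-system under the restricted service account
$ helm init --service-account tiller --tiller-namespace myorg-system

# Deploy a release into myorg-users via the Tiller in myorg-system
$ helm install stable/lamp --tiller-namespace myorg-system --namespace myorg-users
```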

Helm and Role-based Access Control

When running a Helm client in a pod, the client needs certain privileges in order to talk to a Tiller instance. Specifically, the Helm client must be able to create pods, forward ports, and list pods in the namespace where Tiller is running (so it can find Tiller).

Example: Deploy Helm in a namespace, talking to Tiller in another namespace

In this example, we will assume Tiller is running in a namespace called tiller-world and that the Helm client is running in a namespace called helm-world. By default, Tiller is running in the kube-system namespace.

In helm-user.yaml:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: helm-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-user
  namespace: tiller-world
rules:
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-user-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-user
subjects:
- kind: ServiceAccount
  name: helm
  namespace: helm-world
```

```console
$ kubectl create -f helm-user.yaml
serviceaccount "helm" created
role "tiller-user" created
rolebinding "tiller-user-binding" created
```
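From a pod in helm-world running under the helm service account (set via the pod's spec.serviceAccountName field), the Helm client should now be able to find and talk to Tiller. A quick smoke test might look like this; the flags are illustrative:

```shell
# Run inside a pod in helm-world that uses serviceAccountName: helm.
# Listing releases exercises both the pod-list and port-forward permissions.
$ helm list --tiller-namespace tiller-world
```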