Managing Role-based Access Control (RBAC)

Overview

You can use the CLI to view RBAC resources and the administrator CLI to manage the roles and bindings.

Viewing roles and bindings

Roles can be used to grant various levels of access, both cluster-wide and at the project scope. Users and groups can be associated with, or bound to, multiple roles at the same time. You can view details about the roles and their bindings using the oc describe command.

Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource. Users with the admin default cluster role bound locally can manage roles and bindings in that project.

Review a full list of verbs in the Evaluating Authorization section.

Viewing cluster roles

To view the cluster roles and their associated rule sets:

  $ oc describe clusterrole.rbac

  Name: admin
  Labels: <none>
  Annotations: openshift.io/description=A user that has edit rights within the project and can change the project's membership.
               rbac.authorization.kubernetes.io/autoupdate=true
  PolicyRule:
  Resources Non-Resource URLs Resource Names Verbs
  --------- ----------------- -------------- -----
  appliedclusterresourcequotas [] [] [get list watch]
  appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch]
  bindings [] [] [get list watch]
  buildconfigs [] [] [create delete deletecollection get list patch update watch]
  buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch]
  buildconfigs/instantiate [] [] [create]
  buildconfigs.build.openshift.io/instantiate [] [] [create]
  buildconfigs/instantiatebinary [] [] [create]
  buildconfigs.build.openshift.io/instantiatebinary [] [] [create]
  buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch]
  buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch]
  buildlogs [] [] [create delete deletecollection get list patch update watch]
  buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch]
  builds [] [] [create delete deletecollection get list patch update watch]
  builds.build.openshift.io [] [] [create delete deletecollection get list patch update watch]
  builds/clone [] [] [create]
  builds.build.openshift.io/clone [] [] [create]
  builds/details [] [] [update]
  builds.build.openshift.io/details [] [] [update]
  builds/log [] [] [get list watch]
  builds.build.openshift.io/log [] [] [get list watch]
  configmaps [] [] [create delete deletecollection get list patch update watch]
  cronjobs.batch [] [] [create delete deletecollection get list patch update watch]
  daemonsets.extensions [] [] [get list watch]
  deploymentconfigrollbacks [] [] [create]
  deploymentconfigrollbacks.apps.openshift.io [] [] [create]
  deploymentconfigs [] [] [create delete deletecollection get list patch update watch]
  deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch]
  deploymentconfigs/instantiate [] [] [create]
  deploymentconfigs.apps.openshift.io/instantiate [] [] [create]
  deploymentconfigs/log [] [] [get list watch]
  deploymentconfigs.apps.openshift.io/log [] [] [get list watch]
  deploymentconfigs/rollback [] [] [create]
  deploymentconfigs.apps.openshift.io/rollback [] [] [create]
  deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch]
  deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch]
  deploymentconfigs/status [] [] [get list watch]
  deploymentconfigs.apps.openshift.io/status [] [] [get list watch]
  deployments.apps [] [] [create delete deletecollection get list patch update watch]
  deployments.extensions [] [] [create delete deletecollection get list patch update watch]
  deployments.extensions/rollback [] [] [create delete deletecollection get list patch update watch]
  deployments.apps/scale [] [] [create delete deletecollection get list patch update watch]
  deployments.extensions/scale [] [] [create delete deletecollection get list patch update watch]
  deployments.apps/status [] [] [create delete deletecollection get list patch update watch]
  endpoints [] [] [create delete deletecollection get list patch update watch]
  events [] [] [get list watch]
  horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection get list patch update watch]
  horizontalpodautoscalers.extensions [] [] [create delete deletecollection get list patch update watch]
  imagestreamimages [] [] [create delete deletecollection get list patch update watch]
  imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch]
  imagestreamimports [] [] [create]
  imagestreamimports.image.openshift.io [] [] [create]
  imagestreammappings [] [] [create delete deletecollection get list patch update watch]
  imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch]
  imagestreams [] [] [create delete deletecollection get list patch update watch]
  imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch]
  imagestreams/layers [] [] [get update]
  imagestreams.image.openshift.io/layers [] [] [get update]
  imagestreams/secrets [] [] [create delete deletecollection get list patch update watch]
  imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch]
  imagestreams/status [] [] [get list watch]
  imagestreams.image.openshift.io/status [] [] [get list watch]
  imagestreamtags [] [] [create delete deletecollection get list patch update watch]
  imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch]
  jenkins.build.openshift.io [] [] [admin edit view]
  jobs.batch [] [] [create delete deletecollection get list patch update watch]
  limitranges [] [] [get list watch]
  localresourceaccessreviews [] [] [create]
  localresourceaccessreviews.authorization.openshift.io [] [] [create]
  localsubjectaccessreviews [] [] [create]
  localsubjectaccessreviews.authorization.k8s.io [] [] [create]
  localsubjectaccessreviews.authorization.openshift.io [] [] [create]
  namespaces [] [] [get list watch]
  namespaces/status [] [] [get list watch]
  networkpolicies.extensions [] [] [create delete deletecollection get list patch update watch]
  persistentvolumeclaims [] [] [create delete deletecollection get list patch update watch]
  pods [] [] [create delete deletecollection get list patch update watch]
  pods/attach [] [] [create delete deletecollection get list patch update watch]
  pods/exec [] [] [create delete deletecollection get list patch update watch]
  pods/log [] [] [get list watch]
  pods/portforward [] [] [create delete deletecollection get list patch update watch]
  pods/proxy [] [] [create delete deletecollection get list patch update watch]
  pods/status [] [] [get list watch]
  podsecuritypolicyreviews [] [] [create]
  podsecuritypolicyreviews.security.openshift.io [] [] [create]
  podsecuritypolicyselfsubjectreviews [] [] [create]
  podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create]
  podsecuritypolicysubjectreviews [] [] [create]
  podsecuritypolicysubjectreviews.security.openshift.io [] [] [create]
  processedtemplates [] [] [create delete deletecollection get list patch update watch]
  processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch]
  projects [] [] [delete get patch update]
  projects.project.openshift.io [] [] [delete get patch update]
  replicasets.extensions [] [] [create delete deletecollection get list patch update watch]
  replicasets.extensions/scale [] [] [create delete deletecollection get list patch update watch]
  replicationcontrollers [] [] [create delete deletecollection get list patch update watch]
  replicationcontrollers/scale [] [] [create delete deletecollection get list patch update watch]
  replicationcontrollers.extensions/scale [] [] [create delete deletecollection get list patch update watch]
  replicationcontrollers/status [] [] [get list watch]
  resourceaccessreviews [] [] [create]
  resourceaccessreviews.authorization.openshift.io [] [] [create]
  resourcequotas [] [] [get list watch]
  resourcequotas/status [] [] [get list watch]
  resourcequotausages [] [] [get list watch]
  rolebindingrestrictions [] [] [get list watch]
  rolebindingrestrictions.authorization.openshift.io [] [] [get list watch]
  rolebindings [] [] [create delete deletecollection get list patch update watch]
  rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch]
  rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
  roles [] [] [create delete deletecollection get list patch update watch]
  roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch]
  roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
  routes [] [] [create delete deletecollection get list patch update watch]
  routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch]
  routes/custom-host [] [] [create]
  routes.route.openshift.io/custom-host [] [] [create]
  routes/status [] [] [get list watch update]
  routes.route.openshift.io/status [] [] [get list watch update]
  scheduledjobs.batch [] [] [create delete deletecollection get list patch update watch]
  secrets [] [] [create delete deletecollection get list patch update watch]
  serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate]
  services [] [] [create delete deletecollection get list patch update watch]
  services/proxy [] [] [create delete deletecollection get list patch update watch]
  statefulsets.apps [] [] [create delete deletecollection get list patch update watch]
  subjectaccessreviews [] [] [create]
  subjectaccessreviews.authorization.openshift.io [] [] [create]
  subjectrulesreviews [] [] [create]
  subjectrulesreviews.authorization.openshift.io [] [] [create]
  templateconfigs [] [] [create delete deletecollection get list patch update watch]
  templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch]
  templateinstances [] [] [create delete deletecollection get list patch update watch]
  templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch]
  templates [] [] [create delete deletecollection get list patch update watch]
  templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch]

  Name: basic-user
  Labels: <none>
  Annotations: openshift.io/description=A user that can get basic information about projects.
               rbac.authorization.kubernetes.io/autoupdate=true
  PolicyRule:
  Resources Non-Resource URLs Resource Names Verbs
  --------- ----------------- -------------- -----
  clusterroles [] [] [get list]
  clusterroles.authorization.openshift.io [] [] [get list]
  clusterroles.rbac.authorization.k8s.io [] [] [get list watch]
  projectrequests [] [] [list]
  projectrequests.project.openshift.io [] [] [list]
  projects [] [] [list watch]
  projects.project.openshift.io [] [] [list watch]
  selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
  selfsubjectrulesreviews [] [] [create]
  selfsubjectrulesreviews.authorization.openshift.io [] [] [create]
  storageclasses.storage.k8s.io [] [] [get list]
  users [] [~] [get]
  users.user.openshift.io [] [~] [get]

  Name: cluster-admin
  Labels: <none>
  Annotations: authorization.openshift.io/system-only=true
               openshift.io/description=A super-user that can perform any action in the cluster. When granted to a user within a project, they have full control over quota and membership and can perform every action...
               rbac.authorization.kubernetes.io/autoupdate=true
  PolicyRule:
  Resources Non-Resource URLs Resource Names Verbs
  --------- ----------------- -------------- -----
  [*] [] [*]
  *.* [] [] [*]

  Name: cluster-debugger
  Labels: <none>
  Annotations: authorization.openshift.io/system-only=true
               rbac.authorization.kubernetes.io/autoupdate=true
  PolicyRule:
  Resources Non-Resource URLs Resource Names Verbs
  --------- ----------------- -------------- -----
  [/debug/pprof] [] [get]
  [/debug/pprof/*] [] [get]
  [/metrics] [] [get]

  Name: cluster-reader
  Labels: <none>
  Annotations: authorization.openshift.io/system-only=true
               rbac.authorization.kubernetes.io/autoupdate=true
  PolicyRule:
  Resources Non-Resource URLs Resource Names Verbs
  --------- ----------------- -------------- -----
  [*] [] [get]
  apiservices.apiregistration.k8s.io [] [] [get list watch]
  apiservices.apiregistration.k8s.io/status [] [] [get list watch]
  appliedclusterresourcequotas [] [] [get list watch]
  ...

Viewing cluster role bindings

To view the current set of cluster role bindings, which show the users and groups that are bound to various roles:

  $ oc describe clusterrolebinding.rbac

  Name: admin
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: admin
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount template-instance-controller openshift-infra

  Name: basic-users
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: basic-user
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated

  Name: cluster-admin
  Labels: kubernetes.io/bootstrapping=rbac-defaults
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: cluster-admin
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount pvinstaller default
    Group system:masters

  Name: cluster-admins
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: cluster-admin
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:cluster-admins
    User system:admin

  Name: cluster-readers
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: cluster-reader
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:cluster-readers

  Name: cluster-status-binding
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: cluster-status
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated
    Group system:unauthenticated

  Name: registry-registry-role
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:registry
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount registry default

  Name: router-router-role
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:router
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount router default

  Name: self-access-reviewers
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: self-access-reviewer
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated
    Group system:unauthenticated

  Name: self-provisioners
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: self-provisioner
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated:oauth

  Name: system:basic-user
  Labels: kubernetes.io/bootstrapping=rbac-defaults
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: system:basic-user
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated
    Group system:unauthenticated

  Name: system:build-strategy-docker-binding
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: system:build-strategy-docker
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated

  Name: system:build-strategy-jenkinspipeline-binding
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: system:build-strategy-jenkinspipeline
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated

  Name: system:build-strategy-source-binding
  Labels: <none>
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: system:build-strategy-source
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:authenticated

  Name: system:controller:attachdetach-controller
  Labels: kubernetes.io/bootstrapping=rbac-defaults
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: system:controller:attachdetach-controller
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount attachdetach-controller kube-system

  Name: system:controller:certificate-controller
  Labels: kubernetes.io/bootstrapping=rbac-defaults
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  Role:
    Kind: ClusterRole
    Name: system:controller:certificate-controller
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount certificate-controller kube-system

  Name: system:controller:cronjob-controller
  Labels: kubernetes.io/bootstrapping=rbac-defaults
  Annotations: rbac.authorization.kubernetes.io/autoupdate=true
  ...

Viewing local roles and bindings

All of the default cluster roles can be bound locally to users or groups. You can also create custom local roles, and you can view local role bindings in the same way as cluster role bindings.

To view the current set of local role bindings, which show the users and groups that are bound to various roles:

  $ oc describe rolebinding.rbac

By default, the current project is used when viewing local role bindings. Alternatively, a project can be specified with the -n flag. This is useful for viewing the local role bindings of another project, if the user already has the admin default cluster role in it.

  $ oc describe rolebinding.rbac -n joe-project

  Name: admin
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: admin
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    User joe

  Name: system:deployers
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:deployer
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount deployer joe-project

  Name: system:image-builders
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:image-builder
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount builder joe-project

  Name: system:image-pullers
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:image-puller
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:serviceaccounts:joe-project

Managing role bindings

Adding, or binding, a role to users or groups gives the user or group the relevant access granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands.

When managing a user or group’s associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used.

Table 1. Local role binding operations
$ oc adm policy who-can <verb> <resource>
    Indicates which users can perform an action on a resource.

$ oc adm policy add-role-to-user <role> <username>
    Binds a given role to specified users in the current project.

$ oc adm policy remove-role-from-user <role> <username>
    Removes a given role from specified users in the current project.

$ oc adm policy remove-user <username>
    Removes specified users and all of their roles in the current project.

$ oc adm policy add-role-to-group <role> <groupname>
    Binds a given role to specified groups in the current project.

$ oc adm policy remove-role-from-group <role> <groupname>
    Removes a given role from specified groups in the current project.

$ oc adm policy remove-group <groupname>
    Removes specified groups and all of their roles in the current project.

--rolebinding-name=
    Can be used with oc adm policy commands to retain the rolebinding name assigned to a user or group.

You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources.

Table 2. Cluster role binding operations
$ oc adm policy add-cluster-role-to-user <role> <username>
    Binds a given role to specified users for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-user <role> <username>
    Removes a given role from specified users for all projects in the cluster.

$ oc adm policy add-cluster-role-to-group <role> <groupname>
    Binds a given role to specified groups for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-group <role> <groupname>
    Removes a given role from specified groups for all projects in the cluster.

--rolebinding-name=
    Can be used with oc adm policy commands to retain the rolebinding name assigned to a user or group.

For example, you can add the admin role to the alice user in joe-project by running:

  $ oc adm policy add-role-to-user admin alice -n joe-project

You can then view the local role bindings and verify the addition in the output:

  $ oc describe rolebinding.rbac -n joe-project

  Name: admin
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: admin
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    User joe

  Name: admin-0 (1)
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: admin
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    User alice (2)

  Name: system:deployers
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:deployer
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount deployer joe-project

  Name: system:image-builders
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:image-builder
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    ServiceAccount builder joe-project

  Name: system:image-pullers
  Labels: <none>
  Annotations: <none>
  Role:
    Kind: ClusterRole
    Name: system:image-puller
  Subjects:
    Kind Name Namespace
    ---- ---- ---------
    Group system:serviceaccounts:joe-project
(1) A new role binding is created with a default name, incremented as necessary. To specify an existing role binding to modify, use the --rolebinding-name option when adding the role to the user.
(2) The user alice is added.

Creating a local role

You can create a local role for a project and then bind it to a user.

  1. To create a local role for a project, run the following command:

     $ oc create role <name> --verb=<verb> --resource=<resource> -n <project>

     In this command, specify:

       • <name>, the local role's name

       • <verb>, a comma-separated list of the verbs to apply to the role

       • <resource>, the resources that the role applies to

       • <project>, the project name

     For example, to create a local role that allows a user to view pods in the blue project, run the following command:

     $ oc create role podview --verb=get --resource=pod -n blue

  2. To bind the new role to a user, run the following command:

     $ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
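For reference, the two commands above produce objects roughly equivalent to the following manifests. This is a sketch based on the podview/blue example; the generated role binding name and metadata on a real cluster may differ:

```yaml
# Sketch of the Role created by `oc create role` ...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: podview
  namespace: blue
rules:
- apiGroups: [""]        # "" is the core API group (pods)
  resources: ["pods"]
  verbs: ["get"]
---
# ... and of the RoleBinding created by `oc adm policy add-role-to-user`.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: podview          # actual generated name may differ
  namespace: blue
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role             # a local role, hence --role-namespace=blue
  name: podview
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user2
```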

Creating a cluster role

To create a cluster role, run the following command:

  $ oc create clusterrole <name> --verb=<verb> --resource=<resource>

In this command, specify:

  • <name>, the cluster role's name

  • <verb>, a comma-separated list of the verbs to apply to the role

  • <resource>, the resources that the role applies to

For example, to create a cluster role that allows a user to view pods, run the following command:

  $ oc create clusterrole podviewonly --verb=get --resource=pod
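The resulting object is roughly equivalent to the following ClusterRole manifest (a sketch; unlike a local role, a ClusterRole has no namespace field):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: podviewonly      # cluster-scoped, so no namespace
rules:
- apiGroups: [""]        # "" is the core API group (pods)
  resources: ["pods"]
  verbs: ["get"]
```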

Cluster and local role bindings

A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.

Some cluster role names are initially confusing. For example, you can bind cluster-admin to a user with a local role binding, which makes it appear that the user has the privileges of a cluster administrator. This is not the case. Binding cluster-admin to a project makes the user a super administrator for that project only: the binding grants the permissions of the admin cluster role, plus a few additional permissions, such as the ability to edit rate limits. This can be especially confusing in the web console UI, which does not list cluster role bindings that are bound to true cluster administrators, but does list the local role bindings that you can use to locally bind cluster-admin.
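The distinction is visible in the manifests themselves: a namespaced RoleBinding may reference a ClusterRole in its roleRef, but the grant applies only within that one project. A sketch, using a hypothetical user bob and the joe-project namespace:

```yaml
# A local (namespaced) binding that references the "view" cluster role.
# bob gets view access in joe-project only, not cluster-wide; a
# ClusterRoleBinding with the same roleRef would be cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view
  namespace: joe-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole      # referencing a cluster role from a local binding
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: bob              # hypothetical user
```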

Updating policy definitions

During a cluster upgrade, and on every restart of any master, the default cluster roles are automatically reconciled to restore any missing permissions.

If you customized default cluster roles and want to ensure a role reconciliation does not modify them:

  1. Protect each role from reconciliation:

     $ oc annotate clusterrole.rbac <role_name> --overwrite rbac.authorization.kubernetes.io/autoupdate=false

    You must manually update the roles that contain this setting to include any new or required permissions after upgrading.

  2. Generate a default bootstrap policy template file:

     $ oc adm create-bootstrap-policy-file --filename=policy.json

    The contents of the file vary based on the OKD version, but the file contains only the default policies.

  3. Update the policy.json file to include any cluster role customizations.

  4. Use the policy file to automatically reconcile roles and role bindings that are not reconcile protected:

     $ oc auth reconcile -f policy.json
  5. Reconcile security context constraints:

     # oc adm policy reconcile-sccs \
         --additive-only=true \
         --confirm
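The reconcile protection set in step 1 is simply an annotation on the ClusterRole object. A protected, customized role carries metadata like the following sketch (the role name and rules are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: admin            # example: a customized default role
  annotations:
    # "false" tells reconciliation to skip this role on upgrades and
    # master restarts; you must update it manually afterward.
    rbac.authorization.kubernetes.io/autoupdate: "false"
rules: []                # your customized rules go here
```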