Description of Keys in config and cluster.spec

This list is not complete but aims to document any keys that are less than self-explanatory. Our godoc reference provides a more detailed list of API values. ClusterSpec, defined as kind: Cluster in YAML, and InstanceGroup, defined as kind: InstanceGroup in YAML, are the two top-level API values used to describe a cluster.

spec

api

This object configures how we expose the API:

  • dns will allow direct access to master instances, and configure DNS to point directly to the master nodes.
  • loadBalancer will configure a load balancer (ELB) in front of the master nodes, and configure DNS to point to the ELB.

DNS example:

  spec:
    api:
      dns: {}

When configuring a LoadBalancer, you can also choose to have a public ELB or an internal (VPC-only) ELB. The type field should be Public or Internal.
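
For example, a minimal internal-only API load balancer might look like this (a sketch; only the type value differs from the Public examples below):

  spec:
    api:
      loadBalancer:
        type: Internal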

Also, you can add precreated additional security groups to the load balancer by setting additionalSecurityGroups.

  spec:
    api:
      loadBalancer:
        type: Public
        additionalSecurityGroups:
        - sg-xxxxxxxx
        - sg-xxxxxxxx

Additionally, you can increase the idle timeout of the load balancer by setting its idleTimeoutSeconds. The default idle timeout is 5 minutes, with a maximum of 3600 seconds (60 minutes) allowed by AWS. For more information see configuring idle timeouts.

  spec:
    api:
      loadBalancer:
        type: Public
        idleTimeoutSeconds: 300

You can use a valid SSL Certificate for your API Server Load Balancer. Currently, only AWS is supported:

  spec:
    api:
      loadBalancer:
        sslCertificate: arn:aws:acm:<region>:<accountId>:certificate/<uuid>

etcdClusters v3 & tls

Although kops doesn't presently default to etcd3, it is possible to turn on both v3 and TLS authentication for communication amongst cluster members. These options may be enabled via the cluster spec (manifests only, i.e. no command-line options as yet). An upfront warning: at present no upgrade path exists for migrating from v2 to v3, so DO NOT try to enable this on a running v2 cluster; it must be done at cluster creation. The example snippet below assumes an HA cluster of three masters.

  etcdClusters:
  - etcdMembers:
    - instanceGroup: master0-az0
      name: a-1
    - instanceGroup: master1-az0
      name: a-2
    - instanceGroup: master0-az1
      name: b-1
    enableEtcdTLS: true
    name: main
    version: 3.0.17
  - etcdMembers:
    - instanceGroup: master0-az0
      name: a-1
    - instanceGroup: master1-az0
      name: a-2
    - instanceGroup: master0-az1
      name: b-1
    enableEtcdTLS: true
    name: events
    version: 3.0.17

Note: The etcd images that kops uses come from the Google Container Registry (gcr). Google doesn't release every version of etcd to the gcr, so check that the version of etcd you want to use is available there before using it in your cluster spec.

By default, the volumes created for the etcd clusters are gp2 and 20GB each. The volume size, type, and IOPS (for io1) can be configured via their parameters. Conversion between gp2 and io1 is not supported, nor are size changes.

  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
      volumeType: gp2
      volumeSize: 20
    name: main
  - etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
      volumeType: io1
      # WARNING: bear in mind that the IOPS to volume size ratio has a maximum of 50 on AWS!
      volumeIops: 100
      volumeSize: 21
    name: events

sshAccess

This array configures the CIDRs that are able to ssh into nodes. On AWS this is manifested as inbound security group rules on the nodes and master security groups.

Use this key to restrict cluster access to an office ip address range, for example.

  spec:
    sshAccess:
    - 12.34.56.78/32

kubernetesApiAccess

This array configures the CIDRs that are able to access the kubernetes API. On AWS this is manifested as inbound security group rules on the ELB or master security groups.

Use this key to restrict cluster access to an office ip address range, for example.

  spec:
    kubernetesApiAccess:
    - 12.34.56.78/32

cluster.spec Subnet Keys

id

ID of a subnet to share in an existing VPC.

egress

The resource identifier (ID) of something in your existing VPC that you would like to use as “egress” to the outside world.

This feature was originally envisioned to allow re-use of NAT gateways. In this case, the usage is as follows. Although NAT gateways are “public”-facing resources, in the Cluster spec, you must specify them in the private subnet section. One way to think about this is that you are specifying “egress”, which is the default route out from this private subnet.

  spec:
    subnets:
    - cidr: 10.20.64.0/21
      name: us-east-1a
      egress: nat-987654321
      type: Private
      zone: us-east-1a
    - cidr: 10.20.32.0/21
      name: utility-us-east-1a
      id: subnet-12345
      type: Utility
      zone: us-east-1a

publicIP

The IP of an existing EIP that you would like to attach to the NAT gateway.

  spec:
    subnets:
    - cidr: 10.20.64.0/21
      name: us-east-1a
      publicIP: 203.93.148.142
      type: Private
      zone: us-east-1a

kubeAPIServer

This block contains configuration for the kube-apiserver.

oidc flags for Open ID Connect Tokens

Read more about this here: https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens

  spec:
    kubeAPIServer:
      oidcIssuerURL: https://your-oidc-provider.svc.cluster.local
      oidcClientID: kubernetes
      oidcUsernameClaim: sub
      oidcUsernamePrefix: "oidc:"
      oidcGroupsClaim: user_roles
      oidcGroupsPrefix: "oidc:"
      oidcCAFile: /etc/kubernetes/ssl/kc-ca.pem

audit logging

Read more about this here: https://kubernetes.io/docs/admin/audit

  spec:
    kubeAPIServer:
      auditLogPath: /var/log/kube-apiserver-audit.log
      auditLogMaxAge: 10
      auditLogMaxBackups: 1
      auditLogMaxSize: 100
      auditPolicyFile: /srv/kubernetes/audit.yaml

Note: auditPolicyFile is required; if the flag is omitted, no events are logged.

You could use the fileAssets feature to push an advanced audit policy file onto the master nodes.

Example policy file can be found here
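
For instance, here is a sketch of a fileAssets entry that places a minimal policy at the path referenced by auditPolicyFile above (the policy content is purely illustrative):

  spec:
    fileAssets:
    - name: audit-policy
      path: /srv/kubernetes/audit.yaml
      roles: [Master]
      content: |
        apiVersion: audit.k8s.io/v1beta1
        kind: Policy
        rules:
        # illustrative rule: log request metadata for everything
        - level: Metadata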

bootstrap tokens

Read more about this here: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/

  spec:
    kubeAPIServer:
      enableBootstrapTokenAuth: true

By enabling this feature you are instructing two things:

  • master nodes will bypass the bootstrap token, but they will build kubeconfigs with unique usernames in the system:nodes group (this ensures the master nodes conform to the node authorization mode, https://kubernetes.io/docs/reference/access-authn-authz/node/)
  • secondly, the nodes will be configured to use a bootstrap token located by default at /var/lib/kubelet/bootstrap-kubeconfig (though this can be overridden in the kubelet spec). The nodes will wait until the bootstrap file is created and, once it is available, attempt to provision the node.

Note: enabling bootstrap tokens does not provision bootstrap tokens for the worker nodes. Under this configuration it is assumed a third-party process is provisioning the tokens on behalf of the worker nodes. For the full setup please read Node Authorizer Service.
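
As a sketch, the default bootstrap kubeconfig location could be overridden via the kubelet spec; the bootstrapKubeconfig field name assumes the current kops kubelet API, and the path shown is illustrative:

  spec:
    kubelet:
      # hypothetical custom location for the bootstrap kubeconfig
      bootstrapKubeconfig: /etc/kubernetes/bootstrap-kubeconfig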

Max Requests Inflight

The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)

  spec:
    kubeAPIServer:
      maxRequestsInflight: 1000

The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)

  spec:
    kubeAPIServer:
      maxMutatingRequestsInflight: 450

runtimeConfig

Keys and values here are translated into --runtime-config values for kube-apiserver, separated by commas.

Use this to enable alpha features, for example:

  spec:
    kubeAPIServer:
      runtimeConfig:
        batch/v2alpha1: "true"
        apps/v1alpha1: "true"

Will result in the flag --runtime-config=batch/v2alpha1=true,apps/v1alpha1=true. Note that kube-apiserver accepts true as a value for switch-like flags.

serviceNodePortRange

This value is passed as --service-node-port-range for kube-apiserver.

  spec:
    kubeAPIServer:
      serviceNodePortRange: 30000-33000

Disable Basic Auth

This will disable the passing of the --basic-auth-file flag.

  spec:
    kubeAPIServer:
      disableBasicAuth: true

targetRamMb

Memory limit for apiserver in MB (used to configure sizes of caches, etc.)

  spec:
    kubeAPIServer:
      targetRamMb: 4096

externalDns

This block contains configuration options for your external-DNS provider. The current external-DNS provider is the kops dns-controller, which can set up DNS records for Kubernetes resources. dns-controller is scheduled to be phased out and replaced with external-dns.

  spec:
    externalDns:
      watchIngress: true

The default kops behavior is false. watchIngress: true uses the default dns-controller behavior, which is to watch the ingress controller for changes. Setting this option carries the risk of interrupting Service updates in some cases.

kubelet

This block contains configurations for kubelet. See https://kubernetes.io/docs/admin/kubelet/

NOTE: Where the corresponding configuration value can be empty, fields can be set to empty in the spec, and an empty string will be passed as the configuration value.

  spec:
    kubelet:
      resolvConf: ""

Will result in the flag --resolv-conf= being built.

Enable Custom metrics support

To use custom metrics in Kubernetes as per the custom metrics doc, we have to set the flag --enable-custom-metrics to true on all the kubelets. We can specify that in the kubelet spec in our cluster.yml.

  spec:
    kubelet:
      enableCustomMetrics: true

Setting kubelet configurations together with the Amazon VPC backend

Setting kubelet configurations together with the networking Amazon VPC backend requires also setting cloudProvider: aws in this block. Example:

  spec:
    kubelet:
      enableCustomMetrics: true
      cloudProvider: aws
    ...
    ...
    cloudProvider: aws
    ...
    ...
    networking:
      amazonvpc: {}

kubeScheduler

This block contains configurations for kube-scheduler. See https://kubernetes.io/docs/admin/kube-scheduler/

  spec:
    kubeScheduler:
      usePolicyConfigMap: true

Will make kube-scheduler use the scheduler policy from configmap “scheduler-policy” in namespace kube-system.

Note that as of Kubernetes 1.8.0 kube-scheduler does not reload its configuration from configmap automatically. You will need to ssh into the master instance and restart the Docker container manually.
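
For reference, here is a minimal sketch of such a ConfigMap, assuming the conventional policy.cfg data key and the kube-scheduler Policy format (the predicates and priorities listed are purely illustrative):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: scheduler-policy
    namespace: kube-system
  data:
    policy.cfg: |
      {
        "kind": "Policy",
        "apiVersion": "v1",
        "predicates": [{"name": "PodFitsResources"}, {"name": "MatchNodeSelector"}],
        "priorities": [{"name": "LeastRequestedPriority", "weight": 1}]
      }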

kubeDNS

This block contains configurations for kube-dns.

  spec:
    kubeDNS:
      provider: KubeDNS

Specifying KubeDNS will install kube-dns as the default service discovery.

  spec:
    kubeDNS:
      provider: CoreDNS

This will install CoreDNS instead of kube-dns.

kubeControllerManager

This block contains configurations for the controller-manager.

  spec:
    kubeControllerManager:
      horizontalPodAutoscalerSyncPeriod: 15s
      horizontalPodAutoscalerDownscaleDelay: 5m0s
      horizontalPodAutoscalerUpscaleDelay: 3m0s

For more details on horizontalPodAutoscaler flags see the official HPA docs and the Kops guides on how to set it up.

Feature Gates

  spec:
    kubelet:
      featureGates:
        Accelerators: "true"
        AllowExtTrafficLocalEndpoints: "false"

Will result in the flag --feature-gates=Accelerators=true,AllowExtTrafficLocalEndpoints=false.

NOTE: Feature gate ExperimentalCriticalPodAnnotation is enabled by default because some critical components like kube-proxy depend on its presence.

Compute Resources Reservation

  spec:
    kubelet:
      kubeReserved:
        cpu: "100m"
        memory: "100Mi"
        ephemeral-storage: "1Gi"
      kubeReservedCgroup: "/kube-reserved"
      systemReserved:
        cpu: "100m"
        memory: "100Mi"
        ephemeral-storage: "1Gi"
      systemReservedCgroup: "/system-reserved"
      enforceNodeAllocatable: "pods,system-reserved,kube-reserved"

Will result in the flag --kube-reserved=cpu=100m,memory=100Mi,ephemeral-storage=1Gi --kube-reserved-cgroup=/kube-reserved --system-reserved=cpu=100m,memory=100Mi,ephemeral-storage=1Gi --system-reserved-cgroup=/system-reserved --enforce-node-allocatable=pods,system-reserved,kube-reserved

Learn more about reserving compute resources.

networkID

On AWS, this is the ID of the VPC the cluster is created in. If you are creating a cluster from scratch, this field does not need to be specified at create time; kops will create a VPC for you.

  spec:
    networkID: vpc-abcdefg1

More information about running in an existing VPC is here.

hooks

Hooks allow for the execution of an action before the installation of Kubernetes on every node in a cluster. For instance, you can install Nvidia drivers for using GPUs. These hooks can take the form of Docker images or manifest files (systemd units). Hooks can be placed in either the cluster spec, meaning they will be deployed globally, or in the instanceGroup specification. Note: service names in the instanceGroup which overlap with the cluster spec take precedence and ignore the cluster spec definition, i.e. if you have a unit file 'myunit.service' in the cluster spec and then one in the instanceGroup, only the instanceGroup one is applied.

When creating a systemd unit hook using the manifest field, the hook system will construct a systemd unit file for you. It creates the [Unit] section, adding an automated description and setting Before and Requires values based on the before and requires fields. The value of the manifest field is used as the [Service] section of the unit file. To override this behavior, and instead specify the entire unit file yourself, you may specify useRawManifest: true. In this case, the contents of the manifest field will be used as a systemd unit, unmodified. The before and requires fields may not be used together with useRawManifest.

  spec:
    # many sections removed

    # run a docker container as a hook
    hooks:
    - before:
      - some_service.service
      requires:
      - docker.service
      execContainer:
        image: kopeio/nvidia-bootstrap:1.6
        # these are added as -e to the docker environment
        environment:
          AWS_REGION: eu-west-1
          SOME_VAR: SOME_VALUE

    # or construct a systemd unit
    hooks:
    - name: iptable-restore.service
      roles:
      - Node
      - Master
      before:
      - kubelet.service
      manifest: |
        EnvironmentFile=/etc/environment
        # do some stuff

    # or use a raw systemd unit
    hooks:
    - name: iptable-restore.service
      roles:
      - Node
      - Master
      useRawManifest: true
      manifest: |
        [Unit]
        Description=Restore iptables rules
        Before=kubelet.service
        [Service]
        EnvironmentFile=/etc/environment
        # do some stuff

    # or disable a systemd unit
    hooks:
    - name: update-engine.service
      disabled: true

    # or you could wrap this into a full unit
    hooks:
    - name: disable-update-engine.service
      before:
      - update-engine.service
      manifest: |
        Type=oneshot
        ExecStart=/usr/bin/systemctl stop update-engine.service

Install Ceph

  spec:
    # many sections removed
    hooks:
    - execContainer:
        command:
        - sh
        - -c
        - chroot /rootfs apt-get update && chroot /rootfs apt-get install -y ceph-common
        image: busybox

Install cachefilesd

  spec:
    # many sections removed
    hooks:
    - before:
      - kubelet.service
      manifest: |
        Type=oneshot
        ExecStart=/sbin/modprobe cachefiles
      name: cachefiles.service
    - execContainer:
        command:
        - sh
        - -c
        - chroot /rootfs apt-get update && chroot /rootfs apt-get install -y cachefilesd && chroot /rootfs sed -i s/#RUN/RUN/ /etc/default/cachefilesd && chroot /rootfs service cachefilesd restart
        image: busybox

fileAssets

fileAssets is an alpha feature which permits you to place inline file content into the cluster and instanceGroup specifications. It is designated as alpha because you can probably achieve the same via Kubernetes daemonsets instead.

  spec:
    fileAssets:
    - name: iptable-restore
      # Note: if no path is specified, the default path is /srv/kubernetes/assets/<name>
      path: /var/lib/iptables/rules-save
      roles: [Master,Node,Bastion] # a list of roles to apply the asset to; zero roles defaults to all
      content: |
        some file content

cloudConfig

disableSecurityGroupIngress

If you are using aws as cloudProvider, you can disable the authorization of the ELB security group to the Kubernetes nodes security group. In other words, it will not add a security group rule. This can be useful to avoid the AWS limit of 50 rules per security group.

  spec:
    cloudConfig:
      disableSecurityGroupIngress: true

elbSecurityGroup

WARNING: this works only for Kubernetes versions above 1.7.0.

To avoid creating a security group per ELB, you can specify a security group ID to be assigned to your LoadBalancer. It must be a security group ID, not a name. api.loadBalancer.additionalSecurityGroups must be empty, because Kubernetes will add rules per port specified in the service file. This can be useful to avoid AWS limits: 500 security groups per region and 50 rules per security group.

  spec:
    cloudConfig:
      elbSecurityGroup: sg-123445678

docker

It is possible to override Docker daemon options for all masters and nodes in the cluster. See the API docs for the full list of options.
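
For example, assuming the docker spec exposes logDriver and logLevel fields (check the API docs linked above before relying on them), a sketch might be:

  spec:
    docker:
      # route container logs through the json-file driver, logging warnings and above
      logDriver: json-file
      logLevel: warn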

registryMirrors

If you have a bunch of Docker instances (physical or VM) running, each time one of them pulls an image that is not present on the host, it will fetch it from the internet (DockerHub). By caching these images, you can keep the traffic within your local network and avoid egress bandwidth usage. This setting benefits not only cluster provisioning but also image pulling.

See Cache-Mirror Dockerhub For Speed and Configure the Docker daemon.

  spec:
    docker:
      registryMirrors:
      - https://registry.example.com

storage

The Docker Storage Driver can be specified in order to override the default. Be sure the driver you choose is supported by your operating system and docker version.

  docker:
    storage: devicemapper
    storageOpts:
    - "dm.thinpooldev=/dev/mapper/thin-pool"
    - "dm.use_deferred_deletion=true"
    - "dm.use_deferred_removal=true"

sshKeyName

In some cases, it may be desirable to use an existing AWS SSH key instead of allowing kops to create a new one. Providing the name of a key already in AWS is an alternative to --ssh-public-key.

  spec:
    sshKeyName: myexistingkey

target

In some use cases you may wish to augment the target output with extra options. target supports a minimal set of options for this. Currently only the terraform target supports this, but if other use cases present themselves, kops may eventually support more.

  spec:
    target:
      terraform:
        providerExtraConfig:
          alias: foo

assets

Assets define alternative locations from which to retrieve static files and containers.

containerRegistry

The container registry enables kops / kubernetes to pull containers from a managed registry. This is useful when pulling containers from the internet is not an option, e.g. because the deployment is offline / internet restricted, or because of special requirements that apply to deployed artifacts, e.g. auditing of containers.

For a use case example, see How to use kops in AWS China Region

  spec:
    assets:
      containerRegistry: example.com/registry

containerProxy

The container proxy is designed to act as a pull-through cache for docker container assets. Basically, it remaps the Kubernetes image URL to point to your cache so that the docker daemon will pull the image from that location. If, for example, the containerProxy is set to proxy.example.com, the image k8s.gcr.io/kube-apiserver will be pulled from proxy.example.com/kube-apiserver instead. Note that the proxy you use has to support this feature for private registries.

  spec:
    assets:
      containerProxy: proxy.example.com