Using A Manifest to Manage kops Clusters

This document also applies to using the kops API to customize a Kubernetes cluster with or without using YAML or JSON.

Background

We like to think of it as kubectl for Clusters.

Because kops treats clusters much as kubectl treats Kubernetes resources, it includes an API that lets users manage their kops-created Kubernetes installations with YAML or JSON manifests. In the same way that you can use a YAML manifest to deploy a Job, you can deploy and manage a kops Kubernetes cluster with a manifest. All of these values are also editable interactively with kops edit.

You can see all of the options that kops currently supports in the kops API definition, or in a more readable form in the generated API documentation.

The following are some benefits of using a file to manage clusters:

  • Access API values that are not available via the command line, such as setting the max price for spot instances.
  • Create, replace, update, and delete clusters without entering an interactive editor, which is helpful when automating cluster creation.
  • Check files into source control that represent an installation.
  • Run commands such as kops delete -f mycluster.yaml.
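
For example, the full file-based lifecycle can be driven from the command line; mycluster.yaml here is a placeholder for your own manifest:

```shell
kops create -f mycluster.yaml        # register a new cluster definition in the state store
kops replace -f mycluster.yaml       # replace the stored definition after editing the file
kops delete -f mycluster.yaml --yes  # delete the cluster described in the file
```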

Exporting a Cluster

At this time you must run kops create cluster and then export the YAML, either from the state store or, as below, with a dry run. We plan in the future to add the capability to generate kops YAML directly via the command line. The following is an example of creating a cluster and exporting the YAML.

```shell
export NAME=k8s.example.com
export KOPS_STATE_STORE=s3://example-state-store
kops create cluster $NAME \
    --zones "us-east-2b,us-east-2c,us-east-2d" \
    --master-zones "us-east-2b,us-east-2c,us-east-2d" \
    --networking weave \
    --topology private \
    --bastion \
    --node-count 3 \
    --node-size m4.xlarge \
    --kubernetes-version v1.6.6 \
    --master-size m4.large \
    --vpc vpc-6335dd1a \
    --dry-run \
    -o yaml > $NAME.yaml
```

The above command exports a YAML document which contains the definition of the cluster, kind: Cluster, and the definitions of the instance groups, kind: InstanceGroup.

NOTE: If you run kops get cluster $NAME -o yaml > $NAME.yaml, you will get only the cluster spec. Use kops get $NAME -o yaml > $NAME.yaml to export both the cluster spec and all instance groups.

The following is the contents of the exported YAML file.

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-05-04T23:21:47Z
  name: k8s.example.com
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://example-state-store/k8s.example.com
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-2d
      name: a
    - instanceGroup: master-us-east-2b
      name: b
    - instanceGroup: master-us-east-2c
      name: c
    name: main
  - etcdMembers:
    - instanceGroup: master-us-east-2d
      name: a
    - instanceGroup: master-us-east-2b
      name: b
    - instanceGroup: master-us-east-2c
      name: c
    name: events
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.6.6
  masterPublicName: api.k8s.example.com
  networkCIDR: 172.20.0.0/16
  networkID: vpc-6335dd1a
  networking:
    weave: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-east-2d
    type: Private
    zone: us-east-2d
  - cidr: 172.20.64.0/19
    name: us-east-2b
    type: Private
    zone: us-east-2b
  - cidr: 172.20.96.0/19
    name: us-east-2c
    type: Private
    zone: us-east-2c
  - cidr: 172.20.0.0/22
    name: utility-us-east-2d
    type: Utility
    zone: us-east-2d
  - cidr: 172.20.4.0/22
    name: utility-us-east-2b
    type: Utility
    zone: us-east-2b
  - cidr: 172.20.8.0/22
    name: utility-us-east-2c
    type: Utility
    zone: us-east-2c
  topology:
    bastion:
      bastionPublicName: bastion.k8s.example.com
    dns:
      type: Public
    masters: private
    nodes: private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: bastions
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  role: Bastion
  subnets:
  - utility-us-east-2d
  - utility-us-east-2b
  - utility-us-east-2c

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:47Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-2d
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-2d

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:47Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-2b
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-2b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-2c
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-2c

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: nodes
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.xlarge
  maxSize: 3
  minSize: 3
  role: Node
  subnets:
  - us-east-2d
  - us-east-2b
  - us-east-2c
```

YAML Examples

With the above YAML file, a user can add configurations that are not available via the command line. For instance, you can add a maxPrice value to a new instance group to use spot instances, and add node and cloud labels for it.

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: my-crazy-big-nodes
spec:
  nodeLabels:
    spot: "true"
  cloudLabels:
    team: example
    project: ion
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.10xlarge
  maxSize: 42
  minSize: 42
  maxPrice: "0.35"
  role: Node
  subnets:
  - us-east-2c
```

This configuration creates an Auto Scaling group of 42 m4.10xlarge nodes running as spot instances with custom labels.

To create the cluster, execute:

```shell
kops create -f $NAME.yaml
kops create secret --name $NAME sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
```

Please refer to the rolling-update documentation.

To update the cluster, edit the cluster spec YAML file and run:

```shell
kops replace -f $NAME.yaml
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
```


Further References

kops implements a full API that defines the various elements in the YAML file exported above. There are two top-level components: ClusterSpec and InstanceGroup.

Cluster Spec

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-05-04T23:21:47Z
  name: k8s.example.com
spec:
  api:
```

Full documentation is accessible via godoc.

The ClusterSpec allows a user to set configurations for such values as Docker log driver, Kubernetes API server log level, VPC for reusing a VPC (NetworkID), and the Kubernetes version.
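
The fields mentioned above look roughly like this in a ClusterSpec. This is an illustrative sketch: the values are examples only, and the field names follow the v1alpha2 API.

```yaml
spec:
  docker:
    logDriver: json-file     # Docker log driver
  kubeAPIServer:
    logLevel: 2              # Kubernetes API server log level
  networkID: vpc-6335dd1a    # reuse an existing VPC
  kubernetesVersion: 1.6.6
```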

More information about individual elements of the ClusterSpec is available elsewhere in the kops documentation.

To access the full configuration that a kops installation is running, execute:

```shell
kops get cluster $NAME --full -o yaml
```

This command prints the entire YAML configuration. Do not use the full document as the basis for your manifest, however; you may experience strange and unwanted behavior.

Instance Groups

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  name: foo
spec:
```

Full documentation is accessible via godoc.

Instance Groups map to Auto Scaling Groups in AWS and to Instance Groups in GCE. They are an API-level description of a group of compute instances used as masters or nodes.

More documentation is available in the Instance Group document.
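
As an illustration, scaling the nodes instance group is just an edit to its spec, followed by the kops replace / kops update / kops rolling-update flow shown earlier. The sizes below are example values:

```yaml
spec:
  machineType: m4.xlarge
  maxSize: 5
  minSize: 5
```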

Closing Thoughts

Using YAML or JSON-based configuration for building and managing kops clusters is powerful, but use this strategy with caution.

  • If you do not need to define or customize a value, let kops set it. Setting too many values prevents kops from doing its job of configuring the cluster, and you may end up with strange bugs.
  • If you end up with strange bugs, try letting kops do more.
  • Be cautious, take care, and test outside of production!

If you need to run a custom version of the Kubernetes controller manager, set kubeControllerManager.image and update your cluster. This is the beauty of using a manifest for your cluster!
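
A minimal sketch of that change, assuming the v1alpha2 field names used above (the image name is a placeholder):

```yaml
spec:
  kubeControllerManager:
    image: registry.example.com/kube-controller-manager:v1.6.6-custom  # placeholder image
```

Apply it with kops replace -f $NAME.yaml followed by kops update cluster $NAME --yes.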