Migrating from single to multi-master

Switching from a single-master to a multi-master Kubernetes cluster is an entirely graceful procedure when using etcd-manager. If you are still using legacy etcd, you need to migrate to etcd-manager first.

Create instance groups

Create new subnets

Start out by deciding which availability zones you want to deploy the masters to. You can only have one master per availability zone.

Then you need to add new subnets for those availability zones. Which subnets you need to add depends on which topology you have chosen. The simplest approach is to copy the sections you already have. Make sure that you add a subnet of each type per zone: e.g., if you have a private and a utility subnet, you need to copy both.

```shell
kops get cluster -o yaml > mycluster.yaml
```

Change the subnets section to look something like this:

```yaml
- cidr: 172.20.32.0/19
  name: eu-west-1a
  type: Private
  zone: eu-west-1a
- cidr: 172.20.64.0/19
  name: eu-west-1b
  type: Private
  zone: eu-west-1b
- cidr: 172.20.96.0/19
  name: eu-west-1c
  type: Private
  zone: eu-west-1c
- cidr: 172.20.0.0/22
  name: utility-eu-west-1a
  type: Utility
  zone: eu-west-1a
- cidr: 172.20.4.0/22
  name: utility-eu-west-1b
  type: Utility
  zone: eu-west-1b
- cidr: 172.20.8.0/22
  name: utility-eu-west-1c
  type: Utility
  zone: eu-west-1c
```
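The new CIDR blocks must not overlap with each other or with any existing subnets. As a quick sanity check, the layout above can be reproduced and verified with Python's standard-library `ipaddress` module (a sketch; the 172.20.0.0/16 VPC range is taken from the example, not from your cluster):

```python
import ipaddress

# The example VPC range; substitute your cluster's networkCIDR.
vpc = ipaddress.ip_network("172.20.0.0/16")
zones = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]

# /19 private subnets starting at 172.20.32.0 -- the first /19
# (172.20.0.0/19) is reserved to hold the small /22 utility subnets.
private = list(vpc.subnets(new_prefix=19))[1:1 + len(zones)]
# /22 utility subnets carved from the start of the VPC range.
utility = list(vpc.subnets(new_prefix=22))[:len(zones)]

for zone, p, u in zip(zones, private, utility):
    print(zone, p, u)

# Sanity check: no two of the planned subnets overlap.
planned = private + utility
assert not any(
    a.overlaps(b)
    for i, a in enumerate(planned)
    for b in planned[i + 1:]
)
```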

Create new master instance groups

The next step is to create two new instance groups, one for each new master.

```shell
kops create instancegroup master-<subnet name> --subnet <subnet name> --role Master
```

Example:

```shell
kops create ig master-us-west-1d --subnet us-west-1d --role Master
```

This command will bring up an editor with the default values. Ensure that:

  • maxSize and minSize are 1
  • only one zone is listed
  • you have the correct image and machine type
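For reference, the saved instance group spec should end up looking roughly like the following (the cluster name, image, and machine type are illustrative placeholders, not values from this guide):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: example.com   # your cluster name
  name: master-us-west-1d
spec:
  image: <same image as the existing master>
  machineType: m5.large                # illustrative
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-west-1d                         # exactly one zone
```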

Reference the new masters in your cluster configuration

Bring up mycluster.yaml again to add etcd members for each of the new masters.

```shell
$EDITOR mycluster.yaml
```

In .spec.etcdClusters, add two new members to each cluster, one for each new availability zone:

```yaml
- instanceGroup: master-<availability-zone2>
  name: <availability-zone2-name>
- instanceGroup: master-<availability-zone3>
  name: <availability-zone3-name>
```

Example:

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-eu-west-1a
    name: a
  - instanceGroup: master-eu-west-1b
    name: b
  - instanceGroup: master-eu-west-1c
    name: c
  name: main
- etcdMembers:
  - instanceGroup: master-eu-west-1a
    name: a
  - instanceGroup: master-eu-west-1b
    name: b
  - instanceGroup: master-eu-west-1c
    name: c
  name: events
```

Update Cluster to launch new masters

Update the cluster spec and apply the config by running the following:

```shell
kops replace -f mycluster.yaml
kops update cluster example.com
kops update cluster example.com --yes
```

This will launch the two new masters. You will also need to roll the old master so that it can join the new etcd cluster.
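Three masters (rather than two) is the target because etcd requires a strict majority of members to maintain quorum: a cluster of n members needs n // 2 + 1 of them healthy. A quick sketch of the arithmetic:

```python
# Quorum arithmetic for etcd: a cluster of n members needs a strict
# majority (n // 2 + 1) to keep serving, so it tolerates the loss of
# n - (n // 2 + 1) members.
def quorum(n: int) -> int:
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    return n - quorum(n)

print(fault_tolerance(1))  # 0 -> a single master: any loss is an outage
print(fault_tolerance(2))  # 0 -> two masters add no resilience
print(fault_tolerance(3))  # 1 -> three masters survive losing one
```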

After about 5 minutes all three masters should have found each other. Run the following to ensure everything is running as expected.

```shell
kops validate cluster --wait 10m
```

While rotating the original master is not strictly necessary, kOps will say it needs updating because of the configuration change.

```shell
kops rolling-update cluster --yes
```