Host-level tasks

Adding a host to the cluster

For information on adding master or node hosts to a cluster, see the Adding hosts to an existing cluster section in the Install and configuration guide.

Master host tasks

Deprecating a master host

Deprecating a master host removes it from the OKD environment.

Reasons to deprecate or scale down master hosts include hardware resizing or replacing the underlying infrastructure.

Highly available OKD environments require at least three master hosts and three etcd nodes. Usually, the master hosts are colocated with the etcd services. If you deprecate a master host, you also remove the etcd static pods from that host.

Ensure that the master and etcd services are always deployed in odd numbers due to the voting mechanisms that take place among those services.
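The arithmetic behind this guidance: a cluster of N members needs a majority, N/2 + 1 (integer division), to reach quorum, so adding a fourth member gives no more fault tolerance than three. A quick shell sketch of the math:

```shell
# Quorum math behind the odd-member recommendation: a cluster of N members
# needs a majority (N/2 + 1, integer division) to make progress, so it
# tolerates N - (N/2 + 1) member failures.
quorum() { echo $(( $1 / 2 + 1 )); }
tolerated_failures() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 3 4 5; do
  echo "$n members: quorum $(quorum $n), tolerates $(tolerated_failures $n) failure(s)"
done
# 3 members: quorum 2, tolerates 1 failure(s)
# 4 members: quorum 3, tolerates 1 failure(s)
# 5 members: quorum 3, tolerates 2 failure(s)
```

An even-sized cluster only adds a voter that can cause ties, which is why both masters and etcd are deployed in odd numbers.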

Creating a master host backup

Perform this backup process before any change to the OKD infrastructure, such as a system update, upgrade, or any other significant modification. Back up data regularly to ensure that recent data is available if a failure occurs.

OKD files

The master instances run important services, such as the API server and controllers. The /etc/origin/master directory stores many important files:

  • Configuration files for the API server, controllers, services, and more

  • Certificates generated by the installation

  • All cloud provider-related configuration

  • Keys and other authentication files, such as htpasswd if you use htpasswd

  • And more

You can customize OKD services, such as increasing the log level or using proxies. The configuration files are stored in the /etc/sysconfig directory.

Because the masters are also nodes, back up the entire /etc/origin directory.

Procedure

You must perform the following steps on each master node.

  1. Create a backup of the pod definitions, located in the /etc/origin/node/pods directory.

  2. Create a backup of the master host configuration files:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    3. $ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
    4. $ sudo cp -aR /etc/sysconfig/ ${MYBACKUPDIR}/etc/sysconfig/

    The master configuration file is /etc/origin/master/master-config.yaml.

    The /etc/origin/master/ca.serial.txt file is generated on only the first master listed in the Ansible host inventory. If you deprecate the first master host, copy the /etc/origin/master/ca.serial.txt file to the rest of master hosts before the process.

    In OKD 3.11 clusters running multiple masters, one of the master nodes includes additional CA certificates in /etc/origin/master, /etc/etcd/ca and /etc/etcd/generated_certs. These are required for application node and etcd node scale-up operations and would need to be restored on another master node should the originating master become permanently unavailable. These directories are included by default within the backup procedures documented here.

  3. Other important files that need to be considered when planning a backup include:

    File

    Description

    /etc/cni/

    Container Network Interface configuration (if used)

    /etc/sysconfig/iptables

    Where the iptables rules are stored

    /etc/sysconfig/docker-storage-setup

    The input file for the container-storage-setup command

    /etc/sysconfig/docker

    The docker configuration file

    /etc/sysconfig/docker-network

    docker networking configuration (for example, MTU)

    /etc/sysconfig/docker-storage

    docker storage configuration (generated by container-storage-setup)

    /etc/dnsmasq.conf

    Main configuration file for dnsmasq

    /etc/dnsmasq.d/

    Different dnsmasq configuration files

    /etc/sysconfig/flanneld

    flannel configuration file (if used)

    /etc/pki/ca-trust/source/anchors/

    Certificates added to the system (for example, for external registries)

    Create a backup of those files:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    3. $ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
    4. $ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
    5. ${MYBACKUPDIR}/etc/sysconfig/
    6. $ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
    7. $ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
    8. ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
  4. If a package is accidentally removed, or you need to restore a file that is included in an RPM package, a list of the RPM packages installed on the system can be useful.

    If you use Red Hat Satellite features, such as content views or the facts store, they can provide a mechanism to reinstall missing packages and a historical record of the packages installed on the systems.

    To create a list of the RPM packages currently installed on the system:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo mkdir -p ${MYBACKUPDIR}
    3. $ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
  5. If you performed the previous steps, the following files are present in the backup directory:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
    3. etc/sysconfig/flanneld
    4. etc/sysconfig/iptables
    5. etc/sysconfig/docker-network
    6. etc/sysconfig/docker-storage
    7. etc/sysconfig/docker-storage-setup
    8. etc/sysconfig/docker-storage-setup.rpmnew
    9. etc/origin/master/ca.crt
    10. etc/origin/master/ca.key
    11. etc/origin/master/ca.serial.txt
    12. etc/origin/master/ca-bundle.crt
    13. etc/origin/master/master.proxy-client.crt
    14. etc/origin/master/master.proxy-client.key
    15. etc/origin/master/service-signer.crt
    16. etc/origin/master/service-signer.key
    17. etc/origin/master/serviceaccounts.private.key
    18. etc/origin/master/serviceaccounts.public.key
    19. etc/origin/master/openshift-master.crt
    20. etc/origin/master/openshift-master.key
    21. etc/origin/master/openshift-master.kubeconfig
    22. etc/origin/master/master.server.crt
    23. etc/origin/master/master.server.key
    24. etc/origin/master/master.kubelet-client.crt
    25. etc/origin/master/master.kubelet-client.key
    26. etc/origin/master/admin.crt
    27. etc/origin/master/admin.key
    28. etc/origin/master/admin.kubeconfig
    29. etc/origin/master/etcd.server.crt
    30. etc/origin/master/etcd.server.key
    31. etc/origin/master/master.etcd-client.key
    32. etc/origin/master/master.etcd-client.csr
    33. etc/origin/master/master.etcd-client.crt
    34. etc/origin/master/master.etcd-ca.crt
    35. etc/origin/master/policy.json
    36. etc/origin/master/scheduler.json
    37. etc/origin/master/htpasswd
    38. etc/origin/master/session-secrets.yaml
    39. etc/origin/master/openshift-router.crt
    40. etc/origin/master/openshift-router.key
    41. etc/origin/master/registry.crt
    42. etc/origin/master/registry.key
    43. etc/origin/master/master-config.yaml
    44. etc/origin/generated-configs/master-master-1.example.com/master.server.crt
    45. ...[OUTPUT OMITTED]...
    46. etc/origin/cloudprovider/openstack.conf
    47. etc/origin/node/system:node:master-0.example.com.crt
    48. etc/origin/node/system:node:master-0.example.com.key
    49. etc/origin/node/ca.crt
    50. etc/origin/node/system:node:master-0.example.com.kubeconfig
    51. etc/origin/node/server.crt
    52. etc/origin/node/server.key
    53. etc/origin/node/node-dnsmasq.conf
    54. etc/origin/node/resolv.conf
    55. etc/origin/node/node-config.yaml
    56. etc/origin/node/flannel.etcd-client.key
    57. etc/origin/node/flannel.etcd-client.csr
    58. etc/origin/node/flannel.etcd-client.crt
    59. etc/origin/node/flannel.etcd-ca.crt
    60. etc/pki/ca-trust/source/anchors/openshift-ca.crt
    61. etc/pki/ca-trust/source/anchors/registry-ca.crt
    62. etc/dnsmasq.conf
    63. etc/dnsmasq.d/origin-dns.conf
    64. etc/dnsmasq.d/origin-upstream-dns.conf
    65. etc/dnsmasq.d/node-dnsmasq.conf
    66. packages.txt

    If needed, you can compress the files to save space:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
    3. $ sudo rm -Rf ${MYBACKUPDIR}
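Before deleting the uncompressed directory, it can be worth confirming that the key files actually landed in the backup. A minimal sketch; the helper function and the file list are illustrative, not part of the official procedure:

```shell
# Sketch: verify that expected files exist under a backup directory.
check_backup() {
  dir=$1; shift
  for f in "$@"; do
    [ -f "$dir/$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "backup looks complete"
}

# Example against a throwaway directory standing in for ${MYBACKUPDIR}:
demo=$(mktemp -d)
mkdir -p "$demo/etc/origin/master"
touch "$demo/etc/origin/master/master-config.yaml" "$demo/packages.txt"
check_backup "$demo" etc/origin/master/master-config.yaml packages.txt
# backup looks complete
```

On a real master you would pass ${MYBACKUPDIR} and the paths from the listing above instead of the throwaway directory.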

To automate the previous steps, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs them for you. The script creates a directory on the host where you run the script and copies all the files previously mentioned.

The openshift-ansible-contrib script is not supported by Red Hat, but the reference architecture team performs testing to ensure the code operates as defined and is secure.

You can run the script on every master host with:

  1. $ mkdir ~/git
  2. $ cd ~/git
  3. $ git clone https://github.com/openshift/openshift-ansible-contrib.git
  4. $ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
  5. $ ./backup_master_node.sh -h

Backing up etcd

When you back up etcd, you must back up both the etcd configuration files and the etcd data.

Backing up etcd configuration files

The etcd configuration files to be preserved are all stored in the /etc/etcd directory of the instances where etcd is running. This includes the etcd configuration file (/etc/etcd/etcd.conf) and the required certificates for cluster communication. All those files are generated at installation time by the Ansible installer.

Procedure

For each etcd member of the cluster, back up the etcd configuration.

  1. $ ssh master-0
  2. # mkdir -p /backup/etcd-config-$(date +%Y%m%d)/
  3. # cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/

The certificates and configuration files on each etcd cluster member are unique.

Backing up etcd data
Prerequisites

The OKD installer creates aliases named etcdctl2, for etcd v2 tasks, and etcdctl3, for etcd v3 tasks, so that you do not have to type all the required flags.

However, the etcdctl3 alias does not pass the full endpoint list to the etcdctl command, so you must specify the --endpoints option and list all the endpoints.
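To avoid retyping that list, you can build the endpoint string once and reuse it. A sketch, assuming the three example etcd member hostnames used throughout this topic:

```shell
# Sketch: assemble an --endpoints value from etcd member hostnames.
ETCD_HOSTS="master-0.example.com master-1.example.com master-2.example.com"
ETCD_ENDPOINTS=$(for h in $ETCD_HOSTS; do printf 'https://%s:2379,' "$h"; done)
ETCD_ENDPOINTS=${ETCD_ENDPOINTS%,}   # trim the trailing comma

echo "$ETCD_ENDPOINTS"
# https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379

# On a cluster member you could then run, for example:
# etcdctl3 --endpoints "$ETCD_ENDPOINTS" endpoint health
```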

Before backing up etcd:

  • etcdctl binaries must be available or, in containerized installations, the rhel7/etcd container must be available.

  • Ensure that the OKD API service is running.

  • Ensure connectivity with the etcd cluster (port 2379/tcp).

  • Ensure the proper certificates to connect to the etcd cluster.

  • Ensure go is installed.

Procedure

While the etcdctl backup command is used to perform the backup, etcd v3 has no concept of a backup. Instead, you either take a snapshot from a live member with the etcdctl snapshot save command or copy the member/snap/db file from an etcd data directory.

The etcdctl backup command rewrites some of the metadata contained in the backup, specifically, the node ID and cluster ID, which means that in the backup, the node loses its former identity. To recreate a cluster from the backup, you create a new, single-node cluster, then add the rest of the nodes to the cluster. The metadata is rewritten to prevent the new node from joining an existing cluster.

Back up the etcd data:

Clusters upgraded from previous versions of OKD might contain v2 data stores. Back up all etcd data stores.

  1. Make a snapshot of the etcd node:

    1. # systemctl show etcd --property=ActiveState,SubState
    2. # mkdir -p /var/lib/etcd/backup/etcd-$(date +%Y%m%d) (1)
    3. # etcdctl3 snapshot save /var/lib/etcd/backup/etcd-$(date +%Y%m%d)/db
    (1) You must write the snapshot to a directory under /var/lib/etcd/.

    The etcdctl snapshot save command requires the etcd service to be running.

  2. Stop all etcd services by removing the etcd pod definition and rebooting the host:

    1. # mkdir -p /etc/origin/node/pods-stopped
    2. # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  3. Create the etcd data backup and copy the etcd db file:

    1. # etcdctl2 backup \
    2. --data-dir /var/lib/etcd \
    3. --backup-dir /backup/etcd-$(date +%Y%m%d)

    A /backup/etcd-<date>/ directory is created, where <date> represents the current date. Store this directory on external storage, such as an NFS share, an S3 bucket, or another external storage location.

    In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.

    • If etcd runs as a static pod, run the following commands:

      If you use static pods, use the v3 API.

  1. Obtain the etcd endpoint IP address from the static pod manifest:

    1. $ export ETCD_POD_MANIFEST="/etc/origin/node/pods/etcd.yaml"
    2. $ export ETCD_EP=$(grep https ${ETCD_POD_MANIFEST} | cut -d '/' -f3)
  2. Obtain the etcd pod name:

    1. $ oc login -u system:admin
    2. $ export ETCD_POD=$(oc get pods -n kube-system | grep -o -m 1 '\S*etcd\S*')
  3. Take a snapshot of the etcd data in the pod and store it locally:

    1. $ oc project kube-system
    2. $ oc exec ${ETCD_POD} -c etcd -- /bin/bash -c "ETCDCTL_API=3 etcdctl \
    3. --cert /etc/etcd/peer.crt \
    4. --key /etc/etcd/peer.key \
    5. --cacert /etc/etcd/ca.crt \
    6. --endpoints $ETCD_EP \
    7. snapshot save /var/lib/etcd/snapshot.db"

Deprecating a master host

Master hosts run important services, such as the OKD API and controllers services. To deprecate a master host, these services must be stopped.

The OKD API service is an active/active service, so stopping it does not affect the environment as long as requests are sent to a separate master server. However, the OKD controllers service is an active/passive service, and the controllers use etcd to elect the active master.

Deprecating a master host in a multi-master architecture includes removing the master from the load balancer pool so that no new connections attempt to use that master. This process depends heavily on the load balancer used. The steps below show the details of removing the master from haproxy. If OKD is running on a cloud provider or uses an F5 appliance, see the specific product documentation to remove the master from rotation.

Procedure
  1. Remove the master host from the backend section in the /etc/haproxy/haproxy.cfg configuration file. For example, if deprecating a master named master-0.example.com using haproxy, ensure that its host name is removed from the following:

    1. backend mgmt8443
    2. balance source
    3. mode tcp
    4. # MASTERS 8443
    5. server master-1.example.com 192.168.55.12:8443 check
    6. server master-2.example.com 192.168.55.13:8443 check
  2. Then, restart the haproxy service.

    1. $ sudo systemctl restart haproxy
  3. Once the master is removed from the load balancer, disable the API and controller services by moving their definition files out of the static pod directory, /etc/origin/node/pods:

    1. # mkdir -p /etc/origin/node/pods/disabled
    2. # mv /etc/origin/node/pods/controller.yaml /etc/origin/node/pods/disabled/
  4. Because the master host is a schedulable OKD node, follow the steps in the Deprecating a node host section.

  5. Remove the master host from the [masters] and [nodes] groups in the /etc/ansible/hosts Ansible inventory file to avoid issues if running any Ansible tasks using that inventory file.

    Deprecating the first master host listed in the Ansible inventory file requires extra precautions.

    The /etc/origin/master/ca.serial.txt file is generated on only the first master listed in the Ansible host inventory. If you deprecate the first master host, copy the /etc/origin/master/ca.serial.txt file to the rest of master hosts before the process.

    In OKD 3.11 clusters running multiple masters, one of the master nodes includes additional CA certificates in /etc/origin/master, /etc/etcd/ca, and /etc/etcd/generated_certs. These are required for application node and etcd node scale-up operations and must be restored on another master node if the CA host master is being deprecated.

  6. The kubernetes service includes the master host IPs as endpoints. To verify that the master has been properly deprecated, review the kubernetes service output and see if the deprecated master has been removed:

    1. $ oc describe svc kubernetes -n default
    2. Name: kubernetes
    3. Namespace: default
    4. Labels: component=apiserver
    5. provider=kubernetes
    6. Annotations: <none>
    7. Selector: <none>
    8. Type: ClusterIP
    9. IP: 10.111.0.1
    10. Port: https 443/TCP
    11. Endpoints: 192.168.55.12:8443,192.168.55.13:8443
    12. Port: dns 53/UDP
    13. Endpoints: 192.168.55.12:8053,192.168.55.13:8053
    14. Port: dns-tcp 53/TCP
    15. Endpoints: 192.168.55.12:8053,192.168.55.13:8053
    16. Session Affinity: ClientIP
    17. Events: <none>

    After the master has been successfully deprecated, the host where the master was previously running can be safely deleted.
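The same check can be scripted. A sketch that inspects the Endpoints value for a deprecated master's IP address; the helper function and the 192.168.55.11 address are hypothetical:

```shell
# Sketch: given the Endpoints value reported by `oc describe svc kubernetes`,
# report whether a deprecated master's IP address is still listed.
check_removed() {
  endpoints=$1; ip=$2
  case "$endpoints" in
    *"$ip"*) echo "still present" ;;
    *)       echo "removed" ;;
  esac
}

# The hypothetical deprecated master 192.168.55.11 against the output above:
check_removed "192.168.55.12:8443,192.168.55.13:8443" 192.168.55.11
# removed
```

On a live cluster you would feed the function the actual Endpoints line from the oc describe output.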

Removing an etcd host

If an etcd host fails beyond restoration, remove it from the cluster.

Steps to be performed on all master hosts

Procedure
  1. Remove the failed etcd member from the etcd cluster by running the following command, specifying a surviving etcd host and the failed member ID:

    1. # etcdctl -C https://<surviving host IP address>:2379 \
    2. --ca-file=/etc/etcd/ca.crt \
    3. --cert-file=/etc/etcd/peer.crt \
    4. --key-file=/etc/etcd/peer.key member remove <failed member ID>
  2. Restart the master API service on every master:

    1. # master-restart api
    2. # master-restart controllers

Steps to be performed in the current etcd cluster

Procedure
  1. Remove the failed host from the cluster:

    1. # etcdctl2 cluster-health
    2. member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
    3. member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
    4. failed to check the health of member 8372784203e11288 on https://192.168.55.21:2379: Get https://192.168.55.21:2379/health: dial tcp 192.168.55.21:2379: getsockopt: connection refused
    5. member 8372784203e11288 is unreachable: [https://192.168.55.21:2379] are all unreachable
    6. member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
    7. cluster is healthy
    8. # etcdctl2 member remove 8372784203e11288 (1)
    9. Removed member 8372784203e11288 from cluster
    10. # etcdctl2 cluster-health
    11. member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
    12. member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
    13. member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
    14. cluster is healthy
    (1) The remove command requires the etcd ID, not the hostname.
  2. To ensure the etcd configuration does not use the failed host when the etcd service is restarted, modify the /etc/etcd/etcd.conf file on all remaining etcd hosts and remove the failed host in the value for the ETCD_INITIAL_CLUSTER variable:

    1. # vi /etc/etcd/etcd.conf

    For example:

    1. ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380,master-2.example.com=https://192.168.55.13:2380

    becomes:

    1. ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380

    Restarting the etcd services is not required, because the failed host is removed using etcdctl.

  3. Modify the Ansible inventory file to reflect the current status of the cluster and to avoid issues when re-running a playbook:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. etcd
    5. ... [OUTPUT ABBREVIATED] ...
    6. [etcd]
    7. master-0.example.com
    8. master-1.example.com
  4. If you are using Flannel, modify the flanneld service configuration, located at /etc/sysconfig/flanneld, on every host and remove the failed etcd host from the FLANNEL_ETCD_ENDPOINTS variable:

    1. FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379
  5. Restart the flanneld service:

    1. # systemctl restart flanneld.service
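The ETCD_INITIAL_CLUSTER edit in step 2 can also be done non-interactively. A sketch using sed on the example value from above, where the failed member is the hypothetical master-2.example.com:

```shell
# Sketch: drop a failed member from an ETCD_INITIAL_CLUSTER value with sed.
line='ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380,master-2.example.com=https://192.168.55.13:2380'
new=$(printf '%s\n' "$line" | sed 's|,master-2\.example\.com=https://192\.168\.55\.13:2380||')

echo "$new"
# ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380

# Applied in place, on each remaining etcd host, it would look like:
# sudo sed -i 's|,master-2\.example\.com=https://192\.168\.55\.13:2380||' /etc/etcd/etcd.conf
```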

Restoring a master host backup

After creating a backup of important master host files, if they become corrupted or accidentally removed, you can restore them by copying the files back to the master, ensuring they contain the proper content, and restarting the affected services.

Procedure

  1. Restore the /etc/origin/master/master-config.yaml file:

    1. # MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. # cp /etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml.old
    3. # cp ${MYBACKUPDIR}/etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml
    4. # master-restart api
    5. # master-restart controllers

    Restarting the master services can lead to downtime. However, you can remove the master host from the highly available load balancer pool, then perform the restore operation. Once the service has been properly restored, you can add the master host back to the load balancer pool.

    Perform a full reboot of the affected instance to restore the iptables configuration.

  2. If you cannot restart OKD because packages are missing, reinstall the packages.

    1. Get the list of the current installed packages:

      1. $ rpm -qa | sort > /tmp/current_packages.txt
    2. View the differences between the package lists:

      1. $ diff /tmp/current_packages.txt ${MYBACKUPDIR}/packages.txt
      2. > ansible-2.4.0.0-5.el7.noarch
    3. Reinstall the missing packages:

      1. # yum reinstall -y <packages> (1)
      (1) Replace <packages> with the packages that are different between the package lists.
  3. Restore a system certificate by copying the certificate to the /etc/pki/ca-trust/source/anchors/ directory and running update-ca-trust:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo cp ${MYBACKUPDIR}/external_certificates/my_company.crt /etc/pki/ca-trust/source/anchors/
    3. $ sudo update-ca-trust

    Always ensure the user ID and group ID are restored when the files are copied back, as well as the SELinux context.
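The backup commands earlier use cp -a (archive mode) precisely so that ownership and timestamps survive the round trip; a plain cp does not preserve them. A small demonstration with a throwaway file, paths illustrative:

```shell
# Demonstration: cp -a preserves file metadata such as the modification time,
# which plain cp resets to the time of the copy.
tmp=$(mktemp -d)
echo data > "$tmp/original"
touch -t 202001010000 "$tmp/original"   # give the file an old timestamp

cp -a "$tmp/original" "$tmp/archive-copy"
cp "$tmp/original" "$tmp/plain-copy"

[ "$(stat -c %Y "$tmp/original")" = "$(stat -c %Y "$tmp/archive-copy")" ] \
  && echo "cp -a kept the timestamp"
```

For SELinux labels specifically, running restorecon -Rv on the restored directory (for example, /etc/origin) resets the contexts to the policy defaults.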

Node host tasks

Deprecating a node host

The procedure is the same whether deprecating an infrastructure node or an application node.

Prerequisites

Ensure enough capacity is available to migrate the existing pods from the node set to be removed. Removing an infrastructure node is advised only when at least two more infrastructure nodes remain online after it is removed.

Procedure

  1. List all available nodes to find the node to deprecate:

    1. $ oc get nodes
    2. NAME STATUS AGE VERSION
    3. ocp-infra-node-b7pl Ready 23h v1.6.1+5115d708d7
    4. ocp-infra-node-p5zj Ready 23h v1.6.1+5115d708d7
    5. ocp-infra-node-rghb Ready 23h v1.6.1+5115d708d7
    6. ocp-master-dgf8 Ready,SchedulingDisabled 23h v1.6.1+5115d708d7
    7. ocp-master-q1v2 Ready,SchedulingDisabled 23h v1.6.1+5115d708d7
    8. ocp-master-vq70 Ready,SchedulingDisabled 23h v1.6.1+5115d708d7
    9. ocp-node-020m Ready 23h v1.6.1+5115d708d7
    10. ocp-node-7t5p Ready 23h v1.6.1+5115d708d7
    11. ocp-node-n0dd Ready 23h v1.6.1+5115d708d7

    As an example, this topic deprecates the ocp-infra-node-b7pl infrastructure node.

  2. Describe the node and its running services:

    1. $ oc describe node ocp-infra-node-b7pl
    2. Name: ocp-infra-node-b7pl
    3. Role:
    4. Labels: beta.kubernetes.io/arch=amd64
    5. beta.kubernetes.io/instance-type=n1-standard-2
    6. beta.kubernetes.io/os=linux
    7. failure-domain.beta.kubernetes.io/region=europe-west3
    8. failure-domain.beta.kubernetes.io/zone=europe-west3-c
    9. kubernetes.io/hostname=ocp-infra-node-b7pl
    10. role=infra
    11. Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true
    12. Taints: <none>
    13. CreationTimestamp: Wed, 22 Nov 2017 09:36:36 -0500
    14. Phase:
    15. Conditions:
    16. ...
    17. Addresses: 10.156.0.11,ocp-infra-node-b7pl
    18. Capacity:
    19. cpu: 2
    20. memory: 7494480Ki
    21. pods: 20
    22. Allocatable:
    23. cpu: 2
    24. memory: 7392080Ki
    25. pods: 20
    26. System Info:
    27. Machine ID: bc95ccf67d047f2ae42c67862c202e44
    28. System UUID: 9762CC3D-E23C-AB13-B8C5-FA16F0BCCE4C
    29. Boot ID: ca8bf088-905d-4ec0-beec-8f89f4527ce4
    30. Kernel Version: 3.10.0-693.5.2.el7.x86_64
    31. OS Image: Employee SKU
    32. Operating System: linux
    33. Architecture: amd64
    34. Container Runtime Version: docker://1.12.6
    35. Kubelet Version: v1.6.1+5115d708d7
    36. Kube-Proxy Version: v1.6.1+5115d708d7
    37. ExternalID: 437740049672994824
    38. Non-terminated Pods: (2 in total)
    39. Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
    40. --------- ---- ------------ ---------- --------------- -------------
    41. default docker-registry-1-5szjs 100m (5%) 0 (0%) 256Mi (3%)0 (0%)
    42. default router-1-vzlzq 100m (5%) 0 (0%) 256Mi (3%)0 (0%)
    43. Allocated resources:
    44. (Total limits may be over 100 percent, i.e., overcommitted.)
    45. CPU Requests CPU Limits Memory Requests Memory Limits
    46. ------------ ---------- --------------- -------------
    47. 200m (10%) 0 (0%) 512Mi (7%) 0 (0%)
    48. Events: <none>

    The output above shows that the node is running two pods: router-1-vzlzq and docker-registry-1-5szjs. Two more infrastructure nodes are available to migrate these two pods.

    The cluster described above is a highly available cluster, which means both the router and docker-registry services are running on all infrastructure nodes.

  3. Mark a node as unschedulable and evacuate all of its pods:

    1. $ oc adm drain ocp-infra-node-b7pl --delete-local-data
    2. node "ocp-infra-node-b7pl" cordoned
    3. WARNING: Deleting pods with local storage: docker-registry-1-5szjs
    4. pod "docker-registry-1-5szjs" evicted
    5. pod "router-1-vzlzq" evicted
    6. node "ocp-infra-node-b7pl" drained

    If the pod has attached local storage (for example, EmptyDir), the --delete-local-data option must be provided. Generally, pods running in production should use the local storage only for temporary or cache files, but not for anything important or persistent. For regular storage, applications should use object storage or persistent volumes. In this case, the docker-registry pod’s local storage is empty, because the object storage is used instead to store the container images.

    The above operation deletes existing pods running on the node. Then, new pods are created according to the replication controller.

    In general, every application should be deployed with a deployment configuration, which creates pods using the replication controller.

    oc adm drain will not delete any bare pods (pods that are neither mirror pods nor managed by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet, or a job). To do so, the --force option is required. Be aware that the bare pods will not be recreated on other nodes and data may be lost during this operation.
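Because --force can lose data, it helps to decide on the flag explicitly rather than always passing it. The sketch below only assembles the command string and does not contact a cluster; the decision variable is an assumption for illustration:

```shell
# Sketch: assemble the drain command; nothing here talks to a cluster.
node="ocp-infra-node-b7pl"
drain_cmd="oc adm drain ${node} --delete-local-data"
# Add --force only if bare pods must also be evicted (their data may be lost).
evict_bare_pods=true
if [ "${evict_bare_pods}" = true ]; then
  drain_cmd="${drain_cmd} --force"
fi
echo "${drain_cmd}"
```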

    The example below shows the output of the replication controller of the registry:

    1. $ oc describe rc/docker-registry-1
    2. Name: docker-registry-1
    3. Namespace: default
    4. Selector: deployment=docker-registry-1,deploymentconfig=docker-registry,docker-registry=default
    5. Labels: docker-registry=default
    6. openshift.io/deployment-config.name=docker-registry
    7. Annotations: ...
    8. Replicas: 3 current / 3 desired
    9. Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
    10. Pod Template:
    11. Labels: deployment=docker-registry-1
    12. deploymentconfig=docker-registry
    13. docker-registry=default
    14. Annotations: openshift.io/deployment-config.latest-version=1
    15. openshift.io/deployment-config.name=docker-registry
    16. openshift.io/deployment.name=docker-registry-1
    17. Service Account: registry
    18. Containers:
    19. registry:
    20. Image: openshift3/ose-docker-registry:v3.6.173.0.49
    21. Port: 5000/TCP
    22. Requests:
    23. cpu: 100m
    24. memory: 256Mi
    25. Liveness: http-get https://:5000/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    26. Readiness: http-get https://:5000/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    27. Environment:
    28. REGISTRY_HTTP_ADDR: :5000
    29. REGISTRY_HTTP_NET: tcp
    30. REGISTRY_HTTP_SECRET: tyGEnDZmc8dQfioP3WkNd5z+Xbdfy/JVXf/NLo3s/zE=
    31. REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA: false
    32. REGISTRY_HTTP_TLS_KEY: /etc/secrets/registry.key
    33. OPENSHIFT_DEFAULT_REGISTRY: docker-registry.default.svc:5000
    34. REGISTRY_CONFIGURATION_PATH: /etc/registry/config.yml
    35. REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/registry.crt
    36. Mounts:
    37. /etc/registry from docker-config (rw)
    38. /etc/secrets from registry-certificates (rw)
    39. /registry from registry-storage (rw)
    40. Volumes:
    41. registry-storage:
    42. Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    43. Medium:
    44. registry-certificates:
    45. Type: Secret (a volume populated by a Secret)
    46. SecretName: registry-certificates
    47. Optional: false
    48. docker-config:
    49. Type: Secret (a volume populated by a Secret)
    50. SecretName: registry-config
    51. Optional: false
    52. Events:
    53. FirstSeen LastSeen Count From SubObjectPath Type Reason Message
    54. --------- -------- ----- ---- ------------- -------- ------ -------
    55. 49m 49m 1 replication-controller Normal SuccessfulCreate Created pod: docker-registry-1-dprp5

    The event at the bottom of the output displays information about new pod creation. So, when listing all pods:

    1. $ oc get pods
    2. NAME READY STATUS RESTARTS AGE
    3. docker-registry-1-dprp5 1/1 Running 0 52m
    4. docker-registry-1-kr8jq 1/1 Running 0 1d
    5. docker-registry-1-ncpl2 1/1 Running 0 1d
    6. registry-console-1-g4nqg 1/1 Running 0 1d
    7. router-1-2gshr 0/1 Pending 0 52m
    8. router-1-85qm4 1/1 Running 0 1d
    9. router-1-q5sr8 1/1 Running 0 1d
  4. The docker-registry-1-5szjs and router-1-vzlzq pods that were running on the now deprecated node are no longer available. Instead, two new pods have been created: docker-registry-1-dprp5 and router-1-2gshr. As shown above, the new router pod is router-1-2gshr, but it is in the Pending state. This is because each node can run only a single router pod, because the router binds to ports 80 and 443 on the host.

  5. When observing the newly created registry pod, the example below shows that the pod has been created on the ocp-infra-node-rghb node, which is different from the deprecated node:

    1. $ oc describe pod docker-registry-1-dprp5
    2. Name: docker-registry-1-dprp5
    3. Namespace: default
    4. Security Policy: hostnetwork
    5. Node: ocp-infra-node-rghb/10.156.0.10
    6. ...

    The only difference between deprecating an infrastructure node and an application node is that, once the infrastructure node is evacuated, if there is no plan to replace that node, the services running on infrastructure nodes can be scaled down:

    1. $ oc scale dc/router --replicas 2
    2. deploymentconfig "router" scaled
    3. $ oc scale dc/docker-registry --replicas 2
    4. deploymentconfig "docker-registry" scaled
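As a rule of thumb, the replica counts should match the number of infrastructure nodes that remain. A minimal sketch of that calculation; the role=infra label and the remaining node count are assumptions, and the oc commands are only printed here, not executed:

```shell
# Assumed: 2 infra nodes remain, e.g. from `oc get nodes -l role=infra --no-headers | wc -l`
remaining_infra_nodes=2
scale_router="oc scale dc/router --replicas ${remaining_infra_nodes}"
scale_registry="oc scale dc/docker-registry --replicas ${remaining_infra_nodes}"
echo "${scale_router}"
echo "${scale_registry}"
```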
  6. Now, every infrastructure node is running only one pod of each kind:

    1. $ oc get pods
    2. NAME READY STATUS RESTARTS AGE
    3. docker-registry-1-kr8jq 1/1 Running 0 1d
    4. docker-registry-1-ncpl2 1/1 Running 0 1d
    5. registry-console-1-g4nqg 1/1 Running 0 1d
    6. router-1-85qm4 1/1 Running 0 1d
    7. router-1-q5sr8 1/1 Running 0 1d
    8. $ oc describe po/docker-registry-1-kr8jq | grep Node:
    9. Node: ocp-infra-node-p5zj/10.156.0.9
    10. $ oc describe po/docker-registry-1-ncpl2 | grep Node:
    11. Node: ocp-infra-node-rghb/10.156.0.10

    To provide a full highly available cluster, at least three infrastructure nodes should always be available.

  7. To verify that scheduling is disabled on the node:

    1. $ oc get nodes
    2. NAME STATUS AGE VERSION
    3. ocp-infra-node-b7pl Ready,SchedulingDisabled 1d v1.6.1+5115d708d7
    4. ocp-infra-node-p5zj Ready 1d v1.6.1+5115d708d7
    5. ocp-infra-node-rghb Ready 1d v1.6.1+5115d708d7
    6. ocp-master-dgf8 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7
    7. ocp-master-q1v2 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7
    8. ocp-master-vq70 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7
    9. ocp-node-020m Ready 1d v1.6.1+5115d708d7
    10. ocp-node-7t5p Ready 1d v1.6.1+5115d708d7
    11. ocp-node-n0dd Ready 1d v1.6.1+5115d708d7

    And that the node does not contain any pods:

    1. $ oc describe node ocp-infra-node-b7pl
    2. Name: ocp-infra-node-b7pl
    3. Role:
    4. Labels: beta.kubernetes.io/arch=amd64
    5. beta.kubernetes.io/instance-type=n1-standard-2
    6. beta.kubernetes.io/os=linux
    7. failure-domain.beta.kubernetes.io/region=europe-west3
    8. failure-domain.beta.kubernetes.io/zone=europe-west3-c
    9. kubernetes.io/hostname=ocp-infra-node-b7pl
    10. role=infra
    11. Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true
    12. Taints: <none>
    13. CreationTimestamp: Wed, 22 Nov 2017 09:36:36 -0500
    14. Phase:
    15. Conditions:
    16. ...
    17. Addresses: 10.156.0.11,ocp-infra-node-b7pl
    18. Capacity:
    19. cpu: 2
    20. memory: 7494480Ki
    21. pods: 20
    22. Allocatable:
    23. cpu: 2
    24. memory: 7392080Ki
    25. pods: 20
    26. System Info:
    27. Machine ID: bc95ccf67d047f2ae42c67862c202e44
    28. System UUID: 9762CC3D-E23C-AB13-B8C5-FA16F0BCCE4C
    29. Boot ID: ca8bf088-905d-4ec0-beec-8f89f4527ce4
    30. Kernel Version: 3.10.0-693.5.2.el7.x86_64
    31. OS Image: Employee SKU
    32. Operating System: linux
    33. Architecture: amd64
    34. Container Runtime Version: docker://1.12.6
    35. Kubelet Version: v1.6.1+5115d708d7
    36. Kube-Proxy Version: v1.6.1+5115d708d7
    37. ExternalID: 437740049672994824
    38. Non-terminated Pods: (0 in total)
    39. Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
    40. --------- ---- ------------ ---------- --------------- -------------
    41. Allocated resources:
    42. (Total limits may be over 100 percent, i.e., overcommitted.)
    43. CPU Requests CPU Limits Memory Requests Memory Limits
    44. ------------ ---------- --------------- -------------
    45. 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    46. Events: <none>
  8. Remove the infrastructure instance from the backend section in the /etc/haproxy/haproxy.cfg configuration file:

    1. backend router80
    2. balance source
    3. mode tcp
    4. server infra-1.example.com 192.168.55.12:80 check
    5. server infra-2.example.com 192.168.55.13:80 check
    6. backend router443
    7. balance source
    8. mode tcp
    9. server infra-1.example.com 192.168.55.12:443 check
    10. server infra-2.example.com 192.168.55.13:443 check
  9. Then, restart the haproxy service.

    1. $ sudo systemctl restart haproxy
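Before restarting haproxy, it is worth confirming that the deprecated server entry is really gone from the configuration. The sketch below runs that check against a sample file in /tmp; the host name infra-3.example.com is hypothetical, standing in for the removed node:

```shell
# Sample config standing in for /etc/haproxy/haproxy.cfg after the edit.
cat > /tmp/haproxy-demo.cfg <<'EOF'
backend router80
    balance source
    mode tcp
    server infra-1.example.com 192.168.55.12:80 check
    server infra-2.example.com 192.168.55.13:80 check
EOF
# infra-3.example.com is the (hypothetical) deprecated node's entry.
if ! grep -q 'infra-3.example.com' /tmp/haproxy-demo.cfg; then
  check_result="deprecated server removed"
fi
echo "${check_result}"
rm -f /tmp/haproxy-demo.cfg
```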
  10. Remove the node from the cluster after all pods are evicted:

    1. $ oc delete node ocp-infra-node-b7pl
    2. node "ocp-infra-node-b7pl" deleted
    1. $ oc get nodes
    2. NAME STATUS AGE VERSION
    3. ocp-infra-node-p5zj Ready 1d v1.6.1+5115d708d7
    4. ocp-infra-node-rghb Ready 1d v1.6.1+5115d708d7
    5. ocp-master-dgf8 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7
    6. ocp-master-q1v2 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7
    7. ocp-master-vq70 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7
    8. ocp-node-020m Ready 1d v1.6.1+5115d708d7
    9. ocp-node-7t5p Ready 1d v1.6.1+5115d708d7
    10. ocp-node-n0dd Ready 1d v1.6.1+5115d708d7

For more information on evacuating and draining pods or nodes, see the Node maintenance section.

Replacing a node host

If you need to add a node in place of the deprecated node, follow the Adding hosts to an existing cluster section.

Creating a node host backup

Creating a backup of a node host is a different use case from backing up a master host. Because master hosts contain many important files, creating a backup is highly recommended. Nodes, however, typically do not contain data that is necessary to run an environment: anything special is replicated across the nodes in case of failover. If a backup of a node would contain something necessary to run your environment, then creating a backup is recommended.

The backup process is to be performed before any change to the infrastructure, such as a system update, upgrade, or any other significant modification. Backups should be performed on a regular basis to ensure the most recent data is available if a failure occurs.

OKD files

Node instances run applications in the form of pods, which are based on containers. The /etc/origin/ and /etc/origin/node directories house important files, such as:

  • The configuration of the node services

  • Certificates generated by the installation

  • Cloud provider-related configuration

  • Keys and other authentication files, such as the dnsmasq configuration

The OKD services can be customized to increase the log level, use proxies, and more, and the configuration files are stored in the /etc/sysconfig directory.

Procedure

  1. Create a backup of the node configuration files:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    3. $ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
    4. $ sudo cp -aR /etc/sysconfig/atomic-openshift-node ${MYBACKUPDIR}/etc/sysconfig/
  2. OKD uses specific files that must be taken into account when planning the backup policy, including:

    • /etc/cni/: Container Network Interface configuration (if used)

    • /etc/sysconfig/iptables: Where the iptables rules are stored

    • /etc/sysconfig/docker-storage-setup: The input file for the container-storage-setup command

    • /etc/sysconfig/docker: The docker configuration file

    • /etc/sysconfig/docker-network: docker networking configuration (for example, MTU)

    • /etc/sysconfig/docker-storage: docker storage configuration (generated by container-storage-setup)

    • /etc/dnsmasq.conf: Main configuration file for dnsmasq

    • /etc/dnsmasq.d/: Different dnsmasq configuration files

    • /etc/sysconfig/flanneld: flannel configuration file (if used)

    • /etc/pki/ca-trust/source/anchors/: Certificates added to the system (for example, for external registries)

    To back up those files:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    3. $ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
    4. $ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
    5. ${MYBACKUPDIR}/etc/sysconfig/
    6. $ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
    7. $ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
    8. ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
  3. If a package is accidentally removed, or a file included in an RPM package must be restored, a list of the RHEL packages installed on the system can be useful.

    Red Hat Satellite features, such as content views or the facts store, can provide a proper mechanism to reinstall missing packages and a historical record of the packages installed on the systems.

    To create a list of the current RHEL packages installed on the system:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo mkdir -p ${MYBACKUPDIR}
    3. $ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
  4. The following files should now be present in the backup directory:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
    3. etc/sysconfig/atomic-openshift-node
    4. etc/sysconfig/flanneld
    5. etc/sysconfig/iptables
    6. etc/sysconfig/docker-network
    7. etc/sysconfig/docker-storage
    8. etc/sysconfig/docker-storage-setup
    9. etc/sysconfig/docker-storage-setup.rpmnew
    10. etc/origin/node/system:node:app-node-0.example.com.crt
    11. etc/origin/node/system:node:app-node-0.example.com.key
    12. etc/origin/node/ca.crt
    13. etc/origin/node/system:node:app-node-0.example.com.kubeconfig
    14. etc/origin/node/server.crt
    15. etc/origin/node/server.key
    16. etc/origin/node/node-dnsmasq.conf
    17. etc/origin/node/resolv.conf
    18. etc/origin/node/node-config.yaml
    19. etc/origin/node/flannel.etcd-client.key
    20. etc/origin/node/flannel.etcd-client.csr
    21. etc/origin/node/flannel.etcd-client.crt
    22. etc/origin/node/flannel.etcd-ca.crt
    23. etc/origin/cloudprovider/openstack.conf
    24. etc/pki/ca-trust/source/anchors/openshift-ca.crt
    25. etc/pki/ca-trust/source/anchors/registry-ca.crt
    26. etc/dnsmasq.conf
    27. etc/dnsmasq.d/origin-dns.conf
    28. etc/dnsmasq.d/origin-upstream-dns.conf
    29. etc/dnsmasq.d/node-dnsmasq.conf
    30. packages.txt

    If needed, the files can be compressed to save space:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
    3. $ sudo rm -Rf ${MYBACKUPDIR}
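The same naming and compression scheme can be exercised end to end on a scratch directory to confirm that the archive round-trips correctly; this sketch only touches /tmp:

```shell
# Build a throwaway backup tree using the same naming scheme, archive it,
# and verify the file is listed inside the archive.
DEMODIR=/tmp/backup-demo/$(hostname)/$(date +%Y%m%d)
mkdir -p "${DEMODIR}/etc/sysconfig"
echo "sample" > "${DEMODIR}/etc/sysconfig/demo.conf"
tar -zcf /tmp/backup-demo/archive.tar.gz -C /tmp/backup-demo .
archive_hits=$(tar -tzf /tmp/backup-demo/archive.tar.gz | grep -c demo.conf)
echo "files matching demo.conf in archive: ${archive_hits}"
rm -rf /tmp/backup-demo
```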

If you need to create any of these backups from scratch, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs the previous steps. The script creates a directory on the host where it runs and copies all the files previously mentioned.

The openshift-ansible-contrib script is not supported by Red Hat, but the reference architecture team performs testing to ensure the code operates as defined and is secure.

The script can be executed on every master host with:

  1. $ mkdir ~/git
  2. $ cd ~/git
  3. $ git clone https://github.com/openshift/openshift-ansible-contrib.git
  4. $ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
  5. $ ./backup_master_node.sh -h

Restoring a node host backup

After creating a backup of important node host files, if they become corrupted or accidentally removed, you can restore a file by copying it back, verifying that it contains the proper content, and restarting the affected services.

Procedure

  1. Restore the /etc/origin/node/node-config.yaml file:

    1. # MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. # cp /etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml.old
    3. # cp /backup/$(hostname)/$(date +%Y%m%d)/etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml
    4. # reboot

Restarting the services can lead to downtime. See Node maintenance, for tips on how to ease the process.

Perform a full reboot of the affected instance to restore the iptables configuration.

  1. If you cannot restart OKD because packages are missing, reinstall the packages.

    1. Get the list of the current installed packages:

      1. $ rpm -qa | sort > /tmp/current_packages.txt
    2. View the differences between the package lists:

      1. $ diff /tmp/current_packages.txt ${MYBACKUPDIR}/packages.txt
      2. > ansible-2.4.0.0-5.el7.noarch
    3. Reinstall the missing packages:

      1. # yum reinstall -y <packages> (1)
      1Replace <packages> with the packages that are different between the package lists.
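The diff output can be turned into an exact reinstall list with comm, which prints the lines that appear only in the backup list. A runnable sketch with sample data; the package names here are illustrative:

```shell
# Sample sorted package lists; in practice these are the files produced by
# the `rpm -qa | sort` steps above.
printf 'bash\ncoreutils\n' > /tmp/current_packages.txt
printf 'ansible\nbash\ncoreutils\n' > /tmp/backup_packages.txt
# comm -13 prints lines unique to the second file: the missing packages.
missing=$(comm -13 /tmp/current_packages.txt /tmp/backup_packages.txt)
echo "packages to reinstall: ${missing}"
rm -f /tmp/current_packages.txt /tmp/backup_packages.txt
```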
  1. Restore a system certificate by copying the certificate to the /etc/pki/ca-trust/source/anchors/ directory and execute the update-ca-trust:

    1. $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    2. $ sudo cp ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/my_company.crt /etc/pki/ca-trust/source/anchors/
    3. $ sudo update-ca-trust

    Always ensure proper user ID and group ID are restored when the files are copied back, as well as the SELinux context.

Node maintenance and next steps

See the Managing nodes or Managing pods topics for various node management options. For example:

A node can reserve a portion of its resources to be used by specific components. These include the kubelet, kube-proxy, Docker, or other remaining system components such as sshd and NetworkManager. See the Allocating node resources section in the Cluster Administrator guide for more information.

etcd tasks

etcd backup

etcd is the key value store for all object definitions, as well as the persistent master state. Other components watch for changes, then bring themselves into the desired state.

OKD versions prior to 3.5 use etcd version 2 (v2), while 3.5 and later use version 3 (v3). The data model between the two versions of etcd is different. etcd v3 can use both the v2 and v3 data models, whereas etcd v2 can only use the v2 data model. In an etcd v3 server, the v2 and v3 data stores exist in parallel and are independent.

For both v2 and v3 operations, you can use the ETCDCTL_API environment variable to use the correct API:

  1. $ etcdctl -v
  2. etcdctl version: 3.2.5
  3. API version: 2
  4. $ ETCDCTL_API=3 etcdctl version
  5. etcdctl version: 3.2.5
  6. API version: 3.2

See Migrating etcd Data (v2 to v3) section in the OKD 3.7 documentation for information about how to migrate to v3.

In OKD version 3.10 and later, you can either install etcd on separate hosts or run it as a static pod on your master hosts. If you do not specify separate etcd hosts, etcd runs as a static pod on master hosts. Because of this difference, the backup process is different if you use static pods.

The etcd backup process is composed of two different procedures:

  • Configuration backup: Including the required etcd configuration and certificates

  • Data backup: Including both the v2 and v3 data models

You can perform the data backup process on any host that has connectivity to the etcd cluster, where the proper certificates are provided, and where the etcdctl tool is installed.

The backup files must be copied to an external system, ideally outside the OKD environment, and then encrypted.

Note that the etcd backup still has all the references to current storage volumes. When you restore etcd, OKD starts launching the previous pods on nodes and reattaching the same storage. This process is no different than the process of when you remove a node from the cluster and add a new one back in its place. Anything attached to that node is reattached to the pods on whatever nodes they are rescheduled to.

Backing up etcd

When you back up etcd, you must back up both the etcd configuration files and the etcd data.

Backing up etcd configuration files

The etcd configuration files to be preserved are all stored in the /etc/etcd directory of the instances where etcd is running. This includes the etcd configuration file (/etc/etcd/etcd.conf) and the required certificates for cluster communication. All those files are generated at installation time by the Ansible installer.

Procedure

For each etcd member of the cluster, back up the etcd configuration.

  1. $ ssh master-0
  2. # mkdir -p /backup/etcd-config-$(date +%Y%m%d)/
  3. # cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/
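To cover every member in one pass, the backup can be scripted as a loop. The host names below are assumptions, and the ssh commands are only printed here rather than executed:

```shell
# Hypothetical etcd member host names; substitute your own inventory.
members="master-0 master-1 master-2"
stamp=$(date +%Y%m%d)
for host in ${members}; do
  # Printed, not executed: the per-member backup command.
  echo "ssh ${host} 'mkdir -p /backup/etcd-config-${stamp}/ && cp -R /etc/etcd/ /backup/etcd-config-${stamp}/'"
done
member_count=$(echo ${members} | wc -w)
```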

The certificates and configuration files on each etcd cluster member are unique.

Backing up etcd data
Prerequisites

The OKD installer creates aliases, named etcdctl2 for etcd v2 tasks and etcdctl3 for etcd v3 tasks, to avoid typing all of the required flags.

However, the etcdctl3 alias does not provide the full endpoint list to the etcdctl command, so you must specify the --endpoints option and list all the endpoints.

Before backing up etcd:

  • etcdctl binaries must be available or, in containerized installations, the rhel7/etcd container must be available.

  • Ensure that the OKD API service is running.

  • Ensure connectivity with the etcd cluster (port 2379/tcp).

  • Ensure that the proper certificates to connect to the etcd cluster are available.

  • Ensure go is installed.

Procedure

While the etcdctl backup command is used to perform the backup, etcd v3 has no concept of a backup. Instead, you either take a snapshot from a live member with the etcdctl snapshot save command or copy the member/snap/db file from an etcd data directory.

The etcdctl backup command rewrites some of the metadata contained in the backup, specifically, the node ID and cluster ID, which means that in the backup, the node loses its former identity. To recreate a cluster from the backup, you create a new, single-node cluster, then add the rest of the nodes to the cluster. The metadata is rewritten to prevent the new node from joining an existing cluster.

Back up the etcd data:

Clusters upgraded from previous versions of OKD might contain v2 data stores. Back up all etcd data stores.

  1. Make a snapshot of the etcd node:

    1. # systemctl show etcd --property=ActiveState,SubState
    2. # mkdir -p /var/lib/etcd/backup/etcd-$(date +%Y%m%d) (1)
    3. # etcdctl3 snapshot save /var/lib/etcd/backup/etcd-$(date +%Y%m%d)/db
    1You must write the snapshot to a directory under /var/lib/etcd/.

    The etcdctl snapshot save command requires the etcd service to be running.

  2. Stop all etcd services by removing the etcd pod definition and rebooting the host:

    1. # mkdir -p /etc/origin/node/pods-stopped
    2. # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  3. Create the etcd data backup and copy the etcd db file:

    1. # etcdctl2 backup \
    2. --data-dir /var/lib/etcd \
    3. --backup-dir /backup/etcd-$(date +%Y%m%d)

    A /backup/etcd-<date>/ directory is created, where <date> represents the current date. Copy the backup to an external NFS share, S3 bucket, or other external storage location.

    In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.

    • If etcd runs as a static pod, run the following commands:

      If you use static pods, use the v3 API.

  1. Obtain the etcd endpoint IP address from the static pod manifest:

    1. $ export ETCD_POD_MANIFEST="/etc/origin/node/pods/etcd.yaml"
    2. $ export ETCD_EP=$(grep https ${ETCD_POD_MANIFEST} | cut -d '/' -f3)
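The grep/cut pipeline simply keeps the host:port portion after the second slash of the first https URL in the manifest. A runnable illustration against a sample line; the manifest content below is an assumption:

```shell
# A line shaped like the etcd static pod manifest's client URL flag.
echo '--advertise-client-urls=https://192.168.55.8:2379' > /tmp/etcd-manifest-line
# Field 3 when splitting on '/' is everything after "https://".
ETCD_EP_DEMO=$(grep https /tmp/etcd-manifest-line | cut -d '/' -f3)
echo "${ETCD_EP_DEMO}"
rm -f /tmp/etcd-manifest-line
```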
  2. Obtain the etcd pod name:

    1. $ oc login -u system:admin
    2. $ export ETCD_POD=$(oc get pods -n kube-system | grep -o -m 1 '\S*etcd\S*')
  3. Take a snapshot of the etcd data in the pod and store it locally:

    1. $ oc project kube-system
    2. $ oc exec ${ETCD_POD} -c etcd -- /bin/bash -c "ETCDCTL_API=3 etcdctl \
    3. --cert /etc/etcd/peer.crt \
    4. --key /etc/etcd/peer.key \
    5. --cacert /etc/etcd/ca.crt \
    6. --endpoints $ETCD_EP \
    7. snapshot save /var/lib/etcd/snapshot.db"

Restoring etcd

The restore procedure for etcd configuration files replaces the appropriate files, then restarts the service or static pod.

If an etcd host has become corrupted and the /etc/etcd/etcd.conf file is lost, restore it using:

  1. $ ssh master-0
  2. # cp /backup/yesterday/master-0-files/etcd.conf /etc/etcd/etcd.conf
  3. # restorecon -RvF /etc/etcd/etcd.conf

In this example, the backup file is stored in the /backup/yesterday/master-0-files/etcd.conf path, which can be an external NFS share, S3 bucket, or other storage solution.

If you run etcd as a static pod, follow only the steps in that section. If you run etcd as a separate service on either master or standalone nodes, follow the steps to restore data as required.

Restoring etcd data

The following process restores healthy data files and starts the etcd cluster as a single node, then adds the rest of the nodes if an etcd cluster is required.

Procedure
  1. Stop all etcd services by removing the etcd pod definition and rebooting the host:

    1. # mkdir -p /etc/origin/node/pods-stopped
    2. # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
    3. # reboot
  2. To ensure the proper backup is restored, delete the etcd directories:

    • To back up the current etcd data before you delete the directory, run the following command:

      1. # mv /var/lib/etcd /var/lib/etcd.old
      2. # mkdir /var/lib/etcd
      3. # restorecon -RvF /var/lib/etcd/
    • Or, to delete the directory and all etcd data, run the following command:

      1. # rm -Rf /var/lib/etcd/*

      In an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.

  1. Restore a healthy backup data file to each of the etcd nodes. Perform this step on all etcd hosts, including master hosts colocated with etcd.

    1. # cp -R /backup/etcd-xxx/* /var/lib/etcd/
    2. # mv /var/lib/etcd/db /var/lib/etcd/member/snap/db
    3. # chcon -R --reference /backup/etcd-xxx/* /var/lib/etcd/
  2. Run the etcd service on one of your etcd hosts, forcing a new cluster.

    This creates a custom file for the etcd service, which overrides the execution command, adding the --force-new-cluster option:

    1. # mkdir -p /etc/systemd/system/etcd.service.d/
    2. # echo "[Service]" > /etc/systemd/system/etcd.service.d/temp.conf
    3. # echo "ExecStart=" >> /etc/systemd/system/etcd.service.d/temp.conf
    4. # sed -n '/ExecStart/s/"$/ --force-new-cluster"/p' \
    5. /usr/lib/systemd/system/etcd.service \
    6. >> /etc/systemd/system/etcd.service.d/temp.conf
    7. # systemctl daemon-reload
    8. # master-restart etcd
  3. Check for error messages:

    1. # master-logs etcd etcd
  4. Check for health status:

    1. # etcdctl3 cluster-health
    2. member 5ee217d17301 is healthy: got healthy result from https://192.168.55.8:2379
    3. cluster is healthy
  5. Restart the etcd service in cluster mode:

    1. # rm -f /etc/systemd/system/etcd.service.d/temp.conf
    2. # systemctl daemon-reload
    3. # master-restart etcd
  6. Check for health status and member list:

    1. # etcdctl3 cluster-health
    2. member 5ee217d17301 is healthy: got healthy result from https://192.168.55.8:2379
    3. cluster is healthy
    4. # etcdctl3 member list
    5. 5ee217d17301: name=master-0.example.com peerURLs=http://localhost:2380 clientURLs=https://192.168.55.8:2379 isLeader=true
  7. After the first instance is running, you can add the remaining peers back into the cluster.

Fix the peerURLs parameter

After restoring the data and creating a new cluster, the peerURLs parameter shows localhost instead of the IP where etcd is listening for peer communication:

  1. # etcdctl3 member list
  2. 5ee217d17301: name=master-0.example.com peerURLs=http://localhost:2380 clientURLs=https://192.168.55.8:2379 isLeader=true
Procedure
  1. Get the member ID using etcdctl member list:

    1. # etcdctl3 member list
  2. Get the IP where etcd listens for peer communication:

    1. $ ss -l4n | grep 2380
  3. Update the member information with that IP:

    1. # etcdctl3 member update 5ee217d17301 https://192.168.55.8:2380
    2. Updated member with ID 5ee217d17301 in cluster
  4. To verify, check that the IP is in the member list:

    1. $ etcdctl3 member list
    2. 5ee217d17301: name=master-0.example.com peerURLs=https://192.168.55.8:2380 clientURLs=https://192.168.55.8:2379 isLeader=true

Restoring etcd snapshot

Snapshot integrity can optionally be verified at restore time. If the snapshot is taken with etcdctl snapshot save, it has an integrity hash that etcdctl snapshot restore checks. If the snapshot is copied from the data directory, there is no integrity hash, and it can be restored only by using --skip-hash-check.

The procedure to restore the data must be performed on a single etcd host. You can then add the rest of the nodes to the cluster.

Procedure
  1. Stop all etcd services by removing the etcd pod definition and rebooting the host:

    1. # mkdir -p /etc/origin/node/pods-stopped
    2. # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
    3. # reboot
  2. Clear all old data, because etcdctl recreates it on the node where the restore procedure is performed:

    1. # rm -Rf /var/lib/etcd
  3. Run the snapshot restore command, substituting the values from the /etc/etcd/etcd.conf file:

    1. # etcdctl3 snapshot restore /backup/etcd-xxxxxx/backup.db \
    2. --data-dir /var/lib/etcd \
    3. --name master-0.example.com \
    4. --initial-cluster "master-0.example.com=https://192.168.55.8:2380" \
    5. --initial-cluster-token "etcd-cluster-1" \
    6. --initial-advertise-peer-urls https://192.168.55.8:2380 \
    7. --skip-hash-check=true
    8. 2017-10-03 08:55:32.440779 I | mvcc: restore compact to 1041269
    9. 2017-10-03 08:55:32.468244 I | etcdserver/membership: added member 40bef1f6c79b3163 [https://192.168.55.8:2380] to cluster 26841ebcf610583c
  4. Restore the permissions and SELinux context of the restored files:

    1. # restorecon -RvF /var/lib/etcd
  5. Start the etcd service:

    1. # systemctl start etcd
  6. Check for any error messages:

    1. # master-logs etcd etcd
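Step 3 says to substitute values from /etc/etcd/etcd.conf; pulling them out can be scripted. A sketch, with a temp file of sample values standing in for the real conf (`confval` is a helper defined here, not an etcd tool):

```shell
# Sample /etc/etcd/etcd.conf fragment (hypothetical values).
conf=$(mktemp)
cat > "$conf" <<'EOF'
ETCD_NAME=master-0.example.com
ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.55.8:2380
EOF

# Read the value of a KEY=value line; -f2- keeps values that contain "=".
confval() { grep "^$1=" "$conf" | cut -d= -f2-; }

name=$(confval ETCD_NAME)
peers=$(confval ETCD_INITIAL_ADVERTISE_PEER_URLS)
rm -f "$conf"
echo "etcdctl3 snapshot restore backup.db --name $name --initial-advertise-peer-urls $peers"
```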

Restoring etcd on a static pod

Before restoring etcd on a static pod:

  • etcdctl binaries must be available or, in containerized installations, the rhel7/etcd container must be available.

    You can install the etcdctl binary with the etcd package by running the following commands:

    1. # yum install etcd

    The package also installs the systemd service unit. Disable and mask the service so that it does not run as a systemd service when etcd runs in a static pod. Disabling and masking the service ensures that you do not accidentally start it and prevents it from automatically restarting when you reboot the system.

    1. # systemctl disable etcd.service
    2. # systemctl mask etcd.service

To restore etcd on a static pod:

  1. If the pod is running, stop the etcd pod by moving the pod manifest YAML file to another directory:

    1. # mkdir -p /etc/origin/node/pods-stopped
    2. # mv /etc/origin/node/pods/etcd.yaml /etc/origin/node/pods-stopped
  2. Clear all old data:

    1. # rm -rf /var/lib/etcd

    You use etcdctl to recreate the data on the node where you restore the pod.

  3. Restore the etcd snapshot to the mount path for the etcd pod:

    1. # export ETCDCTL_API=3
    2. # etcdctl snapshot restore /etc/etcd/backup/etcd/snapshot.db \
    3. --data-dir /var/lib/etcd/ \
    4. --name ip-172-18-3-48.ec2.internal \
    5. --initial-cluster "ip-172-18-3-48.ec2.internal=https://172.18.3.48:2380" \
    6. --initial-cluster-token "etcd-cluster-1" \
    7. --initial-advertise-peer-urls https://172.18.3.48:2380 \
    8. --skip-hash-check=true

    Obtain the values for your cluster from the /backup_files/etcd.conf file.

  4. Set the required permissions and SELinux context on the data directory:

    1. # restorecon -RvF /var/lib/etcd/
  5. Restart the etcd pod by moving the pod manifest YAML file to the required directory:

    1. # mv /etc/origin/node/pods-stopped/etcd.yaml /etc/origin/node/pods/

Replacing an etcd host

To replace an etcd host, scale up the etcd cluster and then remove the host. This process ensures that you keep quorum if you lose an etcd host during the replacement procedure.

The etcd cluster must maintain quorum during the replacement operation, which means that a majority of the members must be in operation at all times.

If the host replacement operation occurs while the etcd cluster maintains a quorum, cluster operations are usually not affected. If a large amount of etcd data must replicate, some operations might slow down.

Before you start any procedure involving the etcd cluster, you must have a backup of the etcd data and configuration files so that you can restore the cluster if the procedure fails.

Scaling etcd

You can scale the etcd cluster vertically by adding more resources to the etcd hosts or horizontally by adding more etcd hosts.

Due to the voting system etcd uses, the cluster must always contain an odd number of members.

Running an odd number of etcd hosts accounts for fault tolerance. An odd number of hosts does not change the number needed for quorum but increases the tolerance for failure. For example, with a cluster of three members, quorum is two, which leaves a failure tolerance of one: the cluster continues to operate as long as two of the three members remain healthy.
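The arithmetic above can be sketched directly; quorum for n members is floor(n/2) + 1:

```shell
# Quorum (majority) for an n-member etcd cluster, and how many
# member failures the cluster can tolerate while keeping that quorum.
quorum()          { echo $(( $1 / 2 + 1 )); }
fault_tolerance() { echo $(( $1 - ($1 / 2 + 1) )); }

q3=$(quorum 3)           # 3 members: quorum is 2
f3=$(fault_tolerance 3)  # ...and the cluster tolerates 1 failure
f5=$(fault_tolerance 5)  # 5 members tolerate 2 failures
```

Note that going from 3 to 4 members raises the quorum to 3 without raising the failure tolerance, which is why even-sized clusters are avoided.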

A production cluster of three etcd hosts is recommended.

The new host requires a fresh, dedicated Red Hat Enterprise Linux 7 system. To achieve maximum performance, locate the etcd storage on an SSD and on a dedicated disk mounted at /var/lib/etcd.

Prerequisites

  1. Before you add a new etcd host, perform a backup of both etcd configuration and data to prevent data loss.

  2. Check the current etcd cluster status to avoid adding new hosts to an unhealthy cluster. Run this command:

    1. # ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \
    2. --key=/etc/etcd/peer.key \
    3. --cacert="/etc/etcd/ca.crt" \
    4. --endpoints="https://*master-0.example.com*:2379,\
    5. https://*master-1.example.com*:2379,\
    6. https://*master-2.example.com*:2379"
    7. endpoint health
    8. https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
    9. https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
    10. https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
  3. Before running the scaleup playbook, ensure the new host is registered to the proper Red Hat software channels:

    1. # subscription-manager register \
    2. --username=*<username>* --password=*<password>*
    3. # subscription-manager attach --pool=*<poolid>*
    4. # subscription-manager repos --disable="*"
    5. # subscription-manager repos \
    6. --enable=rhel-7-server-rpms \
    7. --enable=rhel-7-server-extras-rpms

    etcd is hosted in the rhel-7-server-extras-rpms software channel.

  4. Make sure all unused etcd members are removed from the etcd cluster. This must be completed before running the scaleup playbook.

    1. List the etcd members:

      1. # etcdctl --cert="/etc/etcd/peer.crt" --key="/etc/etcd/peer.key" \
      2. --cacert="/etc/etcd/ca.crt" --endpoints=ETCD_LISTEN_CLIENT_URLS member list -w table

      Copy the unused etcd member ID, if applicable.

    2. Remove the unused member by specifying its ID in the following command:

      1. # etcdctl --cert="/etc/etcd/peer.crt" --key="/etc/etcd/peer.key" \
      2. --cacert="/etc/etcd/ca.crt" --endpoints=ETCD_LISTEN_CLIENT_URLS member remove UNUSED_ETCD_MEMBER_ID
  5. Upgrade etcd and iptables on the current etcd nodes:

    1. # yum update etcd iptables-services
  6. Back up the /etc/etcd configuration for the etcd hosts.

  7. If the new etcd members will also be OKD nodes, add the desired number of hosts to the cluster.

  8. The rest of this procedure assumes you added one host, but if you add multiple hosts, perform all steps on each host.
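The unused-member check can be scripted against the table output of `member list`. A sketch over canned `-w table` rows (the IDs and hostnames are sample data):

```shell
# Sample data rows from `etcdctl member list -w table` (hypothetical IDs).
table='| 5ee217d17301     | started   | master-0.example.com | https://192.168.55.8:2380  |
| 8372784203e11288 | unstarted |                      | https://192.168.55.21:2380 |'

# Pick out the member IDs whose STATUS column (field 3) is "unstarted";
# field 2 is the ID column, with padding spaces stripped.
unused=$(echo "$table" | awk -F'|' '$3 ~ /unstarted/ {gsub(/ /,"",$2); print $2}')
echo "$unused"
```

Each ID printed is a candidate for `etcdctl member remove` before running the scaleup playbook.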

Adding a new etcd host using Ansible

Procedure
  1. In the Ansible inventory file, create a new group named [new_etcd] and add the new host. Then, add the new_etcd group as a child of the [OSEv3] group:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. etcd
    5. new_etcd (1)
    6. ... [OUTPUT ABBREVIATED] ...
    7. [etcd]
    8. master-0.example.com
    9. master-1.example.com
    10. master-2.example.com
    11. [new_etcd] (1)
    12. etcd0.example.com (1)
    1Add these lines.
  2. From the host that installed OKD and hosts the Ansible inventory file, change to the playbook directory and run the etcd scaleup playbook:

    1. $ cd /usr/share/ansible/openshift-ansible
    2. $ ansible-playbook playbooks/openshift-etcd/scaleup.yml
  3. After the playbook runs, modify the inventory file to reflect the current status by moving the new etcd host from the [new_etcd] group to the [etcd] group:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. etcd
    5. new_etcd
    6. ... [OUTPUT ABBREVIATED] ...
    7. [etcd]
    8. master-0.example.com
    9. master-1.example.com
    10. master-2.example.com
    11. etcd0.example.com
  4. If you use Flannel, modify the flanneld service configuration on every OKD host, located at /etc/sysconfig/flanneld, to include the new etcd host:

    1. FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
  5. Restart the flanneld service:

    1. # systemctl restart flanneld.service
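The endpoint edit in step 4 can be sketched with sed. A temp file stands in for /etc/sysconfig/flanneld so the example is self-contained; the hostnames match the examples above:

```shell
# A temp file stands in for /etc/sysconfig/flanneld in this sketch.
f=$(mktemp)
echo 'FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379' > "$f"

# Append the new etcd host's client URL to the existing endpoint list.
new_ep="https://etcd0.example.com:2379"
sed -i "s|^FLANNEL_ETCD_ENDPOINTS=.*|&,${new_ep}|" "$f"

result=$(grep '^FLANNEL_ETCD_ENDPOINTS=' "$f")
rm -f "$f"
echo "$result"
```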

Manually adding a new etcd host

If you do not run etcd as static pods on master nodes, you might need to add another etcd host.

Procedure
Modify the current etcd cluster

To create the etcd certificates, run the openssl command, replacing the values with those from your environment.

  1. Create some environment variables:

    1. export NEW_ETCD_HOSTNAME="*etcd0.example.com*"
    2. export NEW_ETCD_IP="192.168.55.21"
    3. export CN=$NEW_ETCD_HOSTNAME
    4. export SAN="IP:${NEW_ETCD_IP}, DNS:${NEW_ETCD_HOSTNAME}"
    5. export PREFIX="/etc/etcd/generated_certs/etcd-$CN/"
    6. export OPENSSLCFG="/etc/etcd/ca/openssl.cnf"

    The custom openssl extensions named etcd_v3_ca* include the $SAN environment variable as the subjectAltName. See /etc/etcd/ca/openssl.cnf for more information.

  2. Create the directory to store the configuration and certificates:

    1. # mkdir -p ${PREFIX}
  3. Create the server certificate request and sign it: (server.csr and server.crt)

    1. # openssl req -new -config ${OPENSSLCFG} \
    2. -keyout ${PREFIX}server.key \
    3. -out ${PREFIX}server.csr \
    4. -reqexts etcd_v3_req -batch -nodes \
    5. -subj /CN=$CN
    6. # openssl ca -name etcd_ca -config ${OPENSSLCFG} \
    7. -out ${PREFIX}server.crt \
    8. -in ${PREFIX}server.csr \
    9. -extensions etcd_v3_ca_server -batch
  4. Create the peer certificate request and sign it: (peer.csr and peer.crt)

    1. # openssl req -new -config ${OPENSSLCFG} \
    2. -keyout ${PREFIX}peer.key \
    3. -out ${PREFIX}peer.csr \
    4. -reqexts etcd_v3_req -batch -nodes \
    5. -subj /CN=$CN
    6. # openssl ca -name etcd_ca -config ${OPENSSLCFG} \
    7. -out ${PREFIX}peer.crt \
    8. -in ${PREFIX}peer.csr \
    9. -extensions etcd_v3_ca_peer -batch
  5. Copy the current etcd configuration and ca.crt files from the current node as examples to modify later:

    1. # cp /etc/etcd/etcd.conf ${PREFIX}
    2. # cp /etc/etcd/ca.crt ${PREFIX}
  6. While still on the surviving etcd host, add the new host to the cluster. To add additional etcd members to the cluster, you must first adjust the default localhost peer in the **peerURLs** value for the first member:

    1. Get the member ID for the first member using the member list command:

      1. # etcdctl --cert-file=/etc/etcd/peer.crt \
      2. --key-file=/etc/etcd/peer.key \
      3. --ca-file=/etc/etcd/ca.crt \
      4. --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" \ (1)
      5. member list
      1Ensure that you specify the URLs of only active etcd members in the --peers parameter value.
    2. Obtain the IP address where etcd listens for cluster peers:

      1. $ ss -l4n | grep 2380
    3. Update the value of **peerURLs** using the etcdctl member update command by passing the member ID and IP address obtained from the previous steps:

      1. # etcdctl --cert-file=/etc/etcd/peer.crt \
      2. --key-file=/etc/etcd/peer.key \
      3. --ca-file=/etc/etcd/ca.crt \
      4. --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" \
      5. member update 511b7fb6cc0001 https://172.18.1.18:2380
    4. Re-run the member list command and ensure the peer URLs no longer include localhost.

  1. Add the new host to the etcd cluster. Note that the new host is not yet configured, so its status stays as unstarted until you configure the new host.

    You must add each member and bring it online one at a time. When you add each additional member to the cluster, you must adjust the peerURLs list for the current peers. The peerURLs list grows by one for each member added. The etcdctl member add command outputs the values that you must set in the etcd.conf file as you add each member, as described in the following instructions.

    1. # etcdctl -C https://${CURRENT_ETCD_HOST}:2379 \
    2. --ca-file=/etc/etcd/ca.crt \
    3. --cert-file=/etc/etcd/peer.crt \
    4. --key-file=/etc/etcd/peer.key member add ${NEW_ETCD_HOSTNAME} https://${NEW_ETCD_IP}:2380 (1)
    5. Added member named 10.3.9.222 with ID 4e1db163a21d7651 to cluster
    6. ETCD_NAME="<NEW_ETCD_HOSTNAME>"
    7. ETCD_INITIAL_CLUSTER="<NEW_ETCD_HOSTNAME>=https://<NEW_HOST_IP>:2380,<CLUSTERMEMBER1_NAME>=https://<CLUSTERMEMBER1_IP>:2380,<CLUSTERMEMBER2_NAME>=https://<CLUSTERMEMBER2_IP>:2380,<CLUSTERMEMBER3_NAME>=https://<CLUSTERMEMBER3_IP>:2380"
    8. ETCD_INITIAL_CLUSTER_STATE="existing"
    1In this line, 10.3.9.222 is a label for the etcd member. You can specify the host name, IP address, or a simple name.
  2. Update the sample ${PREFIX}/etcd.conf file.

    1. Replace the following values with the values generated in the previous step:

      • ETCD_NAME

      • ETCD_INITIAL_CLUSTER

      • ETCD_INITIAL_CLUSTER_STATE

    2. Modify the following variables with the new host IP from the output of the previous step. You can use ${NEW_ETCD_IP} as the value:

      • ETCD_LISTEN_PEER_URLS

      • ETCD_LISTEN_CLIENT_URLS

      • ETCD_INITIAL_ADVERTISE_PEER_URLS

      • ETCD_ADVERTISE_CLIENT_URLS

    3. If you previously used the member system as an etcd node, you must overwrite the current values in the /etc/etcd/etcd.conf file.

    4. Check the file for syntax errors or missing IP addresses, otherwise the etcd service might fail:

      1. # vi ${PREFIX}/etcd.conf
  1. On the node that hosts the installation files, update the [etcd] hosts group in the /etc/ansible/hosts inventory file. Remove the old etcd hosts and add the new ones.

  2. Create a tgz file that contains the certificates, the sample configuration file, and the ca and copy it to the new host:

    1. # tar -czvf /etc/etcd/generated_certs/${CN}.tgz -C ${PREFIX} .
    2. # scp /etc/etcd/generated_certs/${CN}.tgz ${CN}:/tmp/
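Two of the edits above lend themselves to scripting: capturing the ETCD_* lines that `etcdctl member add` prints, and pointing the URL variables in the copied etcd.conf at the new host. A hedged sketch, using canned command output and a temp file in place of the real ${PREFIX}/etcd.conf (IDs and IPs are sample data):

```shell
# Sample `etcdctl member add` output (hypothetical ID and IPs).
add_output='Added member named etcd0.example.com with ID 4e1db163a21d7651 to cluster
ETCD_NAME="etcd0.example.com"
ETCD_INITIAL_CLUSTER="etcd0.example.com=https://192.168.55.21:2380,master-0.example.com=https://192.168.55.8:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"'

# Keep only the ETCD_* assignments; these replace the matching lines in etcd.conf.
conf_lines=$(echo "$add_output" | grep '^ETCD_')

# Point the *URLS variables at the new host IP, keeping scheme and port.
# A temp file with sample values stands in for ${PREFIX}/etcd.conf.
NEW_ETCD_IP="192.168.55.21"
conf=$(mktemp)
cat > "$conf" <<'EOF'
ETCD_LISTEN_PEER_URLS=https://192.168.55.8:2380
ETCD_LISTEN_CLIENT_URLS=https://192.168.55.8:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.55.8:2380
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.55.8:2379
EOF
sed -i -E "s#^(ETCD_[A-Z_]+URLS=https://)[^:]+#\1${NEW_ETCD_IP}#" "$conf"

check=$(grep '^ETCD_LISTEN_PEER_URLS=' "$conf")
rm -f "$conf"
```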
Modify the new etcd host
  1. Install iptables-services to provide iptables utilities to open the required ports for etcd:

    1. # yum install -y iptables-services
  2. Create the OS_FIREWALL_ALLOW firewall rules to allow etcd to communicate:

    • Port 2379/tcp for clients

    • Port 2380/tcp for peer communication

      1. # systemctl enable iptables.service --now
      2. # iptables -N OS_FIREWALL_ALLOW
      3. # iptables -t filter -I INPUT -j OS_FIREWALL_ALLOW
      4. # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
      5. # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
      6. # iptables-save | tee /etc/sysconfig/iptables

      In this example, a new chain OS_FIREWALL_ALLOW is created, which is the standard naming the OKD installer uses for firewall rules.

      If the environment is hosted in an IaaS environment, modify the security groups for the instance to allow incoming traffic to those ports as well.

  1. Install etcd:

    1. # yum install -y etcd

    Ensure that version etcd-2.3.7-4.el7.x86_64 or greater is installed.

  2. Ensure the etcd service is not running by removing the etcd pod definition:

    1. # mkdir -p /etc/origin/node/pods-stopped
    2. # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  3. Remove any etcd configuration and data:

    1. # rm -Rf /etc/etcd/*
    2. # rm -Rf /var/lib/etcd/*
  4. Extract the certificates and configuration files:

    1. # tar xzvf /tmp/etcd0.example.com.tgz -C /etc/etcd/
  5. Start etcd on the new host:

    1. # systemctl enable etcd --now
  6. Verify that the host is part of the cluster and the current cluster health:

    • If you use the etcd v2 API, run the following command:

      1. # etcdctl --cert-file=/etc/etcd/peer.crt \
      2. --key-file=/etc/etcd/peer.key \
      3. --ca-file=/etc/etcd/ca.crt \
      4. --peers="https://*master-0.example.com*:2379,\
      5. https://*master-1.example.com*:2379,\
      6. https://*master-2.example.com*:2379,\
      7. https://*etcd0.example.com*:2379"\
      8. cluster-health
      9. member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
      10. member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
      11. member 8b8904727bf526a5 is healthy: got healthy result from https://192.168.55.21:2379
      12. member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
      13. cluster is healthy
    • If you use the etcd v3 API, run the following command:

      1. # ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \
      2. --key=/etc/etcd/peer.key \
      3. --cacert="/etc/etcd/ca.crt" \
      4. --endpoints="https://*master-0.example.com*:2379,\
      5. https://*master-1.example.com*:2379,\
      6. https://*master-2.example.com*:2379,\
      7. https://*etcd0.example.com*:2379"\
      8. endpoint health
      9. https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
      10. https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
      11. https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
      12. https://etcd0.example.com:2379 is healthy: successfully committed proposal: took = 1.498829ms
Modify each OKD master
  1. Modify the master configuration in the etcdClientInfo section of the /etc/origin/master/master-config.yaml file on every master. Add the new etcd host to the list of etcd servers that OKD uses to store the data, and remove any failed etcd hosts:

    1. etcdClientInfo:
    2. ca: master.etcd-ca.crt
    3. certFile: master.etcd-client.crt
    4. keyFile: master.etcd-client.key
    5. urls:
    6. - https://master-0.example.com:2379
    7. - https://master-1.example.com:2379
    8. - https://master-2.example.com:2379
    9. - https://etcd0.example.com:2379
  2. Restart the master API service:

    • On every master:

      1. # master-restart api
      2. # master-restart controllers

      The number of etcd nodes must be odd, so you must add at least two hosts.

  1. If you use Flannel, modify the flanneld service configuration located at /etc/sysconfig/flanneld on every OKD host to include the new etcd host:

    1. FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
  2. Restart the flanneld service:

    1. # systemctl restart flanneld.service

Removing an etcd host

If an etcd host fails beyond restoration, remove it from the cluster.

Steps to be performed on all master hosts

Procedure

  1. Remove the failed etcd host from the etcd cluster. Run the following command, using a surviving etcd host as the endpoint:

    1. # etcdctl -C https://<surviving host IP address>:2379 \
    2. --ca-file=/etc/etcd/ca.crt \
    3. --cert-file=/etc/etcd/peer.crt \
    4. --key-file=/etc/etcd/peer.key member remove <failed member ID>
  2. Restart the master API service on every master:

    1. # master-restart api
    2. # master-restart controllers

Steps to be performed in the current etcd cluster

Procedure

  1. Remove the failed host from the cluster:

    1. # etcdctl2 cluster-health
    2. member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
    3. member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
    4. failed to check the health of member 8372784203e11288 on https://192.168.55.21:2379: Get https://192.168.55.21:2379/health: dial tcp 192.168.55.21:2379: getsockopt: connection refused
    5. member 8372784203e11288 is unreachable: [https://192.168.55.21:2379] are all unreachable
    6. member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
    7. cluster is healthy
    8. # etcdctl2 member remove 8372784203e11288 (1)
    9. Removed member 8372784203e11288 from cluster
    10. # etcdctl2 cluster-health
    11. member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
    12. member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
    13. member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
    14. cluster is healthy
    1The remove command requires the etcd ID, not the hostname.
  2. To ensure the etcd configuration does not use the failed host when the etcd service is restarted, modify the /etc/etcd/etcd.conf file on all remaining etcd hosts and remove the failed host in the value for the ETCD_INITIAL_CLUSTER variable:

    1. # vi /etc/etcd/etcd.conf

    For example:

    1. ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380,master-2.example.com=https://192.168.55.13:2380

    becomes:

    1. ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380

    Restarting the etcd services is not required, because the failed host is removed using etcdctl.
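The ETCD_INITIAL_CLUSTER edit can be scripted. A sketch using a temp file in place of /etc/etcd/etcd.conf, removing the example failed host; this assumes the failed host is not the first entry in the list:

```shell
# A temp file with the example value stands in for /etc/etcd/etcd.conf.
conf=$(mktemp)
echo 'ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380,master-2.example.com=https://192.168.55.13:2380' > "$conf"

# Delete ",<host>=<url>" for the failed host (works for mid/end-of-list entries).
failed="master-2.example.com"
sed -i "s|,${failed}=[^,]*||" "$conf"

result=$(grep '^ETCD_INITIAL_CLUSTER=' "$conf")
rm -f "$conf"
echo "$result"
```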

  3. Modify the Ansible inventory file to reflect the current status of the cluster and to avoid issues when re-running a playbook:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. etcd
    5. ... [OUTPUT ABBREVIATED] ...
    6. [etcd]
    7. master-0.example.com
    8. master-1.example.com
  4. If you are using Flannel, modify the flanneld service configuration located at /etc/sysconfig/flanneld on every host and remove the etcd host:

    1. FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379
  5. Restart the flanneld service:

    1. # systemctl restart flanneld.service