Perform a Rolling Update on a DaemonSet

This page shows how to perform a rolling update on a DaemonSet.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of these Kubernetes playgrounds.

DaemonSet Update Strategy

DaemonSet has two update strategy types:

  • OnDelete: With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods. This matches the behavior of DaemonSet in Kubernetes version 1.5 or earlier.
  • RollingUpdate: This is the default update strategy.
    With the RollingUpdate update strategy, after you update a DaemonSet template, old DaemonSet pods will be killed, and new DaemonSet pods will be created automatically, in a controlled fashion. At most one pod of the DaemonSet will be running on each node during the whole update process.

Performing a Rolling Update

To enable the rolling update feature of a DaemonSet, you must set its .spec.updateStrategy.type to RollingUpdate.

You may also want to set .spec.updateStrategy.rollingUpdate.maxUnavailable (defaults to 1), .spec.minReadySeconds (defaults to 0), and .spec.updateStrategy.rollingUpdate.maxSurge (defaults to 0).
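As a minimal sketch, the tuning fields above sit together in the DaemonSet spec like this; the values shown are illustrative, not recommendations:

```yaml
spec:
  minReadySeconds: 5          # wait 5s after a pod is Ready before counting it as available
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one node may be without a running DaemonSet pod
      maxSurge: 0             # no extra pod per node during the update
```

Note that maxSurge can only be set to a non-zero value when maxUnavailable is 0, since the two limits are mutually exclusive ways of bounding the update.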

Creating a DaemonSet with RollingUpdate update strategy

This YAML file specifies a DaemonSet with the RollingUpdate update strategy:

controllers/fluentd-daemonset.yaml

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd-elasticsearch
      namespace: kube-system
      labels:
        k8s-app: fluentd-logging
    spec:
      selector:
        matchLabels:
          name: fluentd-elasticsearch
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      template:
        metadata:
          labels:
            name: fluentd-elasticsearch
        spec:
          tolerations:
          # these tolerations are to have the daemonset runnable on control plane nodes
          # remove them if your control plane nodes should not run pods
          - key: node-role.kubernetes.io/control-plane
            operator: Exists
            effect: NoSchedule
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          containers:
          - name: fluentd-elasticsearch
            image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
            volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          terminationGracePeriodSeconds: 30
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers

After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:

    kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml

Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.

    kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml

Checking DaemonSet RollingUpdate update strategy

Check the update strategy of your DaemonSet, and make sure it’s set to RollingUpdate:

    kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system

If you haven’t created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:

    kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'

The output from both commands should be:

    RollingUpdate

If the output isn’t RollingUpdate, go back and modify the DaemonSet object or manifest accordingly.

Updating a DaemonSet template

Any updates to a RollingUpdate DaemonSet .spec.template will trigger a rolling update. Let’s update the DaemonSet by applying a new YAML file. This can be done with several different kubectl commands.

controllers/fluentd-daemonset-update.yaml

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd-elasticsearch
      namespace: kube-system
      labels:
        k8s-app: fluentd-logging
    spec:
      selector:
        matchLabels:
          name: fluentd-elasticsearch
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      template:
        metadata:
          labels:
            name: fluentd-elasticsearch
        spec:
          tolerations:
          # these tolerations are to have the daemonset runnable on control plane nodes
          # remove them if your control plane nodes should not run pods
          - key: node-role.kubernetes.io/control-plane
            operator: Exists
            effect: NoSchedule
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          containers:
          - name: fluentd-elasticsearch
            image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
            resources:
              limits:
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
            volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          terminationGracePeriodSeconds: 30
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers

Declarative commands

If you update DaemonSets using configuration files, use kubectl apply:

    kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml

Imperative commands

If you update DaemonSets using imperative commands, use kubectl edit:

    kubectl edit ds/fluentd-elasticsearch -n kube-system

Updating only the container image

If you only need to update the container image in the DaemonSet template, i.e. .spec.template.spec.containers[*].image, use kubectl set image:

    kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
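To confirm that the template change registered as a new controller revision, you can inspect the rollout history (the revision numbers you see will vary by cluster):

```shell
kubectl rollout history ds/fluentd-elasticsearch -n kube-system
```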

Watching the rolling update status

Finally, watch the rollout status of the latest DaemonSet rolling update:

    kubectl rollout status ds/fluentd-elasticsearch -n kube-system

When the rollout is complete, the output is similar to this:

    daemonset "fluentd-elasticsearch" successfully rolled out
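As an additional check, you can read the image back from the live object with a JSONPath query; this sketch assumes the single-container template used on this page:

```shell
kubectl get ds/fluentd-elasticsearch -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```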

Troubleshooting

DaemonSet rolling update is stuck

Sometimes, a DaemonSet rolling update may be stuck. Here are some possible causes:

Some nodes run out of resources

The rollout is stuck because new DaemonSet pods can’t be scheduled on at least one node. This is possible when the node is running out of resources.

When this happens, find the nodes that do not have DaemonSet pods scheduled on them by comparing the output of kubectl get nodes with the output of:

    kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system

Once you’ve found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.
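The comparison above can also be scripted. This sketch prints the names of nodes that have no fluentd-elasticsearch pod; it assumes a POSIX shell with sort and comm available:

```shell
# all node names, sorted
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
  | sort > /tmp/all-nodes
# node names that already run a DaemonSet pod, sorted
kubectl get pods -l name=fluentd-elasticsearch -n kube-system \
  -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' \
  | sort > /tmp/ds-nodes
# lines only in the first file: nodes missing a DaemonSet pod
comm -23 /tmp/all-nodes /tmp/ds-nodes
```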

Note: Deleting pods this way causes service disruption if the deleted pods are not controlled by any controller or are not replicated. It also does not respect PodDisruptionBudget.

Broken rollout

If the most recent DaemonSet template update is broken, for example because the container is crash looping or the container image doesn't exist (often due to a typo), the DaemonSet rollout won't progress.

To fix this, update the DaemonSet template again. A new rollout is not blocked by previous unhealthy rollouts.
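If you would rather revert than fix forward, kubectl rollout undo also works for DaemonSets; it restores the previous template revision:

```shell
kubectl rollout undo ds/fluentd-elasticsearch -n kube-system
```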

Clock skew

If .spec.minReadySeconds is specified in the DaemonSet, clock skew between the control plane and the nodes will make the DaemonSet unable to detect the correct rollout progress.

Clean up

Delete the DaemonSet from a namespace:

    kubectl delete ds fluentd-elasticsearch -n kube-system

What’s next