StatefulSet Basics

This tutorial provides an introduction to managing applications with StatefulSets. It demonstrates how to create, delete, scale, and update the Pods of StatefulSets.

Before you begin

Before you begin this tutorial, you should familiarize yourself with the following Kubernetes concepts:

  • Pods
  • Cluster DNS
  • Headless Services
  • PersistentVolumes
  • PersistentVolume Provisioning
  • The kubectl command line tool

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds, such as Killercoda or Play with Kubernetes.

You should configure kubectl to use a context that uses the default namespace. If you are using an existing cluster, make sure that it’s OK to use that cluster’s default namespace to practice. Ideally, practice in a cluster that doesn’t run any real workloads.
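For example, you can point the current context at the default namespace and then verify it (a minimal sketch; your context name may differ):

  # switch the current context to the default namespace
  kubectl config set-context --current --namespace=default
  # verify which namespace the current context uses
  kubectl config view --minify -o jsonpath='{..namespace}'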

It’s also useful to read the concept page about StatefulSets.

Note: This tutorial assumes that your cluster is configured to dynamically provision PersistentVolumes. You’ll also need to have a default StorageClass. If your cluster is not configured to provision storage dynamically, you will have to manually provision two 1 GiB volumes prior to starting this tutorial and set up your cluster so that those PersistentVolumes map to the PersistentVolumeClaim templates that the StatefulSet defines.
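To check whether your cluster has a default StorageClass, list the StorageClasses; the default one is marked (default) in the output:

  kubectl get storageclass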

Objectives

StatefulSets are intended to be used with stateful applications and distributed systems. However, the administration of stateful applications and distributed systems on Kubernetes is a broad, complex topic. In order to demonstrate the basic features of a StatefulSet, and not to conflate the former topic with the latter, you will deploy a simple web application using a StatefulSet.

After this tutorial, you will be familiar with the following:

  • How to create a StatefulSet
  • How a StatefulSet manages its Pods
  • How to delete a StatefulSet
  • How to scale a StatefulSet
  • How to update a StatefulSet’s Pods

Creating a StatefulSet

Begin by creating a StatefulSet using the example below. It is similar to the example presented in the StatefulSets concept. It creates a headless Service, nginx, to publish the IP addresses of Pods in the StatefulSet, web.

application/web/web.yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx
    labels:
      app: nginx
  spec:
    ports:
    - port: 80
      name: web
    clusterIP: None
    selector:
      app: nginx
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    serviceName: "nginx"
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: registry.k8s.io/nginx-slim:0.8
          ports:
          - containerPort: 80
            name: web
          volumeMounts:
          - name: www
            mountPath: /usr/share/nginx/html
    volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet’s Pods.

  # use this terminal to run commands that specify --watch
  # end this watch when you are asked to start a new watch
  kubectl get pods --watch -l app=nginx

In the second terminal, use kubectl apply to create the headless Service and StatefulSet:

  kubectl apply -f https://k8s.io/examples/application/web/web.yaml

  service/nginx created
  statefulset.apps/web created

The command above creates two Pods, each running an NGINX webserver. Get the nginx Service…

  kubectl get service nginx

  NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  nginx     ClusterIP   None         <none>        80/TCP    12s

…then get the web StatefulSet, to verify that both were created successfully:

  kubectl get statefulset web

  NAME      DESIRED   CURRENT   AGE
  web       2         1         20s

Ordered Pod Creation

For a StatefulSet with n replicas, when Pods are being deployed, they are created sequentially, ordered from {0..n-1}. Examine the output of the kubectl get command in the first terminal. Eventually, the output will look like the example below.

  # Do not start a new watch;
  # this should already be running
  kubectl get pods --watch -l app=nginx

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     0/1       Pending             0          0s
  web-0     0/1       Pending             0          0s
  web-0     0/1       ContainerCreating   0          0s
  web-0     1/1       Running             0          19s
  web-1     0/1       Pending             0          0s
  web-1     0/1       Pending             0          0s
  web-1     0/1       ContainerCreating   0          0s
  web-1     1/1       Running             0          18s

Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).

Note: To configure the integer ordinal assigned to each Pod in a StatefulSet, see Start ordinal.
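If you want to check a Pod's phase and Ready condition directly, kubectl's JSONPath output is handy; for example:

  # print the Pod's phase
  kubectl get pod web-0 -o jsonpath='{.status.phase}'
  # print the status of the Pod's Ready condition
  kubectl get pod web-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'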

Pods in a StatefulSet

Pods in a StatefulSet have a unique ordinal index and a stable network identity.

Examining the Pod’s ordinal index

Get the StatefulSet’s Pods:

  kubectl get pods -l app=nginx

  NAME      READY     STATUS    RESTARTS   AGE
  web-0     1/1       Running   0          1m
  web-1     1/1       Running   0          1m

As mentioned in the StatefulSets concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet controller.
The Pods’ names take the form <statefulset name>-<ordinal index>. Since the web StatefulSet has two replicas, it creates two Pods, web-0 and web-1.

Using stable network identities

Each Pod has a stable hostname based on its ordinal index. Use kubectl exec to execute the hostname command in each Pod:

  for i in 0 1; do kubectl exec "web-$i" -- sh -c 'hostname'; done

  web-0
  web-1

Use kubectl run to execute a container that provides the nslookup command from the dnsutils package. Using nslookup on the Pods’ hostnames, you can examine their in-cluster DNS addresses:

  kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm

which starts a new shell. In that new shell, run:

  # Run this in the dns-test container shell
  nslookup web-0.nginx

The output is similar to:

  Server:    10.0.0.10
  Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

  Name:      web-0.nginx
  Address 1: 10.244.1.6

  nslookup web-1.nginx
  Server:    10.0.0.10
  Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

  Name:      web-1.nginx
  Address 1: 10.244.2.6

(and now exit the container shell: exit)

The CNAME of the headless service points to SRV records (one for each Pod that is Running and Ready). The SRV records point to A record entries that contain the Pods’ IP addresses.

In one terminal, watch the StatefulSet’s Pods:

  # Start a new watch
  # End this watch when you've seen that the delete is finished
  kubectl get pod --watch -l app=nginx

In a second terminal, use kubectl delete to delete all the Pods in the StatefulSet:

  kubectl delete pod -l app=nginx

  pod "web-0" deleted
  pod "web-1" deleted

Wait for the StatefulSet to restart them, and for both Pods to transition to Running and Ready:

  # This should already be running
  kubectl get pod --watch -l app=nginx

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     0/1       ContainerCreating   0          0s
  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          2s
  web-1     0/1       Pending             0          0s
  web-1     0/1       Pending             0          0s
  web-1     0/1       ContainerCreating   0          0s
  web-1     1/1       Running             0          34s

Use kubectl exec and kubectl run to view the Pods’ hostnames and in-cluster DNS entries. First, view the Pods’ hostnames:

  for i in 0 1; do kubectl exec "web-$i" -- sh -c 'hostname'; done

  web-0
  web-1

Then run:

  kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm

which starts a new shell.
In that new shell, run:

  # Run this in the dns-test container shell
  nslookup web-0.nginx

The output is similar to:

  Server:    10.0.0.10
  Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

  Name:      web-0.nginx
  Address 1: 10.244.1.7

  nslookup web-1.nginx
  Server:    10.0.0.10
  Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

  Name:      web-1.nginx
  Address 1: 10.244.2.8

(and now exit the container shell: exit)

The Pods’ ordinals, hostnames, SRV records, and A record names have not changed, but the IP addresses associated with the Pods may have changed. In the cluster used for this tutorial, they have. This is why it is important not to configure other applications to connect to Pods in a StatefulSet by IP address.

Discovery for specific Pods in a StatefulSet

If you need to find and connect to the active members of a StatefulSet, you should query the CNAME of the headless Service (nginx.default.svc.cluster.local). The SRV records associated with the CNAME will contain only the Pods in the StatefulSet that are Running and Ready.

If your application already implements connection logic that tests for liveness and readiness, you can use the SRV records of the Pods (web-0.nginx.default.svc.cluster.local, web-1.nginx.default.svc.cluster.local), as they are stable, and your application will be able to discover the Pods’ addresses when they transition to Running and Ready.
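If you want to inspect those SRV records yourself, you need a DNS client that supports query types (the busybox nslookup applet does not). As a sketch, assuming the dnsutils test image from the Kubernetes DNS debugging documentation is available to your cluster:

  # query the SRV records published for the headless Service
  kubectl run -i --tty --rm dns-srv-test --restart=Never \
    --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
    -- nslookup -type=srv nginx.default.svc.cluster.local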

Writing to stable storage

Get the PersistentVolumeClaims for web-0 and web-1:

  kubectl get pvc -l app=nginx

The output is similar to:

  NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
  www-web-0   Bound     pvc-15c268c7-b507-11e6-932f-42010a800002   1Gi        RWO           48s
  www-web-1   Bound     pvc-15c79307-b507-11e6-932f-42010a800002   1Gi        RWO           48s

The StatefulSet controller created two PersistentVolumeClaims that are bound to two PersistentVolumes.

As the cluster used in this tutorial is configured to dynamically provision PersistentVolumes, the PersistentVolumes were created and bound automatically.
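If you would like to see the provisioned PersistentVolumes themselves, you can list them as well (the volume names in your cluster will differ):

  kubectl get pv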

The NGINX webserver, by default, serves an index file from /usr/share/nginx/html/index.html. The volumeMounts field in the StatefulSet’s spec ensures that the /usr/share/nginx/html directory is backed by a PersistentVolume.

Write the Pods’ hostnames to their index.html files and verify that the NGINX webservers serve the hostnames:

  for i in 0 1; do kubectl exec "web-$i" -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'; done
  for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done

  web-0
  web-1

Note:

If you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the volumeMounts (due to a bug when using hostPath volumes), by running:

  for i in 0 1; do kubectl exec web-$i -- chmod 755 /usr/share/nginx/html; done

before retrying the curl command above.

In one terminal, watch the StatefulSet’s Pods:

  # End this watch when you've reached the end of the section.
  # At the start of "Scaling a StatefulSet" you'll start a new watch.
  kubectl get pod --watch -l app=nginx

In a second terminal, delete all of the StatefulSet’s Pods:

  kubectl delete pod -l app=nginx

  pod "web-0" deleted
  pod "web-1" deleted

Examine the output of the kubectl get command in the first terminal, and wait for all of the Pods to transition to Running and Ready.

  # This should already be running
  kubectl get pod --watch -l app=nginx

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     0/1       ContainerCreating   0          0s
  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          2s
  web-1     0/1       Pending             0          0s
  web-1     0/1       Pending             0          0s
  web-1     0/1       ContainerCreating   0          0s
  web-1     1/1       Running             0          34s

Verify the web servers continue to serve their hostnames:

  for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done

  web-0
  web-1

Even though web-0 and web-1 were rescheduled, they continue to serve their hostnames because the PersistentVolumes associated with their PersistentVolumeClaims are remounted to their volumeMounts. No matter what node web-0 and web-1 are scheduled on, their PersistentVolumes will be mounted to the appropriate mount points.

Scaling a StatefulSet

Scaling a StatefulSet refers to increasing or decreasing the number of replicas. This is accomplished by updating the replicas field. You can use either kubectl scale or kubectl patch to scale a StatefulSet.

Scaling up

In one terminal window, watch the Pods in the StatefulSet:

  # If you already have a watch running, you can continue using that.
  # Otherwise, start one.
  # End this watch when there are 5 healthy Pods for the StatefulSet
  kubectl get pods --watch -l app=nginx

In another terminal window, use kubectl scale to scale the number of replicas to 5:

  kubectl scale sts web --replicas=5

  statefulset.apps/web scaled

Examine the output of the kubectl get command in the first terminal, and wait for the three additional Pods to transition to Running and Ready.

  # This should already be running
  kubectl get pod --watch -l app=nginx

  NAME      READY     STATUS    RESTARTS   AGE
  web-0     1/1       Running   0          2h
  web-1     1/1       Running   0          2h
  NAME      READY     STATUS              RESTARTS   AGE
  web-2     0/1       Pending             0          0s
  web-2     0/1       Pending             0          0s
  web-2     0/1       ContainerCreating   0          0s
  web-2     1/1       Running             0          19s
  web-3     0/1       Pending             0          0s
  web-3     0/1       Pending             0          0s
  web-3     0/1       ContainerCreating   0          0s
  web-3     1/1       Running             0          18s
  web-4     0/1       Pending             0          0s
  web-4     0/1       Pending             0          0s
  web-4     0/1       ContainerCreating   0          0s
  web-4     1/1       Running             0          19s

The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for each Pod’s predecessor to be Running and Ready before launching the subsequent Pod.

Scaling down

In one terminal, watch the StatefulSet’s Pods:

  # End this watch when there are only 3 Pods for the StatefulSet
  kubectl get pod --watch -l app=nginx

In another terminal, use kubectl patch to scale the StatefulSet back down to three replicas:

  kubectl patch sts web -p '{"spec":{"replicas":3}}'

  statefulset.apps/web patched

Wait for web-4 and web-3 to transition to Terminating.

  # This should already be running
  kubectl get pods --watch -l app=nginx

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          3h
  web-1     1/1       Running             0          3h
  web-2     1/1       Running             0          55s
  web-3     1/1       Running             0          36s
  web-4     0/1       ContainerCreating   0          18s
  NAME      READY     STATUS        RESTARTS   AGE
  web-4     1/1       Running       0          19s
  web-4     1/1       Terminating   0          24s
  web-4     1/1       Terminating   0          24s
  web-3     1/1       Terminating   0          42s
  web-3     1/1       Terminating   0          42s

Ordered Pod termination

The controller deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each to be completely shut down before deleting the next.

Get the StatefulSet’s PersistentVolumeClaims:

  kubectl get pvc -l app=nginx

  NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
  www-web-0   Bound     pvc-15c268c7-b507-11e6-932f-42010a800002   1Gi        RWO           13h
  www-web-1   Bound     pvc-15c79307-b507-11e6-932f-42010a800002   1Gi        RWO           13h
  www-web-2   Bound     pvc-e1125b27-b508-11e6-932f-42010a800002   1Gi        RWO           13h
  www-web-3   Bound     pvc-e1176df6-b508-11e6-932f-42010a800002   1Gi        RWO           13h
  www-web-4   Bound     pvc-e11bb5f8-b508-11e6-932f-42010a800002   1Gi        RWO           13h

There are still five PersistentVolumeClaims and five PersistentVolumes. When exploring a Pod’s stable storage, we saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted when the StatefulSet’s Pods are deleted. This is still true when Pod deletion is caused by scaling the StatefulSet down.

Updating StatefulSets

In Kubernetes 1.7 and later, the StatefulSet controller supports automated updates. The strategy used is determined by the spec.updateStrategy field of the StatefulSet API Object. This feature can be used to upgrade the container images, resource requests and/or limits, labels, and annotations of the Pods in a StatefulSet. There are two valid update strategies, RollingUpdate and OnDelete.

The RollingUpdate update strategy is the default for StatefulSets.
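If you prefer to set the strategy in a manifest rather than with kubectl patch, it sits under .spec.updateStrategy; the default looks like this fragment:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    updateStrategy:
      type: RollingUpdate
    # ... the rest of the spec is unchanged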

RollingUpdate

The RollingUpdate update strategy will update all Pods in a StatefulSet, in reverse ordinal order, while respecting the StatefulSet guarantees.

Patch the web StatefulSet to apply the RollingUpdate update strategy:

  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'

  statefulset.apps/web patched

In one terminal window, patch the web StatefulSet to change the container image:

  kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"registry.k8s.io/nginx-slim:0.8"}]'

  statefulset.apps/web patched

In another terminal, watch the Pods in the StatefulSet:

  # End this watch when the rollout is complete
  #
  # If you're not sure, leave it running one more minute
  kubectl get pod -l app=nginx --watch

The output is similar to:

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          7m
  web-1     1/1       Running             0          7m
  web-2     1/1       Running             0          8m
  web-2     1/1       Terminating         0          8m
  web-2     1/1       Terminating         0          8m
  web-2     0/1       Terminating         0          8m
  web-2     0/1       Terminating         0          8m
  web-2     0/1       Terminating         0          8m
  web-2     0/1       Terminating         0          8m
  web-2     0/1       Pending             0          0s
  web-2     0/1       Pending             0          0s
  web-2     0/1       ContainerCreating   0          0s
  web-2     1/1       Running             0          19s
  web-1     1/1       Terminating         0          8m
  web-1     0/1       Terminating         0          8m
  web-1     0/1       Terminating         0          8m
  web-1     0/1       Terminating         0          8m
  web-1     0/1       Pending             0          0s
  web-1     0/1       Pending             0          0s
  web-1     0/1       ContainerCreating   0          0s
  web-1     1/1       Running             0          6s
  web-0     1/1       Terminating         0          7m
  web-0     1/1       Terminating         0          7m
  web-0     0/1       Terminating         0          7m
  web-0     0/1       Terminating         0          7m
  web-0     0/1       Terminating         0          7m
  web-0     0/1       Terminating         0          7m
  web-0     0/1       Pending             0          0s
  web-0     0/1       Pending             0          0s
  web-0     0/1       ContainerCreating   0          0s
  web-0     1/1       Running             0          10s

The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod. Note that, even though the StatefulSet controller will not proceed to update the next Pod until its ordinal successor is Running and Ready, it will restore any Pod that fails during the update to its current version.

Pods that have already received the update will be restored to the updated version, and Pods that have not yet received the update will be restored to the previous version. In this way, the controller attempts to continue to keep the application healthy and the update consistent in the presence of intermittent failures.

Get the Pods to view their container images:

  for p in 0 1 2; do kubectl get pod "web-$p" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done

  registry.k8s.io/nginx-slim:0.8
  registry.k8s.io/nginx-slim:0.8
  registry.k8s.io/nginx-slim:0.8

All the Pods in the StatefulSet are now running the container image specified in the patch.

Note: You can also use kubectl rollout status sts/<name> to view the status of a rolling update to a StatefulSet.
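For the web StatefulSet in this tutorial, that is:

  kubectl rollout status sts/web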

Staging an update

You can stage an update to a StatefulSet by using the partition parameter of the RollingUpdate update strategy. A staged update will keep all of the Pods in the StatefulSet at the current version while allowing mutations to the StatefulSet’s .spec.template.

Patch the web StatefulSet to add a partition to the updateStrategy field:

  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'

  statefulset.apps/web patched

Patch the StatefulSet again to change the container’s image:

  kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"registry.k8s.io/nginx-slim:0.7"}]'

  statefulset.apps/web patched

Delete a Pod in the StatefulSet:

  kubectl delete pod web-2

  pod "web-2" deleted

Wait for the Pod to be Running and Ready.

  # End the watch when you see that web-2 is healthy
  kubectl get pod -l app=nginx --watch

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          4m
  web-1     1/1       Running             0          4m
  web-2     0/1       ContainerCreating   0          11s
  web-2     1/1       Running             0          18s

Get the Pod’s container image:

  kubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'

  registry.k8s.io/nginx-slim:0.8

Notice that, even though the update strategy is RollingUpdate, the StatefulSet restored the Pod with its original container image. This is because the ordinal of the Pod is less than the partition specified by the updateStrategy.

Rolling out a canary

You can roll out a canary to test a modification by decrementing the partition you specified above.

Patch the StatefulSet to decrement the partition:

  # The value of "partition" should match the highest existing ordinal for
  # the StatefulSet
  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'

  statefulset.apps/web patched

Wait for web-2 to be Running and Ready.

  # This should already be running
  kubectl get pod -l app=nginx --watch

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          4m
  web-1     1/1       Running             0          4m
  web-2     0/1       ContainerCreating   0          11s
  web-2     1/1       Running             0          18s

Get the Pod’s container image:

  kubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'

  registry.k8s.io/nginx-slim:0.7

When you changed the partition, the StatefulSet controller automatically updated the web-2 Pod because the Pod’s ordinal was greater than or equal to the partition.

Delete the web-1 Pod:

  kubectl delete pod web-1

  pod "web-1" deleted

Wait for the web-1 Pod to be Running and Ready.

  # This should already be running
  kubectl get pod -l app=nginx --watch

The output is similar to:

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          6m
  web-1     0/1       Terminating         0          6m
  web-2     1/1       Running             0          2m
  web-1     0/1       Terminating         0          6m
  web-1     0/1       Terminating         0          6m
  web-1     0/1       Terminating         0          6m
  web-1     0/1       Pending             0          0s
  web-1     0/1       Pending             0          0s
  web-1     0/1       ContainerCreating   0          0s
  web-1     1/1       Running             0          18s

Get the web-1 Pod’s container image:

  kubectl get pod web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'

  registry.k8s.io/nginx-slim:0.8

web-1 was restored to its original configuration because the Pod’s ordinal was less than the partition. When a partition is specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet’s .spec.template is updated. If a Pod that has an ordinal less than the partition is deleted or otherwise terminated, it will be restored to its original configuration.

Phased roll outs

You can perform a phased roll out (e.g. a linear, geometric, or exponential roll out) using a partitioned rolling update in a similar manner to how you rolled out a canary. To perform a phased roll out, set the partition to the ordinal at which you want the controller to pause the update.
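For illustration only (do not run this now; the steps below move the partition to 0 in a single step), a linear phased roll out of the web StatefulSet could lower the partition one ordinal at a time and let kubectl rollout status wait for each phase to finish:

  # hypothetical sketch: update one additional ordinal per phase
  for p in 2 1 0; do
    kubectl patch statefulset web -p "{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":$p}}}}"
    kubectl rollout status sts/web
  done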

The partition is currently set to 2. Set the partition to 0:

  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'

  statefulset.apps/web patched

Wait for all of the Pods in the StatefulSet to become Running and Ready.

  # This should already be running
  kubectl get pod -l app=nginx --watch

The output is similar to:

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     1/1       Running             0          3m
  web-1     0/1       ContainerCreating   0          11s
  web-2     1/1       Running             0          2m
  web-1     1/1       Running             0          18s
  web-0     1/1       Terminating         0          3m
  web-0     1/1       Terminating         0          3m
  web-0     0/1       Terminating         0          3m
  web-0     0/1       Terminating         0          3m
  web-0     0/1       Terminating         0          3m
  web-0     0/1       Terminating         0          3m
  web-0     0/1       Pending             0          0s
  web-0     0/1       Pending             0          0s
  web-0     0/1       ContainerCreating   0          0s
  web-0     1/1       Running             0          3s

Get the container image details for the Pods in the StatefulSet:

  for p in 0 1 2; do kubectl get pod "web-$p" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done

  registry.k8s.io/nginx-slim:0.7
  registry.k8s.io/nginx-slim:0.7
  registry.k8s.io/nginx-slim:0.7

By moving the partition to 0, you allowed the StatefulSet to continue the update process.

OnDelete

The OnDelete update strategy implements the legacy (1.6 and prior) behavior. When you select this update strategy, the StatefulSet controller will not automatically update Pods when a modification is made to the StatefulSet’s .spec.template field. This strategy can be selected by setting the .spec.updateStrategy.type to OnDelete.
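If you want to try it, the strategy can be selected with a patch like the one below (a sketch; with this strategy in place, Pods only pick up .spec.template changes after you delete them and let the controller recreate them):

  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'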

Deleting StatefulSets

StatefulSet supports both Non-Cascading and Cascading deletion. In a Non-Cascading Delete, the StatefulSet’s Pods are not deleted when the StatefulSet is deleted. In a Cascading Delete, both the StatefulSet and its Pods are deleted.

Non-cascading delete

In one terminal window, watch the Pods in the StatefulSet.

  # End this watch when there are no Pods for the StatefulSet
  kubectl get pods --watch -l app=nginx

Use kubectl delete to delete the StatefulSet. Make sure to supply the --cascade=orphan parameter to the command. This parameter tells Kubernetes to only delete the StatefulSet, and to not delete any of its Pods.

  kubectl delete statefulset web --cascade=orphan

  statefulset.apps "web" deleted

Get the Pods, to examine their status:

  kubectl get pods -l app=nginx

  NAME      READY     STATUS    RESTARTS   AGE
  web-0     1/1       Running   0          6m
  web-1     1/1       Running   0          7m
  web-2     1/1       Running   0          5m

Even though web has been deleted, all of the Pods are still Running and Ready. Delete web-0:

  kubectl delete pod web-0

  pod "web-0" deleted

Get the StatefulSet’s Pods:

  kubectl get pods -l app=nginx

  NAME      READY     STATUS    RESTARTS   AGE
  web-1     1/1       Running   0          10m
  web-2     1/1       Running   0          7m

As the web StatefulSet has been deleted, web-0 has not been relaunched.

In one terminal, watch the StatefulSet’s Pods.

  # Leave this watch running until the next time you start a watch
  kubectl get pods --watch -l app=nginx

In a second terminal, recreate the StatefulSet. Note that, unless you deleted the nginx Service (which you should not have), kubectl apply reports that the Service is unchanged because it already exists.

  kubectl apply -f https://k8s.io/examples/application/web/web.yaml

  statefulset.apps/web created
  service/nginx unchanged

Ignore the message about the Service. It only indicates that the nginx headless Service already existed and was left in place.

Examine the output of the kubectl get command running in the first terminal.

  # This should already be running
  kubectl get pods --watch -l app=nginx

  NAME      READY     STATUS    RESTARTS   AGE
  web-1     1/1       Running   0          16m
  web-2     1/1       Running   0          2m
  NAME      READY     STATUS              RESTARTS   AGE
  web-0     0/1       Pending             0          0s
  web-0     0/1       Pending             0          0s
  web-0     0/1       ContainerCreating   0          0s
  web-0     1/1       Running             0          18s
  web-2     1/1       Terminating         0          3m
  web-2     0/1       Terminating         0          3m
  web-2     0/1       Terminating         0          3m
  web-2     0/1       Terminating         0          3m

When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, the StatefulSet adopted the existing web-1 Pod. Since you recreated the StatefulSet with replicas equal to 2, once web-0 had been recreated, and once web-1 had been determined to already be Running and Ready, web-2 was terminated.

Let’s take another look at the contents of the index.html file served by the Pods’ webservers:

  for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done

  web-0
  web-1

Even though you deleted both the StatefulSet and the web-0 Pod, it still serves the hostname originally entered into its index.html file. This is because the StatefulSet never deletes the PersistentVolumes associated with a Pod. When you recreated the StatefulSet and it relaunched web-0, its original PersistentVolume was remounted.

Cascading delete

In one terminal window, watch the Pods in the StatefulSet.

  # Leave this running until the next page section
  kubectl get pods --watch -l app=nginx

In another terminal, delete the StatefulSet again. This time, omit the --cascade=orphan parameter.

  kubectl delete statefulset web

  statefulset.apps "web" deleted

Examine the output of the kubectl get command running in the first terminal, and wait for all of the Pods to transition to Terminating.

  # This should already be running
  kubectl get pods --watch -l app=nginx

  NAME      READY     STATUS    RESTARTS   AGE
  web-0     1/1       Running   0          11m
  web-1     1/1       Running   0          27m
  NAME      READY     STATUS        RESTARTS   AGE
  web-0     1/1       Terminating   0          12m
  web-1     1/1       Terminating   0          29m
  web-0     0/1       Terminating   0          12m
  web-0     0/1       Terminating   0          12m
  web-0     0/1       Terminating   0          12m
  web-1     0/1       Terminating   0          29m
  web-1     0/1       Terminating   0          29m
  web-1     0/1       Terminating   0          29m

As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits for the Pod’s successor to be completely terminated.

Note: Although a cascading delete removes a StatefulSet together with its Pods, the cascade does not delete the headless Service associated with the StatefulSet. You must delete the nginx Service manually.

  kubectl delete service nginx

  service "nginx" deleted

Recreate the StatefulSet and headless Service one more time:

  kubectl apply -f https://k8s.io/examples/application/web/web.yaml

  service/nginx created
  statefulset.apps/web created

When all of the StatefulSet’s Pods transition to Running and Ready, retrieve the contents of their index.html files:

  for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done

  web-0
  web-1

Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and web-0 and web-1 continue to serve their hostnames.

Finally, delete the nginx Service…

  kubectl delete service nginx

  service "nginx" deleted

…and the web StatefulSet:

  kubectl delete statefulset web

  statefulset "web" deleted

Pod management policy

For some distributed systems, the StatefulSet ordering guarantees are unnecessary and/or undesirable. These systems require only uniqueness and identity. To address this, in Kubernetes 1.7, we introduced .spec.podManagementPolicy to the StatefulSet API Object.

OrderedReady Pod management

OrderedReady pod management is the default for StatefulSets. It tells the StatefulSet controller to respect the ordering guarantees demonstrated above.

Parallel Pod management

Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod. This option only affects the behavior for scaling operations. Updates are not affected.

application/web/web-parallel.yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx
    labels:
      app: nginx
  spec:
    ports:
    - port: 80
      name: web
    clusterIP: None
    selector:
      app: nginx
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    serviceName: "nginx"
    podManagementPolicy: "Parallel"
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: registry.k8s.io/nginx-slim:0.8
          ports:
          - containerPort: 80
            name: web
          volumeMounts:
          - name: www
            mountPath: /usr/share/nginx/html
    volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.

In one terminal, watch the Pods in the StatefulSet.

  # Leave this watch running until the end of the section
  kubectl get pod -l app=nginx --watch

In another terminal, create the StatefulSet and Service in the manifest:

  kubectl apply -f https://k8s.io/examples/application/web/web-parallel.yaml

  service/nginx created
  statefulset.apps/web created

Examine the output of the kubectl get command that you executed in the first terminal.

  # This should already be running
  kubectl get pod -l app=nginx --watch

  NAME      READY     STATUS              RESTARTS   AGE
  web-0     0/1       Pending             0          0s
  web-0     0/1       Pending             0          0s
  web-1     0/1       Pending             0          0s
  web-1     0/1       Pending             0          0s
  web-0     0/1       ContainerCreating   0          0s
  web-1     0/1       ContainerCreating   0          0s
  web-0     1/1       Running             0          10s
  web-1     1/1       Running             0          10s

The StatefulSet controller launched both web-0 and web-1 at the same time.

Keep the second terminal open and, in another terminal window, scale the StatefulSet:

  kubectl scale statefulset/web --replicas=4

  statefulset.apps/web scaled

Examine the output of the terminal where the kubectl get command is running.

  web-3     0/1       Pending             0          0s
  web-3     0/1       Pending             0          0s
  web-3     0/1       Pending             0          7s
  web-3     0/1       ContainerCreating   0          7s
  web-2     1/1       Running             0          10s
  web-3     1/1       Running             0          26s

The StatefulSet launched two new Pods, and it did not wait for the first to become Running and Ready prior to launching the second.

Cleaning up

You should have two terminals open, ready for you to run kubectl commands as part of cleanup.

  # sts is an abbreviation for statefulset
  kubectl delete sts web

You can watch the output of kubectl get to see those Pods being deleted.

  # end the watch when you've seen what you need to
  kubectl get pod -l app=nginx --watch

  web-3     1/1       Terminating   0          9m
  web-2     1/1       Terminating   0          9m
  web-3     1/1       Terminating   0          9m
  web-2     1/1       Terminating   0          9m
  web-1     1/1       Terminating   0          44m
  web-0     1/1       Terminating   0          44m
  web-0     0/1       Terminating   0          44m
  web-3     0/1       Terminating   0          9m
  web-2     0/1       Terminating   0          9m
  web-1     0/1       Terminating   0          44m
  web-0     0/1       Terminating   0          44m
  web-2     0/1       Terminating   0          9m
  web-2     0/1       Terminating   0          9m
  web-2     0/1       Terminating   0          9m
  web-1     0/1       Terminating   0          44m
  web-1     0/1       Terminating   0          44m
  web-1     0/1       Terminating   0          44m
  web-0     0/1       Terminating   0          44m
  web-0     0/1       Terminating   0          44m
  web-0     0/1       Terminating   0          44m
  web-3     0/1       Terminating   0          9m
  web-3     0/1       Terminating   0          9m
  web-3     0/1       Terminating   0          9m

During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod’s ordinal successor to terminate prior to deleting that Pod.

Close the terminal where the kubectl get command is running and delete the nginx Service:

  kubectl delete svc nginx

Delete the persistent storage media for the PersistentVolumes used in this tutorial.

  kubectl get pvc

  NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  www-web-0   Bound    pvc-2bf00408-d366-4a12-bad0-1869c65d0bee   1Gi        RWO            standard       25m
  www-web-1   Bound    pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4   1Gi        RWO            standard       24m
  www-web-2   Bound    pvc-cba6cfa6-3a47-486b-a138-db5930207eaf   1Gi        RWO            standard       15m
  www-web-3   Bound    pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752   1Gi        RWO            standard       15m
  www-web-4   Bound    pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e   1Gi        RWO            standard       14m

  kubectl get pv

  NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
  pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752   1Gi        RWO            Delete           Bound    default/www-web-3   standard                15m
  pvc-2bf00408-d366-4a12-bad0-1869c65d0bee   1Gi        RWO            Delete           Bound    default/www-web-0   standard                25m
  pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e   1Gi        RWO            Delete           Bound    default/www-web-4   standard                14m
  pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4   1Gi        RWO            Delete           Bound    default/www-web-1   standard                24m
  pvc-cba6cfa6-3a47-486b-a138-db5930207eaf   1Gi        RWO            Delete           Bound    default/www-web-2   standard                15m

  kubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4

  persistentvolumeclaim "www-web-0" deleted
  persistentvolumeclaim "www-web-1" deleted
  persistentvolumeclaim "www-web-2" deleted
  persistentvolumeclaim "www-web-3" deleted
  persistentvolumeclaim "www-web-4" deleted

  kubectl get pvc

  No resources found in default namespace.

Note: You also need to delete the persistent storage media for the PersistentVolumes used in this tutorial. Follow the necessary steps, based on your environment, storage configuration, and provisioning method, to ensure that all storage is reclaimed.