Garbage Collection

The role of the Kubernetes garbage collector is to delete certain objects that once had an owner, but no longer have an owner.

Owners and dependents

Some Kubernetes objects are owners of other objects. For example, a ReplicaSet is the owner of a set of Pods. The owned objects are called dependents of the owner object. Every dependent object has a metadata.ownerReferences field that points to the owning object.

Sometimes, Kubernetes sets the value of ownerReference automatically. For example, when you create a ReplicaSet, Kubernetes automatically sets the ownerReference field of each Pod in the ReplicaSet. In Kubernetes 1.8 and later, the value of ownerReference is set automatically for objects created or adopted by ReplicationController, ReplicaSet, StatefulSet, DaemonSet, Deployment, Job, and CronJob.

You can also specify relationships between owners and dependents by manually setting the ownerReference field.
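
As an illustrative sketch (the names, data, and uid placeholder here are hypothetical, not part of the examples below), a manually created dependent declares its owner in metadata.ownerReferences:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-config            # hypothetical dependent object
    ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: example-owner           # name of the owning object
      uid: REPLACE-WITH-OWNER-UID   # must match the owner's metadata.uid in the cluster
  data:
    example-key: example-value

The owner's uid can be read with kubectl get replicaset example-owner --output=jsonpath='{.metadata.uid}'.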

Here’s a configuration file for a ReplicaSet that has three Pods:

controllers/replicaset.yaml

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: my-repset
  spec:
    replicas: 3
    selector:
      matchLabels:
        pod-is-for: garbage-collection-example
    template:
      metadata:
        labels:
          pod-is-for: garbage-collection-example
      spec:
        containers:
        - name: nginx
          image: nginx

If you create the ReplicaSet and then view the Pod metadata, you can see the ownerReferences field:

  kubectl apply -f https://k8s.io/examples/controllers/replicaset.yaml
  kubectl get pods --output=yaml

The output shows that the Pod owner is a ReplicaSet named my-repset:

  apiVersion: v1
  kind: Pod
  metadata:
    ...
    ownerReferences:
    - apiVersion: apps/v1
      controller: true
      blockOwnerDeletion: true
      kind: ReplicaSet
      name: my-repset
      uid: d9607e19-f88f-11e6-a518-42010a800195
    ...

Note:

Cross-namespace owner references are disallowed by design.

Namespaced dependents can specify cluster-scoped or namespaced owners. A namespaced owner must exist in the same namespace as the dependent. If it does not, the owner reference is treated as absent, and the dependent is subject to deletion once all owners are verified absent.

Cluster-scoped dependents can only specify cluster-scoped owners. In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner, it is treated as having an unresolvable owner reference and cannot be garbage collected.

In v1.20+, if the garbage collector detects an invalid cross-namespace ownerReference, or a cluster-scoped dependent with an ownerReference referencing a namespaced kind, a warning Event with a reason of OwnerRefInvalidNamespace and an involvedObject of the invalid dependent is reported. You can check for that kind of Event by running:
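
  kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace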

Controlling how the garbage collector deletes dependents

When you delete an object, you can specify whether the object’s dependents are also deleted automatically. Deleting dependents automatically is called cascading deletion. There are two modes of cascading deletion: background and foreground.

If you delete an object without deleting its dependents automatically, the dependents are said to be orphaned.

Foreground cascading deletion

In foreground cascading deletion, the root object first enters a “deletion in progress” state. In the “deletion in progress” state, the following things are true:

  • The object is still visible via the REST API
  • The object’s deletionTimestamp is set
  • The object’s metadata.finalizers contains the value “foregroundDeletion”.

Once the “deletion in progress” state is set, the garbage collector deletes the object’s dependents. Once the garbage collector has deleted all “blocking” dependents (objects with ownerReference.blockOwnerDeletion=true), it deletes the owner object.

Note that during foreground deletion, only dependents with ownerReference.blockOwnerDeletion=true block the deletion of the owner object. Kubernetes 1.7 added an admission controller that controls user access to set blockOwnerDeletion to true based on delete permissions on the owner object, so that unauthorized dependents cannot delay deletion of an owner object.

If an object’s ownerReferences field is set by a controller (such as Deployment or ReplicaSet), blockOwnerDeletion is set automatically and you do not need to manually modify this field.
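
While a foreground deletion is in progress, you can observe this state from the command line. The following is a sketch, assuming you have started a foreground deletion of the my-repset ReplicaSet from the example above:

  # Prints the deletion timestamp and the finalizers list; while blocking
  # dependents remain, the finalizers include foregroundDeletion.
  kubectl get replicaset my-repset --output=jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'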

Background cascading deletion

In background cascading deletion, Kubernetes deletes the owner object immediately and the garbage collector then deletes the dependents in the background.

Setting the cascading deletion policy

To control the cascading deletion policy, set the propagationPolicy field on the deleteOptions argument when deleting an object. Possible values are “Orphan”, “Foreground”, or “Background”.

Here’s an example that deletes dependents in the background:

  kubectl proxy --port=8080
  curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
    -H "Content-Type: application/json"

Here’s an example that deletes dependents in the foreground:

  kubectl proxy --port=8080
  curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    -H "Content-Type: application/json"

Here’s an example that orphans dependents:

  kubectl proxy --port=8080
  curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
    -H "Content-Type: application/json"

kubectl also supports cascading deletion. To delete dependents automatically using kubectl, set --cascade to true. To orphan dependents, set --cascade to false. The default value for --cascade is true.

Here’s an example that orphans the dependents of a ReplicaSet:

  kubectl delete replicaset my-repset --cascade=false
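
And here’s an example that deletes the ReplicaSet together with its dependents, which is the default behavior:

  kubectl delete replicaset my-repset --cascade=true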

Additional note on Deployments

Prior to Kubernetes 1.7, when using cascading deletes with Deployments, you must use propagationPolicy: Foreground to delete not only the ReplicaSets created, but also their Pods. If this type of propagationPolicy is not used, only the ReplicaSets are deleted and the Pods are orphaned. See kubeadm/#149 for more information.
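
For example, a foreground delete of a Deployment through the API follows the same pattern as the ReplicaSet examples above; the Deployment name my-deployment is a placeholder:

  kubectl proxy --port=8080
  # "my-deployment" is a hypothetical Deployment name
  curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/my-deployment \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    -H "Content-Type: application/json"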

Known issues

Tracked at #26120

What’s next

Design Doc 1

Design Doc 2