Volumes

On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. One problem is the loss of files when a container crashes. The kubelet restarts the container but with a clean state. A second problem occurs when sharing files between containers running together in a Pod. The Kubernetes volume abstraction solves both of these problems. Familiarity with Pods is suggested.

Background

Docker has a concept of volumes, though it is somewhat looser and less managed. A Docker volume is a directory on disk or in another container. Docker provides volume drivers, but the functionality is somewhat limited.

Kubernetes supports many types of volumes. A Pod can use any number of volume types simultaneously. Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. For any kind of volume in a given pod, data is preserved across container restarts.

At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.

To use a volume, specify the volumes to provide for the Pod in .spec.volumes and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts. A process in a container sees a filesystem view composed of its Docker image and volumes. The Docker image is at the root of the filesystem hierarchy. Volumes mount at the specified paths within the image. Volumes cannot mount onto other volumes or have hard links to other volumes. Each container in the Pod’s configuration must independently specify where to mount each volume.
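
As an illustrative sketch (not tied to any particular volume type), the following Pod declares one volume and mounts it into a single container; the names and paths are placeholders:

  apiVersion: v1
  kind: Pod
  metadata:
    name: volume-demo
  spec:
    containers:
    - name: app
      image: busybox
      # The container sees the volume's contents at /data.
      volumeMounts:
      - name: demo-volume
        mountPath: /data
    volumes:
    # The volume itself is declared once at the Pod level.
    - name: demo-volume
      emptyDir: {}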

Types of Volumes

Kubernetes supports several types of volumes.

awsElasticBlockStore

An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS volume into your pod. Unlike emptyDir, which is erased when a pod is removed, the contents of an EBS volume are preserved and the volume is merely unmounted. This means that an EBS volume can be pre-populated with data, and that data can be shared between pods.

Note: You must create an EBS volume by using aws ec2 create-volume or the AWS API before you can use it.

There are some restrictions when using an awsElasticBlockStore volume:

  • the nodes on which pods are running must be AWS EC2 instances
  • those instances need to be in the same region and availability zone as the EBS volume
  • EBS only supports a single EC2 instance mounting a volume

Creating an AWS EBS volume

Before you can use an EBS volume with a pod, you need to create it.

  aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2

Make sure the zone matches the zone you brought up your cluster in. Check that the size and EBS volume type are suitable for your use.

AWS EBS configuration example

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-ebs
  spec:
    containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
      - mountPath: /test-ebs
        name: test-volume
    volumes:
    - name: test-volume
      # This AWS EBS volume must already exist.
      awsElasticBlockStore:
        volumeID: "<volume id>"
        fsType: ext4

If the EBS volume is partitioned, you can supply the optional field partition: "<partition number>" to specify which partition to mount on.

AWS EBS CSI migration

FEATURE STATE: Kubernetes v1.17 [beta]

The CSIMigration feature for awsElasticBlockStore, when enabled, redirects all plugin operations from the existing in-tree plugin to the ebs.csi.aws.com Container Storage Interface (CSI) driver. In order to use this feature, the AWS EBS CSI driver must be installed on the cluster and the CSIMigration and CSIMigrationAWS beta features must be enabled.

AWS EBS CSI migration complete

FEATURE STATE: Kubernetes v1.17 [alpha]

To disable the awsElasticBlockStore storage plugin from being loaded by the controller manager and the kubelet, set the InTreePluginAWSUnregister flag to true.

azureDisk

The azureDisk volume type mounts a Microsoft Azure Data Disk into a pod.

For more details, see the azureDisk volume plugin.

azureDisk CSI migration

FEATURE STATE: Kubernetes v1.19 [beta]

The CSIMigration feature for azureDisk, when enabled, redirects all plugin operations from the existing in-tree plugin to the disk.csi.azure.com Container Storage Interface (CSI) Driver. In order to use this feature, the Azure Disk CSI Driver must be installed on the cluster and the CSIMigration and CSIMigrationAzureDisk features must be enabled.

azureFile

The azureFile volume type mounts a Microsoft Azure File volume (SMB 2.1 and 3.0) into a pod.

For more details, see the azureFile volume plugin.

azureFile CSI migration

FEATURE STATE: Kubernetes v1.21 [beta]

The CSIMigration feature for azureFile, when enabled, redirects all plugin operations from the existing in-tree plugin to the file.csi.azure.com Container Storage Interface (CSI) Driver. In order to use this feature, the Azure File CSI Driver must be installed on the cluster and the CSIMigration and CSIMigrationAzureFile feature gates must be enabled.

The Azure File CSI driver does not support using the same volume with different fsgroups. If Azure File CSI migration is enabled, using the same volume with different fsgroups is not supported at all.

cephfs

A cephfs volume allows an existing CephFS volume to be mounted into your Pod. Unlike emptyDir, which is erased when a pod is removed, the contents of a cephfs volume are preserved and the volume is merely unmounted. This means that a cephfs volume can be pre-populated with data, and that data can be shared between pods. The cephfs volume can be mounted by multiple writers simultaneously.

Note: You must have your own Ceph server running with the share exported before you can use it.

See the CephFS example for more details.
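
A minimal Pod using a cephfs volume might look like the sketch below; the monitor addresses, user, and secret name are placeholders for your own Ceph setup:

  apiVersion: v1
  kind: Pod
  metadata:
    name: cephfs-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: cephfs-volume
        mountPath: /mnt/cephfs
    volumes:
    - name: cephfs-volume
      cephfs:
        # Addresses of the Ceph monitors (placeholders).
        monitors:
        - 10.16.154.78:6789
        - 10.16.154.82:6789
        user: admin
        # Reference to a Secret holding the Ceph keyring.
        secretRef:
          name: ceph-secret
        readOnly: true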

cinder

Note: Kubernetes must be configured with the OpenStack cloud provider.

The cinder volume type is used to mount the OpenStack Cinder volume into your pod.

Cinder volume configuration example

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-cinder
  spec:
    containers:
    - image: k8s.gcr.io/test-webserver
      name: test-cinder-container
      volumeMounts:
      - mountPath: /test-cinder
        name: test-volume
    volumes:
    - name: test-volume
      # This OpenStack volume must already exist.
      cinder:
        volumeID: "<volume id>"
        fsType: ext4

OpenStack CSI migration

FEATURE STATE: Kubernetes v1.21 [beta]

The CSIMigration feature for Cinder is enabled by default in Kubernetes 1.21. It redirects all plugin operations from the existing in-tree plugin to the cinder.csi.openstack.org Container Storage Interface (CSI) Driver. OpenStack Cinder CSI Driver must be installed on the cluster. You can disable Cinder CSI migration for your cluster by setting the CSIMigrationOpenStack feature gate to false. If you disable the CSIMigrationOpenStack feature, the in-tree Cinder volume plugin takes responsibility for all aspects of Cinder volume storage management.

configMap

A ConfigMap provides a way to inject configuration data into pods. The data stored in a ConfigMap can be referenced in a volume of type configMap and then consumed by containerized applications running in a pod.

When referencing a ConfigMap, you provide the name of the ConfigMap in the volume. You can customize the path to use for a specific entry in the ConfigMap. The following configuration shows how to mount the log-config ConfigMap onto a Pod called configmap-pod:

  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-pod
  spec:
    containers:
    - name: test
      image: busybox
      volumeMounts:
      - name: config-vol
        mountPath: /etc/config
    volumes:
    - name: config-vol
      configMap:
        name: log-config
        items:
        - key: log_level
          path: log_level

The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level. Note that this path is derived from the volume’s mountPath and the path keyed with log_level.

Note:

  • You must create a ConfigMap before you can use it.

  • A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.

  • Text data is exposed as files using the UTF-8 character encoding. For other character encodings, use binaryData.

downwardAPI

A downwardAPI volume makes downward API data available to applications. It mounts a directory and writes the requested data in plain text files.

Note: A container using the downward API as a subPath volume mount will not receive downward API updates.

See the downward API example for more details.
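
For illustration, a Pod could expose its own labels to the container as a file under /etc/podinfo using a downwardAPI volume; the names and paths below are placeholders:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-demo
    labels:
      app: demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        # Writes the Pod's labels to /etc/podinfo/labels.
        - path: "labels"
          fieldRef:
            fieldPath: metadata.labels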

emptyDir

An emptyDir volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, the emptyDir volume is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.

Note: A container crashing does not remove a Pod from a node. The data in an emptyDir volume is safe across container crashes.

Some uses for an emptyDir are:

  • scratch space, such as for a disk-based merge sort
  • checkpointing a long computation for recovery from crashes
  • holding files that a content-manager container fetches while a webserver container serves the data

Depending on your environment, emptyDir volumes are stored on whatever medium backs the node, such as disk, SSD, or network storage. However, if you set the emptyDir.medium field to "Memory", Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write count against your container’s memory limit.

Note: If the SizeMemoryBackedVolumes feature gate is enabled, you can specify a size for memory backed volumes. If no size is specified, memory backed volumes are sized to 50% of the memory on a Linux host.

emptyDir configuration example

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pd
  spec:
    containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
      - mountPath: /cache
        name: cache-volume
    volumes:
    - name: cache-volume
      emptyDir: {}
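
As a variant of the example above, the cache volume could instead be declared as a memory-backed emptyDir with a size limit (an illustrative fragment; only the volumes stanza changes):

  volumes:
  - name: cache-volume
    emptyDir:
      # Backed by tmpfs; contents count against the container's memory limit.
      medium: Memory
      sizeLimit: 500Mi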

fc (fibre channel)

An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your Volume configuration. If multiple WWNs are specified, targetWWNs expects that those WWNs are from multi-path connections.

Note: You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.

See the fibre channel example for more details.
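
A Pod using an fc volume might look like the sketch below; the WWNs and LUN are placeholders for values from your own SAN:

  apiVersion: v1
  kind: Pod
  metadata:
    name: fc-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: fc-volume
        mountPath: /data
    volumes:
    - name: fc-volume
      fc:
        # Target WWNs and LUN are placeholders.
        targetWWNs:
        - "500a0982991b8dc5"
        - "500a0982891b8dc5"
        lun: 2
        fsType: ext4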

flocker (deprecated)

Flocker is an open-source, clustered container data volume manager. Flocker provides management and orchestration of data volumes backed by a variety of storage backends.

A flocker volume allows a Flocker dataset to be mounted into a Pod. If the dataset does not already exist in Flocker, it needs to be created first with the Flocker CLI or by using the Flocker API. If the dataset already exists, it will be reattached by Flocker to the node that the pod is scheduled to. This means data can be shared between pods as required.

Note: You must have your own Flocker installation running before you can use it.

See the Flocker example for more details.

gcePersistentDisk

A gcePersistentDisk volume mounts a Google Compute Engine (GCE) persistent disk (PD) into your Pod. Unlike emptyDir, which is erased when a pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be shared between pods.

Note: You must create a PD using gcloud or the GCE API or UI before you can use it.

There are some restrictions when using a gcePersistentDisk:

  • the nodes on which Pods are running must be GCE VMs
  • those VMs need to be in the same GCE project and zone as the persistent disk

One feature of GCE persistent disk is concurrent read-only access to a persistent disk. A gcePersistentDisk volume permits multiple consumers to simultaneously mount a persistent disk as read-only. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.

Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless the PD is read-only or the replica count is 0 or 1.

Creating a GCE persistent disk

Before you can use a GCE persistent disk with a Pod, you need to create it.

  gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk

GCE persistent disk configuration example

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pd
  spec:
    containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
      - mountPath: /test-pd
        name: test-volume
    volumes:
    - name: test-volume
      # This GCE PD must already exist.
      gcePersistentDisk:
        pdName: my-data-disk
        fsType: ext4

Regional persistent disks

The Regional persistent disks feature allows the creation of persistent disks that are available in two zones within the same region. In order to use this feature, the volume must be provisioned as a PersistentVolume; referencing the volume directly from a pod is not supported.

Manually provisioning a Regional PD PersistentVolume

Dynamic provisioning is possible using a StorageClass for GCE PD. Before creating a PersistentVolume, you must create the persistent disk:

  gcloud compute disks create --size=500GB my-data-disk \
    --region us-central1 \
    --replica-zones us-central1-a,us-central1-b

Regional persistent disk configuration example

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: test-volume
  spec:
    capacity:
      storage: 400Gi
    accessModes:
    - ReadWriteOnce
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          # failure-domain.beta.kubernetes.io/zone should be used prior to 1.21
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-central1-a
            - us-central1-b

GCE CSI migration

FEATURE STATE: Kubernetes v1.17 [beta]

The CSIMigration feature for GCE PD, when enabled, redirects all plugin operations from the existing in-tree plugin to the pd.csi.storage.gke.io Container Storage Interface (CSI) Driver. In order to use this feature, the GCE PD CSI Driver must be installed on the cluster and the CSIMigration and CSIMigrationGCE beta features must be enabled.

GCE CSI migration complete

FEATURE STATE: Kubernetes v1.21 [alpha]

To disable the gcePersistentDisk storage plugin from being loaded by the controller manager and the kubelet, set the InTreePluginGCEUnregister flag to true.

gitRepo (deprecated)

Warning: The gitRepo volume type is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.

A gitRepo volume is an example of a volume plugin. This plugin mounts an empty directory and clones a git repository into this directory for your Pod to use.

Here is an example of a gitRepo volume:

  apiVersion: v1
  kind: Pod
  metadata:
    name: server
  spec:
    containers:
    - image: nginx
      name: nginx
      volumeMounts:
      - mountPath: /mypath
        name: git-volume
    volumes:
    - name: git-volume
      gitRepo:
        repository: "git@somewhere:me/my-git-repository.git"
        revision: "22f1d8406d464b0c0874075539c1f2e96c253775"

glusterfs

A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be shared between pods. GlusterFS can be mounted by multiple writers simultaneously.

Note: You must have your own GlusterFS installation running before you can use it.

See the GlusterFS example for more details.
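
An illustrative Pod using a glusterfs volume is sketched below; the Endpoints object (glusterfs-cluster) and volume name are placeholders for your own GlusterFS deployment:

  apiVersion: v1
  kind: Pod
  metadata:
    name: glusterfs-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: glusterfs-volume
        mountPath: /mnt/glusterfs
    volumes:
    - name: glusterfs-volume
      glusterfs:
        # Name of an Endpoints object listing the GlusterFS servers (placeholder).
        endpoints: glusterfs-cluster
        # Name of the GlusterFS volume to mount (placeholder).
        path: kube_vol
        readOnly: true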

hostPath

Warning:

HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.

If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective.

A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

For example, some uses for a hostPath are:

  • running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
  • running cAdvisor in a container; use a hostPath of /sys
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as

In addition to the required path property, you can optionally specify a type for a hostPath volume.

The supported values for field type are:

  • "" (empty string, the default): no checks are performed before mounting the hostPath volume. This is for backward compatibility.
  • DirectoryOrCreate: if nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, with the same group and ownership as the kubelet.
  • Directory: a directory must exist at the given path.
  • FileOrCreate: if nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, with the same group and ownership as the kubelet.
  • File: a file must exist at the given path.
  • Socket: a UNIX socket must exist at the given path.
  • CharDevice: a character device must exist at the given path.
  • BlockDevice: a block device must exist at the given path.

Watch out when using this type of volume, because:

  • HostPaths can expose privileged system credentials (such as for the Kubelet) or privileged APIs (such as container runtime socket), which can be used for container escape or to attack other parts of the cluster.
  • Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes
  • The files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a hostPath volume

hostPath configuration example

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pd
  spec:
    containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
      - mountPath: /test-pd
        name: test-volume
    volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: Directory

Caution: The FileOrCreate mode does not create the parent directory of the file. If the parent directory of the mounted file does not exist, the pod fails to start. To ensure that this mode works, you can try to mount directories and files separately, as shown in the FileOrCreate configuration example below.

hostPath FileOrCreate configuration example

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-webserver
  spec:
    containers:
    - name: test-webserver
      image: k8s.gcr.io/test-webserver:latest
      volumeMounts:
      - mountPath: /var/local/aaa
        name: mydir
      - mountPath: /var/local/aaa/1.txt
        name: myfile
    volumes:
    - name: mydir
      hostPath:
        # Ensure the file directory is created.
        path: /var/local/aaa
        type: DirectoryOrCreate
    - name: myfile
      hostPath:
        path: /var/local/aaa/1.txt
        type: FileOrCreate

iscsi

An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an iscsi volume are preserved and the volume is merely unmounted. This means that an iscsi volume can be pre-populated with data, and that data can be shared between pods.

Note: You must have your own iSCSI server running with the volume created before you can use it.

A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.

See the iSCSI example for more details.
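
A Pod using an iscsi volume might look like the sketch below; the target portal, IQN, and LUN are placeholders for your own iSCSI server:

  apiVersion: v1
  kind: Pod
  metadata:
    name: iscsi-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: iscsi-volume
        mountPath: /data
    volumes:
    - name: iscsi-volume
      iscsi:
        # Target portal, IQN, and LUN are placeholders.
        targetPortal: 10.0.2.15:3260
        iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
        lun: 0
        fsType: ext4
        readOnly: true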

local

A local volume represents a mounted local storage device such as a disk, partition or directory.

Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported.

Compared to hostPath volumes, local volumes are used in a durable and portable manner without manually scheduling pods to nodes. The system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.

However, local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume becomes inaccessible by the pod. The pod using this volume is unable to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.

The following example shows a PersistentVolume using a local volume and nodeAffinity:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: example-pv
  spec:
    capacity:
      storage: 100Gi
    volumeMode: Filesystem
    accessModes:
    - ReadWriteOnce
    persistentVolumeReclaimPolicy: Delete
    storageClassName: local-storage
    local:
      path: /mnt/disks/ssd1
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - example-node

You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.

PersistentVolume volumeMode can be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.

When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer. For more details, see the local StorageClass example. Delaying volume binding ensures that the PersistentVolumeClaim binding decision will also be evaluated with any other node constraints the Pod may have, such as node resource requirements, node selectors, Pod affinity, and Pod anti-affinity.
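
A StorageClass with delayed binding, as recommended above, can be as small as the following sketch (the name local-storage matches the storageClassName used in the PersistentVolume example above):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: local-storage
  # Local volumes do not support dynamic provisioning, so no real provisioner is configured.
  provisioner: kubernetes.io/no-provisioner
  volumeBindingMode: WaitForFirstConsumer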

An external static provisioner can be run separately for improved management of the local volume lifecycle. Note that this provisioner does not support dynamic provisioning yet. For an example on how to run an external local provisioner, see the local volume provisioner user guide.

Note: The local PersistentVolume requires manual cleanup and deletion by the user if the external static provisioner is not used to manage the volume lifecycle.

nfs

An nfs volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be shared between pods. NFS can be mounted by multiple writers simultaneously.

Note: You must have your own NFS server running with the share exported before you can use it.

See the NFS example for more details.
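
A Pod using an nfs volume might look like the sketch below; the server address and exported path are placeholders for your own NFS server:

  apiVersion: v1
  kind: Pod
  metadata:
    name: nfs-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: nfs-volume
        mountPath: /mnt/nfs
    volumes:
    - name: nfs-volume
      nfs:
        # NFS server address and exported path (placeholders).
        server: nfs-server.example.com
        path: /exported/path
        readOnly: false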

persistentVolumeClaim

A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod. PersistentVolumeClaims are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.

See the information about PersistentVolumes for more details.
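
For illustration, a Pod that mounts storage through an existing claim named my-claim (a placeholder) looks like this:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pvc-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: my-volume
        mountPath: /data
    volumes:
    - name: my-volume
      persistentVolumeClaim:
        # The PersistentVolumeClaim must already exist in the same namespace.
        claimName: my-claim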

portworxVolume

A portworxVolume is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities, and aggregates capacity across multiple servers. Portworx runs in-guest in virtual machines or on bare metal Linux nodes.

A portworxVolume can be dynamically created through Kubernetes or it can also be pre-provisioned and referenced inside a Pod. Here is an example Pod referencing a pre-provisioned Portworx volume:

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-portworx-volume-pod
  spec:
    containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
      - mountPath: /mnt
        name: pxvol
    volumes:
    - name: pxvol
      # This Portworx volume must already exist.
      portworxVolume:
        volumeID: "pxvol"
        fsType: "<fs-type>"

Note: Make sure you have an existing PortworxVolume with name pxvol before using it in the Pod.

For more details, see the Portworx volume examples.

projected

A projected volume maps several existing volume sources into the same directory.

Currently, the following types of volume sources can be projected:

  • secret
  • downwardAPI
  • configMap
  • serviceAccountToken

All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.

Example configuration with a secret, a downwardAPI, and a configMap

  apiVersion: v1
  kind: Pod
  metadata:
    name: volume-test
  spec:
    containers:
    - name: container-test
      image: busybox
      volumeMounts:
      - name: all-in-one
        mountPath: "/projected-volume"
        readOnly: true
    volumes:
    - name: all-in-one
      projected:
        sources:
        - secret:
            name: mysecret
            items:
            - key: username
              path: my-group/my-username
        - downwardAPI:
            items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
            - path: "cpu_limit"
              resourceFieldRef:
                containerName: container-test
                resource: limits.cpu
        - configMap:
            name: myconfigmap
            items:
            - key: config
              path: my-group/my-config

Example configuration: secrets with a non-default permission mode set

  apiVersion: v1
  kind: Pod
  metadata:
    name: volume-test
  spec:
    containers:
    - name: container-test
      image: busybox
      volumeMounts:
      - name: all-in-one
        mountPath: "/projected-volume"
        readOnly: true
    volumes:
    - name: all-in-one
      projected:
        sources:
        - secret:
            name: mysecret
            items:
            - key: username
              path: my-group/my-username
        - secret:
            name: mysecret2
            items:
            - key: password
              path: my-group/my-password
              mode: 511

Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:

  • For secrets, the secretName field has been changed to name to be consistent with ConfigMap naming.
  • The defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection.

When the TokenRequestProjection feature is enabled, you can inject the token for the current service account into a Pod at a specified path. For example:

  apiVersion: v1
  kind: Pod
  metadata:
    name: sa-token-test
  spec:
    containers:
    - name: container-test
      image: busybox
      volumeMounts:
      - name: token-vol
        mountPath: "/service-account"
        readOnly: true
    volumes:
    - name: token-vol
      projected:
        sources:
        - serviceAccountToken:
            audience: api
            expirationSeconds: 3600
            path: token

The example Pod has a projected volume containing the injected service account token. This token can be used by a Pod’s containers to access the Kubernetes API server. The audience field contains the intended audience of the token. A recipient of the token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. This field is optional and it defaults to the identifier of the API server.

The expirationSeconds is the expected duration of validity of the service account token. It defaults to 1 hour and must be at least 10 minutes (600 seconds). An administrator can also limit its maximum value by specifying the --service-account-max-token-expiration option for the API server. The path field specifies a relative path to the mount point of the projected volume.

Note: A container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.

quobyte (deprecated)

A quobyte volume allows an existing Quobyte volume to be mounted into your Pod.

Note: You must have your own Quobyte setup and running with the volumes created before you can use it.

Quobyte supports the Container Storage Interface. CSI is the recommended plugin to use Quobyte volumes inside Kubernetes. Quobyte’s GitHub project has instructions for deploying Quobyte using CSI, along with examples.

rbd

An rbd volume allows a Rados Block Device (RBD) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a pod is removed, the contents of an rbd volume are preserved and the volume is merely unmounted. This means that an RBD volume can be pre-populated with data, and that data can be shared between pods.

Note: You must have a Ceph installation running before you can use RBD.

A feature of RBD is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, RBD volumes can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.

See the RBD example for more details.
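
An illustrative Pod using an rbd volume is sketched below; the monitor addresses, pool, image, and secret name are placeholders for your own Ceph cluster:

  apiVersion: v1
  kind: Pod
  metadata:
    name: rbd-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: rbd-volume
        mountPath: /mnt/rbd
    volumes:
    - name: rbd-volume
      rbd:
        # Ceph monitor addresses (placeholders).
        monitors:
        - 10.16.154.78:6789
        - 10.16.154.82:6789
        pool: kube
        image: foo
        user: admin
        # Reference to a Secret holding the Ceph keyring.
        secretRef:
          name: ceph-secret
        fsType: ext4
        readOnly: true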

secret

A secret volume is used to pass sensitive information, such as passwords, to Pods. You can store secrets in the Kubernetes API and mount them as files for use by pods without coupling to Kubernetes directly. secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.

Note: You must create a Secret in the Kubernetes API before you can use it.

Note: A container using a Secret as a subPath volume mount will not receive Secret updates.

For more details, see Configuring Secrets.
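
A minimal Pod mounting a Secret named mysecret (a placeholder) as files under /etc/secret-volume looks like this:

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        # The Secret must already exist in the same namespace.
        secretName: mysecret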

storageOS (deprecated)

A storageos volume allows an existing StorageOS volume to be mounted into your Pod.

StorageOS runs as a container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster. Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.

At its core, StorageOS provides block storage to containers, accessible from a file system.

The StorageOS Container requires 64-bit Linux and has no additional dependencies. A free developer license is available.

Caution: You must run the StorageOS container on each node that wants to access StorageOS volumes or that will contribute storage capacity to the pool. For installation instructions, consult the StorageOS documentation.

The following example is a Pod configuration with StorageOS:

  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      name: redis
      role: master
    name: test-storageos-redis
  spec:
    containers:
    - name: master
      image: kubernetes/redis:v1
      env:
      - name: MASTER
        value: "true"
      ports:
      - containerPort: 6379
      volumeMounts:
      - mountPath: /redis-master-data
        name: redis-data
    volumes:
    - name: redis-data
      storageos:
        # The `redis-vol01` volume must already exist within StorageOS in the `default` namespace.
        volumeName: redis-vol01
        fsType: ext4

For more information about StorageOS, dynamic provisioning, and PersistentVolumeClaims, see the StorageOS examples.

vsphereVolume

Note: You must configure the Kubernetes vSphere Cloud Provider. For cloudprovider configuration, refer to the vSphere Getting Started guide.

A vsphereVolume is used to mount a vSphere VMDK volume into your Pod. The contents of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastores.

Note: You must create a vSphere VMDK volume using one of the following methods before using it with a Pod.

Creating a VMDK volume

Choose one of the following methods to create a VMDK.

First ssh into ESX, then use the following command to create a VMDK:

  vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk

Use the following command to create a VMDK:

  vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk

vSphere VMDK configuration example

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-vmdk
  spec:
    containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
      - mountPath: /test-vmdk
        name: test-volume
    volumes:
    - name: test-volume
      # This VMDK volume must already exist.
      vsphereVolume:
        volumePath: "[DatastoreName] volumes/myDisk"
        fsType: ext4

For more information, see the vSphere volume examples.

vSphere CSI migration

FEATURE STATE: Kubernetes v1.19 [beta]

The CSIMigration feature for vsphereVolume, when enabled, redirects all plugin operations from the existing in-tree plugin to the csi.vsphere.vmware.com CSI driver. In order to use this feature, the vSphere CSI driver must be installed on the cluster and the CSIMigration and CSIMigrationvSphere feature gates must be enabled.

This also requires the vSphere vCenter/ESXi version to be at least 7.0u1 and the minimum HW version to be VM version 15.

Note:

The following StorageClass parameters from the built-in vsphereVolume plugin are not supported by the vSphere CSI driver:

  • diskformat
  • hostfailurestotolerate
  • forceprovisioning
  • cachereservation
  • diskstripes
  • objectspacereservation
  • iopslimit

Existing volumes created using these parameters will be migrated to the vSphere CSI driver, but new volumes created by the vSphere CSI driver will not honor these parameters.

vSphere CSI migration complete

FEATURE STATE: Kubernetes v1.19 [beta]

To prevent the vsphereVolume plugin from being loaded by the controller manager and the kubelet, set the InTreePluginvSphereUnregister feature flag to true. You must install a csi.vsphere.vmware.com CSI driver on all worker nodes.

Using subPath

Sometimes, it is useful to share one volume for multiple uses in a single pod. The volumeMounts.subPath property specifies a sub-path inside the referenced volume instead of its root.

The following example shows how to configure a Pod with a LAMP stack (Linux Apache MySQL PHP) using a single, shared volume. This sample subPath configuration is not recommended for production use.

The PHP application’s code and assets map to the volume’s html folder and the MySQL database is stored in the volume’s mysql folder. For example:

  apiVersion: v1
  kind: Pod
  metadata:
    name: my-lamp-site
  spec:
    containers:
    - name: mysql
      image: mysql
      env:
      - name: MYSQL_ROOT_PASSWORD
        value: "rootpasswd"
      volumeMounts:
      - mountPath: /var/lib/mysql
        name: site-data
        subPath: mysql
    - name: php
      image: php:7.0-apache
      volumeMounts:
      - mountPath: /var/www/html
        name: site-data
        subPath: html
    volumes:
    - name: site-data
      persistentVolumeClaim:
        claimName: my-lamp-site-data

Using subPath with expanded environment variables

FEATURE STATE: Kubernetes v1.17 [stable]

Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.

In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods. The hostPath volume takes the Pod name from the downwardAPI. The host directory /var/log/pods/pod1 is mounted at /logs in the container.

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
  spec:
    containers:
    - name: container1
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      image: busybox
      command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
      volumeMounts:
      - name: workdir1
        mountPath: /logs
        subPathExpr: $(POD_NAME)
    restartPolicy: Never
    volumes:
    - name: workdir1
      hostPath:
        path: /var/log/pods

Resources

The storage media (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between containers or between pods.

To learn about requesting space using a resource specification, see how to manage resources.

Out-of-tree volume plugins

The out-of-tree volume plugins include Container Storage Interface (CSI) and FlexVolume. These plugins enable storage vendors to create custom storage plugins without adding their plugin source code to the Kubernetes repository.

Previously, all volume plugins were “in-tree”. The “in-tree” plugins were built, linked, compiled, and shipped with the core Kubernetes binaries. This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository.

Both CSI and FlexVolume allow volume plugins to be developed independent of the Kubernetes code base, and deployed (installed) on Kubernetes clusters as extensions.

For storage vendors looking to create an out-of-tree volume plugin, please refer to the volume plugin FAQ.

csi

Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.

Please read the CSI design proposal for more information.

Note: Support for CSI spec versions 0.2 and 0.3 is deprecated in Kubernetes v1.13 and will be removed in a future release.

Note: CSI drivers may not be compatible across all Kubernetes releases. Please check the specific CSI driver’s documentation for supported deployment steps for each Kubernetes release and a compatibility matrix.

Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach or mount the volumes exposed by the CSI driver.

A csi volume can be used in a Pod in three different ways:

  • through a reference to a PersistentVolumeClaim
  • with a CSI ephemeral volume, if the driver supports it
  • with a generic ephemeral volume

The following fields are available to storage administrators to configure a CSI persistent volume:

  • driver: A string value that specifies the name of the volume driver to use. This value must correspond to the value returned in the GetPluginInfoResponse by the CSI driver as defined in the CSI spec. It is used by Kubernetes to identify which CSI driver to call out to, and by CSI driver components to identify which PV objects belong to the CSI driver.
  • volumeHandle: A string value that uniquely identifies the volume. This value must correspond to the value returned in the volume.id field of the CreateVolumeResponse by the CSI driver as defined in the CSI spec. The value is passed as volume_id on all calls to the CSI volume driver when referencing the volume.
  • readOnly: An optional boolean value indicating whether the volume is to be “ControllerPublished” (attached) as read only. Default is false. This value is passed to the CSI driver via the readonly field in the ControllerPublishVolumeRequest.
  • fsType: If the PV’s VolumeMode is Filesystem then this field may be used to specify the filesystem that should be used to mount the volume. If the volume has not been formatted and formatting is supported, this value will be used to format the volume. This value is passed to the CSI driver via the VolumeCapability field of ControllerPublishVolumeRequest, NodeStageVolumeRequest, and NodePublishVolumeRequest.
  • volumeAttributes: A map of string to string that specifies static properties of a volume. This map must correspond to the map returned in the volume.attributes field of the CreateVolumeResponse by the CSI driver as defined in the CSI spec. The map is passed to the CSI driver via the volume_context field in the ControllerPublishVolumeRequest, NodeStageVolumeRequest, and NodePublishVolumeRequest.
  • controllerPublishSecretRef: A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the Secret contains more than one secret, all secrets are passed.
  • nodeStageSecretRef: A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume call. This field is optional, and may be empty if no secret is required. If the Secret contains more than one secret, all secrets are passed.
  • nodePublishSecretRef: A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.
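
Tying these fields together, a PersistentVolume backed by a CSI driver might look like the following sketch; the driver name, volume handle, and attributes are placeholders:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: csi-pv-demo
  spec:
    capacity:
      storage: 50Gi
    accessModes:
    - ReadWriteOnce
    csi:
      # Placeholder CSI driver name; must match the deployed driver.
      driver: csi.example.com
      # Placeholder volume handle returned by the driver's CreateVolume call.
      volumeHandle: existing-volume-id
      fsType: ext4
      volumeAttributes:
        # Driver-specific, static volume properties (placeholders).
        foo: bar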

CSI raw block volume support

FEATURE STATE: Kubernetes v1.18 [stable]

Vendors with external CSI drivers can implement raw block volume support in Kubernetes workloads.

You can set up your PersistentVolume/PersistentVolumeClaim with raw block volume support as usual, without any CSI specific changes.

CSI ephemeral volumes

FEATURE STATE: Kubernetes v1.16 [beta]

You can directly configure CSI volumes within the Pod specification. Volumes specified in this way are ephemeral and do not persist across pod restarts. See Ephemeral Volumes for more information.
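
As an illustrative sketch (the driver name and attributes are placeholders, and the driver must support ephemeral inline volumes), a CSI volume declared directly in a Pod spec looks like this:

  apiVersion: v1
  kind: Pod
  metadata:
    name: csi-inline-demo
  spec:
    containers:
    - name: app
      image: busybox
      volumeMounts:
      - name: inline-volume
        mountPath: /data
    volumes:
    - name: inline-volume
      csi:
        # Placeholder driver name; the driver must support CSI ephemeral volumes.
        driver: inline.csi.example.com
        volumeAttributes:
          # Driver-specific parameters (placeholders).
          foo: bar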

For more information on how to develop a CSI driver, refer to the kubernetes-csi documentation

Migrating to CSI drivers from in-tree plugins

FEATURE STATE: Kubernetes v1.17 [beta]

The CSIMigration feature, when enabled, directs operations against existing in-tree plugins to corresponding CSI plugins (which are expected to be installed and configured). As a result, operators do not have to make any configuration changes to existing Storage Classes, PersistentVolumes or PersistentVolumeClaims (referring to in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin.

The operations and features that are supported include: provisioning/delete, attach/detach, mount/unmount and resizing of volumes.

In-tree plugins that support CSIMigration and have a corresponding CSI driver implemented are listed in Types of Volumes.

flexVolume

FlexVolume is an out-of-tree plugin interface that has existed in Kubernetes since version 1.2 (before CSI). It uses an exec-based model to interface with drivers. The FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node and in some cases the control plane nodes as well.

Pods interact with FlexVolume drivers through the flexvolume in-tree volume plugin. For more details, see the FlexVolume examples.

Mount propagation

Mount propagation allows for sharing volumes mounted by a container to other containers in the same pod, or even to other pods on the same node.

Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts. Its values are:

  • None - This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the container will be visible on the host. This is the default mode.

    This mode is equal to private mount propagation as described in the Linux kernel documentation

  • HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.

    In other words, if the host mounts anything inside the volume mount, the container will see it mounted there.

    Similarly, if any Pod with Bidirectional mount propagation to the same volume mounts anything there, the container with HostToContainer mount propagation will see it.

    This mode is equal to rslave mount propagation as described in the Linux kernel documentation

  • Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the container will be propagated back to the host and to all containers of all pods that use the same volume.

    A typical use case for this mode is a Pod with a FlexVolume or CSI driver or a Pod that needs to mount something on the host using a hostPath volume.

    This mode is equal to rshared mount propagation as described in the Linux kernel documentation

    Warning: Bidirectional mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by containers in pods must be destroyed (unmounted) by the containers on termination.
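
For illustration, the following volumeMounts fragment (names and paths are placeholders) shows how a container would opt in to HostToContainer propagation for a hostPath volume:

  volumeMounts:
  - name: host-mnt
    mountPath: /mnt/host
    # Mounts made by the host under this volume become visible in the container.
    mountPropagation: HostToContainer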

Configuration

Before mount propagation can work properly on some deployments (CoreOS, RedHat/CentOS, Ubuntu), mount share must be configured correctly in Docker as shown below.

Edit your Docker’s systemd service file. Set MountFlags as follows:

  MountFlags=shared

Or, remove MountFlags=slave if present. Then restart the Docker daemon:

  sudo systemctl daemon-reload
  sudo systemctl restart docker

What’s next

Follow an example of deploying WordPress and MySQL with Persistent Volumes.