# Restoring Volumes for Kubernetes StatefulSets
Longhorn supports restoring backups, and one use case for this feature is restoring data for a Kubernetes StatefulSet, which requires restoring a volume for each replica that was backed up.

To restore, follow the instructions below. The example uses a StatefulSet with one volume attached to each Pod and two replicas.
1. Connect to the Longhorn UI page in your web browser. Under the **Backup** tab, select the name of the StatefulSet volume. Click the dropdown menu of the volume entry and restore it. Name the volume something that can easily be referenced later for the Persistent Volumes.

   - Repeat this step for each volume you need restored.
   - For example, if restoring a StatefulSet with two replicas that had volumes named `pvc-01a` and `pvc-02b`, the restore could look like this:

     | Backup Name | Restored Volume     |
     |-------------|---------------------|
     | `pvc-01a`   | `statefulset-vol-0` |
     | `pvc-02b`   | `statefulset-vol-1` |

2. In Kubernetes, create a Persistent Volume for each Longhorn volume that was created. Name the volumes something that can easily be referenced later for the Persistent Volume Claims.

   The `storage` capacity, `numberOfReplicas`, `storageClassName`, and `volumeHandle` must be replaced below. In the example, we're referencing `statefulset-vol-0` and `statefulset-vol-1` in Longhorn and using `longhorn` as our `storageClassName`.

   ```yaml
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: statefulset-vol-0
   spec:
     capacity:
       storage: <size> # must match size of Longhorn volume
     volumeMode: Filesystem
     accessModes:
       - ReadWriteOnce
     persistentVolumeReclaimPolicy: Delete
     csi:
       driver: driver.longhorn.io # driver must match this
       fsType: ext4
       volumeAttributes:
         numberOfReplicas: <replicas> # must match Longhorn volume value
         staleReplicaTimeout: '30' # in minutes
       volumeHandle: statefulset-vol-0 # must match volume name from Longhorn
     storageClassName: longhorn # must be same name that we will use later
   ---
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: statefulset-vol-1
   spec:
     capacity:
       storage: <size> # must match size of Longhorn volume
     volumeMode: Filesystem
     accessModes:
       - ReadWriteOnce
     persistentVolumeReclaimPolicy: Delete
     csi:
       driver: driver.longhorn.io # driver must match this
       fsType: ext4
       volumeAttributes:
         numberOfReplicas: <replicas> # must match Longhorn volume value
         staleReplicaTimeout: '30'
       volumeHandle: statefulset-vol-1 # must match volume name from Longhorn
     storageClassName: longhorn # must be same name that we will use later
   ```
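Filling in these placeholders by hand for every replica is repetitive and error-prone. As a minimal sketch (not part of Longhorn itself), the manifests can be generated with a small shell helper; `SIZE` and `REPLICAS` here are assumed example values that must be replaced with the actual values of your Longhorn volumes:

```shell
# Illustrative helper: render one PersistentVolume manifest per restored
# Longhorn volume, so size and replica count stay consistent across files.
SIZE="2Gi"      # assumption -- must match the size of the Longhorn volume
REPLICAS="2"    # assumption -- must match the Longhorn volume's replica count

render_pv() {
  # $1 = restored Longhorn volume name (also used as the PV name and volumeHandle)
  cat <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: $1
spec:
  capacity:
    storage: ${SIZE}
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: "${REPLICAS}"
      staleReplicaTimeout: "30"
    volumeHandle: $1
  storageClassName: longhorn
EOF
}

# Write one manifest file per restored volume.
for vol in statefulset-vol-0 statefulset-vol-1; do
  render_pv "$vol" > "pv-${vol}.yaml"
done
```

The generated files can then be applied with `kubectl apply -f`.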
3. In the namespace the StatefulSet will be deployed in, create Persistent Volume Claims for each Persistent Volume. The name of the Persistent Volume Claim must follow this naming scheme:

   ```
   <name of Volume Claim Template>-<name of StatefulSet>-<index>
   ```

   StatefulSet Pods are zero-indexed. In this example, the name of the Volume Claim Template is `data`, the name of the StatefulSet is `webapp`, and there are two replicas, which are indexes `0` and `1`.

   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: data-webapp-0
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 2Gi # must match size from earlier
     storageClassName: longhorn # must match name from earlier
     volumeName: statefulset-vol-0 # must reference Persistent Volume
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: data-webapp-1
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 2Gi # must match size from earlier
     storageClassName: longhorn # must match name from earlier
     volumeName: statefulset-vol-1 # must reference Persistent Volume
   ```
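The naming scheme can be sanity-checked with a short shell loop; `data`, `webapp`, and the replica count of `2` are the values from this example:

```shell
# Derive the PVC names a StatefulSet expects from its volumeClaimTemplate
# name, StatefulSet name, and replica count (Pods are zero-indexed).
claim_template="data"   # .spec.volumeClaimTemplates[0].metadata.name
statefulset="webapp"    # .metadata.name of the StatefulSet
replicas=2              # .spec.replicas

i=0
while [ "$i" -lt "$replicas" ]; do
  echo "${claim_template}-${statefulset}-${i}"
  i=$((i + 1))
done
# prints:
# data-webapp-0
# data-webapp-1
```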
4. Create the StatefulSet:

   ```yaml
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: webapp # match this with the PersistentVolumeClaim naming scheme
   spec:
     selector:
       matchLabels:
         app: nginx # has to match .spec.template.metadata.labels
     serviceName: "nginx"
     replicas: 2 # by default is 1
     template:
       metadata:
         labels:
           app: nginx # has to match .spec.selector.matchLabels
       spec:
         terminationGracePeriodSeconds: 10
         containers:
           - name: nginx
             image: k8s.gcr.io/nginx-slim:0.8
             ports:
               - containerPort: 80
                 name: web
             volumeMounts:
               - name: data
                 mountPath: /usr/share/nginx/html
     volumeClaimTemplates:
       - metadata:
           name: data # match this with the PersistentVolumeClaim naming scheme
         spec:
           accessModes: [ "ReadWriteOnce" ]
           storageClassName: longhorn # must match name from earlier
           resources:
             requests:
               storage: 2Gi # must match size from earlier
   ```
**Result:** The restored data should now be accessible from inside the StatefulSet Pods.