Setting a Backup Target

A backup target is an endpoint used to access a backup store in Longhorn. A backup store is an NFS server, SMB/CIFS server, Azure Blob Storage server, or S3 compatible server that stores the backups of Longhorn volumes. The backup target can be set at Settings/General/BackupTarget.

Saving to an object store such as S3 is preferable because it generally offers better reliability. Another advantage is that you do not need to mount and unmount the target, which can complicate failover and upgrades.

For more information about how the backupstore works in Longhorn, see the concepts section.

If you don’t have access to AWS S3 or want to give the backupstore a try first, we’ve also provided a way to set up a local S3 testing backupstore using MinIO.

Longhorn also supports setting up recurring snapshot/backup jobs for volumes, via the Longhorn UI or a Kubernetes StorageClass. See here for details.

This page covers the following topics:

  • Set up AWS S3 Backupstore
  • Set up GCP Cloud Storage Backupstore
  • Set up a Local Testing Backupstore
  • Using a self-signed SSL certificate for S3 communication
  • Enable virtual-hosted-style access for S3 compatible Backupstore
  • Set up NFS Backupstore
  • Set up SMB/CIFS Backupstore
  • Set up Azure Blob Storage Backupstore

Set up AWS S3 Backupstore

  1. Create a new bucket in AWS S3.

  2. Set permissions for Longhorn. There are two options for setting up the credentials: you can either create a Kubernetes secret with the credentials of an AWS IAM user, or use a third-party application to manage temporary AWS IAM permissions for a Pod via annotations rather than operating with AWS credentials.

  • Option 1: Create a Kubernetes secret with IAM user credentials

    1. Follow the guide to create a new AWS IAM user, with the following permissions set. Edit the Resource section to use your S3 bucket name:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "GrantLonghornBackupstoreAccess0",
            "Effect": "Allow",
            "Action": [
              "s3:PutObject",
              "s3:GetObject",
              "s3:ListBucket",
              "s3:DeleteObject"
            ],
            "Resource": [
              "arn:aws:s3:::<your-bucket-name>",
              "arn:aws:s3:::<your-bucket-name>/*"
            ]
          }
        ]
      }
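
      If you manage IAM from the command line, the user, inline policy, and access key from this step can also be created with the AWS CLI. A minimal sketch, assuming the policy above is saved as policy.json and using a placeholder user name:

        aws iam create-user --user-name longhorn-backup
        aws iam put-user-policy --user-name longhorn-backup \
          --policy-name GrantLonghornBackupstoreAccess0 \
          --policy-document file://policy.json
        # Prints the AccessKeyId/SecretAccessKey pair used in the next step.
        aws iam create-access-key --user-name longhorn-backup
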
    2. Create a Kubernetes secret with a name such as aws-secret in the namespace where Longhorn is installed (longhorn-system by default). The secret must be created in that namespace for Longhorn to access it:

      kubectl create secret generic <aws-secret> \
        --from-literal=AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
        --from-literal=AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key> \
        -n longhorn-system
  • Option 2: Set permissions with temporary IAM credentials using AWS STS AssumeRole (kube2iam or kiam)

    kube2iam and kiam are Kubernetes applications that manage AWS IAM permissions for Pods via annotations rather than operating on AWS credentials. Follow the instructions in the GitHub repository for kube2iam or kiam to install it into the Kubernetes cluster.
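
    For reference, both tools select the role to assume from a Pod annotation. A minimal sketch of that annotation (the role name is a placeholder):

      # Annotation consumed by kube2iam (kiam uses the same key).
      metadata:
        annotations:
          iam.amazonaws.com/role: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<LONGHORN_BACKUP_ROLE>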

    1. Follow the guide to create a new AWS IAM role for the AWS S3 service, with the following permissions set:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "GrantLonghornBackupstoreAccess0",
            "Effect": "Allow",
            "Action": [
              "s3:PutObject",
              "s3:GetObject",
              "s3:ListBucket",
              "s3:DeleteObject"
            ],
            "Resource": [
              "arn:aws:s3:::<your-bucket-name>",
              "arn:aws:s3:::<your-bucket-name>/*"
            ]
          }
        ]
      }
    2. Edit the AWS IAM role with the following trust relationship:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          },
          {
            "Effect": "Allow",
            "Principal": {
              "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AWS_EC2_NODE_INSTANCE_ROLE>"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
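
      With the AWS CLI, the role can be created and wired up roughly as follows. A sketch, assuming the permissions and trust documents above are saved as permissions.json and trust.json, with a placeholder role name:

        aws iam create-role --role-name longhorn-backup-role \
          --assume-role-policy-document file://trust.json
        aws iam put-role-policy --role-name longhorn-backup-role \
          --policy-name GrantLonghornBackupstoreAccess0 \
          --policy-document file://permissions.json
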
    3. Create a Kubernetes secret with a name such as aws-secret in the namespace where Longhorn is installed (longhorn-system by default). The secret must be created in that namespace for Longhorn to access it:

      kubectl create secret generic <aws-secret> \
        --from-literal=AWS_IAM_ROLE_ARN=<your-aws-iam-role-arn> \
        -n longhorn-system
  3. Go to the Longhorn UI. In the top navigation bar, click Settings. In the Backup section, set Backup Target to:

    s3://<your-bucket-name>@<your-aws-region>/

    Make sure that the URL ends with /, otherwise you will get an error. A subdirectory (prefix) may be used:

    s3://<your-bucket-name>@<your-aws-region>/mypath/

    Also make sure you’ve set <your-aws-region> in the URL.

    For AWS, you can find the region codes here.

    For Google Cloud Storage, you can find the region codes here.

  4. In the Backup section, set Backup Target Credential Secret to:

    aws-secret

    This is the name of the secret containing the AWS credentials or the AWS IAM role.
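
    Alternatively, both settings can be applied without the UI through Longhorn’s Setting resources. A sketch, assuming the v1.6.x setting names backup-target and backup-target-credential-secret:

      kubectl -n longhorn-system patch settings.longhorn.io backup-target \
        --type merge -p '{"value":"s3://<your-bucket-name>@<your-aws-region>/"}'
      kubectl -n longhorn-system patch settings.longhorn.io backup-target-credential-secret \
        --type merge -p '{"value":"aws-secret"}'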

Result: Longhorn can store backups in S3. To create a backup, see this section.

Note: If you operate Longhorn behind a proxy and you want to use AWS S3 as the backupstore, you must provide Longhorn with information about your proxy in the aws-secret, as shown below:

  kubectl create secret generic <aws-secret> \
    --from-literal=AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
    --from-literal=AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key> \
    --from-literal=HTTP_PROXY=<your-proxy-ip-and-port> \
    --from-literal=HTTPS_PROXY=<your-proxy-ip-and-port> \
    --from-literal=NO_PROXY=<excluded-ip-list> \
    -n longhorn-system

Make sure NO_PROXY contains the network addresses, network address ranges, and domains that should be excluded from using the proxy. For Longhorn to operate, the minimum required values for NO_PROXY are:

  • localhost
  • 127.0.0.1
  • 0.0.0.0
  • 10.0.0.0/8 (K8s components’ IPs)
  • 192.168.0.0/16 (internal IPs in the cluster)

Set up GCP Cloud Storage Backupstore

  1. Create a new bucket in Google Cloud Storage.

  2. Create a GCP serviceaccount in IAM & Admin.

  3. Give the GCP serviceaccount permissions to read, write, and delete objects in the bucket.

    The serviceaccount will require the roles/storage.objectAdmin role to read, write, and delete objects in the bucket.

    For a reference of the GCP IAM roles available for granting access to a serviceaccount, see https://cloud.google.com/storage/docs/access-control/iam-roles.

Note: Consider creating an IAM condition to reduce how many buckets this serviceaccount has object admin access to.
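
If you prefer the gcloud CLI to the console, the serviceaccount, role binding, and HMAC key from the surrounding steps can be created roughly like this. A sketch; the serviceaccount name and the PROJECT_ID/BUCKET_NAME variables are placeholders:

  # Create the serviceaccount and grant it object admin on the bucket (steps 2-3).
  gcloud iam service-accounts create longhorn-backup
  gcloud storage buckets add-iam-policy-binding gs://${BUCKET_NAME} \
    --member="serviceAccount:longhorn-backup@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
  # Create the HMAC Access Key/Secret pair (steps 4-8 below).
  gsutil hmac create longhorn-backup@${PROJECT_ID}.iam.gserviceaccount.com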

  4. Navigate to your buckets in Cloud Storage and select your newly created bucket.

  5. Go to the Cloud Storage settings menu and navigate to the Interoperability tab.

  6. Scroll down to Service account HMAC and press + CREATE A KEY FOR A SERVICE ACCOUNT.

  7. Select the GCP serviceaccount you created earlier and press CREATE KEY.

  8. Save the Access Key and Secret.

    Also note down the configured Storage URI under the Request Endpoint while you’re in the Interoperability tab.

  • The Access Key will be mapped to the AWS_ACCESS_KEY_ID field in the Kubernetes secret we create later.
  • The Secret will be mapped to the AWS_SECRET_ACCESS_KEY field in the Kubernetes secret we create later.
  • The Storage URI will be mapped to the AWS_ENDPOINTS field in the Kubernetes secret we create later.
  9. Go to the Longhorn UI. In the top navigation bar, click Settings. In the Backup section, set Backup Target to:

    s3://${BUCKET_NAME}@us/

And set Backup Target Credential Secret to:

    longhorn-gcp-backups

  10. Create a Kubernetes secret named longhorn-gcp-backups in the longhorn-system namespace with the following content:
    apiVersion: v1
    kind: Secret
    metadata:
      name: longhorn-gcp-backups
      namespace: longhorn-system
    type: Opaque
    stringData:
      AWS_ACCESS_KEY_ID: GOOG1EBYHGDE4WIGH2RDYNZWWWDZ5GMQDRMNSAOTVHRAILWAMIZ2O4URPGOOQ
      AWS_ENDPOINTS: https://storage.googleapis.com
      AWS_SECRET_ACCESS_KEY: BKoKpIW021s7vPtraGxDOmsJbkV/0xOVBG73m+8f

Note: The secret can be named whatever you like, as long as the name matches what’s configured in Longhorn’s settings.

Once the secret is created and Longhorn’s settings are saved, navigate to the Backup tab in Longhorn. If there are any issues, they should pop up as toast notifications.

If you don’t get any error messages, try creating a backup and confirm the content is pushed out to your new bucket.
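
A quick way to spot-check from the CLI (a sketch; Longhorn typically writes its backup data under a backupstore/ prefix in the bucket):

  gcloud storage ls -r gs://${BUCKET_NAME}/backupstore/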

Set up a Local Testing Backupstore

Longhorn provides sample backupstore server setups for testing purposes. You can find samples for AWS S3 (MinIO), Azure, CIFS and NFS in the longhorn/deploy/backupstores folder.

  1. Set up a MinIO S3 server for the backupstore in the longhorn-system namespace.

    kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.1/deploy/backupstores/minio-backupstore.yaml
  2. Go to the Longhorn UI. In the top navigation bar, click Settings. In the Backup section, set Backup Target to

    s3://backupbucket@us-east-1/

    And set Backup Target Credential Secret to:

    minio-secret

    The minio-secret yaml looks like this:

    apiVersion: v1
    kind: Secret
    metadata:
      name: minio-secret
      namespace: longhorn-system
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
      AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
      AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA== # https://minio-service.default:9000
      AWS_CERT: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMRENDQWhTZ0F3SUJBZ0lSQU1kbzQycGhUZXlrMTcvYkxyWjVZRHN3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQ2hNUFRHOXVaMmh2Y200Z0xTQlVaWE4wTUNBWERUSXdNRFF5TnpJek1EQXhNVm9ZRHpJeApNakF3TkRBek1qTXdNREV4V2pBYU1SZ3dGZ1lEVlFRS0V3OU1iMjVuYUc5eWJpQXRJRlJsYzNRd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEWHpVdXJnUFpEZ3pUM0RZdWFlYmdld3Fvd2RlQUQKODRWWWF6ZlN1USs3K21Oa2lpUVBvelVVMmZvUWFGL1BxekJiUW1lZ29hT3l5NVhqM1VFeG1GcmV0eDBaRjVOVgpKTi85ZWFJNWRXRk9teHhpMElPUGI2T0RpbE1qcXVEbUVPSXljdjRTaCsvSWo5Zk1nS0tXUDdJZGxDNUJPeThkCncwOVdkckxxaE9WY3BKamNxYjN6K3hISHd5Q05YeGhoRm9tb2xQVnpJbnlUUEJTZkRuSDBuS0lHUXl2bGhCMGsKVHBHSzYxc2prZnFTK3hpNTlJeHVrbHZIRXNQcjFXblRzYU9oaVh6N3lQSlorcTNBMWZoVzBVa1JaRFlnWnNFbQovZ05KM3JwOFhZdURna2kzZ0UrOElXQWRBWHExeWhqRDdSSkI4VFNJYTV0SGpKUUtqZ0NlSG5HekFnTUJBQUdqCmF6QnBNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTURFR0ExVWRFUVFxTUNpQ0NXeHZZMkZzYUc5emRJSVZiV2x1YVc4dGMyVnlkbWxqWlM1awpaV1poZFd4MGh3Ui9BQUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbUZMMzlNSHVZMzFhMTFEajRwMjVjCnFQRUM0RHZJUWozTk9kU0dWMmQrZjZzZ3pGejFXTDhWcnF2QjFCMVM2cjRKYjJQRXVJQkQ4NFlwVXJIT1JNU2MKd3ViTEppSEtEa0Jmb2U5QWI1cC9VakpyS0tuajM0RGx2c1cvR3AwWTZYc1BWaVdpVWorb1JLbUdWSTI0Q0JIdgpnK0JtVzNDeU5RR1RLajk0eE02czNBV2xHRW95YXFXUGU1eHllVWUzZjFBWkY5N3RDaklKUmVWbENtaENGK0JtCmFUY1RSUWN3cVdvQ3AwYmJZcHlERFlwUmxxOEdQbElFOW8yWjZBc05mTHJVcGFtZ3FYMmtYa2gxa3lzSlEralAKelFadHJSMG1tdHVyM0RuRW0yYmk0TktIQVFIcFc5TXUxNkdRakUxTmJYcVF0VEI4OGpLNzZjdEg5MzRDYWw2VgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t

    For more information on creating a secret, see the Kubernetes documentation. The secret must be created in the longhorn-system namespace for Longhorn to access it.

    Note: Make sure to use echo -n when generating the base64 encoding; otherwise a newline will be appended to the string, which will cause an error when accessing S3.
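
    For example, the AWS_ACCESS_KEY_ID value in the secret above was generated like this:

      # Correct: -n suppresses the trailing newline.
      echo -n "longhorn-test-access-key" | base64   # bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5
      # Wrong: without -n the encoded value contains a newline and S3 access fails.
      echo "longhorn-test-access-key" | base64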

  3. Click the Backup tab in the UI. It should report an empty list without any errors.

Result: Longhorn can store backups in S3. To create a backup, see this section.

Using a self-signed SSL certificate for S3 communication

If you want to use a self-signed SSL certificate, you can specify AWS_CERT in the Kubernetes secret you provided to Longhorn. See the example in Set up a Local Testing Backupstore. Note that the certificate must be in PEM format, and it must either be its own CA or include a certificate chain that contains the CA certificate. To include multiple certificates, simply concatenate the PEM files.
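
A minimal sketch of building such a chain and storing it in the secret (the file names are examples; --from-file stores the raw PEM, which Kubernetes base64-encodes for you):

  # Concatenate the chain: server certificate first, CA certificate last.
  cat server.crt intermediate.crt ca.crt > chain.pem
  kubectl create secret generic <aws-secret> \
    --from-literal=AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
    --from-literal=AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key> \
    --from-file=AWS_CERT=chain.pem \
    -n longhorn-system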

Enable virtual-hosted-style access for S3 compatible Backupstore

You may need to enable this new addressing approach (illustrated below) for your S3 compatible backupstore when:

  1. you want to switch to the new access style now so that you won’t need to worry about the Amazon S3 Path Deprecation Plan;
  2. the backupstore you are using supports virtual-hosted-style access only, e.g., Alibaba Cloud (Aliyun) OSS;
  3. you have configured the MINIO_DOMAIN environment variable to enable virtual-hosted-style requests for the MinIO server;
  4. the error ...... error: AWS Error: SecondLevelDomainForbidden Please use virtual hosted style to access. ..... is triggered.
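
For reference, the two styles differ only in where the bucket name appears in the request URL. A generic illustration using AWS endpoints:

  # Path-style request (being phased out by Amazon S3):
  https://s3.<region>.amazonaws.com/<bucket>/<key>
  # Virtual-hosted-style request:
  https://<bucket>.s3.<region>.amazonaws.com/<key>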

The way to enable virtual-hosted-style access

  1. Add a new field VIRTUAL_HOSTED_STYLE with value true to your backup target secret. e.g.:

    apiVersion: v1
    kind: Secret
    metadata:
      name: s3-compatible-backup-target-secret
      namespace: longhorn-system
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5
      AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5
      AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA==
      VIRTUAL_HOSTED_STYLE: dHJ1ZQ== # true
  2. Deploy/update the secret and set it in Settings/General/BackupTargetSecret.

Set up NFS Backupstore

Ensure that the NFS server supports NFSv4 and that the target URL points to the service.

Example:

  nfs://longhorn-test-nfs-svc.default:/opt/backupstore

The default mount options are actimeo=1,soft,timeo=300,retry=2. To use other options, append the keyword “nfsOptions” and the options string to the target URL.

Example:

  nfs://longhorn-test-nfs-svc.default:/opt/backupstore?nfsOptions=soft,timeo=330,retrans=3

Any mount options that you specify will replace, not add to, the default options.
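
To verify the export manually before pointing Longhorn at it, you can attempt an NFSv4 mount from a node. A sketch; <nfs-server> is a placeholder for an address of the NFS server that is resolvable from the node:

  mkdir -p /mnt/backupstore-test
  mount -t nfs4 <nfs-server>:/opt/backupstore /mnt/backupstore-test
  umount /mnt/backupstore-test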

You can find an example NFS backupstore for testing purposes here.

Result: Longhorn can store backups in NFS. To create a backup, see this section.

Set up SMB/CIFS Backupstore

Before configuring an SMB/CIFS backupstore, create and deploy a credential secret for the backupstore:

  #!/bin/bash
  # Replace the placeholder values with the SMB/CIFS server credentials.
  USERNAME="<username-of-smb-cifs-server>"
  PASSWORD="<password-of-smb-cifs-server>"
  CIFS_USERNAME=$(echo -n "${USERNAME}" | base64)
  CIFS_PASSWORD=$(echo -n "${PASSWORD}" | base64)
  cat <<EOF > cifs_secret.yml
  apiVersion: v1
  kind: Secret
  metadata:
    name: cifs-secret
    namespace: longhorn-system
  type: Opaque
  data:
    CIFS_USERNAME: ${CIFS_USERNAME}
    CIFS_PASSWORD: ${CIFS_PASSWORD}
  EOF
  kubectl apply -f cifs_secret.yml

Then, navigate to Longhorn UI > Settings > General > Backup

  1. Set Backup Target. The target URL should look like this:

    cifs://longhorn-test-cifs-svc.default/backupstore

    The default CIFS mount option is “soft”. To use other options, append the keyword “cifsOptions” and the options string to the target URL.

    Example:

    cifs://longhorn-test-cifs-svc.default/backupstore?cifsOptions=rsize=65536,wsize=65536,soft

    Any mount options that you specify will replace, not add to, the default options.

  2. Set Backup Target Credential Secret

    cifs-secret

    This is the secret name with CIFS credentials.

You can find an example CIFS backupstore for testing purposes here.

Result: Longhorn can store backups in CIFS. To create a backup, see this section.

Set up Azure Blob Storage Backupstore

  1. Create a new container in Azure Blob Storage Service.

  2. Before configuring an Azure Blob Storage backup store, create a Kubernetes secret with a name such as azblob-secret in the namespace where Longhorn is installed (longhorn-system). The secret must be created in the same namespace for Longhorn to access it.

  • The Account Name will be the AZBLOB_ACCOUNT_NAME field in the secret.

  • The Account Secret Key will be the AZBLOB_ACCOUNT_KEY field in the secret.

  • The Storage URI will be the AZBLOB_ENDPOINT field in the secret.

  • Using a manifest:

    #!/bin/bash
    # Export these variables before running the script:
    # AZBLOB_ACCOUNT_NAME: Account name of the Azure Blob Storage server
    # AZBLOB_ACCOUNT_KEY: Account key of the Azure Blob Storage server
    # AZBLOB_ENDPOINT: Endpoint of the Azure Blob Storage server
    # AZBLOB_CERT: SSL certificate for the Azure Blob Storage server
    AZBLOB_ACCOUNT_NAME=$(echo -n "${AZBLOB_ACCOUNT_NAME}" | base64)
    AZBLOB_ACCOUNT_KEY=$(echo -n "${AZBLOB_ACCOUNT_KEY}" | base64)
    AZBLOB_ENDPOINT=$(echo -n "${AZBLOB_ENDPOINT}" | base64)
    AZBLOB_CERT=$(echo -n "${AZBLOB_CERT}" | base64)
    cat <<EOF > azblob_secret.yml
    apiVersion: v1
    kind: Secret
    metadata:
      name: azblob-secret
      namespace: longhorn-system
    type: Opaque
    data:
      AZBLOB_ACCOUNT_NAME: ${AZBLOB_ACCOUNT_NAME}
      AZBLOB_ACCOUNT_KEY: ${AZBLOB_ACCOUNT_KEY}
      #AZBLOB_ENDPOINT: ${AZBLOB_ENDPOINT}
      #AZBLOB_CERT: ${AZBLOB_CERT}
      #HTTP_PROXY: aHR0cDovLzEwLjIxLjkxLjUxOjMxMjg=
      #HTTPS_PROXY: aHR0cDovLzEwLjIxLjkxLjUxOjMxMjg=
    EOF
    kubectl apply -f azblob_secret.yml
  • CLI command:

    kubectl create secret generic <azblob-secret> \
      --from-literal=AZBLOB_ACCOUNT_NAME=<your-azure-storage-account-name> \
      --from-literal=AZBLOB_ACCOUNT_KEY=<your-azure-storage-account-key> \
      --from-literal=HTTP_PROXY=<your-proxy-ip-and-port> \
      --from-literal=HTTPS_PROXY=<your-proxy-ip-and-port> \
      --from-literal=NO_PROXY=<excluded-ip-list> \
      -n longhorn-system

Then, navigate to Longhorn UI > Settings > General > Backup

  1. Set Backup Target. The target URL should look like this:

    azblob://[your-container-name]@[endpoint-suffix]/

    Make sure that the URL ends with /, otherwise you will get an error. A subdirectory (prefix) may be used:

    azblob://[your-container-name]@[endpoint-suffix]/my-path/

    • If you do not set <endpoint-suffix> in the URL, the default endpoint suffix core.windows.net will be used.
    • If you set AZBLOB_ENDPOINT in the secret, Longhorn will use AZBLOB_ENDPOINT as the storage URL, and <endpoint-suffix> will not be used even if it has been set.
  2. Set Backup Target Credential Secret

    azblob-secret

After configuring the above settings, you can manage backups on Azure Blob storage. See how to create a backup for details.