Integrate Velero to back up and restore Karmada resources

Velero gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises.

Velero lets you:

  • Take backups of your cluster and restore in case of loss.
  • Migrate cluster resources to other clusters.
  • Replicate your production cluster to development and testing clusters.

This document uses an example to demonstrate how to use Velero to back up and restore Kubernetes cluster resources and persistent volumes. The following example backs up resources in cluster member1 and then restores them to cluster member2.

Start up Karmada clusters

You just need to clone the Karmada repo and run the following script in the Karmada directory.

  hack/local-up-karmada.sh
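Optionally, before continuing, you can confirm that the member clusters have been registered with Karmada. A quick check, assuming the default kubeconfig that hack/local-up-karmada.sh generates under /root/.kube/:

  # assumption: default kubeconfig path and context names created by local-up-karmada.sh
  kubectl --kubeconfig /root/.kube/karmada.config --context karmada-apiserver get clusters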

Then run the commands below to switch to the member cluster member1.

  export KUBECONFIG=/root/.kube/members.config
  kubectl config use-context member1
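You can double-check that kubectl now points at member1:

  kubectl config current-context   # should print: member1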

Install MinIO

Velero uses object storage services from different cloud providers to support backup and snapshot operations. For simplicity, this example uses an object storage service that runs locally: MinIO.

Download the binary from the official site:

  wget https://dl.min.io/server/minio/release/linux-amd64/minio
  chmod +x minio

Run the commands below to set the MinIO username and password:

  export MINIO_ROOT_USER=minio
  export MINIO_ROOT_PASSWORD=minio123

Run this command to start MinIO:

  ./minio server /data --console-address="0.0.0.0:20001" --address="0.0.0.0:9000"

Replace /data with the path to the drive or directory in which you want MinIO to store data. You can now visit http://{SERVER_EXTERNAL_IP}:20001 in a browser to open the MinIO console UI, and Velero can use http://{SERVER_EXTERNAL_IP}:9000 to connect to MinIO. Exposing both addresses on all interfaces makes the follow-up steps easier.
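Before wiring MinIO into Velero, you can confirm that the API endpoint is reachable; a quick check against MinIO's liveness probe endpoint:

  curl -i http://{SERVER_EXTERNAL_IP}:9000/minio/health/live   # expect HTTP 200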

Please visit the MinIO console to create a region named minio and a bucket named velero; both will be used by Velero.
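If you prefer the command line, the same bucket can be created with the MinIO client mc (a sketch that assumes mc is installed; velero-minio is just an arbitrary alias name, and the credentials are the ones set above):

  mc alias set velero-minio http://{SERVER_EXTERNAL_IP}:9000 minio minio123
  mc mb --region=minio velero-minio/velero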

For more details about how to install MinIO, run minio server --help, or visit the MinIO GitHub repo.

Install Velero

Velero consists of two components:

  • A command-line client that runs locally.

    1. Download the release tarball for your client platform:

      wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.0/velero-v1.7.0-linux-amd64.tar.gz

    2. Extract the tarball:

      tar -zxvf velero-v1.7.0-linux-amd64.tar.gz

    3. Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users):

      cp velero-v1.7.0-linux-amd64/velero /usr/local/bin/
  • A server that runs on your cluster

    We will use velero install to set up server components.

    For more details about how to use MinIO and Velero to back up resources, please refer to: https://velero.io/docs/v1.7/contributions/minio/

    1. Create a Velero-specific credentials file (credentials-velero) in your local directory:

      [default]
      aws_access_key_id = minio
      aws_secret_access_key = minio123

      These two values must match the MinIO username and password that we set when installing MinIO.

    2. Start the server.

      We need to install Velero in both member1 and member2, so run the command below against each of the two clusters; this will start the Velero server. Use kubectl config use-context member1 and kubectl config use-context member2 to switch between the member clusters. A verification sketch follows this list.

      velero install \
        --provider aws \
        --plugins velero/velero-plugin-for-aws:v1.2.1 \
        --bucket velero \
        --secret-file ./credentials-velero \
        --use-volume-snapshots=false \
        --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://{SERVER_EXTERNAL_IP}:9000

      Replace {SERVER_EXTERNAL_IP} with your own server external IP.

    3. Deploy the nginx application to cluster member1:

      Run the command below in the Karmada directory.

      kubectl apply -f samples/nginx/deployment.yaml

      You will then find that nginx is deployed successfully.

      # kubectl get deployment.apps
      NAME    READY   UP-TO-DATE   AVAILABLE   AGE
      nginx   2/2     2            2           17s
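After velero install has been run in both member1 and member2 (step 2 above), you can verify in each member cluster that the Velero server components are healthy and that the MinIO backup location is reachable. A minimal check, assuming Velero's default velero namespace:

  kubectl get pods -n velero    # the velero pod should be Running
  velero backup-location get    # the default location should report Available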

Back up and restore Kubernetes resources independently

Create a backup in member1:

  velero backup create nginx-backup --selector app=nginx
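To check progress, you can describe the backup and confirm that its phase is Completed:

  velero backup describe nginx-backup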

Restore the backup in member2

Run these commands to switch to member2:

  export KUBECONFIG=/root/.kube/members.config
  kubectl config use-context member2

Because both member clusters are configured with the same MinIO backup storage location, in member2 we can also see the backup that we created in member1:

  # velero backup get
  NAME           STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
  nginx-backup   Completed   0        0          2021-12-10 15:16:46 +0800 CST   29d       default            app=nginx

Restore member1 resources to member2:

  # velero restore create --from-backup nginx-backup
  Restore request "nginx-backup-20211210151807" submitted successfully.

Watch the restore result; you will find that the status is Completed.

  # velero restore get
  NAME                          BACKUP         STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
  nginx-backup-20211210151807   nginx-backup   Completed   2021-12-10 15:18:07 +0800 CST   2021-12-10 15:18:07 +0800 CST   0        0          2021-12-10 15:18:07 +0800 CST   <none>
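If the status is not Completed, the restore can be inspected in more detail (the restore name is the one reported by velero restore get above):

  velero restore describe nginx-backup-20211210151807
  velero restore logs nginx-backup-20211210151807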

You will then find that the nginx deployment has been restored successfully.

  # kubectl get deployment.apps/nginx
  NAME    READY   UP-TO-DATE   AVAILABLE   AGE
  nginx   2/2     2            2           21s

Back up and restore Kubernetes resources through Velero combined with Karmada

In the Karmada control plane, we need to install the Velero CRDs but do not need the controllers to reconcile them. The Backup and Restore objects are treated as resource templates, not specific resource instances. Based on the Work API, they will be encapsulated as Work objects, delivered to member clusters, and finally reconciled by the Velero controllers running in the member clusters.
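As an illustration of this flow, once the Backup object created below has been propagated, you can see the wrapping Work object in member1's execution namespace on the Karmada control plane (a sketch that assumes Karmada's default karmada-es-<cluster-name> namespace naming and the kubeconfig generated by hack/local-up-karmada.sh):

  kubectl --kubeconfig /root/.kube/karmada.config --context karmada-apiserver \
    get works -n karmada-es-member1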

Create the Velero CRDs in the Karmada control plane. The remote Velero CRD directory is: https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero/crds/
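One way to do this is to clone the chart repository and apply that directory against karmada-apiserver (a sketch; the kubeconfig path below is the one generated by hack/local-up-karmada.sh and may differ in your environment):

  export KUBECONFIG=/root/.kube/karmada.config
  kubectl config use-context karmada-apiserver
  git clone https://github.com/vmware-tanzu/helm-charts.git
  kubectl apply -f helm-charts/charts/velero/crds/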

Create a backup on karmada-apiserver and distribute it to the member1 cluster through a PropagationPolicy

  # create backup policy
  cat <<EOF | kubectl apply -f -
  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: member1-default-backup
    namespace: velero
  spec:
    defaultVolumesToRestic: false
    includedNamespaces:
    - default
    storageLocation: default
  EOF
  # create PropagationPolicy
  cat <<EOF | kubectl apply -f -
  apiVersion: policy.karmada.io/v1alpha1
  kind: PropagationPolicy
  metadata:
    name: member1-backup
    namespace: velero
  spec:
    resourceSelectors:
    - apiVersion: velero.io/v1
      kind: Backup
    placement:
      clusterAffinity:
        clusterNames:
        - member1
  EOF
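After both objects are applied on karmada-apiserver, the Backup is delivered to member1 and executed by member1's Velero server. You can verify this from member1 without switching your current context, since velero accepts an explicit kubeconfig and context:

  velero backup get --kubeconfig /root/.kube/members.config --kubecontext member1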

Create a restore on karmada-apiserver and distribute it to the member2 cluster through a PropagationPolicy

  # create restore policy
  cat <<EOF | kubectl apply -f -
  apiVersion: velero.io/v1
  kind: Restore
  metadata:
    name: member1-default-restore
    namespace: velero
  spec:
    backupName: member1-default-backup
    excludedResources:
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    hooks: {}
    includedNamespaces:
    - 'default'
  EOF
  # create PropagationPolicy
  cat <<EOF | kubectl apply -f -
  apiVersion: policy.karmada.io/v1alpha1
  kind: PropagationPolicy
  metadata:
    name: member2-default-restore-policy
    namespace: velero
  spec:
    resourceSelectors:
    - apiVersion: velero.io/v1
      kind: Restore
    placement:
      clusterAffinity:
        clusterNames:
        - member2
  EOF
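Similarly, you can confirm that the Restore has been delivered to member2 and has completed:

  velero restore get --kubeconfig /root/.kube/members.config --kubecontext member2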

You will then find that the nginx deployment has been restored on member2 successfully.

  # kubectl get deployment.apps/nginx
  NAME    READY   UP-TO-DATE   AVAILABLE   AGE
  nginx   2/2     2            2           10s

Reference

The introductions to Velero and MinIO above are only a summary of the official websites and repos; for more details, please refer to: