Migrate Kubernetes Objects Using Storage Version Migration

FEATURE STATE: Kubernetes v1.30 [alpha]

Kubernetes relies on API data being actively rewritten to support some maintenance activities related to storage at rest. Two prominent examples are changes to the versioned schema of stored resources (that is, the preferred storage schema changing from v1 to v2 for a given resource) and encryption at rest (that is, rewriting stale data based on a change in how the data should be encrypted).

Before you begin

Install kubectl.

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds, such as Killercoda or Play with Kubernetes.

Your Kubernetes server must be version v1.30. To check the version, enter kubectl version.

Re-encrypt Kubernetes secrets using storage version migration

  • To begin with, configure an encryption provider to encrypt data at rest in etcd, using the following encryption configuration.

    kind: EncryptionConfiguration
    apiVersion: apiserver.config.k8s.io/v1
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: c2VjcmV0IGlzIHNlY3VyZQ==

    Make sure to enable automatic reloading of the encryption configuration file by setting --encryption-provider-config-automatic-reload to true.
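
    For reference, this is a minimal sketch of the relevant kube-apiserver flags, assuming the configuration above is saved at the hypothetical path /etc/kubernetes/enc/enc.yaml on the control plane host (the rest of the kube-apiserver invocation is unchanged):

    # the path below is only an example; point it at wherever you store the EncryptionConfiguration
    kube-apiserver \
      --encryption-provider-config=/etc/kubernetes/enc/enc.yaml \
      --encryption-provider-config-automatic-reload=true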

  • Create a Secret using kubectl.

    kubectl create secret generic my-secret --from-literal=key1=supersecret
  • Verify the serialized data for that Secret object is prefixed with k8s:enc:aescbc:v1:key1.
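
    One way to check this is to read the object directly from etcd; here is a sketch, assuming you have etcdctl access to the cluster's etcd and that objects are stored under the /kubernetes.io key prefix used in the etcdctl examples later in this task:

    ETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/my-secret [...] | hexdump -C

    where [...] contains the additional arguments for connecting to the etcd server.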

  • Update the encryption configuration file as follows to rotate the encryption key.

    kind: EncryptionConfiguration
    apiVersion: apiserver.config.k8s.io/v1
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key2
                  secret: c2VjcmV0IGlzIHNlY3VyZSwgaXMgaXQ/
          - aescbc:
              keys:
                - name: key1
                  secret: c2VjcmV0IGlzIHNlY3VyZQ==
  • To ensure that the previously created Secret my-secret is re-encrypted with the new key key2, you will use Storage Version Migration.

  • Create a StorageVersionMigration manifest named migrate-secret.yaml as follows:

    kind: StorageVersionMigration
    apiVersion: storagemigration.k8s.io/v1alpha1
    metadata:
      name: secrets-migration
    spec:
      resource:
        group: ""
        version: v1
        resource: secrets

    Create the object using kubectl as follows:

    kubectl apply -f migrate-secret.yaml
  • Monitor the migration of Secrets by checking the .status of the StorageVersionMigration. A successful migration has its Succeeded condition set to "True". Get the StorageVersionMigration object as follows:

    kubectl get storageversionmigration.storagemigration.k8s.io/secrets-migration -o yaml

    The output is similar to:

    kind: StorageVersionMigration
    apiVersion: storagemigration.k8s.io/v1alpha1
    metadata:
      name: secrets-migration
      uid: 628f6922-a9cb-4514-b076-12d3c178967c
      resourceVersion: '90'
      creationTimestamp: '2024-03-12T20:29:45Z'
    spec:
      resource:
        group: ""
        version: v1
        resource: secrets
    status:
      conditions:
        - type: Running
          status: 'False'
          lastUpdateTime: '2024-03-12T20:29:46Z'
          reason: StorageVersionMigrationInProgress
        - type: Succeeded
          status: 'True'
          lastUpdateTime: '2024-03-12T20:29:46Z'
          reason: StorageVersionMigrationSucceeded
      resourceVersion: '84'
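
    Instead of polling, you can also block until the migration reports success; here is a sketch using kubectl wait on the Succeeded condition (the timeout value is an arbitrary example):

    kubectl wait --for=condition=Succeeded \
      storageversionmigration.storagemigration.k8s.io/secrets-migration \
      --timeout=120s
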
  • Verify the stored Secret is now prefixed with k8s:enc:aescbc:v1:key2, using the same check as before.

Update the preferred storage schema of a CRD

Consider a scenario where a CustomResourceDefinition (CRD) is created to serve custom resources (CRs), with v1 as its preferred storage schema. When it is time to introduce v2 of the CRD, v2 can be added for serving only, with a conversion webhook in place to perform the necessary schema conversion between the two versions. This enables a smoother transition where users can create CRs using either the v1 or the v2 schema. Before setting v2 as the preferred storage schema version, it is important to ensure that all existing CRs stored as v1 are migrated to v2. This migration can be achieved through Storage Version Migration.

  • Create a manifest for the CRD, named test-crd.yaml, as follows:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: selfierequests.stable.example.com
    spec:
      group: stable.example.com
      names:
        plural: selfierequests
        singular: selfierequest
        kind: SelfieRequest
        listKind: SelfieRequestList
      scope: Namespaced
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                hostPort:
                  type: string
      conversion:
        strategy: Webhook
        webhook:
          clientConfig:
            url: https://127.0.0.1:9443/crdconvert
            caBundle: <CABundle info>
          conversionReviewVersions:
            - v1
            - v2

    Create the CRD using kubectl:

    kubectl apply -f test-crd.yaml
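
    Optionally, you can confirm that the CRD has been accepted and established before creating any custom resources; here is a sketch using kubectl (the timeout is an arbitrary example):

    kubectl wait --for=condition=Established crd/selfierequests.stable.example.com --timeout=60s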
  • Create a manifest for an example SelfieRequest custom resource. Name the manifest cr1.yaml and use these contents:

    apiVersion: stable.example.com/v1
    kind: SelfieRequest
    metadata:
      name: cr1
      namespace: default

    Create the CR using kubectl:

    kubectl apply -f cr1.yaml
  • Verify that the CR is written and stored as v1 by getting the object from etcd.

    ETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/selfierequests/default/cr1 [...] | hexdump -C

    where [...] contains the additional arguments for connecting to the etcd server.

  • Update the CRD manifest test-crd.yaml so that v2 is served and used for storage, and v1 is served only, as follows:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: selfierequests.stable.example.com
    spec:
      group: stable.example.com
      names:
        plural: selfierequests
        singular: selfierequest
        kind: SelfieRequest
        listKind: SelfieRequestList
      scope: Namespaced
      versions:
        - name: v2
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                host:
                  type: string
                port:
                  type: string
        - name: v1
          served: true
          storage: false
          schema:
            openAPIV3Schema:
              type: object
              properties:
                hostPort:
                  type: string
      conversion:
        strategy: Webhook
        webhook:
          clientConfig:
            url: 'https://127.0.0.1:9443/crdconvert'
            caBundle: <CABundle info>
          conversionReviewVersions:
            - v1
            - v2

    Update the CRD using kubectl:

    kubectl apply -f test-crd.yaml
  • Create a CR manifest named cr2.yaml with the following contents:

    apiVersion: stable.example.com/v2
    kind: SelfieRequest
    metadata:
      name: cr2
      namespace: default
  • Create the CR using kubectl:

    kubectl apply -f cr2.yaml
  • Verify that the CR is written and stored as v2 by getting the object from etcd.

    ETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/selfierequests/default/cr2 [...] | hexdump -C

    where [...] contains the additional arguments for connecting to the etcd server.

  • Create a StorageVersionMigration manifest named migrate-crd.yaml, with the contents as follows:

    kind: StorageVersionMigration
    apiVersion: storagemigration.k8s.io/v1alpha1
    metadata:
      name: crdsvm
    spec:
      resource:
        group: stable.example.com
        version: v1
        resource: selfierequests

    Create the object using kubectl as follows:

    kubectl apply -f migrate-crd.yaml
  • Monitor the migration of the custom resources by checking the .status of the StorageVersionMigration. A successful migration has its Succeeded condition set to "True". Get the StorageVersionMigration object as follows:

    kubectl get storageversionmigration.storagemigration.k8s.io/crdsvm -o yaml

    The output is similar to:

    kind: StorageVersionMigration
    apiVersion: storagemigration.k8s.io/v1alpha1
    metadata:
      name: crdsvm
      uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8
      resourceVersion: '111'
      creationTimestamp: '2024-03-12T22:40:01Z'
    spec:
      resource:
        group: stable.example.com
        version: v1
        resource: selfierequests
    status:
      conditions:
        - type: Running
          status: 'False'
          lastUpdateTime: '2024-03-12T22:40:03Z'
          reason: StorageVersionMigrationInProgress
        - type: Succeeded
          status: 'True'
          lastUpdateTime: '2024-03-12T22:40:03Z'
          reason: StorageVersionMigrationSucceeded
      resourceVersion: '106'
  • Verify that the previously created cr1 is now written and stored as v2 by getting the object from etcd.

    ETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/selfierequests/default/cr1 [...] | hexdump -C

    where [...] contains the additional arguments for connecting to the etcd server.
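
    As a follow-up, you can also check which versions the CRD still records as having been used for storage. The API server only ever appends to .status.storedVersions, so v1 stays listed until you remove it manually once you are confident no v1 objects remain; here is a sketch using kubectl:

    kubectl get crd selfierequests.stable.example.com -o jsonpath='{.status.storedVersions}'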