Project-level tasks

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4].

Backing up a project

Creating a backup of all relevant data involves exporting all important information, then restoring into a new project.

Currently, an OKD project backup and restore tool is being developed by Red Hat. See the following bug for more information:

Procedure

  1. List all the relevant data to back up:

    $ oc get all
    NAME         TYPE      FROM      LATEST
    bc/ruby-ex   Source    Git       1

    NAME               TYPE      FROM          STATUS     STARTED         DURATION
    builds/ruby-ex-1   Source    Git@c457001   Complete   2 minutes ago   35s

    NAME                 DOCKER REPO                                     TAGS      UPDATED
    is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    2 minutes ago
    is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    2 minutes ago
    is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    2 minutes ago
    is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    2 minutes ago

    NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
    dc/guestbook         1          1         1         config,image(guestbook:latest)
    dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
    dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)

    NAME                   DESIRED   CURRENT   READY   AGE
    rc/guestbook-1         1         1         1       2m
    rc/hello-openshift-1   1         1         1       2m
    rc/ruby-ex-1           1         1         1       2m

    NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    svc/guestbook         10.111.105.84    <none>        3000/TCP            2m
    svc/hello-openshift   10.111.230.24    <none>        8080/TCP,8888/TCP   2m
    svc/ruby-ex           10.111.232.117   <none>        8080/TCP            2m

    NAME                         READY     STATUS      RESTARTS   AGE
    po/guestbook-1-c010g         1/1       Running     0          2m
    po/hello-openshift-1-4zw2q   1/1       Running     0          2m
    po/ruby-ex-1-build           0/1       Completed   0          2m
    po/ruby-ex-1-rxc74           1/1       Running     0          2m
  2. Export the project objects to a .yaml or .json file.

    • To export the project objects into a project.yaml file:

      $ oc get -o yaml --export all > project.yaml
    • To export the project objects into a project.json file:

      $ oc get -o json --export all > project.json
  3. Export the project’s role bindings, secrets, service accounts, and persistent volume claims:

    $ for object in rolebindings serviceaccounts secrets imagestreamtags podpreset cms egressnetworkpolicies rolebindingrestrictions limitranges resourcequotas pvcs templates cronjobs statefulsets hpas deployments replicasets poddisruptionbudget endpoints
    do
      oc get -o yaml --export $object > $object.yaml
    done
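The loop above can be wrapped in a small helper so that each resource type lands in its own file under a dedicated backup directory. This is a minimal sketch, not an official tool: the function name is hypothetical, the shortened resource list is illustrative, and it assumes `oc` is logged in with the desired project selected.

```shell
# Hypothetical helper: export selected namespaced resource types, one
# YAML file per type, into a backup directory. The resource list here
# is a shortened example of the full list shown above.
backup_project_objects() {
  backup_dir=$1
  mkdir -p "$backup_dir" || return 1
  for object in rolebindings serviceaccounts secrets configmaps \
                persistentvolumeclaims templates cronjobs; do
    # --export strips cluster-specific metadata from the output
    oc get -o yaml --export "$object" > "$backup_dir/$object.yaml"
  done
}
```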
  4. To list all the namespaced objects:

    $ oc api-resources --namespaced=true -o name
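The two steps can be combined: instead of maintaining a hard-coded list, iterate over whatever namespaced types the API server reports. A sketch, assuming the current user can read every listed type; error handling is omitted for brevity.

```shell
# Hypothetical variant: export every namespaced resource type that
# `oc api-resources` reports, one YAML file per type in the current
# directory. Types the user cannot read produce empty files.
export_all_namespaced() {
  for object in $(oc api-resources --namespaced=true -o name); do
    oc get -o yaml --export "$object" > "$object.yaml"
  done
}
```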
  5. Some exported objects rely on specific metadata or on references to unique IDs within the project, which limits the usability of the recreated objects.

    When using imagestreams, the image parameter of a deploymentconfig can point to a specific sha checksum of an image in the internal registry that would not exist in a restored environment. For instance, running the sample “ruby-ex” as oc new-app centos/ruby-22-centos7~https://github.com/sclorg/ruby-ex.git creates an imagestream ruby-ex using the internal registry to host the image:

    $ oc get dc ruby-ex -o jsonpath="{.spec.template.spec.containers[].image}"
    10.111.255.221:5000/myproject/ruby-ex@sha256:880c720b23c8d15a53b01db52f7abdcbb2280e03f686a5c8edfef1a2a7b21cee

    If you import the deploymentconfig exactly as it was exported with oc get --export, the import fails when the image does not exist.
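One possible workaround for this limitation is to rewrite the sha-pinned internal-registry reference back to a plain tag before importing, so the restored object resolves against a re-imported image stream. The sed pattern below is an illustration under that assumption, not an official tool, and the `:latest` tag is a guess that may not match your image stream.

```shell
# Hypothetical cleanup: rewrite "registry:port/project/name@sha256:..."
# image references in an exported file to "name:latest" so they can be
# resolved after the imagestream is re-imported in the new project.
untag_pinned_images() {
  file=$1
  sed -i.bak -E \
    's#[0-9.:]+/[^/ ]+/([^@" ]+)@sha256:[0-9a-f]+#\1:latest#g' "$file"
}
```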

Restoring a project

To restore a project, create the new project, then restore any exported files by running oc create -f pods.json. However, restoring a project from scratch requires a specific order because some objects depend on others. For example, you must create the configmaps before you create any pods.

Procedure

  1. If the project was exported as a single file, import it by running the following commands:

    $ oc new-project <projectname>
    $ oc create -f project.yaml
    $ oc create -f secret.yaml
    $ oc create -f serviceaccount.yaml
    $ oc create -f pvc.yaml
    $ oc create -f rolebindings.yaml

    Some resources, such as pods and default service accounts, can fail to be created.
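The dependency ordering described above can be captured in a small driver script. This is a sketch only: the function name is hypothetical, the file names are those produced by the export steps, and missing files are simply skipped.

```shell
# Hypothetical restore driver: create the project, then import the
# exported files in dependency order (secrets and service accounts
# before the workloads that reference them).
restore_project() {
  project=$1
  oc new-project "$project" || return 1
  for f in project.yaml secret.yaml serviceaccount.yaml \
           pvc.yaml rolebindings.yaml; do
    # Skip files that were not part of this backup
    [ -f "$f" ] && oc create -f "$f"
  done
  return 0
}
```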

Backing up persistent volume claims

You can synchronize persistent data from inside a container to a server.

Depending on the provider that hosts the OKD environment, you might also be able to launch third-party snapshot services for backup and restore purposes. Because OKD cannot launch these services itself, this guide does not describe those steps.

Consult any product documentation for the correct backup procedures of specific applications. For example, copying the mysql data directory itself does not create a usable backup. Instead, run the specific backup procedures of the associated application and then synchronize any data. This includes using snapshot solutions provided by the OKD hosting platform.
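As an example of such an application-aware backup, a MySQL dump might be taken from inside the pod rather than copying the data directory. This is a sketch under stated assumptions: the function name is hypothetical, and the pod name, the MYSQL_ROOT_PASSWORD variable, and the mysqldump options are examples that must match your deployment.

```shell
# Hypothetical application-aware backup: dump all MySQL databases from
# inside the pod to a local file, instead of copying /var/lib/mysql.
dump_mysql() {
  pod=$1
  outfile=$2
  oc exec "$pod" -- mysqldump -u root \
    --password="$MYSQL_ROOT_PASSWORD" --all-databases > "$outfile"
}
```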

Procedure

  1. View the project and pods:

    $ oc get pods
    NAME           READY     STATUS      RESTARTS   AGE
    demo-1-build   0/1       Completed   0          2h
    demo-2-fxx6d   1/1       Running     0          1h
  2. Describe the desired pod to find the volumes that are currently used by a persistent volume:

    $ oc describe pod demo-2-fxx6d
    Name:            demo-2-fxx6d
    Namespace:       test
    Security Policy: restricted
    Node:            ip-10-20-6-20.ec2.internal/10.20.6.20
    Start Time:      Tue, 05 Dec 2017 12:54:34 -0500
    Labels:          app=demo
                     deployment=demo-2
                     deploymentconfig=demo
    Status:          Running
    IP:              172.16.12.5
    Controllers:     ReplicationController/demo-2
    Containers:
      demo:
        Container ID:   docker://201f3e55b373641eb36945d723e1e212ecab847311109b5cee1fd0109424217a
        Image:          docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Image ID:       docker-pullable://docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Port:           8080/TCP
        State:          Running
          Started:      Tue, 05 Dec 2017 12:54:52 -0500
        Ready:          True
        Restart Count:  0
        Volume Mounts:
          */opt/app-root/src/uploaded from persistent-volume (rw)*
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-8mmrk (ro)
        Environment Variables: <none>
    ...omitted...

    This output shows that the persistent data is in the /opt/app-root/src/uploaded directory.

  3. Copy the data locally:

    $ oc rsync demo-2-fxx6d:/opt/app-root/src/uploaded ./demo-app
    receiving incremental file list
    uploaded/
    uploaded/ocp_sop.txt
    uploaded/lost+found/

    sent 38 bytes  received 190 bytes  152.00 bytes/sec
    total size is 32  speedup is 0.14

    The ocp_sop.txt file is downloaded to the local system to be backed up by backup software or another backup mechanism.

    You can also use the previous steps if a pod starts without needing a PVC, but you later decide that a PVC is necessary. You can preserve the data and then use the restore process to populate the new storage.
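The rsync step above can be wrapped so that each run lands in its own timestamped directory, which is convenient for handing off to backup software. A sketch only: the function name is hypothetical, and the pod name and source path shown in the test are the examples from this section.

```shell
# Hypothetical wrapper: rsync a directory out of a pod into a fresh
# timestamped local directory, and print that directory's name so a
# backup tool can pick it up.
backup_pod_dir() {
  pod=$1
  src=$2
  dest="backup-$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$dest" || return 1
  oc rsync "$pod:$src" "$dest/" >/dev/null
  echo "$dest"
}
```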

Restoring persistent volume claims

You can restore persistent volume claim (PVC) data that you backed up. You can either place the files back in their expected location or migrate the persistent volume claims. You might migrate the claims if you need to move the storage or in a disaster scenario when the backend storage no longer exists.

Consult any product documentation for the correct restoration procedures for specific applications.

Restoring files to an existing PVC

Procedure
  1. Delete the file:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ *ls /opt/app-root/src/uploaded/*
    lost+found  ocp_sop.txt
    sh-4.2$ *rm -rf /opt/app-root/src/uploaded/ocp_sop.txt*
    sh-4.2$ *ls /opt/app-root/src/uploaded/*
    lost+found
  2. Replace the file from the server that contains the rsync backup of the files that were in the PVC:

    $ oc rsync uploaded demo-2-fxx6d:/opt/app-root/src/
  3. Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ *ls /opt/app-root/src/uploaded/*
    lost+found  ocp_sop.txt

Restoring data to a new PVC

The following steps assume that a new PVC has been created.
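If the new claim does not exist yet, a minimal manifest along these lines could create it. This is a sketch: the name filestore matches the --claim-name used in this procedure, but the storage size and access mode are assumptions you must adjust.

```yaml
# Hypothetical PVC manifest; size and access mode are examples only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```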

Procedure
  1. Overwrite the currently defined claim-name:

    $ oc volume dc/demo --add --name=persistent-volume \
        --type=persistentVolumeClaim --claim-name=filestore \
        --mount-path=/opt/app-root/src/uploaded --overwrite
  2. Validate that the pod is using the new PVC:

    $ oc describe dc/demo
    Name:           demo
    Namespace:      test
    Created:        3 hours ago
    Labels:         app=demo
    Annotations:    openshift.io/generated-by=OpenShiftNewApp
    Latest Version: 3
    Selector:       app=demo,deploymentconfig=demo
    Replicas:       1
    Triggers:       Config, Image(demo@latest, auto=true)
    Strategy:       Rolling
    Template:
      Labels:       app=demo
                    deploymentconfig=demo
      Annotations:  openshift.io/container.demo.image.entrypoint=["container-entrypoint","/bin/sh","-c","$STI_SCRIPTS_PATH/usage"]
                    openshift.io/generated-by=OpenShiftNewApp
      Containers:
        demo:
          Image:    docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
          Port:     8080/TCP
          Volume Mounts:
            /opt/app-root/src/uploaded from persistent-volume (rw)
          Environment Variables: <none>
      Volumes:
        persistent-volume:
          Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
          *ClaimName: filestore*
          ReadOnly:   false
    ...omitted...
  3. Now that the deployment configuration uses the new PVC, run oc rsync to copy the files onto the new PVC:

    $ oc rsync uploaded demo-3-2b8gs:/opt/app-root/src/
    sending incremental file list
    uploaded/
    uploaded/ocp_sop.txt
    uploaded/lost+found/

    sent 181 bytes  received 39 bytes  146.67 bytes/sec
    total size is 32  speedup is 0.15
  4. Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:

    $ oc rsh demo-3-2b8gs
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt

Pruning images and containers

See the Pruning Resources topic for information about pruning collected data and older versions of objects.