Snapshots

In CrateDB, backups are called Snapshots. They represent the state of the tables in a CrateDB cluster at the time the Snapshot was created. A Snapshot is always stored in a Repository which has to be created first.

Caution

You cannot snapshot BLOB tables.

Creating a Repository

Repositories are used to store, manage and restore snapshots.

They are created using the CREATE REPOSITORY statement:

    cr> CREATE REPOSITORY where_my_snapshots_go TYPE fs
    ... WITH (location='repo_path', compress=true);
    CREATE OK, 1 row affected (... sec)

Repositories are uniquely identified by their name. Every repository has a specific type which determines how snapshots are stored.

CrateDB supports different repository types: fs, hdfs, s3, and url. Support for further types can be added using plugins.

Creating a repository only registers it inside the CrateDB cluster. In general no data is written, and no snapshots inside the repository are changed or deleted. This way you can tell the CrateDB cluster about existing repositories which already contain snapshots.

Creating a repository with the same name will result in an error:

    cr> CREATE REPOSITORY where_my_snapshots_go TYPE fs
    ... WITH (location='another_repo_path', compress=false);
    SQLActionException[RepositoryAlreadyExistsException: Repository 'where_my_snapshots_go' already exists]

Creating a Snapshot

Snapshots are created inside a repository and can contain any number of tables. The CREATE SNAPSHOT statement is used to create a snapshot:

    cr> CREATE SNAPSHOT where_my_snapshots_go.snapshot1 ALL
    ... WITH (wait_for_completion=true, ignore_unavailable=true);
    CREATE OK, 1 row affected (... sec)

A snapshot is referenced by the name of the repository and the snapshot name, separated by a dot. If ALL is used, all user created tables of the cluster (except blob tables) are stored inside the snapshot.

It’s possible to only save a specific subset of tables in the snapshot by listing them explicitly:

    cr> CREATE SNAPSHOT where_my_snapshots_go.snapshot2 TABLE quotes, doc.locations
    ... WITH (wait_for_completion=true);
    CREATE OK, 1 row affected (... sec)

Even a single partition of a partitioned table can be selected for backup. This is especially useful if old partitions need to be deleted but should remain restorable if needed:

    cr> CREATE SNAPSHOT where_my_snapshots_go.snapshot3 TABLE
    ... locations,
    ... parted_table PARTITION (date='1970-01-01')
    ... WITH (wait_for_completion=true);
    CREATE OK, 1 row affected (... sec)

Snapshots are incremental: snapshots of the same cluster created later only store data not already contained in the repository.

All examples above use the argument wait_for_completion set to true. As described in the CREATE SNAPSHOT reference documentation, with this setting the statement only responds (successfully or not) once the snapshot is fully created. Otherwise the snapshot is created in the background and the statement responds immediately as successful. The status of a created snapshot can be retrieved by querying the sys.snapshots system table.
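For long-running backups it may be preferable not to block. A sketch of creating a snapshot in the background and then polling its progress (the snapshot name snapshot_async is a placeholder chosen for illustration):

```sql
-- Create the snapshot in the background; the statement returns immediately.
CREATE SNAPSHOT where_my_snapshots_go.snapshot_async ALL
WITH (wait_for_completion=false);

-- Poll the system table until the state changes from IN_PROGRESS
-- to SUCCESS (or FAILED).
SELECT name, state
FROM sys.snapshots
WHERE repository = 'where_my_snapshots_go'
  AND name = 'snapshot_async';
```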

Restore

Caution

If you are restoring a snapshot into a newer version of CrateDB, be sure to check the Release Notes for upgrade instructions.

Once a snapshot is created, it can be used to restore its tables to the state when the snapshot was created.

To get basic information about snapshots the sys.snapshots table can be queried:

    cr> SELECT repository, name, state, concrete_indices
    ... FROM sys.snapshots
    ... ORDER BY repository, name;
    +-----------------------+-----------+---------+--------------------...-+
    | repository            | name      | state   | concrete_indices       |
    +-----------------------+-----------+---------+--------------------...-+
    | where_my_snapshots_go | snapshot1 | SUCCESS | [...]                  |
    | where_my_snapshots_go | snapshot2 | SUCCESS | [...]                  |
    | where_my_snapshots_go | snapshot3 | SUCCESS | [...]                  |
    +-----------------------+-----------+---------+--------------------...-+
    SELECT 3 rows in set (... sec)

To restore a table from a snapshot, we have to drop the table beforehand:

    cr> DROP TABLE quotes;
    DROP OK, 1 row affected (... sec)

Snapshots are restored using the RESTORE SNAPSHOT statement:

    cr> RESTORE SNAPSHOT where_my_snapshots_go.snapshot2 TABLE quotes WITH (wait_for_completion=true);
    RESTORE OK, 1 row affected (... sec)

In this case only the quotes table from snapshot where_my_snapshots_go.snapshot2 is restored. Using ALL instead of listing all tables restores the whole snapshot.
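For example, restoring a whole snapshot with ALL could look like the following sketch (assuming none of the tables contained in the snapshot still exist in the cluster):

```sql
-- Restore every table contained in the snapshot.
-- Fails with RelationAlreadyExists if any of them still exist.
RESTORE SNAPSHOT where_my_snapshots_go.snapshot1 ALL
WITH (wait_for_completion=true);
```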

It’s not possible to restore tables that exist in the current cluster:

    cr> RESTORE SNAPSHOT where_my_snapshots_go.snapshot2 TABLE quotes;
    SQLActionException[RelationAlreadyExists: Relation 'doc.quotes' already exists.]

Single partitions can be restored into the existing partitioned table the partition belongs to:

    cr> RESTORE SNAPSHOT where_my_snapshots_go.snapshot3 TABLE
    ... parted_table PARTITION (date='1970-01-01')
    ... WITH (wait_for_completion=true);
    RESTORE OK, 1 row affected (... sec)

If no matching partitioned table exists, it will be created implicitly during the restore.

Caution

This is only possible with CrateDB version 0.55.5 or greater!

Snapshots of single partitions that have been created with earlier versions of CrateDB may still be restored, but doing so will lead to orphaned partitions!

When using CrateDB prior to 0.55.5, you will have to create the table schema before restoring:

    cr> DROP TABLE parted_table;
    DROP OK, 1 row affected (... sec)
    cr> RESTORE SNAPSHOT where_my_snapshots_go.snapshot3 TABLE
    ... parted_table PARTITION (date=0)
    ... WITH (wait_for_completion=true);
    RESTORE OK, 1 row affected (... sec)

Cleanup

Dropping Snapshots

Dropping a snapshot deletes all files inside the repository that are referenced only by this snapshot. Due to the incremental nature of snapshots, this might be very few files (e.g. for intermediate snapshots). Snapshots are dropped using the DROP SNAPSHOT command:

    cr> DROP SNAPSHOT where_my_snapshots_go.snapshot3;
    DROP OK, 1 row affected (... sec)

Dropping Repositories

If a repository is not needed anymore, it can be dropped using the DROP REPOSITORY statement:

    cr> DROP REPOSITORY "OldRepository";
    DROP OK, 1 row affected (... sec)

This statement, like CREATE REPOSITORY, does not manipulate the repository contents; it only deletes the stored configuration for this repository from the cluster state, so the repository is no longer accessible.

Requirements for Using HDFS Repositories

CrateDB supports repositories of type hdfs by default, but the required Hadoop Java client libraries are not included in any CrateDB distribution and need to be added to CrateDB's hdfs plugin folder, which by default is $CRATE_HOME/plugins/es-repository-hdfs.

Because some libraries Hadoop depends on are also required (and therefore shipped) by CrateDB, only the Hadoop libraries listed below need to be copied into the $CRATE_HOME/plugins/es-repository-hdfs folder; other libraries will be ignored:

- apacheds-i18n-2.0.0-M15.jar
- apacheds-kerberos-codec-2.0.0-M15.jar
- api-asn1-api-1.0.0-M20.jar
- api-util-1.0.0-M20.jar
- avro-1.7.4.jar
- commons-compress-1.4.1.jar
- commons-configuration-1.6.jar
- commons-digester-1.8.jar
- commons-httpclient-3.1.jar
- commons-io-2.4.jar
- commons-lang-2.6.jar
- commons-net-3.1.jar
- curator-client-2.7.1.jar
- curator-framework-2.7.1.jar
- curator-recipes-2.7.1.jar
- gson-2.2.4.jar
- hadoop-annotations-2.8.1.jar
- hadoop-auth-2.8.1.jar
- hadoop-client-2.8.1.jar
- hadoop-common-2.8.1.jar
- hadoop-hdfs-2.8.1.jar
- hadoop-hdfs-client-2.8.1.jar
- htrace-core4-4.0.1-incubating.jar
- jackson-core-asl-1.9.13.jar
- jackson-mapper-asl-1.9.13.jar
- jline-0.9.94.jar
- jsp-api-2.1.jar
- leveldbjni-all-1.8.jar
- protobuf-java-2.5.0.jar
- paranamer-2.3.jar
- snappy-java-1.0.4.1.jar
- servlet-api-2.5.jar
- xercesImpl-2.9.1.jar
- xmlenc-0.52.jar
- xml-apis-1.3.04.jar
- xz-1.0.jar
- zookeeper-3.4.6.jar

Note

Only Hadoop version 2.x is supported and as of writing this documentation, the latest stable Hadoop (YARN) version is 2.8.1. Required libraries may differ for other versions.

Crate's packaged es-repository-hdfs plugin depends on different versions of commons-collections, htrace, and xml-apis than Hadoop does, and the presence of both versions will result in jar hell. The es-repository-hdfs plugin's own dependencies should take precedence when encountered, but the above list is known to work for Hadoop 2.8.1.

Working with a Secured HA HDFS

For users with Kerberos-secured HA NameNode configurations, configuring the plugin is easy.

First, the core-site.xml and hdfs-site.xml files for the HDFS cluster need to be placed in an otherwise empty JAR and added to the $CRATE_HOME/plugins/es-repository-hdfs directory. Because Crate plugins are loaded as collections of JARs, plain XML files simply won't be loaded and the HDFS client won't be able to find the configuration files. These files should include all keys and values relevant for communicating with the NameNode, including any HA configuration, the authentication method, and so on.

Note

Make sure the load_defaults parameter to CREATE REPOSITORY is true (it is by default) as this will load the values as described here.

Next, if Kerberos is the authentication method, the hdfs plugin needs a keytab to authenticate with. It must be placed in a separate config directory for the plugin, $CRATE_HOME/config/repository-hdfs, and must be named krb5.keytab.

Lastly, the security.principal parameter passed in the CREATE REPOSITORY statement must be a fully qualified Kerberos identity: either a service principal name (SPN) or a user principal name (UPN) will work.
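Putting the steps above together, creating a secured HDFS repository might look like the following sketch. The repository name, uri, path, and principal are placeholder values chosen for illustration; the full set of supported parameters is described in the CREATE REPOSITORY reference documentation.

```sql
-- Hypothetical values; adjust uri, path, and the principal for your cluster.
CREATE REPOSITORY secured_hdfs_repo TYPE hdfs
WITH (
    uri = 'hdfs://ha-cluster',          -- logical HA NameNode URI from hdfs-site.xml
    path = '/crate_snapshots',          -- target directory inside HDFS
    security.principal = 'crate@EXAMPLE.COM',  -- UPN; an SPN also works
    load_defaults = true                -- load values from the packaged config JAR
);
```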

Note

Only one Kerberos identity is supported per Crate cluster.

If all of this has been configured correctly, the HDFS repository plugin will be able to communicate with an (optionally HA) secured HDFS cluster.