Migrate the entire database at once

Migrate smaller databases by dumping and restoring the entire database at once. This method works best on databases smaller than 100 GB. For larger databases, consider migrating your schema and data separately.

warning

Depending on your database size and network speed, migration can take a long time. You can continue reading from your source database during the migration, though performance might be slower. To avoid this problem, fork your database and migrate your data from the fork. If you write to tables in your source database during the migration, the new writes might not be transferred to Timescale Cloud. In that case, see the section on migrating an active database.

Prerequisites

Before you begin, check that you have:

  • Installed the PostgreSQL pg_dump and pg_restore utilities.
  • Installed a client for connecting to PostgreSQL. These instructions use psql, but any client works.
  • Created a new empty database in Timescale Cloud. For more information, see the Install Timescale Cloud section. Provision your database with enough space for all your data.
  • Checked that any other PostgreSQL extensions you use are compatible with Timescale Cloud. For more information, see the list of compatible extensions. Install your other PostgreSQL extensions.
  • Checked that you’re running the same major version of PostgreSQL on both Timescale Cloud and your source database. For information about upgrading PostgreSQL on your source database, see the upgrade instructions for self-hosted TimescaleDB and Managed Service for TimescaleDB.
  • Checked that you’re running the same major version of TimescaleDB on both Timescale Cloud and your source database. For more information, see the upgrading TimescaleDB section.
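As a quick sanity check, the version-parity prerequisite above can be scripted with `psql`. This is a sketch only: the `pg_major` helper and the placeholder connection strings are illustrative assumptions, not standard tooling.

```shell
# Sketch: confirm source and Timescale Cloud run the same PostgreSQL
# major version before migrating. pg_major is an illustrative helper.
pg_major() {
  # Extract the major version from a server_version string, e.g. "14.9" -> "14"
  echo "$1" | cut -d. -f1
}

# Uncomment and fill in your own connection details to run the check:
# src=$(psql "postgres://<SOURCE_DB_USERNAME>@<SOURCE_DB_HOST>:<SOURCE_DB_PORT>/<SOURCE_DB_NAME>" -Atc "SHOW server_version")
# dst=$(psql "postgres://tsdbadmin:<CLOUD_PASSWORD>@<CLOUD_HOST>:<CLOUD_PORT>/tsdb?sslmode=require" -Atc "SHOW server_version")
# if [ "$(pg_major "$src")" = "$(pg_major "$dst")" ]; then
#   echo "PostgreSQL major versions match"
# else
#   echo "Version mismatch: upgrade before migrating" >&2
# fi
```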
note

To speed up migration, compress your data. You can compress any chunks where data is not currently being inserted, updated, or deleted. When you finish the migration, you can decompress chunks as needed for normal operation. For more information about compression and decompression, see the compression section.
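As a sketch of the compression step above, you can compress all chunks older than a cutoff on the source database before dumping. The hypertable name `conditions` and the seven-day cutoff are assumptions for illustration, and compression must already be enabled on the hypertable.

```shell
# Illustrative SQL: compress every chunk of the (assumed) hypertable
# "conditions" that is older than seven days. Kept in a variable so it
# can be inspected before running.
COMPRESS_SQL="SELECT compress_chunk(c)
FROM show_chunks('conditions', older_than => INTERVAL '7 days') AS c;"

# Uncomment and fill in your own source connection details to run it:
# psql "postgres://<SOURCE_DB_USERNAME>@<SOURCE_DB_HOST>:<SOURCE_DB_PORT>/<SOURCE_DB_NAME>" -c "$COMPRESS_SQL"
```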

Migrating the entire database at once

  1. Dump all the data from your source database into a dump.bak file, using your source database connection details. If you are prompted for a password, use your source database credentials:

     pg_dump -U <SOURCE_DB_USERNAME> -W \
     -h <SOURCE_DB_HOST> -p <SOURCE_DB_PORT> -Fc -v \
     -f dump.bak <SOURCE_DB_NAME>
  2. Connect to your Timescale Cloud database using your Timescale Cloud connection details. When you are prompted for a password, use your Timescale Cloud credentials:

     psql "postgres://tsdbadmin:<CLOUD_PASSWORD>@<CLOUD_HOST>:<CLOUD_PORT>/tsdb?sslmode=require"
  3. Prepare your Timescale Cloud database for data restoration by using timescaledb_pre_restore to stop background workers:

     SELECT timescaledb_pre_restore();
  4. At the command prompt, restore the dumped data from the dump.bak file into your Timescale Cloud database, using your Timescale Cloud connection details. To avoid permissions errors, include the --no-owner flag:

     pg_restore -U tsdbadmin -W \
     -h <CLOUD_HOST> -p <CLOUD_PORT> --no-owner \
     -Fc -v -d tsdb dump.bak
  5. At the psql prompt, return your Timescale Cloud database to normal operations by using the timescaledb_post_restore command:

     SELECT timescaledb_post_restore();
  6. Update your table statistics by running ANALYZE on your entire dataset:

     ANALYZE;

Troubleshooting

If you see the following errors during the migration, you can safely ignore them. The migration still completes successfully.

    pg_dump: warning: there are circular foreign-key constraints on this table:
    pg_dump: hypertable
    pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
    pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.

    pg_dump: NOTICE: hypertable data are in the chunks, no data will be copied
    DETAIL: Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
    HINT: Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
pg_restore tries to apply the TimescaleDB extension when it copies your schema. This can cause a permissions error. Because TimescaleDB is already installed by default on Timescale Cloud, you can safely ignore this error.

    pg_restore: creating EXTENSION "timescaledb"
    pg_restore: creating COMMENT "EXTENSION timescaledb"
    pg_restore: while PROCESSING TOC:
    pg_restore: from TOC entry 6239; 0 0 COMMENT EXTENSION timescaledb
    pg_restore: error: could not execute query: ERROR: must be owner of extension timescaledb

    pg_restore: WARNING: no privileges were granted for "<..>"

    pg_restore: warning: errors ignored on restore: 1

If you see errors of the following form when you run ANALYZE, you can safely ignore them:

    WARNING: skipping "<TABLE OR INDEX>" --- only superuser can analyze it

The skipped tables and indexes correspond to system catalogs that can’t be accessed. Skipping them does not affect statistics on your data.