Ingest data

There are several different ways of ingesting your data into Managed Service for TimescaleDB. This section shows you how to prepare your new database, bulk upload data from CSV files, insert data directly using a client driver, and insert data directly using a message queue.

Note: Before you begin, make sure you have created your Managed Service for TimescaleDB service, and can connect to it using psql.

Preparing your new database

  1. Use psql to connect to your service. You can retrieve the service URL, port, and login credentials from the service overview in the Timescale Cloud dashboard:

     psql -h <HOSTNAME> -p <PORT> -U <USERNAME> -W -d <DATABASE_NAME>
  2. Create a new database for your data. In this example, the new database is called new_db:

     CREATE DATABASE new_db;
     \c new_db
  3. Create a new SQL table in your database. The columns you create for the table must match the columns in your source data. In this example, the table is storing weather condition data, and has columns for the timestamp, location, and temperature:

     CREATE TABLE conditions (
         time        TIMESTAMPTZ      NOT NULL,
         location    TEXT             NOT NULL,
         temperature DOUBLE PRECISION NULL
     );
  4. Load the timescaledb PostgreSQL extension, then list your installed extensions to confirm that it loaded:

     CREATE EXTENSION timescaledb;
     \dx
  5. Convert the SQL table into a hypertable:

     SELECT create_hypertable('conditions', 'time');
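
To confirm that the new hypertable works, you can insert a single row and query it back. This is a minimal sanity check, using hypothetical values:

     -- Insert one hypothetical reading, then read it back
     INSERT INTO conditions (time, location, temperature)
         VALUES (now(), 'garage', 21.5);

     SELECT * FROM conditions ORDER BY time DESC LIMIT 1;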

When you have successfully set up your new database, you can ingest data using one of these methods.

Bulk upload from CSV files

If you have a dataset stored in a .csv file, you can import it into an empty TimescaleDB hypertable. You need to create the new table before you import the data.

Important: Before you begin, make sure you have prepared your new database.

Bulk uploading from a CSV file

  1. Insert data into the new hypertable using the timescaledb-parallel-copy tool. You should already have the tool installed, but you can install it manually from our GitHub repository if you need to. In this example, we are inserting the data using four workers:

     timescaledb-parallel-copy \
       --connection '<service_url>' \
       --table conditions \
       --file ~/Downloads/example.csv \
       --workers 4 \
       --copy-options "CSV" \
       --skip-header

    We recommend setting the number of workers below the number of available CPU cores on your client machine or server, so that the workers don't compete for resources. This helps your ingest complete faster.

  2. OPTIONAL: If you don’t want to use the timescaledb-parallel-copy tool, or if you have a very small dataset, you can use the PostgreSQL COPY command instead:

     psql '<service_url>/new_db?sslmode=require' -c "\copy conditions FROM <example.csv> WITH (FORMAT CSV, HEADER)"
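
For reference, the column order in the CSV file must match the columns of the hypertable. A file for the conditions table could look like this, with hypothetical values:

     time,location,temperature
     2023-01-01T00:00:00Z,garage,21.5
     2023-01-01T00:10:00Z,basement,18.2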

Insert data directly using a client driver

You can use a client driver, such as JDBC, Python, or Node.js, to insert data directly into your new database.

See the PostgreSQL instructions for using the ODBC driver.

See the Code Quick Starts for using various languages, including Python and Node.js.
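
For example, this is a minimal Python sketch that uses the psycopg2 driver to batch-insert rows into the conditions hypertable created earlier. The connection string and the rows are placeholders:

     import psycopg2
     from psycopg2.extras import execute_values

     # Placeholder connection string; use the credentials from your service overview
     CONNECTION = "postgres://<USERNAME>:<PASSWORD>@<HOSTNAME>:<PORT>/new_db?sslmode=require"

     # Hypothetical rows matching the conditions columns: time, location, temperature
     rows = [
         ("2023-01-01T00:00:00Z", "garage", 21.5),
         ("2023-01-01T00:10:00Z", "basement", 18.2),
     ]

     with psycopg2.connect(CONNECTION) as conn:
         with conn.cursor() as cur:
             # execute_values batches all rows into a single INSERT statement
             execute_values(
                 cur,
                 "INSERT INTO conditions (time, location, temperature) VALUES %s",
                 rows,
             )
         # leaving the connection block commits the transaction on success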

Insert data directly using a message queue

If you have data stored in a message queue, you can import it into your TimescaleDB database. This section provides instructions on using the Kafka Connect PostgreSQL connector.

The connector is deployed to a Kafka Connect runtime service. It monitors one or more schemas in a TimescaleDB server, and writes all change events to Kafka topics, which can then be consumed independently by one or more clients. Kafka Connect can be distributed to provide fault tolerance, which ensures that the connectors are running and continually keeping up with changes in the database.

You can also use the PostgreSQL connector as a library, without Kafka or Kafka Connect. This allows applications and services to connect directly to TimescaleDB and obtain the ordered change events. In this environment, the application must record the progress of the connector so that, when it is restarted, the connector can continue where it left off. This approach can be useful for less critical use cases. However, for production use cases, we recommend that you use the connector with Kafka and Kafka Connect.

See these instructions for using the Kafka connector.
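
For illustration, connectors are usually registered with a Kafka Connect worker through its REST API. This sketch assumes a worker listening on localhost:8083 and uses property names from the Debezium PostgreSQL connector; the exact properties vary by connector version, so check the instructions above for the configuration your deployment needs:

     curl -X POST http://localhost:8083/connectors \
       -H "Content-Type: application/json" \
       -d '{
         "name": "timescaledb-connector",
         "config": {
           "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
           "database.hostname": "<HOSTNAME>",
           "database.port": "<PORT>",
           "database.user": "<USERNAME>",
           "database.password": "<PASSWORD>",
           "database.dbname": "new_db",
           "schema.include.list": "public"
         }
       }'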