Data manipulation

This section provides an overview of how to manipulate data (e.g., inserting rows) with CrateDB.

See also

General use: Data definition

General use: Querying

Inserting data

Inserting data into CrateDB is done using the SQL INSERT statement.

Note

The column list is always ordered based on the column position in the CREATE TABLE statement of the table. If the insert columns are omitted, the values in the VALUES clauses must correspond to the table columns in that order.

Inserting a row:

  cr> insert into locations (id, date, description, kind, name, position)
  ... values (
  ...   '14',
  ...   '2013-09-12T21:43:59.000Z',
  ...   'Blagulon Kappa is the planet to which the police are native.',
  ...   'Planet',
  ...   'Blagulon Kappa',
  ...   7
  ... );
  INSERT OK, 1 row affected (... sec)

When inserting a single row, any error that occurs is returned in the response.

Multiple rows can be inserted at once (also known as a bulk insert) by defining multiple value lists in the INSERT statement:

  cr> insert into locations (id, date, description, kind, name, position) values
  ... (
  ...   '16',
  ...   '2013-09-14T21:43:59.000Z',
  ...   'Blagulon Kappa II is the planet to which the police are native.',
  ...   'Planet',
  ...   'Blagulon Kappa II',
  ...   19
  ... ),
  ... (
  ...   '17',
  ...   '2013-09-13T16:43:59.000Z',
  ...   'Brontitall is a planet with a warm, rich atmosphere and no mountains.',
  ...   'Planet',
  ...   'Brontitall',
  ...   10
  ... );
  INSERT OK, 2 rows affected (... sec)

When inserting multiple rows, a failure for some of the rows does not produce an error. Instead, the number of rows affected is decreased by the number of rows that failed to be inserted.

When inserting into tables that contain generated columns or base columns with a DEFAULT clause, their values can be safely omitted. They are generated upon insert:

  cr> CREATE TABLE debit_card (
  ...   owner text,
  ...   num_part1 integer,
  ...   num_part2 integer,
  ...   check_sum integer GENERATED ALWAYS AS ((num_part1 + num_part2) * 42),
  ...   "user" text DEFAULT 'crate'
  ... );
  CREATE OK, 1 row affected (... sec)

  cr> insert into debit_card (owner, num_part1, num_part2) values
  ... ('Zaphod Beeblebrox', 1234, 5678);
  INSERT OK, 1 row affected (... sec)

  cr> select * from debit_card;
  +-------------------+-----------+-----------+-----------+-------+
  | owner             | num_part1 | num_part2 | check_sum | user  |
  +-------------------+-----------+-----------+-----------+-------+
  | Zaphod Beeblebrox |      1234 |      5678 |    290304 | crate |
  +-------------------+-----------+-----------+-----------+-------+
  SELECT 1 row in set (... sec)

For generated columns, if a value is provided, it is validated against the column's generation clause and the currently inserted row:

  cr> insert into debit_card (owner, num_part1, num_part2, check_sum) values
  ... ('Arthur Dent', 9876, 5432, 642935);
  SQLParseException[Given value 642935 for generated column check_sum does not match calculation ((num_part1 + num_part2) * 42) = 642936]
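
The generation clause here is plain arithmetic, so the validation above can be checked by hand. A minimal sketch in Python (the function name is illustrative, not part of CrateDB):

```python
def check_sum(num_part1: int, num_part2: int) -> int:
    """Mirror of the GENERATED ALWAYS AS ((num_part1 + num_part2) * 42) clause."""
    return (num_part1 + num_part2) * 42

# The value computed for Zaphod's row above:
print(check_sum(1234, 5678))   # 290304

# Arthur's row supplies 642935, but the clause computes 642936,
# which is why the insert above is rejected:
print(check_sum(9876, 5432))   # 642936
```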

Inserting data by query

It is possible to insert data using a query instead of values. Column data types of the source and target table can differ as long as the values are castable. This makes it possible to restructure the table's data: renaming a field, changing a field's data type, or converting a regular table into a partitioned one.

Example of changing a field's data type, in this case changing the position column from integer to smallint:

  cr> create table locations2 (
  ...   id text primary key,
  ...   name text,
  ...   date timestamp with time zone,
  ...   kind text,
  ...   position smallint,
  ...   description text
  ... ) clustered by (id) into 2 shards with (number_of_replicas = 0);
  CREATE OK, 1 row affected (... sec)

  cr> insert into locations2 (id, name, date, kind, position, description)
  ... (
  ...   select id, name, date, kind, position, description
  ...   from locations
  ...   where position < 10
  ... );
  INSERT OK, 14 rows affected (... sec)

Example of creating a new partitioned table out of the locations table with data partitioned by year:

  cr> create table locations_parted (
  ...   id text primary key,
  ...   name text,
  ...   year text primary key,
  ...   date timestamp with time zone,
  ...   kind text,
  ...   position integer
  ... ) clustered by (id) into 2 shards
  ... partitioned by (year) with (number_of_replicas = 0);
  CREATE OK, 1 row affected (... sec)

  cr> insert into locations_parted (id, name, year, date, kind, position)
  ... (
  ...   select
  ...     id,
  ...     name,
  ...     date_format('%Y', date),
  ...     date,
  ...     kind,
  ...     position
  ...   from locations
  ... );
  INSERT OK, 16 rows affected (... sec)

Resulting partitions of the last insert by query:

  cr> select table_name, partition_ident, values, number_of_shards, number_of_replicas
  ... from information_schema.table_partitions
  ... where table_name = 'locations_parted'
  ... order by partition_ident;
  +------------------+-----------------+------------------+------------------+--------------------+
  | table_name       | partition_ident | values           | number_of_shards | number_of_replicas |
  +------------------+-----------------+------------------+------------------+--------------------+
  | locations_parted | 042j2e9n74      | {"year": "1979"} |                2 |                  0 |
  | locations_parted | 042j4c1h6c      | {"year": "2013"} |                2 |                  0 |
  +------------------+-----------------+------------------+------------------+--------------------+
  SELECT 2 rows in set (... sec)
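
The partition value was produced by date_format('%Y', date), i.e. the four-digit year rendered as text. For illustration, a roughly equivalent computation in Python (a sketch of the format-string semantics only, not CrateDB's implementation):

```python
from datetime import datetime

def partition_year(ts: str) -> str:
    # Parse an ISO 8601 timestamp and render the year as text,
    # mirroring date_format('%Y', date) from the insert above.
    return datetime.fromisoformat(ts.replace("Z", "+00:00")).strftime("%Y")

print(partition_year("2013-09-13T16:43:59.000Z"))   # 2013
```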

Note

limit, offset and order by are not supported inside the query statement.

Upserts (ON CONFLICT DO UPDATE SET)

The ON CONFLICT DO UPDATE SET clause is used to update an existing row when the insert cannot be performed because a document with the same PRIMARY KEY already exists. This type of operation is commonly referred to as an upsert, a combination of “update” and “insert”.

  cr> select
  ...   name,
  ...   visits,
  ...   extract(year from last_visit) as last_visit
  ... from uservisits order by name;
  +----------+--------+------------+
  | name     | visits | last_visit |
  +----------+--------+------------+
  | Ford     |      1 |       2013 |
  | Trillian |      3 |       2013 |
  +----------+--------+------------+
  SELECT 2 rows in set (... sec)

  cr> insert into uservisits (id, name, visits, last_visit) values
  ... (
  ...   0,
  ...   'Ford',
  ...   1,
  ...   '2015-09-12'
  ... ) on conflict (id) do update set
  ...   visits = visits + 1,
  ...   last_visit = '2015-01-12';
  INSERT OK, 1 row affected (... sec)

  cr> select
  ...   name,
  ...   visits,
  ...   extract(year from last_visit) as last_visit
  ... from uservisits where id = 0;
  +------+--------+------------+
  | name | visits | last_visit |
  +------+--------+------------+
  | Ford |      2 |       2015 |
  +------+--------+------------+
  SELECT 1 row in set (... sec)
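
The semantics can be sketched with a plain dictionary keyed by primary key (a toy model, not CrateDB code): on a duplicate key, the DO UPDATE SET assignments run against the stored row; otherwise it is a plain insert.

```python
# Toy model of an upsert on a table keyed by primary key.
# All names here are illustrative.
table = {0: {"name": "Ford", "visits": 1, "last_visit": "2013-07-13"}}

def upsert(table, pk, row, do_update_set):
    if pk in table:                    # duplicate-key conflict:
        do_update_set(table[pk], row)  # run the DO UPDATE SET assignments
    else:
        table[pk] = row                # no conflict: plain insert

# Mirrors the statement above: visits = visits + 1, last_visit = '2015-01-12'.
upsert(
    table, 0,
    {"name": "Ford", "visits": 1, "last_visit": "2015-09-12"},
    lambda old, new: old.update(visits=old["visits"] + 1, last_visit="2015-01-12"),
)
print(table[0]["visits"], table[0]["last_visit"])   # 2 2015-01-12
```

The second argument of the update callback plays the role of the row that would have been inserted, analogous to the excluded table described below.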

It’s possible to refer to the values that would have been inserted had no duplicate-key conflict occurred by using the special excluded table. This table is especially useful in multiple-row inserts, to refer to the current row’s values:

  cr> insert into uservisits (id, name, visits, last_visit) values
  ... (
  ...   0,
  ...   'Ford',
  ...   2,
  ...   '2016-01-13'
  ... ),
  ... (
  ...   1,
  ...   'Trillian',
  ...   5,
  ...   '2016-01-15'
  ... ) on conflict (id) do update set
  ...   visits = visits + excluded.visits,
  ...   last_visit = excluded.last_visit;
  INSERT OK, 2 rows affected (... sec)

  cr> select
  ...   name,
  ...   visits,
  ...   extract(year from last_visit) as last_visit
  ... from uservisits order by name;
  +----------+--------+------------+
  | name     | visits | last_visit |
  +----------+--------+------------+
  | Ford     |      4 |       2016 |
  | Trillian |      8 |       2016 |
  +----------+--------+------------+
  SELECT 2 rows in set (... sec)

This can also be done when using a query instead of values:

  cr> create table uservisits2 (
  ...   id integer primary key,
  ...   name text,
  ...   visits integer,
  ...   last_visit timestamp with time zone
  ... ) clustered by (id) into 2 shards with (number_of_replicas = 0);
  CREATE OK, 1 row affected (... sec)

  cr> insert into uservisits2 (id, name, visits, last_visit)
  ... (
  ...   select id, name, visits, last_visit
  ...   from uservisits
  ... );
  INSERT OK, 2 rows affected (... sec)

  cr> insert into uservisits2 (id, name, visits, last_visit)
  ... (
  ...   select id, name, visits, last_visit
  ...   from uservisits
  ... ) on conflict (id) do update set
  ...   visits = visits + excluded.visits,
  ...   last_visit = excluded.last_visit;
  INSERT OK, 2 rows affected (... sec)

  cr> select
  ...   name,
  ...   visits,
  ...   extract(year from last_visit) as last_visit
  ... from uservisits order by name;
  +----------+--------+------------+
  | name     | visits | last_visit |
  +----------+--------+------------+
  | Ford     |      4 |       2016 |
  | Trillian |      8 |       2016 |
  +----------+--------+------------+
  SELECT 2 rows in set (... sec)

Updating data

To update documents in CrateDB, the SQL UPDATE statement can be used:

  cr> update locations set description = 'Updated description'
  ... where name = 'Bartledan';
  UPDATE OK, 1 row affected (... sec)

Updating nested objects is also supported:

  cr> update locations set inhabitants['name'] = 'Human' where name = 'Bartledan';
  UPDATE OK, 1 row affected (... sec)

It’s also possible to reference a column within the expression, for example to increment a number like this:

  cr> update locations set position = position + 1 where position < 3;
  UPDATE OK, 6 rows affected (... sec)

Note

If the same documents are updated concurrently, a VersionConflictException might occur. CrateDB contains retry logic that tries to resolve the conflict automatically.

Deleting data

Deleting rows in CrateDB is done using the SQL DELETE statement:

  cr> delete from locations where position > 3;
  DELETE OK, ... rows affected (... sec)

Import and export

Importing data

Using the COPY FROM statement, CrateDB nodes can import data from local files or files that are available over the network.

The supported data formats are JSON and CSV. The format is inferred from the file extension, if possible. Alternatively, the format can be provided as an option (see WITH). If the format is not provided and cannot be inferred from the file extension, the file is processed as JSON.

JSON files must contain a single JSON object per line.

Example JSON data:

  {"id": 1, "quote": "Don't panic"}
  {"id": 2, "quote": "Ford, you're turning into a penguin. Stop it."}
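
A file in this line-delimited format can be produced with the standard json module. A minimal sketch (the file name and rows are illustrative):

```python
import json

rows = [
    {"id": 1, "quote": "Don't panic"},
    {"id": 2, "quote": "Ford, you're turning into a penguin. Stop it."},
]

# Write one JSON object per line -- the shape COPY FROM expects.
with open("quotes.json", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```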

CSV files must contain a header row with comma-separated values, which will be used as the column names.

Example CSV data:

  id,quote
  1,"Don't panic"
  2,"Ford, you're turning into a penguin. Stop it."
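
Such a file can be generated with the standard csv module, which takes care of quoting fields that contain commas (again, the file name and rows are illustrative):

```python
import csv

rows = [
    {"id": 1, "quote": "Don't panic"},
    {"id": 2, "quote": "Ford, you're turning into a penguin. Stop it."},
]

# Write a header row followed by the data; the csv module quotes
# fields containing commas, as in the second quote above.
with open("quotes.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "quote"])
    writer.writeheader()
    writer.writerows(rows)
```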

Note

  • The COPY FROM statement will not convert or validate your data. Please make sure that it fits your schema.

  • Values for generated columns will be computed if the data does not contain them, otherwise they will be imported but not validated, so please make sure that they are correct.

  • Furthermore, column names in your data are considered case-sensitive (as if they were quoted in a SQL statement).

For further information, including how to import data to Partitioned tables, take a look at the COPY FROM reference.

Example

Here’s an example statement:

  cr> COPY quotes FROM 'file:///tmp/import_data/quotes.json';
  COPY OK, 3 rows affected (... sec)

This statement imports data from the /tmp/import_data/quotes.json file into the existing quotes table.

Note

The file you specify must be available on one of the CrateDB nodes. This statement will not work with files that are local to your client.

For the above statement, every node in the cluster will attempt to import data from a file located at /tmp/import_data/quotes.json relative to the crate process (i.e., if you are running CrateDB inside a container, the file must also be inside the container).

If you want to use COPY FROM to import data from a file that is on your local computer, you must first transfer the file to one of the CrateDB nodes.

Consult the COPY FROM reference for additional information.

If you want to import all files inside the /tmp/import_data directory on every CrateDB node, you can use a wildcard, like so:

  cr> COPY quotes FROM '/tmp/import_data/*' WITH (bulk_size = 4);
  COPY OK, 3 rows affected (... sec)

This wildcard can also be used to only match certain files in a directory:

  cr> COPY quotes FROM '/tmp/import_data/qu*.json';
  COPY OK, 3 rows affected (... sec)
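
The pattern follows ordinary shell-style globbing. For illustration, the same matching logic in Python (the file names are made up):

```python
from fnmatch import fnmatch

files = ["quotes.json", "quotes_extra.json", "locations.json"]

# 'qu*.json' matches any file starting with 'qu' and ending in '.json'.
matched = [f for f in files if fnmatch(f, "qu*.json")]
print(matched)   # ['quotes.json', 'quotes_extra.json']
```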

Detailed error reporting

If the RETURN SUMMARY clause is specified, a result set containing information about failures and successfully imported records is returned.

  cr> COPY locations FROM '/tmp/import_data/locations_with_failure/locations*.json' RETURN SUMMARY;
  +--...--+----------...--------+---------------+-------------+-------------------...--------------------------------------+
  | node  | uri                 | success_count | error_count | errors                                                     |
  +--...--+----------...--------+---------------+-------------+-------------------...--------------------------------------+
  | {...} | .../locations1.json |             6 |           0 | {}                                                         |
  | {...} | .../locations2.json |             5 |           2 | {"failed to parse ...{"count": 2, "line_numbers": [1, 2]}} |
  +--...--+----------...--------+---------------+-------------+-------------------...--------------------------------------+
  COPY 2 rows in set (... sec)

If an error happens while processing the URI itself, the error_count and success_count columns contain NULL values to indicate that no records were processed.

  cr> COPY locations FROM '/tmp/import_data/not-existing.json' RETURN SUMMARY;
  +--...--+-----------...---------+---------------+-------------+------------------------...------------------------+
  | node  | uri                   | success_count | error_count | errors                                            |
  +--...--+-----------...---------+---------------+-------------+------------------------...------------------------+
  | {...} | .../not-existing.json |          NULL |        NULL | {"...not-existing.json (...)": {"count": 1, ...}} |
  +--...--+-----------...---------+---------------+-------------+------------------------...------------------------+
  COPY 1 row in set (... sec)

See COPY FROM for more information.

Exporting data

Data can be exported using the COPY TO statement. Data is exported in a distributed way, meaning each node will export its own data.

Replicated data is not exported, so every row of an exported table is stored only once.

This example shows how to export a given table into files named after the table and shard ID with gzip compression:

  cr> REFRESH TABLE quotes;
  REFRESH OK...

  cr> COPY quotes TO DIRECTORY '/tmp/' with (compression='gzip');
  COPY OK, 3 rows affected ...

Instead of exporting a whole table, rows can be filtered by an optional WHERE clause condition. This is useful if only a subset of the data needs to be exported:

  cr> COPY quotes WHERE match(quote_ft, 'time') TO DIRECTORY '/tmp/' WITH (compression='gzip');
  COPY OK, 2 rows affected ...

For further details see COPY TO.