---
title: REST API
sidebar_label: REST
description: REST API reference documentation.
---

The QuestDB REST API is based on standard HTTP features and is understood by off-the-shelf HTTP clients. It provides a simple way to interact with QuestDB and is compatible with most programming languages. API functions are fully keyed on the URL and they use query parameters as their arguments.
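Because calls are fully keyed on the URL, a request can be assembled with ordinary HTTP tooling. A minimal Python sketch of building an `/exec` URL (the table and column names here are illustrative, and nothing is sent over the network):

```python
from urllib.parse import urlencode

# Assemble an /exec call: the function is keyed on the URL and the
# arguments travel as URL-encoded query parameters.
base = "http://localhost:9000/exec"
params = {"query": "select timestamp, tempF from weather limit 2;", "count": "true"}
url = base + "?" + urlencode(params)
print(url)
```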

The Web Console is the official web client relying on the REST API. Find out more in the using the Web Console section.

## Available methods

  • /imp for importing data from .CSV files
  • /exec to execute a SQL statement
  • /exp to export data

## /imp - Import data

/imp streams tabular text data directly into a table. It supports CSV, TAB and pipe (|) delimited inputs with optional headers. There are no restrictions on data size. Data types and structures are detected automatically, without additional configuration. In some cases, additional configuration can be provided to improve the automatic detection as described in user-defined schema.
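The multipart/form-data body that /imp consumes can be assembled by hand when no HTTP library is available; a Python sketch (the boundary string is arbitrary, the `data` field name mirrors the curl examples on this page, and nothing is sent here):

```python
# Assemble a multipart/form-data body for /imp by hand.
# The "data" field name matches the curl -F data=@... examples below.
boundary = "questdb-import-boundary"
csv_payload = "ts,visMiles,tempF,dewpF\n2010-01-01T00:00:00.000000Z,8.8,34,30\n"

body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="data"; filename="weather.csv"\r\n'
    "Content-Type: text/csv\r\n\r\n"
    f"{csv_payload}\r\n"
    f"--{boundary}--\r\n"
).encode()

headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
# body and headers would then be POSTed to http://localhost:9000/imp
```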

:::note

The structure detection algorithm analyses a chunk at the beginning of the file and relies on the relative uniformity of the data. When the first chunk is not representative of the rest of the data, automatic imports can yield errors.

If the data follows a uniform pattern, the number of lines which are analyzed for schema detection can be reduced to improve performance during uploads using the http.text.analysis.max.lines key. Usage of this setting is described in the HTTP server configuration documentation.

:::

### URL parameters

/imp expects an HTTP POST request using the multipart/form-data Content-Type, with the following optional URL parameters, which must be URL encoded:

| Parameter     | Required | Default          | Description                                                                                                                                                                                                                   |
| ------------- | -------- | ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `atomicity`   | No       | `2`              | `0`, `1` or `2`. Behaviour when an error is detected in the data. `0`: the entire file will be skipped. `1`: the row is skipped. `2`: the column is skipped.                                                                   |
| `durable`     | No       | `false`          | `true` or `false`. When set to `true`, the import will be resilient against OS errors or power losses by forcing the data to be fully persisted before sending a response back to the user.                                   |
| `fmt`         | No       | `tabular`        | Can be set to `json` to get the response formatted as such.                                                                                                                                                                   |
| `forceHeader` | No       | `false`          | `true` or `false`. When `false`, QuestDB will try to infer if the first line of the file is the header line. When set to `true`, QuestDB will expect that line to be the header line.                                          |
| `name`        | No       | Name of the file | Name of the table to create, see below.                                                                                                                                                                                       |
| `overwrite`   | No       | `false`          | `true` or `false`. When set to `true`, any existing data or structure will be overwritten.                                                                                                                                    |
| `partitionBy` | No       | `NONE`           | See partitions.                                                                                                                                                                                                               |
| `skipLev`     | No       | `false`          | `true` or `false`. Skip "Line Extra Values"; when set to `true`, the parser will ignore those extra values rather than ignoring the entire line. An extra value is something in addition to what is defined by the header.     |
| `timestamp`   | No       |                  | Name of the column that will be used as a designated timestamp.                                                                                                                                                               |
```shell
curl -F data=@weather.csv \
  'http://localhost:9000/imp?overwrite=true&name=new_table&timestamp=ts&partitionBy=MONTH'
```

Further example queries, with context on the contents of the source CSV file and the generated tables, are provided in the examples section below.
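The URL parameters from the curl example above can also be generated programmatically; a small Python sketch (the table and column names are taken from that example, not prescribed by the API):

```python
from urllib.parse import urlencode

# URL-encode the optional /imp parameters shown in the curl example.
params = {
    "overwrite": "true",
    "name": "new_table",
    "timestamp": "ts",
    "partitionBy": "MONTH",
}
url = "http://localhost:9000/imp?" + urlencode(params)
print(url)
```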

### Names

Table and column names are subject to restrictions; the following characters are automatically removed:

  1. [whitespace]
  2. .
  3. ?
  4. ,
  5. :
  6. \
  7. /
  8. \\
  9. \0
  10. )
  11. (
  12. _
  13. +
  14. -
  15. *
  16. ~
  17. %

When the header row is missing, column names are generated automatically.
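As an illustration of these restrictions, a hypothetical helper that mimics the removal behaviour on a candidate name (simplified to the characters listed above, treating whitespace as a space; this is not QuestDB's actual implementation):

```python
# Characters removed from table and column names, per the list above
# (whitespace simplified to a plain space for this sketch).
REMOVED = set(' .?,:\\/)(_+-*~%') | {'\0'}

def sanitize_name(name: str) -> str:
    """Strip the characters that are removed from table/column names."""
    return ''.join(ch for ch in name if ch not in REMOVED)

print(sanitize_name("my table?.csv"))
```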

### Consistency guarantees

/imp benefits from the properties of the QuestDB storage model, although Atomicity and Durability can be relaxed to meet convenience and performance demands.

#### Atomicity

Imports are fully protected against connection problems. If the server detects a closed socket, the entire request is rolled back instantly and transparently for any existing readers. The only time data can be partially imported is when atomicity is in relaxed mode and data cannot be converted to the column type. In this scenario, any "defective" row of data is discarded and /imp continues to stream the request data into the table.

#### Consistency

This property is guaranteed by the consistency of append transactions against the QuestDB storage engine.

#### Isolation

Data is committed to the QuestDB storage engine at the end of the request. Uncommitted transactions are not visible to readers.

#### Durability

/imp streams data from the network socket buffer directly into memory-mapped files. At this point, the data is handed over to the OS and is resilient against QuestDB internal errors and unlikely, but hypothetically possible, crashes. This is the default method of appending data, chosen for its performance characteristics. In cases where a transaction has to be resilient against OS errors or power losses, physical durability can be enforced. At a cost to append performance, the QuestDB storage engine will also guarantee that each memory block is flushed to the physical device.

### Examples

#### Automatic schema detection

The following example uploads a file weather.csv which has the following contents:

| ts                          | visMiles       | tempF | dewpF |
| --------------------------- | -------------- | ----- | ----- |
| 2010-01-01T00:00:00.000000Z | 8.8            | 34    | 30    |
| 2010-01-01T00:51:00.000000Z | 9.100000000000 | 34    | 30    |
| 2010-01-01T01:36:00.000000Z | 8.0            | 34    | 30    |

An import can be performed with automatic schema detection with the following request:

```shell
curl -F data=@weather.csv 'http://localhost:9000/imp'
```

An HTTP status code of 200 will be returned and the response will be:

```text
+-------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern | Locale | Errors |
| Partition by  | NONE        |         |        |        |
| Timestamp     | NONE        |         |        |        |
+-------------------------------------------------------------------------------+
| Rows handled  | 49976       |         |        |        |
| Rows imported | 49976       |         |        |        |
+-------------------------------------------------------------------------------+
| 0 | ts       | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE    | 0 |
| 2 | tempF    | INT       | 0 |
| 3 | dewpF    | INT       | 0 |
+-------------------------------------------------------------------------------+
```

#### User-defined schema

To specify the schema of a table, a schema object can be provided:

```shell
curl \
  -F schema='[{"name":"dewpF", "type": "STRING"}]' \
  -F data=@weather.csv 'http://localhost:9000/imp'
```

```text
+------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern | Locale | Errors |
| Partition by  | NONE        |         |        |        |
| Timestamp     | NONE        |         |        |        |
+------------------------------------------------------------------------------+
| Rows handled  | 49976       |         |        |        |
| Rows imported | 49976       |         |        |        |
+------------------------------------------------------------------------------+
| 0 | ts       | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE    | 0 |
| 2 | tempF    | INT       | 0 |
| 3 | dewpF    | STRING    | 0 |
+------------------------------------------------------------------------------+
```

#### Non-standard timestamp formats

Given a file weather.csv with the following contents, where the timestamp uses a non-standard format:

| ts                    | visMiles       | tempF | dewpF |
| --------------------- | -------------- | ----- | ----- |
| 2010-01-01 - 00:00:00 | 8.8            | 34    | 30    |
| 2010-01-01 - 00:51:00 | 9.100000000000 | 34    | 30    |
| 2010-01-01 - 01:36:00 | 8.0            | 34    | 30    |

The file can be imported as usual with the following request:

```shell
curl -F data=@weather.csv 'http://localhost:9000/imp'
```

An HTTP status code of 200 will be returned and the import will be successful, but the timestamp column is detected as a STRING type:

```text
+-------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern | Locale | Errors |
| Partition by  | NONE        |         |        |        |
| Timestamp     | NONE        |         |        |        |
+-------------------------------------------------------------------------------+
| Rows handled  | 49976       |         |        |        |
| Rows imported | 49976       |         |        |        |
+-------------------------------------------------------------------------------+
| 0 | ts       | STRING | 0 |
| 1 | visMiles | DOUBLE | 0 |
| 2 | tempF    | INT    | 0 |
| 3 | dewpF    | INT    | 0 |
+-------------------------------------------------------------------------------+
```

To amend the timestamp column type, the following example curl supplies a schema JSON object specifying that the ts column is of TIMESTAMP type with the pattern `yyyy-MM-dd - HH:mm:ss`.

Additionally, URL parameters are provided:

  • overwrite=true to overwrite the existing table
  • timestamp=ts to specify that the ts column is the designated timestamp column for this table
  • partitionBy=MONTH to set a partitioning strategy on the table by MONTH
```shell
curl \
  -F schema='[{"name":"ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd - HH:mm:ss"}]' \
  -F data=@weather.csv \
  'http://localhost:9000/imp?overwrite=true&timestamp=ts&partitionBy=MONTH'
```

The HTTP status code will be set to 200 and the response will show 0 errors parsing the timestamp column:

```text
+------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern | Locale | Errors |
| Partition by  | MONTH       |         |        |        |
| Timestamp     | ts          |         |        |        |
+------------------------------------------------------------------------------+
| Rows handled  | 49976       |         |        |        |
| Rows imported | 49976       |         |        |        |
+------------------------------------------------------------------------------+
| 0 | ts       | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE    | 0 |
| 2 | tempF    | INT       | 0 |
| 3 | dewpF    | INT       | 0 |
+------------------------------------------------------------------------------+
```
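The pattern used above can be sanity-checked against sample values before importing. A Python sketch, assuming the QuestDB pattern `yyyy-MM-dd - HH:mm:ss` corresponds to the strptime format below:

```python
from datetime import datetime

# "yyyy-MM-dd - HH:mm:ss" in QuestDB's pattern syntax is assumed to map
# to this strptime format; parse one sample row to verify.
fmt = "%Y-%m-%d - %H:%M:%S"
ts = datetime.strptime("2010-01-01 - 00:51:00", fmt)
print(ts.isoformat())
```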

## /exec - Execute queries

/exec compiles and executes the SQL query supplied as a parameter and returns a JSON response.

:::note

The query execution terminates automatically when the socket connection is closed.

:::

### Overview

/exec expects an HTTP GET request with the following query parameters:

| Parameter | Required | Default | Description                                                                                                                                                                                       |
| --------- | -------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `count`   | No       | `false` | `true` or `false`. Counts the number of rows and returns this value.                                                                                                                               |
| `limit`   | No       |         | Paging parameter. For example, `limit=10,20` will return row numbers 10 through to 20 inclusive and `limit=20` will return the first 20 rows, which is equivalent to `limit=0,20`. `limit=-20` will return the last 20 rows. |
| `nm`      | No       | `false` | `true` or `false`. Skips the metadata section of the response when set to `true`.                                                                                                                  |
| `query`   | Yes      |         | URL encoded query text. It can be multi-line.                                                                                                                                                      |
| `timings` | No       | `false` | `true` or `false`. When set to `true`, QuestDB will also include a `timings` property in the response which gives details about the execution.                                                      |

The parameters must be URL encoded.
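The limit semantics described above can be illustrated with plain list slicing; this is only a model of the documented behaviour (rows numbered from 1), not QuestDB code:

```python
def apply_limit(rows, spec):
    """Emulate the documented limit semantics on an in-memory row list.

    "N"   -> first N rows        "-N" -> last N rows
    "L,H" -> rows L through H inclusive (1-based)
    """
    if "," in spec:
        lo, hi = (int(p) for p in spec.split(","))
        return rows[max(lo - 1, 0):hi]
    n = int(spec)
    return rows[:n] if n >= 0 else rows[n:]

rows = list(range(1, 31))          # 30 numbered rows
print(apply_limit(rows, "10,20"))  # rows 10..20 inclusive
```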

### Examples

This endpoint returns responses in the following format:

```json
{
  "query": string,
  "columns": Array<{ "name": string, "type": string }>,
  "dataset": Array<Array<Value for Column1, Value for Column2>>,
  "count": Optional<number>,
  "timings": Optional<{ compiler: number, count: number, execute: number }>
}
```

You can find the exact list of types on the dedicated page.
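A response in this shape is straightforward to decode into per-row records; a Python sketch using a hand-written sample payload that matches the format above (not output from a live server):

```python
import json

# A sample /exec response body matching the format described above.
payload = json.loads("""
{
  "query": "select timestamp, tempF from weather limit 2;",
  "columns": [
    {"name": "timestamp", "type": "TIMESTAMP"},
    {"name": "tempF", "type": "INT"}
  ],
  "dataset": [
    ["2010-01-01T00:00:00.000000Z", 34],
    ["2010-01-01T00:51:00.000000Z", 34]
  ],
  "count": 2
}
""")

# Zip each dataset row with the column names to get dictionaries.
names = [c["name"] for c in payload["columns"]]
records = [dict(zip(names, row)) for row in payload["dataset"]]
print(records[0])
```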

Considering the query:

```shell
curl -G \
  --data-urlencode "query=select timestamp, tempF from weather limit 2;" \
  --data-urlencode "count=true" \
  http://localhost:9000/exec
```

An HTTP status code of 200 is returned with the following response body:

```json
{
  "query": "select timestamp, tempF from weather limit 2;",
  "columns": [
    {
      "name": "timestamp",
      "type": "TIMESTAMP"
    },
    {
      "name": "tempF",
      "type": "INT"
    }
  ],
  "dataset": [
    ["2010-01-01T00:00:00.000000Z", 34],
    ["2010-01-01T00:51:00.000000Z", 34]
  ],
  "count": 2
}
```

## /exp - Export data

This endpoint allows you to pass URL-encoded queries, but the response is returned in a tabular form to be saved and reused, as opposed to JSON.
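Since the response is plain CSV text, it can be fed directly into a CSV reader; a Python sketch using a hand-written sample body in the same tabular form (no network involved):

```python
import csv
import io

# A sample /exp response body in the tabular (CSV) form described above.
response_text = (
    '"AccidentIndex","Date","Time"\n'
    '200501BS00001,"2005-01-04T00:00:00.000Z",17:42\n'
    '200501BS00002,"2005-01-05T00:00:00.000Z",17:36\n'
)

reader = csv.reader(io.StringIO(response_text))
header = next(reader)   # first line is the column names
rows = list(reader)
print(header, len(rows))
```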

### Overview

/exp expects an HTTP GET request with the following parameters:

| Parameter | Required | Description                                                                                                                                                                                        |
| --------- | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `limit`   | No       | Paging parameter. For example, `limit=10,20` will return row numbers 10 through to 20 inclusive and `limit=20` will return the first 20 rows, which is equivalent to `limit=0,20`. `limit=-20` will return the last 20 rows. |
| `query`   | Yes      | URL encoded query text. It can be multi-line.                                                                                                                                                        |

The parameters must be URL encoded.

### Examples

Considering the query:

```shell
curl -G \
  --data-urlencode "query=select AccidentIndex, Date, Time from 'Accidents0514.csv'" \
  --data-urlencode "limit=5" \
  http://localhost:9000/exp
```

An HTTP status code of 200 is returned with the following response body:

```csv
"AccidentIndex","Date","Time"
200501BS00001,"2005-01-04T00:00:00.000Z",17:42
200501BS00002,"2005-01-05T00:00:00.000Z",17:36
200501BS00003,"2005-01-06T00:00:00.000Z",00:15
200501BS00004,"2005-01-07T00:00:00.000Z",10:35
200501BS00005,"2005-01-10T00:00:00.000Z",21:13
```

## Error responses

### Malformed queries

A call to /exec or /exp that contains a malformed query will return a response body with the following format:

```json
{
  "query": string,
  "error": string,
  "position": number
}
```

The position field is the character number from the beginning of the string where the error was found.
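One way to use the position field is to point a caret at the offending character; a small Python sketch over a sample error payload matching the format above:

```python
# A sample error payload in the shape described above.
error = {
    "query": "SELECTT * FROM table;",
    "error": "function, literal or constant is expected",
    "position": 8,
}

# Print the query with a caret under the reported error position.
caret_line = " " * error["position"] + "^"
print(error["query"])
print(caret_line, error["error"])
```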

Considering the query:

```shell
curl -G \
  --data-urlencode "query=SELECTT * FROM table;" \
  http://localhost:9000/exp
```

An HTTP status code of 200 is returned with the following response body:

```json
{
  "query": "SELECTT * FROM table;",
  "error": "function, literal or constant is expected",
  "position": 8
}
```