Write CSV data to InfluxDB

This page documents an earlier version of InfluxDB. InfluxDB v2.7 is the latest stable version.

Write CSV data with any of the following methods: the influx write command, Telegraf, or Flux.

influx write command

Use the influx write command to write CSV data to InfluxDB. Include Extended annotated CSV annotations to specify how the data translates into line protocol. Include annotations in the CSV file or inject them using the --header flag of the influx write command.

Example write command
  influx write -b example-bucket -f path/to/example.csv
example.csv
  #datatype measurement,tag,double,dateTime:RFC3339
  m,host,used_percent,time
  mem,host1,64.23,2020-01-01T00:00:00Z
  mem,host2,72.01,2020-01-01T00:00:00Z
  mem,host1,62.61,2020-01-01T00:00:10Z
  mem,host2,72.98,2020-01-01T00:00:10Z
  mem,host1,63.40,2020-01-01T00:00:20Z
  mem,host2,73.77,2020-01-01T00:00:20Z
Resulting line protocol
  mem,host=host1 used_percent=64.23 1577836800000000000
  mem,host=host2 used_percent=72.01 1577836800000000000
  mem,host=host1 used_percent=62.61 1577836810000000000
  mem,host=host2 used_percent=72.98 1577836810000000000
  mem,host=host1 used_percent=63.40 1577836820000000000
  mem,host=host2 used_percent=73.77 1577836820000000000

To test the CSV to line protocol conversion process, use the influx write dryrun command to print the resulting line protocol to stdout rather than write to InfluxDB.

“too many open files” errors

When attempting to write large amounts of CSV data into InfluxDB, you might see an error like the following:

  Error: Failed to write data: unexpected error writing points to database: [shard <#>] fcntl: too many open files.

To fix this error on Linux or macOS, run the following command to increase the number of open files allowed:

  ulimit -n 10000

On macOS, to persist the ulimit setting across sessions, follow the recommended steps for your version of the operating system.

Telegraf

Use the CSV input data format in Telegraf to write CSV data to InfluxDB.

For more information, see the Telegraf CSV input data format documentation.

CSV Annotations

Use CSV annotations to specify which element of line protocol each CSV column represents and how to format the data. CSV annotations are rows at the beginning of a CSV file that describe column properties.

The influx write command supports Extended annotated CSV which provides options for specifying how CSV data should be converted into line protocol and how data is formatted.

To write data to InfluxDB, data must include the following:

  • measurement
  • field set
  • timestamp

Use CSV annotations to specify which of these elements each column represents.

Write raw query results back to InfluxDB

Flux returns query results in annotated CSV. These results include all annotations necessary to write the data back to InfluxDB.

Inject annotation headers

If the CSV data you want to write to InfluxDB does not contain the annotations required to properly convert the data to line protocol, use the --header flag to inject annotation rows into the CSV data.

  influx write -b example-bucket \
    -f path/to/example.csv \
    --header "#constant measurement,birds" \
    --header "#datatype dateTime:2006-01-02,long,tag"
example.csv
  date,sighted,loc
  2020-01-01,12,Boise
  2020-06-01,78,Boise
  2020-01-01,54,Seattle
  2020-06-01,112,Seattle
  2020-01-01,9,Detroit
  2020-06-01,135,Detroit
Resulting line protocol
  birds,loc=Boise sighted=12i 1577836800000000000
  birds,loc=Boise sighted=78i 1590969600000000000
  birds,loc=Seattle sighted=54i 1577836800000000000
  birds,loc=Seattle sighted=112i 1590969600000000000
  birds,loc=Detroit sighted=9i 1577836800000000000
  birds,loc=Detroit sighted=135i 1590969600000000000

Use files to inject headers

The influx write command supports importing multiple files in a single command. Include annotations and header rows in their own file and import them with the write command. Files are read in the order in which they’re provided.

  influx write -b example-bucket \
    -f path/to/headers.csv \
    -f path/to/example.csv
headers.csv
  #constant measurement,birds
  #datatype dateTime:2006-01-02,long,tag
example.csv
  date,sighted,loc
  2020-01-01,12,Boise
  2020-06-01,78,Boise
  2020-01-01,54,Seattle
  2020-06-01,112,Seattle
  2020-01-01,9,Detroit
  2020-06-01,135,Detroit
Resulting line protocol
  birds,loc=Boise sighted=12i 1577836800000000000
  birds,loc=Boise sighted=78i 1590969600000000000
  birds,loc=Seattle sighted=54i 1577836800000000000
  birds,loc=Seattle sighted=112i 1590969600000000000
  birds,loc=Detroit sighted=9i 1577836800000000000
  birds,loc=Detroit sighted=135i 1590969600000000000
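Passing multiple -f flags is equivalent to concatenating the files in order, which you can preview locally with cat. The printf lines below merely recreate abbreviated versions of the two files for illustration:

```shell
# Recreate abbreviated versions of headers.csv and example.csv.
printf '#constant measurement,birds\n#datatype dateTime:2006-01-02,long,tag\n' > headers.csv
printf 'date,sighted,loc\n2020-01-01,12,Boise\n2020-06-01,78,Boise\n' > example.csv

# cat shows the combined input influx write would process:
# annotation rows first, then the header row, then the data rows.
cat headers.csv example.csv
```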

Skip annotation headers

Some CSV data may include header rows that conflict with or lack the annotations necessary to write CSV data to InfluxDB. Use the --skipHeader flag to specify the number of rows to skip at the beginning of the CSV data.

  influx write -b example-bucket \
    -f path/to/example.csv \
    --skipHeader=2

You can then inject new header rows to rename columns and provide the necessary annotations.
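--skipHeader drops exactly that many rows from the top of the input, so tail -n +3 gives an equivalent local preview for --skipHeader=2. The file below is a hypothetical example whose first two rows are a title and a header:

```shell
# A hypothetical CSV whose first two rows (a title and a header)
# conflict with the annotations we want to inject.
printf 'Bird sightings report\ndate,sighted,loc\n2020-01-01,12,Boise\n' > example.csv

# tail -n +3 starts output at row 3, mirroring what --skipHeader=2
# leaves for influx write to process.
tail -n +3 example.csv
```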

Process input as CSV

The influx write command automatically processes files with the .csv extension as CSV files. If your CSV file uses a different extension, use the --format flag to explicitly declare the format of the input file.

  influx write -b example-bucket \
    -f path/to/example.txt \
    --format csv

The influx write command assumes all input files are line protocol unless they use the .csv extension or you explicitly declare the csv format.

Specify CSV character encoding

The influx write command assumes CSV files contain UTF-8 encoded characters. If your CSV data uses a different character encoding, specify the encoding with the --encoding flag.

  influx write -b example-bucket \
    -f path/to/example.csv \
    --encoding "UTF-16"
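A quick local way to see what --encoding compensates for is to round-trip a line through UTF-16 with iconv (this sketch assumes iconv is installed; influx write performs the equivalent decoding internally):

```shell
# Create a UTF-16 version of a UTF-8 CSV line.
printf 'mem,host1,64.23\n' > utf8.csv
iconv -f UTF-8 -t UTF-16 utf8.csv > utf16.csv

# Decoding it back yields the original text; this is what
# --encoding "UTF-16" does before parsing the CSV.
iconv -f UTF-16 -t UTF-8 utf16.csv
```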

Skip rows with errors

If a row in your CSV data is missing an element required to write to InfluxDB, or if data is incorrectly formatted, the influx write command returns an error when processing the row and cancels the write request. To skip rows with errors, use the --skipRowOnError flag.

  influx write -b example-bucket \
    -f path/to/example.csv \
    --skipRowOnError

Skipped rows are ignored and are not written to InfluxDB.

Use the --errors-file flag to record errors to a file. The error file identifies all rows that cannot be imported and includes error messages for debugging. For example:

  cpu,1.1
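As a local illustration (not part of influx write), an awk pre-filter can mimic --skipRowOnError by dropping rows with the wrong number of fields, such as the cpu,1.1 row above:

```shell
# A CSV where the last row is missing a column.
printf 'm,count,time\nexample,1,2020-01-01T00:00:00Z\ncpu,1.1\n' > data.csv

# Keep only rows with exactly three comma-separated fields,
# roughly what --skipRowOnError does for malformed rows.
awk -F, 'NF==3' data.csv
```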

Advanced examples


Define constants

Use the Extended annotated CSV #constant annotation to add a column and value to each row in the CSV data.

CSV with constants
  #constant measurement,example
  #constant tag,source,csv
  #datatype long,dateTime:RFC3339
  count,time
  1,2020-01-01T00:00:00Z
  4,2020-01-02T00:00:00Z
  9,2020-01-03T00:00:00Z
  18,2020-01-04T00:00:00Z
Resulting line protocol
  example,source=csv count=1 1577836800000000000
  example,source=csv count=4 1577923200000000000
  example,source=csv count=9 1578009600000000000
  example,source=csv count=18 1578096000000000000

Annotation shorthand

Extended annotated CSV supports annotation shorthand, which lets you define the column label, datatype, and default value in the column header.

CSV with annotation shorthand
  m|measurement,count|long|0,time|dateTime:RFC3339
  example,1,2020-01-01T00:00:00Z
  example,4,2020-01-02T00:00:00Z
  example,,2020-01-03T00:00:00Z
  example,18,2020-01-04T00:00:00Z
Resulting line protocol
  example count=1i 1577836800000000000
  example count=4i 1577923200000000000
  example count=0i 1578009600000000000
  example count=18i 1578096000000000000

Replace column header with annotation shorthand

It’s possible to replace the column header row in a CSV file with annotation shorthand without modifying the CSV file. This lets you define column data types and default values while writing to InfluxDB.

To replace an existing column header row with annotation shorthand:

  1. Use the --skipHeader flag to ignore the existing column header row.
  2. Use the --header flag to inject a new column header row that uses annotation shorthand.
  influx write -b example-bucket \
    -f example.csv \
    --skipHeader=1 \
    --header="m|measurement,count|long|0,time|dateTime:RFC3339"
Unmodified example.csv
  m,count,time
  example,1,2020-01-01T00:00:00Z
  example,4,2020-01-02T00:00:00Z
  example,,2020-01-03T00:00:00Z
  example,18,2020-01-04T00:00:00Z
Resulting line protocol
  example count=1i 1577836800000000000
  example count=4i 1577923200000000000
  example count=0i 1578009600000000000
  example count=18i 1578096000000000000

Ignore columns

Use the Extended annotated CSV #datatype ignored annotation to ignore columns when writing CSV data to InfluxDB.

CSV data with ignored column
  #datatype measurement,long,dateTime:RFC3339,ignored
  m,count,time,foo
  example,1,2020-01-01T00:00:00Z,bar
  example,4,2020-01-02T00:00:00Z,bar
  example,9,2020-01-03T00:00:00Z,baz
  example,18,2020-01-04T00:00:00Z,baz
Resulting line protocol
  example count=1i 1577836800000000000
  example count=4i 1577923200000000000
  example count=9i 1578009600000000000
  example count=18i 1578096000000000000

Use alternate numeric formats

If your CSV data contains numeric values that use a fraction separator other than the default period (.) or that contain group separators, define your numeric format in the double, long, and unsignedLong datatype annotations.

If your numeric format separators include a comma (,), wrap the column annotation in double quotes ("") to prevent the comma from being parsed as a column separator or delimiter. You can also define a custom column separator.


CSV with non-default float values
  #datatype measurement,"double:.,",dateTime:RFC3339
  m,lbs,time
  example,"1,280.7",2020-01-01T00:00:00Z
  example,"1,352.5",2020-01-02T00:00:00Z
  example,"1,862.8",2020-01-03T00:00:00Z
  example,"2,014.9",2020-01-04T00:00:00Z
Resulting line protocol
  example lbs=1280.7 1577836800000000000
  example lbs=1352.5 1577923200000000000
  example lbs=1862.8 1578009600000000000
  example lbs=2014.9 1578096000000000000
CSV with non-default integer values
  #datatype measurement,"long:.,",dateTime:RFC3339
  m,lbs,time
  example,"1,280.0",2020-01-01T00:00:00Z
  example,"1,352.0",2020-01-02T00:00:00Z
  example,"1,862.0",2020-01-03T00:00:00Z
  example,"2,014.9",2020-01-04T00:00:00Z
Resulting line protocol
  example lbs=1280i 1577836800000000000
  example lbs=1352i 1577923200000000000
  example lbs=1862i 1578009600000000000
  example lbs=2014i 1578096000000000000
CSV with non-default uinteger values
  #datatype measurement,"unsignedLong:.,",dateTime:RFC3339
  m,lbs,time
  example,"1,280.0",2020-01-01T00:00:00Z
  example,"1,352.0",2020-01-02T00:00:00Z
  example,"1,862.0",2020-01-03T00:00:00Z
  example,"2,014.9",2020-01-04T00:00:00Z
Resulting line protocol
  example lbs=1280u 1577836800000000000
  example lbs=1352u 1577923200000000000
  example lbs=1862u 1578009600000000000
  example lbs=2014u 1578096000000000000

Use alternate boolean format

Line protocol supports only specific boolean values. If your CSV data contains boolean values that line protocol does not support, define your boolean format in the boolean datatype annotation.

CSV with non-default boolean values
  #datatype measurement,"boolean:y,Y,1:n,N,0",dateTime:RFC3339
  m,verified,time
  example,y,2020-01-01T00:00:00Z
  example,n,2020-01-02T00:00:00Z
  example,1,2020-01-03T00:00:00Z
  example,N,2020-01-04T00:00:00Z
Resulting line protocol
  example verified=true 1577836800000000000
  example verified=false 1577923200000000000
  example verified=true 1578009600000000000
  example verified=false 1578096000000000000

Use different timestamp formats

The influx write command automatically detects RFC3339 and numeric (Unix) timestamp formats when converting CSV to line protocol. If your data uses a different timestamp format, define it in the dateTime datatype annotation.

CSV with non-default timestamps
  #datatype measurement,dateTime:2006-01-02,double
  m,time,lbs
  example,2020-01-01,1280.7
  example,2020-01-02,1352.5
  example,2020-01-03,1862.8
  example,2020-01-04,2014.9
Resulting line protocol
  example lbs=1280.7 1577836800000000000
  example lbs=1352.5 1577923200000000000
  example lbs=1862.8 1578009600000000000
  example lbs=2014.9 1578096000000000000
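You can sanity-check this conversion outside of InfluxDB: 2020-01-01 in the 2006-01-02 layout should map to 1577836800000000000 nanoseconds. This sketch assumes GNU date, as found on Linux:

```shell
# Convert the date to Unix seconds, then append nine zeros for nanoseconds.
secs=$(date -u -d '2020-01-01' +%s)
echo "${secs}000000000"   # prints 1577836800000000000
```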

Flux

Use the csv.from() and to() Flux functions to write annotated CSV data to the bucket of your choice.

The experimental csv.from() function lets you write CSV from a URL. The example below writes NOAA water sample data to an example noaa bucket in an example organization:

  import "experimental/csv"

  csv.from(url: "https://influx-testdata.s3.amazonaws.com/noaa.csv")
      |> to(bucket: "noaa", org: "example-org")

Required annotations and columns

To write CSV data to InfluxDB with Flux, you must include all of the following annotations and columns:

  • datatype
  • group
  • default

See annotations for more information. With Flux, you must also include a comma between the annotation name and the annotation values (this differs from the influx write command). See an example of valid syntax for annotated CSV in Flux.

Required columns:

  • _time
  • _measurement
  • _field
  • _value