Sample data

To explore the query language, these instructions walk you through creating a database and downloading and writing sample data to that database in your InfluxDB installation. The sample data is then used and referenced in Data Exploration, Schema Exploration, and Functions.

Creating a database

If you’ve installed InfluxDB locally, the influx command should be available via the command line. Executing influx will start the CLI and automatically connect to the local InfluxDB instance (assuming you have already started the server with service influxdb start or by running influxd directly). The output should look like this:

  $ influx -precision rfc3339
  Connected to http://localhost:8086 version 1.4.x
  InfluxDB shell 1.4.x
  >

Notes:

  • The InfluxDB API runs on port 8086 by default, so influx connects to localhost on port 8086 unless you tell it otherwise. Run influx --help to see the flags that override these defaults; a brief example follows these notes.
  • The -precision argument specifies the format/precision of any returned timestamps. In the example above, rfc3339 tells InfluxDB to return timestamps in RFC3339 format (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ).
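
For instance, the -host and -port flags point the CLI at a non-default instance, and -precision also accepts epoch units such as s for seconds. The hostname below is only a placeholder:

  $ influx -host my-influxdb-host.example.com -port 8086 -precision s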

The command line is now ready to take input in the form of Influx Query Language (a.k.a. InfluxQL) statements. To exit the InfluxQL shell, type exit and hit return.

A fresh install of InfluxDB has no databases (apart from the system _internal database), so creating one is our first task. You can create a database with the CREATE DATABASE <db-name> InfluxQL statement, where <db-name> is the name of the database you wish to create. Database names can contain any Unicode character as long as the name is double-quoted. Names can be left unquoted if they contain only ASCII letters, digits, or underscores and do not begin with a digit.
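
For example, a name that contains spaces must be double-quoted when it is created and whenever it is referenced; the name below is purely illustrative:

  > CREATE DATABASE "NOAA water database"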

Throughout the query language exploration, we’ll use the database name NOAA_water_database:

  > CREATE DATABASE NOAA_water_database
  > exit
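
If you want to verify the database before exiting, SHOW DATABASES lists every database on the instance; on a fresh install the output should contain only _internal and NOAA_water_database:

  > SHOW DATABASES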

Download and write the data to InfluxDB

From your terminal, download the text file that contains the data in line protocol format:

  curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt -o NOAA_data.txt

Write the data to InfluxDB via the CLI:

  influx -import -path=NOAA_data.txt -precision=s -database=NOAA_water_database
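
The data is in line protocol, so individual points can also be written without the importer by posting them to the HTTP API's /write endpoint. The following is a minimal sketch that assumes the default local installation and mirrors one of the santa_monica points in the sample data:

  curl -i -XPOST "http://localhost:8086/write?db=NOAA_water_database&precision=s" \
    --data-binary 'h2o_feet,location=santa_monica level\ description="below 3 feet",water_level=2.064 1439856000'

Here 1439856000 is 2015-08-18T00:00:00Z expressed in epoch seconds, matching precision=s, and the space in the level description field key is escaped with a backslash.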

Test queries

Launch the CLI again, this time connecting directly to the new database:

  $ influx -precision rfc3339 -database NOAA_water_database
  Connected to http://localhost:8086 version 1.4.x
  InfluxDB shell 1.4.x
  >

See all five measurements:

  > SHOW MEASUREMENTS
  name: measurements
  ------------------
  name
  average_temperature
  h2o_feet
  h2o_pH
  h2o_quality
  h2o_temperature
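
To preview the schema of a single measurement, SHOW FIELD KEYS and SHOW TAG KEYS work the same way; for h2o_feet they should report the level description (string) and water_level (float) fields and the location tag:

  > SHOW FIELD KEYS FROM h2o_feet
  > SHOW TAG KEYS FROM h2o_feet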

Count the number of non-null values of water_level in h2o_feet:

  > SELECT COUNT("water_level") FROM h2o_feet
  name: h2o_feet
  --------------
  time                   count
  1970-01-01T00:00:00Z   15258
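
A small variation groups the count by the location tag, returning one series per station; the two per-station counts should add up to the 15,258 total shown above:

  > SELECT COUNT("water_level") FROM h2o_feet GROUP BY "location"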

Select the first five observations in the measurement h2o_feet:

  > SELECT * FROM h2o_feet LIMIT 5
  name: h2o_feet
  --------------
  time                   level description      location       water_level
  2015-08-18T00:00:00Z   below 3 feet           santa_monica   2.064
  2015-08-18T00:00:00Z   between 6 and 9 feet   coyote_creek   8.12
  2015-08-18T00:06:00Z   between 6 and 9 feet   coyote_creek   8.005
  2015-08-18T00:06:00Z   below 3 feet           santa_monica   2.116
  2015-08-18T00:12:00Z   between 6 and 9 feet   coyote_creek   7.887
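
To restrict the results to a single station, filter on the location tag; note that tag values are single-quoted in InfluxQL:

  > SELECT * FROM h2o_feet WHERE "location" = 'santa_monica' LIMIT 5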

Data sources and things to note

The sample data is publicly available data from the National Oceanic and Atmospheric Administration's (NOAA) Center for Operational Oceanographic Products and Services. It includes 15,258 observations of water levels (in feet), collected every six minutes at two stations, Santa Monica, CA (ID 9410840) and Coyote Creek, CA (ID 9414575), from August 18, 2015 through September 18, 2015.

Note that the measurements average_temperature, h2o_pH, h2o_quality, and h2o_temperature contain fictional data. Those measurements serve to illuminate query functionality in Schema Exploration.

The h2o_feet measurement is the only measurement that contains the NOAA data. Note that the level description field isn't part of the original NOAA data; we added it to have a field key with a special character as well as string field values.
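
Because level description contains a space, the field key must be double-quoted whenever it appears in a query, for example:

  > SELECT "level description", "water_level" FROM h2o_feet LIMIT 3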