Flux syntax basics

Flux, at its core, is a scripting language designed specifically for working with data. This guide walks through a handful of simple expressions and how they are handled in Flux.

Use the influx CLI

Use the influx CLI in “Flux mode” as you follow this guide. When started with -type=flux, the influx CLI is an interactive read-eval-print loop (REPL) that supports Flux syntax.

Start in the influx CLI in Flux mode
  influx -type=flux

If using the InfluxData Sandbox, use the ./sandbox enter command to enter the influxdb container, where you can start the influx CLI in Flux mode. You will also need to specify the host as influxdb to connect to InfluxDB over the Docker network.

  ./sandbox enter influxdb
  root@9bfc3c08579c:/# influx -host influxdb -type=flux

Basic Flux syntax

The code blocks below provide commands that illustrate the basic syntax of Flux. Run these commands in the influx CLI’s Flux REPL.

Simple expressions

Flux is a scripting language that supports basic expressions. For example, simple addition:

  > 1 + 1
  2

Variables

Assign an expression to a variable using the assignment operator, =.

  > s = "this is a string"
  > i = 1 // an integer
  > f = 2.0 // a floating point number

Type the name of a variable to print its value:

  > s
  this is a string
  > i
  1
  > f
  2

Records

Flux also supports records. Each value in a record can be a different data type.

  > o = {name: "Jim", age: 42, "favorite color": "red"}

Use dot notation to access the properties of a record:

  > o.name
  Jim
  > o.age
  42

Or bracket notation:

  > o["name"]
  Jim
  > o["age"]
  42
  > o["favorite color"]
  red

Use bracket notation to reference record properties with special or whitespace characters in the property key.

Lists

Flux supports lists. All values in a list must be the same data type.

  > n = 4
  > l = [1,2,3,n]
  > l
  [1, 2, 3, 4]

Functions

Flux uses functions for most of its heavy lifting. Below is a simple function that squares a number, n.

  > square = (n) => n * n
  > square(n:3)
  9

Flux does not support positional arguments or parameters. Parameters must always be named when calling a function.
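For example, passing a positional argument to the square function defined above returns an error, while the named form succeeds (the exact error message varies by version):

  > square(3)
  Error: missing parameter name
  > square(n: 3)
  9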

Pipe-forward operator

Flux uses the pipe-forward operator (|>) extensively to chain operations together. After each function or operation, Flux returns a table or collection of tables containing data. The pipe-forward operator pipes those tables into the next function where they are further processed or manipulated.

  data |> someFunction() |> anotherFunction()

Real-world application of basic syntax

This likely seems familiar if you’ve already been through the other getting started guides. Flux’s syntax is inspired by JavaScript and other functional scripting languages. As you begin to apply these basic principles in real-world use cases, such as creating data stream variables and custom functions, the power of Flux and its ability to query and process data will become apparent.

The examples below provide both multi-line and single-line versions of each input command. Line breaks in Flux aren’t required, but they do help with readability. Both single- and multi-line commands can be copied and pasted into the influx CLI running in Flux mode.

Multi-line inputs

Define data stream variables

A common use case for variable assignments in Flux is creating variables for one or more input data streams.

  timeRange = -1h
  cpuUsageUser = from(bucket:"telegraf/autogen")
    |> range(start: timeRange)
    |> filter(fn: (r) =>
      r._measurement == "cpu" and
      r._field == "usage_user" and
      r.cpu == "cpu-total"
    )
  memUsagePercent = from(bucket:"telegraf/autogen")
    |> range(start: timeRange)
    |> filter(fn: (r) =>
      r._measurement == "mem" and
      r._field == "used_percent"
    )

These variables can be used in other functions, such as join(), while keeping the syntax minimal and flexible.
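For example, the two streams could be combined with join(). This is a sketch only; the columns to join on depend on the shape of your data, and the "host" column is assumed to exist in both Telegraf streams:

  join(
    tables: {cpu: cpuUsageUser, mem: memUsagePercent},
    on: ["_time", "host"]
  )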

Define custom functions

Create a function that returns the top N rows in the input stream with the highest _values. To do this, pass the input stream (tables) and the number of results to return (n) into a custom function. Then use Flux’s sort() and limit() functions to find the top n results in the data set.

  topN = (tables=<-, n) =>
    tables
      |> sort(desc: true)
      |> limit(n: n)

More information about creating custom functions is available in the Custom functions documentation.

Using this new custom function topN and the cpuUsageUser data stream variable defined above, find the top five data points and yield the results.

  cpuUsageUser
    |> topN(n:5)
    |> yield()

Single-line inputs

Define data stream variables

A common use case for variable assignments in Flux is creating variables for multiple filtered input data streams.

  timeRange = -1h
  cpuUsageUser = from(bucket:"telegraf/autogen") |> range(start: timeRange) |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user" and r.cpu == "cpu-total")
  memUsagePercent = from(bucket:"telegraf/autogen") |> range(start: timeRange) |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")

These variables can be used in other functions, such as join(), while keeping the syntax minimal and flexible.

Define custom functions

Let’s create a function that returns the top N rows in the input data stream with the highest _values. To do this, pass the input stream (tables) and the number of results to return (n) into a custom function. Then use Flux’s sort() and limit() functions to find the top n results in the data set.

  topN = (tables=<-, n) => tables |> sort(desc: true) |> limit(n: n)

More information about creating custom functions is available in the Custom functions documentation.

Using the cpuUsageUser data stream variable defined above, find the top five data points with the custom topN function and yield the results.

  cpuUsageUser |> topN(n:5) |> yield()

This query returns the five data points with the highest user CPU usage over the last hour.

Transform your data