Tutorials

Quick Start: Go and TimescaleDB

Goal

This quick start guide is designed to get the Golang developer up and running with TimescaleDB as their database. In this tutorial, you’ll learn how to:

  • Connect to TimescaleDB
  • Create a relational table and a hypertable
  • Insert data, one row at a time and in batches
  • Execute queries that combine time-series and relational data

Pre-requisites

To complete this tutorial, you will need a cursory knowledge of the Structured Query Language (SQL). The tutorial will walk you through each SQL command, but it will be helpful if you’ve seen SQL before.

To start, install TimescaleDB. Once your installation is complete, we can proceed to ingesting or creating sample data and finishing the tutorial.

You will also need:

  • The Go programming language installed on your machine
  • The PGX driver for Go (github.com/jackc/pgx)

Connect to database

Locate your TimescaleDB credentials so you can compose a connection string for PGX to use when connecting to your TimescaleDB instance.

You’ll need the following credentials:

  • password
  • username
  • host URL
  • port number
  • database name

Next, compose your connection string variable, as a libpq connection string, using the following format:

connStr := "postgres://username:password@host:port/dbname"

If you’re using a hosted version of TimescaleDB, or generally require an SSL connection, use this version instead:

connStr := "postgres://username:password@host:port/dbname?sslmode=require"
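
If you keep each credential in its own variable, you can also assemble the connection string at runtime. A minimal sketch, assuming the fmt package is imported and using placeholder values:

// Hypothetical credential values; substitute your own
user := "username"
password := "password"
host := "host"
port := "5432"
dbname := "dbname"

// Assemble the connection URI from the individual pieces
connStr := fmt.Sprintf("postgres://%s:%s@%s:%s/%s", user, password, host, port, dbname)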

Creating a single connection to your database

Here’s a hello world program that you can run to ensure you’re connected to your database:

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/jackc/pgx/v4"
)

//connect to database using a single connection
func main() {
    /***********************************************/
    /* Single Connection to TimescaleDB / PostgreSQL */
    /***********************************************/
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    conn, err := pgx.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer conn.Close(ctx)

    //run a simple query to check our connection
    var greeting string
    err = conn.QueryRow(ctx, "select 'Hello, Timescale!'").Scan(&greeting)
    if err != nil {
        fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
        os.Exit(1)
    }
    fmt.Println(greeting)
}

If you’d like to specify your connection string as an environment variable, you can use the following syntax to access it in place of the variable connStr above:

os.Getenv("DATABASE_CONNECTION_STRING")
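
For example, the single-connection program above could read the string from the environment instead of hard-coding it. A minimal sketch, assuming DATABASE_CONNECTION_STRING is set in your shell:

//Read the connection string from an environment variable instead of hard-coding it
connStr := os.Getenv("DATABASE_CONNECTION_STRING")
if connStr == "" {
    fmt.Fprintln(os.Stderr, "DATABASE_CONNECTION_STRING is not set")
    os.Exit(1)
}
conn, err := pgx.Connect(ctx, connStr)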

Using a connection pool (for multiple connections)

Connection pooling helps ensure you don’t waste resources opening and closing connections, and can lead to faster queries on your database.

To create a connection pool that can be used for concurrent connections to your database, use the pgxpool.Connect() function instead of pgx.Connect(), as shown in the example program below. Also note the import of github.com/jackc/pgx/v4/pgxpool, rather than the pgx/v4 package used to create a single connection.

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/jackc/pgx/v4/pgxpool"
)

func main() {
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    dbpool, err := pgxpool.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer dbpool.Close()

    //run a simple query to check our connection
    var greeting string
    err = dbpool.QueryRow(ctx, "select 'Hello, Timescale (but concurrently)'").Scan(&greeting)
    if err != nil {
        fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
        os.Exit(1)
    }
    fmt.Println(greeting)
}

Congratulations, you’ve successfully connected to TimescaleDB using Go!

Create a table

Note: For the rest of this tutorial, we will use a connection pool, since having concurrent connections is the most common use case.

Step 1: Formulate your SQL statement

First, compose a string that contains the SQL statement you would use to create a relational table. In the example below, we create a table called sensors, with columns id, type and location:

queryCreateTable := `CREATE TABLE sensors (id SERIAL PRIMARY KEY, type VARCHAR(50), location VARCHAR(50));`

Step 2: Execute the SQL statement and commit changes

Next, we execute our CREATE TABLE statement by calling the Exec() function on the dbpool object, using the arguments of the current context and our statement string queryCreateTable, formulated in step 1.

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/jackc/pgx/v4/pgxpool"
)

func main() {
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    dbpool, err := pgxpool.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer dbpool.Close()

    /********************************************/
    /* Create relational table */
    /********************************************/
    //Create relational table called sensors
    queryCreateTable := `CREATE TABLE sensors (id SERIAL PRIMARY KEY, type VARCHAR(50), location VARCHAR(50));`
    _, err = dbpool.Exec(ctx, queryCreateTable)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to create SENSORS table: %v\n", err)
        os.Exit(1)
    }
    fmt.Println("Successfully created relational table SENSORS")
}

Generate a Hypertable

In TimescaleDB, the primary point of interaction with your data is a hypertable, the abstraction of a single continuous table across all space and time intervals, such that one can query it via standard SQL.

Virtually all user interactions with TimescaleDB are with hypertables. Creating tables and indexes, altering tables, inserting data, selecting data, etc. can (and should) all be executed on the hypertable.

A hypertable is defined by a standard schema with column names and types, with at least one column specifying a time value.

Step 1: Formulate CREATE TABLE SQL Statements for hypertable

First, we create a variable that holds the CREATE TABLE SQL statement for our hypertable. Notice how the hypertable has the required time column.

Second, we formulate a SELECT statement that converts the newly created table into a hypertable. Note that we must specify the name of the table we wish to convert and its time column name as the two arguments, as mandated by the create_hypertable docs:

queryCreateHypertable := `CREATE TABLE sensor_data (
    time TIMESTAMPTZ NOT NULL,
    sensor_id INTEGER,
    temperature DOUBLE PRECISION,
    cpu DOUBLE PRECISION,
    FOREIGN KEY (sensor_id) REFERENCES sensors (id)
);
SELECT create_hypertable('sensor_data', 'time');
`

Step 2: Execute SQL statement using .Exec()

Next, we execute our CREATE TABLE statement and SELECT statement which converts the table created into a hypertable. We do this by calling the Exec() function on the dbpool object, using the arguments of the current context and our statement string queryCreateHypertable formulated in step 1 above.

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/jackc/pgx/v4/pgxpool"
)

func main() {
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    dbpool, err := pgxpool.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer dbpool.Close()

    /********************************************/
    /* Create Hypertable */
    /********************************************/
    // Create hypertable of time-series data called sensor_data
    //formulate statement
    queryCreateHypertable := `CREATE TABLE sensor_data (
        time TIMESTAMPTZ NOT NULL,
        sensor_id INTEGER,
        temperature DOUBLE PRECISION,
        cpu DOUBLE PRECISION,
        FOREIGN KEY (sensor_id) REFERENCES sensors (id)
    );
    SELECT create_hypertable('sensor_data', 'time');
    `
    //execute statement
    _, err = dbpool.Exec(ctx, queryCreateHypertable)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to create SENSOR_DATA hypertable: %v\n", err)
        os.Exit(1)
    }
    fmt.Println("Successfully created hypertable SENSOR_DATA")
}

Congratulations, you’ve successfully created a hypertable in your Timescale database using Go!

Insert a row into your Timescale database

Here’s a typical pattern you’d use to insert some data into a table. In the example below, we insert the relational data from the two arrays, sensorTypes and sensorLocations, into the relational table named sensors.

First, we open a connection pool to the database. Then we formulate our parameterized INSERT SQL statement and execute it for each row of sample data:

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/jackc/pgx/v4/pgxpool"
)

func main() {
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    dbpool, err := pgxpool.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer dbpool.Close()

    /********************************************/
    /* INSERT into relational table */
    /********************************************/
    //Insert data into relational table
    // Slices of sample data to insert
    // observation i has type sensorTypes[i] and location sensorLocations[i]
    sensorTypes := []string{"a", "a", "b", "b"}
    sensorLocations := []string{"floor", "ceiling", "floor", "ceiling"}

    for i := range sensorTypes {
        //INSERT statement in SQL
        queryInsertMetadata := `INSERT INTO sensors (type, location) VALUES ($1, $2);`
        //Execute INSERT command
        _, err := dbpool.Exec(ctx, queryInsertMetadata, sensorTypes[i], sensorLocations[i])
        if err != nil {
            fmt.Fprintf(os.Stderr, "Unable to insert data into database: %v\n", err)
            os.Exit(1)
        }
        fmt.Printf("Inserted sensor (%s, %s) into database \n", sensorTypes[i], sensorLocations[i])
    }
    fmt.Println("Successfully inserted all sensors into database")
}

Insert a batch of rows into your Timescale database

We can insert a batch of rows into TimescaleDB in a couple of different ways. First, let’s see what it looks like to insert a number of rows, but one at a time.

Step 0: Generate sample time-series data to insert

For simplicity’s sake, we’ll use PostgreSQL to generate some sample time-series data to insert into the sensor_data hypertable. To do this, we define the SQL statement that generates the data, called queryDataGeneration. Then we use the .Query() function to execute the statement and return our sample data.

We store the data returned by our query in results, a slice of structs, which we then use as the source for inserting data into our hypertable.

// Generate data to insert
//SQL query to generate sample data
queryDataGeneration := `
SELECT generate_series(now() - interval '24 hour', now(), interval '5 minute') AS time,
    floor(random() * (3) + 1)::int as sensor_id,
    random()*100 AS temperature,
    random() AS cpu
`
//Execute query to generate samples for sensor_data hypertable
rows, err := dbpool.Query(ctx, queryDataGeneration)
if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to generate sensor data: %v\n", err)
    os.Exit(1)
}
defer rows.Close()
fmt.Println("Successfully generated sensor data")

//Store data generated in slice results
type result struct {
    Time        time.Time
    SensorId    int
    Temperature float64
    CPU         float64
}
var results []result
for rows.Next() {
    var r result
    err = rows.Scan(&r.Time, &r.SensorId, &r.Temperature, &r.CPU)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to scan %v\n", err)
        os.Exit(1)
    }
    results = append(results, r)
}
// Any errors encountered by rows.Next or rows.Scan will be returned here
if rows.Err() != nil {
    fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
    os.Exit(1)
}

// Check contents of results slice
fmt.Println("Contents of RESULTS slice")
for i := range results {
    var r result
    r = results[i]
    fmt.Printf("Time: %s | ID: %d | Temperature: %f | CPU: %f |\n", &r.Time, r.SensorId, r.Temperature, r.CPU)
}

Step 1: Formulate INSERT SQL statement

First we formulate a SQL insert statement for the sensor_data hypertable.

//SQL statement to insert time-series data into the sensor_data hypertable
queryInsertTimeseriesData := `
INSERT INTO sensor_data (time, sensor_id, temperature, cpu) VALUES ($1, $2, $3, $4);
`

Step 2: Iterate over data and INSERT

We then execute that SQL statement for each sample we have in our results slice:

//Insert contents of results slice into TimescaleDB
for i := range results {
    var r result
    r = results[i]
    _, err := dbpool.Exec(ctx, queryInsertTimeseriesData, r.Time, r.SensorId, r.Temperature, r.CPU)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to insert sample into Timescale %v\n", err)
        os.Exit(1)
    }
}
fmt.Println("Successfully inserted samples into sensor_data hypertable")

Here’s a sample main.go which generates sample data and inserts it into the sensor_data hypertable:

package main

import (
    "context"
    "fmt"
    "os"
    "time"

    "github.com/jackc/pgx/v4/pgxpool"
)

func main() {
    /********************************************/
    /* Connect using Connection Pool */
    /********************************************/
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    dbpool, err := pgxpool.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer dbpool.Close()

    /********************************************/
    /* Insert data into hypertable */
    /********************************************/
    // Generate data to insert
    //SQL query to generate sample data
    queryDataGeneration := `
    SELECT generate_series(now() - interval '24 hour', now(), interval '5 minute') AS time,
        floor(random() * (3) + 1)::int as sensor_id,
        random()*100 AS temperature,
        random() AS cpu
    `
    //Execute query to generate samples for sensor_data hypertable
    rows, err := dbpool.Query(ctx, queryDataGeneration)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to generate sensor data: %v\n", err)
        os.Exit(1)
    }
    defer rows.Close()
    fmt.Println("Successfully generated sensor data")

    //Store data generated in slice results
    type result struct {
        Time        time.Time
        SensorId    int
        Temperature float64
        CPU         float64
    }
    var results []result
    for rows.Next() {
        var r result
        err = rows.Scan(&r.Time, &r.SensorId, &r.Temperature, &r.CPU)
        if err != nil {
            fmt.Fprintf(os.Stderr, "Unable to scan %v\n", err)
            os.Exit(1)
        }
        results = append(results, r)
    }
    // Any errors encountered by rows.Next or rows.Scan will be returned here
    if rows.Err() != nil {
        fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
        os.Exit(1)
    }

    // Check contents of results slice
    fmt.Println("Contents of RESULTS slice")
    for i := range results {
        var r result
        r = results[i]
        fmt.Printf("Time: %s | ID: %d | Temperature: %f | CPU: %f |\n", &r.Time, r.SensorId, r.Temperature, r.CPU)
    }

    //SQL statement to insert time-series data into the sensor_data hypertable
    queryInsertTimeseriesData := `
    INSERT INTO sensor_data (time, sensor_id, temperature, cpu) VALUES ($1, $2, $3, $4);
    `
    //Insert contents of results slice into TimescaleDB
    for i := range results {
        var r result
        r = results[i]
        _, err := dbpool.Exec(ctx, queryInsertTimeseriesData, r.Time, r.SensorId, r.Temperature, r.CPU)
        if err != nil {
            fmt.Fprintf(os.Stderr, "Unable to insert sample into Timescale %v\n", err)
            os.Exit(1)
        }
    }
    fmt.Println("Successfully inserted samples into sensor_data hypertable")
}

Batch insert data into TimescaleDB

You’ll notice that the method above executes as many insert statements as there are samples to be inserted. This can make ingestion of data slow. To speed up ingestion, we recommend batch inserting data.

Here’s a sample pattern for how to do so, using the sample data generated in Step 0 above. We will use the pgx Batch object to queue up the INSERT statements and send them to the database in one go:

package main

import (
    "context"
    "fmt"
    "os"
    "time"

    "github.com/jackc/pgx/v4"
    "github.com/jackc/pgx/v4/pgxpool"
)

func main() {
    /********************************************/
    /* Connect using Connection Pool */
    /********************************************/
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    dbpool, err := pgxpool.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer dbpool.Close()

    // Generate data to insert
    //SQL query to generate sample data
    queryDataGeneration := `
    SELECT generate_series(now() - interval '24 hour', now(), interval '5 minute') AS time,
        floor(random() * (3) + 1)::int as sensor_id,
        random()*100 AS temperature,
        random() AS cpu
    `
    //Execute query to generate samples for sensor_data hypertable
    rows, err := dbpool.Query(ctx, queryDataGeneration)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to generate sensor data: %v\n", err)
        os.Exit(1)
    }
    defer rows.Close()
    fmt.Println("Successfully generated sensor data")

    //Store data generated in slice results
    type result struct {
        Time        time.Time
        SensorId    int
        Temperature float64
        CPU         float64
    }
    var results []result
    for rows.Next() {
        var r result
        err = rows.Scan(&r.Time, &r.SensorId, &r.Temperature, &r.CPU)
        if err != nil {
            fmt.Fprintf(os.Stderr, "Unable to scan %v\n", err)
            os.Exit(1)
        }
        results = append(results, r)
    }
    // Any errors encountered by rows.Next or rows.Scan will be returned here
    if rows.Err() != nil {
        fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
        os.Exit(1)
    }

    // Check contents of results slice
    /*fmt.Println("Contents of RESULTS slice")
    for i := range results {
        var r result
        r = results[i]
        fmt.Printf("Time: %s | ID: %d | Temperature: %f | CPU: %f |\n", &r.Time, r.SensorId, r.Temperature, r.CPU)
    }*/

    //SQL statement to insert time-series data into the sensor_data hypertable
    queryInsertTimeseriesData := `
    INSERT INTO sensor_data (time, sensor_id, temperature, cpu) VALUES ($1, $2, $3, $4);
    `

    /********************************************/
    /* Batch Insert into TimescaleDB */
    /********************************************/
    //create batch
    batch := &pgx.Batch{}
    numInserts := len(results)
    //load insert statements into batch queue
    for i := range results {
        var r result
        r = results[i]
        batch.Queue(queryInsertTimeseriesData, r.Time, r.SensorId, r.Temperature, r.CPU)
    }
    batch.Queue("select count(*) from sensor_data")

    //send batch to connection pool
    br := dbpool.SendBatch(ctx, batch)
    //execute statements in batch queue
    for i := 0; i < numInserts; i++ {
        _, err := br.Exec()
        if err != nil {
            fmt.Fprintf(os.Stderr, "Unable to execute statement in batch queue %v\n", err)
            os.Exit(1)
        }
    }
    fmt.Println("Successfully batch inserted data")

    //Compare length of results slice to size of table
    fmt.Printf("size of results: %d\n", len(results))
    //check size of table for number of rows inserted
    // result of last SELECT statement
    var rowsInserted int
    err = br.QueryRow().Scan(&rowsInserted)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to read row count: %v\n", err)
        os.Exit(1)
    }
    fmt.Printf("size of table: %d\n", rowsInserted)

    err = br.Close()
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to close batch %v\n", err)
        os.Exit(1)
    }
}
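
pgx also supports PostgreSQL’s COPY protocol through the CopyFrom function, which is often the fastest way to bulk-load rows from Go. Here’s a minimal sketch of that alternative, reusing the results slice, dbpool, and ctx from the program above:

// Build the row source expected by CopyFrom from the results slice
copyRows := make([][]interface{}, 0, len(results))
for _, r := range results {
    copyRows = append(copyRows, []interface{}{r.Time, r.SensorId, r.Temperature, r.CPU})
}

// Copy all rows into sensor_data in a single COPY operation
copyCount, err := dbpool.CopyFrom(
    ctx,
    pgx.Identifier{"sensor_data"},
    []string{"time", "sensor_id", "temperature", "cpu"},
    pgx.CopyFromRows(copyRows),
)
if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to copy data into hypertable: %v\n", err)
    os.Exit(1)
}
fmt.Printf("Copied %d rows into sensor_data\n", copyCount)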

TIP: If you are inserting data from a CSV file, we recommend the timescaledb-parallel-copy tool, which is a command line program for parallelizing PostgreSQL’s built-in COPY functionality for bulk inserting data into TimescaleDB.

Execute a query on your Timescale database

Step 1: Define the SQL query

First, define the SQL query you’d like to run on the database. The example below contains a query which combines time-series and relational data. It returns the average cpu values for every 5 minute interval, for sensors located on the ceiling and of type a.

// Formulate query in SQL
// Note the use of prepared statement placeholders $1 and $2
queryTimebucketFiveMin := `
SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
FROM sensor_data
JOIN sensors ON sensors.id = sensor_data.sensor_id
WHERE sensors.location = $1 AND sensors.type = $2
GROUP BY five_min
ORDER BY five_min DESC;
`

Notice the use of placeholders for sensor location and type.

Step 2: Execute the query

Second, use the .Query() function to query your TimescaleDB database. Here we execute the query string from step 1, supplying values for the placeholders.

//Execute query on TimescaleDB
rows, err := dbpool.Query(ctx, queryTimebucketFiveMin, "ceiling", "a")
if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to execute query %v\n", err)
    os.Exit(1)
}
defer rows.Close()
fmt.Println("Successfully executed query")

Step 3: Access results returned by the query

We can access the rows returned by .Query() using the following pattern. First, we create a struct with fields representing the columns we expect to be returned.

Then we use the rows.Next() function to iterate through the rows returned and fill up our results slice of structs. To do this, we use the rows.Scan() function, passing in pointers to the fields into which we want to scan the results.

In the example below, we print out the results returned from our query, but you might want to use them for some other purpose. Once you’ve scanned through all the rows returned, you can use the results slice however you like.

//Do something with the results of query
// Struct for results
type result2 struct {
    Bucket time.Time
    Avg    float64
}

// Print rows returned and fill up results slice for later use
var results []result2
for rows.Next() {
    var r result2
    err = rows.Scan(&r.Bucket, &r.Avg)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to scan %v\n", err)
        os.Exit(1)
    }
    results = append(results, r)
    fmt.Printf("Time bucket: %s | Avg: %f\n", &r.Bucket, r.Avg)
}
// Any errors encountered by rows.Next or rows.Scan will be returned here
if rows.Err() != nil {
    fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
    os.Exit(1)
}
// use results here…
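
As one hypothetical example of the "use results here" step, you could post-process the slice in Go, for instance averaging the per-bucket averages collected above (illustrative only):

// Illustrative only: compute the mean of the per-bucket averages
var sum float64
for _, r := range results {
    sum += r.Avg
}
if len(results) > 0 {
    fmt.Printf("Mean of %d bucket averages: %f\n", len(results), sum/float64(len(results)))
}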

Sample main.go for querying data in TimescaleDB

Here’s a sample program which combines steps 1, 2 and 3 above, to run a query on a TimescaleDB database and access the results of that query.

package main

import (
    "context"
    "fmt"
    "os"
    "time"

    "github.com/jackc/pgx/v4/pgxpool"
)

func main() {
    ctx := context.Background()
    connStr := "yourConnectionStringHere"
    dbpool, err := pgxpool.Connect(ctx, connStr)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    defer dbpool.Close()

    /********************************************/
    /* Execute a query */
    /********************************************/
    // Formulate query in SQL
    // Note the use of prepared statement placeholders $1 and $2
    queryTimebucketFiveMin := `
    SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
    FROM sensor_data
    JOIN sensors ON sensors.id = sensor_data.sensor_id
    WHERE sensors.location = $1 AND sensors.type = $2
    GROUP BY five_min
    ORDER BY five_min DESC;
    `
    //Execute query on TimescaleDB
    rows, err := dbpool.Query(ctx, queryTimebucketFiveMin, "ceiling", "a")
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to execute query %v\n", err)
        os.Exit(1)
    }
    defer rows.Close()
    fmt.Println("Successfully executed query")

    //Do something with the results of query
    // Struct for results
    type result2 struct {
        Bucket time.Time
        Avg    float64
    }

    // Print rows returned and fill up results slice for later use
    var results []result2
    for rows.Next() {
        var r result2
        err = rows.Scan(&r.Bucket, &r.Avg)
        if err != nil {
            fmt.Fprintf(os.Stderr, "Unable to scan %v\n", err)
            os.Exit(1)
        }
        results = append(results, r)
        fmt.Printf("Time bucket: %s | Avg: %f\n", &r.Bucket, r.Avg)
    }
    // Any errors encountered by rows.Next or rows.Scan will be returned here
    if rows.Err() != nil {
        fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
        os.Exit(1)
    }
}

Congratulations, you’ve successfully executed a query on TimescaleDB using Go and pgx!

Next steps

Now that you’re able to connect, read, and write to a TimescaleDB instance from your Golang application, be sure to check out these advanced TimescaleDB tutorials: