JSON Support

This page walks you through a simple demonstration of how CockroachDB can store and query unstructured JSONB data from a third-party API, as well as how an inverted index can optimize your queries.

Step 1. Install prerequisites

Make sure you have already installed CockroachDB. To follow along with the Go code sample, you also need Go and the lib/pq Postgres driver; to follow along with the Python code sample, you need Python with the psycopg2 and requests libraries.

Step 2. Start a single-node cluster

For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode:

  $ cockroach start \
    --insecure \
    --store=json-test \
    --listen-addr=localhost:26257 \
    --http-addr=localhost:8080

Step 3. Create a user

In a new terminal, as the root user, use the cockroach user command to create a new user, maxroach.

  $ cockroach user set maxroach --insecure --host=localhost:26257

Step 4. Create a database and grant privileges

As the root user, open the built-in SQL client:

  $ cockroach sql --insecure --host=localhost:26257

Next, create a database called jsonb_test:

  > CREATE DATABASE jsonb_test;

Set the database as the default:

  > SET DATABASE = jsonb_test;

Then grant privileges to the maxroach user:

  > GRANT ALL ON DATABASE jsonb_test TO maxroach;
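
To verify the grant, you can list the privileges on the database (output omitted here, since its format varies by CockroachDB version):

  > SHOW GRANTS ON DATABASE jsonb_test;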

Step 5. Create a table

Still in the SQL shell, create a table called programming:

  > CREATE TABLE programming (
      id UUID DEFAULT uuid_v4()::UUID PRIMARY KEY,
      posts JSONB
    );

  > SHOW CREATE programming;
  +--------------+-------------------------------------------------+
  |    Table     |                   CreateTable                   |
  +--------------+-------------------------------------------------+
  | programming  | CREATE TABLE programming (                      |
  |              |     id UUID NOT NULL DEFAULT uuid_v4()::UUID,   |
  |              |     posts JSON NULL,                            |
  |              |     CONSTRAINT "primary" PRIMARY KEY (id ASC),  |
  |              |     FAMILY "primary" (id, posts)                |
  |              | )                                               |
  +--------------+-------------------------------------------------+
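
If you'd like to sanity-check the schema before loading real data, you can insert and query a JSONB document by hand. This is a minimal sketch; the sample document is made up, and the last statement deletes it so it doesn't mix with the Reddit data:

  > INSERT INTO programming (posts) VALUES ('{"data": {"domain": "example.com", "title": "hello"}}');

  > SELECT posts->'data'->>'title' FROM programming;

  > DELETE FROM programming WHERE posts->'data'->>'domain' = 'example.com';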

Step 6. Run the code

Now that you have a database, a user, and a table, let's run some code to insert rows into the table.

The code queries the Reddit API for posts in /r/programming. The Reddit API only returns 25 results per page; however, each page returns an "after" string that tells you how to get the next page. Therefore, the program does the following in a loop:

  • Makes a request to the API.
  • Inserts the results into the table and grabs the "after" string.
  • Uses the new "after" string as the basis for the next request.

Download the json-sample.go file, or create the file yourself and copy the Go code into it:
  package main

  import (
      "database/sql"
      "fmt"
      "io/ioutil"
      "net/http"
      "time"

      _ "github.com/lib/pq"
  )

  func main() {
      db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257")
      if err != nil {
          panic(err)
      }
      defer db.Close()

      // The Reddit API wants us to tell it where to start from. For the first
      // request we just say "null" to mean "from the start"; subsequent
      // requests will use the value received from the last call.
      after := "null"

      for i := 0; i < 300; i++ {
          after, err = makeReq(db, after)
          if err != nil {
              panic(err)
          }

          // Reddit limits us to 30 requests per minute, so don't do any more
          // than that.
          time.Sleep(2 * time.Second)
      }
  }

  func makeReq(db *sql.DB, after string) (string, error) {
      // First, make a request to reddit using the appropriate "after" string.
      client := &http.Client{}
      req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil)
      if err != nil {
          return "", err
      }
      req.Header.Add("User-Agent", `Go`)

      resp, err := client.Do(req)
      if err != nil {
          return "", err
      }
      defer resp.Body.Close()

      res, err := ioutil.ReadAll(resp.Body)
      if err != nil {
          return "", err
      }

      // We've gotten back our JSON from reddit, so we can use a couple of SQL
      // tricks to accomplish multiple things at once.
      // The JSON reddit returns looks like this:
      // {
      //   "data": {
      //     "children": [ ... ],
      //     "after": ...
      //   }
      // }
      // We structure our query so that we extract the `children` field, and
      // then expand that and insert each individual element into the database
      // as a separate row. We then return the "after" field so we know how to
      // make the next request.
      r, err := db.Query(`
          INSERT INTO jsonb_test.programming (posts)
          SELECT json_array_elements($1->'data'->'children')
          RETURNING $1->'data'->'after'`,
          string(res))
      if err != nil {
          return "", err
      }
      defer r.Close()

      // Since we did a RETURNING, we need to grab the result of our query.
      var newAfter string
      if r.Next() {
          if err := r.Scan(&newAfter); err != nil {
              return "", err
          }
      }
      return newAfter, nil
  }

In a new terminal window, navigate to your sample code file and run it:

  $ go run json-sample.go

Alternatively, you can use Python. The program follows the same loop, except that it grabs the "after" string while decoding the JSON response, in a separate step from the insert, rather than via a RETURNING clause.

Download the json-sample.py file, or create the file yourself and copy the code into it:
  import json
  import time

  import psycopg2
  import requests

  conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257)
  conn.set_session(autocommit=True)
  cur = conn.cursor()

  # The Reddit API wants us to tell it where to start from. For the first
  # request we just say "null" to mean "from the start"; subsequent requests
  # will use the value received from the last call.
  url = "https://www.reddit.com/r/programming.json"
  after = {"after": "null"}

  for n in range(300):
      # First, make a request to reddit using the appropriate "after" string.
      req = requests.get(url, params=after, headers={"User-Agent": "Python"})

      # Decode the JSON and set "after" for the next request.
      resp = req.json()
      after = {"after": str(resp['data']['after'])}

      # Convert the JSON to a string to send to the database.
      data = json.dumps(resp)

      # The JSON reddit returns looks like this:
      # {
      #   "data": {
      #     "children": [ ... ],
      #     "after": ...
      #   }
      # }
      # We structure our query so that we extract the `children` field, and
      # then expand that and insert each individual element into the database
      # as a separate row.
      cur.execute("""INSERT INTO jsonb_test.programming (posts)
          SELECT json_array_elements(%s->'data'->'children')""", (data,))

      # Reddit limits us to 30 requests per minute, so don't do any more than
      # that.
      time.sleep(2)

  cur.close()
  conn.close()

In a new terminal window, navigate to your sample code file and run it:

  $ python json-sample.py

The program will take a while to finish, but you can start querying the data right away.

Step 7. Query the data

Back in the terminal where the SQL shell is running, verify that rows of data are being inserted into your table:

  > SELECT count(*) FROM programming;

  +-------+
  | count |
  +-------+
  |  1120 |
  +-------+

Run the same query again after a moment to see that new rows are still being inserted:

  > SELECT count(*) FROM programming;

  +-------+
  | count |
  +-------+
  |  2400 |
  +-------+
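
To get a feel for what you're storing, you can also look at a single raw document (output omitted here; each row is a complete Reddit post object):

  > SELECT posts FROM programming LIMIT 1;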

Now, use the @> containment operator to retrieve all current entries whose link points to github.com:

  > SELECT id FROM programming \
    WHERE posts @> '{"data": {"domain": "github.com"}}';

  +--------------------------------------+
  |                  id                  |
  +--------------------------------------+
  | 0036d489-3fe3-46ec-8219-2eaee151af4b |
  | 00538c2f-592f-436a-866f-d69b58e842b6 |
  | 00aff68c-3867-4dfe-82b3-2a27262d5059 |
  | 00cc3d4d-a8dd-4c9a-a732-00ed40e542b0 |
  | 00ecd1dd-4d22-4af6-ac1c-1f07f3eba42b |
  | 012de443-c7bf-461a-b563-925d34d1f996 |
  | 014c0ac8-4b4e-4283-9722-1dd6c780f7a6 |
  | 017bfb8b-008e-4df2-90e4-61573e3a3f62 |
  | 0271741e-3f2a-4311-b57f-a75e5cc49b61 |
  | 02f31c61-66a7-41ba-854e-1ece0736f06b |
  | 035f31a1-b695-46be-8b22-469e8e755a50 |
  | 03bd9793-7b1b-4f55-8cdd-99d18d6cb3ea |
  | 03e0b1b4-42c3-4121-bda9-65bcb22dcf72 |
  | 0453bc77-4349-4136-9b02-3a6353ea155e |
  ...
  +--------------------------------------+
  (334 rows)

  Time: 105.877736ms

Note:
Since you are querying live data, your results for this and the following steps may vary from the results documented in this tutorial.
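
Containment with @> isn't the only way to filter JSONB: the -> and ->> operators drill into a document and return a JSON value or a string, respectively. As a sketch (assuming each stored post includes the data.title field that Reddit listing children normally carry), you could pull a few matching titles:

  > SELECT posts->'data'->>'title' AS title \
    FROM programming \
    WHERE posts @> '{"data": {"domain": "github.com"}}' \
    LIMIT 5;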

Step 8. Create an inverted index to optimize performance

The query in the previous step took 105.877736ms. To optimize the performance of queries that filter on the JSONB column, let's create an inverted index on the column:

  > CREATE INVERTED INDEX ON programming(posts);
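
To confirm that the index exists, you can list the table's indexes (the exact columns shown in the output vary by CockroachDB version):

  > SHOW INDEXES FROM programming;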

Step 9. Run the query again

Now that there is an inverted index, the same query will run much faster:

  > SELECT id FROM programming \
    WHERE posts @> '{"data": {"domain": "github.com"}}';

  (334 rows)

  Time: 28.646769ms

Instead of 105.877736ms, the query now takes 28.646769ms.
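
If you want to confirm that the optimizer is actually using the inverted index rather than scanning the whole table, you can inspect the query plan. The plan output format differs across CockroachDB versions, so yours will vary:

  > EXPLAIN SELECT id FROM programming \
    WHERE posts @> '{"data": {"domain": "github.com"}}';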

What's next?

Explore other core CockroachDB benefits and features.
