Getting Started

Installation

Create a Python environment

Use conda to create a Python 3 environment. Conda is a great tool for managing Python environments; see the official docs for more information.

```shell
conda create --name Greptime python=3
conda activate Greptime
```

Install GreptimeDB

Please refer to Installation.

Hello world example

Let’s begin with a hello world example:

```python
@coprocessor(returns=['msg'])
def hello() -> vector[str]:
    return "Hello, GreptimeDB"
```

Save it as hello.py, then post it via the HTTP API:

Submit the Python Script to GreptimeDB

```sh
curl --data-binary "@hello.py" -XPOST "http://localhost:4000/v1/scripts?name=hello&db=public"
```

Then call it in SQL:

```sql
select hello();
```

```sql
+-------------------+
| hello()           |
+-------------------+
| Hello, GreptimeDB |
+-------------------+
1 row in set (1.77 sec)
```

Or call it by HTTP API:

```sh
curl -XPOST "http://localhost:4000/v1/run-script?name=hello&db=public"
```

```json
{
  "code": 0,
  "output": [
    {
      "records": {
        "schema": {
          "column_schemas": [
            {
              "name": "msg",
              "data_type": "String"
            }
          ]
        },
        "rows": [["Hello, GreptimeDB"]]
      }
    }
  ],
  "execution_time_ms": 1917
}
```
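A client can pull the column names and rows out of this response shape. A small sketch that parses the payload shown above (using only Python's standard library; the literal below mirrors the sample response, and `execution_time_ms` will vary in practice):

```python
import json

# The response body shown above, embedded as a literal for illustration.
payload = """
{
  "code": 0,
  "output": [
    {
      "records": {
        "schema": {
          "column_schemas": [
            {"name": "msg", "data_type": "String"}
          ]
        },
        "rows": [["Hello, GreptimeDB"]]
      }
    }
  ],
  "execution_time_ms": 1917
}
"""

response = json.loads(payload)
# "output" is a list of result sets; this script produces exactly one.
records = response["output"][0]["records"]
columns = [c["name"] for c in records["schema"]["column_schemas"]]
print(columns)          # column names from the schema
print(records["rows"])  # the row data
```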

The function `hello` is a coprocessor, marked by the `@coprocessor` annotation. The `returns` field in `@coprocessor` specifies the coprocessor's return column names and determines the final schema of the output:

```json
"schema": {
  "column_schemas": [
    {
      "name": "msg",
      "data_type": "String"
    }
  ]
}
```

The `-> vector[str]` part after the argument list specifies the function's return types. They are always vectors with concrete element types. The return types are required so that the output of the coprocessor function can be generated.

The function body of `hello` returns a literal string: `"Hello, GreptimeDB"`. The coprocessor engine will cast it into a vector of constant strings and return it.

In summary, a coprocessor consists of three main parts:

  • The @coprocessor annotation.
  • The function input and output.
  • The function body.

We can call a coprocessor in SQL like a SQL UDF (user-defined function), or call it via the HTTP API.

SQL example

Save your Python code for more complex analysis (like the following example, which determines a host's load status from its CPU, memory, and disk usage) into a file, here named system_status.py:

```python
@coprocessor(args=["host", "idc", "cpu_util", "memory_util", "disk_util"],
             returns=["host", "idc", "status"],
             sql="SELECT * FROM system_metrics")
def system_status(hosts, idcs, cpus, memories, disks) \
        -> (vector[str], vector[str], vector[str]):
    statuses = []
    for host, cpu, memory, disk in zip(hosts, cpus, memories, disks):
        if cpu > 80 or memory > 80 or disk > 80:
            statuses.append("red")
            continue
        status = cpu * 0.4 + memory * 0.4 + disk * 0.2
        if status > 80:
            statuses.append("red")
        elif status > 50:
            statuses.append("yellow")
        else:
            statuses.append("green")
    return hosts, idcs, statuses
```

The code above evaluates each host's status based on its CPU, memory, and disk usage. The arguments come from querying the `system_metrics` table, as specified by the `sql` parameter in the `@coprocessor` annotation (here, `"SELECT * FROM system_metrics"`). The query result is assigned to the positional arguments with the corresponding names in `args=[...]`; the function then returns three variables, which are converted back into three columns as specified by `returns=["host", "idc", "status"]`.
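The scoring logic itself is ordinary Python and can be tried outside the engine. A standalone sketch of the same rules, with plain lists standing in for GreptimeDB vectors (the function name `classify` and the sample values are illustrative only):

```python
def classify(cpus, memories, disks):
    """Apply the same rules as system_status to parallel columns of usage values."""
    statuses = []
    for cpu, memory, disk in zip(cpus, memories, disks):
        # Any single resource above 80% marks the host "red" outright.
        if cpu > 80 or memory > 80 or disk > 80:
            statuses.append("red")
            continue
        # Otherwise combine usage into a weighted score: CPU 40%, memory 40%, disk 20%.
        score = cpu * 0.4 + memory * 0.4 + disk * 0.2
        if score > 80:
            statuses.append("red")
        elif score > 50:
            statuses.append("yellow")
        else:
            statuses.append("green")
    return statuses

# Three hypothetical hosts: lightly loaded, moderately loaded, and CPU-saturated.
print(classify([30, 60, 90], [20, 70, 50], [10, 40, 20]))  # ['green', 'yellow', 'red']
```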

Submit the Python Script to GreptimeDB

You can submit the file to GreptimeDB with a script name, so you can refer to it by this name (system_status) later and execute it:

```shell
curl --data-binary "@system_status.py" \
  -XPOST "http://localhost:4000/v1/scripts?name=system_status&db=greptime.public"
```

Run the script:

```shell
curl -XPOST \
  "http://localhost:4000/v1/run-script?name=system_status&db=public"
```

The results are returned in JSON format:

```json
{
  "code": 0,
  "output": {
    "records": {
      "schema": {
        "column_schemas": [
          {
            "name": "host",
            "data_type": "String"
          },
          {
            "name": "idc",
            "data_type": "String"
          },
          {
            "name": "status",
            "data_type": "String"
          }
        ]
      },
      "rows": [
        ["host1", "idc_a", "green"],
        ["host1", "idc_b", "yellow"],
        ["host2", "idc_a", "red"]
      ]
    }
  }
}
```

For more information about the Python coprocessor, please refer to Function.