consul

Summary

Apache APISIX supports Consul as a service discovery mechanism.

Configuration for discovery client

Configuration for Consul

First of all, we need to add the following configuration in `conf/config.yaml`:

```yaml
discovery:
  consul:
    servers:                      # make sure the service name is unique in these consul servers
      - "http://127.0.0.1:8500"   # `http://127.0.0.1:8500` and `http://127.0.0.1:8600` are different clusters
      - "http://127.0.0.1:8600"   # the `consul` service is skipped by default
    token: "..."                  # if your consul cluster has enabled ACL access control, you need to specify the token
    skip_services:                # if you need to skip certain services
      - "service_a"
    timeout:
      connect: 1000               # default 2000 ms
      read: 1000                  # default 2000 ms
      wait: 60                    # default 60 sec
    weight: 1                     # default 1
    fetch_interval: 5             # default 3 sec, only takes effect when keepalive is false
    keepalive: true               # default true, use long polling to query the consul servers
    sort_type: "origin"           # default origin
    default_service:              # you can define a default service to use when lookup misses
      host: "127.0.0.1"
      port: 20999
      metadata:
        fail_timeout: 1           # default 1 ms
        weight: 1                 # default 1
        max_fails: 1              # default 1
    dump:                         # if needed, dump the registered nodes into a file when they are updated
      path: "logs/consul.dump"
      expire: 2592000             # unit sec, here is 30 days
```

You can also use a short configuration that relies on the default values:

```yaml
discovery:
  consul:
    servers:
      - "http://127.0.0.1:8500"
```

The `keepalive` option has two optional values:

  • true, the default and recommended value: use long polling to query the Consul servers
  • false, not recommended: use short polling to query the Consul servers; in this mode you can set `fetch_interval` to control the fetch interval

The `sort_type` option has four optional values:

  • origin, no sorting
  • host_sort, sort by host
  • port_sort, sort by port
  • combine_sort, sort by host first, then sort nodes with the same host by port
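As a rough illustration, the four sort types order discovered nodes as in the following standalone Python sketch (a hypothetical model for explanation only, not the actual Lua implementation inside APISIX):

```python
# Hypothetical model of the four sort_type values; the real logic lives
# in APISIX's Lua consul discovery module.
def sort_nodes(nodes, sort_type="origin"):
    if sort_type == "origin":
        return list(nodes)  # keep the order returned by Consul
    if sort_type == "host_sort":
        return sorted(nodes, key=lambda n: n["host"])
    if sort_type == "port_sort":
        return sorted(nodes, key=lambda n: n["port"])
    if sort_type == "combine_sort":
        # hosts are ordered first, and ports are ordered within each host
        return sorted(nodes, key=lambda n: (n["host"], n["port"]))
    raise ValueError("unknown sort_type: %s" % sort_type)

nodes = [
    {"host": "127.0.0.2", "port": 8000},
    {"host": "127.0.0.1", "port": 9000},
    {"host": "127.0.0.1", "port": 8000},
]
print(sort_nodes(nodes, "combine_sort"))
```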

Dump Data

When APISIX is reloaded online, the Consul module may load data from Consul more slowly than routes are loaded from etcd, so until the Consul data has been loaded successfully you may see the following log:

```
http_access_phase(): failed to set upstream: no valid upstream node
```

To address this, the Consul module provides a dump feature. On reload, it first loads the dump file before querying Consul; whenever the registered nodes in Consul are updated, it automatically dumps the upstream nodes into the file.

The `dump` option currently has three fields:

  • path, the path where the dump file is saved
    • relative paths are supported, e.g.: logs/consul.dump
    • absolute paths are supported, e.g.: /tmp/consul.dump
    • make sure the dump file's parent directory exists
    • make sure APISIX has read-write permission on the dump file, e.g. add the following configuration in conf/config.yaml:

```yaml
nginx_config: # config for rendering the template to generate nginx.conf
  user: root  # specifies the execution user of the worker process
```

  • load_on_init, default value is true
    • if true, APISIX tries to load the data from the dump file before loading data from Consul on startup, regardless of whether the dump file exists
    • if false, loading data from the dump file is skipped
    • either way, there is no need to prepare a dump file for APISIX in advance
  • expire, in seconds, avoids loading expired dump data
    • default 0, meaning the dump never expires
    • 2592000 is recommended, which is 30 days (equal to 3600 * 24 * 30)
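The expire check can be sketched as follows (an illustrative Python model of the behavior described above, not the module's actual code; the `services` and `last_update` fields match the dump file format shown later under the Show Dump File API):

```python
import json
import time

def load_dump_file(path, expire):
    """Illustrative model of the dump expire check: return the dumped
    services, or None when the file is older than `expire` seconds
    (expire == 0 means the dump never expires)."""
    with open(path) as f:
        dump = json.load(f)
    if expire != 0 and dump["last_update"] + expire < time.time():
        return None  # dump expired: fall back to querying Consul directly
    return dump["services"]

# Write a tiny dump file and read it back.
with open("/tmp/consul_example.dump", "w") as f:
    json.dump({
        "services": {"service_a": [{"host": "127.0.0.1", "port": 8000, "weight": 1}]},
        "expire": 2592000,
        "last_update": int(time.time()),
    }, f)

print(load_dump_file("/tmp/consul_example.dump", 2592000))
```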

Register HTTP API Services

Now, register the nodes into Consul:

```shell
curl -X PUT 'http://127.0.0.1:8500/v1/agent/service/register' \
  -d '{
    "ID": "service_a1",
    "Name": "service_a",
    "Tags": ["primary", "v1"],
    "Address": "127.0.0.1",
    "Port": 8000,
    "Meta": {
      "service_a_version": "4.0"
    },
    "EnableTagOverride": false,
    "Weights": {
      "Passing": 10,
      "Warning": 1
    }
  }'

curl -X PUT 'http://127.0.0.1:8500/v1/agent/service/register' \
  -d '{
    "ID": "service_a2",
    "Name": "service_a",
    "Tags": ["primary", "v1"],
    "Address": "127.0.0.1",
    "Port": 8002,
    "Meta": {
      "service_a_version": "4.0"
    },
    "EnableTagOverride": false,
    "Weights": {
      "Passing": 10,
      "Warning": 1
    }
  }'
```

In some cases, the same service name might exist in different Consul servers. To avoid confusion, use the full Consul key URL path as the service name in practice.

Upstream setting

L7

Here is an example of routing requests whose URL matches "/*" to the service named "service_a", using the `consul` discovery client in the registry:

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
{
  "uri": "/*",
  "upstream": {
    "service_name": "service_a",
    "type": "roundrobin",
    "discovery_type": "consul"
  }
}'
```

The formatted response is as follows:

```json
{
  "key": "/apisix/routes/1",
  "value": {
    "uri": "/*",
    "priority": 0,
    "id": "1",
    "upstream": {
      "scheme": "http",
      "type": "roundrobin",
      "hash_on": "vars",
      "discovery_type": "consul",
      "service_name": "service_a",
      "pass_host": "pass"
    },
    "create_time": 1669267329,
    "status": 1,
    "update_time": 1669267329
  }
}
```

You can find more usage examples in the `apisix/t/discovery/consul.t` file.

L4

Consul service discovery can also be used at L4; the configuration is similar to L7:

```shell
curl http://127.0.0.1:9180/apisix/admin/stream_routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
{
  "remote_addr": "127.0.0.1",
  "upstream": {
    "scheme": "tcp",
    "service_name": "service_a",
    "type": "roundrobin",
    "discovery_type": "consul"
  }
}'
```

You can find more usage examples in the `apisix/t/discovery/stream/consul.t` file.

Debugging API

The module also offers a control API for debugging.

Memory Dump API

```shell
GET /v1/discovery/consul/dump
```

For example:

```shell
# curl http://127.0.0.1:9090/v1/discovery/consul/dump | jq
{
  "config": {
    "fetch_interval": 3,
    "timeout": {
      "wait": 60,
      "connect": 6000,
      "read": 6000
    },
    "weight": 1,
    "servers": [
      "http://172.19.5.30:8500",
      "http://172.19.5.31:8500"
    ],
    "keepalive": true,
    "default_service": {
      "host": "172.19.5.11",
      "port": 8899,
      "metadata": {
        "fail_timeout": 1,
        "weight": 1,
        "max_fails": 1
      }
    },
    "skip_services": [
      "service_d"
    ]
  },
  "services": {
    "service_a": [
      {
        "host": "127.0.0.1",
        "port": 30513,
        "weight": 1
      },
      {
        "host": "127.0.0.1",
        "port": 30514,
        "weight": 1
      }
    ],
    "service_b": [
      {
        "host": "172.19.5.51",
        "port": 50051,
        "weight": 1
      }
    ],
    "service_c": [
      {
        "host": "127.0.0.1",
        "port": 30511,
        "weight": 1
      },
      {
        "host": "127.0.0.1",
        "port": 30512,
        "weight": 1
      }
    ]
  }
}
```
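Since the dump endpoint returns plain JSON, it is easy to post-process. For example, a small Python snippet (illustrative only, assuming output shaped like the response above) that counts the discovered nodes per service:

```python
import json

# A trimmed-down sample of the /v1/discovery/consul/dump response above.
dump = json.loads("""
{
  "services": {
    "service_a": [
      {"host": "127.0.0.1", "port": 30513, "weight": 1},
      {"host": "127.0.0.1", "port": 30514, "weight": 1}
    ],
    "service_b": [
      {"host": "172.19.5.51", "port": 50051, "weight": 1}
    ]
  }
}
""")

# Count discovered nodes per service name.
node_counts = {name: len(nodes) for name, nodes in dump["services"].items()}
print(node_counts)  # {'service_a': 2, 'service_b': 1}
```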

Show Dump File API

Another control API is available for viewing the dump file. More debugging APIs may be added in the future.

```shell
GET /v1/discovery/consul/show_dump_file
```

For example:

```shell
curl http://127.0.0.1:9090/v1/discovery/consul/show_dump_file | jq
{
  "services": {
    "service_a": [
      {
        "host": "172.19.5.12",
        "port": 8000,
        "weight": 120
      },
      {
        "host": "172.19.5.13",
        "port": 8000,
        "weight": 120
      }
    ]
  },
  "expire": 0,
  "last_update": 1615877468
}
```