Windows-based front proxy

Requirements

Sandbox environment

Set up your sandbox environment with Docker and Docker Compose, and clone the Envoy repository with Git.

To get a flavor of what Envoy has to offer on Windows, we are releasing a Docker Compose sandbox that deploys a front Envoy and a couple of services (simple Flask apps), each colocated with a running service Envoy.

The three containers will be deployed inside a virtual network called envoymesh.

Below you can see a graphic showing the Docker Compose deployment:

../../_images/docker_compose_front_proxy.svg

All incoming requests are routed via the front Envoy, which acts as a reverse proxy sitting on the edge of the envoymesh network. Ports 8080, 8443, and 8003 are exposed by Docker Compose (see docker-compose.yaml) to handle HTTP calls, HTTPS calls, and requests to /admin, respectively.

Moreover, notice that all traffic routed by the front Envoy to the service containers is actually routed to the service Envoys (routes are set up in front-envoy.yaml).

In turn, the service Envoys route the request to the Flask app via the loopback address (routes are set up in service-envoy.yaml). This setup illustrates the advantage of running service Envoys colocated with your services: all requests are handled by the service Envoy and efficiently routed to your services.
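As a rough illustration of this pattern, a loopback-only service might look like the stdlib sketch below. This is a hypothetical stand-in, not the sandbox's actual Flask code; `SERVICE_NAME` and `make_server` are names invented here for illustration.

```python
# Illustrative stand-in for the sandbox's Flask services (hypothetical,
# not the example's actual code): a loopback-only HTTP app that the
# colocated service Envoy fronts.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE_NAME = "1"  # hypothetical; the real sandbox configures this per container


class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        hostname = socket.gethostname()
        try:
            resolved = socket.gethostbyname(hostname)
        except OSError:
            resolved = "127.0.0.1"  # fall back if the hostname does not resolve
        body = (f"Hello from behind Envoy (service {SERVICE_NAME})! "
                f"hostname: {hostname} resolvedhostname: {resolved}").encode()
        self.send_response(200)
        self.send_header("content-type", "text/html; charset=utf-8")
        self.send_header("content-length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the console quiet


def make_server(port: int = 8000) -> HTTPServer:
    # Bind to loopback only: external traffic must arrive via the service Envoy.
    return HTTPServer(("127.0.0.1", port), ServiceHandler)
```

Calling `make_server(8000).serve_forever()` would serve only on the loopback address, which is exactly why all external traffic has to pass through the colocated service Envoy.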

Step 1: Start all of our containers

Change to the examples/win32-front-proxy directory.

  PS> $PWD
  D:\envoy\examples\win32-front-proxy
  PS> docker-compose build --pull
  PS> docker-compose up -d
  PS> docker-compose ps

  Name                              Command                         State  Ports
  ------------------------------------------------------------------------------------------------------------------------------------------------------------
  envoy-front-proxy_front-envoy_1   powershell.exe ./start_env ...  Up     10000/tcp, 0.0.0.0:8003->8003/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8443->8443/tcp
  envoy-front-proxy_service1_1      powershell.exe ./start_ser ...  Up     10000/tcp
  envoy-front-proxy_service2_1      powershell.exe ./start_ser ...  Up     10000/tcp

Step 2: Test Envoy’s routing capabilities

You can now send a request to both services via the front Envoy.

For service1:

  PS> curl -v localhost:8080/service/1
  * Trying ::1...
  * TCP_NODELAY set
  * Trying 127.0.0.1...
  * TCP_NODELAY set
  * Connected to localhost (127.0.0.1) port 8080 (#0)
  > GET /service/1 HTTP/1.1
  > Host: localhost:8080
  > User-Agent: curl/7.55.1
  > Accept: */*
  >
  < HTTP/1.1 200 OK
  < content-type: text/html; charset=utf-8
  < content-length: 92
  < server: envoy
  < date: Wed, 05 May 2021 05:55:55 GMT
  < x-envoy-upstream-service-time: 18
  <
  Hello from behind Envoy (service 1)! hostname: 8a45bba91d83 resolvedhostname: 172.30.97.237
  * Connection #0 to host localhost left intact

For service2:

  PS> curl -v localhost:8080/service/2
  * Trying ::1...
  * TCP_NODELAY set
  * Trying 127.0.0.1...
  * TCP_NODELAY set
  * Connected to localhost (127.0.0.1) port 8080 (#0)
  > GET /service/2 HTTP/1.1
  > Host: localhost:8080
  > User-Agent: curl/7.55.1
  > Accept: */*
  >
  < HTTP/1.1 200 OK
  < content-type: text/html; charset=utf-8
  < content-length: 93
  < server: envoy
  < date: Wed, 05 May 2021 05:57:03 GMT
  < x-envoy-upstream-service-time: 14
  <
  Hello from behind Envoy (service 2)! hostname: 51e28eb3c8b8 resolvedhostname: 172.30.109.113
  * Connection #0 to host localhost left intact

Notice that each request, while sent to the front Envoy, was correctly routed to the respective application.

We can also use HTTPS to call services behind the front Envoy. For example, calling service1:

  PS> curl https://localhost:8443/service/1 -k -v
  * Trying ::1...
  * TCP_NODELAY set
  * Trying 127.0.0.1...
  * TCP_NODELAY set
  * Connected to localhost (127.0.0.1) port 8443 (#0)
  * schannel: SSL/TLS connection with localhost port 8443 (step 1/3)
  * schannel: disabled server certificate revocation checks
  * schannel: verifyhost setting prevents Schannel from comparing the supplied target name with the subject names in server certificates.
  * schannel: sending initial handshake data: sending 171 bytes...
  * schannel: sent initial handshake data: sent 171 bytes
  * schannel: SSL/TLS connection with localhost port 8443 (step 2/3)
  * schannel: failed to receive handshake, need more data
  * schannel: SSL/TLS connection with localhost port 8443 (step 2/3)
  * schannel: encrypted data got 1081
  * schannel: encrypted data buffer: offset 1081 length 4096
  * schannel: sending next handshake data: sending 93 bytes...
  * schannel: SSL/TLS connection with localhost port 8443 (step 2/3)
  * schannel: encrypted data got 258
  * schannel: encrypted data buffer: offset 258 length 4096
  * schannel: SSL/TLS handshake complete
  * schannel: SSL/TLS connection with localhost port 8443 (step 3/3)
  * schannel: stored credential handle in session cache
  > GET /service/1 HTTP/1.1
  > Host: localhost:8443
  > User-Agent: curl/7.55.1
  > Accept: */*
  >
  * schannel: client wants to read 102400 bytes
  * schannel: encdata_buffer resized 103424
  * schannel: encrypted data buffer: offset 0 length 103424
  * schannel: encrypted data got 286
  * schannel: encrypted data buffer: offset 286 length 103424
  * schannel: decrypted data length: 257
  * schannel: decrypted data added: 257
  * schannel: decrypted data cached: offset 257 length 102400
  * schannel: encrypted data buffer: offset 0 length 103424
  * schannel: decrypted data buffer: offset 257 length 102400
  * schannel: schannel_recv cleanup
  * schannel: decrypted data returned 257
  * schannel: decrypted data buffer: offset 0 length 102400
  < HTTP/1.1 200 OK
  < content-type: text/html; charset=utf-8
  < content-length: 92
  < server: envoy
  < date: Wed, 05 May 2021 05:57:45 GMT
  < x-envoy-upstream-service-time: 3
  <
  Hello from behind Envoy (service 1)! hostname: 8a45bba91d83 resolvedhostname: 172.30.97.237
  * Connection #0 to host localhost left intact
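For comparison, a rough Python equivalent of `curl -k` might look like the sketch below. Verification is disabled because the sandbox serves a self-signed certificate; never do this against production endpoints. The `fetch_insecure` helper is a name invented here for illustration.

```python
# Rough Python equivalent of `curl -k` for the sandbox's self-signed
# certificate. Disabling verification is only acceptable for local testing.
import ssl
import urllib.request

insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False          # skip hostname matching
insecure_ctx.verify_mode = ssl.CERT_NONE     # skip chain verification


def fetch_insecure(url: str) -> str:
    # e.g. fetch_insecure("https://localhost:8443/service/1") with the sandbox up
    with urllib.request.urlopen(url, context=insecure_ctx) as resp:
        return resp.read().decode()
```

This mirrors what curl's `-k` flag does via Schannel in the transcript above: the TLS handshake completes, but the server certificate is not validated.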

Step 3: Test Envoy’s load balancing capabilities

Now let’s scale up our service1 nodes to demonstrate the load balancing abilities of Envoy:

  PS> docker-compose scale service1=3
  Creating and starting example_service1_2 ... done
  Creating and starting example_service1_3 ... done

Now if we send a request to service1 multiple times, the front Envoy will load balance the requests by doing a round robin across the three service1 instances:

  PS> curl -v localhost:8080/service/1
  * Trying ::1...
  * TCP_NODELAY set
  * Trying 127.0.0.1...
  * TCP_NODELAY set
  * Connected to localhost (127.0.0.1) port 8080 (#0)
  > GET /service/1 HTTP/1.1
  > Host: localhost:8080
  > User-Agent: curl/7.55.1
  > Accept: */*
  >
  < HTTP/1.1 200 OK
  < content-type: text/html; charset=utf-8
  < content-length: 93
  < server: envoy
  < date: Wed, 05 May 2021 05:58:40 GMT
  < x-envoy-upstream-service-time: 22
  <
  Hello from behind Envoy (service 1)! hostname: 8d2359ee21a8 resolvedhostname: 172.30.101.143
  * Connection #0 to host localhost left intact
  PS> curl -v localhost:8080/service/1
  * Trying ::1...
  * TCP_NODELAY set
  * Trying 127.0.0.1...
  * TCP_NODELAY set
  * Connected to localhost (127.0.0.1) port 8080 (#0)
  > GET /service/1 HTTP/1.1
  > Host: localhost:8080
  > User-Agent: curl/7.55.1
  > Accept: */*
  >
  < HTTP/1.1 200 OK
  < content-type: text/html; charset=utf-8
  < content-length: 91
  < server: envoy
  < date: Wed, 05 May 2021 05:58:43 GMT
  < x-envoy-upstream-service-time: 11
  <
  Hello from behind Envoy (service 1)! hostname: 41e1141eebf4 resolvedhostname: 172.30.96.11
  * Connection #0 to host localhost left intact
  PS> curl -v localhost:8080/service/1
  * Trying ::1...
  * TCP_NODELAY set
  * Trying 127.0.0.1...
  * TCP_NODELAY set
  * Connected to localhost (127.0.0.1) port 8080 (#0)
  > GET /service/1 HTTP/1.1
  > Host: localhost:8080
  > User-Agent: curl/7.55.1
  > Accept: */*
  >
  < HTTP/1.1 200 OK
  < content-type: text/html; charset=utf-8
  < content-length: 92
  < server: envoy
  < date: Wed, 05 May 2021 05:58:44 GMT
  < x-envoy-upstream-service-time: 7
  <
  Hello from behind Envoy (service 1)! hostname: 8a45bba91d83 resolvedhostname: 172.30.97.237
  * Connection #0 to host localhost left intact
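Conceptually, the rotation seen above can be sketched as a simple round-robin picker. This is an illustration of the balancing policy, not Envoy's actual implementation; the addresses are the ones observed in the responses above.

```python
# Round-robin endpoint selection, mirroring how the front Envoy rotates
# across the three service1 backends above (conceptual sketch only).
from itertools import cycle

# The three service1 addresses observed in the curl responses above.
endpoints = ["172.30.101.143", "172.30.96.11", "172.30.97.237"]
picker = cycle(endpoints)

first_six = [next(picker) for _ in range(6)]
# Each endpoint is picked in turn, then the rotation repeats.
```

In practice Envoy's round-robin policy also accounts for host health and weights, but the basic rotation across healthy endpoints is what the transcript demonstrates.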

Step 4: Enter containers and curl services

In addition to using curl from your host machine, you can also enter the containers themselves and curl from inside them. To enter a container, use docker-compose exec &lt;container_name&gt; powershell. For example, we can enter the front-envoy container and curl the services locally:

  PS> docker-compose exec front-envoy powershell
  PS C:\> (curl -UseBasicParsing http://localhost:8080/service/1).Content
  Hello from behind Envoy (service 1)! hostname: 41e1141eebf4 resolvedhostname: 172.30.96.11
  PS C:\> (curl -UseBasicParsing http://localhost:8080/service/1).Content
  Hello from behind Envoy (service 1)! hostname: 8a45bba91d83 resolvedhostname: 172.30.97.237
  PS C:\> (curl -UseBasicParsing http://localhost:8080/service/1).Content
  Hello from behind Envoy (service 1)! hostname: 8d2359ee21a8 resolvedhostname: 172.30.101.143

Step 5: Enter container and curl admin interface

When Envoy runs, it also exposes an admin interface on a configured port.

In the example configs, the admin listener is bound to port 8003.

We can curl it to gain useful information:

  • /server_info provides information about the Envoy version you are running.

  • /stats provides statistics about the Envoy server.

In the example, we can enter the front-envoy container to query the admin interface:

  PS> docker-compose exec front-envoy powershell
  PS C:\> (curl http://localhost:8003/server_info -UseBasicParsing).Content
  {
    "version": "093e2ffe046313242144d0431f1bb5cf18d82544/1.15.0-dev/Clean/RELEASE/BoringSSL",
    "state": "LIVE",
    "hot_restart_version": "11.104",
    "command_line_options": {
      "base_id": "0",
      "use_dynamic_base_id": false,
      "base_id_path": "",
      "concurrency": 8,
      "config_path": "/etc/front-envoy.yaml",
      "config_yaml": "",
      "allow_unknown_static_fields": false,
      "reject_unknown_dynamic_fields": false,
      "ignore_unknown_dynamic_fields": false,
      "admin_address_path": "",
      "local_address_ip_version": "v4",
      "log_level": "info",
      "component_log_level": "",
      "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v",
      "log_format_escaped": false,
      "log_path": "",
      "service_cluster": "front-proxy",
      "service_node": "",
      "service_zone": "",
      "drain_strategy": "Gradual",
      "mode": "Serve",
      "disable_hot_restart": false,
      "enable_mutex_tracing": false,
      "restart_epoch": 0,
      "cpuset_threads": false,
      "disabled_extensions": [],
      "bootstrap_version": 0,
      "hidden_envoy_deprecated_max_stats": "0",
      "hidden_envoy_deprecated_max_obj_name_len": "0",
      "file_flush_interval": "10s",
      "drain_time": "600s",
      "parent_shutdown_time": "900s"
    },
    "uptime_current_epoch": "188s",
    "uptime_all_epochs": "188s"
  }
  PS C:\> (curl http://localhost:8003/stats -UseBasicParsing).Content
  cluster.service1.external.upstream_rq_200: 7
  ...
  cluster.service1.membership_change: 2
  cluster.service1.membership_total: 3
  ...
  cluster.service1.upstream_cx_http2_total: 3
  ...
  cluster.service1.upstream_rq_total: 7
  ...
  cluster.service2.external.upstream_rq_200: 2
  ...
  cluster.service2.membership_change: 1
  cluster.service2.membership_total: 1
  ...
  cluster.service2.upstream_cx_http2_total: 1
  ...
  cluster.service2.upstream_rq_total: 2
  ...

Notice that we can get the number of members in each upstream cluster, the number of requests they have fulfilled, information about HTTP ingress, and a plethora of other useful stats.
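Because /stats returns plain "name: value" text, it is easy to post-process. The small parser below is an illustrative sketch (`parse_stats` is a name invented here, not part of Envoy), assuming the simple counter/gauge line format shown above:

```python
# Minimal parser for Envoy's plain-text /stats output, which consists of
# "name: value" lines (counters and gauges carry integer values).
def parse_stats(text: str) -> dict:
    stats = {}
    for line in text.splitlines():
        name, sep, value = line.partition(": ")
        if not sep:
            continue  # skip lines that are not simple name/value pairs
        try:
            stats[name] = int(value)
        except ValueError:
            stats[name] = value  # non-numeric stats pass through unchanged
    return stats


# Two counter lines taken from the transcript above.
sample = (
    "cluster.service1.upstream_rq_total: 7\n"
    "cluster.service2.upstream_rq_total: 2\n"
)
counters = parse_stats(sample)
```

A real deployment would more likely scrape these stats via the admin interface's Prometheus endpoint, but quick parsing like this is handy when poking at the sandbox.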

See also

Envoy admin quick start guide

Quick start guide to the Envoy admin interface.