tunnel

A network tunnel to proxy network requests between cloud and edge.

Tunnel acts as the bridge between the edge and the cloud. It consists of tunnel-cloud and tunnel-edge, which together maintain a persistent cloud-to-edge network connection. It allows edge nodes without a public IP to be managed by Kubernetes on the cloud, providing unified and centralized operation and maintenance.

Architecture Diagram

[tunnel architecture diagram]

Implementation

Node Registration

  • The tunnel-edge on the edge node actively connects to the tunnel-cloud service, and the service forwards the request to a tunnel-cloud pod according to its load-balancing policy.
  • After tunnel-edge establishes a gRPC connection with tunnel-cloud, tunnel-cloud writes the mapping between its pod IP and the name of the node where tunnel-edge runs into DNS, as sketched below. If the gRPC connection is disconnected, tunnel-cloud deletes the pod IP and node name mapping.
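
The mapping written by the DNS component is a hosts-style table, one line per connected node. The following Go sketch is only an illustration of that idea with hypothetical names (hostsCache, RenderHosts); it is not the actual SuperEdge implementation.

package main

import (
	"fmt"
	"sort"
	"strings"
)

// hostsCache is a hypothetical in-memory copy of the node-name -> tunnel-cloud pod IP
// mapping that the stream module's DNS component maintains for connected tunnel-edges.
var hostsCache = map[string]string{
	"edge-node-1": "10.244.0.5",
	"edge-node-2": "10.244.0.5",
}

// RenderHosts turns the mapping into coredns hosts-plugin entries ("ip nodename").
func RenderHosts(nodes map[string]string) string {
	names := make([]string, 0, len(nodes))
	for name := range nodes {
		names = append(names, name)
	}
	sort.Strings(names) // stable output so the configmap does not churn
	var b strings.Builder
	for _, name := range names {
		fmt.Fprintf(&b, "%s %s\n", nodes[name], name)
	}
	return b.String()
}

func main() {
	// In tunnel-cloud, content like this ends up in the configmap of the
	// coredns hosts plugin (the "hosts" path under [mode.cloud.stream.dns]).
	fmt.Print(RenderHosts(hostsCache))
}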

Cloud Request Forwarding

  • When the apiserver or another cloud application accesses the kubelet or another application on an edge node, tunnel-dns uses DNS hijacking (resolving the node name in the host field to the pod IP of tunnel-cloud) to forward the request to a tunnel-cloud pod.
  • The tunnel-cloud forwards the request, according to the node name, onto the gRPC connection established with the tunnel-edge on that node.
  • The tunnel-edge requests the target application on the edge node according to the received request information (a minimal dispatch sketch follows this list).
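
The core of the cloud-side forwarding step is a lookup of the gRPC connection by node name. The sketch below is a simplified stand-in with hypothetical types (EdgeConn, Register, Forward), not the real stream module.

package main

import (
	"fmt"
	"sync"
)

// EdgeConn stands in for the long-lived gRPC stream that tunnel-cloud holds
// for one connected tunnel-edge.
type EdgeConn struct{ NodeName string }

func (c *EdgeConn) Send(req string) { fmt.Printf("-> %s: %s\n", c.NodeName, req) }

// registry maps node names to their connections: tunnel-cloud picks the gRPC
// connection by the node name that tunnel-dns resolved for the request.
var (
	mu       sync.RWMutex
	registry = map[string]*EdgeConn{}
)

func Register(c *EdgeConn) { mu.Lock(); registry[c.NodeName] = c; mu.Unlock() }

func Forward(node, req string) error {
	mu.RLock()
	conn, ok := registry[node]
	mu.RUnlock()
	if !ok {
		return fmt.Errorf("no tunnel-edge connection for node %q", node)
	}
	conn.Send(req)
	return nil
}

func main() {
	Register(&EdgeConn{NodeName: "edge-node-1"})
	// A request whose host resolved (via tunnel-dns) to the tunnel-cloud pod is
	// dispatched onto the stream for that node name.
	_ = Forward("edge-node-1", "GET https://edge-node-1:10250/healthz")
}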

Configuration File

The tunnel component consists of tunnel-cloud and tunnel-edge. The tunnel-edge running on the edge node establishes a long-lived gRPC connection with the tunnel-cloud running on the cloud, which is used to forward requests from the cloud to the edge node.

Tunnel-cloud

The tunnel-cloud contains three modules: stream, TCP, and HTTPS. The stream module consists of the gRPC server and the DNS component. The gRPC server receives long-lived gRPC connection requests from tunnel-edge, and the DNS component writes the node-name-to-IP mappings held in tunnel-cloud memory into the configmap of the coredns hosts plugin.

Tunnel-cloud Configuration

tunnel-cloud-conf.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-cloud-conf
  namespace: edge-system
data:
  tunnel_cloud.toml: |
    [mode]
      [mode.cloud]
        [mode.cloud.stream] # stream module
          [mode.cloud.stream.server] # gRPC server component
            grpcport = 9000 # listening port of the gRPC server
            logport = 8000 # listening port of the HTTP server for logs and health checks; use (curl -X PUT http://podip:logport/debug/flags/v -d "8") to set the log level
            channelzaddr = "0.0.0.0:6000" # listening address of the gRPC channelz server (https://grpc.io/blog/a-short-introduction-to-channelz/), used to obtain gRPC debugging information
            key = "../../conf/certs/cloud.key" # server-side private key of the gRPC server
            cert = "../../conf/certs/cloud.crt" # server-side certificate of the gRPC server
            tokenfile = "../../conf/token" # token list file (format nodename:token), used to verify the token sent by tunnel-edge; if no entry matches the node name, the token of the default entry is used for verification
          [mode.cloud.stream.dns] # DNS component
            configmap = "proxy-nodes" # configmap holding the configuration file of the coredns hosts plugin
            hosts = "/etc/superedge/proxy/nodes/hosts" # mount path of that configmap inside the tunnel-cloud pod
            service = "proxy-cloud-public" # tunnel-cloud service name
            debug = true # DNS component switch; with debug=true the DNS component is disabled and the node name mappings in tunnel-cloud memory are not written to the configmap of the coredns hosts plugin; default is false
        [mode.cloud.tcp] # TCP module
          "0.0.0.0:6443" = "127.0.0.1:6443" # format "0.0.0.0:cloudPort" = "EdgeServerIp:EdgeServerPort"; cloudPort is the listening port of the tunnel-cloud TCP module, EdgeServerIp and EdgeServerPort are the IP and port of the edge-node server the proxy forwards to
        [mode.cloud.https] # HTTPS module
          cert = "../../conf/certs/kubelet.crt" # server certificate of the HTTPS module
          key = "../../conf/certs/kubelet.key" # server private key of the HTTPS module
          [mode.cloud.https.addr] # format "httpsServerPort" = "EdgeHttpsServerIp:EdgeHttpsServerPort"; httpsServerPort is the listening port of the HTTPS module, EdgeHttpsServerIp:EdgeHttpsServerPort is the IP and port of the edge-node HTTPS server the proxy forwards to; the HTTPS module skips client certificate verification, so (curl -k https://podip:httpsServerPort) can be used to access the listening port; addr is a map and supports listening on multiple ports
            "10250" = "101.206.162.213:10250"

Tunnel-edge

The tunnel-edge likewise contains three modules: stream, TCP, and HTTPS. Its stream module includes the gRPC client component, which is used to initiate long-lived gRPC connections to the tunnel-cloud.

Tunnel-edge Configuration

tunnel-edge-conf.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-edge-conf
  namespace: edge-system
data:
  tunnel_edge.toml: |
    [mode]
      [mode.edge]
        [mode.edge.stream] # stream module
          [mode.edge.stream.client] # gRPC client component
            token = "6ff2a1ea0f1611eb9896362096106d9d" # authentication token used to access tunnel-cloud
            cert = "../../conf/certs/ca.crt" # CA certificate used to verify the server-side certificate of the tunnel-cloud gRPC server
            dns = "localhost" # IP or domain name that the tunnel-cloud gRPC server certificate is signed for
            servername = "localhost:9000" # IP and port of the tunnel-cloud gRPC server
            logport = 7000 # listening port of the HTTP server for logs and health checks; use (curl -X PUT http://podip:logport/debug/flags/v -d "8") to set the log level
            channelzaddr = "0.0.0.0:5000" # listening address of the gRPC channelz server, used to obtain gRPC debugging information
        [mode.edge.https]
          cert = "../../conf/certs/kubelet-client.crt" # client certificate for the edge-node HTTPS server that tunnel-cloud forwards to
          key = "../../conf/certs/kubelet-client.key" # client private key for the edge-node HTTPS server that tunnel-cloud forwards to

Tunnel Forwarding Mode

Tunnel proxy supports either TCP or HTTPS request forwarding.

TCP Forwarding

The TCP module forwards TCP requests to the first edge node that connected to the cloud. When only one tunnel-edge is connected to the tunnel-cloud, requests are forwarded to the node where that tunnel-edge is located.

Tunnel-cloud Configuration

tunnel-cloud-conf.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-cloud-conf
  namespace: edge-system
data:
  tunnel_cloud.toml: |
    [mode]
      [mode.cloud]
        [mode.cloud.stream]
          [mode.cloud.stream.server]
            grpcport = 9000
            key = "/etc/superedge/tunnel/certs/tunnel-cloud-server.key"
            cert = "/etc/superedge/tunnel/certs/tunnel-cloud-server.crt"
            tokenfile = "/etc/superedge/tunnel/token/token"
            logport = 51000
          [mode.cloud.stream.dns]
            debug = true
        [mode.cloud.tcp]
          "0.0.0.0:6443" = "127.0.0.1:6443"
        [mode.cloud.https]

The gRPC server of the tunnel-cloud listens on port 9000 and waits for tunnel-edge to establish a long-lived gRPC connection. Requests sent to port 6443 of the tunnel-cloud are forwarded to the edge-node server listening on 127.0.0.1:6443.
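
From the cloud side this looks like an ordinary TCP connection to the tunnel-cloud pod. The sketch below is a hedged illustration; the pod IP is a placeholder, and the "ping" payload merely stands for whatever protocol the edge server actually speaks.

package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"time"
)

// tunnelCloudAddr must point at a tunnel-cloud pod; 6443 is the cloudPort from the
// [mode.cloud.tcp] mapping above. The IP here is a placeholder, not a real address.
const tunnelCloudAddr = "10.244.0.5:6443"

func main() {
	// Dialing the TCP module's cloudPort is equivalent to dialing 127.0.0.1:6443
	// on the edge node where tunnel-edge runs.
	conn, err := net.DialTimeout("tcp", tunnelCloudAddr, 5*time.Second)
	if err != nil {
		log.Fatalf("dial tunnel-cloud TCP module: %v", err)
	}
	defer conn.Close()

	// Bytes flow through the tunnel unchanged; push one line and read one back.
	fmt.Fprintln(conn, "ping")
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		log.Fatalf("read reply: %v", err)
	}
	fmt.Print(reply)
}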

tunnel-cloud.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-cloud-conf
  namespace: edge-system
data:
  mode.toml: |
    {{tunnel-cloud-tcp.toml}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-cloud-token
  namespace: edge-system
data:
  token: |
    default:{{.TunnelCloudEdgeToken}}
---
apiVersion: v1
data:
  tunnel-cloud-server.crt: '{{tunnel-cloud-server.crt}}'
  tunnel-cloud-server.key: '{{tunnel-cloud-server.key}}'
kind: Secret
metadata:
  name: tunnel-cloud-cert
  namespace: edge-system
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: tunnel-cloud
  namespace: edge-system
spec:
  ports:
    - name: proxycloud
      port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    app: tunnel-cloud
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tunnel-cloud
  name: tunnel-cloud
  namespace: edge-system
spec:
  selector:
    matchLabels:
      app: tunnel-cloud
  template:
    metadata:
      labels:
        app: tunnel-cloud
    spec:
      serviceAccount: tunnel-cloud
      serviceAccountName: tunnel-cloud
      containers:
        - name: tunnel-cloud
          image: superedge/tunnel:v0.2.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /cloud/healthz
              port: 51010
            initialDelaySeconds: 10
            periodSeconds: 60
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 1
          command:
            - /usr/local/bin/tunnel
          args:
            - --m=cloud
            - --c=/etc/superedge/tunnel/conf/mode.toml
            - --log-dir=/var/log/tunnel
            - --alsologtostderr
          volumeMounts:
            - name: token
              mountPath: /etc/superedge/tunnel/token
            - name: certs
              mountPath: /etc/superedge/tunnel/certs
            - name: conf
              mountPath: /etc/superedge/tunnel/conf
          ports:
            - containerPort: 9000
              name: tunnel
              protocol: TCP
            - containerPort: 6443
              name: apiserver
              protocol: TCP
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 20Mi
      volumes:
        - name: token
          configMap:
            name: tunnel-cloud-token
        - name: certs
          secret:
            secretName: tunnel-cloud-cert
        - name: conf
          configMap:
            name: tunnel-cloud-conf
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"

The TunnelCloudEdgeToken in the tunnel-cloud-token configmap is a random string used to verify the tunnel-edge; the tunnel-cloud-cert secret holds the server-side certificate and private key of the gRPC server.

Tunnel-edge Configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-edge-conf
  namespace: edge-system
data:
  tunnel_edge.toml: |
    [mode]
      [mode.edge]
        [mode.edge.stream]
          [mode.edge.stream.client]
            token = "{{.TunnelCloudEdgeToken}}"
            cert = "/etc/superedge/tunnel/certs/tunnel-ca.crt"
            dns = "{{ServerName}}"
            servername = "{{.MasterIP}}:9000"
            logport = 51000

The tunnel-edge uses MasterIP:9000 to access the tunnel-cloud and sends TunnelCloudEdgeToken to the cloud as its verification token. The token must match the TunnelCloudEdgeToken in the tunnel-cloud-token configmap used by the tunnel-cloud deployment; dns is the domain name or IP that the tunnel-cloud gRPC server certificate is signed for; MasterIP is the IP of the node where the tunnel-cloud is located, and 9000 is the NodePort of the tunnel-cloud service.

tunnel-edge.yaml

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tunnel-edge
rules:
  - apiGroups: [ "" ]
    resources: [ "configmaps" ]
    verbs: [ "get" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tunnel-edge
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tunnel-edge
subjects:
  - kind: ServiceAccount
    name: tunnel-edge
    namespace: edge-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tunnel-edge
  namespace: edge-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-edge-conf
  namespace: edge-system
data:
  mode.toml: |
    {{tunnel-edge-conf}}
---
apiVersion: v1
data:
  tunnel-ca.crt: '{{.tunnel-ca.crt}}'
kind: Secret
metadata:
  name: tunnel-edge-cert
  namespace: edge-system
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tunnel-edge
  namespace: edge-system
spec:
  selector:
    matchLabels:
      app: tunnel-edge
  template:
    metadata:
      labels:
        app: tunnel-edge
    spec:
      hostNetwork: true
      containers:
        - name: tunnel-edge
          image: superedge/tunnel:v0.2.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /edge/healthz
              port: 51010
            initialDelaySeconds: 10
            periodSeconds: 180
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              cpu: 20m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 10Mi
          command:
            - /usr/local/bin/tunnel
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          args:
            - --m=edge
            - --c=/etc/superedge/tunnel/conf/tunnel_edge.toml
            - --log-dir=/var/log/tunnel
            - --alsologtostderr
          volumeMounts:
            - name: certs
              mountPath: /etc/superedge/tunnel/certs
            - name: conf
              mountPath: /etc/superedge/tunnel/conf
      volumes:
        - secret:
            secretName: tunnel-edge-cert
          name: certs
        - configMap:
            name: tunnel-edge-conf
          name: conf

The tunnel-edge-cert secret holds the CA certificate used to verify the gRPC server certificate. Tunnel-edge is deployed as a Deployment with one replica, because TCP forwarding currently only supports forwarding to a single node.

HTTPS Forwarding

To forward cloud requests to edge nodes through the tunnel, the edge node name must be used as the host (domain name) of the HTTPS request, and domain name resolution can reuse tunnel-coredns. To use HTTPS forwarding, three components need to be deployed: tunnel-cloud, tunnel-edge, and tunnel-coredns. A client-side sketch of such a request is shown below.
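
The sketch below shows a hedged client-side view of HTTPS forwarding: the request is addressed to the edge node name (here the hypothetical "edge-node-1"), which tunnel-coredns resolves to the tunnel-cloud pod; certificate handling is simplified and only illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Address the edge kubelet by the edge node name, just as the apiserver does.
	// With tunnel-coredns as the resolver, "edge-node-1" resolves to the
	// tunnel-cloud pod IP, and the HTTPS module forwards to port 10250 on that node.
	client := &http.Client{
		Transport: &http.Transport{
			// The HTTPS module presents its own server certificate, so TLS
			// verification is skipped here purely for illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://edge-node-1:10250/healthz")
	if err != nil {
		log.Fatalf("request via tunnel failed: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, body)
}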

Tunnel-cloud Configuration

tunnel-cloud-conf.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-cloud-conf
  namespace: edge-system
data:
  tunnel_cloud.toml: |
    [mode]
      [mode.cloud]
        [mode.cloud.stream]
          [mode.cloud.stream.server]
            grpcport = 9000
            logport = 51010
            key = "/etc/superedge/tunnel/certs/tunnel-cloud-server.key"
            cert = "/etc/superedge/tunnel/certs/tunnel-cloud-server.crt"
            tokenfile = "/etc/superedge/tunnel/token/token"
          [mode.cloud.stream.dns]
            configmap = "tunnel-nodes"
            hosts = "/etc/superedge/tunnel/nodes/hosts"
            service = "tunnel-cloud"
        [mode.cloud.https]
          cert = "/etc/superedge/tunnel/certs/apiserver-kubelet-server.crt"
          key = "/etc/superedge/tunnel/certs/apiserver-kubelet-server.key"
          [mode.cloud.https.addr]
            "10250" = "127.0.0.1:10250"

The gRPC server of the tunnel-cloud listens on port 9000 and waits for tunnel-edge to establish a long-lived gRPC connection. Requests sent to port 10250 of the tunnel-cloud are forwarded to the edge-node server listening on 127.0.0.1:10250.

tunnel-cloud.yaml

Tunnel-edge Configuration

tunnel-edge-conf.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-edge-conf
  namespace: edge-system
data:
  tunnel_edge.toml: |
    [mode]
      [mode.edge]
        [mode.edge.stream]
          [mode.edge.stream.client]
            token = "{{.TunnelCloudEdgeToken}}"
            cert = "/etc/superedge/tunnel/certs/cluster-ca.crt"
            dns = "tunnel.cloud.io"
            servername = "{{.MasterIP}}:9000"
            logport = 51000
        [mode.edge.https]
          cert = "/etc/superedge/tunnel/certs/apiserver-kubelet-client.crt"
          key = "/etc/superedge/tunnel/certs/apiserver-kubelet-client.key"

The certificate and private key of the tunnel-edge HTTPS module are the client certificate and key matching the server-side certificate of the edge-node server that tunnel-cloud forwards to. For example, when tunnel-cloud forwards requests from the apiserver to the kubelet, the client certificate and private key corresponding to the server certificate of the kubelet's port 10250 must be configured.

tunnel-edge.yaml

Local Debugging

Tunnel supports HTTPS (HTTPS module) and TCP (TCP module). The data of both protocol modules is transmitted over the long-lived gRPC connection (stream module), so the modules can be debugged locally one at a time. Local debugging uses Go's testing framework. The configuration file can be generated by calling the Test_Config test method in config_test, where the constant config_path is the path of the generated configuration file relative to the config_test Go file, and main_path is the path of the configuration file relative to the test file of the module being debugged. For example, for local debugging of the stream module: config_path = "../../../conf" (the generated configuration file is placed in the conf folder under the project root directory), so main_path = "../../../../conf" (the path of stream_test relative to conf). Configuration generation also supports supplying your own ca.crt and ca.key: when configpath/certs/ca.crt and configpath/certs/ca.key exist, that CA is used to issue the certificates.
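
The following is a hedged sketch of a configuration-generation test in the spirit of Test_Config described above; the constants, file names, and generated content are illustrative only, and the real test in the tunnel repository derives them from its own templates.

package main

import (
	"os"
	"path/filepath"
	"testing"
)

const (
	config_path = "../../../conf"    // generated configuration files, relative to this test file
	main_path   = "../../../../conf" // the same directory, relative to the module test that will load it
)

func Test_Config(t *testing.T) {
	if err := os.MkdirAll(filepath.Join(config_path, "certs"), 0o755); err != nil {
		t.Fatalf("create conf dir: %v", err)
	}
	// If configpath/certs/ca.crt and ca.key already exist, a real generator would
	// reuse that CA to issue the tunnel certificates instead of creating a new one.
	if _, err := os.Stat(filepath.Join(config_path, "certs", "ca.crt")); err == nil {
		t.Log("existing CA found; it would be used to sign the generated certificates")
	}
	// Placeholder content standing in for the generated cloud_mode.toml.
	cloudMode := []byte("[mode]\n  [mode.cloud]\n    [mode.cloud.stream]\n")
	if err := os.WriteFile(filepath.Join(config_path, "cloud_mode.toml"), cloudMode, 0o644); err != nil {
		t.Fatalf("write cloud_mode.toml: %v", err)
	}
}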

stream module debugging

start of the stream server

func Test_StreamServer(t *testing.T) {
	err := conf.InitConf(util.CLOUD, "../../../../conf/cloud_mode.toml")
	if err != nil {
		t.Errorf("failed to initialize stream server configuration file err = %v", err)
		return
	}
	model.InitModules(util.CLOUD)
	InitStream(util.CLOUD)
	model.LoadModules(util.CLOUD)
	context.GetContext().RegisterHandler(util.MODULE_DEBUG, util.STREAM, StreamDebugHandler)
	model.ShutDown()
}
Load the configuration file (conf.InitConf) -> initialize the modules (model.InitModules) -> initialize the stream module (InitStream) -> load the initialized modules (model.LoadModules) -> register a custom handler (StreamDebugHandler) -> shut down the modules (model.ShutDown).

StreamDebugHandler is a custom handler for debugging cloud-edge message passing.

start of the stream client

func Test_StreamClient(t *testing.T) {
	os.Setenv(util.NODE_NAME_ENV, "node1")
	err := conf.InitConf(util.EDGE, "../../../../conf/edge_mode.toml")
	if err != nil {
		t.Errorf("failed to initialize stream client configuration file err = %v", err)
		return
	}
	model.InitModules(util.EDGE)
	InitStream(util.EDGE)
	model.LoadModules(util.EDGE)
	context.GetContext().RegisterHandler(util.MODULE_DEBUG, util.STREAM, StreamDebugHandler)
	go func() {
		running := true
		for running {
			node := context.GetContext().GetNode(os.Getenv(util.NODE_NAME_ENV))
			if node != nil {
				node.Send2Node(&proto.StreamMsg{
					Node:     os.Getenv(util.NODE_NAME_ENV),
					Category: util.STREAM,
					Type:     util.MODULE_DEBUG,
					Topic:    uuid.NewV4().String(),
					Data:     []byte{'c'},
				})
			}
			time.Sleep(10 * time.Second)
		}
	}()
	model.ShutDown()
}
Set the node name environment variable -> load the configuration file (conf.InitConf) -> initialize the modules (model.InitModules) -> initialize the stream module (InitStream) -> load the initialized modules (model.LoadModules) -> register a custom handler (StreamDebugHandler) -> shut down the modules (model.ShutDown).

The node name is loaded from the NODE_NAME environment variable.

TCP module debugging

TCP server debugging

func Test_TcpServer(t *testing.T) {
	err := conf.InitConf(util.CLOUD, "../../../../conf/cloud_mode.toml")
	if err != nil {
		t.Errorf("failed to initialize stream server configuration file err = %v", err)
		return
	}
	model.InitModules(util.CLOUD)
	InitTcp()
	stream.InitStream(util.CLOUD)
	model.LoadModules(util.CLOUD)
	model.ShutDown()
}

Both the TCP module (InitTcp) and the stream module (stream.InitStream) need to be initialized.

TCP client debugging

func Test_TcpClient(t *testing.T) {
	os.Setenv(util.NODE_NAME_ENV, "node1")
	err := conf.InitConf(util.EDGE, "../../../../conf/edge_mode.toml")
	if err != nil {
		t.Errorf("failed to initialize stream client configuration file err = %v", err)
		return
	}
	model.InitModules(util.EDGE)
	InitTcp()
	stream.InitStream(util.EDGE)
	model.LoadModules(util.EDGE)
	model.ShutDown()
}

HTTPS module debugging

Similar to TCP module debugging, the HTTPS module and the stream module need to be loaded at the same time.
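
A hedged sketch of what HTTPS-module debugging could look like, mirroring Test_TcpServer above and shown in the same excerpt style (imports omitted). The initializer name InitHttps is an assumption made here by analogy with InitTcp; check the HTTPS module's source for the exact name.

func Test_HttpsServer(t *testing.T) {
	err := conf.InitConf(util.CLOUD, "../../../../conf/cloud_mode.toml")
	if err != nil {
		t.Errorf("failed to initialize https server configuration file err = %v", err)
		return
	}
	model.InitModules(util.CLOUD)
	InitHttps()                   // assumed HTTPS module initializer, analogous to InitTcp
	stream.InitStream(util.CLOUD) // the stream module must be loaded as well
	model.LoadModules(util.CLOUD)
	model.ShutDown()
}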

Debugging of tunnel main() function

In the tunnel's main test file, tunnel_test, init() is used to set the parameters, and TestMain is used to parse the parameters and call the test methods.
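
The sketch below illustrates that pattern in a self-contained form. The flag names follow the container args shown earlier (--m, --c); the real tunnel_test may register and set its parameters differently.

package main

import (
	"flag"
	"os"
	"testing"
)

// Hypothetical flags standing in for the tunnel's real command-line parameters.
var (
	mode     = flag.String("m", "", "running mode of tunnel: cloud or edge")
	confPath = flag.String("c", "", "path of the tunnel configuration file")
)

// init presets the arguments so that `go test` behaves like an invocation of the
// tunnel binary with --m and --c.
func init() {
	os.Args = append(os.Args, "-m=cloud", "-c=../../../conf/cloud_mode.toml")
}

// TestMain parses the parameters set in init() and then calls the test methods
// (for example Test_StreamServer above).
func TestMain(m *testing.M) {
	flag.Parse()
	_ = *mode
	_ = *confPath
	os.Exit(m.Run())
}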
