Edge Ingress

This document introduces how to access Edge services through Edge Ingress in Cloud Edge scenarios. Edge services can be accessed from inside or outside the NodePools; for access from outside the NodePools, only the NodePort type ingress controller service is currently supported.

Generally, it takes only 2 steps to use the Edge Ingress feature:

  1. Enable the ingress feature on NodePools which provide your desired services.
  2. Create and apply the ingress rule as in native K8S to access your desired services.

Follow the steps below to try the Edge Ingress feature:


1. Enable the ingress feature on NodePools which provide your desired services

The YurtIngress operator is responsible for orchestrating multiple ingress controllers to the corresponding NodePools. Suppose you have created 4 NodePools in your OpenYurt cluster: pool01, pool02, pool03, pool04, and you want to enable the edge ingress feature on pool01 and pool03. You can create the YurtIngress CR as below:

1). Create the YurtIngress CR yaml file:

1.1). A simple CR definition with some default configurations:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-test
spec:
  pools:
    - name: pool01
    - name: pool03
```

The default number of nginx ingress controller replicas per pool is 1.
The default nginx ingress controller image is controller:v0.48.1 from dockerhub.
The default nginx ingress webhook certgen image is kube-webhook-certgen:v0.48.1 from dockerhub.

1.2). If users want to customize the default options, the YurtIngress CR can be defined as below:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-test
spec:
  ingress_controller_replicas_per_pool: 2
  ingress_controller_image: k8s.gcr.io/ingress-nginx/controller:v0.49.0
  ingress_webhook_certgen_image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v0.49.0
  pools:
    - name: pool01
      ingress_ips:
        - xxx.xxx.xxx.xxx
    - name: pool03
```

“ingress_ips” represents the IPs to use if users want to expose the nginx ingress controller service through externalIPs for a specified NodePool.

Notes:

a). Users can define different YurtIngress CRs for personalized configurations, for example, setting different ingress controller replicas for different NodePools.

b). In spec, the “ingress_controller_replicas_per_pool” represents the ingress controller replicas deployed on every pool. It is intended for HA usage scenarios.

c). In spec, the “pools” represents the list of pools on which you want to enable the ingress feature. Currently it supports specifying the pool name and the nginx ingress controller service externalIPs.
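As a sketch of note a), the two hypothetical CRs below (the names yurtingress-ha and yurtingress-singleton are illustrative, not prescribed) give pool01 two controller replicas for HA while pool03 keeps the default single replica:

```yaml
# Hypothetical CR: 2 replicas for pool01 (HA)
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-ha
spec:
  ingress_controller_replicas_per_pool: 2
  pools:
    - name: pool01
---
# Hypothetical CR: default single replica for pool03
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-singleton
spec:
  pools:
    - name: pool03
```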

2). Apply the YurtIngress CR yaml file:
Assume the file name is yurtingress-test.yaml:

```bash
#kubectl apply -f yurtingress-test.yaml
yurtingress.apps.openyurt.io/yurtingress-test created
```

Then you can get the YurtIngress CR to check the status:

```bash
#kubectl get ying
NAME               REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
yurtingress-test   1                   2          0             3m13s
```

When the ingress controller is enabled successfully, a per-pool NodePort service is created to expose the ingress controller service:

```bash
#kubectl get svc -n ingress-nginx
ingress-nginx   pool01-ingress-nginx-controller   NodePort   192.167.107.123   <none>   80:32255/TCP,443:32275/TCP   53m
ingress-nginx   pool03-ingress-nginx-controller   NodePort   192.167.48.114    <none>   80:30531/TCP,443:30916/TCP   53m
```

Notes:

a). “ying” is the shortName of the YurtIngress resource.

b). When the “READYNUM” equals the number of pools you defined in the YurtIngress CR, the ingress feature is ready on all the pools in your spec.

c). If the “NOTREADYNUM” stays non-zero, you can check the YurtIngress CR for the status information. You can also check the corresponding deployments and pods to figure out why the ingress is not ready yet.

d). For every NodePool on which ingress is enabled successfully, it exposes a NodePort type service for users to access the nginx ingress controller.

e). When the ingress controllers are orchestrated to the specified NodePools, an “ingress-nginx” namespace will be created, and all the namespace related resources will be created under it.
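For example, to dig into a pool that stays not ready (note c), you can inspect the per-pool deployments and pods in the “ingress-nginx” namespace; this is a generic troubleshooting sketch, assuming the per-pool naming convention (e.g. pool01-ingress-nginx-controller) shown in the service listing above:

```bash
# List the per-pool ingress controller deployments and check their READY counts
#kubectl get deploy -n ingress-nginx

# Check which nodes the controller pods landed on and whether they are Running
#kubectl get pods -n ingress-nginx -o wide

# Inspect the YurtIngress status conditions for the failing pool
#kubectl describe yurtingress yurtingress-test
```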


2. Create and apply the ingress rule as in native K8S to access your desired services

When step 1 above is done, you have successfully deployed the nginx ingress controllers to the related NodePools, and the subsequent ingress user experience is fully consistent with native K8S.

Suppose your app workload is deployed to several NodePools and it exposes a global service, for example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pool01-deployment
  labels:
    app: echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo-app
          image: hashicorp/http-echo
          args:
            - "-text=echo from nodepool pool01"
          imagePullPolicy: IfNotPresent
      nodeSelector:
        apps.openyurt.io/nodepool: pool01
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pool03-deployment
  labels:
    app: echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo-app
          image: hashicorp/http-echo
          args:
            - "-text=echo from nodepool pool03"
          imagePullPolicy: IfNotPresent
      nodeSelector:
        apps.openyurt.io/nodepool: pool03
---
kind: Service
apiVersion: v1
metadata:
  name: echo-service
spec:
  selector:
    app: echo
  ports:
    - port: 5678
```

If you want to access the service provided by pool01:

1). Create the ingress rule yaml file:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-pool01
  annotations:
    kubernetes.io/ingress.class: pool01
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /echo
            backend:
              serviceName: echo-service
              servicePort: 5678
```

Notes:

a). The ingress class decides which NodePool provides the ingress capability, so you need to set the ingress class to your desired NodePool name.

b). The Ingress resource definition may differ across K8S versions, so you need to ensure the definition matches your cluster's K8S version.
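For example, on clusters where networking.k8s.io/v1 is the served Ingress version (K8S 1.19+), an equivalent rule might look like the sketch below; note that in v1 the kubernetes.io/ingress.class annotation is replaced by the ingressClassName field, and the rewrite annotation shown here is the ingress-nginx form:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-pool01
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: pool01
  rules:
    - http:
        paths:
          - path: /echo
            pathType: Prefix
            backend:
              service:
                name: echo-service
                port:
                  number: 5678
```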

2). Apply the ingress rule yaml file:
Assume the file name is ingress-myapp.yaml:

```bash
#kubectl apply -f ingress-myapp.yaml
ingress.extensions/ingress-myapp created
```

After all the steps above are done successfully, you can verify the edge ingress feature through the ingress controller NodePort service:

```bash
#curl xxx:32255/echo
```

“xxx” represents any node IP in NodePool pool01, and “32255” represents the NodePort which the pool01 nginx ingress controller service exposes. It should return "echo from nodepool pool01" every time.