Configuring ingress cluster traffic using an Ingress Controller

OKD provides methods for communicating from outside the cluster with services running in the cluster. The method described in this section uses an Ingress Controller.

Using Ingress Controllers and routes

The Ingress Operator manages Ingress Controllers and wildcard DNS.

Using an Ingress Controller is the most common way to allow external access to an OKD cluster.

An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI.
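For example, a passthrough route lets the Ingress Controller route encrypted TLS traffic by SNI without decrypting it. The following is a minimal sketch, not part of the procedures in this section; the route name, service name, and port are placeholders for a service that terminates TLS itself:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: secure-app            # placeholder route name
    spec:
      to:
        kind: Service
        name: secure-app          # placeholder service that terminates TLS itself
      port:
        targetPort: 8443          # placeholder TLS port exposed by the service
      tls:
        termination: passthrough  # the Ingress Controller routes on SNI without decrypting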

Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes.

The administrator can create a wildcard DNS entry and then set up an Ingress Controller. You can then work with the edge Ingress Controller without contacting the administrator.

By default, every Ingress Controller in the cluster can admit any route created in any project in the cluster.
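You can check which routes an Ingress Controller admits by inspecting its spec: if neither routeSelector nor namespaceSelector is set, it admits routes from all namespaces. For example, to view the default Ingress Controller:

    $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml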

The Ingress Controller:

  • Has two replicas by default, which means it should be running on two worker nodes.

  • Can be scaled up to have more replicas on more nodes, as shown in the example after this list.
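
For example, to scale the default Ingress Controller to three replicas, you can patch its IngressController resource; adjust the replica count and controller name for your environment:

    $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge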

The procedures in this section require prerequisites performed by the cluster administrator.

Prerequisites

Before starting the following procedures, the administrator must:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster.

  • Make sure there is at least one user with the cluster-admin role. To add this role to a user, run the following command:

    $ oc adm policy add-cluster-role-to-user cluster-admin username
  • Have an OKD cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out of scope for this topic.

Creating a project and service

If the project and service that you want to expose do not exist, first create the project, then the service.

If the project and service already exist, skip to the procedure on exposing the service to create a route.

Prerequisites

  • Install the oc CLI and log in as a cluster administrator.

Procedure

  1. Create a new project for your service by running the oc new-project command:

    $ oc new-project myproject
  2. Use the oc new-app command to create your service:

    $ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git
  3. To verify that the service was created, run the following command:

    $ oc get svc -n myproject

    Example output

    NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    nodejs-ex   ClusterIP   172.30.197.157   <none>        8080/TCP   70s

    By default, the new service does not have an external IP address.

Exposing the service by creating a route

You can expose the service as a route by using the oc expose command.

Procedure

To expose the service:

  1. Log in to OKD.

  2. Log in to the project where the service you want to expose is located:

    $ oc project myproject
  3. Run the oc expose service command to expose the route:

    $ oc expose service nodejs-ex

    Example output

    route.route.openshift.io/nodejs-ex exposed
  4. To verify that the service is exposed, you can use a tool, such as cURL, to make sure the service is accessible from outside the cluster.

    1. Use the oc get route command to find the route’s host name:

      $ oc get route

      Example output

      NAME        HOST/PORT                         PATH   SERVICES    PORT       TERMINATION   WILDCARD
      nodejs-ex   nodejs-ex-myproject.example.com          nodejs-ex   8080-tcp                 None
    2. Use cURL to check that the host responds to a GET request:

      $ curl --head nodejs-ex-myproject.example.com

      Example output

      HTTP/1.1 200 OK
      ...
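
By default, oc expose derives the route host from the route name, the project name, and the cluster ingress domain, as in the example output above. If you need a specific host, you can pass the --hostname flag when you expose the service in step 3 instead; the hostname shown here is only an example and must resolve to your routers:

    $ oc expose service nodejs-ex --hostname=nodejs.example.com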

Configuring Ingress Controller sharding by using route labels

Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

Procedure

  1. Edit the router-internal.yaml file:

    # cat router-internal.yaml
    apiVersion: v1
    items:
    - apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: sharded
        namespace: openshift-ingress-operator
      spec:
        domain: <apps-sharded.basedomain.example.net>
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/worker: ""
        routeSelector:
          matchLabels:
            type: sharded
      status: {}
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
  2. Apply the Ingress Controller router-internal.yaml file:

    # oc apply -f router-internal.yaml

    The Ingress Controller selects routes in any namespace that have the label type: sharded.
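
To have an existing route admitted by this Ingress Controller, you can add the matching label to it; the route and project names here are placeholders:

    $ oc label route <route_name> type=sharded -n <project_name>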

Configuring Ingress Controller sharding by using namespace labels

Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

If you deploy the Keepalived Ingress VIP, do not deploy a non-default Ingress Controller with value HostNetwork for the endpointPublishingStrategy parameter. Doing so might cause issues. Use value NodePortService instead of HostNetwork for endpointPublishingStrategy.
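
For example, the sharded Ingress Controller created in the following procedure could declare the publishing strategy explicitly. This is a minimal spec fragment under that assumption; the rest of the resource is unchanged:

    spec:
      endpointPublishingStrategy:
        type: NodePortService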

Procedure

  1. Edit the router-internal.yaml file:

    # cat router-internal.yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: sharded
        namespace: openshift-ingress-operator
      spec:
        domain: <apps-sharded.basedomain.example.net>
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/worker: ""
        namespaceSelector:
          matchLabels:
            type: sharded
      status: {}
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
  2. Apply the Ingress Controller router-internal.yaml file:

    # oc apply -f router-internal.yaml

    The Ingress Controller serves routes in any namespace that has the label type: sharded, as selected by the namespace selector.
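
To place an existing project behind this Ingress Controller, you can add the matching label to its namespace; the project name here is a placeholder:

    $ oc label namespace <project_name> type=sharded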

Creating a route for Ingress Controller sharding

A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs.

The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You are logged in as a project administrator.

  • You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.

  • You have configured the Ingress Controller for sharding.

Procedure

  1. Create a project called hello-openshift by running the following command:

    $ oc new-project hello-openshift
  2. Create a pod in the project by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
  3. Create a service called hello-openshift by running the following command:

    $ oc expose pod/hello-openshift
  4. Create a route definition called hello-openshift-route.yaml:

    YAML definition of the created route for sharding:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        type: sharded (1)
      name: hello-openshift-edge
      namespace: hello-openshift
    spec:
      subdomain: hello-openshift (2)
      tls:
        termination: edge
      to:
        kind: Service
        name: hello-openshift
    (1) Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded.
    (2) The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route uses the value of the host field and ignores the subdomain field.
  5. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command:

    $ oc -n hello-openshift create -f hello-openshift-route.yaml

Verification

  • Get the status of the route with the following command:

    $ oc -n hello-openshift get routes/hello-openshift-edge -o yaml

    The resulting Route resource should look similar to the following:

    Example output

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        type: sharded
      name: hello-openshift-edge
      namespace: hello-openshift
    spec:
      subdomain: hello-openshift
      tls:
        termination: edge
      to:
        kind: Service
        name: hello-openshift
    status:
      ingress:
      - host: hello-openshift.<apps-sharded.basedomain.example.net> (1)
        routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> (2)
        routerName: sharded (3)
    (1) The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net>.
    (2) The hostname of the Ingress Controller.
    (3) The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded.
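
If DNS for the sharded domain resolves from your workstation, you can also confirm that the route answers through the sharded Ingress Controller. This check assumes that <apps-sharded.basedomain.example.net> points at the sharded router; the -k flag skips verification of the router certificate:

    $ curl -k --head https://hello-openshift.<apps-sharded.basedomain.example.net>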
