Route configuration

Creating an HTTP-based route

A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port.

The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You are logged in as an administrator.

  • You have a web application that exposes a port and a TCP endpoint listening for traffic on the port.

Procedure

  1. Create a project called hello-openshift by running the following command:

    $ oc new-project hello-openshift
  2. Create a pod in the project by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
  3. Create a service called hello-openshift by running the following command:

    $ oc expose pod/hello-openshift
  4. Create an unsecured route to the hello-openshift application by running the following command:

    $ oc expose svc hello-openshift

Verification

  • To verify that the route resource is created, run the following command:

    $ oc get routes -o yaml <name of resource> (1)

    (1) In this example, the route is named hello-openshift.

Sample YAML definition of the created unsecured route:

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: hello-openshift
  spec:
    host: hello-openshift-hello-openshift.<Ingress_Domain> (1)
    port:
      targetPort: 8080 (2)
    to:
      kind: Service
      name: hello-openshift

(1) <Ingress_Domain> is the default ingress domain name. The ingresses.config/cluster object is created during the installation and cannot be changed. If you want to specify a different domain, you can specify an alternative cluster domain by using the appsDomain option.
(2) targetPort is the target port on pods that is selected by the service that this route points to.

To display your default ingress domain, run the following command:

  $ oc get ingresses.config/cluster -o jsonpath={.spec.domain}
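As the callouts above note, a route created with oc expose gets a host of the form <route_name>-<namespace>.<ingress_domain>. A minimal shell sketch of that composition, using an assumed placeholder domain (apps.example.com is not a real cluster value):

```shell
# Compose the default route hostname: <route_name>-<namespace>.<ingress_domain>.
# On a real cluster, read the domain with:
#   oc get ingresses.config/cluster -o jsonpath={.spec.domain}
route_name="hello-openshift"
namespace="hello-openshift"
ingress_domain="apps.example.com"   # placeholder value for illustration

host="${route_name}-${namespace}.${ingress_domain}"
echo "$host"                        # hello-openshift-hello-openshift.apps.example.com
```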

Creating a route for Ingress Controller sharding

A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs.

The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You are logged in as a project administrator.

  • You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.

  • You have configured the Ingress Controller for sharding.

Procedure

  1. Create a project called hello-openshift by running the following command:

    $ oc new-project hello-openshift
  2. Create a pod in the project by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
  3. Create a service called hello-openshift by running the following command:

    $ oc expose pod/hello-openshift
  4. Create a route definition called hello-openshift-route.yaml:

    YAML definition of the created route for sharding:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        type: sharded (1)
      name: hello-openshift-edge
      namespace: hello-openshift
    spec:
      subdomain: hello-openshift (2)
      tls:
        termination: edge
      to:
        kind: Service
        name: hello-openshift

    (1) Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded.
    (2) The route is exposed by using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, the route uses the value of the host field and ignores the subdomain field.
  5. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command:

    $ oc -n hello-openshift create -f hello-openshift-route.yaml

Verification

  • Get the status of the route with the following command:

    $ oc -n hello-openshift get routes/hello-openshift-edge -o yaml

    The resulting Route resource should look similar to the following:

    Example output

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        type: sharded
      name: hello-openshift-edge
      namespace: hello-openshift
    spec:
      subdomain: hello-openshift
      tls:
        termination: edge
      to:
        kind: Service
        name: hello-openshift
    status:
      ingress:
      - host: hello-openshift.<apps-sharded.basedomain.example.net> (1)
        routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> (2)
        routerName: sharded (3)

    (1) The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net>.
    (2) The hostname of the Ingress Controller.
    (3) The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded.

Configuring route timeouts

You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end.

Prerequisites

  • You need a deployed Ingress Controller on a running cluster.

Procedure

  1. Using the oc annotate command, add the timeout to the route:

    $ oc annotate route <route_name> \
        --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> (1)

    (1) Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d).

    The following example sets a timeout of two seconds on a route named myroute:

    $ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s

HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. HSTS is useful for speeding up interactions with websites.

When HSTS policy is enforced, HSTS adds a Strict Transport Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect.

Cluster administrators can configure HSTS to do the following:

  • Enable HSTS per-route

  • Disable HSTS per-route

  • Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains

HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes.

Enabling HTTP Strict Transport Security per-route

HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation.

Prerequisites

  • You are logged in to the cluster with a user with administrator privileges for the project.

  • You installed the oc CLI.

Procedure

  • To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command:

    $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;includeSubDomains;preload" (1)

    (1) In this example, the maximum age is set to 31536000 seconds, which is one year.

    In this example, the equal sign (=) is in quotes. This is required to properly execute the annotate command.

    Example route configured with an annotation

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      annotations:
        haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload (1) (2) (3)
    ...
    spec:
      host: def.abc.com
      tls:
        termination: "reencrypt"
      ...
      wildcardPolicy: "Subdomain"

    (1) Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0, it negates the policy.
    (2) Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host.
    (3) Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header.
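To make the annotation's directives concrete, here is a small sketch (not router code) that splits a Strict-Transport-Security value like the one above into its parts:

```shell
# Split an HSTS header value into its directives (illustration only).
hsts="max-age=31536000;includeSubDomains;preload"

# Extract the number of seconds after "max-age=".
max_age=$(printf '%s\n' "$hsts" | tr ';' '\n' | grep '^max-age=' | cut -d= -f2)
echo "max-age: ${max_age} seconds ($((max_age / 86400)) days)"

case "$hsts" in *includeSubDomains*) echo "policy covers subdomains" ;; esac
case "$hsts" in *preload*) echo "eligible for browser preload lists" ;; esac
```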

Disabling HTTP Strict Transport Security per-route

To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0.

Prerequisites

  • You are logged in to the cluster with a user with administrator privileges for the project.

  • You installed the oc CLI.

Procedure

  • To disable HSTS, set the max-age value in the route annotation to 0, by entering the following command:

    $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"

    Alternatively, you can set the annotation in the route's YAML definition:

    Example of disabling HSTS per-route

    metadata:
      annotations:
        haproxy.router.openshift.io/hsts_header: max-age=0
  • To disable HSTS for every route in a namespace, enter the following command:

    $ oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"

Verification

  1. To query the annotation for all routes, enter the following command:

    $ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

    Example output

    Name: routename HSTS: max-age=0

Enforcing HTTP Strict Transport Security per-domain

To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy.

If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation.

To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates.

You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations.

HSTS cannot be applied to insecure, or non-TLS routes, even if HSTS is requested for all routes globally.

Prerequisites

  • You are logged in to the cluster with a user with administrator privileges for the project.

  • You installed the oc CLI.

Procedure

  1. Edit the Ingress config file:

    $ oc edit ingresses.config.openshift.io/cluster

    Example HSTS policy

    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
    spec:
      domain: 'hello-openshift-default.apps.username.devcluster.openshift.com'
      requiredHSTSPolicies: (1)
      - domainPatterns: (2)
        - '*hello-openshift-default.apps.username.devcluster.openshift.com'
        - '*hello-openshift-default2.apps.username.devcluster.openshift.com'
        namespaceSelector: (3)
          matchLabels:
            myPolicy: strict
        maxAge: (4)
          smallestMaxAge: 1
          largestMaxAge: 31536000
        preloadPolicy: RequirePreload (5)
        includeSubDomainsPolicy: RequireIncludeSubDomains (6)
      - domainPatterns: (2)
        - 'abc.example.com'
        - '*xyz.example.com'
        namespaceSelector:
          matchLabels: {}
        maxAge: {}
        preloadPolicy: NoOpinion
        includeSubDomainsPolicy: RequireNoIncludeSubDomains

    (1) Required. requiredHSTSPolicies are validated in order, and the first matching domainPatterns applies.
    (2) Required. You must specify at least one domainPatterns hostname. Any number of domains can be listed. You can include multiple sections of enforcing options for different domainPatterns.
    (3) Optional. If you include namespaceSelector, it must match the labels of the project where the routes reside, to enforce the set HSTS policy on the routes. Routes that match only the namespaceSelector and not the domainPatterns are not validated.
    (4) Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. This policy setting allows a smallest and largest max-age to be enforced.
    • The largestMaxAge value must be between 0 and 2147483647. It can be left unspecified, which means no upper limit is enforced.

    • The smallestMaxAge value must be between 0 and 2147483647. Enter 0 to disable HSTS for troubleshooting, or enter 1 if you never want HSTS to be disabled. It can be left unspecified, which means no lower limit is enforced.

    (5) Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact with the site at least once to get the header. preload can be set with one of the following:
    • RequirePreload: preload is required by the RequiredHSTSPolicy.

    • RequireNoPreload: preload is forbidden by the RequiredHSTSPolicy.

    • NoOpinion: preload does not matter to the RequiredHSTSPolicy.

    (6) Optional. includeSubDomainsPolicy can be set with one of the following:
    • RequireIncludeSubDomains: includeSubDomains is required by the RequiredHSTSPolicy.

    • RequireNoIncludeSubDomains: includeSubDomains is forbidden by the RequiredHSTSPolicy.

    • NoOpinion: includeSubDomains does not matter to the RequiredHSTSPolicy.

  2. You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command.

    • To apply HSTS to all routes in the cluster, enter the oc annotate command. For example:

      $ oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000"
    • To apply HSTS to all routes in a particular namespace, enter the oc annotate command. For example:

      $ oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000"
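The maxAge bounds from the example policy can be pictured with a small sketch; this is not the operator's implementation, just the comparison a requiredHSTSPolicy applies to a route's max-age annotation:

```shell
# Bounds taken from the example policy: smallestMaxAge: 1, largestMaxAge: 31536000.
smallest_max_age=1
largest_max_age=31536000

check_max_age() {
  # A route is compliant only if its max-age falls inside the policy bounds.
  if [ "$1" -ge "$smallest_max_age" ] && [ "$1" -le "$largest_max_age" ]; then
    echo compliant
  else
    echo rejected
  fi
}

check_max_age 31536000   # at the upper bound: compliant
check_max_age 0          # below smallestMaxAge: rejected
```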

Verification

You can review the HSTS policy you configured. For example:

  • To review the maxAge set for required HSTS policies, enter the following command:

    $ oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}'
  • To review the HSTS annotations on all routes, enter the following command:

    $ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

    Example output

    Name: <routename> HSTS: max-age=31536000;preload;includeSubDomains

Throughput issue troubleshooting methods

Sometimes applications deployed by using OKD can cause network throughput issues, such as unusually high latency between specific services.

If pod logs do not reveal any cause of the problem, use the following methods to analyze performance issues:

  • Use a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node.

    For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OKD if a node interface is overloaded with traffic from other pods, storage devices, or the data plane.

    $ tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip_1> and host <podip_2> (1)

    (1) podip_1 and podip_2 are the IP addresses of the pods. Run the oc get pod <pod_name> -o wide command to get the IP address of a pod.

    The tcpdump command generates a file at /tmp/dump.pcap containing all traffic between these two pods. You can run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with:

    $ tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789
  • Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Locate any bottlenecks by running the tool from the pods first, and then running it from the nodes.

  • In some cases, the cluster may mark the node with the router pod as unhealthy due to latency issues. Use worker latency profiles to adjust the frequency that the cluster waits for a status update from the node before taking action.

  • If your cluster has designated lower-latency and higher-latency nodes, configure the spec.nodePlacement field in the Ingress Controller to control the placement of the router pod.

Using cookies to keep route statefulness

OKD provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear.

OKD can use cookies to configure session persistence. The Ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the next request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod.

Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend.

If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod.

You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. Deleting the cookie forces the next request to choose an endpoint again, so if a server is overloaded, it can shed that client's requests and have them redistributed.

Procedure

  1. Annotate the route with the specified cookie name:

    $ oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>"

    where:

    <route_name>

    Specifies the name of the route.

    <cookie_name>

    Specifies the name for the cookie.

    For example, to annotate the route my_route with the cookie name my_cookie:

    $ oc annotate route my_route router.openshift.io/cookie_name="my_cookie"
  2. Capture the route hostname in a variable:

    $ ROUTE_NAME=$(oc get route <route_name> -o jsonpath='{.spec.host}')

    where:

    <route_name>

    Specifies the name of the route.

  3. Save the cookie, and then access the route:

    $ curl $ROUTE_NAME -k -c /tmp/cookie_jar

    Use the cookie saved by the previous command when connecting to the route:

    $ curl $ROUTE_NAME -k -b /tmp/cookie_jar
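The cookie jar that curl writes uses the Netscape format; this sketch fabricates one jar entry (the cookie value is invented) to show where the cookie name set by the annotation ends up:

```shell
# Fabricated jar entry for illustration; fields are: domain, include-subdomains
# flag, path, secure flag, expiry, cookie name, cookie value.
cat > /tmp/cookie_jar.example <<'EOF'
# Netscape HTTP Cookie File
www.example.com FALSE / FALSE 0 my_cookie 0123456789abcdef
EOF

# Skip comment lines and print field 6, the cookie name.
cookie_name=$(awk '!/^#/ && NF {print $6}' /tmp/cookie_jar.example)
echo "session pinned by cookie: $cookie_name"
```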

Path-based routes

Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. However, this depends on the router implementation.

The following table shows example routes and their accessibility:

Table 1. Route availability

Route                                     | When compared to     | Accessible
------------------------------------------|----------------------|-----------
www.example.com/test                      | www.example.com/test | Yes
                                          | www.example.com      | No
www.example.com/test and www.example.com  | www.example.com/test | Yes
                                          | www.example.com      | Yes
www.example.com                           | www.example.com/test | Yes (matched by the host, not the route)
                                          | www.example.com      | Yes

An unsecured route with a path

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: route-unsecured
  spec:
    host: www.example.com
    path: "/test" (1)
    to:
      kind: Service
      name: service-name

(1) The path is the only added attribute for a path-based route.

Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request.
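The most-specific-path rule in the table above can be sketched as a longest-prefix match. This is a simplification, not the router's HAProxy configuration; real routers may also account for path-segment boundaries:

```shell
# Pick the longest route path that is a prefix of the request path.
# Route paths here mirror the example table: "/" stands for the host-only route.
match_route() {
  local request=$1 best=""
  for p in "/" "/test"; do
    case "$request" in
      "$p"*) if [ ${#p} -gt ${#best} ]; then best=$p; fi ;;
    esac
  done
  echo "$best"
}

match_route "/test/index.html"   # served by the /test route
match_route "/about"             # falls through to the host-only route
```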

Route-specific annotations

The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route.

To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message.

Table 2. Route annotations
Variable | Description | Environment variable used as default

haproxy.router.openshift.io/balance

Sets the load-balancing algorithm. Available options are random, source, roundrobin, and leastconn. The default value is random.

ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM.

haproxy.router.openshift.io/disable_cookies

Disables the use of cookies to track related connections. If set to "true" or "TRUE", the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request.

router.openshift.io/cookie_name

Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route.

haproxy.router.openshift.io/pod-concurrent-connections

Sets the maximum number of connections that are allowed to a backing pod from a router.
Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit.

haproxy.router.openshift.io/rate-limit-connections

Setting "true" or "TRUE" enables rate limiting functionality which is implemented through stick-tables on the specific backend per route.
Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks.

haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp

Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value.
Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks.

haproxy.router.openshift.io/rate-limit-connections.rate-http

Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value.
Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks.

haproxy.router.openshift.io/rate-limit-connections.rate-tcp

Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value.
Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks.

haproxy.router.openshift.io/timeout

Sets a server-side timeout for the route. (TimeUnits)

ROUTER_DEFAULT_SERVER_TIMEOUT

haproxy.router.openshift.io/timeout-tunnel

This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set.

ROUTER_DEFAULT_TUNNEL_TIMEOUT

ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after

You can set either an IngressController or the ingress config. This annotation redeploys the router and configures HAProxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop.

ROUTER_HARD_STOP_AFTER

router.openshift.io/haproxy.health.check.interval

Sets the interval for the back-end health checks. (TimeUnits)

ROUTER_BACKEND_CHECK_INTERVAL

haproxy.router.openshift.io/ip_whitelist

Sets a whitelist for the route. The whitelist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the whitelist are dropped.

The maximum number of IP addresses and CIDR ranges allowed in a whitelist is 61.

haproxy.router.openshift.io/hsts_header

Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route.

haproxy.router.openshift.io/log-send-hostname

Sets the hostname field in the Syslog header. Uses the hostname of the system. log-send-hostname is enabled by default if any Ingress API logging method, such as sidecar or Syslog facility, is enabled for the router.

haproxy.router.openshift.io/rewrite-target

Sets the rewrite path of the request on the backend.

router.openshift.io/cookie-same-site

Sets a value to restrict cookies. The values are:

Lax: cookies are transferred between the visited site and third-party sites.

Strict: cookies are restricted to the visited site.

None: cookies are not restricted to the visited site.

This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation.

haproxy.router.openshift.io/set-forwarded-headers

Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are:

append: appends the header, preserving any existing header. This is the default value.

replace: sets the header, removing any existing header.

never: never sets the header, but preserves any existing header.

if-none: sets the header if it is not already set.

ROUTER_SET_FORWARDED_HEADERS

Environment variables cannot be edited.

Router timeout variables

TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days).

The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d).
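Before annotating a route, you can check a candidate value against this pattern; a small sketch with grep -E:

```shell
# Validate a TimeUnits value such as 5500ms against the documented pattern.
is_valid_timeunit() {
  printf '%s\n' "$1" | grep -Eq '^[1-9][0-9]*(us|ms|s|m|h|d)$'
}

is_valid_timeunit 5500ms && echo "5500ms is valid"
is_valid_timeunit 2x     || echo "2x is rejected"
is_valid_timeunit 0s     || echo "0s is rejected (leading digit must be 1-9)"
```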

Variable | Default | Description

ROUTER_BACKEND_CHECK_INTERVAL

5000ms

Length of time between subsequent liveness checks on back ends.

ROUTER_CLIENT_FIN_TIMEOUT

1s

Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router.

ROUTER_DEFAULT_CLIENT_TIMEOUT

30s

Length of time that a client has to acknowledge or send data.

ROUTER_DEFAULT_CONNECT_TIMEOUT

5s

The maximum connection time.

ROUTER_DEFAULT_SERVER_FIN_TIMEOUT

1s

Controls the TCP FIN timeout from the router to the pod backing the route.

ROUTER_DEFAULT_SERVER_TIMEOUT

30s

Length of time that a server has to acknowledge or send data.

ROUTER_DEFAULT_TUNNEL_TIMEOUT

1h

Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads.

ROUTER_SLOWLORIS_HTTP_KEEPALIVE

300s

Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value.

Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive. It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay, which is set to 5s. In this case, the overall timeout would be 300s plus 5s.

ROUTER_SLOWLORIS_TIMEOUT

10s

Length of time the transmission of an HTTP request can take.

RELOAD_INTERVAL

5s

Allows the minimum frequency for the router to reload and accept new changes.

ROUTER_METRICS_HAPROXY_TIMEOUT

5s

Timeout for the gathering of HAProxy metrics.

A route setting custom timeout

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    annotations:
      haproxy.router.openshift.io/timeout: 5500ms (1)
  ...

(1) Specifies the new timeout with HAProxy supported units (us, ms, s, m, h, d). If the unit is not provided, ms is the default.

Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route.

A route that allows only one specific IP address

  metadata:
    annotations:
      haproxy.router.openshift.io/ip_whitelist: 192.168.1.10

A route that allows several IP addresses

  metadata:
    annotations:
      haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12

A route that allows an IP address CIDR network

  metadata:
    annotations:
      haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24

A route that allows both an IP address and IP address CIDR networks

  metadata:
    annotations:
      haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8

A route specifying a rewrite target

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    annotations:
      haproxy.router.openshift.io/rewrite-target: / (1)
  ...

(1) Sets / as rewrite path of the request on the backend.

Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation.

The following table provides examples of the path rewriting behavior for various combinations of spec.path, request path, and rewrite target.

Table 3. rewrite-target examples:

| Route.spec.path | Request path | Rewrite target | Forwarded request path |
| --- | --- | --- | --- |
| /foo | /foo | / | / |
| /foo | /foo/ | / | / |
| /foo | /foo/bar | / | /bar |
| /foo | /foo/bar/ | / | /bar/ |
| /foo | /foo | /bar | /bar |
| /foo | /foo/ | /bar | /bar/ |
| /foo | /foo/bar | /baz | /baz/bar |
| /foo | /foo/bar/ | /baz | /baz/bar/ |
| /foo/ | /foo | / | N/A (request path does not match route path) |
| /foo/ | /foo/ | / | / |
| /foo/ | /foo/bar | / | /bar |
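The pattern in the table (strip the matching spec.path prefix, prepend the rewrite target, and collapse the doubled slash at the join) can be sketched as a small shell function. This is an illustration of the table's behavior, not code from the router itself:

```shell
#!/bin/sh
# Approximate the rewrite-target behavior shown in the table above.
# Usage: rewrite <route_path> <request_path> <rewrite_target>
rewrite() {
  route_path=$1; request_path=$2; target=$3
  case "$request_path" in
    "$route_path"*) ;;                        # request path must start with the route path
    *) echo "N/A"; return ;;
  esac
  rest=${request_path#"$route_path"}          # part of the path after the matched prefix
  out="$target$rest"
  out=$(printf '%s' "$out" | sed 's|//|/|')   # collapse the doubled slash at the join
  echo "$out"
}

rewrite /foo /foo/bar /      # -> /bar
rewrite /foo /foo/bar/ /baz  # -> /baz/bar/
rewrite /foo/ /foo /         # -> N/A (request path does not match route path)
```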

Configuring the route admission policy

Administrators and application developers can run applications in multiple namespaces with the same domain name. This is useful for organizations where multiple teams develop microservices that are exposed on the same hostname.

Allowing claims across namespaces should only be enabled for clusters with trust between namespaces; otherwise, a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.

Prerequisites

  • Cluster administrator privileges.

Procedure

  • Edit the .spec.routeAdmission field of the IngressController resource by running the following command:

        $ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge

    Sample Ingress Controller configuration

        spec:
          routeAdmission:
            namespaceOwnership: InterNamespaceAllowed
        ...

    Alternatively, you can apply the following YAML to configure the route admission policy:

        apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          name: default
          namespace: openshift-ingress-operator
        spec:
          routeAdmission:
            namespaceOwnership: InterNamespaceAllowed

Creating a route through an Ingress object

Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OKD automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted.

Procedure

  1. Define an Ingress object in the OKD console or by entering the oc create command:

    YAML Definition of an Ingress

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: frontend
          annotations:
            route.openshift.io/termination: "reencrypt" (1)
            route.openshift.io/destination-ca-certificate-secret: secret-ca-cert (3)
        spec:
          rules:
          - host: www.example.com (2)
            http:
              paths:
              - backend:
                  service:
                    name: frontend
                    port:
                      number: 443
                path: /
                pathType: Prefix
          tls:
          - hosts:
            - www.example.com
            secretName: example-com-tls-certificate

    1 The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route, because Ingress has no field for this. The accepted values are edge, passthrough, and reencrypt. All other values are silently ignored. When the annotation value is unset, edge is the default. The TLS certificate details must be defined in the template file to implement the default edge route.

    2 When working with an Ingress object, you must specify an explicit hostname, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com, to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname.

       If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec:

        spec:
          rules:
          - host: www.example.com
            http:
              paths:
              - path: ''
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: frontend
                    port:
                      number: 443

       Then apply the Ingress object:

        $ oc apply -f ingress.yaml

    3 The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate. The annotation references a Kubernetes secret, secret-ca-cert, which is inserted into the generated route.

       To specify a route object with a destination CA from an Ingress object, you must create a kubernetes.io/tls or Opaque type secret with a certificate in PEM-encoded format in the data.tls.crt specifier of the secret.
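A secret satisfying that requirement could look like the following sketch; the name secret-ca-cert matches the annotation used earlier, and the certificate value is a placeholder:

```yaml
# Hypothetical secret holding the destination CA certificate under data.tls.crt.
apiVersion: v1
kind: Secret
metadata:
  name: secret-ca-cert
type: Opaque
data:
  tls.crt: <base64-encoded_PEM_CA_certificate>   # base64 of the PEM file contents
```

You could equally create it from a PEM file with oc create secret generic secret-ca-cert --from-file=tls.crt=<ca_certificate_file>, assuming the CA certificate is available locally.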

  2. List your routes:

        $ oc get routes

    The result includes an autogenerated route whose name starts with frontend-:

        NAME             HOST/PORT         PATH   SERVICES   PORT   TERMINATION          WILDCARD
        frontend-gnztq   www.example.com          frontend   443    reencrypt/Redirect   None

    If you inspect this route, it looks like this:

    YAML Definition of an autogenerated route

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          name: frontend-gnztq
          ownerReferences:
          - apiVersion: networking.k8s.io/v1
            controller: true
            kind: Ingress
            name: frontend
            uid: 4e6c59cc-704d-4f44-b390-617d879033b6
        spec:
          host: www.example.com
          path: /
          port:
            targetPort: https
          tls:
            certificate: |
              -----BEGIN CERTIFICATE-----
              [...]
              -----END CERTIFICATE-----
            insecureEdgeTerminationPolicy: Redirect
            key: |
              -----BEGIN RSA PRIVATE KEY-----
              [...]
              -----END RSA PRIVATE KEY-----
            termination: reencrypt
            destinationCACertificate: |
              -----BEGIN CERTIFICATE-----
              [...]
              -----END CERTIFICATE-----
          to:
            kind: Service
            name: frontend

Creating a route using the default certificate through an Ingress object

If you create an Ingress object without specifying any TLS configuration, OKD generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows.

Prerequisites

  • You have a service that you want to expose.

  • You have access to the OpenShift CLI (oc).

Procedure

  1. Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml:

    YAML definition of an Ingress object

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: frontend
          ...
        spec:
          rules:
          ...
          tls:
          - {} (1)

    1 Use this exact syntax to specify TLS without specifying a custom certificate.
  2. Create the Ingress object by running the following command:

        $ oc create -f example-ingress.yaml

Verification

  • Verify that OKD has created the expected route for the Ingress object by running the following command:

        $ oc get routes -o yaml

    Example output

        apiVersion: v1
        items:
        - apiVersion: route.openshift.io/v1
          kind: Route
          metadata:
            name: frontend-j9sdd (1)
            ...
          spec:
            ...
            tls: (2)
              insecureEdgeTerminationPolicy: Redirect
              termination: edge (3)
            ...

    1 The name of the route includes the name of the Ingress object followed by a random suffix.
    2 In order to use the default certificate, the route should not specify spec.certificate.
    3 The route should specify the edge termination policy.

Creating a route using the destination CA certificate in the Ingress annotation

The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate.

Prerequisites

  • You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.

  • You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.

  • You must have a separate destination CA certificate in a PEM-encoded file.

  • You must have a service that you want to expose.

Procedure

  1. Add the route.openshift.io/destination-ca-certificate-secret annotation to the Ingress object:

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: frontend
          annotations:
            route.openshift.io/termination: "reencrypt"
            route.openshift.io/destination-ca-certificate-secret: secret-ca-cert (1)
        ...

    1 The annotation references a Kubernetes secret.

  2. The secret referenced in this annotation is inserted into the generated route.

    Example output

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          name: frontend
          annotations:
            route.openshift.io/termination: reencrypt
            route.openshift.io/destination-ca-certificate-secret: secret-ca-cert
        spec:
          ...
          tls:
            insecureEdgeTerminationPolicy: Redirect
            termination: reencrypt
            destinationCACertificate: |
              -----BEGIN CERTIFICATE-----
              [...]
              -----END CERTIFICATE-----
          ...

Configuring the OKD Ingress Controller for dual-stack networking

If your OKD cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OKD routes.

The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services.

Prerequisites

  • You deployed an OKD cluster on bare metal.

  • You installed the OpenShift CLI (oc).

Procedure

  1. To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. For example:

    Sample service YAML file

        apiVersion: v1
        kind: Service
        metadata:
          creationTimestamp: yyyy-mm-ddT00:00:00Z
          labels:
            name: <service_name>
          manager: kubectl-create
          operation: Update
          time: yyyy-mm-ddT00:00:00Z
          name: <service_name>
          namespace: <namespace_name>
          resourceVersion: "<resource_version_number>"
          selfLink: "/api/v1/namespaces/<namespace_name>/services/<service_name>"
          uid: <uid_number>
        spec:
          clusterIP: 172.30.0.0/16
          clusterIPs: (1)
          - 172.30.0.0/16
          - <second_IP_address>
          ipFamilies: (2)
          - IPv4
          - IPv6
          ipFamilyPolicy: RequireDualStack (3)
          ports:
          - port: 8080
            protocol: TCP
            targetPort: 8080
          selector:
            name: <namespace_name>
          sessionAffinity: None
          type: ClusterIP
        status:
          loadBalancer: {}

    1 In a dual-stack instance, there are two different clusterIPs provided.
    2 For a single-stack instance, enter IPv4 or IPv6. For a dual-stack instance, enter both IPv4 and IPv6.
    3 For a single-stack instance, enter SingleStack. For a dual-stack instance, enter RequireDualStack.

    These resources generate corresponding endpoints. The Ingress Controller now watches endpointslices.

  2. To view endpoints, enter the following command:

        $ oc get endpoints

  3. To view endpointslices, enter the following command:

        $ oc get endpointslices

Additional resources