Ingress Operator in OKD

OKD Ingress Operator

When you create your OKD cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OKD cluster services.

The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OKD Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define the endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints.

The Ingress configuration asset

The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml.

YAML Definition of the Ingress resource

  apiVersion: config.openshift.io/v1
  kind: Ingress
  metadata:
    name: cluster
  spec:
    domain: apps.openshiftdemos.com

The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows:

  • The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.

  • The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host.

Ingress Controller configuration parameters

The ingresscontrollers.operator.openshift.io resource offers the following configuration parameters.

Parameter | Description

domain

domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features:

  • For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy.

  • When using a generated default certificate, the certificate is valid for domain and its subdomains. See defaultCertificate.

  • The value is published to individual Route statuses so that users know where to target external DNS records.

The domain value must be unique among all Ingress Controllers and cannot be updated.

If empty, the default value is ingress.config.openshift.io/cluster .spec.domain.

replicas

replicas is the desired number of Ingress Controller replicas. If not set, the default value is 2.

endpointPublishingStrategy

endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems.

On GCP, AWS, and Azure you can configure the following endpointPublishingStrategy fields:

  • loadBalancer.scope

  • loadBalancer.allowedSourceRanges

If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform:

  • Amazon Web Services (AWS): LoadBalancerService (with External scope)

  • Azure: LoadBalancerService (with External scope)

  • Google Cloud Platform (GCP): LoadBalancerService (with External scope)

  • Bare metal: NodePortService

  • Other: HostNetwork

    HostNetwork has a hostNetwork field with the following default values for the optional binding ports: httpPort: 80, httpsPort: 443, and statsPort: 1936. With the binding ports, you can deploy multiple Ingress Controllers on the same node for the HostNetwork strategy.

    Example
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: internal
      namespace: openshift-ingress-operator
    spec:
      domain: example.com
      endpointPublishingStrategy:
        type: HostNetwork
        hostNetwork:
          httpPort: 80
          httpsPort: 443
          statsPort: 1936

    On Red Hat OpenStack Platform (RHOSP), the LoadBalancerService endpoint publishing strategy is only supported if a cloud provider is configured to create health monitors. For RHOSP 16.1 and 16.2, this strategy is only possible if you use the Amphora Octavia provider.

    For more information, see the “Setting cloud provider options” section of the RHOSP installation documentation.

For most platforms, the endpointPublishingStrategy value can be updated. On GCP, you can configure the following endpointPublishingStrategy fields:

  • loadBalancer.scope

loadBalancer.providerParameters.gcp.clientAccess

  • hostNetwork.protocol

  • nodePort.protocol

defaultCertificate

The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used.

The secret must contain the following keys and data:

  • tls.crt: certificate file contents

  • tls.key: key file contents

If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains, and the generated certificate’s CA is automatically integrated with the cluster’s trust store.

The in-use certificate, whether generated or user-specified, is automatically integrated with the OKD built-in OAuth server.

namespaceSelector

namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards.

routeSelector

routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards.
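
For example, the following is a minimal sketch of a sharded Ingress Controller that services only routes carrying a hypothetical type: sharded label; the controller name and domain are illustrative:

  apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: sharded                  # illustrative name
    namespace: openshift-ingress-operator
  spec:
    domain: sharded.example.com    # hypothetical shard domain
    routeSelector:
      matchLabels:
        type: sharded              # only routes with this label are serviced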

nodePlacement

nodePlacement enables explicit control over the scheduling of the Ingress Controller.

If not set, the default values are used.

The nodePlacement parameter includes two parts, nodeSelector and tolerations. For example:

  nodePlacement:
    nodeSelector:
      matchLabels:
        kubernetes.io/os: linux
    tolerations:
    - effect: NoSchedule
      operator: Exists

tlsSecurityProfile

tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers.

If not set, the default value is based on the apiservers.config.openshift.io/cluster resource.

When using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress Controller, resulting in a rollout.

The minimum TLS version for Ingress Controllers is 1.1, and the maximum TLS version is 1.3.

Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status.

The Ingress Operator converts the TLS 1.0 of an Old or Custom profile to 1.1.

clientTLS

clientTLS authenticates client access to the cluster and services; as a result, mutual TLS authentication is enabled. If not set, then client TLS is not enabled.

clientTLS has the required subfields, spec.clientTLS.clientCertificatePolicy and spec.clientTLS.clientCA.

The clientCertificatePolicy subfield accepts one of two values: Required or Optional. The clientCA subfield specifies a config map that is in the openshift-config namespace. The config map should contain a CA certificate bundle.

The optional allowedSubjectPatterns subfield specifies a list of regular expressions, which are matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. At least one pattern must match a client certificate’s distinguished name; otherwise, the Ingress Controller rejects the certificate and denies the connection. If not specified, the Ingress Controller does not reject certificates based on the distinguished name.

routeAdmission

routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces.

namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict.

  • Strict: does not allow routes to claim the same hostname across namespaces.

  • InterNamespaceAllowed: allows routes to claim different paths of the same hostname across namespaces.

wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller.

  • WildcardsAllowed: Indicates routes with any wildcard policy are admitted by the Ingress Controller.

  • WildcardsDisallowed: Indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting.

IngressControllerLogging

logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled.

  • access describes how client requests are logged. If this field is empty, access logging is disabled.

    • destination describes a destination for log messages.

      • type is the type of destination for logs:

        • Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs, on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity.

        • Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance.

      • container describes parameters for the Container logging destination type. Currently there are no parameters for container logging, so this field must be empty.

      • syslog describes parameters for the Syslog logging destination type:

        • address is the IP address of the syslog endpoint that receives log messages.

        • port is the UDP port number of the syslog endpoint that receives log messages.

        • maxLength is the maximum length of the syslog message. It must be between 480 and 4096 bytes. If this field is empty, the maximum length is set to the default value of 1024 bytes.

  • facility specifies the syslog facility of log messages. If this field is empty, the facility is local1. Otherwise, it must specify a valid syslog facility: kern, user, mail, daemon, auth, syslog, lpr, news, uucp, cron, auth2, ftp, ntp, audit, alert, cron2, local0, local1, local2, local3, local4, local5, local6, or local7.

    • httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation’s default HTTP log format. For HAProxy’s default HTTP log format, see the HAProxy documentation.

httpHeaders

httpHeaders defines the policy for HTTP headers.

By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders, you specify when and how the Ingress Controller sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers.

By default, the policy is set to Append.

  • Append specifies that the Ingress Controller appends the headers, preserving any existing headers.

  • Replace specifies that the Ingress Controller sets the headers, removing any existing headers.

  • IfNone specifies that the Ingress Controller sets the headers if they are not already set.

  • Never specifies that the Ingress Controller never sets the headers, preserving any existing headers.

By setting headerNameCaseAdjustments, you can specify case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying X-Forwarded-For indicates that the x-forwarded-for HTTP header should be adjusted to have the specified capitalization.

These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1.

For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted.
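
As an illustration, the following sketch configures the default Ingress Controller to adjust the host request header to Host on annotated HTTP/1 routes; the field path is spec.httpHeaders.headerNameCaseAdjustments:

  apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: default
    namespace: openshift-ingress-operator
  spec:
    httpHeaders:
      headerNameCaseAdjustments:
      - Host   # the host header is rewritten as Host for HTTP/1 requests

For the adjustment to apply to a route’s request headers, the route also needs the annotation described above, for example: oc annotate routes/<route_name> haproxy.router.openshift.io/h1-adjust-case=true.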

httpCompression

httpCompression defines the policy for HTTP traffic compression.

  • mimeTypes defines a list of MIME types to which compression should be applied, for example text/css; charset=utf-8, text/html, text/*, image/svg+xml, application/octet-stream, or X-custom/customsub, using the format pattern type/subtype[;attribute=value]. The types are: application, image, message, multipart, text, video, or a custom type prefaced by X-. To see the full notation for MIME types and subtypes, see RFC 1341. An example sketch follows this list.
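
As a sketch, a compression policy for a few common MIME types (the values are illustrative) looks like this:

  apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: default
    namespace: openshift-ingress-operator
  spec:
    httpCompression:
      mimeTypes:                    # illustrative selection of types
      - "text/html"
      - "text/css; charset=utf-8"
      - "application/json"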

httpErrorCodePages

httpErrorCodePages specifies custom HTTP error code response pages. By default, an IngressController uses error pages built into the IngressController image.
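
As a sketch, assuming you have custom error pages in local files named error-page-503.http and error-page-404.http, you could publish them in a config map in the openshift-config namespace and reference it from the Ingress Controller; the config map name my-custom-error-code-pages is an arbitrary example:

  # assumes local files error-page-503.http and error-page-404.http
  $ oc -n openshift-config create configmap my-custom-error-code-pages \
      --from-file=error-page-503.http \
      --from-file=error-page-404.http
  $ oc patch -n openshift-ingress-operator ingresscontroller/default --type=merge \
      -p '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}'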

httpCaptureCookies

httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, the access logs do not capture the cookies.

For any cookie that you want to capture, the following parameters must be in your IngressController configuration:

  • name specifies the name of the cookie.

  • maxLength specifies the maximum length of the cookie.

  • matchType specifies if the field name of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. The matchType field uses the Exact and Prefix parameters.

For example:

  httpCaptureCookies:
  - matchType: Exact
    maxLength: 128
    name: MYCOOKIE

httpCaptureHeaders

httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs. If the httpCaptureHeaders field is empty, the access logs do not capture the headers.

httpCaptureHeaders contains two lists of headers to capture in the access logs. The two lists of header fields are request and response. In both lists, the name field must specify the header name and the maxLength field must specify the maximum length of the header. For example:

  httpCaptureHeaders:
    request:
    - maxLength: 256
      name: Connection
    - maxLength: 128
      name: User-Agent
    response:
    - maxLength: 256
      name: Content-Type
    - maxLength: 256
      name: Content-Length

tuningOptions

tuningOptions specifies options for tuning the performance of Ingress Controller pods.

  • clientFinTimeout specifies how long a connection is held open while waiting for the client response to the server closing the connection. The default timeout is 1s.

  • clientTimeout specifies how long a connection is held open while waiting for a client response. The default timeout is 30s.

  • headerBufferBytes specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes. Setting this field is not recommended because headerBufferBytes values that are too small can break the Ingress Controller, and headerBufferBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary.

  • headerBufferMaxRewriteBytes specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096. headerBufferBytes must be greater than headerBufferMaxRewriteBytes for incoming HTTP requests. If not set, the default value is 8192 bytes. Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the Ingress Controller, and headerBufferMaxRewriteBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary.

  • healthCheckInterval specifies how long the router waits between health checks. The default is 5s.

  • serverFinTimeout specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. The default timeout is 1s.

  • serverTimeout specifies how long a connection is held open while waiting for a server response. The default timeout is 30s.

  • threadCount specifies the number of threads to create per HAProxy process. Creating more threads allows each Ingress Controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the Ingress Controller uses the default value of 4 threads. The default value can change in future releases. Setting this field is not recommended because increasing the number of HAProxy threads allows Ingress Controller pods to use more CPU time under load, which can prevent other pods from receiving the CPU resources they need to perform. Reducing the number of threads can cause the Ingress Controller to perform poorly.

  • tlsInspectDelay specifies how long the router can hold data to find a matching route. Setting this value too short can cause the router to fall back to the default certificate for edge-terminated, reencrypted, or passthrough routes, even when using a better matched certificate. The default inspect delay is 5s.

  • tunnelTimeout specifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. The default timeout is 1h.

  • maxConnections specifies the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each Ingress Controller pod to handle more connections at the cost of additional system resources. Permitted values are 0, -1, any value in the range 2000 to 2000000, or an empty field.

    • If this field is left empty or has the value 0, the Ingress Controller will use the default value of 50000. This value is subject to change in future releases.

    • If the field has the value of -1, then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. This process results in a large computed value that will incur significant memory usage compared to the current default value of 50000.

    • If the field has a value that is greater than the current operating system limit, the HAProxy process will not start.

    • If you choose a discrete value and the router pod is migrated to a new node, it is possible the new node does not have an identical ulimit configured. In such cases, the pod fails to start.

    • If you have nodes with different ulimits configured, and you choose a discrete value, it is recommended to use the value of -1 for this field so that the maximum number of connections is calculated at runtime.
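
As a sketch of how several of these tuning options combine, the following patch uses illustrative values; -1 lets HAProxy compute maxConnections from the container ulimits, as described above:

    # values are illustrative, not recommendations
    $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
        -p '{"spec":{"tuningOptions":{"clientTimeout":"45s","tunnelTimeout":"30m","maxConnections":-1}}}'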

logEmptyRequests

logEmptyRequests specifies whether connections for which no request is received are logged. These empty requests typically come from load balancer health probes or web browser speculative connections (preconnect), and logging them can be undesirable. However, empty requests can also be caused by network errors, in which case logging them can be useful for diagnosing the errors, or by port scans, in which case logging them can aid in detecting intrusion attempts. Allowed values for this field are Log and Ignore. The default value is Log.

The LoggingPolicy type accepts one of two values:

  • Log: Setting this value to Log indicates that an event should be logged.

  • Ignore: Setting this value to Ignore sets the dontlognull option in the HAProxy configuration.
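
For example, a minimal sketch that keeps container access logging but ignores empty requests sets spec.logging.access.logEmptyRequests:

  spec:
    logging:
      access:
        destination:
          type: Container
        logEmptyRequests: Ignore   # do not log connections with no request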

HTTPEmptyRequestsPolicy

HTTPEmptyRequestsPolicy describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore. The default value is Respond.

The HTTPEmptyRequestsPolicy type accepts one of two values:

  • Respond: If the field is set to Respond, the Ingress Controller sends an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics.

  • Ignore: Setting this option to Ignore adds the http-ignore-probes parameter to the HAProxy configuration. If the field is set to Ignore, the Ingress Controller closes the connection without sending a response, logging the connection, or incrementing metrics.

These connections come from load balancer health probes or web browser speculative connections (preconnect) and can be safely ignored. However, these requests can be caused by network errors, so setting this field to Ignore can impede detection and diagnosis of problems. These requests can be caused by port scans, in which case logging empty requests can aid in detecting intrusion attempts.
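
For example, a minimal sketch that silently drops such connections sets the field at the top level of the IngressController spec:

  spec:
    httpEmptyRequestsPolicy: Ignore   # close timed-out empty connections without responding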

All parameters are optional.

Ingress Controller TLS security profiles

TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server.

Understanding TLS security profiles

You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OKD components. The OKD TLS security profiles are based on Mozilla recommended configurations.

You can specify one of the following TLS security profiles for each component:

Table 1. TLS security profiles
Profile | Description

Old

This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration.

The Old profile requires a minimum TLS version of 1.0.

For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1.

Intermediate

This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration.

The Intermediate profile requires a minimum TLS version of 1.2.

Modern

This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration.

The Modern profile requires a minimum TLS version of 1.3.

Custom

This profile allows you to define the TLS version and ciphers to use.

Use caution when using a Custom profile, because invalid configurations can cause problems.

When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout.

Configuring the TLS security profile for the Ingress Controller

To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.

Sample IngressController CR that configures the Old TLS security profile

  apiVersion: operator.openshift.io/v1
  kind: IngressController
  ...
  spec:
    tlsSecurityProfile:
      old: {}
      type: Old
  ...

The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers.

You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile. For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters.

The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile.

The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile:

    $ oc edit IngressController default -n openshift-ingress-operator
  2. Add the spec.tlsSecurityProfile field:

    Sample IngressController CR for a Custom profile

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    ...
    spec:
      tlsSecurityProfile:
        type: Custom (1)
        custom: (2)
          ciphers: (3)
          - ECDHE-ECDSA-CHACHA20-POLY1305
          - ECDHE-RSA-CHACHA20-POLY1305
          - ECDHE-RSA-AES128-GCM-SHA256
          - ECDHE-ECDSA-AES128-GCM-SHA256
          minTLSVersion: VersionTLS11
    ...
    (1) Specify the TLS security profile type (Old, Intermediate, or Custom). The default is Intermediate.
    (2) Specify the appropriate field for the selected type:
    • old: {}

    • intermediate: {}

    • custom:

    (3) For the custom type, specify a list of TLS ciphers and minimum accepted TLS version.
  3. Save the file to apply the changes.

Verification

  • Verify that the profile is set in the IngressController CR:

    $ oc describe IngressController default -n openshift-ingress-operator

    Example output

    Name:         default
    Namespace:    openshift-ingress-operator
    Labels:       <none>
    Annotations:  <none>
    API Version:  operator.openshift.io/v1
    Kind:         IngressController
    ...
    Spec:
      ...
      Tls Security Profile:
        Custom:
          Ciphers:
            ECDHE-ECDSA-CHACHA20-POLY1305
            ECDHE-RSA-CHACHA20-POLY1305
            ECDHE-RSA-AES128-GCM-SHA256
            ECDHE-ECDSA-AES128-GCM-SHA256
          Min TLS Version:  VersionTLS11
        Type:               Custom
    ...

Configuring mutual TLS authentication

You can configure the Ingress Controller to enable mutual TLS (mTLS) authentication by setting a spec.clientTLS value. The clientTLS value configures the Ingress Controller to verify client certificates. This configuration includes setting a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client’s certificate. Optionally, you can also configure a list of certificate subject filters.

If the clientCA value specifies an X509v3 certificate revocation list (CRL) distribution point, the Ingress Operator downloads and manages a CRL config map based on the HTTP URI X509v3 CRL Distribution Point specified in each provided certificate. The Ingress Controller uses this config map during mTLS/TLS negotiation. Requests that do not provide valid certificates are rejected.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have a PEM-encoded CA certificate bundle.

  • If your CA bundle references a CRL distribution point, you must also include the end-entity or leaf certificate in the client CA bundle. This certificate must include an HTTP URI under CRL Distribution Points, as described in RFC 5280. For example:

    Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1
    Subject: SOME SIGNED CERT
    X509v3 CRL Distribution Points:
        Full Name:
            URI:http://crl.example.com/example.crl

Procedure

  1. In the openshift-config namespace, create a config map from your CA bundle:

    $ oc create configmap \
       router-ca-certs-default \
       --from-file=ca-bundle.pem=client-ca.crt \ (1)
       -n openshift-config
    (1) The config map data key must be ca-bundle.pem, and the data value must be a CA certificate in PEM format.
  2. Edit the IngressController resource in the openshift-ingress-operator project:

    $ oc edit IngressController default -n openshift-ingress-operator
  3. Add the spec.clientTLS field and subfields to configure mutual TLS:

    Sample IngressController CR for a clientTLS profile that specifies filtering patterns

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      clientTLS:
        clientCertificatePolicy: Required
        clientCA:
          name: router-ca-certs-default
        allowedSubjectPatterns:
        - "^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift$"

View the default Ingress Controller

The Ingress Operator is a core feature of OKD and is enabled out of the box.

Every new OKD installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.

Procedure

  • View the default Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/default

View Ingress Operator status

You can view and inspect the status of your Ingress Operator.

Procedure

  • View your Ingress Operator status:

    $ oc describe clusteroperators/ingress

View Ingress Controller logs

You can view your Ingress Controller logs.

Procedure

  • View your Ingress Controller logs:

    $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>

View Ingress Controller status

You can view the status of a particular Ingress Controller.

Procedure

  • View the status of an Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>

Configuring the Ingress Controller

Setting a custom default certificate

As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR).

Prerequisites

  • You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI.

  • Your certificate meets the following requirements:

    • The certificate is valid for the ingress domain.

    • The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com.

  • You must have an IngressController CR. You may use the default one:

    $ oc --namespace openshift-ingress-operator get ingresscontrollers

    Example output

    NAME      AGE
    default   10m

If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s).

Procedure

The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key. You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR.

This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy.

  1. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files.

    $ oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key
  2. Update the IngressController CR to reference the new certificate secret:

    $ oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \
      --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'
  3. Verify the update was effective:

    $ echo Q |\
      openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\
      openssl x509 -noout -subject -issuer -enddate

    where:

    <domain>

    Specifies the base domain name for your cluster.

    Example output

    subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com
    issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com
    notAfter=May 10 08:32:45 2022 GMT

    You can alternatively apply the following YAML to set a custom default certificate:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      defaultCertificate:
        name: custom-certs-default

    The certificate secret name should match the value used to update the CR.

Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller’s deployment to use the custom certificate.

Removing a custom default certificate

As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You previously configured a custom default certificate for the Ingress Controller.

Procedure

  • To remove the custom certificate and restore the certificate that ships with OKD, enter the following command:

    $ oc patch -n openshift-ingress-operator ingresscontrollers/default \
      --type json -p $'- op: remove\n  path: /spec/defaultCertificate'

    There can be a delay while the cluster reconciles the new certificate configuration.

Verification

  • To confirm that the original cluster certificate is restored, enter the following command:

    $ echo Q | \
      openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \
      openssl x509 -noout -subject -issuer -enddate

    where:

    <domain>

    Specifies the base domain name for your cluster.

    Example output

    subject=CN = *.apps.<domain>
    issuer=CN = ingress-operator@1620633373
    notAfter=May 10 10:44:36 2023 GMT

Autoscaling an Ingress Controller

Automatically scale an Ingress Controller to dynamically meet routing performance or availability requirements such as the requirement to increase throughput. The following procedure provides an example for scaling up the default IngressController.

The Custom Metrics Autoscaler is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  1. You have the OpenShift CLI (oc) installed.

  2. You have access to an OKD cluster as a user with the cluster-admin role.

  3. You have the Custom Metrics Autoscaler Operator installed.

Procedure

  1. Create a project in the openshift-ingress-operator namespace by running the following command:

    $ oc project openshift-ingress-operator
  2. Enable OpenShift monitoring for user-defined projects by creating and applying a config map:

    1. Create a new ConfigMap object, cluster-monitoring-config.yaml:

      cluster-monitoring-config.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          enableUserWorkload: true (1)
      (1) When set to true, the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster.
    2. Apply the config map by running the following command:

      $ oc apply -f cluster-monitoring-config.yaml
  3. Create a service account to authenticate with Thanos by running the following command:

    $ oc create serviceaccount thanos && oc describe serviceaccount thanos

    Example output

    Name:                thanos
    Namespace:           openshift-ingress-operator
    Labels:              <none>
    Annotations:         <none>
    Image pull secrets:  thanos-dockercfg-b4l9s
    Mountable secrets:   thanos-dockercfg-b4l9s
    Tokens:              thanos-token-c422q
    Events:              <none>
  4. Define a TriggerAuthentication object within the openshift-ingress-operator namespace using the service account’s token.

    1. Define the variable secret that contains the secret by running the following command:

      $ secret=$(oc get secret | grep thanos-token | head -n 1 | awk '{ print $1 }')
    2. Create the TriggerAuthentication object and pass the value of the secret variable to the TOKEN parameter:

      $ oc process TOKEN="$secret" -f - <<EOF | oc apply -f -
      apiVersion: template.openshift.io/v1
      kind: Template
      parameters:
      - name: TOKEN
      objects:
      - apiVersion: keda.sh/v1alpha1
        kind: TriggerAuthentication
        metadata:
          name: keda-trigger-auth-prometheus
        spec:
          secretTargetRef:
          - parameter: bearerToken
            name: \${TOKEN}
            key: token
          - parameter: ca
            name: \${TOKEN}
            key: ca.crt
      EOF
  5. Create and apply a role for reading metrics from Thanos:

    1. Create a new role, thanos-metrics-reader.yaml, that reads metrics from pods and nodes:

      thanos-metrics-reader.yaml

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: thanos-metrics-reader
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - nodes
        verbs:
        - get
      - apiGroups:
        - metrics.k8s.io
        resources:
        - pods
        - nodes
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - ""
        resources:
        - namespaces
        verbs:
        - get
    2. Apply the new role by running the following command:

      $ oc apply -f thanos-metrics-reader.yaml
  6. Add the new role to the service account by entering the following commands:

    $ oc adm policy add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator
    $ oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos

    The argument add-cluster-role-to-user is only required if you use cross-namespace queries. The following step uses a query from the kube-metrics namespace which requires this argument.

  7. Create a new ScaledObject YAML file, ingress-autoscaler.yaml, that targets the default Ingress Controller deployment:

    Example ScaledObject definition

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: ingress-scaler
    spec:
      scaleTargetRef: (1)
        apiVersion: operator.openshift.io/v1
        kind: IngressController
        name: default
        envSourceContainerName: ingress-operator
      minReplicaCount: 1
      maxReplicaCount: 20 (2)
      cooldownPeriod: 1
      pollingInterval: 1
      triggers:
      - type: prometheus
        metricType: AverageValue
        metadata:
          serverAddress: https://<example-cluster>:9091 (3)
          namespace: openshift-ingress-operator (4)
          metricName: 'kube-node-role'
          threshold: '1'
          query: 'sum(kube_node_role{role="worker",service="kube-state-metrics"})' (5)
          authModes: "bearer"
        authenticationRef:
          name: keda-trigger-auth-prometheus
    (1) The custom resource that you are targeting. In this case, the Ingress Controller.
    (2) Optional: The maximum number of replicas. If you omit this field, the default maximum is set to 100 replicas.
    (3) The cluster address and port.
    (4) The Ingress Operator namespace.
    (5) This expression evaluates to however many worker nodes are present in the deployed cluster.

    If you are using cross-namespace queries, you must target port 9091 and not port 9092 in the serverAddress field. You also must have elevated privileges to read metrics from this port.

  8. Apply the custom resource definition by running the following command:

    $ oc apply -f ingress-autoscaler.yaml

Verification

  • Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands:

    • Use the grep command to search the Ingress Controller YAML file for replicas:

      $ oc get ingresscontroller/default -o yaml | grep replicas:

      Example output

      replicas: 3
    • Get the pods in the openshift-ingress project:

      $ oc get pods -n openshift-ingress

      Example output

      NAME                             READY   STATUS    RESTARTS   AGE
      router-default-7b5df44ff-l9pmm   2/2     Running   0          17h
      router-default-7b5df44ff-s5sl5   2/2     Running   0          3d22h
      router-default-7b5df44ff-wwsth   2/2     Running   0          66s

Scaling an Ingress Controller

Manually scale an Ingress Controller to meet routing performance or availability requirements such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController.

Scaling is not an immediate action, as it takes time to create the desired number of replicas.

Procedure

  1. View the current number of available replicas for the default IngressController:

    $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

    Example output

    2
  2. Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas:

    $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge

    Example output

    ingresscontroller.operator.openshift.io/default patched
  3. Verify that the default IngressController scaled to the number of replicas that you specified:

    $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

    Example output

    3

    You can alternatively apply the following YAML to scale an Ingress Controller to three replicas:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 3 (1)
    (1) If you need a different number of replicas, change the replicas value.

Configuring Ingress access logging

You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OKD, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs.

Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller.

Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack’s capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap.

Prerequisites

  • Log in as a user with cluster-admin privileges.

Procedure

Configure Ingress access logging to a sidecar.

  • To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a sidecar container, you must specify Container for spec.logging.access.destination.type. The following example is an Ingress Controller definition that logs to a Container destination:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Container
  • When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod:

    $ oc -n openshift-ingress logs deployment.apps/router-default -c logs

    Example output

    2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1"

Configure Ingress access logging to a Syslog endpoint.

  • To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type. If the destination type is Syslog, you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility. The following example is an Ingress Controller definition that logs to a Syslog destination:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Syslog
            syslog:
              address: 1.2.3.4
              port: 10514

    The syslog destination port must be UDP.

Configure Ingress access logging with a specific log format.

  • You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Syslog
            syslog:
              address: 1.2.3.4
              port: 10514
          httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'

Disable Ingress access logging.

  • To disable Ingress access logging, leave spec.logging or spec.logging.access empty:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access: null

Setting Ingress Controller thread count

A cluster administrator can set the thread count to increase the number of incoming connections a cluster can handle. You can patch an existing Ingress Controller to increase the number of threads.

Prerequisites

  • The following assumes that you already created an Ingress Controller.

Procedure

  • Update the Ingress Controller to increase the number of threads:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}'

    If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value.
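
As a sketch of that combination, assuming your large-capacity nodes carry a hypothetical ingress-ready=true label:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            ingress-ready: "true"   # hypothetical label on high-capacity nodes
      tuningOptions:
        threadCount: 8              # illustrative value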

Configuring an Ingress Controller to use an internal load balancer

When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer.

If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.

If you want to change the scope for an IngressController, you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.

OKD Ingress LoadBalancerService endpoint publishing strategy

Figure 1. Diagram of LoadBalancer

The preceding graphic shows the following concepts pertaining to OKD Ingress LoadBalancerService endpoint publishing strategy:

  • You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer.

  • You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic.

  • Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml, such as in the following example:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      namespace: openshift-ingress-operator
      name: <name> (1)
    spec:
      domain: <domain> (2)
      endpointPublishingStrategy:
        type: LoadBalancerService
        loadBalancer:
          scope: Internal (3)
    (1) Replace <name> with a name for the IngressController object.
    (2) Specify the domain for the application published by the controller.
    (3) Specify a value of Internal to use an internal load balancer.
  2. Create the Ingress Controller defined in the previous step by running the following command:

    $ oc create -f <name>-ingress-controller.yaml (1)
    (1) Replace <name> with the name of the IngressController object.
  3. Optional: Confirm that the Ingress Controller was created by running the following command:

    $ oc --all-namespaces=true get ingresscontrollers

Configuring global access for an Ingress Controller on GCP

An Ingress Controller created on GCP with an internal load balancer generates an internal IP address for the service. A cluster administrator can specify the global access option, which enables clients in any region within the same VPC network and compute region as the load balancer to reach the workloads running on your cluster.

For more information, see the GCP documentation for global access.

Prerequisites

  • You deployed an OKD cluster on GCP infrastructure.

  • You configured an Ingress Controller to use an internal load balancer.

  • You installed the OpenShift CLI (oc).

Procedure

  1. Configure the Ingress Controller resource to allow global access.

    You can also create an Ingress Controller and specify the global access option.

    1. Configure the Ingress Controller resource:

      $ oc -n openshift-ingress-operator edit ingresscontroller/default
    2. Edit the YAML file:

      Sample clientAccess configuration to Global

      spec:
        endpointPublishingStrategy:
          loadBalancer:
            providerParameters:
              gcp:
                clientAccess: Global (1)
              type: GCP
            scope: Internal
          type: LoadBalancerService
      (1) Set gcp.clientAccess to Global.
    3. Save the file to apply the changes.

  2. Run the following command to verify that the service allows global access:

    $ oc -n openshift-ingress edit svc/router-default -o yaml

    The output shows that global access is enabled for GCP with the annotation, networking.gke.io/internal-load-balancer-allow-global-access.

Setting the Ingress Controller health check interval

A cluster administrator can set the health check interval to define how long the router waits between two consecutive health checks. This value is applied globally as a default for all routes. The default value is 5 seconds.

Prerequisites

  • The following assumes that you already created an Ingress Controller.

Procedure

  • Update the Ingress Controller to change the interval between back end health checks:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"healthCheckInterval": "8s"}}}'

    To override the healthCheckInterval for a single route, use the route annotation router.openshift.io/haproxy.health.check.interval.

Configuring the default Ingress Controller for your cluster to be internal

You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.

If you want to change the scope for an IngressController, you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

    $ oc replace --force --wait --filename - <<EOF
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      namespace: openshift-ingress-operator
      name: default
    spec:
      endpointPublishingStrategy:
        type: LoadBalancerService
        loadBalancer:
          scope: Internal
    EOF

Configuring the route admission policy

Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname.

Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.

Prerequisites

  • Cluster administrator privileges.

Procedure

  • Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge

    Sample Ingress Controller configuration

    spec:
      routeAdmission:
        namespaceOwnership: InterNamespaceAllowed
    ...

    You can alternatively apply the following YAML to configure the route admission policy:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      routeAdmission:
        namespaceOwnership: InterNamespaceAllowed

Using wildcard routes

The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller.

The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None, which is backwards compatible with existing IngressController resources.

Procedure

  1. Configure the wildcard policy.

    1. Use the following command to edit the IngressController resource:

      $ oc edit IngressController
    2. Under spec, set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed:

      spec:
        routeAdmission:
          wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed
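
With WildcardsAllowed set, a route with a wildcard policy of Subdomain can then be admitted. As a sketch, assuming a hypothetical service named hello-openshift and an illustrative hostname:

      $ oc expose service hello-openshift --hostname="hello.apps.example.com" --wildcard-policy=Subdomain

A route with a Subdomain wildcard policy serves all hosts within the subdomain of its hostname.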

Using X-Forwarded headers

You can configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers, including Forwarded and X-Forwarded-For. The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller.

Procedure

  1. Configure the HTTPHeaders field for the Ingress Controller.

    1. Use the following command to edit the IngressController resource:

      $ oc edit IngressController
    2. Under spec, set the HTTPHeaders policy field to Append, Replace, IfNone, or Never:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          forwardedHeaderPolicy: Append

Example use cases

As a cluster administrator, you can:

  • Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller.

    To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides.

  • Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified.

    To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header.

As an application developer, you can:

  • Configure an application-specific external proxy that injects the X-Forwarded-For header.

    To configure an Ingress Controller to pass the header through unmodified for an application’s Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application.

    You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per-route basis, independent of the globally set value for the Ingress Controller, as shown in the sketch below.
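
    For example, a minimal sketch that sets the per-route policy on a hypothetical route named my-app-route:

      $ oc annotate route my-app-route haproxy.router.openshift.io/set-forwarded-headers=if-none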

Enabling HTTP/2 Ingress connectivity

You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.

You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.

To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.

The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes.

Using WebSockets with a re-encrypt route and with HTTP/2 enabled on an Ingress Controller requires WebSocket support over HTTP/2. WebSockets over HTTP/2 is a feature of HAProxy 2.4, which is unsupported in OKD at this time.

For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol.

Procedure

Enable HTTP/2 on a single Ingress Controller.

  • To enable HTTP/2 on an Ingress Controller, enter the oc annotate command:

    $ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

    Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate.

Enable HTTP/2 on the entire cluster.

  • To enable HTTP/2 for the entire cluster, enter the oc annotate command:

    $ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true

    You can alternatively apply the following YAML to add the annotation:

    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
      annotations:
        ingress.operator.openshift.io/default-enable-http2: "true"
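
    To check whether a route actually negotiates HTTP/2, a minimal verification sketch using curl against a route hostname (the route must use a custom certificate, as noted above); the command prints 2 when HTTP/2 was negotiated:

      $ curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://<route_hostname>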

Configuring the PROXY protocol for an Ingress Controller

A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork or NodePortService endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives only contain the source address that is associated with the load balancer.

This feature is not supported in cloud deployments. This restriction is because when OKD runs in a cloud platform, and an IngressController specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses.

You must configure both OKD and the external load balancer to either use the PROXY protocol or to use TCP.

The PROXY protocol is unsupported for the default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress VIP.

Prerequisites

  • You created an Ingress Controller.

Procedure

  1. Edit the Ingress Controller resource:

    $ oc -n openshift-ingress-operator edit ingresscontroller/default
  2. Set the PROXY configuration:

    • If your Ingress Controller uses the HostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY:

      Sample hostNetwork configuration to PROXY

      spec:
        endpointPublishingStrategy:
          hostNetwork:
            protocol: PROXY
          type: HostNetwork
    • If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY:

      Sample nodePort configuration to PROXY

      spec:
        endpointPublishingStrategy:
          nodePort:
            protocol: PROXY
          type: NodePortService
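
    One way to confirm that the change was rolled out, a sketch that assumes the operator surfaces this setting through the ROUTER_USE_PROXY_PROTOCOL environment variable, is to list the router deployment's environment:

      $ oc -n openshift-ingress set env deployment/router-default --list | grep ROUTER_USE_PROXY_PROTOCOL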

Specifying an alternative cluster domain using the appsDomain option

As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OKD to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route.

For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster.

Prerequisites

  • You deployed an OKD cluster.

  • You installed the oc command line interface.

Procedure

  1. Configure the appsDomain field by specifying an alternative default domain for user-created routes.

    1. Edit the ingress cluster resource:

      $ oc edit ingresses.config/cluster -o yaml
    2. Edit the YAML file:

      Sample appsDomain configuration to test.example.com

      apiVersion: config.openshift.io/v1
      kind: Ingress
      metadata:
        name: cluster
      spec:
        domain: apps.example.com (1)
        appsDomain: <test.example.com> (2)

      (1) Specifies the default domain. You cannot modify the default domain after installation.
      (2) Optional: Domain for OKD infrastructure to use for application routes. Instead of the default prefix, apps, you can use an alternative prefix like test.
  2. Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and checking the route domain change:

    Wait for the openshift-apiserver to finish its rolling updates before exposing the route.

    1. Expose the route:

      $ oc expose service hello-openshift
      route.route.openshift.io/hello-openshift exposed

      Example output:

      $ oc get routes
      NAME              HOST/PORT                                        PATH   SERVICES          PORT       TERMINATION   WILDCARD
      hello-openshift   hello_openshift-<my_project>.test.example.com           hello-openshift   8080-tcp                 None

Converting HTTP header case

HAProxy 2.2 lowercases HTTP header names by default, for example, changing Host: xyz.com to host: xyz.com. If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field for a solution to accommodate legacy applications until they can be fixed.

Because OKD includes HAProxy 2.2, make sure to add the necessary configuration by using spec.httpHeaders.headerNameCaseAdjustments before upgrading.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file.

  • Specify an HTTP header to be capitalized by entering the oc patch command.

    1. Enter the oc patch command to change the HTTP host header to Host:

      $ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}'
    2. Annotate the route of the application:

      $ oc annotate routes/my-application haproxy.router.openshift.io/h1-adjust-case=true

      The Ingress Controller then adjusts the host request header as specified.

  • Specify adjustments using the HeaderNameCaseAdjustments field by configuring the Ingress Controller YAML file.

    1. The following example Ingress Controller YAML adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes:

      Example Ingress Controller YAML

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          headerNameCaseAdjustments:
          - Host
    2. The following example route enables HTTP response header name case adjustments using the haproxy.router.openshift.io/h1-adjust-case annotation:

      Example route YAML

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        annotations:
          haproxy.router.openshift.io/h1-adjust-case: "true" (1)
        name: my-application
        namespace: my-application
      spec:
        to:
          kind: Service
          name: my-application

      (1) Set haproxy.router.openshift.io/h1-adjust-case to true. The value is quoted because annotation values must be strings.

Using router compression

You can configure the HAProxy Ingress Controller to apply router compression globally for specific MIME types. You can use the mimeTypes variable to define the formats of MIME types to which compression is applied. The types are: application, image, message, multipart, text, video, or a custom type prefaced by "X-". To see the full notation for MIME types and subtypes, see RFC1341.

Memory allocated for compression can affect the maximum number of connections. Additionally, compression of large buffers can introduce latency, much as heavy regular expressions or long lists of regular expressions do.

Not all MIME types benefit from compression, but HAProxy still uses resources to try to compress them if instructed to. Generally, text formats, such as html, css, and js, benefit from compression, but formats that are already compressed, such as image, audio, and video, benefit little in exchange for the time and resources spent on compression.

Procedure

  1. Configure the httpCompression field for the Ingress Controller.

    1. Use the following command to edit the IngressController resource:

      $ oc edit -n openshift-ingress-operator ingresscontrollers/default
    2. Under spec, set the httpCompression policy field to mimeTypes and specify a list of MIME types that should have compression applied:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpCompression:
          mimeTypes:
          - "text/html"
          - "text/css; charset=utf-8"
          - "application/json"
      ...
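
      To verify that compression is applied, a minimal sketch that requests a compressible type from a route hostname and checks for a content-encoding response header (gzip appears in the output when HAProxy compressed the response):

        $ curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' https://<route_hostname> | grep -i content-encoding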

Exposing router metrics

You can expose the HAProxy router metrics by default in Prometheus format on the default stats port, 1936. External metrics collection and aggregation systems, such as Prometheus, can access the HAProxy router metrics. You can also view the HAProxy router metrics in a browser in HTML and comma-separated values (CSV) formats.

Prerequisites

  • You configured your firewall to allow access to the default stats port, 1936.

Procedure

  1. Get the router pod name by running the following command:

    $ oc get pods -n openshift-ingress

    Example output

    NAME                              READY   STATUS    RESTARTS   AGE
    router-default-76bfffb66c-46qwp   1/1     Running   0          11h
  2. Get the router’s username and password, which the router pod stores in the /var/lib/haproxy/conf/metrics-auth/statsUsername and /var/lib/haproxy/conf/metrics-auth/statsPassword files:

    1. Get the username by running the following command:

      $ oc rsh <router_pod_name> cat metrics-auth/statsUsername
    2. Get the password by running the following command:

      $ oc rsh <router_pod_name> cat metrics-auth/statsPassword
  3. Get the router IP and metrics certificates by running the following command:

    $ oc describe pod <router_pod>
  4. Get the raw statistics in Prometheus format by running the following command:

    $ curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics
  5. Access the metrics securely by running the following command:

    $ curl -u <user>:<password> https://<router_IP>:<stats_port>/metrics -k
  6. Access the default stats port, 1936, by running the following command:

    $ curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics

    Example output

    ...
    # HELP haproxy_backend_connections_total Total number of connections.
    # TYPE haproxy_backend_connections_total gauge
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0
    ...
    # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value.
    # TYPE haproxy_exporter_server_threshold gauge
    haproxy_exporter_server_threshold{type="current"} 11
    haproxy_exporter_server_threshold{type="limit"} 500
    ...
    # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes.
    # TYPE haproxy_frontend_bytes_in_total gauge
    haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0
    haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0
    haproxy_frontend_bytes_in_total{frontend="public"} 119070
    ...
    # HELP haproxy_server_bytes_in_total Current total of incoming bytes.
    # TYPE haproxy_server_bytes_in_total gauge
    haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0
    haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0
    haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0
    haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0
    ...
  7. Launch the stats window by entering the following URL in a browser:

    http://<user>:<password>@<router_IP>:<stats_port>
  8. Optional: Get the stats in CSV format by entering the following URL in a browser:

    http://<user>:<password>@<router_IP>:1936/metrics;csv
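
    For example, a minimal Prometheus scrape configuration sketch for these metrics; the target, username, and password are placeholders for the values gathered in the steps above:

      scrape_configs:
        - job_name: 'okd-router'
          metrics_path: /metrics
          static_configs:
            - targets: ['<router_IP>:1936']
          basic_auth:
            username: <user>
            password: <password>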

Customizing HAProxy error code response pages

As a cluster administrator, you can specify a custom error code response page for the 503 error, the 404 error, or both. The HAProxy router serves a 503 error page when the application pod is not running or a 404 error page when the requested URL does not exist. For example, if you customize the 503 error code response page, then that page is served when the application pod is not running, while the HAProxy router serves the default 404 error code HTTP response page for an incorrect route or a non-existing route.

Custom error code response pages are specified in a config map, and the Ingress Controller is then patched to reference it. The config map keys have two available file names: error-page-503.http and error-page-404.http.

Custom HTTP error code response pages must follow the HAProxy HTTP error page configuration guidelines. Here is an example of the default OKD HAProxy router http 503 error code response page. You can use the default content as a template for creating your own custom page.

By default, the HAProxy router serves only a 503 error page when the application is not running or when the route is incorrect or non-existent. This default behavior is the same as the behavior on OKD 4.8 and earlier. If you do not provide a config map that customizes an HTTP error code response, the router serves the default 404 or 503 error code response page.

If you use the OKD default 503 error code page as a template for your customizations, the headers in the file require an editor that can use CRLF line endings.
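
For illustration, a minimal sketch of the raw-HTTP format that an error-page-503.http file follows; the actual OKD default page contains different content, and every header line must end with CRLF:

  HTTP/1.0 503 Service Unavailable
  Pragma: no-cache
  Cache-Control: private, max-age=0, no-cache, no-store
  Connection: close
  Content-Type: text/html

  <html>
    <body><h1>503 Service Unavailable</h1></body>
  </html>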

Procedure

  1. Create a config map named my-custom-error-code-pages in the openshift-config namespace:

    $ oc -n openshift-config create configmap my-custom-error-code-pages \
      --from-file=error-page-503.http \
      --from-file=error-page-404.http

    If you do not specify the correct format for the custom error code response page, a router pod outage occurs. To resolve this outage, you must delete or correct the config map and delete the affected router pods so they can be recreated with the correct information.

  2. Patch the Ingress Controller to reference the my-custom-error-code-pages config map by name:

    $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}' --type=merge

    The Ingress Operator copies the my-custom-error-code-pages config map from the openshift-config namespace to the openshift-ingress namespace. The Operator names the config map according to the pattern, <your_ingresscontroller_name>-errorpages, in the openshift-ingress namespace.

  3. Display the copy:

    $ oc get cm default-errorpages -n openshift-ingress

    Example output

    NAME                 DATA   AGE
    default-errorpages   2      25s (1)

    (1) The example config map name is default-errorpages because the default Ingress Controller custom resource (CR) was patched.
  4. Confirm that the config map containing the custom error response pages is mounted on the router volume, where each config map key is the file name of a custom HTTP error code response:

    • For the 503 custom HTTP error code response:

      $ oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http
    • For the 404 custom HTTP error code response:

      $ oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http

Verification

Verify your custom error code HTTP response:

  1. Create a test project and application:

    $ oc new-project test-ingress
    $ oc new-app django-psql-example
  2. For the 503 custom HTTP error code response:

    1. Stop all the pods for the application.

    2. Run the following curl command or visit the route hostname in the browser:

      $ curl -vk <route_hostname>
  3. For the 404 custom HTTP error code response:

    1. Visit a non-existent route or an incorrect route.

    2. Run the following curl command or visit the route hostname in the browser:

      $ curl -vk <route_hostname>
  4. Check that the errorfile attribute is properly set in the haproxy.config file:

    $ oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile

Setting the Ingress Controller maximum connections

A cluster administrator can set the maximum number of simultaneous connections for OpenShift router deployments. You can patch an existing Ingress Controller to increase the maximum number of connections.

Prerequisites

  • You created an Ingress Controller.

Procedure

  • Update the Ingress Controller to change the maximum number of connections for HAProxy:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"maxConnections": 7500}}}'

    If you set the spec.tuningOptions.maxConnections value greater than the current operating system limit, the HAProxy process will not start. See the table in the “Ingress Controller configuration parameters” section for more information about this parameter.
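
    To check the value that HAProxy applied, one sketch is to search the rendered configuration inside a router pod for the maxconn directive, which tuningOptions.maxConnections feeds; this assumes the rendered file path used earlier in this document:

      $ oc -n openshift-ingress rsh <router_pod> grep maxconn /var/lib/haproxy/conf/haproxy.config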

Additional resources