Configuring the Knative Serving Operator custom resource

You can configure Knative Serving with the following options:

Version configuration

Cluster administrators can install a specific version of Knative Serving by using the spec.version field.

For example, if you want to install Knative Serving v0.23.0, you can apply the following KnativeServing custom resource:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    version: 0.23.0

If spec.version is not specified, the Knative Operator installs the latest available version of Knative Serving. If you specify an invalid or unavailable version, the Knative Operator does nothing. The Knative Operator always makes the three latest minor release versions available. For example, if the current version of the Knative Operator is v0.24.0, the earliest version of Knative Serving available through the Operator is v0.22.0.

If Knative Serving is already managed by the Operator, updating the spec.version field in the KnativeServing resource enables upgrading or downgrading the Knative Serving version, without needing to change the Operator.

Important

The Knative Operator only permits upgrades or downgrades by one minor release version at a time. For example, if the current Knative Serving deployment is version v0.22.0, you must upgrade to v0.23.0 before upgrading to v0.24.0.
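The version rules above can be sketched as a quick check. This is a hypothetical helper mirroring the one-minor-step rule as stated, not part of the Operator API, and it ignores major-version transitions:

```python
def is_allowed_version_change(current, target):
    """Check the one-minor-release-at-a-time rule described above.

    Versions are (major, minor) tuples, e.g. v0.22.0 -> (0, 22).
    """
    same_major = current[0] == target[0]
    one_minor_step = abs(current[1] - target[1]) <= 1
    return same_major and one_minor_step

# v0.22 -> v0.23 is allowed; v0.22 -> v0.24 needs an intermediate
# upgrade to v0.23 first.
```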

Install customized Knative Serving

The Operator gives you the flexibility to install Knative Serving customized to your own requirements. As long as the customized Knative Serving manifests are accessible to the Operator, you can install them.

There are two modes available for installing customized manifests: overwrite mode and append mode. With overwrite mode, you must define under spec.manifests all of the manifests that Knative Serving needs, because the Operator no longer installs any default manifests. With append mode, you only need to define your customized manifests under spec.additionalManifests. The customized manifests are installed after the default manifests are applied.

Overwrite mode

You can use overwrite mode when you want to customize all Knative Serving manifests.

For example, if you want to install Knative Serving with the Istio ingress and you want to customize both components, you can create the following YAML file:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: knative-serving
  ---
  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    version: $spec_version
    manifests:
      - URL: https://my-serving/serving.yaml
      - URL: https://my-net-istio/net-istio.yaml

This example installs the customized Knative Serving at version $spec_version, which is available at https://my-serving/serving.yaml, together with the customized ingress plugin net-istio, which is available at https://my-net-istio/net-istio.yaml.

Attention

You can make the customized Knative Serving available through one or multiple links, because spec.manifests supports a list of links. The ordering of the URLs is critical: put the manifest you want applied first at the top.

We strongly recommend that you specify both the version and valid links to the customized Knative Serving, by setting both spec.version and spec.manifests. Do not skip either field.

Append mode

You can use append mode to add your customized manifests into the default manifests.

For example, if you only want to customize a few resources but you still want to install the default Knative Serving, you can create the following YAML file:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: knative-serving
  ---
  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    version: $spec_version
    additionalManifests:
      - URL: https://my-serving/serving-custom.yaml

This example installs the default Knative Serving, and installs your customized resources available at https://my-serving/serving-custom.yaml.

The Knative Operator installs the default manifests of Knative Serving at the version $spec_version, and then applies your customized manifests on top of them.

Private repository and private secrets

You can use the spec.registry section of the operator CR to change the image references to point to a private registry or specify imagePullSecrets:

  • default: this field defines an image reference template for all Knative images. The format is example-registry.io/custom/path/${NAME}:{CUSTOM-TAG}. If you use the same tag for all your images, the only difference is the image name. ${NAME} is a predefined variable in the Operator that corresponds to the container name. If you name the images in your private repository to align with the container names (activator, autoscaler, controller, webhook, autoscaler-hpa, net-istio-controller, and queue-proxy), the default argument should be sufficient.

  • override: a map from container name to the full registry location. This section is only needed when the registry images do not match the common naming format. For containers whose name matches a key, the value is used in preference to the image name calculated by default. If a container’s name does not match a key in override, the template in default is used.

  • imagePullSecrets: a list of Secret names used when pulling Knative container images. The Secrets must be created in the same namespace as the Knative Serving Deployments. See deploying images from a private container registry for configuration details.
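The lookup order across override and default described above can be sketched as follows. The helper name is hypothetical and this is not the Operator's actual code; it assumes a "deployment/container" override key takes precedence over a bare container name, per the note later in this section:

```python
def resolve_image(container, deployment, default_template, override):
    """Resolve a container image per the rules above.

    An override keyed "deployment/container" wins over one keyed by
    the bare container name; otherwise ${NAME} in the default
    template is replaced with the container name.
    """
    for key in (f"{deployment}/{container}", container):
        if key in override:
            return override[key]
    return default_template.replace("${NAME}", container)
```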

Download images in a predefined format without secrets

This example shows how you can define custom image links in the CR by using the simplified format docker.io/knative-images/${NAME}:{CUSTOM-TAG}.

In the following example:

  • The custom tag latest is used for all images.
  • All image links are accessible without using secrets.
  • Images are pushed as docker.io/knative-images/${NAME}:{CUSTOM-TAG}.

To define your image links:

  1. Push images to the following image tags:

    Container               Docker image
    activator               docker.io/knative-images/activator:latest
    autoscaler              docker.io/knative-images/autoscaler:latest
    controller              docker.io/knative-images/controller:latest
    webhook                 docker.io/knative-images/webhook:latest
    autoscaler-hpa          docker.io/knative-images/autoscaler-hpa:latest
    net-istio-controller    docker.io/knative-images/net-istio-controller:latest
    queue-proxy             docker.io/knative-images/queue-proxy:latest
  2. Define your operator CR with following content:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      registry:
        default: docker.io/knative-images/${NAME}:latest

Download images individually without secrets

If your custom image links do not follow a uniform format, you must include each link individually in the CR.

For example, given the following images:

Container               Docker image
activator               docker.io/knative-images-repo1/activator:latest
autoscaler              docker.io/knative-images-repo2/autoscaler:latest
controller              docker.io/knative-images-repo3/controller:latest
webhook                 docker.io/knative-images-repo4/webhook:latest
autoscaler-hpa          docker.io/knative-images-repo5/autoscaler-hpa:latest
net-istio-controller    docker.io/knative-images-repo6/prefix-net-istio-controller:latest
net-istio-webhook       docker.io/knative-images-repo6/net-istio-webhook:latest
queue-proxy             docker.io/knative-images-repo7/queue-proxy-suffix:latest

You must modify the Operator CR to include the full list. For example:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    registry:
      override:
        activator: docker.io/knative-images-repo1/activator:latest
        autoscaler: docker.io/knative-images-repo2/autoscaler:latest
        controller: docker.io/knative-images-repo3/controller:latest
        webhook: docker.io/knative-images-repo4/webhook:latest
        autoscaler-hpa: docker.io/knative-images-repo5/autoscaler-hpa:latest
        net-istio-controller/controller: docker.io/knative-images-repo6/prefix-net-istio-controller:latest
        net-istio-webhook/webhook: docker.io/knative-images-repo6/net-istio-webhook:latest
        queue-proxy: docker.io/knative-images-repo7/queue-proxy-suffix:latest

Note

If the container name is not unique across all Deployments, DaemonSets, and Jobs, you can prefix the container name with the parent resource name and a slash. For example, istio-webhook/webhook.

Download images with secrets

If your image repository requires private secrets for access, include the imagePullSecrets attribute.

This example uses a Secret named regcred. If your registry requires credentials, you must first create this Secret yourself, for example by using the kubectl create secret docker-registry command.

After you create this secret, edit the Operator CR by appending the following content:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    registry:
      ...
      imagePullSecrets:
        - name: regcred

The field imagePullSecrets expects a list of secrets. You can add multiple secrets to access the images as follows:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    registry:
      ...
      imagePullSecrets:
        - name: regcred
        - name: regcred-2
        ...

SSL certificate for controller

To enable tag-to-digest resolution, the Knative Serving controller must be able to access the container registry. To allow the controller to trust a self-signed registry certificate, you can use the Operator to specify the certificate using a ConfigMap or a Secret.

Specify the following fields in spec.controller-custom-certs to select a custom registry certificate:

  • name: the name of the ConfigMap or Secret.
  • type: either the string “ConfigMap” or “Secret”.

If you create a ConfigMap named testCert containing the certificate, change your CR as follows:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    controller-custom-certs:
      name: testCert
      type: ConfigMap

Replace the default istio-ingressgateway-service

To set up a custom ingress gateway, follow Step 1: Create Gateway Service and Deployment Instance.

Step 2: Update the Knative gateway

Update spec.ingress.istio.knative-ingress-gateway to select the labels of the new ingress gateway:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    ingress:
      istio:
        enabled: true
        knative-ingress-gateway:
          selector:
            istio: ingressgateway

Step 3: Update Gateway ConfigMap

Additionally, you need to update the Istio ConfigMap:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    ingress:
      istio:
        enabled: true
        knative-ingress-gateway:
          selector:
            istio: ingressgateway
    config:
      istio:
        gateway.knative-serving.knative-ingress-gateway: "custom-ingressgateway.custom-ns.svc.cluster.local"

The key in spec.config.istio is in the format of gateway.<gateway_namespace>.<gateway_name>.
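The key format can be expressed as a small sketch. The helper name is hypothetical; it simply builds the documented gateway.<gateway_namespace>.<gateway_name> string (and the local-gateway variant used later in this topic):

```python
def istio_config_key(gateway_namespace, gateway_name, local=False):
    """Build a spec.config.istio key in the documented format."""
    prefix = "local-gateway" if local else "gateway"
    return f"{prefix}.{gateway_namespace}.{gateway_name}"
```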

Replace the knative-ingress-gateway gateway

To create the ingress gateway, follow Step 1: Create the Gateway.

Step 2: Update Gateway ConfigMap

You will need to update the Istio ConfigMap:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    config:
      istio:
        gateway.custom-ns.knative-custom-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"

The key in spec.config.istio is in the format of gateway.<gateway_namespace>.<gateway_name>.

Configuration of cluster local gateway

Update spec.ingress.istio.knative-local-gateway to select the labels of the new cluster-local ingress gateway:

Default local gateway name

If you use the default gateway named knative-local-gateway, follow the installing Istio guide to set up the cluster-local gateway.

Non-default local gateway name

If you create a custom local gateway with a name other than knative-local-gateway, update config.istio and the knative-local-gateway selector.

This example shows a service and deployment named custom-local-gateway in the namespace istio-system, with the label custom: custom-local-gateway:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    ingress:
      istio:
        enabled: true
        knative-local-gateway:
          selector:
            custom: custom-local-gateway
    config:
      istio:
        local-gateway.knative-serving.knative-local-gateway: "custom-local-gateway.istio-system.svc.cluster.local"

High availability

By default, Knative Serving runs a single instance of each deployment. The spec.high-availability field allows you to configure the number of replicas for all deployments managed by the operator.

The following configuration specifies a replica count of 3 for the deployments:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    high-availability:
      replicas: 3

The replicas field also configures the HorizontalPodAutoscaler resources, based on spec.high-availability. For example, suppose the operator includes the following HorizontalPodAutoscaler:

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    ...
  spec:
    minReplicas: 3
    maxReplicas: 5

If you configure replicas: 2, which is less than minReplicas, the operator transforms minReplicas to 1.

If you configure replicas: 6, which is more than maxReplicas, the operator transforms maxReplicas to maxReplicas + (replicas - minReplicas) which is 8.
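The adjustment can be sketched with a hypothetical helper that mirrors the two rules stated above; this is an illustration, not the Operator's actual code:

```python
def adjust_hpa(min_replicas, max_replicas, replicas):
    """Apply the HPA adjustments described above."""
    if replicas < min_replicas:
        # replicas below minReplicas: minReplicas is lowered to 1
        min_replicas = 1
    elif replicas > max_replicas:
        # replicas above maxReplicas: maxReplicas grows by
        # (replicas - minReplicas)
        max_replicas = max_replicas + (replicas - min_replicas)
    return min_replicas, max_replicas
```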

Override system deployments

If you want to override some configurations for a specific deployment, you can do so by using spec.deployments in the CR. Currently resources, replicas, labels, annotations, and nodeSelector are supported.

Override the resources

The KnativeServing custom resource can configure system resources for the Knative system containers on a per-deployment basis. Requests and limits can be configured for all the available containers within a deployment, such as activator, autoscaler, and controller.

For example, the following KnativeServing resource configures the container controller in the deployment controller to request 0.3 CPU and 100Mi of memory, and sets hard limits of 1 CPU and 250Mi of memory:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    deployments:
      - name: controller
        resources:
          - container: controller
            requests:
              cpu: 300m
              memory: 100Mi
            limits:
              cpu: 1000m
              memory: 250Mi

Override replicas, labels and annotations

The following KnativeServing resource overrides the webhook deployment to have three replicas, the label mylabel: foo, and the annotation myannotations: bar, while spec.high-availability gives all other system deployments two replicas.

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    high-availability:
      replicas: 2
    deployments:
      - name: webhook
        replicas: 3
        labels:
          mylabel: foo
        annotations:
          myannotations: bar

Note

The KnativeServing resource label and annotation settings override the webhook’s labels and annotations for both Deployments and Pods.

Override the nodeSelector

The following KnativeServing resource overrides the webhook deployment to use the disktype: hdd nodeSelector:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    deployments:
      - name: webhook
        nodeSelector:
          disktype: hdd

Override the tolerations

The KnativeServing resource can override tolerations for the Knative Serving deployment resources. For example, if you would like to add the following tolerations

  tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoSchedule"

to the activator deployment, you need to change your KnativeServing CR as follows:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    deployments:
      - name: activator
        tolerations:
          - key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoSchedule"

Override the affinity

The KnativeServing resource can override the affinity, including nodeAffinity, podAffinity, and podAntiAffinity, for the Knative Serving deployment resources. For example, if you would like to add the following nodeAffinity

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd

to the activator deployment, you need to change your KnativeServing CR as follows:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeServing
  metadata:
    name: knative-serving
    namespace: knative-serving
  spec:
    deployments:
      - name: activator
        affinity:
          nodeAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 1
                preference:
                  matchExpressions:
                    - key: disktype
                      operator: In
                      values:
                        - ssd