Installing Knative

This guide walks you through the installation of the latest version of Knative.

Knative has two components, which can be installed and used independently or together. To help you pick and choose the pieces that are right for you, here is a brief description of each:

  • Serving (stable @ v0.9) provides an abstraction for stateless, request-based, scale-to-zero services.
  • Eventing (alpha @ v0.2) provides abstractions for binding event sources (e.g. GitHub webhooks, Kafka) to event consumers (e.g. Kubernetes or Knative Services).

Knative also has an Observability plugin (deprecated @ v0.14), which provides standard tooling for getting visibility into the health of the software running on Knative.

Before you begin

This guide assumes that you want to install an upstream Knative release on a Kubernetes cluster. A growing number of vendors have managed Knative offerings; see the Knative Offerings page for a full list.

Knative v0.15.0 requires a Kubernetes cluster v1.16 or newer, as well as a compatible kubectl. This guide assumes that you’ve already created a Kubernetes cluster, and that you are using bash in a Mac or Linux environment; some commands will need to be adjusted for use in a Windows environment.
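
To confirm that your cluster and kubectl meet these requirements, you can compare the reported versions; for example:

    kubectl version --short
    # Both Client Version and Server Version should report v1.16 or newer.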

Installing the Serving component

FEATURE STATE: stable @ Knative v0.9

The following commands install the Knative Serving component.

  1. Install the Custom Resource Definitions (aka CRDs):

     kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/serving-crds.yaml
  2. Install the core components of Serving (see below for optional extensions):

     kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/serving-core.yaml
  3. Pick a networking layer (alphabetical):

    FEATURE STATE: alpha @ Knative v0.8

    The following commands install Ambassador and enable its Knative integration.

    1. Create a namespace to install Ambassador in:

       kubectl create namespace ambassador
    2. Install Ambassador:

       kubectl apply --namespace ambassador \
         --filename https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml \
         --filename https://getambassador.io/yaml/ambassador/ambassador-service.yaml
    3. Give Ambassador the required permissions:

       kubectl patch clusterrolebinding ambassador -p '{"subjects":[{"kind": "ServiceAccount", "name": "ambassador", "namespace": "ambassador"}]}'
    4. Enable Knative support in Ambassador:

       kubectl set env --namespace ambassador deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true
    5. To configure Knative Serving to use Ambassador by default:

       kubectl patch configmap/config-network \
         --namespace knative-serving \
         --type merge \
         --patch '{"data":{"ingress.class":"ambassador.ingress.networking.knative.dev"}}'
    6. Fetch the External IP or CNAME:

       kubectl --namespace ambassador get service ambassador

       Save this for configuring DNS below.
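
       If the LoadBalancer reports an IP address rather than a hostname, one convenient (though optional) way to keep it handy for the DNS step is to capture it in a shell variable; this is an illustrative sketch, not part of the official steps:

       # Read the first ingress IP from the service status.
       EXTERNAL_IP=$(kubectl --namespace ambassador get service ambassador \
         --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
       echo $EXTERNAL_IP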

    FEATURE STATE: alpha @ Knative v0.12

    The following commands install Contour and enable its Knative integration.

    1. Install a properly configured Contour:

       kubectl apply --filename https://github.com/knative/net-contour/releases/download/v0.15.0/contour.yaml
    2. Install the Knative Contour controller:

       kubectl apply --filename https://github.com/knative/net-contour/releases/download/v0.15.0/net-contour.yaml
    3. To configure Knative Serving to use Contour by default:

       kubectl patch configmap/config-network \
         --namespace knative-serving \
         --type merge \
         --patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
    4. Fetch the External IP or CNAME:

       kubectl --namespace contour-external get service envoy

       Save this for configuring DNS below.

    FEATURE STATE: alpha @ Knative v0.8

    For a detailed guide on Gloo integration, see Installing Gloo for Knative (https://docs.solo.io/gloo/latest/installation/knative/) in the Gloo documentation.

    The following commands install Gloo and enable its Knative integration.

    1. Make sure glooctl is installed (version 1.3.x or higher recommended):

       glooctl version

       If it is not installed, you can install the latest version using:

       curl -sL https://run.solo.io/gloo/install | sh
       export PATH=$HOME/.gloo/bin:$PATH

       or by following the Gloo CLI install instructions (https://docs.solo.io/gloo/latest/installation/knative/#install-command-line-tool-cli).
    2. Install Gloo and the Knative integration:

       glooctl install knative --install-knative=false
    3. Fetch the External IP or CNAME:

       glooctl proxy url --name knative-external-proxy

       Save this for configuring DNS below.

    FEATURE STATE: stable @ Knative v0.9

    The following commands install Istio and enable its Knative integration.

    1. Install Istio by following the Installing Istio for Knative guide.
    2. Install the Knative Istio controller:

       kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.15.0/release.yaml
    3. Fetch the External IP or CNAME:

       kubectl --namespace istio-system get service istio-ingressgateway

       Save this for configuring DNS below.

    FEATURE STATE: @ Knative v0.13

    The following commands install Kong and enable its Knative integration.

    1. Install the Kong Ingress Controller:

       kubectl apply --filename https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/0.9.x/deploy/single/all-in-one-dbless.yaml
    2. To configure Knative Serving to use Kong by default:

       kubectl patch configmap/config-network \
         --namespace knative-serving \
         --type merge \
         --patch '{"data":{"ingress.class":"kong"}}'
    3. Fetch the External IP or CNAME:

       kubectl --namespace kong get service kong-proxy

       Save this for configuring DNS below.

    FEATURE STATE: alpha @ Knative v0.12

    The following commands install Kourier and enable its Knative integration.

    1. Install the Knative Kourier controller:

       kubectl apply --filename https://github.com/knative/net-kourier/releases/download/v0.15.0/kourier.yaml
    2. To configure Knative Serving to use Kourier by default:

       kubectl patch configmap/config-network \
         --namespace knative-serving \
         --type merge \
         --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
    3. Fetch the External IP or CNAME:

       kubectl --namespace kourier-system get service kourier

       Save this for configuring DNS below.

  4. Configure DNS

     Magic DNS (xip.io)

     We ship a simple Kubernetes Job called “default domain” that will (see caveats) configure Knative Serving to use xip.io as the default DNS suffix.

     kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/serving-default-domain.yaml

     Caveat: This will only work if the cluster LoadBalancer service exposes an IPv4 address, so it will not work with IPv6 clusters, AWS, or local setups like Minikube. For these, see “Real DNS” or “Temporary DNS” below.
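
     If you want to block until the Job has finished, kubectl wait can do so; this assumes the Job is named default-domain, as in the release manifest:

     kubectl wait --for=condition=complete --timeout=120s \
       --namespace knative-serving job/default-domain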

     Real DNS

     To configure DNS for Knative, take the External IP or CNAME from setting up networking, and configure it with your DNS provider as follows:

     • If the networking layer produced an External IP address, then configure a wildcard A record for the domain:

       # Here knative.example.com is the domain suffix for your cluster
       *.knative.example.com == A 35.233.41.212

     • If the networking layer produced a CNAME, then configure a CNAME record for the domain:

       # Here knative.example.com is the domain suffix for your cluster
       *.knative.example.com == CNAME a317a278525d111e89f272a164fd35fb-1510370581.eu-central-1.elb.amazonaws.com

     Once your DNS provider has been configured, direct Knative to use that domain:

       # Replace knative.example.com with your domain suffix
       kubectl patch configmap/config-domain \
         --namespace knative-serving \
         --type merge \
         --patch '{"data":{"knative.example.com":""}}'
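
     To spot-check the record once it has propagated, resolve any host under your domain suffix; it should return the External IP (or CNAME) captured earlier. Here test.knative.example.com is a placeholder:

     dig +short test.knative.example.com
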
     Temporary DNS

     If you are using curl to access the sample applications, or your own Knative app, and are unable to use the Magic DNS (xip.io) or Real DNS methods, there is a temporary approach. This is useful for those who wish to evaluate Knative without altering their DNS configuration, as per the Real DNS method, or who cannot use the Magic DNS method because they are using, for example, Minikube locally or IPv6 clusters.

     To access your application using curl with this method:

     1. After starting your application, get the URL of your application:

        kubectl get ksvc

        The output should be similar to:

        NAME            URL                                         LATESTCREATED         LATESTREADY           READY   REASON
        helloworld-go   http://helloworld-go.default.example.com    helloworld-go-vqjlf   helloworld-go-vqjlf   True

     2. Instruct curl to connect to the External IP or CNAME defined by the networking layer in step 3 above, and use the -H "Host:" command-line option to specify the Knative application's host name. For example, if the networking layer defines your External IP and port to be http://192.168.39.228:32198 and you wish to access the above helloworld-go application, use:

        curl -H "Host: helloworld-go.default.example.com" http://192.168.39.228:32198

        In the case of the provided helloworld-go sample application, the output should, using the default configuration, be:

        Hello Go Sample v1!

     Refer to the Real DNS method for a permanent solution.

  5. Monitor the Knative components until all of the components show a STATUS of Running or Completed:

     kubectl get pods --namespace knative-serving
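
     Since the core components are Deployments, you can also block until they report Available rather than watching pod status by hand; a sketch:

     kubectl wait deployment --all --namespace knative-serving \
       --for=condition=Available --timeout=300s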

At this point, you have a basic installation of Knative Serving!

Optional Serving extensions

FEATURE STATE: beta @ Knative v0.8

Knative also supports the use of the Kubernetes Horizontal Pod Autoscaler (HPA) for driving autoscaling decisions. The following command will install the components needed to support HPA-class autoscaling:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/serving-hpa.yaml
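
A Revision opts into HPA-class autoscaling via annotations on the Service template. The sketch below uses the documented autoscaling annotations; the name and image are illustrative placeholders:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: helloworld-go-hpa   # hypothetical name
    spec:
      template:
        metadata:
          annotations:
            # Use the Kubernetes HPA instead of the default Knative autoscaler (KPA).
            autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
            # Scale on CPU, targeting 80% utilization.
            autoscaling.knative.dev/metric: cpu
            autoscaling.knative.dev/target: "80"
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go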

FEATURE STATE: alpha @ Knative v0.6

Knative supports automatically provisioning TLS certificates via cert-manager. The following commands will install the components needed to support the provisioning of TLS certificates via cert-manager.

  1. First, install cert-manager version 0.12.0 or higher:
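
     For example, to install cert-manager v0.12.0 from its release manifest (one way among several; see the cert-manager documentation for alternatives):

     kubectl apply --validate=false \
       --filename https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml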

  2. Next, install the component that integrates Knative with cert-manager:

     kubectl apply --filename https://github.com/knative/net-certmanager/releases/download/v0.15.0/release.yaml
  3. Now configure Knative to automatically provision TLS certificates; one way is sketched below.
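
     A minimal sketch of that configuration, assuming you have already created a cert-manager (Cluster)Issuer: point certificate.class at the cert-manager integration and turn on auto-TLS:

     kubectl patch configmap/config-network \
       --namespace knative-serving \
       --type merge \
       --patch '{"data":{"certificate.class":"cert-manager.certificate.networking.knative.dev","autoTLS":"Enabled"}}'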

FEATURE STATE: alpha @ Knative v0.14

Knative supports automatically provisioning TLS certificates using Let’s Encrypt HTTP01 challenges. The following commands will install the components needed to support that.

  1. First, install the net-http01 controller:

     kubectl apply --filename https://github.com/knative/net-http01/releases/download/v0.15.0/release.yaml
  2. Next, configure the certificate.class to use this certificate type:

     kubectl patch configmap/config-network \
       --namespace knative-serving \
       --type merge \
       --patch '{"data":{"certificate.class":"net-http01.certificate.networking.knative.dev"}}'
  3. Lastly, enable auto-TLS:

     kubectl patch configmap/config-network \
       --namespace knative-serving \
       --type merge \
       --patch '{"data":{"autoTLS":"Enabled"}}'

FEATURE STATE: alpha @ Knative v0.12

If you are using a Certificate implementation that supports provisioning wildcard certificates (e.g. cert-manager with a DNS01 issuer), then the most efficient way to provision certificates is with the namespace wildcard certificate controller. The following command will install the components needed to provision wildcard certificates in each namespace:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/serving-nscert.yaml

Note that this will not work with HTTP01 challenges, whether issued via cert-manager or the net-http01 option.

Getting started with Serving

Deploy your first app with the getting started with Knative app deployment guide. You can also find a number of samples for Knative Serving in the docs; a minimal example follows below.
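
If you just want to smoke-test the installation, a minimal Knative Service manifest along the lines of the docs' helloworld-go sample looks like this (the image and env values come from that sample; adjust to taste):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: helloworld-go
      namespace: default
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go
              env:
                - name: TARGET
                  value: "Go Sample v1"

Apply it with kubectl apply --filename <file>, then fetch its URL with kubectl get ksvc helloworld-go.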

Installing the Eventing component

FEATURE STATE: beta @ Knative v0.13

The following commands install the Knative Eventing component.

  1. Install the Custom Resource Definitions (aka CRDs):

     kubectl apply --selector knative.dev/crd-install=true \
       --filename https://github.com/knative/eventing/releases/download/v0.15.0/eventing.yaml
  2. Install the core components of Eventing (see below for optional extensions):

     kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.15.0/eventing.yaml
  3. Install a default Channel (messaging) layer (alphabetical).

    1. First, install Apache Kafka for Kubernetes.

    2. Then install the Apache Kafka Channel:

       curl -L "https://github.com/knative/eventing-contrib/releases/download/v0.15.0/kafka-channel.yaml" \
         | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/' \
         | kubectl apply --filename -

       To learn more about the Apache Kafka channel, try our sample (https://knative.dev/v0.15-docs/eventing/samples/kafka/channel/index.html).

    1. Install the Google Cloud Pub/Sub Channel:

       # This installs both the Channel and the GCP Sources.
       kubectl apply --filename https://github.com/google/knative-gcp/releases/download/v0.15.0/cloud-run-events.yaml

       To learn more about the Google Cloud Pub/Sub Channel, try our sample (https://github.com/google/knative-gcp/blob/master/docs/examples/channel/README.md).

    FEATURE STATE: beta @ Knative v0.13

    The following command installs an implementation of Channel that runs in-memory. This implementation is nice because it is simple and standalone, but it is unsuitable for production use cases.

       kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.15.0/in-memory-channel.yaml

    1. First, install NATS Streaming for Kubernetes (https://github.com/knative/eventing-contrib/blob/v0.15.0/natss/config/broker/README.md).

    2. Then install the NATS Streaming Channel:

       kubectl apply --filename https://github.com/knative/eventing-contrib/releases/download/v0.15.0/natss-channel.yaml

  4. Install a Broker (eventing) layer:

     FEATURE STATE: beta @ Knative v0.13

     The following command installs an implementation of Broker that utilizes Channels:

     kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.15.0/channel-broker.yaml

    To customize which broker channel implementation is used, update the following ConfigMap to specify which configurations are used for which namespaces:

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: config-br-defaults
       namespace: knative-eventing
     data:
       default-br-config: |
         # This is the cluster-wide default broker channel.
         clusterDefault:
           brokerClass: ChannelBasedBroker
           apiVersion: v1
           kind: ConfigMap
           name: imc-channel
           namespace: knative-eventing
         # This allows you to specify different defaults per-namespace,
         # in this case the "some-namespace" namespace will use the Kafka
         # channel ConfigMap by default (only for example, you will need
         # to install kafka also to make use of this).
         namespaceDefaults:
           some-namespace:
             brokerClass: ChannelBasedBroker
             apiVersion: v1
             kind: ConfigMap
             name: kafka-channel
             namespace: knative-eventing

    The referenced imc-channel and kafka-channel example ConfigMaps would look like:

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: imc-channel
       namespace: knative-eventing
     data:
       channelTemplateSpec: |
         apiVersion: messaging.knative.dev/v1beta1
         kind: InMemoryChannel
     ---
     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: kafka-channel
       namespace: knative-eventing
     data:
       channelTemplateSpec: |
         apiVersion: messaging.knative.dev/v1alpha1
         kind: KafkaChannel
         spec:
           numPartitions: 3
           replicationFactor: 1

     In order to use the KafkaChannel, make sure it is installed on the cluster, as discussed above.

    FEATURE STATE: alpha @ Knative v0.14

    The following command installs an implementation of Broker that utilizes Channels just like the Channel Based one, but this broker runs event routing components in a System Namespace, providing a smaller and simpler installation.

     kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.15.0/mt-channel-broker.yaml

    To customize which broker channel implementation is used, update the following ConfigMap to specify which configurations are used for which namespaces:

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: config-br-defaults
       namespace: knative-eventing
     data:
       default-br-config: |
         # This is the cluster-wide default broker channel.
         clusterDefault:
           brokerClass: MTChannelBasedBroker
           apiVersion: v1
           kind: ConfigMap
           name: imc-channel
           namespace: knative-eventing
         # This allows you to specify different defaults per-namespace,
         # in this case the "some-namespace" namespace will use the Kafka
         # channel ConfigMap by default (only for example, you will need
         # to install kafka also to make use of this).
         namespaceDefaults:
           some-namespace:
             brokerClass: MTChannelBasedBroker
             apiVersion: v1
             kind: ConfigMap
             name: kafka-channel
             namespace: knative-eventing

    The referenced imc-channel and kafka-channel example ConfigMaps would look like:

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: imc-channel
       namespace: knative-eventing
     data:
       channelTemplateSpec: |
         apiVersion: messaging.knative.dev/v1beta1
         kind: InMemoryChannel
     ---
     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: kafka-channel
       namespace: knative-eventing
     data:
       channelTemplateSpec: |
         apiVersion: messaging.knative.dev/v1alpha1
         kind: KafkaChannel
         spec:
           numPartitions: 3
           replicationFactor: 1

     In order to use the KafkaChannel, make sure it is installed on the cluster, as discussed above.
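
     With a Broker layer installed, creating a Broker is a matter of applying a Broker object; the sketch below pins the implementation via the eventing.knative.dev/broker.class annotation. The API version shown is an assumption and varies across releases:

     apiVersion: eventing.knative.dev/v1beta1   # may be v1alpha1 on older releases
     kind: Broker
     metadata:
       name: default
       namespace: some-namespace
       annotations:
         # Select the multi-tenant channel-based implementation installed above.
         eventing.knative.dev/broker.class: MTChannelBasedBroker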

  5. Monitor the Knative components until all of the components show a STATUS of Running:

     kubectl get pods --namespace knative-eventing

At this point, you have a basic installation of Knative Eventing!

Optional Eventing extensions

FEATURE STATE: alpha @ Knative v0.5

The following command enables the default Broker on a namespace (here default):

    kubectl label namespace default knative-eventing-injection=enabled
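
The injection controller creates a Broker named default in the labeled namespace; after a few seconds you should be able to verify that it is ready:

    kubectl --namespace default get broker default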

FEATURE STATE: alpha @ Knative v0.2

The following command installs the GitHub Source:

    kubectl apply --filename https://github.com/knative/eventing-contrib/releases/download/v0.15.0/github.yaml

To learn more about the GitHub source, try our sample.

FEATURE STATE: alpha @ Knative v0.5

The following command installs the Apache Camel-K Source:

    kubectl apply --filename https://github.com/knative/eventing-contrib/releases/download/v0.15.0/camel.yaml

To learn more about the Apache Camel-K source, try our sample.

FEATURE STATE: alpha @ Knative v0.5

The following command installs the Apache Kafka Source:

    kubectl apply --filename https://github.com/knative/eventing-contrib/releases/download/v0.15.0/kafka-source.yaml

To learn more about the Apache Kafka source, try our sample.

FEATURE STATE: alpha @ Knative v0.2

The following command installs the GCP Sources:

    # This installs both the Sources and the Channel.
    kubectl apply --filename https://github.com/google/knative-gcp/releases/download/v0.15.0/cloud-run-events.yaml

To learn more about the Cloud Pub/Sub source, try our sample.

To learn more about the Cloud Storage source, try our sample.

To learn more about the Cloud Scheduler source, try our sample.

To learn more about the Cloud Audit Logs source, try our sample.

FEATURE STATE: alpha @ Knative v0.10

The following command installs the Apache CouchDB Source:

    kubectl apply --filename https://github.com/knative/eventing-contrib/releases/download/v0.15.0/couchdb.yaml

To learn more about the Apache CouchDB source, read our documentation (https://github.com/knative/eventing-contrib/blob/v0.15.0/couchdb/README.md).

FEATURE STATE: alpha @ Knative v0.14

The following command installs the VMware Sources and Bindings:

    kubectl apply --filename https://github.com/vmware-tanzu/sources-for-knative/releases/download/v0.15.0/release.yaml

To learn more about the VMware sources and bindings, try our samples.

Getting started with Eventing

You can find a number of samples for Knative Eventing here. A quick-start guide is available here.

Installing the Observability plugin

FEATURE STATE: deprecated @ Knative v0.14

Install the following observability features to enable logging, metrics, and request tracing in your Serving and Eventing components.

All observability plugins require that you first install the core:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/monitoring-core.yaml
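
The monitoring components land in the knative-monitoring namespace (per the v0.15 manifests), so you can watch them come up with:

    kubectl get pods --namespace knative-monitoring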

After the core is installed, you can choose to install one or all of the following observability plugins:

  • Install Prometheus and Grafana for metrics:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/monitoring-metrics-prometheus.yaml
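
    Once the pods are running, one way to reach the Grafana UI locally is to port-forward it and browse to http://localhost:3000; this assumes the Grafana pod carries the app=grafana label used by these manifests:

    kubectl port-forward --namespace knative-monitoring \
      $(kubectl get pods --namespace knative-monitoring \
        --selector=app=grafana --output=jsonpath='{.items[0].metadata.name}') 3000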
  • Install the ELK stack (Elasticsearch, Logstash and Kibana) for logs:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/monitoring-logs-elasticsearch.yaml
  • Install Jaeger for distributed tracing:

    To install the in-memory (standalone) version of Jaeger, run the following command:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/monitoring-tracing-jaeger-in-mem.yaml

    To install the ELK version of Jaeger (needs the ELK install above), run the following command:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/monitoring-tracing-jaeger.yaml
  • Install Zipkin for distributed tracing:

    To install the in-memory (standalone) version of Zipkin, run the following command:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/monitoring-tracing-zipkin-in-mem.yaml

    To install the ELK version of Zipkin (needs the ELK install above), run the following command:

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/monitoring-tracing-zipkin.yaml