Apache Kafka Sink

This page shows how to install and configure Apache Kafka Sink.

Prerequisites

  • A Knative Eventing installation.

Installation

  1. Install the Kafka controller:

     kubectl apply --filename https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/v0.21.0/eventing-kafka-controller.yaml

  2. Install the Kafka Sink data plane:

     kubectl apply --filename https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/v0.21.0/eventing-kafka-sink.yaml

  3. Verify that kafka-controller and kafka-sink-receiver are running:

     kubectl get deployments.apps -n knative-eventing

     Example output:

     NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
     eventing-controller   1/1     1            1           10s
     eventing-webhook      1/1     1            1           9s
     kafka-controller      1/1     1            1           3s
     kafka-sink-receiver   1/1     1            1           5s
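
You can also confirm that the controller manifest registered the KafkaSink custom resource definition. A simple filter is enough for this check, so no exact CRD name needs to be assumed:

  kubectl get crds | grep -i kafkasink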

Kafka Sink

A KafkaSink object looks like this:

  apiVersion: eventing.knative.dev/v1alpha1
  kind: KafkaSink
  metadata:
    name: my-kafka-sink
    namespace: default
  spec:
    topic: mytopic
    bootstrapServers:
      - my-cluster-kafka-bootstrap.kafka:9092
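
A KafkaSink is addressable: once it is Ready, its HTTP address is published in status.address.url, and CloudEvents POSTed to that address are written to the configured topic. As a minimal sketch, assuming the KafkaSink above and arbitrary illustration values for the event attributes (the address is usually only reachable from inside the cluster):

  # Read the KafkaSink address from its status (populated once the sink is Ready).
  kubectl get kafkasink my-kafka-sink -n default -o jsonpath='{.status.address.url}'

  # From inside the cluster, POST a CloudEvent to that address; replace <sink_url>
  # with the value printed above. The event is produced to the "mytopic" topic.
  curl -v "<sink_url>" \
    -X POST \
    -H "ce-specversion: 1.0" \
    -H "ce-id: 1234" \
    -H "ce-type: dev.example.test" \
    -H "ce-source: dev.example/curl" \
    -H "content-type: application/json" \
    -d '{"message": "Hello Kafka Sink"}'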

Security

Apache Kafka supports several security features; Knative supports the authentication and encryption configurations described in the following sections.

To enable security features, reference a Secret in the KafkaSink spec:

  apiVersion: eventing.knative.dev/v1alpha1
  kind: KafkaSink
  metadata:
    name: my-kafka-sink
    namespace: default
  spec:
    topic: mytopic
    bootstrapServers:
      - my-cluster-kafka-bootstrap.kafka:9092
    auth:
      secret:
        ref:
          name: my_secret

The Secret my_secret must exist in the same namespace as the KafkaSink, in this case: default.

Note: Certificates and keys must be in PEM format.

Authentication using SASL

Knative supports the following SASL mechanisms:

  • PLAIN
  • SCRAM-SHA-256
  • SCRAM-SHA-512

To use a specific SASL mechanism, replace <sasl_mechanism> with the mechanism of your choice.

Authentication using SASL without encryption

  kubectl create secret --namespace <namespace> generic <my_secret> \
    --from-literal=protocol=SASL_PLAINTEXT \
    --from-literal=sasl.mechanism=<sasl_mechanism> \
    --from-literal=user=<my_user> \
    --from-literal=password=<my_password>

Authentication using SASL and encryption using SSL

  kubectl create secret --namespace <namespace> generic <my_secret> \
    --from-literal=protocol=SASL_SSL \
    --from-literal=sasl.mechanism=<sasl_mechanism> \
    --from-file=ca.crt=caroot.pem \
    --from-literal=user=<my_user> \
    --from-literal=password=<my_password>

Encryption using SSL without client authentication

  kubectl create secret --namespace <namespace> generic <my_secret> \
    --from-literal=protocol=SSL \
    --from-file=ca.crt=<my_caroot.pem_file_path> \
    --from-literal=user.skip=true

Authentication and encryption using SSL

  kubectl create secret --namespace <namespace> generic <my_secret> \
    --from-literal=protocol=SSL \
    --from-file=ca.crt=<my_caroot.pem_file_path> \
    --from-file=user.crt=<my_cert.pem_file_path> \
    --from-file=user.key=<my_key.pem_file_path>

NOTE: ca.crt can be omitted to fall back to the system's root CA set.
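
Because certificates and keys must be in PEM format, you can sanity check a Secret by decoding one of its entries and looking for the PEM header. This is only a verification sketch, reusing the Secret and key names from the examples above:

  # Decode the ca.crt entry and check that it starts with a PEM header.
  kubectl get secret <my_secret> --namespace <namespace> \
    -o jsonpath='{.data.ca\.crt}' | base64 --decode | head -n 1

  # Expected output for a PEM-encoded certificate:
  # -----BEGIN CERTIFICATE-----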

Kafka Producer configurations

A Kafka Producer is the component responsible for sending events to the Apache Kafka cluster. Knative exposes all available Kafka Producer configurations that can be modified to suit your workloads.

You can change these configurations by modifying the config-kafka-sink-data-plane config map in the knative-eventing namespace.

The settings available in this config map are documented on the Apache Kafka website; see, in particular, Producer configurations.
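
As a sketch of how such a change might look, first open the config map for editing:

  kubectl edit configmap config-kafka-sink-data-plane -n knative-eventing

Then adjust standard Apache Kafka producer properties. The fragment below assumes the producer settings live under a config-kafka-sink-producer.properties data key; confirm the key names actually present in the config map shipped with your installation before editing:

  data:
    config-kafka-sink-producer.properties: |
      # Standard Apache Kafka producer settings; see the Producer
      # configurations documentation for the full list.
      acks=all
      linger.ms=5
      compression.type=snappy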

Enable debug logging for data plane components

To enable debug logging for data plane components, change the logging level to DEBUG in the kafka-config-logging config map.

  1. Apply the following kafka-config-logging config map:

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: kafka-config-logging
       namespace: knative-eventing
     data:
       config.xml: |
         <configuration>
           <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
             <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
           </appender>
           <root level="DEBUG">
             <appender-ref ref="jsonConsoleAppender"/>
           </root>
         </configuration>

  2. Restart the kafka-sink-receiver:

     kubectl rollout restart deployment -n knative-eventing kafka-sink-receiver
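
After the receiver restarts, you can verify that debug logging is active by tailing its logs; with the Logstash encoder configured above, the entries are JSON formatted:

  kubectl logs -n knative-eventing deployment/kafka-sink-receiver -f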
