Deploying Debezium on OpenShift
Prerequisites
To keep containers separated from other workloads on the cluster, create a dedicated project for Debezium. In the remainder of this document, the debezium-example namespace will be used:
$ oc new-project debezium-example
Deploying Strimzi Operator
For the Debezium deployment we will use the Strimzi project, which manages the Kafka deployment on OpenShift clusters. The simplest way to install Strimzi is via the Strimzi operator from OperatorHub. Navigate to the “OperatorHub” tab in the OpenShift UI, select “Strimzi”, and click the “install” button.

If you prefer command line tools, you can install the Strimzi operator this way as well:
$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-strimzi-kafka-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: strimzi-kafka-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
EOF
Creating Secrets for the Database
Later on, when deploying the Debezium Kafka connector, we will need to provide a username and password for the connector to be able to connect to the database. For security reasons, it’s a good practice not to provide the credentials directly, but to keep them in a separate, secured place. OpenShift provides the Secret object for this purpose. Besides creating the Secret object itself, we also have to create a role and a role binding so that Kafka Connect can access the credentials.
Let’s create Secret object first:
$ cat << EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: debezium-secret
  namespace: debezium-example
type: Opaque
data:
  username: ZGViZXppdW0=
  password: ZGJ6
EOF
The username and password contain base64-encoded credentials (debezium/dbz) for connecting to the MySQL database, which we will deploy later.
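The encoded values can be produced locally with the standard base64 tool, for example:

```shell
# Encode the credentials for use in the Secret's data section
printf '%s' debezium | base64   # username -> ZGViZXppdW0=
printf '%s' dbz | base64        # password -> ZGJ6

# Decode a value back to verify it
printf '%s' ZGJ6 | base64 -d    # -> dbz
```

Note that `printf '%s'` (rather than `echo`) avoids accidentally encoding a trailing newline.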
Now, we can create a role which refers to the secret created in the previous step:
$ cat << EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: connector-configuration-role
  namespace: debezium-example
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["debezium-secret"]
  verbs: ["get"]
EOF
We also have to bind this role to the Kafka Connect cluster service account so that Kafka Connect can access the secret:
$ cat << EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: connector-configuration-role-binding
  namespace: debezium-example
subjects:
- kind: ServiceAccount
  name: debezium-connect-cluster-connect
  namespace: debezium-example
roleRef:
  kind: Role
  name: connector-configuration-role
  apiGroup: rbac.authorization.k8s.io
EOF
The service account will be created by Strimzi once we deploy Kafka Connect. The name of the service account takes the form $KafkaConnectName-connect. Later on, we will create a Kafka Connect cluster named debezium-connect-cluster, and therefore we used debezium-connect-cluster-connect here as subjects.name.
Deploying Apache Kafka
Next, deploy a (single-node) Kafka cluster:
$ cat << EOF | oc create -n debezium-example -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: debezium-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF
Wait until it’s ready:
$ oc wait kafka/debezium-cluster --for=condition=Ready --timeout=300s
Deploying a Data Source
MySQL will be used as the data source in the following. Besides running a pod with MySQL, an appropriate service pointing to the pod with the database itself is needed. It can be created, e.g., as follows:
$ cat << EOF | oc create -f -
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: quay.io/debezium/example-mysql:2.0
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: debezium
        - name: MYSQL_USER
          value: mysqluser
        - name: MYSQL_PASSWORD
          value: mysqlpw
        ports:
        - containerPort: 3306
          name: mysql
EOF
Deploying a Debezium Connector
To deploy a Debezium connector, you need to deploy a Kafka Connect cluster with the required connector plug-in(s), before instantiating the actual connector itself. As the first step, a container image for Kafka Connect with the plug-in has to be created. If you already have a container image built and available in the registry, you can skip this step. In this document, the MySQL connector will be used as an example.
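If you want to build such an image yourself instead of delegating the build to Strimzi, a minimal sketch of a suitable Dockerfile follows; the base image tag and the plug-in directory are assumptions and may need adjusting for your Kafka version and environment:

```dockerfile
# Sketch: Kafka Connect image with the Debezium MySQL connector plug-in.
# Replace {debezium-version} with a concrete version before building.
FROM quay.io/strimzi/kafka:latest-kafka-3.1.0

USER root:root
# Download and unpack the connector archive into a dedicated plug-in directory
RUN mkdir -p /opt/kafka/plugins/debezium && \
    curl -fsSL https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/{debezium-version}/debezium-connector-mysql-{debezium-version}-plugin.tar.gz \
    | tar xz -C /opt/kafka/plugins/debezium
USER 1001
```

The resulting image would then be pushed to a registry reachable from the cluster and referenced from the KafkaConnect resource.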
Creating Kafka Connect Cluster
Again, we will use Strimzi for creating the Kafka Connect cluster. Strimzi can also be used to build and push the required container image for us. In fact, both tasks can be merged together, and the instructions for building the container image can be provided directly within the KafkaConnect object specification:
$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  version: 3.1.0
  replicas: 1
  bootstrapServers: debezium-cluster-kafka-bootstrap:9092
  config:
    config.providers: secrets
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
  build:
    output:
      type: docker
      image: image-registry.openshift-image-registry.svc:5000/debezium-example/debezium-connect-mysql:latest
    plugins:
      - name: debezium-mysql-connector
        artifacts:
          - type: tgz
            url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/{debezium-version}/debezium-connector-mysql-{debezium-version}-plugin.tar.gz
EOF
Here we took advantage of the OpenShift built-in registry, which is already running as a service on the OpenShift cluster.
For simplicity, we’ve skipped the checksum validation for the downloaded artifact. If you want to be sure the artifact was correctly downloaded, specify its checksum via the sha512sum attribute of the artifact.
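Strimzi build artifacts optionally accept a SHA-512 checksum that is verified after download. A sketch of the plug-in definition with the checksum added; the checksum value is a placeholder you would compute yourself (e.g. with the sha512sum tool against the downloaded archive):

```yaml
plugins:
  - name: debezium-mysql-connector
    artifacts:
      - type: tgz
        url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/{debezium-version}/debezium-connector-mysql-{debezium-version}-plugin.tar.gz
        # Placeholder: put the real SHA-512 digest of the archive here
        sha512sum: <sha-512 checksum of the archive>
```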
If you already have a suitable container image either in the local or a remote registry (such as quay.io or DockerHub), you can use this simplified version:
$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  version: 3.1.0
  image: 10.110.154.103/debezium-connect-mysql:latest
  replicas: 1
  bootstrapServers: debezium-cluster-kafka-bootstrap:9092
  config:
    config.providers: secrets
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
EOF
Note also that we have configured Kafka Connect to use the Strimzi secret provider. Strimzi will create a service account for this Kafka Connect cluster (which we have already bound to the appropriate role), and the provider allows Kafka Connect to access our Secret object.
Before creating a Debezium connector, check that all pods are already running:
$ oc get pods -n debezium-example
Creating a Debezium Connector
To create a Debezium connector, you just need to create a KafkaConnector with the appropriate configuration, MySQL in this case:
$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: debezium-connector-mysql
  labels:
    strimzi.io/cluster: debezium-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    tasks.max: 1
    database.hostname: mysql
    database.port: 3306
    database.user: ${secrets:debezium-example/debezium-secret:username}
    database.password: ${secrets:debezium-example/debezium-secret:password}
    database.server.id: 184054
    topic.prefix: mysql
    database.include.list: inventory
    schema.history.internal.kafka.bootstrap.servers: debezium-cluster-kafka-bootstrap:9092
    schema.history.internal.kafka.topic: schema-changes.inventory
EOF
Note that we didn’t use a plain-text username and password in the connector configuration, but referred to the Secret object we created previously.
Verifying the Deployment
To verify that everything works fine, you can, e.g., start watching the mysql.inventory.customers Kafka topic:
$ oc run -n debezium-example -it --rm --image=quay.io/debezium/tooling:1.2 --restart=Never watcher -- kcat -b debezium-cluster-kafka-bootstrap:9092 -C -o beginning -t mysql.inventory.customers
Connect to the MySQL database:
$ oc run -n debezium-example -it --rm --image=mysql:8.0 --restart=Never --env MYSQL_ROOT_PASSWORD=debezium mysqlterm -- mysql -hmysql -P3306 -uroot -pdebezium
Make some changes in the customers table:
sql> update customers set first_name="Sally Marie" where id=1001;
You should now be able to observe the change events on the Kafka topic:
{
  ...
  "payload": {
    "before": {
      "id": 1001,
      "first_name": "Sally",
      "last_name": "Thomas",
      "email": "sally.thomas@acme.com"
    },
    "after": {
      "id": 1001,
      "first_name": "Sally Marie",
      "last_name": "Thomas",
      "email": "sally.thomas@acme.com"
    },
    "source": {
      "version": "{debezium-version}",
      "connector": "mysql",
      "name": "mysql",
      "ts_ms": 1646300467000,
      "snapshot": "false",
      "db": "inventory",
      "sequence": null,
      "table": "customers",
      "server_id": 223344,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 401,
      "row": 0,
      "thread": null,
      "query": null
    },
    "op": "u",
    "ts_ms": 1646300467746,
    "transaction": null
  }
}
If you have any questions or requests related to running Debezium on Kubernetes or OpenShift, then please let us know in our user group or in the Debezium developer’s chat.
