Helm installation

The operator installation is managed by a Helm chart. To install, run:

  helm install flink-kubernetes-operator helm/flink-kubernetes-operator

Alternatively, to install the operator (and the Helm chart) into a specific namespace:

  helm install flink-kubernetes-operator helm/flink-kubernetes-operator --namespace flink --create-namespace

Note that in this case you will need to update the namespace in the examples accordingly, or add the default namespace to the watched namespaces.
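
For example, if you keep creating the example resources in the default namespace, a sketch of adding it to the watched namespaces during installation could look like this:

  helm install flink-kubernetes-operator helm/flink-kubernetes-operator --namespace flink --create-namespace --set "watchNamespaces={default}"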

Overriding configuration parameters during Helm install

Helm provides different ways to override the default installation parameters (contained in values.yaml) for the Helm chart.

To override single parameters you can use --set, for example:

  helm install --set image.repository=apache/flink-kubernetes-operator --set image.tag=1.7.0 flink-kubernetes-operator helm/flink-kubernetes-operator

You can also provide your custom values file by using the -f flag:

  helm install -f myvalues.yaml flink-kubernetes-operator helm/flink-kubernetes-operator
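
For illustration, a minimal myvalues.yaml overriding a few of the parameters listed below might look like this (the values shown are examples, not recommendations):

  # myvalues.yaml -- example overrides for the operator Helm chart
  image:
    repository: apache/flink-kubernetes-operator
    tag: 1.7.0
  watchNamespaces:
    - flink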

The configurable parameters of the Helm chart and their default values are detailed in the following table:

Parameters | Description | Default Value
--- | --- | ---
watchNamespaces | List of Kubernetes namespaces to watch for FlinkDeployment changes; empty means all namespaces. |
image.repository | The image repository of flink-kubernetes-operator. | ghcr.io/apache/flink-kubernetes-operator
image.pullPolicy | The image pull policy of flink-kubernetes-operator. | IfNotPresent
image.tag | The image tag of flink-kubernetes-operator. | latest
image.digest | The image digest of flink-kubernetes-operator. If set, it takes precedence and the image tag is ignored. |
replicas | Operator replica count. Must be 1 unless leader election is configured. | 1
strategy.type | Operator pod upgrade strategy. Must be Recreate unless leader election is configured. | Recreate
rbac.create | Whether to create RBAC resources for the watched namespaces. | true
rbac.nodesRule.create | Whether to add an RBAC rule for listing nodes, which is needed when the rest-service is exposed as NodePort type. | false
operatorPod.annotations | Custom annotations to be added to the operator pod (but not the deployment). |
operatorPod.labels | Custom labels to be added to the operator pod (but not the deployment). |
operatorPod.env | Custom env to be added to the operator pod. |
operatorPod.envFrom | Custom envFrom settings to be added to the operator pod. |
operatorPod.dnsPolicy | DNS policy to be used by the operator pod. |
operatorPod.dnsConfig | DNS configuration to be used by the operator pod. |
operatorPod.nodeSelector | Custom nodeSelector to be added to the operator pod. |
operatorPod.topologySpreadConstraints | Custom topologySpreadConstraints to be added to the operator pod. |
operatorPod.resources | Custom resources block to be added to the main container of the operator pod. |
operatorPod.webhook.resources | Custom resources block to be added to the flink-webhook container of the operator pod. |
operatorPod.tolerations | Custom tolerations to be added to the operator pod. |
operatorServiceAccount.create | Whether to create the operator service account for flink-kubernetes-operator. | true
operatorServiceAccount.annotations | The annotations of the operator service account. |
operatorServiceAccount.name | The name of the operator service account. | flink-operator
jobServiceAccount.create | Whether to create the job service account for the Flink jobmanager/taskmanager pods. | true
jobServiceAccount.annotations | The annotations of the job service account. | "helm.sh/resource-policy": keep
jobServiceAccount.name | The name of the job service account. | flink
operatorVolumeMounts.create | Whether to create operator volume mounts for flink-kubernetes-operator. | false
operatorVolumeMounts.data | List of mount paths of operator volume mounts. | [{name: flink-artifacts, mountPath: /opt/flink/artifacts}]
operatorVolumes.create | Whether to create operator volumes for flink-kubernetes-operator. | false
operatorVolumes.data | The ConfigMap of operator volumes. | [{name: flink-artifacts, hostPath: {path: /tmp/flink/artifacts, type: DirectoryOrCreate}}]
podSecurityContext | Defines privilege and access control settings for the operator pod (pod security context). | runAsUser: 9999, runAsGroup: 9999
operatorSecurityContext | Defines privilege and access control settings for the operator container (container security context). |
webhookSecurityContext | Defines privilege and access control settings for the webhook container (container security context). |
webhook.create | Whether to enable validating and mutating webhooks for flink-kubernetes-operator. | true
webhook.mutator.create | Enable or disable the mutating webhook; overrides webhook.create. |
webhook.validator.create | Enable or disable the validating webhook; overrides webhook.create. |
webhook.keystore | The ConfigMap of the webhook key store. | useDefaultPassword: true
defaultConfiguration.create | Whether to create the default configuration for flink-kubernetes-operator. | true
defaultConfiguration.append | Whether to append the provided configs to the configuration files. | true
defaultConfiguration.flink-conf.yaml | The default configuration of flink-conf.yaml. | kubernetes.operator.metrics.reporter.slf4j.factory.class: org.apache.flink.metrics.slf4j.Slf4jReporterFactory, kubernetes.operator.metrics.reporter.slf4j.interval: 5 MINUTE, kubernetes.operator.reconcile.interval: 15 s, kubernetes.operator.observer.progress-check.interval: 5 s
defaultConfiguration.log4j-operator.properties | The default configuration of log4j-operator.properties. |
defaultConfiguration.log4j-console.properties | The default configuration of log4j-console.properties. |
metrics.port | The metrics port on the container for the default configuration. |
imagePullSecrets | The image pull secrets of flink-kubernetes-operator. |
nameOverride | Overrides the name with the specified name. |
fullnameOverride | Overrides the fullname with the specified full name. |
jvmArgs.webhook | The JVM startup options for the webhook. |
jvmArgs.operator | The JVM startup options for the operator. |
operatorHealth.port | Operator health endpoint port to be used by the probes. | 8085
operatorHealth.livenessProbe | Liveness probe configuration for the operator using the health endpoint. Only time settings should be configured; the endpoint is set automatically based on the port. |
operatorHealth.startupProbe | Startup probe configuration for the operator using the health endpoint. Only time settings should be configured; the endpoint is set automatically based on the port. |
postStart | The postStart hook configuration for the main container. |
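
As noted in the table, replicas higher than 1 and a non-Recreate strategy.type require leader election. The following values override is only a sketch of such a setup; the kubernetes.operator.leader-election.* keys come from the operator's high-availability configuration and should be verified against the configuration reference of your operator version:

  # Example values override: run two operator replicas with leader election.
  replicas: 2
  defaultConfiguration:
    create: true
    append: true
    flink-conf.yaml: |+
      # Leader election settings (verify key names for your operator version)
      kubernetes.operator.leader-election.enabled: true
      kubernetes.operator.leader-election.lease-name: flink-operator-lease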

For more information check the Helm documentation.

Operator webhooks

In order to use the webhooks in the operator, you must install the cert-manager on the Kubernetes cluster:

  kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml

The webhooks can be disabled during helm install by passing the --set webhook.create=false parameter or editing the values.yaml directly.
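
For example:

  helm install flink-kubernetes-operator helm/flink-kubernetes-operator --set webhook.create=false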

Watching only specific namespaces

The operator supports watching a specific list of namespaces for FlinkDeployment resources. You can enable it by setting the --set watchNamespaces={flink-test} parameter. When this is enabled, role-based access control is only created for these namespaces for the operator and the jobmanagers; otherwise it defaults to cluster scope.
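
To watch more than one namespace you can pass a list, for example (namespace names are illustrative):

  helm install flink-kubernetes-operator helm/flink-kubernetes-operator --set "watchNamespaces={flink-test,flink-prod}"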

Note: When working with the webhook in a specified namespace, users should pay attention to the definition of namespaceSelector.matchExpressions in webhook.yaml. Currently, the default webhook implementation relies on the kubernetes.io/metadata.name label to filter validation requests, so that only requests from the specified namespaces are processed. The kubernetes.io/metadata.name label has been attached automatically since Kubernetes 1.21.1.

As a result, users who run the Flink Kubernetes operator on an older Kubernetes version should label the specified namespace themselves before installing the operator with Helm:

  kubectl label namespace <target namespace name> kubernetes.io/metadata.name=<target namespace name>

In addition, users can define their own namespaceSelector to filter requests according to customized requirements.

For example, if users label their namespace with the key-value pair {customized_namespace_key: <target namespace name>}, the corresponding namespaceSelector that only accepts requests from this namespace could be:

  namespaceSelector:
    matchExpressions:
      - key: customized_namespace_key
        operator: In
        values: [{{- range .Values.watchNamespaces }}{{ . | quote }},{{- end}}]

Check out this document for more details.

Working with Argo CD

If you are using Argo CD to manage the operator, the simplest example could look like this:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: flink-kubernetes-operator
  spec:
    source:
      repoURL: https://github.com/apache/flink-kubernetes-operator
      targetRevision: main
      path: helm/flink-kubernetes-operator
  ...
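
Helm values can also be supplied through the Application itself. A sketch using Argo CD's spec.source.helm.parameters field (check the Application spec of your Argo CD version) could be:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: flink-kubernetes-operator
  spec:
    source:
      repoURL: https://github.com/apache/flink-kubernetes-operator
      targetRevision: main
      path: helm/flink-kubernetes-operator
      helm:
        parameters:
          - name: webhook.create
            value: "false"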

Check out the Argo CD documentation for more details.

Advanced customization techniques

The Helm chart does not aim to provide configuration options for all the possible deployment scenarios of the Operator. There are use cases for injecting common tools and/or sidecars in most enterprise environments that cannot be covered by public Helm charts.

Fortunately, post rendering in Helm gives you the ability to manually manipulate manifests before they are installed on a Kubernetes cluster. This allows users to use tools like kustomize to apply configuration changes without the need to fork public charts.

The GitHub repository for the Operator contains a simple example of how to augment the Operator Deployment with a fluent-bit sidecar container and adjust container resources using kustomize.
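
If you want to write a post-renderer of your own, the contract is simple: Helm pipes the fully rendered manifests to the executable on stdin and installs whatever it prints to stdout. A minimal kustomize-based wrapper, sketched here with illustrative file names rather than the exact layout of the repository example (and assuming the kustomize CLI is available; kubectl kustomize works as well), could look like this:

  #!/bin/bash
  # Run from the directory that contains kustomization.yaml.
  cd "$(dirname "$0")"
  # Capture the rendered chart manifests that Helm pipes in on stdin.
  cat > all.yaml
  # Emit the patched manifests on stdout for Helm to install.
  kustomize build .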

The example demonstrates that we can still use a values.yaml file to override the default Helm values, for example to change the log configuration:

  defaultConfiguration:
    ...
    log4j-operator.properties: |+
      rootLogger.appenderRef.file.ref = LogFile
      appender.file.name = LogFile
      appender.file.type = File
      appender.file.append = false
      appender.file.fileName = ${sys:log.file}
      appender.file.layout.type = PatternLayout
      appender.file.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
  jvmArgs:
    webhook: "-Dlog.file=/opt/flink/log/webhook.log -Xms256m -Xmx256m"
    operator: "-Dlog.file=/opt/flink/log/operator.log -Xms2048m -Xmx2048m"

But we cannot inject our fluent-bit sidecar, for example, unless we patch the deployment using kustomize:

  ################################################################################
  # Licensed to the Apache Software Foundation (ASF) under one
  # or more contributor license agreements. See the NOTICE file
  # distributed with this work for additional information
  # regarding copyright ownership. The ASF licenses this file
  # to you under the Apache License, Version 2.0 (the
  # "License"); you may not use this file except in compliance
  # with the License. You may obtain a copy of the License at
  #
  # http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  ################################################################################
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: not-important
  spec:
    template:
      spec:
        containers:
          - name: flink-kubernetes-operator
            volumeMounts:
              - name: flink-log
                mountPath: /opt/flink/log
            resources:
              requests:
                memory: "2.5Gi"
                cpu: "1000m"
              limits:
                memory: "2.5Gi"
                cpu: "2000m"
          - name: flink-webhook
            volumeMounts:
              - name: flink-log
                mountPath: /opt/flink/log
            resources:
              requests:
                memory: "0.5Gi"
                cpu: "200m"
              limits:
                memory: "0.5Gi"
                cpu: "500m"
          - name: fluentbit
            image: fluent/fluent-bit:1.8.12
            command: [ 'sh','-c','/fluent-bit/bin/fluent-bit -i tail -p path=/opt/flink/log/*.log -p multiline.parser=java -o stdout' ]
            volumeMounts:
              - name: flink-log
                mountPath: /opt/flink/log
        volumes:
          - name: flink-log
            emptyDir: { }
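
A kustomization.yaml tying the rendered chart output and the patch above together could look roughly like this; the file names are illustrative, and the actual layout lives under examples/kustomize in the operator repository:

  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  resources:
    - all.yaml                      # rendered chart output written by the post-renderer script
  patches:
    - path: deployment-patch.yaml   # the Deployment patch shown above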

You can try out the example using the following command:

  helm install flink-kubernetes-operator helm/flink-kubernetes-operator -f examples/kustomize/values.yaml --post-renderer examples/kustomize/render
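
Once the operator is running, you can inspect the sidecar output, for example with the following command (the Deployment name assumes the default release name used above):

  kubectl logs deployment/flink-kubernetes-operator -c fluentbit -f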

By examining the sidecar output you should see that the logs from both containers are being processed from the shared folder:

  [2022/04/06 10:04:36] [ info] [input:tail:tail.0] inotify_fs_add(): inode=3812411 watch_fd=1 name=/opt/flink/log/operator.log
  [2022/04/06 10:04:36] [ info] [input:tail:tail.0] inotify_fs_add(): inode=3812412 watch_fd=2 name=/opt/flink/log/webhook.log

Check out the kustomize repo for more advanced examples.

Please note that the post-render mechanism will always override the Helm template values.