Upgrading projects for newer Operator SDK versions

OKD 4.8 supports Operator SDK v1.8.0. If you already have the v1.3.0 CLI installed on your workstation, you can upgrade the CLI to v1.8.0 by installing the latest version.

However, to ensure that your existing Operator projects remain compatible with Operator SDK v1.8.0, you must perform upgrade steps for the breaking changes introduced since v1.3.0. Perform the upgrade steps manually in any Operator project that was previously created or maintained with v1.3.0.

Upgrading projects for Operator SDK v1.8.0

Perform the following upgrade steps to make an existing Operator project compatible with v1.8.0.

Prerequisites

  • Operator SDK v1.8.0 installed

  • Operator project that was previously created or maintained with Operator SDK v1.3.0

Procedure

  1. Make the following changes to your PROJECT file:

    1. Update the PROJECT file plugins object to use manifests and scorecard objects.

      The manifests and scorecard plug-ins, which generate Operator Lifecycle Manager (OLM) and scorecard manifests, now have plug-in objects for running create subcommands that generate the related files.

      • For Go-based Operator projects, an existing Go-based plug-in configuration object is already present. While the old configuration is still supported, these new objects will be useful in the future as configuration options are added to their respective plug-ins:

        Old configuration

        version: 3-alpha
        ...
        plugins:
          go.sdk.operatorframework.io/v2-alpha: {}

        New configuration

        version: 3-alpha
        ...
        plugins:
          manifests.sdk.operatorframework.io/v2: {}
          scorecard.sdk.operatorframework.io/v2: {}
      • Optional: For Ansible- and Helm-based Operator projects, the plug-in configuration object previously did not exist. While you are not required to add the plug-in configuration objects, these new objects will be useful in the future as configuration options are added to their respective plug-ins:

        version: 3-alpha
        ...
        plugins:
          manifests.sdk.operatorframework.io/v2: {}
          scorecard.sdk.operatorframework.io/v2: {}
    2. The PROJECT config version 3-alpha must be upgraded to 3. The version key in your PROJECT file represents the PROJECT config version:

      Old PROJECT file

      version: 3-alpha
      resources:
      - crdVersion: v1
      ...

      Version 3-alpha has been stabilized as version 3 and contains a set of config fields sufficient to fully describe a project. While this change is not technically breaking, because the spec at that version was alpha, the 3-alpha version was used by default in operator-sdk commands, so the change is treated as breaking and a convenient upgrade path is provided.

      1. Run the alpha config-3alpha-to-3 command to convert most of your PROJECT file from version 3-alpha to 3:

        $ operator-sdk alpha config-3alpha-to-3

        Example output

        Your PROJECT config file has been converted from version 3-alpha to 3. Please make sure all config data is correct.

        The command also outputs comments with directions where automatic conversion is not possible.

      2. Verify the change:

        New PROJECT file

        version: "3"
        resources:
        - api:
            crdVersion: v1
        ...
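As a quick sanity check after the conversion, you can confirm that the version key was updated. The following sketch creates a throwaway PROJECT file with the expected post-conversion contents; the path and file contents here are illustrative, not taken from a real project:

```shell
# Create a sample post-conversion PROJECT file in a temp dir (illustrative only).
mkdir -p /tmp/sample-project && cd /tmp/sample-project
cat > PROJECT <<'EOF'
version: "3"
resources:
- api:
    crdVersion: v1
EOF
# The version key should now read "3" rather than 3-alpha.
grep '^version' PROJECT
```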
  2. Make the following changes to your config/manager/manager.yaml file:

    1. For Ansible- and Helm-based Operator projects, add liveness and readiness probes.

      New projects built with the Operator SDK have the probes configured by default. The endpoints /healthz and /readyz are now available in the provided base image. You can update your existing projects to use the probes by updating the Dockerfile to use the latest base image and then adding the following to the manager container in the config/manager/manager.yaml file:

      Configuration for Ansible-based Operator projects

      livenessProbe:
        httpGet:
          path: /healthz
          port: 6789
        initialDelaySeconds: 15
        periodSeconds: 20
      readinessProbe:
        httpGet:
          path: /readyz
          port: 6789
        initialDelaySeconds: 5
        periodSeconds: 10

      Configuration for Helm-based Operator projects

      livenessProbe:
        httpGet:
          path: /healthz
          port: 8081
        initialDelaySeconds: 15
        periodSeconds: 20
      readinessProbe:
        httpGet:
          path: /readyz
          port: 8081
        initialDelaySeconds: 5
        periodSeconds: 10
    2. For Ansible- and Helm-based Operator projects, add security contexts to your manager’s deployment.

      In the config/manager/manager.yaml file, add the following security contexts:

      config/manager/manager.yaml file

      spec:
        ...
        template:
          ...
          spec:
            securityContext:
              runAsNonRoot: true
            containers:
            - name: manager
              securityContext:
                allowPrivilegeEscalation: false
  3. Make the following changes to your Makefile:

    1. For Ansible- and Helm-based Operator projects, update the helm-operator and ansible-operator URLs in the Makefile:

      • For Ansible-based Operator projects, change:

        https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/ansible-operator-v1.3.0-$(ARCHOPER)-$(OSOPER)

        to:

        https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/ansible-operator_$(OS)_$(ARCH)
      • For Helm-based Operator projects, change:

        https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/helm-operator-v1.3.0-$(ARCHOPER)-$(OSOPER)

        to:

        https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/helm-operator_$(OS)_$(ARCH)
    2. For Ansible- and Helm-based Operator projects, update the helm-operator, ansible-operator, and kustomize rules in the Makefile. The updated rules use a global binary if one is found and otherwise download a local binary:

      Makefile diff for Ansible-based Operator projects

       PATH := $(PATH):$(PWD)/bin
       SHELL := env PATH=$(PATH) /bin/sh
      -OS := $(shell uname -s | tr '[:upper:]' '[:lower:]')
      -ARCH := $(shell uname -m | sed 's/x86_64/amd64/')
      +OS = $(shell uname -s | tr '[:upper:]' '[:lower:]')
      +ARCH = $(shell uname -m | sed 's/x86_64/amd64/')
      +OSOPER = $(shell uname -s | tr '[:upper:]' '[:lower:]' | sed 's/darwin/apple-darwin/' | sed 's/linux/linux-gnu/')
      +ARCHOPER = $(shell uname -m )

      -# Download kustomize locally if necessary, preferring the $(pwd)/bin path over global if both exist.
      -.PHONY: kustomize
      -KUSTOMIZE = $(shell pwd)/bin/kustomize
       kustomize:
      -ifeq (,$(wildcard $(KUSTOMIZE)))
      -ifeq (,$(shell which kustomize 2>/dev/null))
      +ifeq (, $(shell which kustomize 2>/dev/null))
           @{ \
           set -e ;\
      -    mkdir -p $(dir $(KUSTOMIZE)) ;\
      -    curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | \
      -    tar xzf - -C bin/ ;\
      +    mkdir -p bin ;\
      +    curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | tar xzf - -C bin/ ;\
           }
      +KUSTOMIZE=$(realpath ./bin/kustomize)
       else
      -KUSTOMIZE = $(shell which kustomize)
      -endif
      +KUSTOMIZE=$(shell which kustomize)
       endif

      -# Download ansible-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist.
      -.PHONY: ansible-operator
      -ANSIBLE_OPERATOR = $(shell pwd)/bin/ansible-operator
       ansible-operator:
      -ifeq (,$(wildcard $(ANSIBLE_OPERATOR)))
      -ifeq (,$(shell which ansible-operator 2>/dev/null))
      +ifeq (, $(shell which ansible-operator 2>/dev/null))
           @{ \
           set -e ;\
      -    mkdir -p $(dir $(ANSIBLE_OPERATOR)) ;\
      -    curl -sSLo $(ANSIBLE_OPERATOR) https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/ansible-operator_$(OS)_$(ARCH) ;\
      -    chmod +x $(ANSIBLE_OPERATOR) ;\
      +    mkdir -p bin ;\
      +    curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/ansible-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ;\
      +    mv ansible-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ./bin/ansible-operator ;\
      +    chmod +x ./bin/ansible-operator ;\
           }
      +ANSIBLE_OPERATOR=$(realpath ./bin/ansible-operator)
       else
      -ANSIBLE_OPERATOR = $(shell which ansible-operator)
      -endif
      +ANSIBLE_OPERATOR=$(shell which ansible-operator)
       endif

      Makefile diff for Helm-based Operator projects

       PATH := $(PATH):$(PWD)/bin
       SHELL := env PATH=$(PATH) /bin/sh
      -OS := $(shell uname -s | tr '[:upper:]' '[:lower:]')
      -ARCH := $(shell uname -m | sed 's/x86_64/amd64/')
      +OS = $(shell uname -s | tr '[:upper:]' '[:lower:]')
      +ARCH = $(shell uname -m | sed 's/x86_64/amd64/')
      +OSOPER = $(shell uname -s | tr '[:upper:]' '[:lower:]' | sed 's/darwin/apple-darwin/' | sed 's/linux/linux-gnu/')
      +ARCHOPER = $(shell uname -m )

      -# Download kustomize locally if necessary, preferring the $(pwd)/bin path over global if both exist.
      -.PHONY: kustomize
      -KUSTOMIZE = $(shell pwd)/bin/kustomize
       kustomize:
      -ifeq (,$(wildcard $(KUSTOMIZE)))
      -ifeq (,$(shell which kustomize 2>/dev/null))
      +ifeq (, $(shell which kustomize 2>/dev/null))
           @{ \
           set -e ;\
      -    mkdir -p $(dir $(KUSTOMIZE)) ;\
      -    curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | \
      -    tar xzf - -C bin/ ;\
      +    mkdir -p bin ;\
      +    curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | tar xzf - -C bin/ ;\
           }
      +KUSTOMIZE=$(realpath ./bin/kustomize)
       else
      -KUSTOMIZE = $(shell which kustomize)
      -endif
      +KUSTOMIZE=$(shell which kustomize)
       endif

      -# Download helm-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist.
      -.PHONY: helm-operator
      -HELM_OPERATOR = $(shell pwd)/bin/helm-operator
       helm-operator:
      -ifeq (,$(wildcard $(HELM_OPERATOR)))
      -ifeq (,$(shell which helm-operator 2>/dev/null))
      +ifeq (, $(shell which helm-operator 2>/dev/null))
           @{ \
           set -e ;\
      -    mkdir -p $(dir $(HELM_OPERATOR)) ;\
      -    curl -sSLo $(HELM_OPERATOR) https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/helm-operator_$(OS)_$(ARCH) ;\
      -    chmod +x $(HELM_OPERATOR) ;\
      +    mkdir -p bin ;\
      +    curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/helm-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ;\
      +    mv helm-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ./bin/helm-operator ;\
      +    chmod +x ./bin/helm-operator ;\
           }
      +HELM_OPERATOR=$(realpath ./bin/helm-operator)
       else
      -HELM_OPERATOR = $(shell which helm-operator)
      -endif
      +HELM_OPERATOR=$(shell which helm-operator)
       endif
    3. Move the positional directory argument . in the make target for docker-build.

      The directory argument . in the docker-build target was moved to the last positional argument to align with podman CLI expectations, which makes substitution cleaner:

      Old target

      docker-build:
          docker build . -t ${IMG}

      New target

      docker-build:
          docker build -t ${IMG} .

      You can make this change by running the following command:

      $ sed -i 's/docker build . -t ${IMG}/docker build -t ${IMG} ./' $(git grep -l 'docker.*build \. ')
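If you want to confirm what the substitution does before running it across your project, the following throwaway example applies the same sed expression to a sample Makefile (the file path and contents here are illustrative; recipe indentation is shown with spaces for brevity):

```shell
# Write a sample Makefile containing the old target (illustrative file only).
cat > /tmp/Makefile.sample <<'EOF'
docker-build:
    docker build . -t ${IMG}
EOF
# Apply the same substitution as in the documented command.
sed -i 's/docker build . -t ${IMG}/docker build -t ${IMG} ./' /tmp/Makefile.sample
cat /tmp/Makefile.sample
```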
    4. For Ansible- and Helm-based Operator projects, add a help target to the Makefile.

      Ansible- and Helm-based projects now provide a help target in the Makefile by default, similar to a --help flag. You can manually add this target to your Makefile by adding the following lines:

      help target

      ##@ General

      # The help target prints out all targets with their descriptions organized
      # beneath their categories. The categories are represented by '##@' and the
      # target descriptions by '##'. The awk command is responsible for reading the
      # entire set of makefiles included in this invocation, looking for lines of the
      # file as xyz: ## something, and then pretty-formatting the target and help. Then,
      # if there's a line with ##@ something, that gets pretty-printed as a category.
      # More info on the usage of ANSI control characters for terminal formatting:
      # https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters
      # More info on the awk command:
      # http://linuxcommand.org/lc3_adv_awk.php
      help: ## Display this help.
          @awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
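To see how the awk program in this target parses annotated rules, you can run a simplified version of it directly against a small sample makefile (the file below is illustrative, and the ANSI color codes are omitted):

```shell
# Sample makefile with '##@' category and '##' description annotations.
cat > /tmp/Makefile.help <<'EOF'
##@ General
help: ## Display this help.
build: ## Build manager binary.
EOF
# Same field separator and match rules as the help target, minus the colors.
awk 'BEGIN {FS = ":.*##"} /^[a-zA-Z_0-9-]+:.*?##/ { printf "%-15s%s\n", $1, $2 } /^##@/ { print substr($0, 5) }' /tmp/Makefile.help
```

Each `target: ## description` line is split on `:.*##`, so `$1` is the target name and `$2` is the description; `##@` lines become category headings.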
    5. Add opm and catalog-build targets. You can use these targets to create your own catalogs for your Operator or add your Operator bundles to an existing catalog:

      1. Add the targets to your Makefile by adding the following lines:

        opm and catalog-build targets

        .PHONY: opm
        OPM = ./bin/opm
        opm:
        ifeq (,$(wildcard $(OPM)))
        ifeq (,$(shell which opm 2>/dev/null))
            @{ \
            set -e ;\
            mkdir -p $(dir $(OPM)) ;\
            curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.15.1/$(OS)-$(ARCH)-opm ;\
            chmod +x $(OPM) ;\
            }
        else
        OPM = $(shell which opm)
        endif
        endif

        BUNDLE_IMGS ?= $(BUNDLE_IMG)
        CATALOG_IMG ?= $(IMAGE_TAG_BASE)-catalog:v$(VERSION)

        ifneq ($(origin CATALOG_BASE_IMG), undefined)
        FROM_INDEX_OPT := --from-index $(CATALOG_BASE_IMG)
        endif

        .PHONY: catalog-build
        catalog-build: opm
            $(OPM) index add --container-tool docker --mode semver --tag $(CATALOG_IMG) --bundles $(BUNDLE_IMGS) $(FROM_INDEX_OPT)

        .PHONY: catalog-push
        catalog-push: ## Push the catalog image.
            $(MAKE) docker-push IMG=$(CATALOG_IMG)
      2. If you are updating a Go-based Operator project, also add the following Makefile variables:

        Makefile variables

        OS = $(shell go env GOOS)
        ARCH = $(shell go env GOARCH)
    6. For Go-based Operator projects, set the SHELL variable in your Makefile to the system bash binary.

      Importing the setup-envtest.sh script requires bash, so the SHELL variable must be set to bash with error options:

      Makefile diff

       else
       GOBIN=$(shell go env GOBIN)
       endif

      +# Setting SHELL to bash allows bash commands to be executed by recipes.
      +# This is a requirement for 'setup-envtest.sh' in the test target.
      +# Options are set to exit when a recipe line exits non-zero or a piped command fails.
      +SHELL = /usr/bin/env bash -o pipefail
      +.SHELLFLAGS = -ec

       all: build
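The effect of the pipefail option can be seen outside of make with a minimal bash comparison (illustrative commands only):

```shell
# Without pipefail, the pipeline's status is the status of its last command,
# so a failure in 'false' is masked by 'cat' succeeding.
bash -c 'false | cat; echo "default exit: $?"'
# With pipefail, any failing command in the pipeline fails the whole pipeline.
bash -o pipefail -c 'false | cat; echo "pipefail exit: $?"'
```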
  4. For Go-based Operator projects, upgrade controller-runtime to v0.8.3 and Kubernetes dependencies to v0.20.2 by changing the following entries in your go.mod file, and then rebuild your project:

    go.mod file

    ...
    k8s.io/api v0.20.2
    k8s.io/apimachinery v0.20.2
    k8s.io/client-go v0.20.2
    sigs.k8s.io/controller-runtime v0.8.3
  5. Add a system:controller-manager service account to your project. The operator-sdk init command now generates a non-default service account named controller-manager to improve security for Operators installed in shared namespaces. To add this service account to your existing project, follow these steps:

    1. Create the ServiceAccount definition in a file:

      config/rbac/service_account.yaml file

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: controller-manager
        namespace: system
    2. Add the service account to the list of RBAC resources:

      $ echo "- service_account.yaml" >> config/rbac/kustomization.yaml
    3. Update all RoleBinding and ClusterRoleBinding objects that reference the Operator’s service account:

      $ find config/rbac -name *_binding.yaml -exec sed -i -E 's/ name: default/ name: controller-manager/g' {} \;
    4. Add the service account name to the manager deployment’s spec.template.spec.serviceAccountName field:

      $ sed -i -E 's/([ ]+)(terminationGracePeriodSeconds:)/\1serviceAccountName: controller-manager\n\1\2/g' config/manager/manager.yaml
    5. Verify the changes look like the following diffs:

      config/manager/manager.yaml file diff

       ...
           requests:
             cpu: 100m
             memory: 20Mi
      +  serviceAccountName: controller-manager
         terminationGracePeriodSeconds: 10

      config/rbac/auth_proxy_role_binding.yaml file diff

       ...
         name: proxy-role
       subjects:
       - kind: ServiceAccount
      -  name: default
      +  name: controller-manager
         namespace: system

      config/rbac/kustomization.yaml file diff

       resources:
      +- service_account.yaml
       - role.yaml
       - role_binding.yaml
       - leader_election_role.yaml

      config/rbac/leader_election_role_binding.yaml file diff

       ...
         name: leader-election-role
       subjects:
       - kind: ServiceAccount
      -  name: default
      +  name: controller-manager
         namespace: system

      config/rbac/role_binding.yaml file diff

       ...
         name: manager-role
       subjects:
       - kind: ServiceAccount
      -  name: default
      +  name: controller-manager
         namespace: system

      config/rbac/service_account.yaml file diff

      +apiVersion: v1
      +kind: ServiceAccount
      +metadata:
      +  name: controller-manager
      +  namespace: system
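The sed insertion from the earlier sub-step can be rehearsed on a throwaway copy of a manager deployment before touching your real config (the sample file contents below are illustrative):

```shell
# Minimal stand-in for config/manager/manager.yaml (illustrative contents).
cat > /tmp/manager.yaml <<'EOF'
      resources:
        requests:
          cpu: 100m
          memory: 20Mi
      terminationGracePeriodSeconds: 10
EOF
# Insert serviceAccountName above terminationGracePeriodSeconds at the same indent.
# Note: the \n in the replacement requires GNU sed.
sed -i -E 's/([ ]+)(terminationGracePeriodSeconds:)/\1serviceAccountName: controller-manager\n\1\2/g' /tmp/manager.yaml
grep -A1 'serviceAccountName' /tmp/manager.yaml
```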
  6. Make the following changes to your config/manifests/kustomization.yaml file:

    1. Add a Kustomize patch to remove the cert-manager volume and volumeMount objects from your cluster service version (CSV).

      Because Operator Lifecycle Manager (OLM) does not yet support cert-manager, a JSON patch was added to remove this volume and mount so OLM can create and manage certificates for your Operator.

      In the config/manifests/kustomization.yaml file, add the following lines:

      config/manifests/kustomization.yaml file

      patchesJson6902:
      - target:
          group: apps
          version: v1
          kind: Deployment
          name: controller-manager
          namespace: system
        patch: |-
          # Remove the manager container's "cert" volumeMount, since OLM will create and mount a set of certs.
          # Update the indices in this path if adding or removing containers/volumeMounts in the manager's Deployment.
          - op: remove
            path: /spec/template/spec/containers/1/volumeMounts/0
          # Remove the "cert" volume, since OLM will create and mount a set of certs.
          # Update the indices in this path if adding or removing volumes in the manager's Deployment.
          - op: remove
            path: /spec/template/spec/volumes/0
    2. Optional: For Ansible- and Helm-based Operator projects, configure ansible-operator and helm-operator with a component config. To add this option, follow these steps:

      1. Create the following file:

        config/default/manager_config_patch.yaml file

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: controller-manager
          namespace: system
        spec:
          template:
            spec:
              containers:
              - name: manager
                args:
                - "--config=controller_manager_config.yaml"
                volumeMounts:
                - name: manager-config
                  mountPath: /controller_manager_config.yaml
                  subPath: controller_manager_config.yaml
              volumes:
              - name: manager-config
                configMap:
                  name: manager-config
      2. Create the following file:

        config/manager/controller_manager_config.yaml file

        apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
        kind: ControllerManagerConfig
        health:
          healthProbeBindAddress: :6789
        metrics:
          bindAddress: 127.0.0.1:8080
        leaderElection:
          leaderElect: true
          resourceName: <resource_name>
      3. Update the config/default/kustomization.yaml file by applying the following changes to resources:

        config/default/kustomization.yaml file

        resources:
        ...
        - manager_config_patch.yaml
      4. Update the config/manager/kustomization.yaml file by applying the following changes:

        config/manager/kustomization.yaml file

        generatorOptions:
          disableNameSuffixHash: true
        configMapGenerator:
        - files:
          - controller_manager_config.yaml
          name: manager-config
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        images:
        - name: controller
          newName: quay.io/example/memcached-operator
          newTag: v0.0.1
    3. Optional: Add a manager config patch to the config/default/kustomization.yaml file.

      The --config flag was not wired into either the ansible-operator or helm-operator binary when config file support was originally added, so it did not work as generated. The --config flag supports configuration of both binaries by file; this method of configuration applies only to the underlying controller manager and not to the Operator as a whole.

      To optionally configure the Operator’s deployment with a config file, make changes to the config/default/kustomization.yaml file as shown in the following diff:

      config/default/kustomization.yaml file diff

       # If you want your controller-manager to expose the /metrics
       # endpoint w/o any authn/z, please comment the following line.
       - manager_auth_proxy_patch.yaml

      +# Mount the controller config file for loading manager configurations
      +# through a ComponentConfig type
      +- manager_config_patch.yaml

      Flags can be used as is or to override config file values.

  7. For Ansible- and Helm-based Operator projects, add role rules for leader election by making the following changes to the config/rbac/leader_election_role.yaml file:

    config/rbac/leader_election_role.yaml file

    - apiGroups:
      - coordination.k8s.io
      resources:
      - leases
      verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  8. For Ansible-based Operator projects, update Ansible collections.

    In your requirements.yml file, change the version field for community.kubernetes to 1.2.1, and the version field for operator_sdk.util to 0.2.0.
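After the update, the collections section of a typical requirements.yml file would look similar to the following; the exact file contents depend on your project, so treat this as a sketch:

```yaml
collections:
  - name: community.kubernetes
    version: "1.2.1"
  - name: operator_sdk.util
    version: "0.2.0"
```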

  9. Make the following changes to your config/default/manager_auth_proxy_patch.yaml file:

    • For Ansible-based Operator projects, add the --health-probe-bind-address=:6789 argument to the config/default/manager_auth_proxy_patch.yaml file:

      config/default/manager_auth_proxy_patch.yaml file

      spec:
        template:
          spec:
            containers:
            - name: manager
              args:
              - "--health-probe-bind-address=:6789"
              ...
    • For Helm-based Operator projects:

      1. Add the --health-probe-bind-address=:8081 argument to the config/default/manager_auth_proxy_patch.yaml file:

        config/default/manager_auth_proxy_patch.yaml file

        spec:
          template:
            spec:
              containers:
              - name: manager
                args:
                - "--health-probe-bind-address=:8081"
                ...
      2. Replace the deprecated flag --enable-leader-election with --leader-elect, and the deprecated flag --metrics-addr with --metrics-bind-address.

  10. Make the following changes to your config/prometheus/monitor.yaml file:

    1. Add scheme, token, and TLS config to the Prometheus ServiceMonitor metrics endpoint.

      Although the ServiceMonitor specified the https port on the manager pod, the /metrics endpoint was not actually configured to serve over HTTPS because no tlsConfig was set. Because kube-rbac-proxy secures this endpoint as a manager sidecar, scraping with the service account token that is mounted into the pod by default corrects this problem.

      Apply the changes to the config/prometheus/monitor.yaml file as shown in the following diff:

      config/prometheus/monitor.yaml file diff

       spec:
         endpoints:
         - path: /metrics
           port: https
      +    scheme: https
      +    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      +    tlsConfig:
      +      insecureSkipVerify: true
         selector:
           matchLabels:
             control-plane: controller-manager

      If you removed kube-rbac-proxy from your project, ensure that you secure the /metrics endpoint using a proper TLS configuration.

  11. Ensure that existing dependent resources have owner annotations.

    For Ansible-based Operator projects, owner reference annotations on cluster-scoped dependent resources and dependent resources in other namespaces were not applied correctly. A workaround was to add these annotations manually, which is no longer required as this bug has been fixed.

  12. Deprecate support for package manifests.

    The Operator Framework is removing support for the Operator package manifest format in a future release. As part of the ongoing deprecation process, the operator-sdk generate packagemanifests and operator-sdk run packagemanifests commands are now deprecated. To migrate package manifests to bundles, the operator-sdk pkgman-to-bundle command can be used.

    Run the operator-sdk pkgman-to-bundle --help command and see “Migrating package manifest projects to bundle format” for more details.

  13. Update the finalizer names for your Operator.

    The finalizer name format suggested by Kubernetes documentation is:

    <qualified_group>/<finalizer_name>

    while the format previously documented for Operator SDK was:

    <finalizer_name>.<qualified_group>

    If your Operator uses any finalizers with names that match the incorrect format, change them to match the official format. For example, finalizer.cache.example.com must be changed to cache.example.com/finalizer.
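A rename of this kind can be rehearsed with sed on a throwaway manifest before editing real resources; the resource and finalizer names below are illustrative:

```shell
# Sample CR metadata using the old finalizer name format (illustrative).
cat > /tmp/cr.yaml <<'EOF'
metadata:
  finalizers:
  - finalizer.cache.example.com
EOF
# Rewrite <finalizer_name>.<qualified_group> as <qualified_group>/<finalizer_name>.
sed -i 's|finalizer\.cache\.example\.com|cache.example.com/finalizer|' /tmp/cr.yaml
cat /tmp/cr.yaml
```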

Your Operator project is now compatible with Operator SDK v1.8.0.

Additional resources