Go Operator Tutorial

An in-depth walkthrough of building and running a Go-based operator.

NOTE: If your project was created with an operator-sdk version prior to v1.0.0, please migrate, or consult the legacy docs.

Prerequisites

  • Go through the installation guide.
  • User authorized with cluster-admin permissions.
  • An accessible image registry for the various operator images (e.g. hub.docker.com, quay.io), and a command-line environment logged in to that registry (an example login command follows this list).
    • example.com is used as the registry/namespace in these examples. Replace it with another value if you are using a different registry or namespace.
    • Authentication and certificates if the registry is private or uses a custom CA.
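
For example, assuming Docker as your container tool, logging in to Quay from the command line looks like:

    docker login quay.io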

Overview

We will build a sample project to show how the SDK works. The sample operator will:

  • Create a Memcached Deployment if it doesn’t exist
  • Ensure that the Deployment size is the same as specified by the Memcached CR spec
  • Update the Memcached CR status using the status writer with the names of the CR’s pods

Create a new project

Use the CLI to create a new memcached-operator project:

    mkdir -p $HOME/projects/memcached-operator
    cd $HOME/projects/memcached-operator
    # we'll use a domain of example.com
    # so all API groups will be <group>.example.com
    operator-sdk init --domain example.com --repo github.com/example/memcached-operator

To learn about the project directory structure, see Kubebuilder project layout doc.

A note on dependency management

operator-sdk init generates a go.mod file to be used with Go modules. The --repo=<path> flag is required when creating a project outside of $GOPATH/src, as scaffolded files require a valid module path. Ensure you activate module support by running export GO111MODULE=on before using the SDK.
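
For reference, the generated go.mod looks roughly like the following; the module path comes from --repo, and the dependency versions below are illustrative and depend on your SDK release:

    module github.com/example/memcached-operator

    go 1.17

    require (
        // versions are illustrative; the scaffolding pins the ones matching your SDK release
        k8s.io/apimachinery v0.23.0
        k8s.io/client-go v0.23.0
        sigs.k8s.io/controller-runtime v0.11.0
    )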

Manager

The main program for the operator, main.go, initializes and runs the Manager.

See the Kubebuilder entrypoint doc for more details on how the manager registers the Scheme for the custom resource API definitions, and sets up and runs controllers and webhooks.
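
A condensed sketch of what the scaffolded main.go does is shown below; the real file adds flag parsing, logging, metrics, and health-probe setup, and exact option names vary across controller-runtime versions:

    package main

    import (
        "os"

        "k8s.io/apimachinery/pkg/runtime"
        utilruntime "k8s.io/apimachinery/pkg/util/runtime"
        clientgoscheme "k8s.io/client-go/kubernetes/scheme"
        ctrl "sigs.k8s.io/controller-runtime"

        cachev1alpha1 "github.com/example/memcached-operator/api/v1alpha1"
        "github.com/example/memcached-operator/controllers"
    )

    func main() {
        // Register core Kubernetes types and our custom API types with the Scheme.
        scheme := runtime.NewScheme()
        utilruntime.Must(clientgoscheme.AddToScheme(scheme))
        utilruntime.Must(cachev1alpha1.AddToScheme(scheme))

        // Create the Manager, which provides shared caches and clients to all controllers.
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
        if err != nil {
            os.Exit(1)
        }

        // Register the Memcached controller with the Manager.
        if err := (&controllers.MemcachedReconciler{
            Client: mgr.GetClient(),
            Scheme: mgr.GetScheme(),
        }).SetupWithManager(mgr); err != nil {
            os.Exit(1)
        }

        // Start the Manager; this blocks until a shutdown signal is received.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            os.Exit(1)
        }
    }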

The Manager can restrict the namespace that all controllers will watch for resources:

    mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})

By default, this will be the namespace that the operator is running in. To watch all namespaces, leave the namespace option empty:

    mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})

Read the operator scope documentation on how to run your operator as namespace-scoped vs cluster-scoped.
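
For example, if your controller-runtime version still provides cache.MultiNamespacedCacheBuilder (it was deprecated and removed in newer releases), a fixed set of namespaces can be watched as sketched below; foo and bar are placeholder namespace names:

    import (
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/cache"
        "sigs.k8s.io/controller-runtime/pkg/manager"
    )

    // Watch only the listed namespaces instead of a single namespace or the whole cluster.
    namespaces := []string{"foo", "bar"}
    mgr, err := ctrl.NewManager(cfg, manager.Options{
        NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
    })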

Create a new API and Controller

Create a new Custom Resource Definition (CRD) API with group cache, version v1alpha1, and Kind Memcached. The --resource and --controller flags tell the command to scaffold both the resource and the controller without prompting:

    $ operator-sdk create api --group cache --version v1alpha1 --kind Memcached --resource --controller
    Writing scaffold for you to edit...
    api/v1alpha1/memcached_types.go
    controllers/memcached_controller.go
    ...

This will scaffold the Memcached resource API at api/v1alpha1/memcached_types.go and the controller at controllers/memcached_controller.go.

Note: This guide will cover the default case of a single group API. If you would like to support Multi-Group APIs see the Single Group to Multi-Group doc.

Understanding Kubernetes APIs

For an in-depth explanation of Kubernetes APIs and the group-version-kind model, check out these kubebuilder docs.

In general, it’s recommended to have one controller responsible for managing each API created for the project, to properly follow the design goals set by controller-runtime.

Define the API

To begin, we will represent our API by defining the Memcached type, which will have a MemcachedSpec.Size field to set the number of memcached instances (Deployment replicas) to deploy, and a MemcachedStatus.Nodes field to store the CR’s Pod names.

Note: The Nodes field is just an example of a Status field. In real projects, it is recommended to use Conditions instead.

Define the API for the Memcached Custom Resource(CR) by modifying the Go type definitions at api/v1alpha1/memcached_types.go to have the following spec and status:

    // MemcachedSpec defines the desired state of Memcached
    type MemcachedSpec struct {
        //+kubebuilder:validation:Minimum=0
        // Size is the size of the memcached deployment
        Size int32 `json:"size"`
    }

    // MemcachedStatus defines the observed state of Memcached
    type MemcachedStatus struct {
        // Nodes are the names of the memcached pods
        Nodes []string `json:"nodes"`
    }

Add the +kubebuilder:subresource:status marker to add a status subresource to the CRD manifest so that the controller can update the CR status without changing the rest of the CR object:

    // Memcached is the Schema for the memcacheds API
    //+kubebuilder:subresource:status
    type Memcached struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec   MemcachedSpec   `json:"spec,omitempty"`
        Status MemcachedStatus `json:"status,omitempty"`
    }

After modifying the *_types.go file, always run the following command to update the generated code for that resource type:

    make generate

The above makefile target will invoke the controller-gen utility to update the api/v1alpha1/zz_generated.deepcopy.go file to ensure our API’s Go type definitions implement the runtime.Object interface that all Kind types must implement.

Generating CRD manifests

Once the API is defined with spec/status fields and CRD validation markers, the CRD manifests can be generated and updated with the following command:

    make manifests

This makefile target will invoke controller-gen to generate the CRD manifests at config/crd/bases/cache.example.com_memcacheds.yaml.
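
The generated CRD will look roughly like the following (abridged; the real file also contains the full OpenAPI v3 schema generated from the Go types and validation markers):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: memcacheds.cache.example.com
    spec:
      group: cache.example.com
      names:
        kind: Memcached
        listKind: MemcachedList
        plural: memcacheds
        singular: memcached
      scope: Namespaced
      versions:
      - name: v1alpha1
        served: true
        storage: true
        subresources:
          status: {}   # added by the +kubebuilder:subresource:status marker
        schema:
          openAPIV3Schema:
            ...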

OpenAPI validation

OpenAPI validation defined in a CRD ensures CRs are validated based on a set of declarative rules. All CRDs should have validation. See the OpenAPI validation doc for details.
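
For instance, tighter constraints could be added to the Size field with additional markers before re-running make manifests; the Minimum/Maximum values below are just an illustration, not part of this tutorial’s API:

    // MemcachedSpec defines the desired state of Memcached
    type MemcachedSpec struct {
        // Size is the size of the memcached deployment
        //+kubebuilder:validation:Minimum=1
        //+kubebuilder:validation:Maximum=5
        Size int32 `json:"size"`
    }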

Implement the Controller

For this example, replace the generated controller file controllers/memcached_controller.go with the example memcached_controller.go implementation.
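
For orientation, the core of that example implementation follows the pattern sketched below. This is a simplified sketch, not the exact file: the deploymentForMemcached and getPodNames helpers stand in for the corresponding logic in the example, and most error handling and logging is omitted.

    import (
        "context"
        "reflect"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/types"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"

        cachev1alpha1 "github.com/example/memcached-operator/api/v1alpha1"
    )

    func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        // Fetch the Memcached instance for this request.
        memcached := &cachev1alpha1.Memcached{}
        if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
            // Ignore not-found errors: the CR was deleted and owned objects are garbage collected.
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }

        // Create the Deployment if it does not exist yet.
        found := &appsv1.Deployment{}
        err := r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
        if err != nil && errors.IsNotFound(err) {
            dep := r.deploymentForMemcached(memcached) // builds the Deployment and sets the owner reference
            if err := r.Create(ctx, dep); err != nil {
                return ctrl.Result{}, err
            }
            return ctrl.Result{Requeue: true}, nil
        } else if err != nil {
            return ctrl.Result{}, err
        }

        // Ensure the Deployment replica count matches spec.size.
        size := memcached.Spec.Size
        if *found.Spec.Replicas != size {
            found.Spec.Replicas = &size
            if err := r.Update(ctx, found); err != nil {
                return ctrl.Result{}, err
            }
            return ctrl.Result{Requeue: true}, nil
        }

        // Update status.nodes with the names of this CR's pods.
        podNames := r.getPodNames(ctx, memcached) // lists the pods labeled for this CR and returns their names
        if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
            memcached.Status.Nodes = podNames
            if err := r.Status().Update(ctx, memcached); err != nil {
                return ctrl.Result{}, err
            }
        }

        return ctrl.Result{}, nil
    }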

Note: The next two subsections explain how the controller watches resources and how the reconcile loop is triggered. If you’d like to skip this section, head to the deploy section to see how to run the operator.

Resources watched by the Controller

The SetupWithManager() function in controllers/memcached_controller.go specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller.

    import (
        ...
        appsv1 "k8s.io/api/apps/v1"
        ...
    )

    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
        return ctrl.NewControllerManagedBy(mgr).
            For(&cachev1alpha1.Memcached{}).
            Owns(&appsv1.Deployment{}).
            Complete(r)
    }

NewControllerManagedBy() provides a controller builder that allows various controller configurations.

For(&cachev1alpha1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Memcached type Add/Update/Delete event the reconcile loop will be sent a reconcile Request (a namespace/name key) for that Memcached object.

Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add/Update/Delete event, the event handler will map the event to a reconcile Request for the Deployment’s owner, which in this case is the Memcached object for which the Deployment was created.

Controller Configurations

There are a number of other useful configurations that can be made when initializing a controller. For more details on these configurations consult the upstream builder and controller godocs.

  • Set the max number of concurrent Reconciles for the controller via the MaxConcurrentReconciles option. Defaults to 1.

      func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
          return ctrl.NewControllerManagedBy(mgr).
              For(&cachev1alpha1.Memcached{}).
              Owns(&appsv1.Deployment{}).
              WithOptions(controller.Options{MaxConcurrentReconciles: 2}).
              Complete(r)
      }
  • Filter watch events using predicates (see the sketch after this list).

  • Choose the type of EventHandler to change how a watch event will translate to reconcile requests for the reconcile loop. For operator relationships that are more complex than primary and secondary resources, the EnqueueRequestsFromMapFunc handler can be used to transform a watch event into an arbitrary set of reconcile requests, as sketched below.
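
The sketch below shows both of these hooks on this tutorial’s controller, assuming a controller-runtime release in the v0.11 line (the Watches and MapFunc signatures changed in later versions); the ConfigMap-to-Memcached mapping and the memcached-sample name are purely illustrative:

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/types"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/handler"
        "sigs.k8s.io/controller-runtime/pkg/predicate"
        "sigs.k8s.io/controller-runtime/pkg/reconcile"
        "sigs.k8s.io/controller-runtime/pkg/source"
    )

    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
        return ctrl.NewControllerManagedBy(mgr).
            For(&cachev1alpha1.Memcached{}).
            Owns(&appsv1.Deployment{}).
            // Skip events that do not change an object's generation (e.g. status-only updates).
            WithEventFilter(predicate.GenerationChangedPredicate{}).
            // Illustrative only: map events on ConfigMaps to a reconcile request for a Memcached CR.
            Watches(&source.Kind{Type: &corev1.ConfigMap{}},
                handler.EnqueueRequestsFromMapFunc(func(obj client.Object) []reconcile.Request {
                    return []reconcile.Request{
                        {NamespacedName: types.NamespacedName{
                            Namespace: obj.GetNamespace(),
                            Name:      "memcached-sample", // placeholder; a real mapping would derive this from obj
                        }},
                    }
                })).
            Complete(r)
    }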

Reconcile loop

The reconcile function is responsible for enforcing the desired CR state on the actual state of the system. It runs each time an event occurs on a watched CR or resource, and will return some value depending on whether those states match or not.

In this way, every Controller has a Reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument which is a Namespace/Name key used to lookup the primary resource object, Memcached, from the cache:

    import (
        ctrl "sigs.k8s.io/controller-runtime"

        cachev1alpha1 "github.com/example/memcached-operator/api/v1alpha1"
        ...
    )

    func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        // Lookup the Memcached instance for this reconcile request
        memcached := &cachev1alpha1.Memcached{}
        err := r.Get(ctx, req.NamespacedName, memcached)
        ...
    }

For a guide on Reconcilers, Clients, and interacting with resource Events, see the Client API doc.

The following are a few possible return options for a Reconciler:

  • With the error:

      return ctrl.Result{}, err

  • Without an error:

      return ctrl.Result{Requeue: true}, nil

  • Therefore, to stop the Reconcile, use:

      return ctrl.Result{}, nil

  • Reconcile again after X time:

      return ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())}, nil

For more details, check the Reconcile and Result godocs.

Specify permissions and generate RBAC manifests

The controller needs certain RBAC permissions to interact with the resources it manages. These are specified via RBAC markers like the following:

    //+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
    //+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
    //+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
    //+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    //+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
    func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        ...
    }

The ClusterRole manifest at config/rbac/role.yaml is generated from the above markers via controller-gen with the following command:

    make manifests

NOTE: If you receive an error, please run the specified command in the error and re-run make manifests.
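
The rule produced for the memcacheds marker, for example, will look roughly like this (abridged; the status, finalizers, deployments, and pods rules follow the same pattern):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: manager-role
    rules:
    - apiGroups:
      - cache.example.com
      resources:
      - memcacheds
      verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch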

Configure the operator’s image registry

All that remains is to build and push the operator image to the desired image registry.

Before building the operator image, ensure the generated Dockerfile references the base image you want. You can change the default "runner" image gcr.io/distroless/static:nonroot by replacing it with another image, for example alpine:latest, and removing the USER 65532:65532 directive.
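
For reference, the final stage of the scaffolded Dockerfile looks roughly like this (exact contents vary by SDK version); the FROM line and the USER directive are what the change above affects:

    # Final, minimal runtime image for the manager binary
    FROM gcr.io/distroless/static:nonroot
    WORKDIR /
    COPY --from=builder /workspace/manager .
    USER 65532:65532

    ENTRYPOINT ["/manager"]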

Your Makefile composes image tags either from values written at project initialization or from the CLI. In particular, IMAGE_TAG_BASE lets you define a common image registry, namespace, and partial name for all your image tags. Update this to another registry and/or namespace if the current value is incorrect. Afterwards you can update the IMG variable definition like so:

    -IMG ?= controller:latest
    +IMG ?= $(IMAGE_TAG_BASE):$(VERSION)
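
With this project's defaults, the relevant variables near the top of the Makefile would read something like:

    # VERSION defines the project version, used in the operator and bundle image tags.
    VERSION ?= 0.0.1

    # IMAGE_TAG_BASE defines the registry, namespace, and partial image name shared by all image tags.
    IMAGE_TAG_BASE ?= example.com/memcached-operator

    # Image URL used by docker-build and docker-push.
    IMG ?= $(IMAGE_TAG_BASE):$(VERSION)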

Once done, you do not have to set IMG or any other image variable in the CLI. The following command will build and push an operator image tagged as example.com/memcached-operator:v0.0.1 to Docker Hub:

    make docker-build docker-push

Run the Operator

There are three ways to run the operator:

1. Run locally outside the cluster

The following steps show how to deploy the operator on the cluster. However, to run locally outside the cluster for development purposes, use the make install run target.
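
For example:

    # Install the CRDs into the cluster from your current kubeconfig context, then run the controller locally
    make install run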

2. Run as a Deployment inside the cluster

By default, a new namespace is created with the name <project-name>-system, e.g. memcached-operator-system, and will be used for the deployment.

Run the following to deploy the operator. This will also install the RBAC manifests from config/rbac.

    make deploy

Verify that the memcached-operator is up and running:

    $ kubectl get deployment -n memcached-operator-system
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-operator-controller-manager    1/1     1            1           8m

3. Deploy your Operator with OLM

First, install OLM:

    operator-sdk olm install

Bundle your operator, then build and push the bundle image. The bundle target generates a bundle in the bundle directory containing manifests and metadata defining your operator. bundle-build and bundle-push build and push a bundle image defined by bundle.Dockerfile.

    make bundle bundle-build bundle-push

Finally, run your bundle. If your bundle image is hosted in a registry that is private and/or has a custom CA, complete these configuration steps first.

    operator-sdk run bundle <some-registry>/memcached-operator-bundle:v0.0.1

Check out the docs for a deep dive into operator-sdk's OLM integration.

Create a Memcached CR

Update the sample Memcached CR manifest at config/samples/cache_v1alpha1_memcached.yaml and define the spec as the following:

    apiVersion: cache.example.com/v1alpha1
    kind: Memcached
    metadata:
      name: memcached-sample
    spec:
      size: 3

Create the CR:

    kubectl apply -f config/samples/cache_v1alpha1_memcached.yaml

Ensure that the memcached operator creates the deployment for the sample CR with the correct size:

    $ kubectl get deployment
    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-sample   3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the memcached pod names:

    $ kubectl get pods
    NAME                               READY   STATUS    RESTARTS   AGE
    memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
    memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
    memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m
    $ kubectl get memcached/memcached-sample -o yaml
    apiVersion: cache.example.com/v1alpha1
    kind: Memcached
    metadata:
      clusterName: ""
      creationTimestamp: 2018-03-31T22:51:08Z
      generation: 0
      name: memcached-sample
      namespace: default
      resourceVersion: "245453"
      selfLink: /apis/cache.example.com/v1alpha1/namespaces/default/memcacheds/memcached-sample
      uid: 0026cc97-3536-11e8-bd83-0800274106a1
    spec:
      size: 3
    status:
      nodes:
      - memcached-sample-6fd7c98d8-7dqdr
      - memcached-sample-6fd7c98d8-g5k7v
      - memcached-sample-6fd7c98d8-m7vn7

Update the size

Next, change the spec.size field in the Memcached CR from 3 to 5. You can edit config/samples/cache_v1alpha1_memcached.yaml and re-apply it, or patch the CR directly:

    kubectl patch memcached memcached-sample -p '{"spec":{"size": 5}}' --type=merge

Confirm that the operator changes the deployment size:

    $ kubectl get deployment
    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-sample   5/5     5            5           3m

Cleanup

Run the following to delete all deployed resources:

    kubectl delete -f config/samples/cache_v1alpha1_memcached.yaml
    make undeploy

Next steps

Next, check out the following:

  1. Validating and mutating admission webhooks.
  2. Operator packaging and distribution with OLM.
  3. The advanced topics doc for more use cases and under-the-hood details.
