Running and deploying the controller

To test out the controller, we can run it locally against the cluster. Before we do so, though, we’ll need to install our CRDs, as per the quickstart. This will automatically update the YAML manifests using controller-tools, if needed:

```shell
make install
```

Now that we’ve installed our CRDs, we can run the controller against our cluster. This will use whatever credentials we use to connect to the cluster, so we don’t need to worry about RBAC just yet.
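(Once we do deploy into the cluster, the controller will need RBAC permissions on our new type. In a Kubebuilder project these are generated by controller-gen from `//+kubebuilder:rbac` markers on the reconciler and land under `config/rbac/`; the resulting ClusterRole looks roughly like the following sketch — the rule contents are illustrative, check your generated `role.yaml` for the real thing.)

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
- apiGroups:
  - batch.tutorial.kubebuilder.io
  resources:
  - cronjobs
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups:
  - batch.tutorial.kubebuilder.io
  resources:
  - cronjobs/status
  verbs: ["get", "update", "patch"]
```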

Note that if you have a webhook and want to run it locally, you need to ensure the certificates are in the right place.
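By default, controller-runtime’s webhook server looks for `tls.crt` and `tls.key` under `/tmp/k8s-webhook-server/serving-certs` (an assumption based on controller-runtime’s defaults; adjust if you’ve configured a different `CertDir`). For a purely local test run, one way to satisfy it is a throwaway self-signed pair — a sketch for local experimentation only; in a real cluster, use something like cert-manager instead:

```shell
# Default cert directory assumed by controller-runtime's webhook server;
# change this if your manager configures a different CertDir.
CERT_DIR=/tmp/k8s-webhook-server/serving-certs
mkdir -p "$CERT_DIR"

# Generate a short-lived self-signed certificate for local testing only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" \
  -keyout "$CERT_DIR/tls.key" \
  -out "$CERT_DIR/tls.crt"
```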

In a separate terminal, run

```shell
make run
```

You should see logs from the controller about starting up, but it won’t do anything just yet.

At this point, we need a CronJob to test with. Let’s write a sample to config/samples/batch_v1_cronjob.yaml, and use that:

```yaml
apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
  name: cronjob-sample
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```
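The `schedule` field uses standard Cron syntax: five space-separated fields. Annotated for reference (the comments are ours, not part of the sample):

```yaml
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6, Sunday = 0)
# │ │ │ │ │
schedule: "*/1 * * * *"   # i.e. run once every minute
```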
```shell
kubectl create -f config/samples/batch_v1_cronjob.yaml
```

At this point, you should see a flurry of activity. If you watch the changes, you should see your cronjob running, and updating status:

```shell
kubectl get cronjob.batch.tutorial.kubebuilder.io -o yaml
kubectl get job
```
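If the controller’s status writes are working, the `status` stanza of the CronJob fills in as jobs get created. With the status fields from this tutorial, the output includes something roughly like the following sketch — the job name suffix and timestamp below are made up; yours will differ:

```yaml
status:
  active:
  - apiVersion: batch/v1
    kind: Job
    name: cronjob-sample-1234567890   # illustrative name
    namespace: default
  lastScheduleTime: "2024-01-01T00:01:00Z"
```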

Now that we know it’s working, we can run it in the cluster. Stop the `make run` invocation, and run

```shell
make docker-build docker-push IMG=<some-registry>/controller
make deploy
```
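Under the hood, `make deploy` typically builds the manifests under `config/` with kustomize, after pointing the manager deployment at the image you pushed. The effect is the same as a kustomize `images` override like the fragment below — a sketch of the mechanism, not the literal generated file:

```yaml
# config/manager/kustomization.yaml (fragment, illustrative)
images:
- name: controller
  newName: <some-registry>/controller
```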

If we list cronjobs again like we did before, we should see the controllerfunctioning again!