Introducing New CRDs

Cilium uses a combination of code-generation tools to add CRDs to the Kubernetes cluster it is installed in.

These CRDs are made available through the generated Kubernetes client that Cilium uses.

Defining And Generating CRDs

Currently, two API versions exist: v2 and v2alpha1.

Paths:

  1. pkg/k8s/apis/cilium.io/v2/
  2. pkg/k8s/apis/cilium.io/v2alpha1/

CRDs are defined as Go structures, annotated with marks, and generated with Cilium's Makefile targets.

Marks

Marks tell controller-gen how to generate the CRD. This includes defining the CRD's various names (singular, plural, group), its scope (Cluster or Namespaced), short names, and so on.

An example:

  // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
  // +kubebuilder:resource:categories={cilium},singular="ciliumendpointslice",path="ciliumendpointslices",scope="Cluster",shortName={ces}
  // +kubebuilder:storageversion

You can find CRD generation marks documentation here.

Marks are also used to generate JSON Schema validation. You can define validation criteria such as “format=cidr” and “required” via validation marks in your struct's comments.

An example:

  type CiliumBGPPeeringConfiguration struct {
      // PeerAddress is the IP address of the peer.
      // This must be in CIDR notation and use a /32 to express
      // a single host.
      //
      // +kubebuilder:validation:Required
      // +kubebuilder:validation:Format=cidr
      PeerAddress string `json:"peerAddress"`
  }

You can find CRD validation marks documentation here.

Defining CRDs

Paths:

  1. pkg/k8s/apis/cilium.io/v2/
  2. pkg/k8s/apis/cilium.io/v2alpha1/

The portion of the directory path after apis/ makes up the CRD's Group and Version. See KubeBuilder-GVK.

You can begin defining your CRD structure, creating any subtypes necessary to adequately express your data model and using marks to control the CRD generation process.

Here is a brief example that expresses the CRD data model, omitting any further definitions of sub-types.

  // +genclient
  // +genclient:nonNamespaced
  // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
  // +kubebuilder:resource:categories={cilium,ciliumbgp},singular="ciliumbgppeeringpolicy",path="ciliumbgppeeringpolicies",scope="Cluster",shortName={bgpp}
  // +kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name="Age",type=date
  // +kubebuilder:storageversion
  // CiliumBGPPeeringPolicy is a Kubernetes third-party resource for instructing
  // Cilium's BGP control plane to create peers.
  type CiliumBGPPeeringPolicy struct {
      // +k8s:openapi-gen=false
      // +deepequal-gen=false
      metav1.TypeMeta `json:",inline"`
      // +k8s:openapi-gen=false
      // +deepequal-gen=false
      metav1.ObjectMeta `json:"metadata"`

      // Spec is a human readable description of a BGP peering policy
      //
      // +kubebuilder:validation:Required
      Spec CiliumBGPPeeringPolicySpec `json:"spec,omitempty"`
  }
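A CRD defined this way is typically paired with a companion List type, which the generated client machinery uses to return collections of objects. The following is a minimal sketch following the usual Kubernetes conventions; the exact marks on Cilium's real List types may differ:

```go
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +deepequal-gen=false

// CiliumBGPPeeringPolicyList is a list of CiliumBGPPeeringPolicy objects.
type CiliumBGPPeeringPolicyList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata"`

    // Items is a list of CiliumBGPPeeringPolicy.
    Items []CiliumBGPPeeringPolicy `json:"items"`
}
```

Without a List type registered in the scheme, the generated listers and informers have nothing to return for collection operations.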

Integrating CRDs Into Cilium

Once you’ve coded your CRD data model you can use Cilium’s make infrastructure to generate and integrate your CRD into Cilium.

There are several make targets and a script that revolve around generating the CRDs and the associated code (clients, informers, DeepCopy implementations, DeepEqual implementations, etc.).

The next sections detail the steps you should take to integrate your CRD into Cilium.

Generating CRD YAML

To generate the CRDs and copy them into the correct location, you must perform two tasks:

  • Update the Makefile to copy your generated CRD from a temporary directory to the correct location in the Cilium repository. Edit the following location.

  • Run make manifests

This generates CRD manifests from your Go structs and copies them into the appropriate version directory under ./pkg/k8s/apis/cilium.io/client/crds/.

You can inspect your generated CRDs to confirm they look OK.

Additionally, ./contrib/scripts/check-k8s-code-gen.sh is a script that generates the CRD manifests along with the K8s API changes necessary to use your CRDs via the K8s client in the Cilium source code.

Generating Client Code

  make generate-k8s-api

This make target performs the code generation necessary to integrate your CRD into Cilium's client-go client, creating listers, watchers, and informers.
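Once generation succeeds, the new types should be reachable through Cilium's generated clientset. The sketch below is hypothetical: the clientset import path, the CiliumV2alpha1() group accessor, and the CiliumBGPPeeringPolicies() method are assumptions based on typical client-gen output for this CRD, and the kubeconfig path is a placeholder.

```go
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"

    // Assumed path of Cilium's generated clientset.
    ciliumclient "github.com/cilium/cilium/pkg/k8s/client/clientset/versioned"
)

func main() {
    // Placeholder kubeconfig path; in-cluster config is also possible.
    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := ciliumclient.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // List the newly generated CRD objects via the typed client.
    policies, err := cs.CiliumV2alpha1().CiliumBGPPeeringPolicies().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range policies.Items {
        fmt.Println(p.Name)
    }
}
```

This requires a live cluster with the CRD installed, so it is a sanity-check sketch rather than something runnable in isolation.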

Again, multiple steps must be taken to fully integrate your CRD into Cilium.

Register With API Scheme

Paths:

  1. pkg/k8s/apis/cilium.io/v2alpha1/register.go

Make a change similar to this diff to register your CRDs with the API scheme.

  diff --git a/pkg/k8s/apis/cilium.io/v2alpha1/register.go b/pkg/k8s/apis/cilium.io/v2alpha1/register.go
  index 9650e32f8d..0d85c5a233 100644
  --- a/pkg/k8s/apis/cilium.io/v2alpha1/register.go
  +++ b/pkg/k8s/apis/cilium.io/v2alpha1/register.go
  @@ -55,6 +55,34 @@ const (
       // CESName is the full name of Cilium Endpoint Slice
       CESName = CESPluralName + "." + CustomResourceDefinitionGroup
  +
  +    // Cilium BGP Peering Policy (BGPP)
  +
  +    // BGPPSingularName is the singular name of Cilium BGP Peering Policy
  +    BGPPSingularName = "ciliumbgppeeringpolicy"
  +
  +    // BGPPPluralName is the plural name of Cilium BGP Peering Policy
  +    BGPPPluralName = "ciliumbgppeeringpolicies"
  +
  +    // BGPPKindDefinition is the kind name of Cilium BGP Peering Policy
  +    BGPPKindDefinition = "CiliumBGPPeeringPolicy"
  +
  +    // BGPPName is the full name of Cilium BGP Peering Policy
  +    BGPPName = BGPPPluralName + "." + CustomResourceDefinitionGroup
  +
  +    // Cilium BGP Load Balancer IP Pool (BGPPool)
  +
  +    // BGPPoolSingularName is the singular name of Cilium BGP Load Balancer IP Pool
  +    BGPPoolSingularName = "ciliumbgploadbalancerippool"
  +
  +    // BGPPoolPluralName is the plural name of Cilium BGP Load Balancer IP Pool
  +    BGPPoolPluralName = "ciliumbgploadbalancerippools"
  +
  +    // BGPPoolKindDefinition is the kind name of Cilium BGP Load Balancer IP Pool
  +    BGPPoolKindDefinition = "CiliumBGPLoadBalancerIPPool"
  +
  +    // BGPPoolName is the full name of Cilium BGP Load Balancer IP Pool
  +    BGPPoolName = BGPPoolPluralName + "." + CustomResourceDefinitionGroup
   )

  // SchemeGroupVersion is group version used to register these objects
  @@ -102,6 +130,10 @@ func addKnownTypes(scheme *runtime.Scheme) error {
          &CiliumEgressNATPolicyList{},
          &CiliumEndpointSlice{},
          &CiliumEndpointSliceList{},
  +       &CiliumBGPPeeringPolicy{},
  +       &CiliumBGPPeeringPolicyList{},
  +       &CiliumBGPLoadBalancerIPPool{},
  +       &CiliumBGPLoadBalancerIPPoolList{},
   )

  metav1.AddToGroupVersion(scheme, SchemeGroupVersion)

You should also bump the CustomResourceDefinitionSchemaVersion variable in the correct {api_version}/register.go to instruct Cilium that new CRDs have been added to the system. For example, bump this line if adding a CRD to the v2 group: register.go

Register With Client

pkg/k8s/apis/cilium.io/client/register.go

Make a change similar to the following to register CRD types with the client.

  diff --git a/pkg/k8s/apis/cilium.io/client/register.go b/pkg/k8s/apis/cilium.io/client/register.go
  index ede134d7d9..ec82169270 100644
  --- a/pkg/k8s/apis/cilium.io/client/register.go
  +++ b/pkg/k8s/apis/cilium.io/client/register.go
  @@ -60,6 +60,12 @@ const (
       // CESCRDName is the full name of the CES CRD.
       CESCRDName = k8sconstv2alpha1.CESKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
  +
  +    // BGPPCRDName is the full name of the BGPP CRD.
  +    BGPPCRDName = k8sconstv2alpha1.BGPPKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
  +
  +    // BGPPoolCRDName is the full name of the BGPPool CRD.
  +    BGPPoolCRDName = k8sconstv2alpha1.BGPPoolKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
   )

  var (
  @@ -86,6 +92,8 @@ func CreateCustomResourceDefinitions(clientset apiextensionsclient.Interface) er
          synced.CRDResourceName(k8sconstv2.CLRPName):          createCLRPCRD,
          synced.CRDResourceName(k8sconstv2alpha1.CENPName):    createCENPCRD,
          synced.CRDResourceName(k8sconstv2alpha1.CESName):     createCESCRD,
  +       synced.CRDResourceName(k8sconstv2alpha1.BGPPName):    createBGPPCRD,
  +       synced.CRDResourceName(k8sconstv2alpha1.BGPPoolName): createBGPPoolCRD,
   }

  for _, r := range synced.AllCRDResourceNames() {
      fn, ok := resourceToCreateFnMapping[r]
  @@ -127,6 +134,12 @@ var (
      //go:embed crds/v2alpha1/ciliumendpointslices.yaml
      crdsv2Alpha1Ciliumendpointslices []byte
  +
  +   //go:embed crds/v2alpha1/ciliumbgppeeringpolicies.yaml
  +   crdsv2Alpha1Ciliumbgppeeringpolicies []byte
  +
  +   //go:embed crds/v2alpha1/ciliumbgploadbalancerippools.yaml
  +   crdsv2Alpha1Ciliumbgploadbalancerippools []byte
   )

  // GetPregeneratedCRD returns the pregenerated CRD based on the requested CRD
  @@ -286,6 +299,32 @@ func createCESCRD(clientset apiextensionsclient.Interface) error {
      )
  }

  +// createBGPPCRD creates and updates the CiliumBGPPeeringPolicy CRD. It should be
  +// called on agent startup but is idempotent and safe to call again.
  +func createBGPPCRD(clientset apiextensionsclient.Interface) error {
  +   ciliumCRD := GetPregeneratedCRD(BGPPCRDName)
  +
  +   return createUpdateCRD(
  +       clientset,
  +       BGPPCRDName,
  +       constructV1CRD(k8sconstv2alpha1.BGPPName, ciliumCRD),
  +       newDefaultPoller(),
  +   )
  +}
  +
  +// createBGPPoolCRD creates and updates the CiliumBGPLoadBalancerIPPool CRD. It should be
  +// called on agent startup but is idempotent and safe to call again.
  +func createBGPPoolCRD(clientset apiextensionsclient.Interface) error {
  +   ciliumCRD := GetPregeneratedCRD(BGPPoolCRDName)
  +
  +   return createUpdateCRD(
  +       clientset,
  +       BGPPoolCRDName,
  +       constructV1CRD(k8sconstv2alpha1.BGPPoolName, ciliumCRD),
  +       newDefaultPoller(),
  +   )
  +}
  +
  // createUpdateCRD ensures the CRD object is installed into the K8s cluster. It
  // will create or update the CRD and its validation schema as necessary. This
  // function only accepts v1 CRD objects, and defers to its v1beta1 variant if

Getting Your CRDs Installed

Your new CRDs must be installed into Kubernetes. This is controlled in the pkg/k8s/synced/crd.go file.

Here is an example diff which installs the CRDs v2alpha1.BGPPName and v2alpha1.BGPPoolName:

  diff --git a/pkg/k8s/synced/crd.go b/pkg/k8s/synced/crd.go
  index 52d975c449..10c554cf8a 100644
  --- a/pkg/k8s/synced/crd.go
  +++ b/pkg/k8s/synced/crd.go
  @@ -42,6 +42,10 @@ func agentCRDResourceNames() []string {
          CRDResourceName(v2.CCNPName),
          CRDResourceName(v2.CNName),
          CRDResourceName(v2.CIDName),
  +       // TODO(louis) make this a conditional install
  +       // based on --enable-bgp-control-plane flag
  +       CRDResourceName(v2alpha1.BGPPName),
  +       CRDResourceName(v2alpha1.BGPPoolName),
   }

Updating RBAC Roles

Cilium is installed with a service account, and this service account should be given RBAC permissions to access your new CRDs. The following files should be updated to include permissions to create, read, update, and delete your new CRDs.

  1. install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
  2. install/kubernetes/cilium/templates/cilium-operator/clusterrole.yaml
  3. install/kubernetes/cilium/templates/cilium-preflight/clusterrole.yaml

Here is a diff updating the Agent's ClusterRole template to include our new BGP CRDs:

  diff --git a/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml b/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
  index 9878401a81..5ba6c30cd7 100644
  --- a/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
  +++ b/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
  @@ -102,6 +102,8 @@ rules:
     - ciliumlocalredirectpolicies/finalizers
     - ciliumegressnatpolicies
     - ciliumendpointslices
  +  - ciliumbgppeeringpolicies
  +  - ciliumbgploadbalancerippools
     verbs:
     - '*'
   {{- end }}

It's important to note that neither the Agent nor the Operator installs these manifests into the Kubernetes cluster. This means that when testing your CRD, the updated ClusterRole must be applied to the cluster manually.

Also note that you should be specific about which verbs are added to the Agent's ClusterRole. This ensures a good security posture and follows best practice.

A convenient script for this follows:

  createTemplate() {
      if [ -z "${1}" ]; then
          echo "Commit SHA not set"
          return
      fi
      ciliumVersion=${1}
      # MODIFY THIS LINE: cd TO THE CILIUM ROOT DIR <-----
      cd install/kubernetes
      CILIUM_CI_TAG="${1}"
      helm template cilium ./cilium \
          --namespace kube-system \
          --set image.repository=quay.io/cilium/cilium-ci \
          --set image.tag=$CILIUM_CI_TAG \
          --set operator.image.repository=quay.io/cilium/operator \
          --set operator.image.suffix=-ci \
          --set operator.image.tag=$CILIUM_CI_TAG \
          --set clustermesh.apiserver.image.repository=quay.io/cilium/clustermesh-apiserver-ci \
          --set clustermesh.apiserver.image.tag=$CILIUM_CI_TAG \
          --set hubble.relay.image.repository=quay.io/cilium/hubble-relay-ci \
          --set hubble.relay.image.tag=$CILIUM_CI_TAG > /tmp/cilium.yaml
      echo "run kubectl apply -f /tmp/cilium.yaml"
  }

The above script renders the Cilium manifests, including the newest ClusterRole, into /tmp/cilium.yaml; applying that file installs them wherever your kubectl is pointed.