Installing the MetalLB Operator

As a cluster administrator, you can add the MetalLB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster.

MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator.
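
If you are not sure whether IP failover is configured, you can search for an existing IP failover deployment before you proceed. This check assumes the deployment name contains "ipfailover", which is the conventional name but is not guaranteed; substitute the name you used when you configured IP failover:

  $ oc get deployments --all-namespaces | grep -i ipfailover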

Installing the MetalLB Operator from the OperatorHub using the web console

As a cluster administrator, you can install the MetalLB Operator by using the OKD web console.

Prerequisites

  • Log in as a user with cluster-admin privileges.

Procedure

  1. In the OKD web console, navigate to Operators → OperatorHub.

  2. Search for the MetalLB Operator, then click Install.

  3. On the Install Operator page, accept the defaults and click Install.

Verification

  1. To confirm that the installation is successful:

    1. Navigate to the Operators → Installed Operators page.

    2. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded.

  2. If the Operator is not installed successfully, check the status of the Operator and review the logs:

    1. Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures.

    2. Navigate to the Workloads → Pods page and check the logs in any pods in the openshift-operators project that are reporting issues.
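
If you prefer to troubleshoot from the command line, the following commands inspect the same information as the console pages; <pod_name> is a placeholder for a pod that is reporting issues:

  $ oc get csv -n openshift-operators
  $ oc get pods -n openshift-operators
  $ oc logs -n openshift-operators <pod_name>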

Installing from OperatorHub using the CLI

Instead of using the OKD web console, you can install the MetalLB Operator from OperatorHub by using the OpenShift CLI (oc).

When you use the CLI, it is recommended that you install the Operator in the metallb-system namespace.

Prerequisites

  • A cluster installed on bare-metal hardware.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a namespace for the MetalLB Operator by entering the following command:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: metallb-system
    EOF
  2. Create an Operator group custom resource (CR) in the namespace:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: metallb-operator
      namespace: metallb-system
    EOF
  3. Confirm the Operator group is installed in the namespace:

    $ oc get operatorgroup -n metallb-system

    Example output

    NAME               AGE
    metallb-operator   14m
  4. Create a Subscription CR:

    1. Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: metallb-operator-sub
        namespace: metallb-system
      spec:
        channel: stable
        name: metallb-operator
        source: redhat-operators (1)
        sourceNamespace: openshift-marketplace
      (1) You must specify the redhat-operators value.
    2. To create the Subscription CR, run the following command:

      $ oc create -f metallb-sub.yaml
  5. Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace as shown in the following command:

    $ oc label ns metallb-system "openshift.io/cluster-monitoring=true"
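
    You can confirm that the namespace exists and, if you applied the optional label, that the label is present:

    $ oc get namespace metallb-system --show-labels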

Verification

The verification steps assume the MetalLB Operator is installed in the metallb-system namespace.

  1. Confirm the install plan is in the namespace:

    $ oc get installplan -n metallb-system

    Example output

    NAME            CSV                                    APPROVAL    APPROVED
    install-wzg94   metallb-operator.4.12.0-nnnnnnnnnnnn   Automatic   true

    Installation of the Operator might take a few seconds.

  2. To verify that the Operator is installed, enter the following command:

    $ oc get clusterserviceversion -n metallb-system \
      -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                                   Phase
    metallb-operator.4.12.0-nnnnnnnnnnnn   Succeeded

Starting MetalLB on your cluster

After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the MetalLB Operator.

Procedure

This procedure assumes the MetalLB Operator is installed in the metallb-system namespace. If you installed the Operator by using the web console, substitute openshift-operators for the namespace.

  1. Create a single instance of a MetalLB custom resource:

    $ cat << EOF | oc apply -f -
    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    EOF
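
    You can confirm that the custom resource was created:

    $ oc get metallb -n metallb-system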

Verification

Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running.

  1. Verify that the deployment for the controller is running:

    $ oc get deployment -n metallb-system controller

    Example output

    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    controller   1/1     1            1           11m
  2. Verify that the daemon set for the speaker is running:

    $ oc get daemonset -n metallb-system speaker

    Example output

    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    speaker   6         6         6       6            6           kubernetes.io/os=linux   18m

    The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster.
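
    For example, assuming no node selector limits the speaker pods, you can count the nodes in your cluster and compare the result to the DESIRED column of the daemon set:

    $ oc get nodes --no-headers | wc -l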

Deployment specifications for MetalLB

When you start an instance of MetalLB using the MetalLB custom resource, you can configure deployment specifications in the MetalLB custom resource to manage how the controller or speaker pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks:

  • Select nodes for MetalLB pod deployment.

  • Manage scheduling by using pod priority and pod affinity.

  • Assign CPU limits for MetalLB pods.

  • Assign a container RuntimeClass for MetalLB pods.

  • Assign metadata for MetalLB pods.

Limit speaker pods to specific nodes

By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods.

The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address.

If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes.
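
For example, the following Service manifest is a minimal sketch of this pattern. The service name, selector, and ports are hypothetical; the relevant setting is externalTrafficPolicy: Local:

  apiVersion: v1
  kind: Service
  metadata:
    name: example-service
  spec:
    type: LoadBalancer
    externalTrafficPolicy: Local
    selector:
      app: example
    ports:
    - port: 80
      targetPort: 8080

Pair a service like this with scheduling constraints, such as a node selector on the application deployment, so that the application pods land on the same nodes as the speaker pods.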

Example configuration to limit speaker pods to worker nodes

  apiVersion: metallb.io/v1beta1
  kind: MetalLB
  metadata:
    name: metallb
    namespace: metallb-system
  spec:
    nodeSelector: (1)
      node-role.kubernetes.io/worker: ""
    speakerTolerations: (2)
    - key: "Example"
      operator: "Exists"
      effect: "NoExecute"
(1) The example configuration assigns the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector.
(2) In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value using the operator.

After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker=.
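
For example:

  $ oc get daemonset -n metallb-system speaker
  $ oc get nodes -l node-role.kubernetes.io/worker=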

You can optionally use affinity rules to control which nodes the speaker pods are, or are not, scheduled on. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources.

Configuring pod priority and pod affinity in a MetalLB deployment

You can optionally assign pod priority and pod affinity rules to controller and speaker pods by configuring the MetalLB custom resource. The pod priority indicates the relative importance of a pod on a node, and the scheduler uses this priority when placing the pod. Set a high priority on your controller or speaker pod to ensure scheduling priority over other pods on the node.

Pod affinity manages relationships among pods. Assign pod affinity to the controller or speaker pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can place pods with logically related workloads on the same node, or ensure that pods with conflicting workloads run on separate nodes.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.

  • You have installed the MetalLB Operator.

Procedure

  1. Create a PriorityClass custom resource, such as myPriorityClass.yaml, to configure the priority level. This example uses a high-priority class:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: high-priority
    value: 1000000
  2. Apply the PriorityClass custom resource configuration:

    $ oc apply -f myPriorityClass.yaml
  3. Create a MetalLB custom resource, such as MetalLBPodConfig.yaml, to specify the priorityClassName and podAffinity values:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      logLevel: debug
      controllerConfig:
        priorityClassName: high-priority
        runtimeClassName: myclass
      speakerConfig:
        priorityClassName: high-priority
        runtimeClassName: myclass
        affinity:
          podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: metallb
              topologyKey: kubernetes.io/hostname
  4. Apply the MetalLB custom resource configuration:

    $ oc apply -f MetalLBPodConfig.yaml

Verification

  • To view the priority class that you assigned to pods in a namespace, run the following command, replacing <namespace> with your target namespace:

    $ oc get pods -n <namespace> -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName
  • To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod’s node by running the following command, replacing <namespace> with your target namespace:

    $ oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n <namespace>
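
  • You can also confirm that the PriorityClass resource exists:

    $ oc get priorityclass high-priority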

Configuring pod CPU limits in a MetalLB deployment

You can optionally assign pod CPU limits to controller and speaker pods by configuring the MetalLB custom resource. Defining CPU limits for the controller or speaker pods helps you to manage compute resources on the node. This ensures all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.

  • You have installed the MetalLB Operator.

Procedure

  1. Create a MetalLB custom resource file, such as CPULimits.yaml, to specify the cpu value for the controller and speaker pods:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      logLevel: debug
      controllerConfig:
        resources:
          limits:
            cpu: "200m"
      speakerConfig:
        resources:
          limits:
            cpu: "300m"
  2. Apply the MetalLB custom resource configuration:

    $ oc apply -f CPULimits.yaml

Verification

  • To view compute resources for a pod, run the following command, replacing <pod_name> with your target pod:

    $ oc describe pod <pod_name>
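
  • Alternatively, to print only the CPU limits for the MetalLB pods, you can run a command like the following. The app=metallb label selector is an assumption; verify the labels on your pods first:

    $ oc get pods -n metallb-system -l app=metallb \
      -o custom-columns=NAME:.metadata.name,CPU_LIMITS:.spec.containers[*].resources.limits.cpu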

Configuring a container runtime class in a MetalLB deployment

You can optionally assign a container runtime class to controller and speaker pods by configuring the MetalLB custom resource. For example, for Windows workloads, you can assign a Windows runtime class to the pod, which uses this runtime class for all containers in the pod.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.

  • You have installed the MetalLB Operator.

Procedure

  1. Create a RuntimeClass custom resource, such as myRuntimeClass.yaml, to define your runtime class:

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: myclass
    handler: myconfiguration
  2. Apply the RuntimeClass custom resource configuration:

    $ oc apply -f myRuntimeClass.yaml
  3. Create a MetalLB custom resource, such as MetalLBRuntime.yaml, to specify the runtimeClassName value:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      logLevel: debug
      controllerConfig:
        runtimeClassName: myclass
        annotations: (1)
          controller: demo
      speakerConfig:
        runtimeClassName: myclass
        annotations: (1)
          speaker: demo
    (1) This example uses annotations to add metadata such as build release information or GitHub pull request information. You can populate annotations with characters not permitted in labels. However, you cannot use annotations to identify or select objects.
  4. Apply the MetalLB custom resource configuration:

    $ oc apply -f MetalLBRuntime.yaml

Verification

  • To view the container runtime for a pod, run the following command:

    $ oc get pod -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName
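
  • To confirm that the RuntimeClass resource exists, and to view the annotations that you added to the pods, you can run commands like the following:

    $ oc get runtimeclass myclass
    $ oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,ANNOTATIONS:.metadata.annotations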
