Extend Service IP Ranges

FEATURE STATE: Kubernetes v1.29 [alpha]

This page explains how to extend the Service IP range assigned to a cluster.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.

Your Kubernetes server must be at or later than version v1.29. To check the version, enter kubectl version.
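For example:

```shell
kubectl version
```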

API

If the kube-apiserver has the MultiCIDRServiceAllocator feature gate enabled and the networking.k8s.io/v1alpha1 API active, the cluster creates a new ServiceCIDR object that takes the well-known name kubernetes, and that uses an IP address range based on the value of the --service-cluster-ip-range command line argument to kube-apiserver.
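As a minimal sketch, assuming you can edit the kube-apiserver invocation directly (for example, in a kubeadm static Pod manifest), the relevant flags look like this; all other flags that a real kube-apiserver needs are omitted:

```shell
kube-apiserver --service-cluster-ip-range=10.96.0.0/28 \
    --feature-gates=MultiCIDRServiceAllocator=true \
    --runtime-config=networking.k8s.io/v1alpha1=true
```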

```shell
kubectl get servicecidr
```

The output is similar to:

```
NAME         CIDRS          AGE
kubernetes   10.96.0.0/28   17d
```

The well-known kubernetes Service, which exposes the kube-apiserver endpoint to the Pods, uses the first IP address from the default ServiceCIDR range as its cluster IP address.

```shell
kubectl get service kubernetes
```

The output is similar to:

```
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17d
```

The default Service, in this case, uses the ClusterIP 10.96.0.1, which has a corresponding IPAddress object.

```shell
kubectl get ipaddress 10.96.0.1
```

The output is similar to:

```
NAME        PARENTREF
10.96.0.1   services/default/kubernetes
```
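You can also list every IPAddress object in the cluster to see which Service each address belongs to. The output below is an illustrative sketch, assuming a typical cluster where kube-dns holds the tenth address of the range:

```shell
kubectl get ipaddresses
```

```
NAME         PARENTREF
10.96.0.1    services/default/kubernetes
10.96.0.10   services/kube-system/kube-dns
```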

The ServiceCIDRs are protected with finalizers, to avoid leaving Service ClusterIPs orphaned; the finalizer is only removed if there is another subnet that contains the existing IPAddresses, or if there are no IPAddresses belonging to the subnet.
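You can inspect that finalizer on the default ServiceCIDR directly; the exact rendering of the jsonpath output may differ, but the finalizer name matches the one shown later on this page:

```shell
kubectl get servicecidr kubernetes -o jsonpath='{.metadata.finalizers}'
```

```
["networking.k8s.io/service-cidr-finalizer"]
```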

Extend the number of available IPs for Services

There are cases where users need to increase the number of addresses available to Services. Previously, increasing the Service range was a disruptive operation that could also cause data loss. With this new feature, users only need to add a new ServiceCIDR to increase the number of available addresses.

Adding a new ServiceCIDR

On a cluster with a 10.96.0.0/28 range for Services, there are only 2^(32-28) - 2 = 14 IP addresses available. The kubernetes.default Service is always created; for this example, that leaves you with only 13 possible Services.
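As a quick sanity check of that arithmetic (this is plain shell arithmetic, not a kubectl feature):

```shell
# A /28 holds 2^(32-28) = 16 addresses; the first and last addresses
# of the range are never allocated, leaving 14 for Services.
echo $(( (1 << (32 - 28)) - 2 ))
# 14
```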

```shell
for i in $(seq 1 13); do kubectl create service clusterip "test-$i" --tcp 80 -o json | jq -r .spec.clusterIP; done
```

The output is similar to:

```
10.96.0.11
10.96.0.5
10.96.0.12
10.96.0.13
10.96.0.14
10.96.0.2
10.96.0.3
10.96.0.4
10.96.0.6
10.96.0.7
10.96.0.8
10.96.0.9
error: failed to create ClusterIP service: Internal error occurred: failed to allocate a serviceIP: range is full
```

You can increase the number of IP addresses available for Services by creating a new ServiceCIDR that extends or adds new IP address ranges.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  name: newcidr1
spec:
  cidrs:
  - 10.96.0.0/24
EOF
```

```
servicecidr.networking.k8s.io/newcidr1 created
```

This allows you to create new Services with ClusterIPs that are picked from this new range.

```shell
for i in $(seq 13 16); do kubectl create service clusterip "test-$i" --tcp 80 -o json | jq -r .spec.clusterIP; done
```

The output is similar to:

```
10.96.0.48
10.96.0.200
10.96.0.121
10.96.0.144
```
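You can confirm that both ranges are now registered; the AGE values below are illustrative:

```shell
kubectl get servicecidr
```

```
NAME         CIDRS          AGE
kubernetes   10.96.0.0/28   17d
newcidr1     10.96.0.0/24   2m
```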

Deleting a ServiceCIDR

You cannot delete a ServiceCIDR if there are IPAddresses that depend on the ServiceCIDR.
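Before attempting a deletion, you can check which IPAddresses reference Services and would therefore keep the ServiceCIDR alive. This is a sketch using custom columns over fields of the v1alpha1 IPAddress spec:

```shell
kubectl get ipaddresses -o custom-columns='IP:.metadata.name,NAMESPACE:.spec.parentRef.namespace,SERVICE:.spec.parentRef.name'
```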

```shell
kubectl delete servicecidr newcidr1
```

```
servicecidr.networking.k8s.io "newcidr1" deleted
```

Kubernetes uses a finalizer on the ServiceCIDR to track this dependent relationship.

```shell
kubectl get servicecidr newcidr1 -o yaml
```

The output is similar to:

```yaml
apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  creationTimestamp: "2023-10-12T15:11:07Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2023-10-12T15:12:45Z"
  finalizers:
  - networking.k8s.io/service-cidr-finalizer
  name: newcidr1
  resourceVersion: "1133"
  uid: 5ffd8afe-c78f-4e60-ae76-cec448a8af40
spec:
  cidrs:
  - 10.96.0.0/24
status:
  conditions:
  - lastTransitionTime: "2023-10-12T15:12:45Z"
    message: There are still IPAddresses referencing the ServiceCIDR, please remove
      them or create a new ServiceCIDR
    reason: OrphanIPAddress
    status: "False"
    type: Ready
```

By removing the Services that contain the IP addresses blocking the deletion of the ServiceCIDR,

```shell
for i in $(seq 13 16); do kubectl delete service "test-$i" ; done
```

```
service "test-13" deleted
service "test-14" deleted
service "test-15" deleted
service "test-16" deleted
```

the control plane notices the removal. It then removes the finalizer, so that the ServiceCIDR that was pending deletion is actually removed.
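If you are scripting this, you can block until the deletion completes; kubectl wait supports waiting for an object to be deleted:

```shell
kubectl wait --for=delete servicecidr/newcidr1 --timeout=60s
```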

```shell
kubectl get servicecidr newcidr1
```

```
Error from server (NotFound): servicecidrs.networking.k8s.io "newcidr1" not found
```