Device Plugins

Device plugins let you configure your cluster with support for devices or resources that require vendor-specific setup, such as GPUs, NICs, FPGAs, or non-volatile main memory.

FEATURE STATE: Kubernetes v1.26 [stable]

Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the kubelet.

Instead of customizing the code for Kubernetes itself, vendors can implement a device plugin that you deploy either manually or as a DaemonSet. The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adapters, and other similar computing resources that may require vendor-specific initialization and setup.

Device plugin registration

The kubelet exports a Registration gRPC service:

  service Registration {
      rpc Register(RegisterRequest) returns (Empty) {}
  }

A device plugin can register itself with the kubelet through this gRPC service. During the registration, the device plugin needs to send:

  • The name of its Unix socket.
  • The Device Plugin API version against which it was built.
  • The ResourceName it wants to advertise. Here ResourceName needs to follow the extended resource naming scheme as vendor-domain/resourcetype. (For example, an NVIDIA GPU is advertised as nvidia.com/gpu.)

Following a successful registration, the device plugin sends the kubelet the list of devices it manages, and the kubelet is then in charge of advertising those resources to the API server as part of the kubelet node status update. For example, after a device plugin registers hardware-vendor.example/foo with the kubelet and reports two healthy devices on a node, the node status is updated to advertise that the node has 2 “Foo” devices installed and available.
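The resulting node status advertises the resource in the node's capacity and allocatable fields. A trimmed, illustrative excerpt of the Node object for the two-device example above might look like:

```yaml
# Trimmed excerpt of a Node object after the device plugin has registered
# hardware-vendor.example/foo and reported two healthy devices.
status:
  capacity:
    hardware-vendor.example/foo: "2"
  allocatable:
    hardware-vendor.example/foo: "2"
```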

Then, users can request devices as part of a Pod specification (see Container). Requesting extended resources is similar to managing requests and limits for other resources, with the following differences:

  • Extended resources are only supported as integer resources and cannot be overcommitted.
  • Devices cannot be shared between containers.

Example

Suppose a Kubernetes cluster is running a device plugin that advertises resource hardware-vendor.example/foo on certain nodes. Here is an example of a pod requesting this resource to run a demo workload:

  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-pod
  spec:
    containers:
      - name: demo-container-1
        image: registry.k8s.io/pause:2.0
        resources:
          limits:
            hardware-vendor.example/foo: 2
  #
  # This Pod needs 2 of the hardware-vendor.example/foo devices
  # and can only schedule onto a Node that's able to satisfy
  # that need.
  #
  # If the Node has more than 2 of those devices available, the
  # remainder would be available for other Pods to use.

Device plugin implementation

The general workflow of a device plugin includes the following steps:

  1. Initialization. During this phase, the device plugin performs vendor-specific initialization and setup to make sure the devices are in a ready state.

  2. The plugin starts a gRPC service, with a Unix socket under the host path /var/lib/kubelet/device-plugins/, that implements the following interfaces:

    service DevicePlugin {
        // GetDevicePluginOptions returns options to be communicated with Device Manager.
        rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}

        // ListAndWatch returns a stream of List of Devices
        // Whenever a Device state change or a Device disappears, ListAndWatch
        // returns the new list
        rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}

        // Allocate is called during container creation so that the Device
        // Plugin can run device specific operations and instruct Kubelet
        // of the steps to make the Device available in the container
        rpc Allocate(AllocateRequest) returns (AllocateResponse) {}

        // GetPreferredAllocation returns a preferred set of devices to allocate
        // from a list of available ones. The resulting preferred allocation is not
        // guaranteed to be the allocation ultimately performed by the
        // devicemanager. It is only designed to help the devicemanager make a more
        // informed allocation decision when possible.
        rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}

        // PreStartContainer is called, if indicated by Device Plugin during registration phase,
        // before each container start. Device plugin can run device specific operations
        // such as resetting the device before making devices available to the container.
        rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
    }

    Note: Plugins are not required to provide useful implementations for GetPreferredAllocation() or PreStartContainer(). Flags indicating the availability of these calls, if any, should be set in the DevicePluginOptions message sent back by a call to GetDevicePluginOptions(). The kubelet will always call GetDevicePluginOptions() to see which optional functions are available, before calling any of them directly.

  3. The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.

    Note: The ordering of the workflow is important. A plugin MUST start serving its gRPC service before registering itself with the kubelet for registration to succeed.

  4. After successfully registering itself, the device plugin runs in serving mode, during which it keeps monitoring device health and reports back to the kubelet upon any device state changes. It is also responsible for serving Allocate gRPC requests. During Allocate, the device plugin may do device-specific preparation; for example, GPU cleanup or QRNG initialization. If the operations succeed, the device plugin returns an AllocateResponse that contains container runtime configurations for accessing the allocated devices. The kubelet passes this information to the container runtime.
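The Allocate step in the workflow above can be illustrated with a simplified sketch. The types below are local stand-ins for the container-level part of the real `AllocateResponse` (the actual types live in `k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1`), and the `/dev/<id>` paths and environment variable name are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// DeviceSpec is a simplified stand-in for the device plugin API type of
// the same name: it tells the kubelet which host device node to expose
// inside the container, and where.
type DeviceSpec struct {
	HostPath      string // device node on the host
	ContainerPath string // where it appears inside the container
	Permissions   string // e.g. "rw"
}

// ContainerAllocateResponse is a simplified stand-in for the per-container
// part of the real AllocateResponse.
type ContainerAllocateResponse struct {
	Envs    map[string]string
	Devices []DeviceSpec
}

// allocate sketches the device-specific work an Allocate handler might do:
// for each requested device ID, return the runtime configuration the kubelet
// passes on to the container runtime.
func allocate(deviceIDs []string) ContainerAllocateResponse {
	resp := ContainerAllocateResponse{
		// Hypothetical environment variable naming the visible devices.
		Envs: map[string]string{"FOO_VISIBLE_DEVICES": strings.Join(deviceIDs, ",")},
	}
	for _, id := range deviceIDs {
		resp.Devices = append(resp.Devices, DeviceSpec{
			HostPath:      "/dev/" + id, // hypothetical host device node
			ContainerPath: "/dev/" + id,
			Permissions:   "rw",
		})
	}
	return resp
}

func main() {
	resp := allocate([]string{"foo0", "foo1"})
	fmt.Println(len(resp.Devices), resp.Devices[0].HostPath)
}
```

A real handler would also perform any vendor-specific preparation (such as the GPU cleanup mentioned above) before returning.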

Handling kubelet restarts

A device plugin is expected to detect kubelet restarts and re-register itself with the new kubelet instance. A new kubelet instance deletes all the existing Unix sockets under /var/lib/kubelet/device-plugins when it starts. A device plugin can monitor the deletion of its Unix socket and re-register itself upon such an event.

Device plugin deployment

You can deploy a device plugin as a DaemonSet, as a package for your node’s operating system, or manually.

The canonical directory /var/lib/kubelet/device-plugins requires privileged access, so a device plugin must run in a privileged security context. If you’re deploying a device plugin as a DaemonSet, /var/lib/kubelet/device-plugins must be mounted as a Volume in the plugin’s PodSpec.

If you choose the DaemonSet approach, you can rely on Kubernetes to place the device plugin's Pod onto Nodes, restart the daemon Pod after failure, and help automate upgrades.
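As a hedged sketch, a DaemonSet for such a plugin might look like the manifest below; the plugin name, namespace, and image are placeholders, while the privileged security context and hostPath volume mount follow the requirements stated above:

```yaml
# Hypothetical DaemonSet for a device plugin; name, namespace, and image
# are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: foo-device-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: foo-device-plugin
  template:
    metadata:
      labels:
        name: foo-device-plugin
    spec:
      containers:
        - name: foo-device-plugin
          image: example.com/foo-device-plugin:1.0
          securityContext:
            privileged: true
          volumeMounts:
            - name: device-plugin
              mountPath: /var/lib/kubelet/device-plugins
      volumes:
        - name: device-plugin
          hostPath:
            path: /var/lib/kubelet/device-plugins
```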

API compatibility

Previously, the versioning scheme required the device plugin's API version to match exactly the kubelet's version. Since the graduation of this feature to Beta in v1.12, this is no longer a hard requirement. The API is versioned and has been stable since the Beta graduation of this feature. Because of this, kubelet upgrades should be seamless, but there may still be changes in the API before stabilization, so upgrades are not guaranteed to be non-breaking.

Note: Although the Device Manager component of Kubernetes is a generally available feature, the device plugin API is not stable. For information on the device plugin API and version compatibility, read Device Plugin API versions.

As a project, Kubernetes recommends that device plugin developers:

  • Watch for Device Plugin API changes in the future releases.
  • Support multiple versions of the device plugin API for backward/forward compatibility.

To run device plugins on nodes that need to be upgraded to a Kubernetes release with a newer device plugin API version, upgrade your device plugins to support both versions before upgrading these nodes. Taking that approach will ensure the continuous functioning of the device allocations during the upgrade.

Monitoring device plugin resources

FEATURE STATE: Kubernetes v1.15 [beta]

In order to monitor resources provided by device plugins, monitoring agents need to be able to discover the set of devices that are in-use on the node and obtain metadata to describe which container the metric should be associated with. Prometheus metrics exposed by device monitoring agents should follow the Kubernetes Instrumentation Guidelines, identifying containers using pod, namespace, and container prometheus labels.

The kubelet provides a gRPC service to enable discovery of in-use devices, and to provide metadata for these devices:

  // PodResourcesLister is a service provided by the kubelet that provides information about the
  // node resources consumed by pods and containers on the node
  service PodResourcesLister {
      rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
      rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}
      rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}
  }

List gRPC endpoint

The List endpoint provides information on the resources of running pods, with details such as the IDs of exclusively allocated CPUs, the device IDs as reported by device plugins, and the ID of the NUMA node where these devices are allocated. For NUMA-based machines, it also contains information about memory and hugepages reserved for a container.

Starting from Kubernetes v1.27, the List endpoint can provide information on resources of running pods allocated in ResourceClaims by the DynamicResourceAllocation API. To enable this feature, the kubelet must be started with the following flags:

  --feature-gates=DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true

  // ListPodResourcesResponse is the response returned by List function
  message ListPodResourcesResponse {
      repeated PodResources pod_resources = 1;
  }

  // PodResources contains information about the node resources assigned to a pod
  message PodResources {
      string name = 1;
      string namespace = 2;
      repeated ContainerResources containers = 3;
  }

  // ContainerResources contains information about the resources assigned to a container
  message ContainerResources {
      string name = 1;
      repeated ContainerDevices devices = 2;
      repeated int64 cpu_ids = 3;
      repeated ContainerMemory memory = 4;
      repeated DynamicResource dynamic_resources = 5;
  }

  // ContainerMemory contains information about memory and hugepages assigned to a container
  message ContainerMemory {
      string memory_type = 1;
      uint64 size = 2;
      TopologyInfo topology = 3;
  }

  // Topology describes hardware topology of the resource
  message TopologyInfo {
      repeated NUMANode nodes = 1;
  }

  // NUMA representation of NUMA node
  message NUMANode {
      int64 ID = 1;
  }

  // ContainerDevices contains information about the devices assigned to a container
  message ContainerDevices {
      string resource_name = 1;
      repeated string device_ids = 2;
      TopologyInfo topology = 3;
  }

  // DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation
  message DynamicResource {
      string class_name = 1;
      string claim_name = 2;
      string claim_namespace = 3;
      repeated ClaimResource claim_resources = 4;
  }

  // ClaimResource contains per-plugin resource information
  message ClaimResource {
      repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = "CDIDevices"];
  }

  // CDIDevice specifies a CDI device information
  message CDIDevice {
      // Fully qualified CDI device name
      // for example: vendor.com/gpu=gpudevice1
      // see more details in the CDI specification:
      // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md
      string name = 1;
  }

Note:

cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoint needs to be used in conjunction with the GetAllocatableResources endpoint as explained below:

  1. Call GetAllocatableResources to get a list of all the allocatable CPUs.
  2. Collect the cpu_ids from all of the ContainerResources returned by List.
  3. Subtract the CPUs collected in step 2 from the allocatable CPUs returned by GetAllocatableResources; the remainder is the shared pool.

GetAllocatableResources gRPC endpoint

FEATURE STATE: Kubernetes v1.23 [beta]

GetAllocatableResources provides information on the resources initially available on the worker node. It provides more information than the kubelet exports to the API server.

Note:

GetAllocatableResources should only be used to evaluate allocatable resources on a node. If the goal is to evaluate free/unallocated resources, it should be used in conjunction with the List() endpoint. The result obtained by GetAllocatableResources remains the same unless the underlying resources exposed to the kubelet change. This happens rarely but, when it does (for example: hotplug/hotunplug, device health changes), the client is expected to call the GetAllocatableResources endpoint again.

However, calling the GetAllocatableResources endpoint is not sufficient in the case of a CPU and/or memory update; the kubelet needs to be restarted to reflect the correct resource capacity and allocatable values.

  // AllocatableResourcesResponse contains information about all the devices known by the kubelet
  message AllocatableResourcesResponse {
      repeated ContainerDevices devices = 1;
      repeated int64 cpu_ids = 2;
      repeated ContainerMemory memory = 3;
  }

Starting from Kubernetes v1.23, GetAllocatableResources is enabled by default. You can disable it by turning off the KubeletPodResourcesGetAllocatable feature gate.

Before Kubernetes v1.23, to enable this feature the kubelet must be started with the following flag:

  --feature-gates=KubeletPodResourcesGetAllocatable=true

ContainerDevices expose the topology information declaring to which NUMA cells the device has affinity. The NUMA cells are identified by an opaque integer ID, whose value is consistent with what device plugins report when they register themselves to the kubelet.

The gRPC service is served over a unix socket at /var/lib/kubelet/pod-resources/kubelet.sock. Monitoring agents for device plugin resources can be deployed as a daemon, or as a DaemonSet. The canonical directory /var/lib/kubelet/pod-resources requires privileged access, so monitoring agents must run in a privileged security context. If a device monitoring agent is running as a DaemonSet, /var/lib/kubelet/pod-resources must be mounted as a Volume in the device monitoring agent’s PodSpec.

Support for the PodResourcesLister service requires the KubeletPodResources feature gate to be enabled. It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.

Get gRPC endpoint

FEATURE STATE: Kubernetes v1.27 [alpha]

The Get endpoint provides information on the resources of a running Pod. It exposes information similar to that described for the List endpoint. The Get endpoint requires the PodName and PodNamespace of the running Pod.

  // GetPodResourcesRequest contains information about the pod
  message GetPodResourcesRequest {
      string pod_name = 1;
      string pod_namespace = 2;
  }

To enable this feature, you must start your kubelet services with the following flag:

  --feature-gates=KubeletPodResourcesGet=true

The Get endpoint can provide Pod information related to dynamic resources allocated by the dynamic resource allocation API. To enable this feature, you must ensure your kubelet services are started with the following flags:

  --feature-gates=KubeletPodResourcesGet=true,DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true

Device plugin integration with the Topology Manager

FEATURE STATE: Kubernetes v1.18 [beta]

The Topology Manager is a kubelet component that allows resources to be coordinated in a topology-aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo struct.

  message TopologyInfo {
      repeated NUMANode nodes = 1;
  }

  message NUMANode {
      int64 ID = 1;
  }

Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The device manager will then use this information to consult with the Topology Manager and make resource assignment decisions.

TopologyInfo supports setting a nodes field to either nil or a list of NUMA nodes. This allows the Device Plugin to advertise a device that spans multiple NUMA nodes.

Setting TopologyInfo to nil or providing an empty list of NUMA nodes for a given device indicates that the Device Plugin does not have a NUMA affinity preference for that device.

An example TopologyInfo struct populated for a device by a Device Plugin:

  pluginapi.Device{
      ID:     "25102017",
      Health: pluginapi.Healthy,
      Topology: &pluginapi.TopologyInfo{
          Nodes: []*pluginapi.NUMANode{
              {ID: 0},
          },
      },
  }

Device plugin examples

Note: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide before submitting a change.

Here are some examples of device plugin implementations:

What’s next