RBD Persistent Cache

Shared, Read-only Parent Image Cache

Cloned RBD images usually modify only a small fraction of the parent image. For example, in a VDI use case, VMs are cloned from the same base image and initially differ only by hostname and IP address. During boot, all of these VMs read portions of the same parent image data. A local cache of the parent image speeds up reads on the caching host and reduces client-to-cluster network traffic. The shared parent image cache must be explicitly enabled in ceph.conf. The ceph-immutable-object-cache daemon is responsible for caching the parent content on the local disk, and future reads of that data are serviced from the local cache.

Note

RBD shared read-only parent image cache requires the Ceph Nautilus release or later.


Enable RBD Shared Read-only Parent Image Cache

To enable the RBD shared read-only parent image cache, the following Ceph settings need to be added to the [client] section of your ceph.conf file (a combined snippet follows):

  rbd parent cache enabled = true
  rbd plugins = parent_cache
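
Put together, the relevant part of ceph.conf would look like this:

  [client]
      rbd parent cache enabled = true
      rbd plugins = parent_cache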

Immutable Object Cache Daemon

The ceph-immutable-object-cache daemon is responsible for caching parent image content within its local caching directory. For better performance it’s recommended to use SSDs as the underlying storage.

The key components of the daemon are:

  1. Domain socket based IPC: The daemon will listen on a local domain socket on start up and wait for connections from librbd clients.

  2. LRU based promotion/demotion policy: The daemon will maintain in-memory statistics of cache hits on each cache file. It will demote cold cache files when capacity reaches the configured threshold.

  3. File-based caching store: The daemon will maintain a simple file-based cache store. On promotion, RADOS objects will be fetched from the RADOS cluster and stored in the local caching directory.

On opening each cloned RBD image, librbd tries to connect to the cache daemon through its Unix domain socket. Once connected, librbd coordinates with the daemon on subsequent reads. When a read misses the cache, the daemon promotes the RADOS object to the local caching directory, so the next read of that object is serviced from the cache. The daemon also maintains simple LRU statistics so that, under capacity pressure, it evicts cold cache files as needed.
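
To make the promotion and eviction behaviour concrete, the following is a minimal, illustrative C++ sketch of an LRU list with a high-water mark. It is not the daemon's actual implementation; SimpleLruCache, ObjectId, and the commented-out remove_cache_file() helper are hypothetical names used only for this example.

  // Illustrative sketch only, not the daemon's actual code: an LRU list with
  // a high-water mark. SimpleLruCache, ObjectId and remove_cache_file() are
  // hypothetical names.
  #include <cstdint>
  #include <list>
  #include <string>
  #include <unordered_map>

  using ObjectId = std::string;

  class SimpleLruCache {
  public:
    SimpleLruCache(std::uint64_t max_size, double watermark)
      : max_size_(max_size), watermark_(watermark) {}

    // Cache miss: record the newly promoted object as the hottest entry,
    // then demote cold entries if the high-water mark is exceeded.
    void promote(const ObjectId& oid, std::uint64_t object_size) {
      lru_.push_front(oid);
      index_[oid] = Entry{lru_.begin(), object_size};
      used_size_ += object_size;
      evict_if_needed();
    }

    // Cache hit: move the entry to the hot end of the list.
    void touch(const ObjectId& oid) {
      auto it = index_.find(oid);
      if (it != index_.end()) {
        lru_.splice(lru_.begin(), lru_, it->second.pos);
      }
    }

  private:
    struct Entry {
      std::list<ObjectId>::iterator pos;
      std::uint64_t size;
    };

    void evict_if_needed() {
      // Demote (delete) the coldest cache files until usage drops below
      // max_size * watermark.
      while (!lru_.empty() &&
             used_size_ > static_cast<std::uint64_t>(max_size_ * watermark_)) {
        const ObjectId victim = lru_.back();
        lru_.pop_back();
        used_size_ -= index_[victim].size;
        index_.erase(victim);
        // remove_cache_file(victim);  // hypothetical helper: unlink the file
      }
    }

    std::uint64_t max_size_;
    double watermark_;
    std::uint64_t used_size_ = 0;
    std::list<ObjectId> lru_;                    // front = hot, back = cold
    std::unordered_map<ObjectId, Entry> index_;
  };

The list/hash-map pair keeps promotion, touch, and eviction at O(1), which is the usual trade-off for this kind of policy.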

Here are some important cache configuration settings (an example snippet follows the list):

  • immutable_object_cache_sock The path to the domain socket used for communication between librbd clients and the ceph-immutable-object-cache daemon.

  • immutable_object_cache_path The immutable object cache data directory.

  • immutable_object_cache_max_size The maximum size of the immutable object cache.

  • immutable_object_cache_watermark The high-water mark for the cache. If the capacity reaches this threshold, the daemon will delete cold cache files based on LRU statistics.
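
For illustration, these options might be set as follows; the values shown are example choices rather than defaults, the socket and cache paths are hypothetical, and the snippet assumes the daemon reads its options from the [client] section of ceph.conf:

  [client]
      immutable_object_cache_sock = /var/run/ceph/immutable_object_cache_sock
      immutable_object_cache_path = /mnt/ssd/ceph-immutable-object-cache
      immutable_object_cache_max_size = 16G
      immutable_object_cache_watermark = 0.9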

The ceph-immutable-object-cache daemon is available within the optional ceph-immutable-object-cache distribution package.

Important

The ceph-immutable-object-cache daemon requires the ability to connect to RADOS clusters.

The ceph-immutable-object-cache daemon should use a unique Ceph user ID. To create a Ceph user, run the ceph auth get-or-create command with the user name, monitor caps, and OSD caps:

  ceph auth get-or-create client.ceph-immutable-object-cache.{unique id} mon 'allow r' osd 'profile rbd-read-only'
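
For example, using a hypothetical instance id of foo:

  ceph auth get-or-create client.ceph-immutable-object-cache.foo mon 'allow r' osd 'profile rbd-read-only'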

The ceph-immutable-object-cache daemon can be managed by systemd by specifying the user ID as the daemon instance:

  systemctl enable ceph-immutable-object-cache@immutable-object-cache.{unique id}

The ceph-immutable-object-cache daemon can also be run in the foreground with the ceph-immutable-object-cache command:

  ceph-immutable-object-cache -f --log-file={log_path}
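
For example, with the same hypothetical instance id of foo and an example log path:

  systemctl enable ceph-immutable-object-cache@immutable-object-cache.foo
  systemctl start ceph-immutable-object-cache@immutable-object-cache.foo
  ceph-immutable-object-cache -f --log-file=/var/log/ceph/ceph-immutable-object-cache.foo.log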