RBD Persistent Cache

Shared, Read-only Parent Image Cache

Cloned RBD images from a parent usually modify only a small portion of the image. For example, in a VDI workload, the VMs are cloned from the same base image and initially differ only by hostname and IP address. During the boot stage, all of these VMs would re-read portions of duplicate parent image data from the RADOS cluster. A local cache of the parent image speeds up reads on each host and reduces client-to-cluster network traffic.

The RBD shared read-only parent image cache must be explicitly enabled in ceph.conf. The ceph-immutable-object-cache daemon is responsible for caching the parent content on the local disk, and future reads of that data will be serviced from the local cache.

Note

RBD shared read-only parent image cache requires the Ceph Nautilus release or later.


Enable RBD Shared Read-only Parent Image Cache

To enable the RBD shared read-only parent image cache, the following Ceph setting needs to be added to the [client] section of your ceph.conf file:

rbd parent cache enabled = true
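In context, the resulting section of ceph.conf might look like the following minimal sketch. Note that librbd reads this setting when a client starts, so existing clients need to be restarted to pick it up:

```ini
[client]
rbd parent cache enabled = true
```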

Immutable Object Cache Daemon

The ceph-immutable-object-cache daemon is responsible for caching parent image content within its local caching directory. For better performance it's recommended to use SSDs as the underlying storage.

The key components of the daemon are:

  • Domain socket based IPC: The daemon listens on a local domain socket at startup and waits for connections from librbd clients.

  • LRU based promotion/demotion policy: The daemon maintains in-memory statistics of cache hits on each cache file. It demotes cold cache files if capacity reaches the configured threshold.

  • File-based caching store: The daemon maintains a simple file-based cache store. On promotion, RADOS objects are fetched from the RADOS cluster and stored in the local caching directory.

On opening each cloned RBD image, librbd tries to connect to the cache daemon over its domain socket. If the connection succeeds, librbd automatically checks with the daemon on subsequent reads. If a read is not cached, the daemon promotes the RADOS object to the local caching directory, so the next read of that object is serviced from the local file. The daemon also maintains simple LRU statistics, so if there is not enough capacity it deletes some cold cache files.

The important cache options correspond to the following settings:

  • immutable_object_cache_path The immutable object cache data directory.

  • immutable_object_cache_max_size The maximum size of the immutable object cache.

  • immutable_object_cache_watermark The watermark for the cache. If the capacity reaches this watermark, the daemon deletes cold cache files based on the LRU statistics.
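As a sketch, these options can be set in the [client] section of ceph.conf on the cache host. The path and sizes below are illustrative values, not defaults; an SSD-backed directory is recommended as noted above:

```ini
[client]
immutable_object_cache_path = /mnt/ssd/immutable-object-cache
immutable_object_cache_max_size = 16G
immutable_object_cache_watermark = 0.9
```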

The ceph-immutable-object-cache daemon is available within the optional ceph-immutable-object-cache distribution package.

Important

The ceph-immutable-object-cache daemon requires the ability to connect to RADOS clusters.

The ceph-immutable-object-cache daemon should use a unique Ceph user ID. To create a Ceph user, use the ceph auth get-or-create command and specify the user name, monitor caps, and OSD caps:

  ceph auth get-or-create client.ceph-immutable-object-cache.{unique id} mon 'allow r' osd 'profile rbd-read-only'

The ceph-immutable-object-cache daemon can be managed by systemd by specifying the user ID as the daemon instance:

  systemctl enable ceph-immutable-object-cache@immutable-object-cache.{unique id}
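Putting the two steps together, a possible setup on a cache host might look like the following sketch. The user ID foo and the keyring path are assumptions for illustration, and the commands must run against a live cluster with admin credentials:

```shell
# Create a dedicated Ceph user for the daemon and save its keyring
# (user ID "foo" and the keyring path are illustrative)
ceph auth get-or-create client.ceph-immutable-object-cache.foo \
    mon 'allow r' osd 'profile rbd-read-only' \
    -o /etc/ceph/ceph.client.ceph-immutable-object-cache.foo.keyring

# Enable and start the daemon instance under systemd
systemctl enable --now ceph-immutable-object-cache@immutable-object-cache.foo
```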

The ceph-immutable-object-cache daemon can also be run in the foreground with the ceph-immutable-object-cache command:

  ceph-immutable-object-cache -f --log-file={log_path}