Intro to Ceph

Whether you want to provide Ceph Object Storage and/or Ceph Block Device services to Cloud Platforms, deploy a Ceph File System or use Ceph for another purpose, all Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network, and the Ceph Storage Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also required when running Ceph File System clients.
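
As a sketch of what "talking to the cluster" looks like from a client, the Python bindings shipped with Ceph (the rados module) can connect through the monitors and report basic cluster statistics. This is a minimal sketch, assuming a running cluster, the python3-rados package, and a ceph.conf plus client keyring at their default locations:

    import rados

    # Connect using the local ceph.conf; the monitors listed there are the
    # cluster's entry point (the path below is the usual default).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Basic cluster-wide statistics reported through the monitors/managers.
    stats = cluster.get_cluster_stats()
    print("fsid:", cluster.get_fsid())
    print("kB used:", stats['kb_used'], "objects:", stats['num_objects'])

    cluster.shutdown()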


  • Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability. (A sketch of querying this cluster state programmatically follows this list.)

  • Managers: A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host python-based modules to manage and expose Ceph cluster information, including a web-based Ceph Dashboard and REST API. At least two managers are normally required for high availability.

  • Ceph OSDs: A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, and rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least 3 Ceph OSDs are normally required for redundancy and high availability.

  • MDSs: A Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph File System (i.e., Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers allow POSIX file system users to execute basic commands (like ls, find, etc.) without placing an enormous burden on the Ceph Storage Cluster.
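
The cluster state that the monitors and managers maintain can be queried programmatically as well as with the ceph CLI. The following is a hedged sketch using the Python rados bindings' mon_command() call to fetch the cluster status and the monitor map as JSON; the exact field names in the returned reports may vary between Ceph releases:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Ask the monitors for the overall cluster status (same data as `ceph status`).
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b'')
    status = json.loads(outbuf)
    print("health:", status.get("health", {}).get("status"))

    # Fetch the monitor map itself (same data as `ceph mon dump`).
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "mon dump", "format": "json"}), b'')
    monmap = json.loads(outbuf)
    print("monitors:", [m["name"] for m in monmap.get("mons", [])])

    cluster.shutdown()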

Ceph stores data as objects within logical storage pools. Using the CRUSH algorithm, Ceph calculates which placement group should contain the object, and further calculates which Ceph OSD Daemon should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically.
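
To illustrate the object-and-pool model described above, here is a minimal sketch (again using the Python rados bindings) that writes and reads one object in a pool; the pool name 'mypool' is a placeholder and must already exist. Note that the client itself computes the object's placement with CRUSH, so there is no central lookup table to consult:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context on a pool ('mypool' is a hypothetical, pre-existing pool).
    ioctx = cluster.open_ioctx('mypool')

    # Write an object; CRUSH maps its placement group to OSDs on the client side.
    ioctx.write_full('hello-object', b'Hello, Ceph!')
    print(ioctx.read('hello-object'))   # b'Hello, Ceph!'

    ioctx.close()
    cluster.shutdown()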

Recommendations

To begin using Ceph in production, you should review our hardware recommendations and operating system recommendations.

Get Involved

You can avail yourself of help, or contribute documentation, source code, or bug reports, by getting involved in the Ceph community.