ceph – ceph administration tool

Synopsis

ceph auth [ add | caps | del | export | get | get-key | get-or-create | get-or-create-key | import | list | print-key | print_key ] …

ceph compact

ceph config [ dump | ls | help | get | show | show-with-defaults | set | rm | log | reset | assimilate-conf | generate-minimal-conf ] …

ceph config-key [ rm | exists | get | ls | dump | set ] …

ceph daemon <name>|<path> <command> …

ceph daemonperf <name>|<path> [ interval [ count ] ]

ceph df {detail}

ceph fs [ ls | new | reset | rm ] …

ceph fsid

ceph health {detail}

ceph injectargs <injectedargs> [ <injectedargs>… ]

ceph log <logtext> [ <logtext>… ]

ceph mds [ compat | fail | rm | rmfailed | set_state | stat | repaired ] …

ceph mon [ add | dump | getmap | remove | stat ] …

ceph mon_status

ceph osd [ blacklist | blocked-by | create | new | deep-scrub | df | down | dump | erasure-code-profile | find | getcrushmap | getmap | getmaxosd | in | ls | lspools | map | metadata | ok-to-stop | out | pause | perf | pg-temp | force-create-pg | primary-affinity | primary-temp | repair | reweight | reweight-by-pg | rm | destroy | purge | safe-to-destroy | scrub | set | setcrushmap | setmaxosd | stat | tree | unpause | unset ] …

ceph osd crush [ add | add-bucket | create-or-move | dump | get-tunable | link | move | remove | rename-bucket | reweight | reweight-all | reweight-subtree | rm | rule | set | set-tunable | show-tunables | tunables | unlink ] …

ceph osd pool [ create | delete | get | get-quota | ls | mksnap | rename | rmsnap | set | set-quota | stats ] …

ceph osd pool application [ disable | enable | get | rm | set ] …

ceph osd tier [ add | add-cache | cache-mode | remove | remove-overlay | set-overlay ] …

ceph pg [ debug | deep-scrub | dump | dump_json | dump_pools_json | dump_stuck | getmap | ls | ls-by-osd | ls-by-pool | ls-by-primary | map | repair | scrub | stat ] …

ceph quorum_status

ceph report { <tags> [ <tags>… ] }

ceph scrub

ceph status

ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

ceph tell <name (type.id)> <command> [options…]

ceph version

Description

ceph is a control utility which is used for manual deployment and maintenance of a Ceph cluster. It provides a diverse set of commands for deploying monitors, OSDs, placement groups, and MDSes, and for the overall maintenance and administration of the cluster.

Commands

auth

Manage authentication keys. It is used for adding, removing, exporting or updating authentication keys for a particular entity such as a monitor or OSD. It uses some additional subcommands.

Subcommand add adds authentication info for a particular entity from an input file, or generates a random key if no input is given, along with any caps specified in the command.

Usage:

  ceph auth add <entity> {<caps> [<caps>...]}

Subcommand caps updates caps for name from caps specified in the command.

Usage:

  ceph auth caps <entity> <caps> [<caps>...]
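
For example, to restrict a hypothetical client.foo to read-only access on the monitors and OSDs (the entity name and caps here are illustrative):

Example:

  ceph auth caps client.foo mon 'allow r' osd 'allow r'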

Subcommand del deletes all caps for name.

Usage:

  ceph auth del <entity>

Subcommand export writes keyring for requested entity, or master keyring if none given.

Usage:

  ceph auth export {<entity>}

Subcommand get writes keyring file with requested key.

Usage:

  ceph auth get <entity>

Subcommand get-key displays requested key.

Usage:

  ceph auth get-key <entity>

Subcommand get-or-create adds authentication info for a particular entity from an input file, or generates a random key if no input is given, along with any caps specified in the command.

Usage:

  ceph auth get-or-create <entity> {<caps> [<caps>...]}
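
For example, to create (or fetch, if it already exists) a key for a hypothetical client.backup with read access to the monitors and read/write access to a pool named backups (all names here are illustrative):

Example:

  ceph auth get-or-create client.backup mon 'allow r' osd 'allow rw pool=backups'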

Subcommand get-or-create-key gets or adds a key for name from the system/caps pairs specified in the command. If the key already exists, any given caps must match the existing caps for that key.

Usage:

  ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand import reads keyring from input file.

Usage:

  ceph auth import

Subcommand ls lists authentication state.

Usage:

  ceph auth ls

Subcommand print-key displays requested key.

Usage:

  ceph auth print-key <entity>

Subcommand print_key displays requested key.

Usage:

  ceph auth print_key <entity>

compact

Causes compaction of the monitor's leveldb storage.

Usage:

  ceph compact

config

Configure the cluster. By default, Ceph daemons and clients retrieve their configuration options from the monitor when they start, and are updated if any of the tracked options is changed at run time. It uses the following additional subcommands.

Subcommand dump dumps all options for the cluster.

Usage:

  ceph config dump

Subcommand ls lists all option names for the cluster.

Usage:

  ceph config ls

Subcommand help describes the specified configuration option.

Usage:

  ceph config help <option>

Subcommand get dumps the option(s) for the specified entity.

Usage:

  ceph config get <who> {<option>}

Subcommand show displays the running configuration of the specified entity. Please note, unlike get, which only shows the options managed by the monitor, show displays all the configuration values actively in use. These options are pulled from several sources, for instance, the compiled-in default value, the monitor's configuration database, and the ceph.conf file on the host. The options can even be overridden at runtime, so there is a chance that the configuration options in the output of show could be different from those in the output of get.

Usage:

  ceph config show {<who>}

Subcommand show-with-defaults displays the running configuration along with the compiled-in defaults of the specified entity.

Usage:

  ceph config show-with-defaults {<who>}

Subcommand set sets an option for one or more specified entities.

Usage:

  ceph config set <who> <option> <value> {--force}
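
For example, to raise the debug level of a single OSD at runtime (the daemon name and value here are illustrative):

Example:

  ceph config set osd.0 debug_osd 20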

Subcommand rm clears an option for one or more entities.

Usage:

  ceph config rm <who> <option>

Subcommand log shows the recent history of config changes. If the count option is omitted it defaults to 10.

Usage:

  ceph config log {<count>}

Subcommand reset reverts configuration to the specified historical version.

Usage:

  ceph config reset <version>

Subcommand assimilate-conf assimilates options from stdin, and returns a new, minimal conf file.

Usage:

  ceph config assimilate-conf -i <input-config-path> > <output-config-path>
  ceph config assimilate-conf < <input-config-path>

Subcommand generate-minimal-conf generates a minimal ceph.conf file, which can be used for bootstrapping a daemon or a client.

Usage:

  ceph config generate-minimal-conf > <minimal-config-path>

config-key

Manage configuration keys. config-key is a general purpose key/value service offered by the monitors. This service is mainly used by Ceph tools and daemons for persisting various settings; among other things, ceph-mgr modules use it for storing their options. It uses some additional subcommands.

Subcommand rm deletes a configuration key.

Usage:

  ceph config-key rm <key>

Subcommand exists checks for a configuration key's existence.

Usage:

  ceph config-key exists <key>

Subcommand get gets the configuration key.

Usage:

  ceph config-key get <key>

Subcommand ls lists configuration keys.

Usage:

  ceph config-key ls

Subcommand dump dumps configuration keys and values.

Usage:

  ceph config-key dump

Subcommand set puts a configuration key and value.

Usage:

  ceph config-key set <key> {<val>}
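
For example, to store an arbitrary value under a hypothetical key (both key and value here are illustrative):

Example:

  ceph config-key set foo bar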

daemon

Submit admin-socket commands.

Usage:

  ceph daemon {daemon_name|socket_path} {command} ...

Example:

  ceph daemon osd.0 help

daemonperf

Watch performance counters from a Ceph daemon.

Usage:

  ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
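
For example, to sample osd.0's performance counters every two seconds, five times (the daemon name, interval, and count here are illustrative):

Example:

  ceph daemonperf osd.0 2 5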

df

Show cluster’s free space status.

Usage:

  ceph df {detail}

features

Show the releases and features of all daemons and clients connected to the cluster, along with a count of each in every bucket, grouped by the corresponding features/releases. Each release of Ceph supports a different set of features, expressed by the features bitmask. New cluster features require that clients support the feature, or else they are not allowed to connect to the cluster. As new features or capabilities are enabled after an upgrade, older clients are prevented from connecting.

Usage:

  ceph features

fs

Manage CephFS file systems. It uses some additional subcommands.

Subcommand ls lists file systems.

Usage:

  ceph fs ls

Subcommand new makes a new file system using named pools <metadata> and <data>.

Usage:

  ceph fs new <fs_name> <metadata> <data>
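
For example, to create a file system named cephfs from two previously created pools (the pool names here are illustrative):

Example:

  ceph fs new cephfs cephfs_metadata cephfs_data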

Subcommand reset is used for disaster recovery only: reset to a single-MDS map.

Usage:

  ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand rm disables the named file system.

Usage:

  ceph fs rm <fs_name> {--yes-i-really-mean-it}

fsid

Show cluster’s FSID/UUID.

Usage:

  ceph fsid

health

Show cluster’s health.

Usage:

  ceph health {detail}

heap

Show heap usage info (available only if compiled with tcmalloc).

Usage:

  ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats

Subcommand release makes TCMalloc release no-longer-used memory back to the kernel at once.

Usage:

  ceph tell <name (type.id)> heap release

Subcommand (get|set)_release_rate gets or sets the TCMalloc memory release rate. TCMalloc releases no-longer-used memory back to the kernel gradually; the rate controls how quickly this happens. Increase this setting to make TCMalloc return unused memory more frequently: 0 means never return memory to the system, and 1 means wait for 1000 pages after releasing a page to the system. The default is 1.0.

Usage:

  ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}
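
For example, to dump heap statistics from a hypothetical osd.0:

Example:

  ceph tell osd.0 heap stats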

injectargs

Inject configuration arguments into the monitor.

Usage:

  ceph injectargs <injected_args> [<injected_args>...]

log

Log supplied text to the monitor log.

Usage:

  ceph log <logtext> [<logtext>...]

mds

Manage metadata server configuration and administration. It uses some additional subcommands.

Subcommand compat manages compatible features. It uses some additional subcommands.

Subcommand rm_compat removes compatible feature.

Usage:

  ceph mds compat rm_compat <int[0-]>

Subcommand rm_incompat removes incompatible feature.

Usage:

  ceph mds compat rm_incompat <int[0-]>

Subcommand show shows mds compatibility settings.

Usage:

  ceph mds compat show

Subcommand fail forces an MDS to the failed state.

Usage:

  ceph mds fail <role|gid>

Subcommand rm removes an inactive MDS.

Usage:

  ceph mds rm <int[0-]> <name (type.id)>

Subcommand rmfailed removes failed mds.

Usage:

  ceph mds rmfailed <int[0-]>

Subcommand set_state sets mds state of <gid> to <numeric-state>.

Usage:

  ceph mds set_state <int[0-]> <int[0-20]>

Subcommand stat shows MDS status.

Usage:

  ceph mds stat

Subcommand repaired marks a damaged MDS rank as no longer damaged.

Usage:

  ceph mds repaired <role>

mon

Manage monitor configuration and administration. It uses some additional subcommands.

Subcommand add adds new monitor named <name> at <addr>.

Usage:

  ceph mon add <name> <IPaddr[:port]>
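
For example, to add a monitor named c at a hypothetical address:

Example:

  ceph mon add c 10.0.0.3:6789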

Subcommand dump dumps the formatted monmap (optionally from epoch).

Usage:

  ceph mon dump {<int[0-]>}

Subcommand getmap gets monmap.

Usage:

  ceph mon getmap {<int[0-]>}

Subcommand remove removes monitor named <name>.

Usage:

  ceph mon remove <name>

Subcommand stat summarizes monitor status.

Usage:

  ceph mon stat

mon_status

Reports status of monitors.

Usage:

  ceph mon_status

mgr

Ceph manager daemon configuration and management.

Subcommand dump dumps the latest MgrMap, which describes the active and standby manager daemons.

Usage:

  ceph mgr dump

Subcommand fail will mark a manager daemon as failed, removing it from the manager map. If it is the active manager daemon, a standby will take its place.

Usage:

  ceph mgr fail <name>

Subcommand module ls will list currently enabled manager modules (plugins).

Usage:

  ceph mgr module ls

Subcommand module enable will enable a manager module. Available modules are included in MgrMap and visible via mgr dump.

Usage:

  ceph mgr module enable <module>
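
For example, to enable the dashboard module:

Example:

  ceph mgr module enable dashboard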

Subcommand module disable will disable an active manager module.

Usage:

  ceph mgr module disable <module>

Subcommand metadata will report metadata about all manager daemons or, if the name is specified, a single manager daemon.

Usage:

  ceph mgr metadata [name]

Subcommand versions will report a count of running daemon versions.

Usage:

  ceph mgr versions

Subcommand count-metadata will report a count of any daemon metadata field.

Usage:

  ceph mgr count-metadata <field>

osd

Manage OSD configuration and administration. It uses some additional subcommands.

Subcommand blacklist manages blacklisted clients. It uses some additional subcommands.

Subcommand add adds <addr> to the blacklist (optionally until <expire> seconds from now).

Usage:

  ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
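
For example, to blacklist a hypothetical client address for ten minutes (the address and duration here are illustrative):

Example:

  ceph osd blacklist add 192.168.0.10:0/3214 600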

Subcommand ls shows blacklisted clients.

Usage:

  ceph osd blacklist ls

Subcommand rm removes <addr> from the blacklist.

Usage:

  ceph osd blacklist rm <EntityAddr>

Subcommand blocked-by prints a histogram of which OSDs are blocking their peers.

Usage:

  ceph osd blocked-by

Subcommand create creates a new OSD (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in a future release.

Subcommand new should instead be used.

Usage:

  ceph osd create {<uuid>} {<id>}

Subcommand new can be used to create a new OSD or to recreate a previously destroyed OSD with a specific id. The new OSD will have the specified uuid, and the command expects a JSON file containing the base64 cephx key for auth entity client.osd.<id>, as well as an optional base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying the accompanying lockbox cephx key.

Usage:

  ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional, but if provided is expected to take one of the following forms:

  {
      "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
      "crush_device_class": "myclass"
  }

Or:

  {
      "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
      "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
      "dmcrypt_key": "<dm-crypt key>",
      "crush_device_class": "myclass"
  }

Or:

  {
      "crush_device_class": "myclass"
  }

The “crush_device_class” property is optional. If specified, it will set the initial CRUSH device class for the new OSD.

Subcommand crush is used for CRUSH management. It uses some additional subcommands.

Subcommand add adds or updates the crushmap position and weight for <name> with <weight> and location <args>.

Usage:

  ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
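
For example, to add a hypothetical osd.5 with weight 1.0 under a host bucket named node2 (the names and weight here are illustrative):

Example:

  ceph osd crush add osd.5 1.0 host=node2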

Subcommand add-bucket adds a no-parent (probably root) crush bucket <name> of type <type>.

Usage:

  ceph osd crush add-bucket <name> <type>

Subcommand create-or-move creates an entry or moves the existing entry for <name> <weight> at/to location <args>.

Usage:

  ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
  [<args>...]

Subcommand dump dumps crush map.

Usage:

  ceph osd crush dump

Subcommand get-tunable gets the crush tunable straw_calc_version.

Usage:

  ceph osd crush get-tunable straw_calc_version

Subcommand link links existing entry for <name> under location <args>.

Usage:

  ceph osd crush link <name> <args> [<args>...]

Subcommand move moves existing entry for <name> to location <args>.

Usage:

  ceph osd crush move <name> <args> [<args>...]

Subcommand remove removes <name> from the crush map (everywhere, or just at <ancestor>).

Usage:

  ceph osd crush remove <name> {<ancestor>}

Subcommand rename-bucket renames bucket <srcname> to <dstname>.

Usage:

  ceph osd crush rename-bucket <srcname> <dstname>

Subcommand reweight changes <name>'s weight to <weight> in the crush map.

Usage:

  ceph osd crush reweight <name> <float[0.0-]>

Subcommand reweight-all recalculates the weights for the tree to ensure they sum correctly.

Usage:

  ceph osd crush reweight-all

Subcommand reweight-subtree changes all leaf items beneath <name> to <weight> in the crush map.

Usage:

  ceph osd crush reweight-subtree <name> <weight>

Subcommand rm removes <name> from the crush map (everywhere, or just at <ancestor>).

Usage:

  ceph osd crush rm <name> {<ancestor>}

Subcommand rule is used for creating crush rules. It uses some additional subcommands.

Subcommand create-erasure creates crush rule <name> for an erasure coded pool created with <profile> (default default).

Usage:

  ceph osd crush rule create-erasure <name> {<profile>}

Subcommand create-simple creates crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep is best for erasure pools).

Usage:

  ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
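
For example, to create a rule that replicates across host buckets under the default root (the rule name here is illustrative):

Example:

  ceph osd crush rule create-simple myrule default host firstn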

Subcommand dump dumps crush rule <name> (default all).

Usage:

  ceph osd crush rule dump {<name>}

Subcommand ls lists crush rules.

Usage:

  ceph osd crush rule ls

Subcommand rm removes crush rule <name>.

Usage:

  ceph osd crush rule rm <name>

Subcommand set, used alone, sets the crush map from the input file.

Usage:

  ceph osd crush set

Subcommand set with an osdname/osd.id updates the crushmap position and weight for <name> to <weight> with location <args>.

Usage:

  ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand set-tunable sets crush tunable <tunable> to <value>. The only tunable that can be set is straw_calc_version.

Usage:

  ceph osd crush set-tunable straw_calc_version <value>

Subcommand show-tunables shows current crush tunables.

Usage:

  ceph osd crush show-tunables

Subcommand tree shows the crush buckets and items in a tree view.

Usage:

  ceph osd crush tree

Subcommand tunables sets crush tunables values to <profile>.

Usage:

  ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand unlink unlinks <name> from the crush map (everywhere, or just at <ancestor>).

Usage:

  ceph osd crush unlink <name> {<ancestor>}

Subcommand df shows OSD utilization.

Usage:

  ceph osd df {plain|tree}

Subcommand deep-scrub initiates deep scrub on specified osd.

Usage:

  ceph osd deep-scrub <who>

Subcommand down sets osd(s) <id> [<id>…] down.

Usage:

  ceph osd down <ids> [<ids>...]

Subcommand dump prints summary of OSD map.

Usage:

  ceph osd dump {<int[0-]>}

Subcommand erasure-code-profile is used for managing the erasure code profiles. It uses some additional subcommands.

Subcommand get gets erasure code profile <name>.

Usage:

  ceph osd erasure-code-profile get <name>

Subcommand ls lists all erasure code profiles.

Usage:

  ceph osd erasure-code-profile ls

Subcommand rm removes erasure code profile <name>.

Usage:

  ceph osd erasure-code-profile rm <name>

Subcommand set creates erasure code profile <name> with [<key[=value]> …] pairs. Add --force at the end to override an existing profile (IT IS RISKY).

Usage:

  ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
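
For example, to create a hypothetical profile that stripes each object over 4 data and 2 coding chunks with a host failure domain (the profile name and values here are illustrative):

Example:

  ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host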

Subcommand find finds osd <id> in the CRUSH map and shows its location.

Usage:

  ceph osd find <int[0-]>

Subcommand getcrushmap gets CRUSH map.

Usage:

  ceph osd getcrushmap {<int[0-]>}

Subcommand getmap gets OSD map.

Usage:

  ceph osd getmap {<int[0-]>}

Subcommand getmaxosd shows largest OSD id.

Usage:

  ceph osd getmaxosd

Subcommand in sets osd(s) <id> [<id>…] in.

Usage:

  ceph osd in <ids> [<ids>...]

Subcommand lost marks an osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL.

Usage:

  ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ls shows all OSD ids.

Usage:

  ceph osd ls {<int[0-]>}

Subcommand lspools lists pools.

Usage:

  ceph osd lspools {<int>}

Subcommand map finds pg for <object> in <pool>.

Usage:

  ceph osd map <poolname> <objectname>

Subcommand metadata fetches metadata for osd <id>.

Usage:

  ceph osd metadata {int[0-]} (default all)

Subcommand out sets osd(s) <id> [<id>…] out.

Usage:

  ceph osd out <ids> [<ids>...]

Subcommand ok-to-stop checks whether the listed OSD(s) can be stopped without immediately making data unavailable. That is, all data should remain readable and writable, although data redundancy may be reduced as some PGs may end up in a degraded (but active) state. It will return a success code if it is okay to stop the OSD(s), or an error code and informative message if it is not or if no conclusion can be drawn at the current time.

Usage:

  ceph osd ok-to-stop <id> [<ids>...]

Subcommand pause pauses osd.

Usage:

  ceph osd pause

Subcommand perf prints dump of OSD perf summary stats.

Usage:

  ceph osd perf

Subcommand pg-temp sets the pg_temp mapping pgid:[<id> [<id>…]] (developers only).

Usage:

  ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand force-create-pg forces creation of pg <pgid>.

Usage:

  ceph osd force-create-pg <pgid>

Subcommand pool is used for managing data pools. It uses some additional subcommands.

Subcommand create creates a pool.

Usage:

  ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
  {<erasure_code_profile>} {<rule>} {<int>}
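
For example, to create a replicated pool named mypool with 64 placement groups (the pool name and PG counts here are illustrative):

Example:

  ceph osd pool create mypool 64 64 replicated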

Subcommand delete deletes pool.

Usage:

  ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand get gets pool parameter <var>.

Usage:

  ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed

Only for tiered pools:

  ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
  target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
  cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
  min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools:

  ceph osd pool get <poolname> erasure_code_profile

Use all to get all pool parameters that apply to the pool’s type:

  ceph osd pool get <poolname> all

Subcommand get-quota obtains object or byte limits for pool.

Usage:

  ceph osd pool get-quota <poolname>

Subcommand ls lists pools.

Usage:

  ceph osd pool ls {detail}

Subcommand mksnap makes snapshot <snap> in <pool>.

Usage:

  ceph osd pool mksnap <poolname> <snap>

Subcommand rename renames <srcpool> to <destpool>.

Usage:

  ceph osd pool rename <poolname> <poolname>

Subcommand rmsnap removes snapshot <snap> from <pool>.

Usage:

  ceph osd pool rmsnap <poolname> <snap>

Subcommand set sets pool parameter <var> to <val>.

Usage:

  ceph osd pool set <poolname> size|min_size|pg_num|
  pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
  hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
  target_max_bytes|target_max_objects|cache_target_dirty_ratio|
  cache_target_dirty_high_ratio|
  cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
  min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
  hit_set_search_last_n
  <val> {--yes-i-really-mean-it}

Subcommand set-quota sets object or byte limits on a pool.

Usage:

  ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
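
For example, to cap a hypothetical pool at 10000 objects (the pool name and limit here are illustrative):

Example:

  ceph osd pool set-quota mypool max_objects 10000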

Subcommand stats obtains stats from all pools, or from the specified pool.

Usage:

  ceph osd pool stats {<name>}

Subcommand application is used for adding an annotation to the given pool. By default, the possible applications are object, block, and file storage (the corresponding app-names are “rgw”, “rbd”, and “cephfs”). However, there might be other applications as well. Based on the application, there may or may not be some processing conducted.

Subcommand disable disables the given application on the given pool.

Usage:

  ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}

Subcommand enable adds an annotation to the given pool for the mentioned application.

Usage:

  ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
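
For example, to tag a hypothetical pool for use by RBD (the pool name here is illustrative):

Example:

  ceph osd pool application enable mypool rbd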

Subcommand get displays the value for the given key that is associated with the given application of the given pool. Not passing the optional arguments will display all key-value pairs for all applications for all pools.

Usage:

  ceph osd pool application get {<pool-name>} {<app>} {<key>}

Subcommand rm removes the key-value pair for the given key in the given application of the given pool.

Usage:

  ceph osd pool application rm <pool-name> <app> <key>

Subcommand set associates or updates, if it already exists, a key-value pair with the given application for the given pool.

Usage:

  ceph osd pool application set <pool-name> <app> <key> <value>

Subcommand primary-affinity adjusts osd primary-affinity from 0.0 <= <weight> <= 1.0.

Usage:

  ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand primary-temp sets the primary_temp mapping pgid:<id>|-1 (developers only).

Usage:

  ceph osd primary-temp <pgid> <id>

Subcommand repair initiates repair on a specified osd.

Usage:

  ceph osd repair <who>

Subcommand reweight reweights osd to 0.0 < <weight> < 1.0.

Usage:

  ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand reweight-by-pg reweights OSDs by PG distribution [overload-percentage-for-consideration, default 120].

Usage:

  ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
  {--no-increasing}

Subcommand reweight-by-utilization reweights OSDs by utilization [overload-percentage-for-consideration, default 120].

Usage:

  ceph osd reweight-by-utilization {<int[100-]>}
  {--no-increasing}

Subcommand rm removes osd(s) <id> [<id>…] from the OSD map.

Usage:

  ceph osd rm <ids> [<ids>...]

Subcommand destroy marks OSD id as destroyed, removing its cephx entity's keys and all of its dm-crypt and daemon-private config key entries.

This command will not remove the OSD from crush, nor will it remove the OSD from the OSD map. Instead, once the command successfully completes, the OSD will show as marked destroyed.

In order to mark an OSD as destroyed, the OSD must first be marked as lost.

Usage:

  ceph osd destroy <id> {--yes-i-really-mean-it}

Subcommand purge performs a combination of osd destroy, osd rm and osd crush remove.

Usage:

  ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand safe-to-destroy checks whether it is safe to remove or destroy an OSD without reducing overall data redundancy or durability. It will return a success code if it is definitely safe, or an error code and informative message if it is not or if no conclusion can be drawn at the current time.

Usage:

  ceph osd safe-to-destroy <id> [<ids>...]

Subcommand scrub initiates scrub on specified osd.

Usage:

  ceph osd scrub <who>

Subcommand set sets cluster-wide <flag> by updating the OSD map. The full flag is not honored anymore since the Mimic release, and ceph osd set full is not supported in the Octopus release.

Usage:

  ceph osd set pause|noup|nodown|noout|noin|nobackfill|
  norebalance|norecover|noscrub|nodeep-scrub|notieragent
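
For example, to keep OSDs from being marked out during maintenance (clear the flag afterwards with ceph osd unset noout):

Example:

  ceph osd set noout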

Subcommand setcrushmap sets crush map from input file.

Usage:

  ceph osd setcrushmap

Subcommand setmaxosd sets new maximum osd value.

Usage:

  ceph osd setmaxosd <int[0-]>

Subcommand set-require-min-compat-client enforces that the cluster remain backward compatible with the specified client version. This subcommand prevents you from making any changes (e.g., crush tunables, or using new features) that would violate the current setting. Please note, this subcommand will fail if any connected daemon or client is not compatible with the features offered by the given <version>. To see the features and releases of all clients connected to the cluster, please see ceph features.

Usage:

  ceph osd set-require-min-compat-client <version>

Subcommand stat prints summary of OSD map.

Usage:

  ceph osd stat

Subcommand tier is used for managing tiers. It uses some additional subcommands.

Subcommand add adds the tier <tierpool> (the second one) to base pool <pool>(the first one).

Usage:

  ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand add-cache adds a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one).

Usage:

  ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand cache-mode specifies the caching mode for cache tier <pool>.

Usage:

  ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
  readforward|readproxy
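
For example, to attach a hypothetical pool hot-storage as a writeback cache tier in front of a pool cold-storage (both pool names are illustrative):

Example:

  ceph osd tier add cold-storage hot-storage
  ceph osd tier cache-mode hot-storage writeback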

Subcommand remove removes the tier <tierpool> (the second one) from base pool <pool> (the first one).

Usage:

  ceph osd tier remove <poolname> <poolname>

Subcommand remove-overlay removes the overlay pool for base pool <pool>.

Usage:

  ceph osd tier remove-overlay <poolname>

Subcommand set-overlay sets the overlay pool for base pool <pool> to be <overlaypool>.

Usage:

  ceph osd tier set-overlay <poolname> <poolname>

Subcommand tree prints OSD tree.

Usage:

  ceph osd tree {<int[0-]>}

Subcommand unpause unpauses osd.

Usage:

  ceph osd unpause

Subcommand unset unsets cluster-wide <flag> by updating OSD map.

Usage:

  ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
  norebalance|norecover|noscrub|nodeep-scrub|notieragent

pg

It is used for managing the placement groups in OSDs. It uses some additional subcommands.

Subcommand debug shows debug info about pgs.

Usage:

  ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand deep-scrub starts deep-scrub on <pgid>.

Usage:

  ceph pg deep-scrub <pgid>

Subcommand dump shows human-readable versions of the pg map (only ‘all’ is valid with plain).

Usage:

  ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand dump_json shows human-readable version of pg map in json only.

Usage:

  ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand dump_pools_json shows pg pools info in json only.

Usage:

  ceph pg dump_pools_json

Subcommand dump_stuck shows information about stuck pgs.

Usage:

  ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
  {<int>}
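
For example, to list placement groups stuck in the inactive state:

Example:

  ceph pg dump_stuck inactive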

Subcommand getmap gets binary pg map to -o/stdout.

Usage:

  ceph pg getmap

Subcommand ls lists pgs with a specific pool, osd, or state.

Usage:

  ceph pg ls {<int>} {<pg-state> [<pg-state>...]}

Subcommand ls-by-osd lists pgs on osd [osd].

Usage:

  ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
  {<pg-state> [<pg-state>...]}

Subcommand ls-by-pool lists pgs with pool = [poolname].

Usage:

  ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}

Subcommand ls-by-primary lists pgs with primary = [osd].

Usage:

  ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
  {<pg-state> [<pg-state>...]}

Subcommand map shows mapping of pg to osds.

Usage:

  ceph pg map <pgid>

Subcommand repair starts repair on <pgid>.

Usage:

  ceph pg repair <pgid>

Subcommand scrub starts scrub on <pgid>.

Usage:

  ceph pg scrub <pgid>

Subcommand stat shows placement group status.

Usage:

  ceph pg stat

quorum

Cause a specific MON to enter or exit quorum.

Usage:

  ceph tell mon.<id> quorum enter|exit

quorum_status

Reports status of monitor quorum.

Usage:

  ceph quorum_status

report

Reports the full status of the cluster; optional title tag strings may be supplied.

Usage:

  ceph report {<tags> [<tags>...]}

scrub

Scrubs the monitor stores.

Usage:

  ceph scrub

status

Shows cluster status.

Usage:

  ceph status

tell

Sends a command to a specific daemon.

Usage:

  ceph tell <name (type.id)> <command> [options...]

List all available commands.

Usage:

  ceph tell <name (type.id)> help

version

Show the mon daemon version.

Usage:

  ceph version

Options

  • -i infile
    will specify an input file to be passed along as a payload with the command to the monitor cluster. This is only used for specific monitor commands.
  • -o outfile
    will write any payload returned by the monitor cluster with its reply to outfile. Only specific monitor commands (e.g. osd getmap) return a payload.
  • --setuser user
    will apply the appropriate user ownership to the file specified by the option ‘-o’.
  • --setgroup group
    will apply the appropriate group ownership to the file specified by the option ‘-o’.
  • -c ceph.conf, --conf=ceph.conf
    Use ceph.conf configuration file instead of the default /etc/ceph/ceph.conf to determine monitor addresses during startup.
  • --id CLIENT_ID, --user CLIENT_ID
    Client id for authentication.
  • --name CLIENT_NAME, -n CLIENT_NAME
    Client name for authentication.
  • --cluster CLUSTER
    Name of the Ceph cluster.
  • --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
    Submit admin-socket commands via admin sockets in /var/run/ceph.
  • --admin-socket ADMIN_SOCKET_NOPE
    You probably mean --admin-daemon.
  • -s, --status
    Show cluster status.
  • -w, --watch
    Watch live cluster changes.
  • --watch-debug
    Watch debug events.
  • --watch-info
    Watch info events.
  • --watch-sec
    Watch security events.
  • --watch-warn
    Watch warning events.
  • --watch-error
    Watch error events.
  • --version, -v
    Display version.
  • --verbose
    Make verbose.
  • --concise
    Make less verbose.
  • -f {json,json-pretty,xml,xml-pretty,plain}, --format
    Format of output.
  • --connect-timeout CLUSTER_TIMEOUT
    Set a timeout for connecting to the cluster.
  • --no-increasing
    --no-increasing is off by default, so increasing the osd weight is allowed when using the reweight-by-utilization or test-reweight-by-utilization commands. If this option is used with these commands, it prevents the osd weight from being increased even if the osd is underutilized.
  • --block
    Block until completion (scrub and deep-scrub only).

Availability

ceph is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/docs for more information.

See also

ceph-mon(8), ceph-osd(8), ceph-mds(8)