Ceph Commands

OSD Status

List all current OSDs.

    root@dev:/# ceph osd ls
    0
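
Like most ceph subcommands, osd ls also accepts a --format option for machine-readable output, which is handy for scripting. A minimal sketch (output omitted):

    # list OSD ids as a JSON array instead of one id per line
    ceph osd ls --format json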

Check the OSD status.

    root@dev:/# ceph osd stat
    osdmap e21: 1 osds: 1 up, 1 in
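
The same osdmap summary also appears in the overall cluster status, which is often a quicker first check:

    # one-shot cluster summary (health, monitors, osdmap, PG states)
    ceph -s
    # health status only, with details for any warnings
    ceph health detail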

View the OSD tree.

    root@dev:/# ceph osd tree
    ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 1.00000 root default
    -2 1.00000     host dev
     0 1.00000         osd.0       up  1.00000          1.00000
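
In this output, WEIGHT is the CRUSH weight stored in the CRUSH map, while REWEIGHT is a temporary override in the range 0.0-1.0. Both can be adjusted from the command line; a sketch using osd.0 from the tree above, with placeholder weight values:

    # change the permanent CRUSH weight of osd.0
    ceph osd crush reweight osd.0 1.0
    # change the temporary override weight (the REWEIGHT column)
    ceph osd reweight 0 1.0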

Dump detailed OSD information.

    root@dev:/# ceph osd dump
    epoch 21
    fsid fee30c76-aec4-44d4-8138-763969aaa562
    created 2015-07-12 05:59:12.189734
    modified 2015-07-12 11:57:56.706958
    flags
    pool 0 'rbd' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 2 flags hashpspool stripe_width 0
    pool 1 'cephfs_data' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 7 flags hashpspool crash_replay_interval 45 stripe_width 0
    pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 6 flags hashpspool stripe_width 0
    pool 3 '.rgw.root' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 8 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 4 '.rgw.control' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 10 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 5 '.rgw' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 12 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 6 '.rgw.gc' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 13 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 7 '.users.uid' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 14 owner 18446744073709551615 flags hashpspool stripe_width 0
    max_osd 1
    osd.0 up in weight 1 up_from 20 up_thru 20 down_at 19 last_clean_interval [5,19) 10.0.2.15:6800/199 10.0.2.15:6801/2000199 10.0.2.15:6802/2000199 10.0.2.15:6803/2000199 exists,up 5aaf2355-aa45-4452-a401-9add47541a88
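
The pool entries in the dump can also be inspected individually. A sketch using the 'rbd' pool listed above:

    # list pool names only
    ceph osd lspools
    # read a single pool parameter, e.g. the replica count shown as "size" in the dump
    ceph osd pool get rbd size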

Monitor Status

Check the Monitor status.

    root@dev:/# ceph mon stat
    e1: 1 mons at {dev=10.0.2.15:6789/0}, election epoch 2, quorum 0 dev
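
For election and quorum details beyond this one-line summary, the quorum status can be dumped as JSON:

    # quorum membership, election epoch and monmap, in JSON
    ceph quorum_status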

Dump detailed Monitor information.

    root@dev:/# ceph mon dump
    dumped monmap epoch 1
    epoch 1
    fsid fee30c76-aec4-44d4-8138-763969aaa562
    last_changed 2015-07-12 05:59:11.900924
    created 2015-07-12 05:59:11.900924
    0: 10.0.2.15:6789/0 mon.dev
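
Like the CRUSH map below, the monmap can also be saved to a file and inspected offline. A sketch; monmap.bin is a placeholder filename:

    # save the current monmap in binary form
    ceph mon getmap -o monmap.bin
    # print it in human-readable form
    monmaptool --print monmap.bin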

PG Operations

Dump detailed PG information.

    root@dev:/# ceph pg dump
    pool 0 0 0 0 0 0 0 0 0
    pool 1 0 0 0 0 0 0 0 0
    pool 2 20 0 0 0 0 1962 30 30
    pool 3 3 0 0 0 0 848 3 3
    pool 4 8 0 0 0 0 0 50 50
    pool 5 0 0 0 0 0 0 0 0
    pool 6 32 0 0 0 0 0 192 192
    pool 7 0 0 0 0 0 0 0 0
    sum 63 0 0 0 0 2810 275 275
    osdstat kbused kbavail kb hb in hb out
    0 4630564 13428604 19049892 [] []
    sum 4630564 13428604 19049892
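
To inspect a single placement group instead of the whole table, the pgid (pool id, dot, PG number) can be passed directly. A sketch using 2.0 as an example pgid from pool 2 above:

    # one-line summary of all PG states
    ceph pg stat
    # which OSDs PG 2.0 maps to
    ceph pg map 2.0
    # full state of PG 2.0 (peering info, last scrub, etc.)
    ceph pg 2.0 query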

CRUSH Operations

Dump detailed CRUSH information.

    root@dev:/# ceph osd crush dump

Export the CRUSH map to a binary file that is not human-readable.

    root@dev:/# ceph osd getcrushmap -o crushmap.txt
    got crush map from osdmap epoch 21
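
The exported file is a compiled, binary CRUSH map. Before editing it, crushtool can simulate placements against it offline; a sketch, with rule 0 and a single replica chosen to match the size 1 pools above:

    # simulate mappings for rule 0 with 1 replica and print statistics
    crushtool -i crushmap.txt --test --rule 0 --num-rep 1 --show-statistics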

Decompile the CRUSH map into a human-readable text file.

    root@dev:/# crushtool -d crushmap.txt -o crushmap-decompile.txt
    root@dev:/# vim crushmap-decompile.txt
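
After editing the decompiled text file, the reverse path is to recompile it and inject it back into the cluster. A sketch; crushmap-new.bin is a placeholder filename:

    # recompile the edited text file into a binary CRUSH map
    crushtool -c crushmap-decompile.txt -o crushmap-new.bin
    # inject the new CRUSH map into the cluster
    ceph osd setcrushmap -i crushmap-new.bin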