By default, a CephFS file system is configured with only one active MDS daemon. On large systems, metadata performance can be scaled out by configuring multiple active MDS daemons, which then share the metadata workload between them.

To run multiple active MDS daemons, you only need to change the file system's max_mds setting. Here is the cluster state before the change:

    # ceph -s
      cluster:
        id:     94e1228c-caba-4eb5-af86-259876a44c28
        health: HEALTH_OK

      services:
        mon: 3 daemons, quorum test1,test2,test3
        mgr: test1(active), standbys: test3, test2
        mds: cephfs-2/2/1 up {0=test2=up:active,1=test3=up:active}, 1 up:standby
        osd: 18 osds: 18 up, 18 in
        rgw: 3 daemons active

      data:
        pools:   8 pools, 400 pgs
        objects: 305 objects, 3.04MiB
        usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
        pgs:     400 active+clean
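The `mds:` line above encodes the rank counts as `<up>/<in>/<max_mds>`, followed by the per-rank states and the standby count. As an illustration (a hand-rolled helper, not part of any Ceph library), a small Python parser can pull those numbers out of the `ceph -s` output:

```python
import re

# Sketch: parse the "mds:" line of `ceph -s`. In "cephfs-2/2/1" the
# fields are <up ranks>/<in ranks>/<max_mds>; the trailing count is
# the number of standby daemons.
MDS_LINE = re.compile(
    r"(?P<fs>\S+)-(?P<up>\d+)/(?P<in>\d+)/(?P<max>\d+) up "
    r".*?(?P<standby>\d+) up:standby"
)

def parse_mds_status(line: str) -> dict:
    m = MDS_LINE.search(line)
    if m is None:
        raise ValueError(f"unrecognized mds status line: {line!r}")
    d = m.groupdict()
    return {
        "fs": d["fs"],
        "up": int(d["up"]),
        "in": int(d["in"]),
        "max_mds": int(d["max"]),
        "standby": int(d["standby"]),
    }

line = "mds: cephfs-2/2/1 up {0=test2=up:active,1=test3=up:active}, 1 up:standby"
print(parse_mds_status(line))
```

Comparing `up` against `max_mds` this way is a quick scripted check of whether the change below has taken effect.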
1. Enabling multiple active MDS daemons
    # ceph mds set max_mds 2
    # ceph -s
      cluster:
        id:     94e1228c-caba-4eb5-af86-259876a44c28
        health: HEALTH_OK

      services:
        mon: 3 daemons, quorum test1,test2,test3
        mgr: test1(active), standbys: test3, test2
        mds: cephfs-2/2/2 up {0=test2=up:active,1=test3=up:active}, 1 up:standby
        osd: 18 osds: 18 up, 18 in
        rgw: 3 daemons active

      data:
        pools:   8 pools, 400 pgs
        objects: 305 objects, 3.04MiB
        usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
        pgs:     400 active+clean
2. Reverting to a single active MDS: lower max_mds back to 1, then deactivate the extra rank (rank 1) so its daemon returns to standby.
    # ceph mds set max_mds 1
    # ceph mds deactivate 1
    # ceph -s
      cluster:
        id:     94e1228c-caba-4eb5-af86-259876a44c28
        health: HEALTH_OK

      services:
        mon: 3 daemons, quorum test1,test2,test3
        mgr: test1(active), standbys: test3, test2
        mds: cephfs-1/1/1 up {0=test2=up:active}, 2 up:standby
        osd: 18 osds: 18 up, 18 in
        rgw: 3 daemons active

      data:
        pools:   8 pools, 400 pgs
        objects: 305 objects, 3.04MiB
        usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
        pgs:     400 active+clean

      io:
        client: 31.7KiB/s rd, 170B/s wr, 31op/s rd, 21op/s wr
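Note that the commands above belong to older Ceph releases: on Nautilus (14.x) and later, `ceph mds set max_mds` and `ceph mds deactivate` were removed. There, max_mds is set on the file system itself and extra ranks are stopped automatically when it is lowered. A sketch, assuming the file system is named `cephfs` as in this cluster:

```shell
# Newer releases (Nautilus and later): set max_mds per file system;
# the cluster stops the surplus ranks itself, no deactivate step needed.
ceph fs set cephfs max_mds 2   # scale out to two active MDS daemons
ceph fs set cephfs max_mds 1   # scale back; rank 1 is stopped automatically
```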