Environment Configuration

1. Node list

IP            Hostname  Roles
192.168.1.10  mon1      mon, osd, rgw
192.168.1.11  mon2      mon, osd, rgw
192.168.1.12  mon3      mon, osd, rgw

2. Configure the yum repositories

The Aliyun mirrors are used here. Removing the internal aliyuncs entries from the repo files and pinning $releasever to 7 keeps the repository URLs stable and makes yum faster. Run the following on every node.

  $ rm -rf /etc/yum.repos.d/*
  $ wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  $ wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
  $ sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
  $ sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
  $ sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo

Configure the Ceph repository; the Luminous repository is used here.

  $ vim /etc/yum.repos.d/ceph-luminous.repo
  [ceph]
  name=x86_64
  baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
  gpgcheck=0
  [ceph-noarch]
  name=noarch
  baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
  gpgcheck=0
  [ceph-aarch64]
  name=aarch64
  baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/aarch64/
  gpgcheck=0
  [ceph-SRPMS]
  name=SRPMS
  baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS/
  gpgcheck=0
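
After writing the repo files, it is worth refreshing the yum cache and confirming that the Ceph repositories are visible; a quick, optional check:

  $ yum clean all && yum makecache
  $ yum repolist | grep ceph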

Monitor Deployment

Install the packages

Install ceph with yum on every Monitor node.

  $ yum -y install ceph

Deploy the First Monitor Node

1. Log in to the mon1 node and check that the ceph directory has been created

  $ ls /etc/ceph/
  rbdmap

2. Generate a UUID for the Ceph cluster

  $ uuidgen
  b6f87f4d-c8f7-48c0-892e-71bc16cce7ff

3. Create the Ceph cluster configuration file with the following content

  $ vim /etc/ceph/ceph.conf
  [global]
  fsid = b6f87f4d-c8f7-48c0-892e-71bc16cce7ff
  mon initial members = mon1
  mon host = 192.168.1.10
  public network = 192.168.1.0/24
  cluster network = 192.168.1.0/24

4. Generate the monitor key

  $ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

5. Create the admin client key and grant it capabilities

  $ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
  --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mgr 'allow *' --cap rgw 'allow *'

6. Generate the OSD bootstrap key

  $ sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd \
  --cap mon 'profile bootstrap-osd'

7. Import the generated keys into ceph.mon.keyring

  $ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
  $ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
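
To confirm that all three keys (mon., client.admin, client.bootstrap-osd) ended up in the monitor keyring, the file can be listed; an optional check:

  $ ceph-authtool -l /tmp/ceph.mon.keyring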

8. Generate the initial monmap

  $ monmaptool --create --add mon1 192.168.1.10 --fsid b6f87f4d-c8f7-48c0-892e-71bc16cce7ff /tmp/monmap
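
The generated monmap can be inspected before it is used; an optional check:

  $ monmaptool --print /tmp/monmap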

9. Create the monitor data directory

  $ sudo -u ceph mkdir /var/lib/ceph/mon/ceph-mon1

10. Change the ownership of ceph.mon.keyring

  $ chown ceph.ceph /tmp/ceph.mon.keyring

11. Initialize the Monitor

  $ sudo -u ceph ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

12. Start the Monitor

  $ systemctl start ceph-mon@mon1
  $ systemctl status ceph-mon@mon1
  ceph-mon@mon1.service - Ceph cluster monitor daemon
  Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: disabled)
  Active: active (running) since Mon 2019-03-04 11:19:47 CST; 50s ago
  Main PID: 1462 (ceph-mon)

13. Enable the Monitor to start at boot

  $ systemctl enable ceph-mon@mon1
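
With a single monitor running, the cluster should already respond to commands; a quick, optional check (at this point only mon1 should appear in the quorum):

  $ ceph -s
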
Deploy the Other Monitor Nodes

The mon2 node is used as the example below.

1. Copy the configuration files and keys

  $ scp /etc/ceph/* mon2:/etc/ceph/
  $ scp /var/lib/ceph/bootstrap-osd/ceph.keyring mon2:/var/lib/ceph/bootstrap-osd/
  $ scp /tmp/ceph.mon.keyring mon2:/tmp/ceph.mon.keyring

2. Create the data directory on the mon2 node

  $ sudo -u ceph mkdir /var/lib/ceph/mon/ceph-mon2

3. Change the ownership of ceph.mon.keyring

  $ chown ceph.ceph /tmp/ceph.mon.keyring

4. Fetch the monitor key and the monmap from the cluster

  $ ceph auth get mon. -o /tmp/ceph.mon.keyring
  $ ceph mon getmap -o /tmp/ceph.mon.map

5. Initialize the Monitor

  $ sudo -u ceph ceph-mon --mkfs -i mon2 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring

6. Start the Monitor

  $ systemctl start ceph-mon@mon2
  $ systemctl status ceph-mon@mon2
  ceph-mon@mon2.service - Ceph cluster monitor daemon
  Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: disabled)
  Active: active (running) since Mon 2019-03-04 11:31:37 CST; 8s ago
  Main PID: 1417 (ceph-mon)

7. Enable the Monitor to start at boot

  $ systemctl enable ceph-mon@mon2

8. Deploy the Monitor on the mon3 node in the same way

  $ sudo -u ceph mkdir /var/lib/ceph/mon/ceph-mon3
  $ chown ceph.ceph /tmp/ceph.mon.keyring
  $ ceph auth get mon. -o /tmp/ceph.mon.keyring
  $ ceph mon getmap -o /tmp/ceph.mon.map
  $ sudo -u ceph ceph-mon --mkfs -i mon3 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
  $ systemctl start ceph-mon@mon3
  $ systemctl enable ceph-mon@mon3

9. After deployment, add mon2 and mon3 to the Ceph configuration file and copy the file to the other nodes

  $ vim /etc/ceph/ceph.conf
  [global]
  fsid = b6f87f4d-c8f7-48c0-892e-71bc16cce7ff
  mon initial members = mon1,mon2,mon3
  mon host = 192.168.1.10,192.168.1.11,192.168.1.12
  public network = 192.168.1.0/24
  cluster network = 192.168.1.0/24
  $ scp /etc/ceph/ceph.conf mon2:/etc/ceph/ceph.conf
  $ scp /etc/ceph/ceph.conf mon3:/etc/ceph/ceph.conf

10. Log in to each of the three nodes and restart its Monitor process

  $ systemctl restart ceph-mon@mon1
  $ systemctl restart ceph-mon@mon2
  $ systemctl restart ceph-mon@mon3

11. Check the current cluster status

  $ ceph -s
    cluster:
      id:     b6f87f4d-c8f7-48c0-892e-71bc16cce7ff
      health: HEALTH_OK
    services:
      mon: 3 daemons, quorum mon1,mon2,mon3
      mgr: no daemons active
      osd: 0 osds: 0 up, 0 in
    data:
      pools:   0 pools, 0 pgs
      objects: 0 objects, 0B
      usage:   0B used, 0B / 0B avail
      pgs:

OSD Deployment

1. Partition the disk with sgdisk and format the data partition

  $ sgdisk -Z /dev/sdb
  $ sgdisk -n 2:0:+5G -c 2:"ceph journal" -t 2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
  $ sgdisk -n 1:0:0 -c 1:"ceph data" -t 1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
  $ mkfs.xfs -f -i size=2048 /dev/sdb1
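
The two GUIDs passed with -t are the GPT partition type codes for a Ceph journal and a Ceph data partition. The resulting layout can be reviewed before continuing; an optional check:

  $ sgdisk -p /dev/sdb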

2. Create the OSD; the command returns the new OSD ID

  $ ceph osd create
  0

3. Create the data directory and mount the data partition on it

  $ mkdir /var/lib/ceph/osd/ceph-0
  $ mount /dev/sdb1 /var/lib/ceph/osd/ceph-0/

4. Initialize the OSD data directory and create its key

  $ ceph-osd -i 0 --mkfs --mkkey

5. Remove the automatically generated journal file

  $ rm -rf /var/lib/ceph/osd/ceph-0/journal

6. Look up the UUID of the journal partition

  $ ll /dev/disk/by-partuuid/ | grep sdb2
  lrwxrwxrwx 1 root root 10 Mar 4 11:49 8bf1768a-5d64-4696-a04d-e0a929fa99ef -> ../../sdb2

7. Create a symlink to the journal partition using that UUID, record the UUID in a file, and regenerate the journal

  $ ln -s /dev/disk/by-partuuid/8bf1768a-5d64-4696-a04d-e0a929fa99ef /var/lib/ceph/osd/ceph-0/journal
  $ echo 8bf1768a-5d64-4696-a04d-e0a929fa99ef > /var/lib/ceph/osd/ceph-0/journal_uuid
  $ ceph-osd -i 0 --mkjournal

8. Register the OSD key with the cluster and grant it capabilities

  $ ceph auth add osd.0 mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *' rgw 'allow *' -i /var/lib/ceph/osd/ceph-0/keyring

9. Update the CRUSH map

  $ ceph osd crush add osd.0 0.01459 host=mon1
  $ ceph osd crush move mon1 root=default

10. Fix the ownership of the OSD directory

  $ chown -R ceph:ceph /var/lib/ceph/osd/ceph-0

11. Activate the OSD

  $ ceph-disk activate --mark-init systemd --mount /dev/sdb1
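
The OSD should now come up and join the cluster; an optional check of the daemon and the CRUSH tree:

  $ systemctl status ceph-osd@0
  $ ceph osd tree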

Add the other OSDs in the same way. The following script wraps these steps so the remaining OSDs can be deployed quickly (a usage note follows the script).

  #!/bin/bash
  # Partition every unused data disk on this node and bring each one up as an OSD.
  HOSTNAME=$(hostname)
  SYS_DISK=sda                                         # system disk, always skipped
  DISK=$(lsblk | grep disk | awk '{print $1}')         # all physical disks on the node
  JOURNAL_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106    # GPT type GUID: ceph journal
  DATA_TYPE=4fbd7e29-9d25-41b8-afd0-062c0ceff05d       # GPT type GUID: ceph data
  # Create a 5G journal partition (2) and a data partition (1) on disk $1, format the data partition as xfs.
  function Sgdisk() {
      sgdisk -n 2:0:+5G -c 2:"ceph journal" -t 2:$JOURNAL_TYPE /dev/$1 &> /dev/null
      sgdisk -n 1:0:0 -c 1:"ceph data" -t 1:$DATA_TYPE /dev/$1 &> /dev/null
      mkfs.xfs -f -i size=2048 /dev/${1}1 &> /dev/null
  }
  # Add a host bucket named $1 to the CRUSH map and move it under the default root.
  function Crushmap() {
      ceph osd crush add-bucket $1 host &> /dev/null
      ceph osd crush move $1 root=default &> /dev/null
  }
  # Prepare and activate one OSD: $1 is the OSD id, $i (the loop variable) is the disk name.
  function CreateOSD() {
      OSD_URL=/var/lib/ceph/osd/ceph-$1
      DATA_PARTITION=/dev/${i}1
      JOURNAL_UUID=$(ls -l /dev/disk/by-partuuid/ | grep ${i}2 | awk '{print $9}')
      mkdir -p $OSD_URL
      mount $DATA_PARTITION $OSD_URL
      ceph-osd -i $1 --mkfs --mkkey &> /dev/null
      rm -rf $OSD_URL/journal
      ln -s /dev/disk/by-partuuid/$JOURNAL_UUID $OSD_URL/journal
      echo $JOURNAL_UUID > $OSD_URL/journal_uuid
      ceph-osd -i $1 --mkjournal &> /dev/null
      ceph auth add osd.$1 mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *' rgw 'allow *' -i $OSD_URL/keyring &> /dev/null
      ceph osd crush add osd.$1 0.01459 host=$HOSTNAME &> /dev/null
      chown -R ceph:ceph $OSD_URL
      ceph-disk activate --mark-init systemd --mount $DATA_PARTITION &> /dev/null
      if [ $? = 0 ] ; then
          echo -e "\033[32mosd.$1 was created successfully\033[0m"
      fi
  }
  # Create the host bucket only if this node is not in the CRUSH tree yet.
  ceph osd tree | grep $HOSTNAME &> /dev/null
  if [ $? != 0 ] ; then
      Crushmap $HOSTNAME
  fi
  # Skip the system disk and any disk that already carries Ceph partitions.
  for i in $DISK ; do
      blkid | grep ceph | grep $i &> /dev/null
      if [ $? != 0 ] && [ $i != $SYS_DISK ] ; then
          Sgdisk $i
          ID=$(ceph osd create)
          CreateOSD $ID
      else
          continue
      fi
  done
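
Assuming the script is saved as osd.sh (the same file that is uploaded in the s3cmd test later), it can be run once on each node after ceph.conf and the bootstrap keyring have been copied over, and the result verified in the CRUSH tree:

  $ bash osd.sh
  $ ceph osd tree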

Mgr Deployment

1. Create the mgr key

  $ ceph auth get-or-create mgr.mon1 mon 'allow *' osd 'allow *'

2. Create the data directory

  $ mkdir /var/lib/ceph/mgr/ceph-mon1

3. Export the key to the mgr data directory

  $ ceph auth get mgr.mon1 -o /var/lib/ceph/mgr/ceph-mon1/keyring

4. Enable and start the mgr

  $ systemctl enable ceph-mgr@mon1
  $ systemctl start ceph-mgr@mon1

Add a mgr on the other nodes in the same way.

5. Enable the dashboard module

  $ ceph mgr module enable dashboard
  $ ceph mgr services
  {
      "dashboard": "http://mon1:7000/"
  }
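
Reachability of the dashboard can also be checked from the shell; an optional check, assuming curl is installed:

  $ curl -I http://mon1:7000/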

6. Open the dashboard in a browser. (Screenshot: ceph_dashboard)

RGW Deployment

1. Install the package

  $ yum -y install ceph-radosgw

2. Create the keyring

  $ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring

3. Change the ownership of the keyring

  $ sudo chown ceph:ceph /etc/ceph/ceph.client.radosgw.keyring

4. Create the rgw user and its key

  $ sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.rgw.mon1 --gen-key

5. Grant capabilities to the user

  $ sudo ceph-authtool -n client.rgw.mon1 --cap osd 'allow rwx' --cap mon 'allow rwx' --cap mgr 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring

6. Import the key into the cluster

  $ sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.rgw.mon1 -i /etc/ceph/ceph.client.radosgw.keyring

7. Add an rgw section to the configuration file

  $ vim /etc/ceph/ceph.conf
  [client.rgw.mon1]
  host=mon1
  keyring=/etc/ceph/ceph.client.radosgw.keyring
  rgw_frontends = civetweb port=8080

8. Enable and start the rgw

  $ systemctl enable ceph-radosgw@rgw.mon1
  $ systemctl start ceph-radosgw@rgw.mon1

9. Check that the port is listening

  $ netstat -antpu | grep 8080
  tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 7989/radosgw
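
An anonymous HTTP request against the gateway should come back with an S3-style XML response, which confirms that radosgw itself is answering; an optional check, assuming curl is installed:

  $ curl http://192.168.1.10:8080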

Deploy rgw on the other nodes in the same way. When all nodes are done, check the cluster status with ceph -s.

  $ ceph -s
    cluster:
      id:     b6f87f4d-c8f7-48c0-892e-71bc16cce7ff
      health: HEALTH_WARN
              too few PGs per OSD (10 < min 30)
    services:
      mon: 3 daemons, quorum mon1,mon2,mon3
      mgr: mon1(active), standbys: mon2, mon3
      osd: 9 osds: 9 up, 9 in
      rgw: 3 daemons active
    data:
      pools:   4 pools, 32 pgs
      objects: 187 objects, 1.09KiB
      usage:   976MiB used, 134GiB / 135GiB avail
      pgs:     32 active+clean

Configure s3cmd

1. Install the package

  $ yum -y install s3cmd

2. Create an S3 user

  $ radosgw-admin user create --uid=admin --access-key=123456 --secret-key=123456 --display-name=admin
  {
      "user_id": "admin",
      "display_name": "admin",
      "email": "",
      "suspended": 0,
      "max_buckets": 1000,
      "auid": 0,
      "subusers": [],
      "keys": [
          {
              "user": "admin",
              "access_key": "123456",
              "secret_key": "123456"
          }
      ],
      "swift_keys": [],
      "caps": [],
      "op_mask": "read, write, delete",
      "default_placement": "",
      "placement_tags": [],
      "bucket_quota": {
          "enabled": false,
          "check_on_raw": false,
          "max_size": -1,
          "max_size_kb": 0,
          "max_objects": -1
      },
      "user_quota": {
          "enabled": false,
          "check_on_raw": false,
          "max_size": -1,
          "max_size_kb": 0,
          "max_objects": -1
      },
      "temp_url_keys": [],
      "type": "rgw"
  }

3. Create the s3cmd configuration file

  $ vim .s3cfg
  [default]
  access_key = 123456
  bucket_location = US
  cloudfront_host = 192.168.1.10:8080
  cloudfront_resource = /2010-07-15/distribution
  default_mime_type = binary/octet-stream
  delete_removed = False
  dry_run = False
  encoding = UTF-8
  encrypt = False
  follow_symlinks = False
  force = False
  get_continue = False
  gpg_command = /usr/bin/gpg
  gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
  gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
  gpg_passphrase =
  guess_mime_type = True
  host_base = 192.168.1.10:8080
  host_bucket = 192.168.1.10:8080/%(bucket)
  human_readable_sizes = False
  list_md5 = False
  log_target_prefix =
  preserve_attrs = True
  progress_meter = True
  proxy_host =
  proxy_port = 0
  recursive = False
  recv_chunk = 4096
  reduced_redundancy = False
  secret_key = 123456
  send_chunk = 96
  simpledb_host = sdb.amazonaws.com
  skip_existing = False
  socket_timeout = 300
  urlencoding_mode = normal
  use_https = False
  verbosity = WARNING
  signature_v2 = True

4. Edit ceph.conf and add default pg_num and pgp_num settings to the [global] section

  $ vim /etc/ceph/ceph.conf
  osd_pool_default_pg_num = 64
  osd_pool_default_pgp_num = 64

5. Restart the Monitor so the configuration takes effect

  $ systemctl restart ceph-mon@mon1
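
These defaults only apply to pools created from now on; the earlier HEALTH_WARN came from the existing pools (32 PGs × 3 replicas spread over 9 OSDs is roughly 10 PGs per OSD). If needed, the PG count of an existing pool can also be raised directly; a hedged example, substitute the real pool name:

  $ ceph osd pool set <pool-name> pg_num 64
  $ ceph osd pool set <pool-name> pgp_num 64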

6. Create a bucket and test uploading a file

  $ s3cmd mb s3://test
  Bucket 's3://test/' created
  $ s3cmd put osd.sh s3://test
  upload: 'osd.sh' -> 's3://test/osd.sh' [1 of 1]
  1684 of 1684 100% in 1s 1327.43 B/s done
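
The upload can be confirmed by listing the bucket; an optional check:

  $ s3cmd ls s3://test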

7. Check the cluster status again

  $ ceph -s
    cluster:
      id:     b6f87f4d-c8f7-48c0-892e-71bc16cce7ff
      health: HEALTH_OK
    services:
      mon: 3 daemons, quorum mon1,mon2,mon3
      mgr: mon1(active), standbys: mon2, mon3
      osd: 9 osds: 9 up, 9 in
      rgw: 3 daemons active
    data:
      pools:   6 pools, 160 pgs
      objects: 194 objects, 3.39KiB
      usage:   979MiB used, 134GiB / 135GiB avail
      pgs:     160 active+clean