Manual Cluster Deployment

Compile and Build

Use the following commands to build the server, the client, and the related dependencies at the same time:

  $ git clone http://github.com/chubaofs/chubaofs.git
  $ cd chubaofs
  $ make build

If the build succeeds, the executables `cfs-server` and `cfs-client` will be generated in the `build/bin` directory.
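
A quick way to confirm the artifacts before moving on (the paths follow the build output described above):

  $ ls -lh build/bin/cfs-server build/bin/cfs-client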

Cluster Deployment

Start the Resource Manager (Master)

  nohup ./cfs-server -c master.json &

Example master.json. Note: at least 3 instances of the master service should be started.

  {
    "role": "master",
    "ip": "10.196.59.198",
    "listen": "17010",
    "prof": "17020",
    "id": "1",
    "peers": "1:10.196.59.198:17010,2:10.196.59.199:17010,3:10.196.59.200:17010",
    "retainLogs": "20000",
    "logDir": "/cfs/master/log",
    "logLevel": "info",
    "walDir": "/cfs/master/data/wal",
    "storeDir": "/cfs/master/data/store",
    "consulAddr": "http://consul.prometheus-cfs.local",
    "exporterPort": 9500,
    "clusterName": "chubaofs01",
    "metaNodeReservedMem": "1073741824"
  }

For detailed configuration parameters, see Resource Manager (Master).
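
Once all three master instances are up, you can sanity-check that the replica group has formed by querying the master's admin API. A minimal check, assuming the example address above (/admin/getCluster returns a cluster overview):

  curl -s "http://10.196.59.198:17010/admin/getCluster" | python -m json.tool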

Start the Metadata Node

  nohup ./cfs-server -c metanode.json &

Example metanode.json. Note: at least 3 instances of the metanode service should be started.

  {
    "role": "metanode",
    "listen": "17210",
    "prof": "17220",
    "logLevel": "info",
    "metadataDir": "/cfs/metanode/data/meta",
    "logDir": "/cfs/metanode/log",
    "raftDir": "/cfs/metanode/data/raft",
    "raftHeartbeatPort": "17230",
    "raftReplicaPort": "17240",
    "totalMem": "8589934592",
    "consulAddr": "http://consul.prometheus-cfs.local",
    "exporterPort": 9501,
    "masterAddr": [
      "10.196.59.198:17010",
      "10.196.59.199:17010",
      "10.196.59.200:17010"
    ]
  }

For detailed configuration parameters, see Metadata Node.
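
The metanode reads and writes the directories named in its config. A small preparation sketch, assuming the example paths above (adjust if you changed them):

  mkdir -p /cfs/metanode/data/meta /cfs/metanode/data/raft /cfs/metanode/log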

Start the Object Storage Node (ObjectNode)

  nohup ./cfs-server -c objectnode.json &

Example objectnode.json:

  {
    "role": "objectnode",
    "domains": [
      "object.cfs.local"
    ],
    "listen": 17410,
    "masterAddr": [
      "10.196.59.198:17010",
      "10.196.59.199:17010",
      "10.196.59.200:17010"
    ],
    "logLevel": "info",
    "logDir": "/cfs/Logs/objectnode"
  }

For details of the configuration file objectnode.json, see Object Storage (ObjectNode).
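
The ObjectNode exposes an S3-compatible endpoint on the configured listen port. A minimal access sketch using the AWS CLI, assuming the example domain and port above and that an access key/secret key pair for a cluster user has already been created and set up with aws configure:

  aws s3 ls --endpoint-url http://object.cfs.local:17410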

Start the Console (Optional)

  nohup ./cfs-server -c console.json &

Example console.json:

  {
    "role": "console",
    "logDir": "/cfs/log/",
    "logLevel": "debug",
    "listen": "80",
    "masterAddr": [
      "192.168.0.11:17010",
      "192.168.0.12:17010",
      "192.168.0.13:17010"
    ],
    "objectNodeDomain": "object.chubao.io",
    "master_instance": "192.168.0.11:9066",
    "monitor_addr": "http://192.168.0.102:9090",
    "dashboard_addr": "http://192.168.0.103",
    "monitor_app": "cfs",
    "monitor_cluster": "cfs"
  }

For details of the configuration file console.json, see Console.
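
A quick reachability check once the console is running (replace <console-host> with the host actually running the console; it listens on port 80 in the example config):

  curl -sI "http://<console-host>/" | head -n 1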


Start the Data Node

  1. Prepare the data directories

     Recommended: use dedicated disks as data directories; configuring multiple disks yields higher performance.

     Disk preparation:

     1.1 Check the machine's disk information and choose the disks to be used by ChubaoFS

       fdisk -l

     1.2 Format the disks; XFS is recommended

       mkfs.xfs -f /dev/sdx

     1.3 Create a mount directory

       mkdir /data0

     1.4 Mount the disk (to make the mount persist across reboots, see the /etc/fstab sketch after this section)

       mount /dev/sdx /data0
  2. Start the data node

       nohup ./cfs-server -c datanode.json &

     Example datanode.json. Note: at least 4 instances of the datanode service should be started.

       {
         "role": "datanode",
         "listen": "17310",
         "prof": "17320",
         "logDir": "/cfs/datanode/log",
         "logLevel": "info",
         "raftHeartbeat": "17330",
         "raftReplica": "17340",
         "raftDir": "/cfs/datanode/log",
         "consulAddr": "http://consul.prometheus-cfs.local",
         "exporterPort": 9502,
         "masterAddr": [
           "10.196.59.198:17010",
           "10.196.59.199:17010",
           "10.196.59.200:17010"
         ],
         "disks": [
           "/data0:10737418240",
           "/data1:10737418240"
         ]
       }

     For detailed configuration parameters, see Data Node.
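
The mount in step 1.4 does not persist across reboots. A minimal /etc/fstab sketch for the example disk and mount point above (/dev/sdx and /data0 are the placeholders from the preparation steps; using UUID=... from blkid is more robust than a raw device name):

  /dev/sdx  /data0  xfs  defaults  0 0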


Create a Volume

  curl -v "http://10.196.59.198:17010/admin/createVol?name=test&capacity=10000&owner=cfs"

If you plan to run performance tests, call the corresponding API to create enough data partitions; for example, a cluster with 8 disks needs 80 data partitions. A sketch of this call follows below.
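
A sketch of pre-creating those data partitions through the resource manager's admin API, assuming the example volume name test and the 8-disk/80-partition sizing above (see the Resource Manager API reference for the full parameter list):

  curl -v "http://10.196.59.198:17010/dataPartition/create?count=80&name=test"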

Mount the Client

  1. Run modprobe fuse to load the FUSE kernel module.

  2. Run yum install -y fuse to install libfuse.

  3. Run nohup ./cfs-client -c fuse.json & to start the client.

     Example fuse.json:

       {
         "mountPoint": "/cfs/mountpoint",
         "volName": "ltptest",
         "owner": "ltptest",
         "masterAddr": "10.196.59.198:17010,10.196.59.199:17010,10.196.59.200:17010",
         "logDir": "/cfs/client/log",
         "profPort": "17510",
         "exporterPort": "9504",
         "logLevel": "info"
       }

For detailed configuration parameters, see Client.

Users can start multiple clients on the same machine at the same time by using different mount points, as sketched below.
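
A minimal sketch of a second client on the same machine (fuse2.json is a hypothetical file name; the mount point, log directory, profPort, and exporterPort are changed so they do not collide with the first client):

  {
    "mountPoint": "/cfs/mountpoint2",
    "volName": "ltptest",
    "owner": "ltptest",
    "masterAddr": "10.196.59.198:17010,10.196.59.199:17010,10.196.59.200:17010",
    "logDir": "/cfs/client2/log",
    "profPort": "17511",
    "exporterPort": "9505",
    "logLevel": "info"
  }

  nohup ./cfs-client -c fuse2.json &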

Upgrade Notes

Before upgrading the cluster's data nodes and metadata nodes, first disable automatic data partition expansion for volumes.

  1. Freeze the cluster

       curl -v "http://10.196.59.198:17010/cluster/freeze?enable=true"

  2. Upgrade the nodes

  3. Re-enable automatic data partition expansion

       curl -v "http://10.196.59.198:17010/cluster/freeze?enable=false"

Note: the ports in each node's configuration file must not be changed when upgrading nodes.