OpenStack-Wallaby Deployment Guide

Introduction to OpenStack

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable, flexible cloud computing.

As an open-source cloud computing management platform, OpenStack combines several core components, including nova, cinder, neutron, glance, keystone, and horizon, to get its work done. OpenStack supports almost all types of cloud environments. The project aims to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure as a Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 21.09 repositories include the OpenStack-Wallaby release. After configuring the yum repositories, users can follow this document to deploy OpenStack.

Conventions

OpenStack can be deployed in several topologies. This document covers the All in One and Distributed deployment modes, with the following conventions:

All in One mode:

  1. Ignore all suffixes described below.

Distributed mode:

  1. A `(CTL)` suffix means the configuration or command applies only to the `controller node`.
  2. A `(CPT)` suffix means the configuration or command applies only to the `compute node`.
  3. A `(STG)` suffix means the configuration or command applies only to the `storage node`.
  4. Anything without a suffix applies to both the `controller node` and the `compute node`.

Note

The services affected by these conventions are:

  • Cinder
  • Nova
  • Neutron

Preparing the Environment

Configuring the environment

  1. Configure the official 21.09 yum repositories. The EPOL repository must be enabled to provide the OpenStack packages. The heredoc delimiter is quoted so that `$basearch` stays literal in the repository file instead of being expanded by the shell:

    ```shell
    cat << 'EOF' >> /etc/yum.repos.d/21.09-OpenStack_Wallaby.repo
    [OS]
    name=OS
    baseurl=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler

    [everything]
    name=everything
    baseurl=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/RPM-GPG-KEY-openEuler

    [EPOL]
    name=EPOL
    baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler
    EOF

    yum clean all && yum makecache
    ```

  2. Set the hostname and host mappings.

    Set the hostname of each node:

    ```shell
    hostnamectl set-hostname controller    (CTL)
    hostnamectl set-hostname compute       (CPT)
    ```

    Assuming the IP address of the controller node is 10.0.0.11 and that of the compute node (if present) is 10.0.0.12, add the following to /etc/hosts:

    ```shell
    10.0.0.11 controller
    10.0.0.12 compute
    ```
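The mappings above can also be appended by a small script so that rerunning it never duplicates entries; `add_host_entry` is an illustrative helper, not part of the official guide:

```shell
# Append a hosts entry only when it is not already present (idempotent).
# The target file is a parameter so the helper can be exercised safely.
add_host_entry() {
    local file=$1 entry=$2
    grep -qxF "$entry" "$file" || echo "$entry" >> "$file"
}

# Usage on the nodes (entries from the example above):
# add_host_entry /etc/hosts "10.0.0.11 controller"
# add_host_entry /etc/hosts "10.0.0.12 compute"
```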

Installing the SQL Database

  1. Install the packages:

    ```shell
    yum install mariadb mariadb-server python3-PyMySQL
    ```

  2. Create and edit the /etc/my.cnf.d/openstack.cnf file:

    ```shell
    vim /etc/my.cnf.d/openstack.cnf

    [mysqld]
    bind-address = 10.0.0.11
    default-storage-engine = innodb
    innodb_file_per_table = on
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8
    ```

    Note

    Set bind-address to the management IP address of the controller node.

  3. Start the database service and configure it to start at boot:

    ```shell
    systemctl enable mariadb.service
    systemctl start mariadb.service
    ```

  4. Set the default database password (optional):

    ```shell
    mysql_secure_installation
    ```

    Note

    Follow the interactive prompts.
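The rest of this guide uses placeholders such as KEYSTONE_DBPASS and RABBIT_PASS for service passwords. One way to generate strong values for them, assuming openssl is available (`gen_pass` is an illustrative helper, not part of the guide):

```shell
# Print a 32-character random hexadecimal password.
gen_pass() {
    openssl rand -hex 16
}

# Example: generate a password for the keystone database account.
KEYSTONE_DBPASS=$(gen_pass)
echo "$KEYSTONE_DBPASS"
```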

Installing RabbitMQ

  1. Install the packages:

    ```shell
    yum install rabbitmq-server
    ```

  2. Start the RabbitMQ service and configure it to start at boot:

    ```shell
    systemctl enable rabbitmq-server.service
    systemctl start rabbitmq-server.service
    ```

  3. Add the openstack user:

    ```shell
    rabbitmqctl add_user openstack RABBIT_PASS
    ```

    Note

    Replace RABBIT_PASS with the password you choose for the openstack user.

  4. Grant the openstack user configure, write, and read permissions:

    ```shell
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    ```
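The services configured later (Nova, Neutron, Cinder) reach this queue through a transport_url of the form rabbit://user:password@host:port/. A sketch of how that string is assembled (`amqp_url` is an illustrative helper, not part of the guide):

```shell
# Build the RabbitMQ transport URL used in the [DEFAULT] sections below.
# Arguments: user, password, host, optional port (default 5672).
amqp_url() {
    local user=$1 pass=$2 host=$3 port=${4:-5672}
    printf 'rabbit://%s:%s@%s:%s/' "$user" "$pass" "$host" "$port"
}

amqp_url openstack RABBIT_PASS controller
# rabbit://openstack:RABBIT_PASS@controller:5672/
```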

Installing Memcached

  1. Install the dependency packages:

    ```shell
    yum install memcached python3-memcached
    ```

  2. Edit the /etc/sysconfig/memcached file:

    ```shell
    vim /etc/sysconfig/memcached

    OPTIONS="-l 127.0.0.1,::1,controller"
    ```

  3. Start the Memcached service and configure it to start at boot:

    ```shell
    systemctl enable memcached.service
    systemctl start memcached.service
    ```

    Note

    After the service starts, you can run memcached-tool controller stats to confirm that it started normally and is available; controller can be replaced with the management IP address of the controller node.

Installing OpenStack

Keystone Installation

  1. Create the keystone database and grant privileges:

    ```shell
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE keystone;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note

    Replace KEYSTONE_DBPASS with the password you choose for the keystone database.

  2. Install the packages:

    ```shell
    yum install openstack-keystone httpd mod_wsgi
    ```

  3. Configure Keystone:

    ```shell
    vim /etc/keystone/keystone.conf

    [database]
    connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

    [token]
    provider = fernet
    ```

    Explanation

    The [database] section configures the database entry point.

    The [token] section configures the token provider.

    Note:

    Replace KEYSTONE_DBPASS with the password of the keystone database.
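The connection option above follows the SQLAlchemy URL scheme, and the same pattern recurs for every service in this guide. A sketch of how it is put together (`db_url` is an illustrative helper, not part of the guide):

```shell
# Assemble a mysql+pymysql connection string from user, password, host, database.
db_url() {
    printf 'mysql+pymysql://%s:%s@%s/%s' "$1" "$2" "$3" "$4"
}

db_url keystone KEYSTONE_DBPASS controller keystone
# mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
```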

  4. Synchronize the database:

    ```shell
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    ```

  5. Initialize the Fernet key repositories:

    ```shell
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    ```

  6. Bootstrap the identity service:

    ```shell
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
    ```

    Note

    Replace ADMIN_PASS with the password you choose for the admin user.

  7. Configure the Apache HTTP server:

    ```shell
    vim /etc/httpd/conf/httpd.conf

    ServerName controller
    ```

    ```shell
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    ```

    Explanation

    Set the ServerName option to reference the controller node.

    Note: if the ServerName option does not exist, create it.

  8. Start the Apache HTTP service:

    ```shell
    systemctl enable httpd.service
    systemctl start httpd.service
    ```

  9. Create the environment variable file:

    ```shell
    cat << EOF >> ~/.admin-openrc
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    EOF
    ```

    Note

    Replace ADMIN_PASS with the password of the admin user.
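Before moving on, it can help to confirm the credential file exports everything the client needs; `check_openrc` below is an illustrative helper, not part of the official workflow:

```shell
# Source a credential file in a subshell and verify the essential variables.
check_openrc() {
    (
        . "$1" || exit 1
        [ "$OS_USERNAME" = admin ] || exit 1
        [ -n "$OS_PASSWORD" ] || exit 1
        [ -n "$OS_AUTH_URL" ] || exit 1
    )
}

# check_openrc ~/.admin-openrc && echo "admin-openrc looks complete"
```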

  10. Create the domain, projects, users, and roles. python3-openstackclient must be installed first:

    ```shell
    yum install python3-openstackclient
    ```

    Source the environment variables:

    ```shell
    source ~/.admin-openrc
    ```

    Create the project service; the domain default was already created by keystone-manage bootstrap:

    ```shell
    openstack domain create --description "An Example Domain" example
    openstack project create --domain default --description "Service Project" service
    ```

    Create a (non-admin) project myproject, a user myuser, and a role myrole, then add the role myrole to the project myproject and the user myuser:

    ```shell
    openstack project create --domain default --description "Demo Project" myproject
    openstack user create --domain default --password-prompt myuser
    openstack role create myrole
    openstack role add --project myproject --user myuser myrole
    ```

  11. Verification

    Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

    ```shell
    source ~/.admin-openrc
    unset OS_AUTH_URL OS_PASSWORD
    ```

    Request a token for the admin user:

    ```shell
    openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
    ```

    Request a token for the myuser user:

    ```shell
    openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
    ```

Glance Installation

  1. Create the database, service credentials, and API endpoints.

    Create the database:

    ```shell
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE glance;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note:

    Replace GLANCE_DBPASS with the password you choose for the glance database.

    Create the service credentials:

    ```shell
    source ~/.admin-openrc

    openstack user create --domain default --password-prompt glance
    openstack role add --project service --user glance admin
    openstack service create --name glance --description "OpenStack Image" image
    ```

    Create the image service API endpoints:

    ```shell
    openstack endpoint create --region RegionOne image public http://controller:9292
    openstack endpoint create --region RegionOne image internal http://controller:9292
    openstack endpoint create --region RegionOne image admin http://controller:9292
    ```

  2. Install the packages:

    ```shell
    yum install openstack-glance
    ```

  3. Configure Glance:

    ```shell
    vim /etc/glance/glance-api.conf

    [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = glance
    password = GLANCE_PASS

    [paste_deploy]
    flavor = keystone

    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/
    ```

    Explanation:

    The [database] section configures the database entry point;

    The [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point;

    The [glance_store] section configures the local file system store and the location of the image files.

    Note

    Replace GLANCE_DBPASS with the password of the glance database;

    Replace GLANCE_PASS with the password of the glance user.

  4. Synchronize the database:

    ```shell
    su -s /bin/sh -c "glance-manage db_sync" glance
    ```

  5. Start the service:

    ```shell
    systemctl enable openstack-glance-api.service
    systemctl start openstack-glance-api.service
    ```

  6. Verification

    Download an image:

    ```shell
    source ~/.admin-openrc
    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    ```

    Note

    If your environment uses the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

    Upload the image to the Image service:

    ```shell
    openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
    ```

    Confirm that the image was uploaded and verify its attributes:

    ```shell
    openstack image list
    ```
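Before uploading with --disk-format qcow2, you can sanity-check that the local file really is qcow2: such images start with the magic bytes "QFI" followed by 0xFB. `is_qcow2` is an illustrative helper, not part of the guide:

```shell
# Return success when the file begins with "QFI", the printable part of the
# qcow2 magic number.
is_qcow2() {
    [ "$(head -c 3 "$1" 2>/dev/null)" = "QFI" ]
}

# is_qcow2 cirros-0.4.0-x86_64-disk.img && echo "qcow2 image"
```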

Placement Installation

  1. Create the database, service credentials, and API endpoints.

    Create the database:

    As the root user, access the database, then create the placement database and grant privileges:

    ```shell
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE placement;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note

    Replace PLACEMENT_DBPASS with the password you choose for the placement database.

    ```shell
    source ~/.admin-openrc
    ```

    Run the following commands to create the placement service credentials: create the placement user, add the admin role to it, and create the Placement API service:

    ```shell
    openstack user create --domain default --password-prompt placement
    openstack role add --project service --user placement admin
    openstack service create --name placement --description "Placement API" placement
    ```

    Create the placement service API endpoints:

    ```shell
    openstack endpoint create --region RegionOne placement public http://controller:8778
    openstack endpoint create --region RegionOne placement internal http://controller:8778
    openstack endpoint create --region RegionOne placement admin http://controller:8778
    ```

  2. Install and configure.

    Install the packages:

    ```shell
    yum install openstack-placement-api
    ```

    Configure placement by editing the /etc/placement/placement.conf file:

    In the [placement_database] section, configure the database entry point.

    In the [api] and [keystone_authtoken] sections, configure the identity service entry point.

    ```shell
    # vim /etc/placement/placement.conf

    [placement_database]
    # ...
    connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

    [api]
    # ...
    auth_strategy = keystone

    [keystone_authtoken]
    # ...
    auth_url = http://controller:5000/v3
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = placement
    password = PLACEMENT_PASS
    ```

    Replace PLACEMENT_DBPASS with the password of the placement database, and PLACEMENT_PASS with the password of the placement user.

    Synchronize the database:

    ```shell
    su -s /bin/sh -c "placement-manage db sync" placement
    ```

    Restart the httpd service:

    ```shell
    systemctl restart httpd
    ```

  3. Verification

    Run a status check:

    ```shell
    . ~/.admin-openrc
    placement-status upgrade check
    ```

    Install osc-placement and list the available resource classes and traits:

    ```shell
    yum install python3-osc-placement
    openstack --os-placement-api-version 1.2 resource class list --sort-column name
    openstack --os-placement-api-version 1.6 trait list --sort-column name
    ```

Nova Installation

  1. Create the database, service credentials, and API endpoints.

    Create the databases:

    ```shell
    mysql -u root -p    (CTL)

    MariaDB [(none)]> CREATE DATABASE nova_api;
    MariaDB [(none)]> CREATE DATABASE nova;
    MariaDB [(none)]> CREATE DATABASE nova_cell0;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note

    Replace NOVA_DBPASS with the password you choose for the nova databases.

    ```shell
    source ~/.admin-openrc    (CTL)
    ```

    Create the nova service credentials:

    ```shell
    openstack user create --domain default --password-prompt nova    (CTL)
    openstack role add --project service --user nova admin    (CTL)
    openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
    ```

    Create the nova API endpoints:

    ```shell
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
    openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
    openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1    (CTL)
    ```

  2. Install the packages:

    ```shell
    yum install openstack-nova-api openstack-nova-conductor \    (CTL)
    openstack-nova-novncproxy openstack-nova-scheduler
    ```

    ```shell
    yum install openstack-nova-compute    (CPT)
    ```

    Note

    If the architecture is arm64, the following command is also required:

    ```shell
    yum install edk2-aarch64    (CPT)
    ```

  3. Configure Nova:

    ```shell
    vim /etc/nova/nova.conf

    [DEFAULT]
    enabled_apis = osapi_compute,metadata
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    my_ip = 10.0.0.11
    use_neutron = true
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    compute_driver = libvirt.LibvirtDriver    (CPT)
    instances_path = /var/lib/nova/instances/    (CPT)
    lock_path = /var/lib/nova/tmp    (CPT)

    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova    (CTL)

    [api]
    auth_strategy = keystone

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000/
    auth_url = http://controller:5000/
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = nova
    password = NOVA_PASS

    [vnc]
    enabled = true
    server_listen = $my_ip
    server_proxyclient_address = $my_ip
    novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

    [libvirt]
    virt_type = qemu    (CPT)
    cpu_mode = custom    (CPT)
    cpu_model = cortex-a72    (CPT)

    [glance]
    api_servers = http://controller:9292

    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp    (CTL)

    [placement]
    region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = PLACEMENT_PASS

    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    service_metadata_proxy = true    (CTL)
    metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
    ```

    Explanation

    The [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, sets my_ip, and enables the neutron network service;

    The [api_database] and [database] sections configure the database entry points;

    The [api] and [keystone_authtoken] sections configure the identity service entry point;

    The [vnc] section enables and configures the remote console entry point;

    The [glance] section configures the address of the image service API;

    The [oslo_concurrency] section configures the lock path;

    The [placement] section configures the entry point of the placement service;

    The [neutron] section configures access to the network service.

    Note

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ;

    Set my_ip to the management IP address of the node (10.0.0.11 for the controller in the example used by this document);

    Replace NOVA_DBPASS with the password of the nova database;

    Replace NOVA_PASS with the password of the nova user;

    Replace PLACEMENT_PASS with the password of the placement user;

    Replace NEUTRON_PASS with the password of the neutron user;

    Replace METADATA_SECRET with a suitable metadata proxy secret.

    Additional

    Determine whether the host supports hardware acceleration for virtual machines (x86 architecture):

    ```shell
    egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
    ```

    If the command returns 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of KVM:

    ```shell
    vim /etc/nova/nova.conf    (CPT)

    [libvirt]
    virt_type = qemu
    ```

    If the command returns 1 or more, hardware acceleration is supported and no extra configuration is needed.

    Note

    If the architecture is arm64, the following configuration is also required (CPT):

    ```shell
    vim /etc/libvirt/qemu.conf

    nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
             /usr/share/AAVMF/AAVMF_VARS.fd", \
            "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
             /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    ```

    ```shell
    vim /etc/qemu/firmware/edk2-aarch64.json

    {
        "description": "UEFI firmware for ARM64 virtual machines",
        "interface-types": [
            "uefi"
        ],
        "mapping": {
            "device": "flash",
            "executable": {
                "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
                "format": "raw"
            },
            "nvram-template": {
                "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
                "format": "raw"
            }
        },
        "targets": [
            {
                "architecture": "aarch64",
                "machines": [
                    "virt-*"
                ]
            }
        ],
        "features": [
        ],
        "tags": [
        ]
    }
    ```
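The hardware-acceleration check above can be wrapped in a helper so the same logic works in scripts; `detect_virt_type` is an illustrative sketch that reads /proc/cpuinfo by default:

```shell
# Choose the libvirt virt_type: kvm when the vmx/svm CPU flags are present,
# qemu otherwise. The file to inspect is a parameter for testability.
detect_virt_type() {
    local cpuinfo=${1:-/proc/cpuinfo}
    if grep -Eq 'vmx|svm' "$cpuinfo"; then
        echo kvm
    else
        echo qemu
    fi
}

# detect_virt_type    # inspect the local machine
```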
  4. Synchronize the databases.

    Synchronize the nova-api database:

    ```shell
    su -s /bin/sh -c "nova-manage api_db sync" nova    (CTL)
    ```

    Register the cell0 database:

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (CTL)
    ```

    Create the cell1 cell:

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
    ```

    Synchronize the nova database:

    ```shell
    su -s /bin/sh -c "nova-manage db sync" nova    (CTL)
    ```

    Verify that cell0 and cell1 are registered correctly:

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova    (CTL)
    ```

    Add the compute node to the OpenStack cluster (run on the controller node, which has the database configuration, after the compute service is up):

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CTL)
    ```

  5. Start the services:

    ```shell
    systemctl enable \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
    systemctl start \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
    ```

    ```shell
    systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
    systemctl start libvirtd.service openstack-nova-compute.service    (CPT)
    ```

  6. Verification

    ```shell
    source ~/.admin-openrc    (CTL)
    ```

    List the service components to verify that each process started and registered successfully:

    ```shell
    openstack compute service list    (CTL)
    ```

    List the API endpoints in the identity service to verify connectivity to it:

    ```shell
    openstack catalog list    (CTL)
    ```

    List the images in the image service to verify connectivity to it:

    ```shell
    openstack image list    (CTL)
    ```

    Check that the cells are functioning and that the other necessary prerequisites are in place:

    ```shell
    nova-status upgrade check    (CTL)
    ```

Neutron Installation

  1. Create the database, service credentials, and API endpoints.

    Create the database:

    ```shell
    mysql -u root -p    (CTL)

    MariaDB [(none)]> CREATE DATABASE neutron;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note

    Replace NEUTRON_DBPASS with the password you choose for the neutron database.

    ```shell
    source ~/.admin-openrc    (CTL)
    ```

    Create the neutron service credentials:

    ```shell
    openstack user create --domain default --password-prompt neutron    (CTL)
    openstack role add --project service --user neutron admin    (CTL)
    openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
    ```

    Create the network service API endpoints:

    ```shell
    openstack endpoint create --region RegionOne network public http://controller:9696    (CTL)
    openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
    openstack endpoint create --region RegionOne network admin http://controller:9696    (CTL)
    ```

  2. Install the packages:

    ```shell
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
    openstack-neutron-ml2
    ```

    ```shell
    yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
    ```

  3. Configure Neutron.

    Configure the main settings:

    ```shell
    vim /etc/neutron/neutron.conf

    [database]
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

    [DEFAULT]
    core_plugin = ml2    (CTL)
    service_plugins = router    (CTL)
    allow_overlapping_ips = true    (CTL)
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    notify_nova_on_port_status_changes = true    (CTL)
    notify_nova_on_port_data_changes = true    (CTL)
    api_workers = 3    (CTL)

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = neutron
    password = NEUTRON_PASS

    [nova]
    auth_url = http://controller:5000    (CTL)
    auth_type = password    (CTL)
    project_domain_name = Default    (CTL)
    user_domain_name = Default    (CTL)
    region_name = RegionOne    (CTL)
    project_name = service    (CTL)
    username = nova    (CTL)
    password = NOVA_PASS    (CTL)

    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    ```

    Explanation

    The [database] section configures the database entry point;

    The [DEFAULT] section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point;

    The [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point;

    The [DEFAULT] and [nova] sections configure networking to notify compute of network topology changes;

    The [oslo_concurrency] section configures the lock path.

    Note

    Replace NEUTRON_DBPASS with the password of the neutron database;

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ;

    Replace NEUTRON_PASS with the password of the neutron user;

    Replace NOVA_PASS with the password of the nova user.

    Configure the ML2 plugin:

    ```shell
    vim /etc/neutron/plugins/ml2/ml2_conf.ini

    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = linuxbridge,l2population
    extension_drivers = port_security

    [ml2_type_flat]
    flat_networks = provider

    [ml2_type_vxlan]
    vni_ranges = 1:1000

    [securitygroup]
    enable_ipset = true
    ```

    Create a symbolic link at /etc/neutron/plugin.ini:

    ```shell
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    ```

    Note

    The [ml2] section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver;

    The [ml2_type_flat] section configures the flat network as the provider virtual network;

    The [ml2_type_vxlan] section configures the VXLAN network identifier range;

    The [securitygroup] section enables ipset.

    Remarks

    The concrete L2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

    Configure the Linux bridge agent:

    ```shell
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

    [linux_bridge]
    physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    [vxlan]
    enable_vxlan = true
    local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    l2_population = true

    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    ```

    Explanation

    The [linux_bridge] section maps the provider virtual network to the physical network interface;

    The [vxlan] section enables the VXLAN overlay network, configures the IP address of the physical interface that handles the overlay traffic, and enables layer-2 population;

    The [securitygroup] section enables security groups and configures the linux bridge iptables firewall driver.

    Note

    Replace PROVIDER_INTERFACE_NAME with the name of the physical network interface;

    Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

    Configure the Layer-3 agent:

    ```shell
    vim /etc/neutron/l3_agent.ini    (CTL)

    [DEFAULT]
    interface_driver = linuxbridge
    ```

    Explanation

    In the [DEFAULT] section, set the interface driver to linuxbridge.

    Configure the DHCP agent:

    ```shell
    vim /etc/neutron/dhcp_agent.ini    (CTL)

    [DEFAULT]
    interface_driver = linuxbridge
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    enable_isolated_metadata = true
    ```

    Explanation

    The [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

    Configure the metadata agent:

    ```shell
    vim /etc/neutron/metadata_agent.ini    (CTL)

    [DEFAULT]
    nova_metadata_host = controller
    metadata_proxy_shared_secret = METADATA_SECRET
    ```

    Explanation

    The [DEFAULT] section configures the metadata host and the shared secret.

    Note

    Replace METADATA_SECRET with a suitable metadata proxy secret.

  4. Configure Nova to use Neutron:

    ```shell
    vim /etc/nova/nova.conf

    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    service_metadata_proxy = true    (CTL)
    metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
    ```

    Explanation

    The [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

    Note

    Replace NEUTRON_PASS with the password of the neutron user;

    Replace METADATA_SECRET with a suitable metadata proxy secret.
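METADATA_SECRET must be identical in /etc/neutron/metadata_agent.ini and /etc/nova/nova.conf, or instances will fail metadata signature checks. A quick consistency check (`same_secret` is an illustrative helper, not part of the guide):

```shell
# Extract metadata_proxy_shared_secret from two config files and compare.
# Fails when either value is missing or when the values differ.
same_secret() {
    local a b
    a=$(sed -n 's/^metadata_proxy_shared_secret[ =]*//p' "$1" | head -n 1)
    b=$(sed -n 's/^metadata_proxy_shared_secret[ =]*//p' "$2" | head -n 1)
    [ -n "$a" ] && [ "$a" = "$b" ]
}

# same_secret /etc/neutron/metadata_agent.ini /etc/nova/nova.conf && echo "secrets match"
```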

  5. Synchronize the database:

    ```shell
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    ```

  6. Restart the Compute API service:

    ```shell
    systemctl restart openstack-nova-api.service
    ```

  7. Start the networking services:

    ```shell
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service
    systemctl restart openstack-nova-api.service neutron-server.service \    (CTL)
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service neutron-l3-agent.service
    ```

    ```shell
    systemctl enable neutron-linuxbridge-agent.service    (CPT)
    systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
    ```

  8. Verification

    Verify that the neutron agents started successfully:

    ```shell
    openstack network agent list
    ```

Cinder Installation

  1. Create the database, service credentials, and API endpoints.

    Create the database:

    ```shell
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE cinder;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    IDENTIFIED BY 'CINDER_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    IDENTIFIED BY 'CINDER_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note

    Replace CINDER_DBPASS with the password you choose for the cinder database.

    ```shell
    source ~/.admin-openrc
    ```

    Create the cinder service credentials:

    ```shell
    openstack user create --domain default --password-prompt cinder
    openstack role add --project service --user cinder admin
    openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    ```

    Create the block storage service API endpoints:

    ```shell
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    ```

  2. Install the packages:

    ```shell
    yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)
    ```

    ```shell
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
    openstack-cinder-volume openstack-cinder-backup
    ```

  3. Prepare the storage device. The following is only an example:

    ```shell
    pvcreate /dev/vdb
    vgcreate cinder-volumes /dev/vdb

    vim /etc/lvm/lvm.conf

    devices {
    ...
    filter = [ "a/vdb/", "r/.*/"]
    ```

    Explanation

    In the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

  4. Prepare the NFS share:

    ```shell
    mkdir -p /root/cinder/backup

    cat << EOF >> /etc/exports
    /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    EOF
    ```

  5. Configure Cinder:

    ```shell
    vim /etc/cinder/cinder.conf

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    my_ip = 10.0.0.11
    enabled_backends = lvm    (STG)
    backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver    (STG)
    backup_share=HOST:PATH    (STG)

    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = cinder
    password = CINDER_PASS

    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    (STG)
    volume_group = cinder-volumes    (STG)
    iscsi_protocol = iscsi    (STG)
    iscsi_helper = tgtadm    (STG)
    ```

    Explanation

    The [database] section configures the database entry point;

    The [DEFAULT] section configures the RabbitMQ message queue entry point and my_ip;

    The [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point;

    The [oslo_concurrency] section configures the lock path.

    Note

    Replace CINDER_DBPASS with the password of the cinder database;

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ;

    Set my_ip to the management IP address of the controller node;

    Replace CINDER_PASS with the password of the cinder user;

    Replace HOST:PATH with the NFS host IP and the shared path.

  6. Synchronize the database:

    ```shell
    su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
    ```

  7. Configure Nova:

    ```shell
    vim /etc/nova/nova.conf    (CTL)

    [cinder]
    os_region_name = RegionOne
    ```

  8. Restart the Compute API service:

    ```shell
    systemctl restart openstack-nova-api.service
    ```

  9. Start the Cinder services:

    ```shell
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
    systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
    ```

    ```shell
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
    systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
    ```

    Note

    When cinder attaches volumes via tgtadm, modify /etc/tgt/tgtd.conf as follows, so that tgtd can discover the iscsi targets of cinder-volume:

    ```shell
    include /var/lib/cinder/volumes/*
    ```

  10. Verification:

    ```shell
    source ~/.admin-openrc
    openstack volume service list
    ```

Horizon Installation

  1. Install the packages:

    ```shell
    yum install openstack-dashboard
    ```

  2. Modify the configuration file.

    Edit the variables:

    ```shell
    vim /etc/openstack-dashboard/local_settings

    OPENSTACK_HOST = "controller"
    ALLOWED_HOSTS = ['*', ]

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': 'controller:11211',
        }
    }

    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

    OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 3,
    }
    ```

  3. Restart the httpd service:

    ```shell
    systemctl restart httpd.service memcached.service
    ```

  4. Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to horizon.

    Note

    Replace HOSTIP with the management-plane IP address of the controller node.

Tempest Installation

Tempest is the OpenStack integration test service. It is recommended if you need comprehensive automated functional testing of an installed OpenStack environment; otherwise it does not need to be installed.

  1. Install Tempest:

    ```shell
    yum install openstack-tempest
    ```

  2. Initialize a workspace:

    ```shell
    tempest init mytest
    ```

  3. Modify the configuration file:

    ```shell
    cd mytest
    vi etc/tempest.conf
    ```

    tempest.conf must describe the current OpenStack environment; refer to the official sample for the details.

  4. Run the tests:

    ```shell
    tempest run
    ```

  5. Install tempest plugins (optional). The OpenStack services themselves also ship tempest test packages, which users can install to extend tempest's coverage. In Wallaby we provide plugin tests for Cinder, Glance, Keystone, Ironic, and Trove, which can be installed and used with:

    ```shell
    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
    ```

Ironic Installation

Ironic is the OpenStack bare metal service. It is recommended if you need to provision bare metal machines; otherwise it does not need to be installed.

  1. Set up the database

    The bare metal service stores its information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

    1. mysql -u root -p
    2. MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
    3. MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    4. IDENTIFIED BY 'IRONIC_DBPASSWORD';
    5. MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    6. IDENTIFIED BY 'IRONIC_DBPASSWORD';
  2. Create the service user and credentials

    1. Create the Bare Metal service user:

    1. openstack user create --password IRONIC_PASSWORD \
    2. --email ironic@example.com ironic
    3. openstack role add --project service --user ironic admin
    4. openstack service create --name ironic \
    5. --description "Ironic baremetal provisioning service" baremetal
    6. openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
    7. openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
    8. openstack role add --project service --user ironic_inspector admin

    2. Create the Bare Metal service endpoints:

    1. openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
    2. openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
    3. openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
    4. openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
    5. openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
    6. openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
  3. Configure the ironic-api service

    Configuration file path: /etc/ironic/ironic.conf

    1. Configure the location of the database via the connection option, as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

    1. [database]
    2. # The SQLAlchemy connection string used to connect to the
    3. # database (string value)
    4. connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic

    2. Configure the ironic-api service to use the RabbitMQ message broker via the following option. Replace RPC_* with the RabbitMQ address details and credentials:

    1. [DEFAULT]
    2. # A URL representing the messaging driver to use and its full
    3. # configuration. (string value)
    4. transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/

    You may also use json-rpc instead of RabbitMQ.

    3. Configure the ironic-api service to use the credentials of the identity service. Replace PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the identity service:

    1. [DEFAULT]
    2. # Authentication strategy used by ironic-api: one of
    3. # "keystone" or "noauth". "noauth" should not be used in a
    4. # production environment because all authentication will be
    5. # disabled. (string value)
    6. auth_strategy=keystone
    7. host = controller
    8. memcache_servers = controller:11211
    9. enabled_network_interfaces = flat,noop,neutron
    10. default_network_interface = noop
    11. transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
    12. enabled_hardware_types = ipmi
    13. enabled_boot_interfaces = pxe
    14. enabled_deploy_interfaces = direct
    15. default_deploy_interface = direct
    16. enabled_inspect_interfaces = inspector
    17. enabled_management_interfaces = ipmitool
    18. enabled_power_interfaces = ipmitool
    19. enabled_rescue_interfaces = no-rescue,agent
    20. isolinux_bin = /usr/share/syslinux/isolinux.bin
    21. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
    22. [keystone_authtoken]
    23. # Authentication type to load (string value)
    24. auth_type=password
    25. # Complete public Identity API endpoint (string value)
    26. www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
    27. # Complete admin Identity API endpoint. (string value)
    28. auth_url=http://PRIVATE_IDENTITY_IP:5000
    29. # Service username. (string value)
    30. username=ironic
    31. # Service account password. (string value)
    32. password=IRONIC_PASSWORD
    33. # Service tenant name. (string value)
    34. project_name=service
    35. # Domain name containing project (string value)
    36. project_domain_name=Default
    37. # User's domain name (string value)
    38. user_domain_name=Default
    39. [agent]
    40. deploy_logs_collect = always
    41. deploy_logs_local_path = /var/log/ironic/deploy
    42. deploy_logs_storage_backend = local
    43. image_download_source = http
    44. stream_raw_images = false
    45. force_raw_images = false
    46. verify_ca = False
    47. [oslo_concurrency]
    48. [oslo_messaging_notifications]
    49. transport_url = rabbit://openstack:123456@172.20.19.25:5672/
    50. topics = notifications
    51. driver = messagingv2
    52. [oslo_messaging_rabbit]
    53. amqp_durable_queues = True
    54. rabbit_ha_queues = True
    55. [pxe]
    56. ipxe_enabled = false
    57. pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    58. image_cache_size = 204800
    59. tftp_root=/var/lib/tftpboot/cephfs/
    60. tftp_master_path=/var/lib/tftpboot/cephfs/master_images
    61. [dhcp]
    62. dhcp_provider = none
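
Before moving on, it can help to spot-check that key options landed in the right sections of ironic.conf. The following is a minimal pure-awk INI lookup — a sketch only (tools such as crudini do the same job more robustly); it is demonstrated here against a scratch file, but can be pointed at /etc/ironic/ironic.conf:

```shell
# Sketch: minimal INI lookup to spot-check config values.
# get_opt FILE SECTION KEY prints the value of KEY inside [SECTION].
get_opt() {
    awk -F' *= *' -v s="[$2]" -v k="$3" '
        $0 == s         { in_s = 1; next }
        /^\[/           { in_s = 0 }
        in_s && $1 == k { print $2; exit }' "$1"
}

# Demonstrate against a scratch copy (use /etc/ironic/ironic.conf for real).
conf=$(mktemp)
cat > "$conf" << 'EOF'
[DEFAULT]
auth_strategy = keystone
[database]
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
EOF
get_opt "$conf" DEFAULT auth_strategy   # prints: keystone
```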

    4. Create the bare metal service database tables:

    1. ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema

    5. Restart the ironic-api service:

    1. sudo systemctl restart openstack-ironic-api
  4. Configure the ironic-conductor service

    1. Replace HOST_IP with the IP of the conductor host:

    1. [DEFAULT]
    2. # IP address of this host. If unset, will determine the IP
    3. # programmatically. If unable to do so, will use "127.0.0.1".
    4. # (string value)
    5. my_ip=HOST_IP

    2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

    1. [database]
    2. # The SQLAlchemy connection string to use to connect to the
    3. # database. (string value)
    4. connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic

    3. Configure the ironic-conductor service to use the RabbitMQ message broker via the following option. ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

    1. [DEFAULT]
    2. # A URL representing the messaging driver to use and its full
    3. # configuration. (string value)
    4. transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/

    You may also use json-rpc instead of RabbitMQ.

    4. Configure credentials for accessing other OpenStack services

    To communicate with other OpenStack services, the bare metal service needs to authenticate with the OpenStack Identity service using service users when making requests. The credentials of these users must be configured in each configuration section associated with the corresponding service:

    1. [neutron] - to access the OpenStack networking service
    2. [glance] - to access the OpenStack image service
    3. [swift] - to access the OpenStack object storage service
    4. [cinder] - to access the OpenStack block storage service
    5. [inspector] - to access the OpenStack bare metal introspection service
    6. [service_catalog] - a special entry holding the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

    For simplicity, the same service user can be used for all services. For backward compatibility, this should be the same user configured in the [keystone_authtoken] section of the ironic-api service. However, this is not mandatory; a different service user can also be created and configured for each service.

    In the example below, the authentication information for accessing the OpenStack networking service is configured as follows:

    1. the networking service is deployed in the identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog
    2. a specific CA SSL certificate is used for HTTPS connections when making requests
    3. the same service user as configured for the ironic-api service
    4. the dynamic password authentication plugin discovers a suitable identity service API version based on the other options
    1. [neutron]
    2. # Authentication type to load (string value)
    3. auth_type = password
    4. # Authentication URL (string value)
    5. auth_url=https://IDENTITY_IP:5000/
    6. # Username (string value)
    7. username=ironic
    8. # User's password (string value)
    9. password=IRONIC_PASSWORD
    10. # Project name to scope to (string value)
    11. project_name=service
    12. # Domain ID containing project (string value)
    13. project_domain_id=default
    14. # User's domain id (string value)
    15. user_domain_id=default
    16. # PEM encoded Certificate Authority to use when verifying
    17. # HTTPs connections. (string value)
    18. cafile=/opt/stack/data/ca-bundle.pem
    19. # The default region_name for endpoint URL discovery. (string
    20. # value)
    21. region_name = RegionOne
    22. # List of interfaces, in order of preference, for endpoint
    23. # URL. (list value)
    24. valid_interfaces=public

    By default, to communicate with other services the bare metal service attempts to discover a suitable endpoint for each service through the identity service catalog. To use a different endpoint for a particular service, specify it via the endpoint_override option in the bare metal service configuration file:

    1. [neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>

    5. Configure the enabled drivers and hardware types

    Set the hardware types allowed by the ironic-conductor service via enabled_hardware_types:

    1. [DEFAULT]
    2. enabled_hardware_types = ipmi

    Configure the hardware interfaces:

    1. enabled_boot_interfaces = pxe
    2. enabled_deploy_interfaces = direct,iscsi
    3. enabled_inspect_interfaces = inspector
    4. enabled_management_interfaces = ipmitool
    5. enabled_power_interfaces = ipmitool

    Configure the interface defaults:

    1. [DEFAULT]
    2. default_deploy_interface = direct
    3. default_network_interface = neutron

    If any driver using direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an image service backend.

    6. Restart the ironic-conductor service:

    1. sudo systemctl restart openstack-ironic-conductor
  5. Configure the ironic-inspector service

    Configuration file path: /etc/ironic-inspector/inspector.conf

    1. Create the database:

    1. # mysql -u root -p
    2. MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
    3. MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
    4. IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
    5. MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
    6. IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';

    2. Configure the location of the database via the connection option, as shown below. Replace IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the DB server:

    1. [database]
    2. backend = sqlalchemy
    3. connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
    4. min_pool_size = 100
    5. max_pool_size = 500
    6. pool_timeout = 30
    7. max_retries = 5
    8. max_overflow = 200
    9. db_retry_interval = 2
    10. db_inc_retry_interval = True
    11. db_max_retry_interval = 2
    12. db_max_retries = 5

    3. Configure the message queue transport URL:

    1. [DEFAULT]
    2. transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/

    4. Set up keystone authentication:

    1. [DEFAULT]
    2. auth_strategy = keystone
    3. timeout = 900
    4. rootwrap_config = /etc/ironic-inspector/rootwrap.conf
    5. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
    6. log_dir = /var/log/ironic-inspector
    7. state_path = /var/lib/ironic-inspector
    8. use_stderr = False
    9. [ironic]
    10. api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
    11. auth_type = password
    12. auth_url = http://PUBLIC_IDENTITY_IP:5000
    13. auth_strategy = keystone
    14. ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
    15. os_region = RegionOne
    16. project_name = service
    17. project_domain_name = Default
    18. user_domain_name = Default
    19. username = IRONIC_SERVICE_USER_NAME
    20. password = IRONIC_SERVICE_USER_PASSWORD
    21. [keystone_authtoken]
    22. auth_type = password
    23. auth_url = http://control:5000
    24. www_authenticate_uri = http://control:5000
    25. project_domain_name = default
    26. user_domain_name = default
    27. project_name = service
    28. username = ironic_inspector
    29. password = IRONICPASSWD
    30. region_name = RegionOne
    31. memcache_servers = control:11211
    32. token_cache_time = 300
    33. [processing]
    34. add_ports = active
    35. processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
    36. ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
    37. always_store_ramdisk_logs = true
    38. store_data = none
    39. power_off = false
    40. [pxe_filter]
    41. driver = iptables
    42. [capabilities]
    43. boot_mode=True

    5. Configure the ironic-inspector dnsmasq service:

    1. # Configuration file path: /etc/ironic-inspector/dnsmasq.conf
    2. port=0
    3. interface=enp3s0 # replace with the actual listening network interface
    4. dhcp-range=172.20.19.100,172.20.19.110 # replace with the actual DHCP address range
    5. bind-interfaces
    6. enable-tftp
    7. dhcp-match=set:efi,option:client-arch,7
    8. dhcp-match=set:efi,option:client-arch,9
    9. dhcp-match=aarch64, option:client-arch,11
    10. dhcp-boot=tag:aarch64,grubaa64.efi
    11. dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
    12. dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
    13. tftp-root=/tftpboot # replace with the actual tftpboot directory
    14. log-facility=/var/log/dnsmasq.log

    6. Disable DHCP on the subnet of the ironic provisioning network:

    1. openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c

    7. Initialize the ironic-inspector service database.

    On the controller node, run:

    1. ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade

    8. Start the services:

    1. systemctl enable --now openstack-ironic-inspector.service
    2. systemctl enable --now openstack-ironic-inspector-dnsmasq.service
  6. Configure the httpd service

    1. Create the httpd root directory for ironic and set its owner and group. The path must match the http_root option configured in the [deploy] section of /etc/ironic/ironic.conf.

      1. mkdir -p /var/lib/ironic/httproot
      2. chown ironic.ironic /var/lib/ironic/httproot
    2. Install and configure the httpd service

      1. Install the httpd service (skip if already installed):

        1. yum install httpd -y
      2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

        1. Listen 8080
        2. <VirtualHost *:8080>
        3. ServerName ironic.openeuler.com
        4. ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
        5. CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
        6. DocumentRoot "/var/lib/ironic/httproot"
        7. <Directory "/var/lib/ironic/httproot">
        8. Options Indexes FollowSymLinks
        9. Require all granted
        10. </Directory>
        11. LogLevel warn
        12. AddDefaultCharset UTF-8
        13. EnableSendfile on
        14. </VirtualHost>

      Note that the listening port must match the port specified in the http_url option of the [deploy] section in /etc/ironic/ironic.conf.

      3. Restart the httpd service:

        1. systemctl restart httpd
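
The port-consistency requirement can be checked mechanically. The following is a sketch comparing httpd's Listen port with the port embedded in the [deploy] http_url option; the sample files here are illustrative — point the function at the real /etc/httpd/conf.d/openstack-ironic-httpd.conf and /etc/ironic/ironic.conf:

```shell
# Sketch: verify httpd's Listen port matches the port in ironic.conf's
# [deploy] http_url option. The sample files below are illustrative.
ports_match() {    # ports_match HTTPD_CONF IRONIC_CONF
    listen=$(awk '$1 == "Listen" { print $2; exit }' "$1")
    url=$(awk -F' *= *' '$1 == "http_url" { print $2; exit }' "$2")
    port=${url##*:}          # strip scheme and host, keep the port...
    port=${port%%/*}         # ...and drop any trailing path
    [ -n "$listen" ] && [ "$listen" = "$port" ]
}

httpd_conf=$(mktemp); ironic_conf=$(mktemp)
printf 'Listen 8080\n' > "$httpd_conf"
printf '[deploy]\nhttp_url = http://controller:8080\n' > "$ironic_conf"
ports_match "$httpd_conf" "$ironic_conf" && echo "ports match"
```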
  7. Build the deploy ramdisk image

    The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Other tools may also be used. To use the native Wallaby tools, install the corresponding package:

    1. yum install openstack-ironic-python-agent
    2. # or
    3. yum install diskimage-builder

    See the official documentation for detailed usage.

    The following walks through the complete process of building the deploy image for ironic with ironic-python-agent-builder.

    1. Install ironic-python-agent-builder

      1. Install the tool:

        1. pip install ironic-python-agent-builder
      2. Modify the python interpreter in the following files:

        1. /usr/bin/yum /usr/libexec/urlgrabber-ext-down
      3. Install the other required tools:

        1. yum install git

        DIB depends on the semanage command, so before building the image check whether it is available with semanage --help. If the command is missing, install the package that provides it:

        1. # First find out which package provides it
        2. [root@localhost ~]# yum provides /usr/sbin/semanage
        3. Loaded plugins: fastestmirror
        4. Loading mirror speeds from cached hostfile
        5. * base: mirror.vcu.edu
        6. * extras: mirror.vcu.edu
        7. * updates: mirror.math.princeton.edu
        8. policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
        9. base
        10. Matched from:
        11. Filename : /usr/sbin/semanage
        12. # Install it
        13. [root@localhost ~]# yum install policycoreutils-python
    2. Build the image

      On the arm architecture, additionally set:

      1. export ARCH=aarch64

      Basic usage:

      1. usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
      2. [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
      3. distribution
      4. positional arguments:
      5. distribution Distribution to use
      6. optional arguments:
      7. -h, --help show this help message and exit
      8. -r RELEASE, --release RELEASE
      9. Distribution release to use
      10. -o OUTPUT, --output OUTPUT
      11. Output base file name
      12. -e ELEMENT, --element ELEMENT
      13. Additional DIB element to use
      14. -b BRANCH, --branch BRANCH
      15. If set, override the branch that is used for ironic-
      16. python-agent and requirements
      17. -v, --verbose Enable verbose logging in diskimage-builder
      18. --extra-args EXTRA_ARGS
      19. Extra arguments to pass to diskimage-builder

      Example:

      1. ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    3. Allow SSH login

      Initialize the environment variables, then build the image:

      1. export DIB_DEV_USER_USERNAME=ipa \
      2. export DIB_DEV_USER_PWDLESS_SUDO=yes \
      3. export DIB_DEV_USER_PASSWORD='123'
      4. ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    4. Specify the code repository

      Initialize the corresponding environment variables, then build the image:

      1. # Specify the repository location and version
      2. DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
      3. DIB_REPOREF_ironic_python_agent=origin/develop
      4. # Clone the code directly from gerrit
      5. DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
      6. DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1

      Reference: source-repositories

      Specifying the repository location and version has been verified to work.

    5. Notes

The pxe configuration file template in native openstack does not support the arm64 architecture, so the native openstack code has to be modified:

In Wallaby, the community ironic still does not support arm64 UEFI PXE boot: the generated grub.cfg file (usually under /tftpboot/) has the wrong format, which makes PXE boot fail.

(The original document shows a screenshot of the incorrectly generated configuration file here.) On the arm architecture, the commands that locate the vmlinux and ramdisk images are linux and initrd respectively, whereas the commands highlighted in that screenshot are the x86 UEFI PXE boot variants.

Users need to modify the code that generates grub.cfg themselves.

TLS errors when ironic sends command status query requests to IPA:

In Wallaby, IPA and ironic both enable TLS verification by default when sending requests to each other. Disable it as described in the official documentation:

  1. Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following configuration:

  1. [agent]
  2. verify_ca = False
  3. [pxe]
  4. pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1

  2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory must be created first) with the following TLS configuration:

  1. [DEFAULT]
  2. enable_auto_tls = False

  Set permissions:

  1. chown -R ipa.ipa /etc/ironic_python_agent/
  3. Modify the service file of the IPA service to add the configuration file option:

    vim /usr/lib/systemd/system/ironic-python-agent.service

    1. [Unit]
    2. Description=Ironic Python Agent
    3. After=network-online.target
    4. [Service]
    5. ExecStartPre=/sbin/modprobe vfat
    6. ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    7. Restart=always
    8. RestartSec=30s
    9. [Install]
    10. WantedBy=multi-user.target

Kolla Installation

Kolla provides production-ready containerized deployment of OpenStack services. The Kolla and Kolla-ansible services were introduced in openEuler 21.09.

Installing Kolla is very simple: just install the corresponding RPM packages:

  1. yum install openstack-kolla openstack-kolla-ansible

After installation, commands such as kolla-ansible, kolla-build, kolla-genpwd, and kolla-mergepwd are available.

Trove Installation

Trove is the OpenStack database service. It is recommended if you want the database service provided by OpenStack; otherwise it does not need to be installed.

  1. Set up the database

    The database service stores its information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

    1. mysql -u root -p
    2. MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
    3. MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
    4. IDENTIFIED BY 'TROVE_DBPASSWORD';
    5. MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
    6. IDENTIFIED BY 'TROVE_DBPASSWORD';
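
The statements above can also be fed to mysql non-interactively, which is convenient for scripting. A sketch (the SQL is only printed here; in real use pipe it to `mysql -u root -p`, and replace TROVE_DBPASSWORD first):

```shell
# Sketch: the same trove database setup as a non-interactive here-doc.
# In real use:  trove_db_sql | mysql -u root -p
trove_db_sql() {
    cat << 'EOF'
CREATE DATABASE trove CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASSWORD';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASSWORD';
EOF
}
trove_db_sql
```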
  2. Create the service user and credentials

    1. Create the Trove service user:

    1. openstack user create --password TROVE_PASSWORD \
    2. --email trove@example.com trove
    3. openstack role add --project service --user trove admin
    4. openstack service create --name trove \
    5. --description "Database service" database

    Note: replace TROVE_PASSWORD with the password of the trove user.

    2. Create the Database service endpoints:

    1. openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
    2. openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
    3. openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
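
The %\(tenant_id\)s fragment in these URLs is a template that keystone expands per project when handing the endpoint to a client. As a small illustration (the project ID below is made up):

```shell
# Sketch: how the %(tenant_id)s endpoint template expands for a project.
url='http://controller:8779/v1.0/%(tenant_id)s'
tenant='1f2e3d4c5b6a'        # illustrative project ID
expanded=$(printf '%s\n' "$url" | sed "s/%(tenant_id)s/$tenant/")
echo "$expanded"             # prints: http://controller:8779/v1.0/1f2e3d4c5b6a
```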
  3. Install and configure the Trove components

    1. Install Trove:

      1. yum install openstack-trove python-troveclient
    2. Configure trove.conf:

      1. vim /etc/trove/trove.conf
      2. [DEFAULT]
      3. bind_host = TROVE_NODE_IP
      4. log_dir = /var/log/trove
      5. nova_keypair = trove-mgmt
      6. default_datastore = mysql
      7. taskmanager_manager = trove.taskmanager.manager.Manager
      8. trove_api_workers = 5
      9. transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      10. reboot_time_out = 300
      11. usage_timeout = 900
      12. agent_call_high_timeout = 1200
      13. use_syslog = False
      14. debug = True
      15. # Set these if using Neutron Networking
      16. network_driver = trove.network.neutron.NeutronDriver
      17. network_label_regex = .*
      18. management_security_groups =
      19. [database]
      20. connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
      21. [keystone_authtoken]
      22. project_domain_name = Default
      23. project_name = service
      24. user_domain_name = Default
      25. password = trove
      26. username = trove
      27. auth_url = http://controller:5000/v3/
      28. auth_type = password
      29. [service_credentials]
      30. auth_url = http://controller:5000/v3/
      31. region_name = RegionOne
      32. project_name = service
      33. password = trove
      34. project_domain_name = Default
      35. user_domain_name = Default
      36. username = trove
      37. [mariadb]
      38. tcp_ports = 3306,4444,4567,4568
      39. [mysql]
      40. tcp_ports = 3306
      41. [postgresql]
      42. tcp_ports = 5432

    Notes:

    - bind_host in the [DEFAULT] section is the IP of the node where Trove is deployed
    - nova_compute_url and cinder_url are the endpoints created by Nova and Cinder in Keystone
    - nova_proxy_XXX is the information of a user that can access the Nova service; the example uses the admin user
    - transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password
    - connection in the [database] section is the database created for Trove in mysql earlier
    - in the Trove user information, replace TROVE_PASS with the actual password of the trove user
    3. Configure trove-guestagent.conf:

      1. vim /etc/trove/trove-guestagent.conf
      2. [DEFAULT]
      3. log_file = trove-guestagent.log
      4. log_dir = /var/log/trove/
      5. ignore_users = os_admin
      6. control_exchange = trove
      7. transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      8. rpc_backend = rabbit
      9. command_process_timeout = 60
      10. use_syslog = False
      11. debug = True
      12. [service_credentials]
      13. auth_url = http://controller:5000/v3/
      14. region_name = RegionOne
      15. project_name = service
      16. password = TROVE_PASS
      17. project_domain_name = Default
      18. user_domain_name = Default
      19. username = trove
      20. [mysql]
      21. docker_image = your-registry/your-repo/mysql
      22. backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

    Note: the guestagent is an independent Trove component that must be built into the VM image Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured. Starting with the Victoria release, Trove uses a single image to run different types of databases; the database service runs in a Docker container inside the guest VM.

    - transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password
    - in the Trove user information, replace TROVE_PASS with the actual password of the trove user

    4. Generate the Trove database tables:

      1. su -s /bin/sh -c "trove-manage db_sync" trove
    5. Complete the installation: configure the Trove services to start at boot, then start them:

      1. systemctl enable openstack-trove-api.service \
      2. openstack-trove-taskmanager.service \
      3. openstack-trove-conductor.service
      4. systemctl start openstack-trove-api.service \
      5. openstack-trove-taskmanager.service \
      6. openstack-trove-conductor.service

Swift Installation

Swift provides elastic, scalable, and highly available distributed object storage, suitable for storing large-scale unstructured data.

  1. Create the service credentials and API endpoints.

    Create the service credentials:

    1. # Create the swift user:
    2. openstack user create --domain default --password-prompt swift
    3. # Add the admin role to the swift user:
    4. openstack role add --project service --user swift admin
    5. # Create the swift service entity:
    6. openstack service create --name swift --description "OpenStack Object Storage" object-store

    Create the swift API endpoints:

    1. openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
    2. openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
    3. openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
  2. Install the packages:

    1. yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
  3. Configure the proxy-server

    The Swift RPM package already contains a mostly usable proxy-server.conf; only the ip and swift password need to be modified manually.

    Note

    Replace password with the password you chose for the swift user in the identity service.

  4. Install and configure the storage node (STG)

    Install the supporting packages:

    1. yum install xfsprogs rsync

    Format the /dev/vdb and /dev/vdc devices as XFS:

    1. mkfs.xfs /dev/vdb
    2. mkfs.xfs /dev/vdc

    Create the mount point directory structure:

    1. mkdir -p /srv/node/vdb
    2. mkdir -p /srv/node/vdc

    Find the UUIDs of the new partitions:

    1. blkid

    Edit the /etc/fstab file and add the following to it:

    1. UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    2. UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
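
The fstab lines can be generated from the `blkid` output instead of pasting UUIDs by hand. A sketch — the blkid line below is illustrative (run `blkid /dev/vdb` for the real one):

```shell
# Sketch: turn a blkid-style output line into an /etc/fstab line.
# fstab_line MOUNTPOINT reads one blkid line on stdin.
fstab_line() {
    awk -v mp="$1" '{
        for (i = 1; i <= NF; i++)
            if ($i ~ /^UUID=/) { gsub(/"/, "", $i)
                printf "%s %s xfs noatime 0 2\n", $i, mp }
    }'
}

# Illustrative blkid output; the UUID is made up.
echo '/dev/vdb: UUID="a1b2c3d4-e5f6-7a8b-9c0d-e1f2a3b4c5d6" TYPE="xfs"' |
    fstab_line /srv/node/vdb
```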

    Mount the devices:

    1. mount /srv/node/vdb
    2. mount /srv/node/vdc

    Note

    If you do not need disaster recovery, the steps above only need to create one device, and the rsync configuration below can be skipped.

    (Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

    1. uid = swift
    2. gid = swift
    3. log file = /var/log/rsyncd.log
    4. pid file = /var/run/rsyncd.pid
    5. address = MANAGEMENT_INTERFACE_IP_ADDRESS
    6. [account]
    7. max connections = 2
    8. path = /srv/node/
    9. read only = False
    10. lock file = /var/lock/account.lock
    11. [container]
    12. max connections = 2
    13. path = /srv/node/
    14. read only = False
    15. lock file = /var/lock/container.lock
    16. [object]
    17. max connections = 2
    18. path = /srv/node/
    19. read only = False
    20. lock file = /var/lock/object.lock

    Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

    Start the rsyncd service and configure it to start when the system boots:

    1. systemctl enable rsyncd.service
    2. systemctl start rsyncd.service
  5. Install and configure the components on the storage nodes (STG)

    Install the packages:

    1. yum install openstack-swift-account openstack-swift-container openstack-swift-object

    Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

    Ensure proper ownership of the mount point directory structure:

    1. chown -R swift:swift /srv/node

    Create the recon directory and ensure proper ownership of it:

    1. mkdir -p /var/cache/swift
    2. chown -R root:swift /var/cache/swift
    3. chmod -R 775 /var/cache/swift
  6. Create the account ring (CTL)

    Change to the /etc/swift directory:

    1. cd /etc/swift

    Create the base account.builder file:

    1. swift-ring-builder account.builder create 10 1 1

    Add each storage node to the ring:

    1. swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT

    Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

    Note: repeat this command for every storage device on every storage node.
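
Since the add command must be repeated per device per node, a loop helps. The sketch below only previews the commands (the node IPs and device names are illustrative placeholders); drop the echo to actually execute them:

```shell
# Sketch: add every device on every storage node to the account ring.
# Node IPs and device names below are illustrative placeholders.
for node in 10.0.0.51 10.0.0.52; do
    for dev in vdb vdc; do
        echo swift-ring-builder account.builder add \
            --region 1 --zone 1 --ip "$node" --port 6202 \
            --device "$dev" --weight 100
    done
done
```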

    Verify the ring contents:

    1. swift-ring-builder account.builder

    Rebalance the ring:

    1. swift-ring-builder account.builder rebalance
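
The three arguments of `create 10 1 1` are the partition power, the replica count, and min_part_hours: a partition power of 10 means the ring has 2^10 partitions, each stored once, and a partition may only be moved again after 1 hour. The partition count follows directly:

```shell
# The `create 10 1 1` arguments: partition power, replicas, min_part_hours.
part_power=10
partitions=$((1 << part_power))   # 2^10
echo "$partitions"                # prints: 1024
```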
  7. Create the container ring (CTL)

    Change to the /etc/swift directory.

    Create the base container.builder file:

    1. swift-ring-builder container.builder create 10 1 1

    Add each storage node to the ring:

    1. swift-ring-builder container.builder \
    2. add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    3. --device DEVICE_NAME --weight 100

    Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

    Note: repeat this command for every storage device on every storage node.

    Verify the ring contents:

    1. swift-ring-builder container.builder

    Rebalance the ring:

    1. swift-ring-builder container.builder rebalance
  8. Create the object ring (CTL)

    Change to the /etc/swift directory.

    Create the base object.builder file:

    1. swift-ring-builder object.builder create 10 1 1

    Add each storage node to the ring:

    1. swift-ring-builder object.builder \
    2. add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    3. --device DEVICE_NAME --weight 100

    Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

    Note: repeat this command for every storage device on every storage node.

    Verify the ring contents:

    1. swift-ring-builder object.builder

    Rebalance the ring:

    1. swift-ring-builder object.builder rebalance

    Distribute the ring configuration files:

    Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

  9. Complete the installation

    Edit the /etc/swift/swift.conf file:

    1. [swift-hash]
    2. swift_hash_path_suffix = test-hash
    3. swift_hash_path_prefix = test-hash
    4. [storage-policy:0]
    5. name = Policy-0
    6. default = yes

    Replace test-hash with unique values.
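
Unique random values can be generated from /dev/urandom, for example with od. Any opaque secret works; these values must be kept secret and must never change once the cluster stores data:

```shell
# Sketch: generate random hex secrets for swift_hash_path_suffix/prefix.
rand_hex() {
    od -An -tx1 -N8 /dev/urandom | tr -d ' \n'   # 8 bytes -> 16 hex chars
}
suffix=$(rand_hex)
prefix=$(rand_hex)
echo "swift_hash_path_suffix = $suffix"
echo "swift_hash_path_prefix = $prefix"
```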

    Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

    On all nodes, ensure proper ownership of the configuration directory:

    1. chown -R root:swift /etc/swift

    On the controller node and any other node running the proxy service, start the object storage proxy service and its dependencies and configure them to start when the system boots:

    1. systemctl enable openstack-swift-proxy.service memcached.service
    2. systemctl start openstack-swift-proxy.service memcached.service

    On the storage nodes, start the object storage services and configure them to start when the system boots:

    1. systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    2. systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    3. systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    4. systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    5. systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    6. systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service