Tencent Cloud VPC Deployment

Deploy Pigsty with Tencent Cloud VPC virtual machines

This example deploys Pigsty on a Tencent Cloud VPC.

Resource Preparation

Provision Virtual Machines

Purchase a few virtual machines, as shown in the figure below: the node ending in .11 serves as the meta node and comes with a public IP, and the other three are database nodes, for which an ordinary 1-core / 1 GB instance is sufficient.

Tencent Cloud VPC Deployment - Figure 1

Configure SSH Remote Login

Suppose our admin user is named vonng (that's me!). The first step is to configure passwordless SSH access from the meta node to the other three nodes.

  # vonng@172.21.0.11 # meta
  # push the current user's public key to root on each database node
  ssh-copy-id root@172.21.0.3   # pg-test-1
  ssh-copy-id root@172.21.0.4   # pg-test-2
  ssh-copy-id root@172.21.0.16  # pg-test-3
  # copy the public key over, then create the vonng admin user on each node and install the key
  scp ~/.ssh/id_rsa.pub root@172.21.0.3:/tmp/
  scp ~/.ssh/id_rsa.pub root@172.21.0.4:/tmp/
  scp ~/.ssh/id_rsa.pub root@172.21.0.16:/tmp/
  ssh root@172.21.0.3 'useradd vonng; mkdir -m 700 -p /home/vonng/.ssh; mv /tmp/id_rsa.pub /home/vonng/.ssh/authorized_keys; chown -R vonng /home/vonng; chmod 0600 /home/vonng/.ssh/authorized_keys;'
  ssh root@172.21.0.4 'useradd vonng; mkdir -m 700 -p /home/vonng/.ssh; mv /tmp/id_rsa.pub /home/vonng/.ssh/authorized_keys; chown -R vonng /home/vonng; chmod 0600 /home/vonng/.ssh/authorized_keys;'
  ssh root@172.21.0.16 'useradd vonng; mkdir -m 700 -p /home/vonng/.ssh; mv /tmp/id_rsa.pub /home/vonng/.ssh/authorized_keys; chown -R vonng /home/vonng; chmod 0600 /home/vonng/.ssh/authorized_keys;'

Then grant this user passwordless sudo privileges:

  ssh root@172.21.0.3 "echo '%vonng ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vonng"
  ssh root@172.21.0.4 "echo '%vonng ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vonng"
  ssh root@172.21.0.16 "echo '%vonng ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vonng"
  # verify the configuration works
  ssh 172.21.0.3 'sudo ls'
  ssh 172.21.0.4 'sudo ls'
  ssh 172.21.0.16 'sudo ls'

Download the Project

  # clone the code from GitHub
  git clone https://github.com/Vonng/pigsty
  # if you cannot access GitHub, you can also download the code tarball from the Pigsty CDN
  curl http://pigsty-1304147732.cos.accelerate.myqcloud.com/latest/pigsty.tar.gz -o pigsty.tgz && tar -xf pigsty.tgz && cd pigsty

Download the Offline Installation Package

  # download from the GitHub Release page
  # https://github.com/Vonng/pigsty
  # if you cannot access GitHub, you can also download the offline package from the Pigsty CDN
  curl http://pigsty-1304147732.cos.accelerate.myqcloud.com/latest/pkg.tgz -o files/pkg.tgz
  # extract the offline package to the designated location on the meta node (may require sudo)
  mv -f /www/pigsty /www/pigsty-backup && mkdir -p /www/pigsty
  tar -xf files/pkg.tgz --strip-components=1 -C /www/pigsty/

Adjust the Configuration

We can start from the Pigsty sandbox configuration file and adjust it. Since these are all ordinary low-spec virtual machines, no substantial configuration changes are required; only the connection parameters and node information need to be modified. Simply put, all you have to change are the IP addresses!

Now replace every sandbox IP address with the actual IP address in the cloud environment. (If an L2 VIP is used, the VIP must also be replaced with a valid address.)

| Description     | Sandbox IP  | VM IP       |
|-----------------|-------------|-------------|
| Meta node       | 10.10.10.10 | 172.21.0.11 |
| Database node 1 | 10.10.10.11 | 172.21.0.3  |
| Database node 2 | 10.10.10.12 | 172.21.0.4  |
| Database node 3 | 10.10.10.13 | 172.21.0.16 |
| pg-meta VIP     | 10.10.10.2  | 172.21.0.8  |
| pg-test VIP     | 10.10.10.3  | 172.21.0.9  |

Edit the configuration file pigsty.yml. If your virtual machines all have roughly the same specs, you usually only need to change the IP addresses. Note in particular that the sandbox connects to nodes via SSH aliases (such as meta, node-1); remember to remove all ansible_host entries, since we will connect to the target nodes directly by IP address.

  cat pigsty.yml | \
    sed 's/10.10.10.10/172.21.0.11/g' |\
    sed 's/10.10.10.11/172.21.0.3/g' |\
    sed 's/10.10.10.12/172.21.0.4/g' |\
    sed 's/10.10.10.13/172.21.0.16/g' |\
    sed 's/10.10.10.2/172.21.0.8/g' |\
    sed 's/10.10.10.3/172.21.0.9/g' |\
    sed 's/, ansible_host: meta//g' |\
    sed 's/ansible_host: meta//g' |\
    sed 's/, ansible_host: node-[123]//g' |\
    sed 's/vip_interface: eth1/vip_interface: eth0/g' |\
    sed 's/vip_cidrmask: 8/vip_cidrmask: 24/g' > pigsty2.yml
  mv pigsty.yml pigsty-backup.yml; mv pigsty2.yml pigsty.yml

That's it?

Yes, the configuration file changes are done! Let's take a look at what actually changed:

  $ diff pigsty.yml pigsty-backup.yml
  38c38
  < hosts: {172.21.0.11: {}}
  ---
  > hosts: {10.10.10.10: {ansible_host: meta}}
  46c46
  < 172.21.0.11: {pg_seq: 1, pg_role: primary}
  ---
  > 10.10.10.10: {pg_seq: 1, pg_role: primary, ansible_host: meta}
  109,111c109,111
  < vip_address: 172.21.0.8 # virtual ip address
  < vip_cidrmask: 24 # cidr network mask length
  < vip_interface: eth0 # interface to add virtual ip
  ---
  > vip_address: 10.10.10.2 # virtual ip address
  > vip_cidrmask: 8 # cidr network mask length
  > vip_interface: eth1 # interface to add virtual ip
  120,122c120,122
  < 172.21.0.3: {pg_seq: 1, pg_role: primary}
  < 172.21.0.4: {pg_seq: 2, pg_role: replica}
  < 172.21.0.16: {pg_seq: 3, pg_role: offline}
  ---
  > 10.10.10.11: {pg_seq: 1, pg_role: primary, ansible_host: node-1}
  > 10.10.10.12: {pg_seq: 2, pg_role: replica, ansible_host: node-2}
  > 10.10.10.13: {pg_seq: 3, pg_role: offline, ansible_host: node-3}
  147,149c147,149
  < vip_address: 172.21.0.9 # virtual ip address
  < vip_cidrmask: 24 # cidr network mask length
  < vip_interface: eth0 # interface to add virtual ip
  ---
  > vip_address: 10.10.10.3 # virtual ip address
  > vip_cidrmask: 8 # cidr network mask length
  > vip_interface: eth1 # interface to add virtual ip
  326c326
  < - 172.21.0.11 yum.pigsty
  ---
  > - 10.10.10.10 yum.pigsty
  329c329
  < - 172.21.0.11
  ---
  > - 10.10.10.10
  393c393
  < - server 172.21.0.11 iburst
  ---
  > - server 10.10.10.10 iburst
  417,430c417,430
  < - 172.21.0.8 pg-meta # sandbox vip for pg-meta
  < - 172.21.0.9 pg-test # sandbox vip for pg-test
  < - 172.21.0.11 meta-1 # sandbox node meta-1 (node-0)
  < - 172.21.0.3 node-1 # sandbox node node-1
  < - 172.21.0.4 node-2 # sandbox node node-2
  < - 172.21.0.16 node-3 # sandbox node node-3
  < - 172.21.0.11 pigsty
  < - 172.21.0.11 y.pigsty yum.pigsty
  < - 172.21.0.11 c.pigsty consul.pigsty
  < - 172.21.0.11 g.pigsty grafana.pigsty
  < - 172.21.0.11 p.pigsty prometheus.pigsty
  < - 172.21.0.11 a.pigsty alertmanager.pigsty
  < - 172.21.0.11 n.pigsty ntp.pigsty
  < - 172.21.0.11 h.pigsty haproxy.pigsty
  ---
  > - 10.10.10.2 pg-meta # sandbox vip for pg-meta
  > - 10.10.10.3 pg-test # sandbox vip for pg-test
  > - 10.10.10.10 meta-1 # sandbox node meta-1 (node-0)
  > - 10.10.10.11 node-1 # sandbox node node-1
  > - 10.10.10.12 node-2 # sandbox node node-2
  > - 10.10.10.13 node-3 # sandbox node node-3
  > - 10.10.10.10 pigsty
  > - 10.10.10.10 y.pigsty yum.pigsty
  > - 10.10.10.10 c.pigsty consul.pigsty
  > - 10.10.10.10 g.pigsty grafana.pigsty
  > - 10.10.10.10 p.pigsty prometheus.pigsty
  > - 10.10.10.10 a.pigsty alertmanager.pigsty
  > - 10.10.10.10 n.pigsty ntp.pigsty
  > - 10.10.10.10 h.pigsty haproxy.pigsty
  442c442
  < grafana_url: http://admin:admin@172.21.0.11:3000 # grafana url
  ---
  > grafana_url: http://admin:admin@10.10.10.10:3000 # grafana url
  478,480c478,480
  < meta-1: 172.21.0.11 # you could use existing dcs cluster
  < # meta-2: 172.21.0.3 # host which have their IP listed here will be init as server
  < # meta-3: 172.21.0.4 # 3 or 5 dcs nodes are recommend for production environment
  ---
  > meta-1: 10.10.10.10 # you could use existing dcs cluster
  > # meta-2: 10.10.10.11 # host which have their IP listed here will be init as server
  > # meta-3: 10.10.10.12 # 3 or 5 dcs nodes are recommend for production environment
  692c692
  < - host all all 172.21.0.11/32 md5
  ---
  > - host all all 10.10.10.10/32 md5

Run the Playbooks

You can use the same sandbox initialization procedure to complete the initialization of the infrastructure and the database clusters.

Apart from the IP addresses, the output is no different from the sandbox run. See the reference output.
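As a rough sketch, assuming the infra.yml and pgsql.yml playbooks shipped in the Pigsty repository are the entry points for your version (check the playbook names in your release if they differ), the run from the pigsty directory on the meta node would look roughly like this:

  # run as the admin user (vonng) from the pigsty directory on the meta node
  ./infra.yml               # initialize infrastructure on the meta node (172.21.0.11)
  ./pgsql.yml -l pg-test    # initialize the pg-test database cluster on the three database nodes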

Access the Demo

You can now access the services on the meta node via its public IP! Please take appropriate information security precautions.

Unlike the sandbox environment, if you want to reach the Pigsty admin interfaces from the public Internet, you must either add the defined domain names to your own /etc/hosts file or use domain names you have actually registered.

Otherwise you can only access the services directly by IP and port, for example: http://<meta_node_public_ip>:3000

The domain names Nginx listens on can be configured via the nginx_upstream option.

  nginx_upstream:
    - { name: home,         host: pigsty.cc,   url: "127.0.0.1:3000" }
    - { name: consul,       host: c.pigsty.cc, url: "127.0.0.1:8500" }
    - { name: grafana,      host: g.pigsty.cc, url: "127.0.0.1:3000" }
    - { name: prometheus,   host: p.pigsty.cc, url: "127.0.0.1:9090" }
    - { name: alertmanager, host: a.pigsty.cc, url: "127.0.0.1:9093" }
    - { name: haproxy,      host: h.pigsty.cc, url: "127.0.0.1:9091" }
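
With the default upstream definitions above, a minimal /etc/hosts sketch for the machine you browse from might look like the following; this is a hypothetical example where <meta_node_public_ip> is a placeholder for your meta node's public IP, and the host names must match your actual nginx_upstream settings:

  # /etc/hosts on your local workstation (example only; adjust to your environment)
  <meta_node_public_ip>  pigsty.cc c.pigsty.cc g.pigsty.cc p.pigsty.cc a.pigsty.cc h.pigsty.cc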
