NODES Playbook

Use the NODES playbook to bring more nodes to Pigsty, adjusting nodes to the state described in the config.

Once Pigsty is installed on the meta node with infra.yml, you can add more nodes to Pigsty with nodes.yml, or remove them from Pigsty with nodes-remove.yml.

| Playbook | Function | Link |
|----------|----------|------|
| `nodes` | Node provisioning: register nodes into Pigsty and prepare them for database deployment | src |
| `nodes-remove` | Node removal: uninstall DCS, monitoring & logging, and deregister nodes from Pigsty | src |

nodes

The nodes.yml playbook will register nodes to Pigsty.

This playbook adjusts the target nodes to the state described in the inventory, installs the Consul service, and incorporates it into the Pigsty monitoring system. Nodes can be used for database deployment once provisioning is complete.

The behavior of this playbook is determined by the Config: NODES. The complete execution of this playbook may take 1 to 3 minutes when using the local yum repo, depending on the machine spec.
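The "state described in the inventory" is just an Ansible host group plus variables. As a rough point of reference, a minimal group for such nodes might look like the sketch below (host IPs and values are illustrative only; the real parameter list is defined in Config: NODES):

```yaml
# Illustrative inventory snippet -- IPs and cluster name are examples, not defaults
pg-test:
  hosts:
    10.10.10.11: {}
    10.10.10.12: {}
  vars:
    node_cluster: pg-test   # node identity parameter (see Config: NODES)
```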

```bash
./nodes.yml                    # init all nodes in inventory (danger!)
./nodes.yml -l pg-test         # init nodes under group pg-test (recommended!)
./nodes.yml -l pg-meta,pg-test # init nodes in both clusters: pg-meta and pg-test
./nodes.yml -l 10.10.10.11     # init node with IP address 10.10.10.11
```

This playbook will run the following tasks:

  • Generate node identity parameters
  • Provisioning Node
    • Configure the node’s hostname
    • Configure static DNS records
    • Configure dynamic DNS resolver
    • Configure yum repo
    • Install specified RPM packages
    • Configure features such as NUMA/SWAP/firewall
    • Configure node tuned tuning templates
    • Configure shortcuts and environment variables for the node
    • Create node admin user and configure its SSH access
    • Configure timezone
    • Configure NTP service
  • Initialize the DCS service on the node: Consul
    • Erase existing Consul if it exists (with protection disabled)
    • Initialize the Consul Agent or Server service for the current node
  • Initialize the node monitoring component and incorporate Pigsty
    • Install Node Exporter
    • Register Node Exporter to Prometheus on meta nodes.

Be careful when running this playbook on provisioned nodes: it may render the database temporarily unavailable because the Consul service is removed.

The dcs_clean parameter, together with dcs_safeguard, provides a SafeGuard against accidental purges: when an existing Consul instance is detected during playbook execution, these parameters determine whether it is purged or the playbook aborts.

When using the complete nodes.yml playbook, or just its dcs|consul section, please double-check that `--tags|-t` and `--limit|-l` are correct. Make sure you are running the right tasks on the correct targets.
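One defensive habit that helps here is to print the fully-assembled command before executing it. The tiny wrapper below (a hypothetical helper, not part of Pigsty) makes the chosen limit and tags explicit for review:

```shell
#!/bin/sh
# Hypothetical helper: print the nodes.yml invocation so the --limit and
# --tags selections can be eyeballed before actually running the playbook.
nodes_cmd() {
  limit=$1
  tags=$2
  echo "./nodes.yml -l ${limit} -t ${tags}"
}

nodes_cmd pg-test consul    # prints: ./nodes.yml -l pg-test -t consul
```

Once the printed command matches your intent, run it by hand; the wrapper itself never executes anything.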

SafeGuard

Pigsty provides a SafeGuard to avoid purging running Consul instances with fat fingers. Two parameters control it:

  • dcs_safeguard: disabled by default; if enabled, a running Consul instance will not be purged under any circumstances.
  • dcs_clean: disabled by default; if enabled, nodes.yml will purge the running Consul instance during node init.

When a running Consul instance exists, nodes.yml will act as follows:

| dcs_safeguard \ dcs_clean | dcs_clean=true | dcs_clean=false |
|---------------------------|----------------|-----------------|
| dcs_safeguard=true        | ABORT          | ABORT           |
| dcs_safeguard=false       | PURGE          | ABORT           |

When a running Consul instance exists, nodes-remove.yml will act as follows:

| dcs_safeguard \ dcs_clean | dcs_clean=true | dcs_clean=false |
|---------------------------|----------------|-----------------|
| dcs_safeguard=true        | ABORT          | ABORT           |
| dcs_safeguard=false       | PURGE          | PURGE           |

Selective Execution

You can selectively execute a subset of this playbook through tags.

For example, if you want to re-deploy node monitor components only:

```bash
./nodes.yml --tags=node-monitor
```

Common tasks are listed below:

```bash
# play
./nodes.yml --tags=node-id         # generate & print node identity params
./nodes.yml --tags=node-init       # provision the node
./nodes.yml --tags=dcs-init        # init dcs on node
./nodes.yml --tags=node-monitor    # init monitor (metrics & logs) on node

# tasks
./nodes.yml --tags=node_name       # configure node's hostname
./nodes.yml --tags=node_dns        # configure node's static DNS records
./nodes.yml --tags=node_resolv     # configure dynamic DNS resolver
./nodes.yml --tags=node_repo       # configure yum repo
./nodes.yml --tags=node_pkgs       # install specified RPM packages
./nodes.yml --tags=node_feature    # configure NUMA/SWAP/firewall...
./nodes.yml --tags=node_tuned      # configure tuned tuning templates
./nodes.yml --tags=node_profile    # configure shortcuts & env variables
./nodes.yml --tags=node_admin      # create node admin user and configure SSH access
./nodes.yml --tags=node_timezone   # configure node time zone
./nodes.yml --tags=node_ntp        # configure NTP service
./nodes.yml --tags=docker          # configure dockerd daemon
./nodes.yml --tags=consul          # configure consul agent/server
./nodes.yml --tags=consul -e dcs_clean=true   # force consul re-init
./nodes.yml --tags=node_exporter   # configure node_exporter on the node and register it
./nodes.yml --tags=node_register   # register node monitoring to meta nodes
./nodes.yml --tags=node_deregister # deregister node monitoring from meta nodes
```

Admin User Provision

Admin user provisioning is a chicken-and-egg problem. To execute playbooks, you need to have an admin user. To create a dedicated admin user, you need to run this playbook.

Pigsty recommends leaving admin user provisioning to your vendor. It’s common to deliver the node with an admin user with ssh & sudo access.

Executing ssh & sudo may require a password. You can pass them via the extra params --ask-pass|-k and --ask-become-pass|-K, entering the SSH and sudo passwords when prompted. You can then create a dedicated admin user (with nopass sudo & ssh) using another admin user (with password sudo & ssh).

The following command creates the dedicated admin user via the node_admin subtask, using an existing admin user:

```bash
./nodes.yml -t node_admin -l <target_hosts> --ask-pass --ask-become-pass
```

The default admin user is dba (uid=88). Please do not use postgres or {{ dbsu }} as the admin user. Please try to avoid using root as the admin user directly.
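A pre-flight check in that spirit (a hypothetical helper, not part of Pigsty) could reject the discouraged user names before invoking the playbook:

```shell
#!/bin/sh
# Hypothetical pre-flight check: refuse admin user names the docs discourage.
check_admin_user() {
  case "$1" in
    postgres|root) echo "refuse: '$1' is discouraged as admin user"; return 1 ;;
    "")            echo "refuse: empty user name"; return 1 ;;
    *)             echo "ok: $1"; return 0 ;;
  esac
}

check_admin_user postgres  # prints: refuse: 'postgres' is discouraged as admin user
check_admin_user dba       # prints: ok: dba
```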

The default user vagrant in the local sandbox has been provisioned with nopass ssh & sudo. You can use vagrant to ssh to all other nodes from the sandbox meta node.

```bash
./nodes.yml --limit <target_hosts> --tags node_admin \
    -e ansible_user=<another_admin> --ask-pass --ask-become-pass
```

Refer to: Prepare: Admin User for more details.


nodes-remove

The nodes-remove.yml playbook is used to remove nodes from Pigsty.

This playbook should be executed on meta nodes, targeting the nodes that need to be removed.

```bash
./nodes-remove.yml                 # remove all nodes (dangerous!)
./nodes-remove.yml -l nodes-test   # remove nodes under group nodes-test
./nodes-remove.yml -l 10.10.10.11  # remove node 10.10.10.11
./nodes-remove.yml -l 10.10.10.10 -e rm_dcs_servers=true  # remove even if there's a DCS server
```

Task Subsets

```bash
# play
./nodes-remove.yml --tags=register        # remove node registration
./nodes-remove.yml --tags=node-exporter   # remove node metrics collector
./nodes-remove.yml --tags=promtail        # remove Promtail log agent
./nodes-remove.yml --tags=consul          # remove Consul agent service
./nodes-remove.yml --tags=docker          # remove Docker service
./nodes-remove.yml --tags=consul -e rm_dcs_servers=true   # remove Consul (including servers!)
```

Last modified 2022-06-04: fii en docs batch 2 (61bf601)