Node

Pigsty deploys onto nodes. A node can be a bare-metal server, a virtual machine, or even a Pod.

You can manage additional nodes with Pigsty and use them to deploy databases or your own applications.

The nodes managed by Pigsty are adjusted by the nodes.yml playbook to the state described in Config: NODES, and node monitoring and log collection components are installed on them, so you can check node status and logs from the monitoring system.
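To bring a group of nodes to that state, run the nodes.yml playbook against them. A minimal sketch, assuming a node cluster named node-test is already declared in the config inventory:

```bash
# Adjust all nodes in the node-test cluster to the desired state
# (a sketch; the cluster name is an assumption, substitute your own group or IP).
./nodes.yml -l node-test
```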

Node Identity

Each node has identity parameters, which are configured through <cluster>.hosts and <cluster>.vars.

There are two important node identity parameters: nodename and node_cluster, which will be used as the node’s instance identity (ins) and cluster identity (cls) in the monitoring system. nodename and node_cluster are NOT REQUIRED since both have sensible defaults: the node’s hostname and the constant nodes, respectively.

Besides, Pigsty also uses the IP address as a unique node identifier: the inventory_hostname, reflected as the key in the <cluster>.hosts object. A node may have multiple interfaces and IP addresses, but you must explicitly designate one as the PRIMARY IP ADDRESS, which should be an intranet IP used for service access. It is not mandatory to use that same IP address for SSH from the meta node; you can go through an SSH tunnel or jump server with Ansible connection parameters, as shown below.
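A minimal sketch of reaching a node through a jump server without changing its primary IP, using the standard Ansible connection variable ansible_ssh_common_args (the jump host and user are hypothetical placeholders):

```bash
# Ping node 10.10.10.11 through a jump server via SSH ProxyJump
# (jump.example.com and admin are placeholders, not Pigsty defaults).
ansible 10.10.10.11 -m ping \
  -e 'ansible_ssh_common_args="-o ProxyJump=admin@jump.example.com"'
```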

| Name               | Type   | Level | Attribute | Description       |
|--------------------|--------|-------|-----------|-------------------|
| inventory_hostname | ip     | -     | REQUIRED  | Node IP           |
| nodename           | string | I     | Optional  | Node Name         |
| node_cluster       | string | C     | Optional  | Node Cluster Name |

The following cluster configuration declares a three-node cluster.

```yaml
node-test:
  hosts:
    10.10.10.11: { nodename: node-test-1 }
    10.10.10.12: { pg_hostname: true }   # borrowed identity pg-test-2
    10.10.10.13: { }                     # use the original hostname: node-3
  vars:
    node_cluster: node-test
```
| host        | node_cluster | nodename    | instance  |
|-------------|--------------|-------------|-----------|
| 10.10.10.11 | node-test    | node-test-1 | pg-test-1 |
| 10.10.10.12 | node-test    | pg-test-2   | pg-test-2 |
| 10.10.10.13 | node-test    | node-3      | pg-test-3 |

In the monitoring system, the time-series monitoring data are labeled as follows:

```
node_load1{cls="pg-meta", ins="pg-meta-1", ip="10.10.10.10", job="nodes"}
node_load1{cls="pg-test", ins="pg-test-1", ip="10.10.10.11", job="nodes"}
node_load1{cls="pg-test", ins="pg-test-2", ip="10.10.10.12", job="nodes"}
node_load1{cls="pg-test", ins="pg-test-3", ip="10.10.10.13", job="nodes"}
```
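These cls and ins labels can be used directly in queries. A hedged sketch of aggregating node load per cluster through the Prometheus HTTP API (the Prometheus address is an assumption; substitute your monitoring node):

```bash
# Average 1-minute load per node cluster, grouped by the cls label.
curl -sG 'http://10.10.10.10:9090/api/v1/query' \
  --data-urlencode 'query=avg by (cls) (node_load1{job="nodes"})'
```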

Node Services

| Component     | Port | Description                                                |
|---------------|------|------------------------------------------------------------|
| Consul Agent  | 8500 | Distributed Configuration Management and Service Discovery |
| Node Exporter | 9100 | Node Monitoring Metrics Exporter                           |
| Promtail      | 9080 | Collection of Postgres, Pgbouncer, Patroni logs (Optional) |
| Consul DNS    | 8600 | DNS Service                                                |
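A quick, hedged way to verify these services on a managed node (the IP and node name are taken from the example above; adjust them to your inventory):

```bash
# Check node_exporter metrics and Consul DNS resolution on node 10.10.10.11.
curl -s http://10.10.10.11:9100/metrics | grep '^node_load1'
dig @10.10.10.11 -p 8600 node-test-1.node.consul +short
```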

PGSQL Node

A PGSQL Node is a node with a PGSQL module installed.

Pigsty uses an exclusive deployment policy for PGSQL, which means the node’s identity and the PGSQL instance’s identity are interchangeable. The pg_hostname parameter is designed to assign the Postgres identity to its underlying node: pg_instance and pg_cluster will be assigned to the node’s nodename and node_cluster.

In addition to the default node services, the following services are available on PGSQL nodes.

| Component          | Port | Description                                                |
|--------------------|------|------------------------------------------------------------|
| Postgres           | 5432 | Pigsty CMDB                                                |
| Pgbouncer          | 6432 | Pgbouncer Connection Pooling Service                       |
| Patroni            | 8008 | Patroni HA Component                                       |
| Consul             | 8500 | Distributed Configuration Management and Service Discovery |
| Haproxy Primary    | 5433 | Primary connection pool: Read/Write Service                |
| Haproxy Replica    | 5434 | Replica connection pool: Read-only Service                 |
| Haproxy Default    | 5436 | Primary Direct Connect Service                             |
| Haproxy Offline    | 5438 | Offline Direct Connect: Offline Read Service               |
| Haproxy service    | 543x | Customized Services                                        |
| Haproxy Admin      | 9101 | Monitoring metrics and traffic management                  |
| PG Exporter        | 9630 | PG Monitoring Metrics Exporter                             |
| PGBouncer Exporter | 9631 | PGBouncer Monitoring Metrics Exporter                      |
| Node Exporter      | 9100 | Node Monitoring Metrics Exporter                           |
| Promtail           | 9080 | Collection of Postgres, Pgbouncer, Patroni logs (Optional) |
| Consul DNS         | 8600 | DNS Service                                                |
| vip-manager        | -    | Bind VIP to the primary                                    |
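A hedged sketch of probing a few of these services on a PGSQL node (the node IP comes from the earlier example; the database user and password are hypothetical placeholders, not values Pigsty requires):

```bash
# Patroni REST API: member status on port 8008.
curl -s http://10.10.10.11:8008/patroni
# pg_exporter metrics on port 9630.
curl -s http://10.10.10.11:9630/metrics | head
# Read/write access through the haproxy primary service on port 5433
# (dbuser and DBUser.Password are placeholders).
psql 'postgres://dbuser:DBUser.Password@10.10.10.11:5433/postgres' -c 'SELECT 1;'
```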
