Parameters

Parameter reference for Pigsty

| ID  | Name | Module | Section | Type | Level | Comment |
|-----|------|--------|---------|------|-------|---------|
| 101 | version | INFRA | META | string | G | pigsty version string |
| 102 | admin_ip | INFRA | META | ip | G | admin node ip address |
| 103 | region | INFRA | META | enum | G | upstream mirror region: default,china,europe |
| 104 | proxy_env | INFRA | META | dict | G | global proxy env when downloading packages |
| 105 | ca_method | INFRA | CA | enum | G | create,recreate,copy, create by default |
| 106 | ca_cn | INFRA | CA | string | G | ca common name, fixed as pigsty-ca |
| 107 | cert_validity | INFRA | CA | interval | G | cert validity, 20 years by default |
| 108 | infra_seq | INFRA | INFRA_ID | int | I | infra node identity, REQUIRED |
| 109 | infra_portal | INFRA | INFRA_ID | dict | G | infra services exposed via portal |
| 110 | repo_enabled | INFRA | REPO | bool | G/I | create a yum repo on this infra node? |
| 111 | repo_home | INFRA | REPO | path | G | repo home dir, /www by default |
| 112 | repo_name | INFRA | REPO | string | G | repo name, pigsty by default |
| 113 | repo_endpoint | INFRA | REPO | url | G | access point to this repo by domain or ip:port |
| 114 | repo_remove | INFRA | REPO | bool | G/A | remove existing upstream repo |
| 115 | repo_upstream | INFRA | REPO | upstream[] | G | where to download upstream packages |
| 116 | repo_packages | INFRA | REPO | string[] | G | which packages to be included |
| 117 | repo_url_packages | INFRA | REPO | string[] | G | extra packages from url |
| 118 | infra_packages | INFRA | INFRA_PACKAGE | string[] | G | packages to be installed on infra nodes |
| 119 | infra_packages_pip | INFRA | INFRA_PACKAGE | string | G | pip installed packages for infra nodes |
| 120 | nginx_enabled | INFRA | NGINX | bool | G/I | enable nginx on this infra node? |
| 121 | nginx_sslmode | INFRA | NGINX | enum | G | nginx ssl mode? disable,enable,enforce |
| 122 | nginx_home | INFRA | NGINX | path | G | nginx content dir, /www by default |
| 123 | nginx_port | INFRA | NGINX | port | G | nginx listen port, 80 by default |
| 124 | nginx_ssl_port | INFRA | NGINX | port | G | nginx ssl listen port, 443 by default |
| 125 | nginx_navbar | INFRA | NGINX | index[] | G | nginx index page navigation links |
| 126 | dns_enabled | INFRA | DNS | bool | G/I | setup dnsmasq on this infra node? |
| 127 | dns_port | INFRA | DNS | port | G | dns server listen port, 53 by default |
| 128 | dns_records | INFRA | DNS | string[] | G | dynamic dns records resolved by dnsmasq |
| 129 | prometheus_enabled | INFRA | PROMETHEUS | bool | G/I | enable prometheus on this infra node? |
| 130 | prometheus_clean | INFRA | PROMETHEUS | bool | G/A | clean prometheus data during init? |
| 131 | prometheus_data | INFRA | PROMETHEUS | path | G | prometheus data dir, /data/prometheus by default |
| 132 | prometheus_sd_interval | INFRA | PROMETHEUS | interval | G | prometheus target refresh interval, 5s by default |
| 133 | prometheus_scrape_interval | INFRA | PROMETHEUS | interval | G | prometheus scrape & eval interval, 10s by default |
| 134 | prometheus_scrape_timeout | INFRA | PROMETHEUS | interval | G | prometheus global scrape timeout, 8s by default |
| 135 | prometheus_options | INFRA | PROMETHEUS | arg | G | prometheus extra server options |
| 136 | pushgateway_enabled | INFRA | PROMETHEUS | bool | G/I | setup pushgateway on this infra node? |
| 137 | pushgateway_options | INFRA | PROMETHEUS | arg | G | pushgateway extra server options |
| 138 | blackbox_enabled | INFRA | PROMETHEUS | bool | G/I | setup blackbox_exporter on this infra node? |
| 139 | blackbox_options | INFRA | PROMETHEUS | arg | G | blackbox_exporter extra server options |
| 140 | alertmanager_enabled | INFRA | PROMETHEUS | bool | G/I | setup alertmanager on this infra node? |
| 141 | alertmanager_options | INFRA | PROMETHEUS | arg | G | alertmanager extra server options |
| 142 | exporter_metrics_path | INFRA | PROMETHEUS | path | G | exporter metric path, /metrics by default |
| 143 | exporter_install | INFRA | PROMETHEUS | enum | G | how to install exporter? none,yum,binary |
| 144 | exporter_repo_url | INFRA | PROMETHEUS | url | G | exporter repo file url if install exporter via yum |
| 145 | grafana_enabled | INFRA | GRAFANA | bool | G/I | enable grafana on this infra node? |
| 146 | grafana_clean | INFRA | GRAFANA | bool | G/A | clean grafana data during init? |
| 147 | grafana_admin_username | INFRA | GRAFANA | username | G | grafana admin username, admin by default |
| 148 | grafana_admin_password | INFRA | GRAFANA | password | G | grafana admin password, pigsty by default |
| 149 | grafana_plugin_cache | INFRA | GRAFANA | path | G | path to grafana plugins cache tarball |
| 150 | grafana_plugin_list | INFRA | GRAFANA | string[] | G | grafana plugins to be downloaded with grafana-cli |
| 151 | loki_enabled | INFRA | LOKI | bool | G/I | enable loki on this infra node? |
| 152 | loki_clean | INFRA | LOKI | bool | G/A | whether to remove existing loki data? |
| 153 | loki_data | INFRA | LOKI | path | G | loki data dir, /data/loki by default |
| 154 | loki_retention | INFRA | LOKI | interval | G | loki log retention period, 15d by default |
| 201 | nodename | NODE | NODE_ID | string | I | node instance identity, use hostname if missing, optional |
| 202 | node_cluster | NODE | NODE_ID | string | C | node cluster identity, use 'nodes' if missing, optional |
| 203 | nodename_overwrite | NODE | NODE_ID | bool | C | overwrite node's hostname with nodename? |
| 204 | nodename_exchange | NODE | NODE_ID | bool | C | exchange nodename among play hosts? |
| 205 | node_id_from_pg | NODE | NODE_ID | bool | C | use postgres identity as node identity if applicable? |
| 210 | node_default_etc_hosts | NODE | NODE_DNS | string[] | G | static dns records in /etc/hosts |
| 211 | node_etc_hosts | NODE | NODE_DNS | string[] | C | extra static dns records in /etc/hosts |
| 212 | node_dns_method | NODE | NODE_DNS | enum | C | how to handle dns servers: add,none,overwrite |
| 213 | node_dns_servers | NODE | NODE_DNS | string[] | C | dynamic nameserver in /etc/resolv.conf |
| 214 | node_dns_options | NODE | NODE_DNS | string[] | C | dns resolv options in /etc/resolv.conf |
| 220 | node_repo_method | NODE | NODE_PACKAGE | enum | C | how to setup node repo: none,local,public |
| 221 | node_repo_remove | NODE | NODE_PACKAGE | bool | C | remove existing repo on node? |
| 222 | node_repo_local_urls | NODE | NODE_PACKAGE | string[] | C | local repo url, if node_repo_method = local |
| 223 | node_packages | NODE | NODE_PACKAGE | string[] | C | packages to be installed on current nodes |
| 224 | node_default_packages | NODE | NODE_PACKAGE | string[] | G | default packages to be installed on all nodes |
| 230 | node_disable_firewall | NODE | NODE_TUNE | bool | C | disable node firewall? true by default |
| 231 | node_disable_selinux | NODE | NODE_TUNE | bool | C | disable node selinux? true by default |
| 232 | node_disable_numa | NODE | NODE_TUNE | bool | C | disable node numa, reboot required |
| 233 | node_disable_swap | NODE | NODE_TUNE | bool | C | disable node swap, use with caution |
| 234 | node_static_network | NODE | NODE_TUNE | bool | C | preserve dns resolver settings after reboot |
| 235 | node_disk_prefetch | NODE | NODE_TUNE | bool | C | setup disk prefetch on HDD to increase performance |
| 236 | node_kernel_modules | NODE | NODE_TUNE | string[] | C | kernel modules to be enabled on this node |
| 237 | node_hugepage_count | NODE | NODE_TUNE | int | C | number of 2MB hugepages, takes precedence over ratio |
| 238 | node_hugepage_ratio | NODE | NODE_TUNE | float | C | node mem hugepage ratio, 0 disables it by default |
| 239 | node_overcommit_ratio | NODE | NODE_TUNE | float | C | node mem overcommit ratio, 0 disables it by default |
| 240 | node_tune | NODE | NODE_TUNE | enum | C | node tuned profile: none,oltp,olap,crit,tiny |
| 241 | node_sysctl_params | NODE | NODE_TUNE | dict | C | sysctl parameters in k:v format in addition to tuned |
| 250 | node_data | NODE | NODE_ADMIN | path | C | node main data directory, /data by default |
| 251 | node_admin_enabled | NODE | NODE_ADMIN | bool | C | create an admin user on target node? |
| 252 | node_admin_uid | NODE | NODE_ADMIN | int | C | uid and gid for node admin user |
| 253 | node_admin_username | NODE | NODE_ADMIN | username | C | name of node admin user, dba by default |
| 254 | node_admin_ssh_exchange | NODE | NODE_ADMIN | bool | C | exchange admin ssh key among node cluster |
| 255 | node_admin_pk_current | NODE | NODE_ADMIN | bool | C | add current user's ssh pk to admin authorized_keys |
| 256 | node_admin_pk_list | NODE | NODE_ADMIN | string[] | C | ssh public keys to be added to admin user |
| 260 | node_timezone | NODE | NODE_TIME | string | C | setup node timezone, empty string to skip |
| 261 | node_ntp_enabled | NODE | NODE_TIME | bool | C | enable chronyd time sync service? |
| 262 | node_ntp_servers | NODE | NODE_TIME | string[] | C | ntp servers in /etc/chrony.conf |
| 263 | node_crontab_overwrite | NODE | NODE_TIME | bool | C | overwrite or append to /etc/crontab? |
| 264 | node_crontab | NODE | NODE_TIME | string[] | C | crontab entries in /etc/crontab |
| 270 | haproxy_enabled | NODE | HAPROXY | bool | C | enable haproxy on this node? |
| 271 | haproxy_clean | NODE | HAPROXY | bool | G/C/A | cleanup all existing haproxy config? |
| 272 | haproxy_reload | NODE | HAPROXY | bool | A | reload haproxy after config? |
| 273 | haproxy_auth_enabled | NODE | HAPROXY | bool | G | enable authentication for haproxy admin page |
| 274 | haproxy_admin_username | NODE | HAPROXY | username | G | haproxy admin username, admin by default |
| 275 | haproxy_admin_password | NODE | HAPROXY | password | G | haproxy admin password, pigsty by default |
| 276 | haproxy_exporter_port | NODE | HAPROXY | port | C | haproxy admin/exporter port, 9101 by default |
| 277 | haproxy_client_timeout | NODE | HAPROXY | interval | C | client side connection timeout, 24h by default |
| 278 | haproxy_server_timeout | NODE | HAPROXY | interval | C | server side connection timeout, 24h by default |
| 279 | haproxy_services | NODE | HAPROXY | service[] | C | list of haproxy services to be exposed on node |
| 280 | node_exporter_enabled | NODE | NODE_EXPORTER | bool | C | setup node_exporter on this node? |
| 281 | node_exporter_port | NODE | NODE_EXPORTER | port | C | node exporter listen port, 9100 by default |
| 282 | node_exporter_options | NODE | NODE_EXPORTER | arg | C | extra server options for node_exporter |
| 283 | promtail_enabled | NODE | PROMTAIL | bool | C | enable promtail logging collector? |
| 284 | promtail_clean | NODE | PROMTAIL | bool | G/A | purge existing promtail status file during init? |
| 285 | promtail_port | NODE | PROMTAIL | port | C | promtail listen port, 9080 by default |
| 286 | promtail_positions | NODE | PROMTAIL | path | C | promtail position status file path |
| 401 | docker_enabled | NODE | DOCKER | bool | C | enable docker on this node? |
| 402 | docker_cgroups_driver | NODE | DOCKER | enum | C | docker cgroup fs driver: cgroupfs,systemd |
| 403 | docker_registry_mirrors | NODE | DOCKER | string[] | C | docker registry mirror list |
| 404 | docker_image_cache | NODE | DOCKER | path | C | docker image cache dir, /tmp/docker by default |
| 501 | etcd_seq | ETCD | ETCD | int | I | etcd instance identifier, REQUIRED |
| 502 | etcd_cluster | ETCD | ETCD | string | C | etcd cluster & group name, etcd by default |
| 503 | etcd_safeguard | ETCD | ETCD | bool | G/C/A | prevent purging running etcd instance? |
| 504 | etcd_clean | ETCD | ETCD | bool | G/C/A | purge existing etcd during initialization? |
| 505 | etcd_data | ETCD | ETCD | path | C | etcd data directory, /data/etcd by default |
| 506 | etcd_port | ETCD | ETCD | port | C | etcd client port, 2379 by default |
| 507 | etcd_peer_port | ETCD | ETCD | port | C | etcd peer port, 2380 by default |
| 508 | etcd_init | ETCD | ETCD | enum | C | etcd initial cluster state, new or existing |
| 509 | etcd_election_timeout | ETCD | ETCD | int | C | etcd election timeout, 1000ms by default |
| 510 | etcd_heartbeat_interval | ETCD | ETCD | int | C | etcd heartbeat interval, 100ms by default |
| 601 | minio_seq | MINIO | MINIO | int | I | minio instance identifier, REQUIRED |
| 602 | minio_cluster | MINIO | MINIO | string | C | minio cluster name, minio by default |
| 603 | minio_clean | MINIO | MINIO | bool | G/C/A | cleanup minio during init? false by default |
| 604 | minio_user | MINIO | MINIO | username | C | minio os user, minio by default |
| 605 | minio_node | MINIO | MINIO | string | C | minio node name pattern |
| 606 | minio_data | MINIO | MINIO | path | C | minio data dir(s), use {x...y} to specify multiple drives |
| 607 | minio_domain | MINIO | MINIO | string | G | minio service domain name, sss.pigsty by default |
| 608 | minio_port | MINIO | MINIO | port | C | minio service port, 9000 by default |
| 609 | minio_admin_port | MINIO | MINIO | port | C | minio console port, 9001 by default |
| 610 | minio_access_key | MINIO | MINIO | username | C | root access key, minioadmin by default |
| 611 | minio_secret_key | MINIO | MINIO | password | C | root secret key, minioadmin by default |
| 612 | minio_extra_vars | MINIO | MINIO | string | C | extra environment variables for minio server |
| 613 | minio_alias | MINIO | MINIO | string | G | alias name for local minio deployment |
| 614 | minio_buckets | MINIO | MINIO | bucket[] | C | list of minio buckets to be created |
| 615 | minio_users | MINIO | MINIO | user[] | C | list of minio users to be created |
| 701 | redis_cluster | REDIS | REDIS | string | C | redis cluster name, required identity parameter |
| 702 | redis_instances | REDIS | REDIS | dict | I | redis instances definition on this redis node |
| 703 | redis_node | REDIS | REDIS | int | I | redis node sequence number, node int id required |
| 710 | redis_fs_main | REDIS | REDIS | path | C | redis main data mountpoint, /data by default |
| 711 | redis_exporter_enabled | REDIS | REDIS | bool | C | install redis exporter on redis nodes? |
| 712 | redis_exporter_port | REDIS | REDIS | port | C | redis exporter listen port, 9121 by default |
| 713 | redis_exporter_options | REDIS | REDIS | string | C/I | cli args and extra options for redis exporter |
| 720 | redis_safeguard | REDIS | REDIS | bool | C | prevent purging running redis instance? |
| 721 | redis_clean | REDIS | REDIS | bool | C | purge existing redis during init? |
| 722 | redis_rmdata | REDIS | REDIS | bool | A | remove redis data when purging redis server? |
| 723 | redis_mode | REDIS | REDIS | enum | C | redis mode: standalone,cluster,sentinel |
| 724 | redis_conf | REDIS | REDIS | string | C | redis config template path, except sentinel |
| 725 | redis_bind_address | REDIS | REDIS | ip | C | redis bind address, empty string will use host ip |
| 726 | redis_max_memory | REDIS | REDIS | size | C/I | max memory used by each redis instance |
| 727 | redis_mem_policy | REDIS | REDIS | enum | C | redis memory eviction policy |
| 728 | redis_password | REDIS | REDIS | password | C | redis password, empty string will disable password |
| 729 | redis_rdb_save | REDIS | REDIS | string[] | C | redis rdb save directives, disable with empty list |
| 730 | redis_aof_enabled | REDIS | REDIS | bool | C | enable redis append only file? |
| 731 | redis_rename_commands | REDIS | REDIS | dict | C | rename redis dangerous commands |
| 732 | redis_cluster_replicas | REDIS | REDIS | int | C | replica number for one master in redis cluster |
| 801 | pg_mode | PGSQL | PG_ID | enum | C | pgsql cluster mode: pgsql,citus,gpsql |
| 802 | pg_cluster | PGSQL | PG_ID | string | C | pgsql cluster name, REQUIRED identity parameter |
| 803 | pg_seq | PGSQL | PG_ID | int | I | pgsql instance seq number, REQUIRED identity parameter |
| 804 | pg_role | PGSQL | PG_ID | enum | I | pgsql role, REQUIRED, could be primary,replica,offline |
| 805 | pg_instances | PGSQL | PG_ID | dict | I | define multiple pg instances on node in {port:ins_vars} format |
| 806 | pg_upstream | PGSQL | PG_ID | ip | I | repl upstream ip addr for standby cluster or cascade replica |
| 807 | pg_shard | PGSQL | PG_ID | string | C | pgsql shard name, optional identity for sharding clusters |
| 808 | pg_group | PGSQL | PG_ID | int | C | pgsql shard index number, optional identity for sharding clusters |
| 809 | gp_role | PGSQL | PG_ID | enum | C | greenplum role of this cluster, could be master or segment |
| 810 | pg_exporters | PGSQL | PG_ID | dict | C | additional pg_exporters to monitor remote postgres instances |
| 811 | pg_offline_query | PGSQL | PG_ID | bool | I | set to true to enable offline query on this instance |
| 820 | pg_users | PGSQL | PG_BUSINESS | user[] | C | postgres business users |
| 821 | pg_databases | PGSQL | PG_BUSINESS | database[] | C | postgres business databases |
| 822 | pg_services | PGSQL | PG_BUSINESS | service[] | C | postgres business services |
| 823 | pg_hba_rules | PGSQL | PG_BUSINESS | hba[] | C | business hba rules for postgres |
| 824 | pgb_hba_rules | PGSQL | PG_BUSINESS | hba[] | C | business hba rules for pgbouncer |
| 831 | pg_replication_username | PGSQL | PG_BUSINESS | username | G | postgres replication username, replicator by default |
| 832 | pg_replication_password | PGSQL | PG_BUSINESS | password | G | postgres replication password, DBUser.Replicator by default |
| 833 | pg_admin_username | PGSQL | PG_BUSINESS | username | G | postgres admin username, dbuser_dba by default |
| 834 | pg_admin_password | PGSQL | PG_BUSINESS | password | G | postgres admin password in plain text, DBUser.DBA by default |
| 835 | pg_monitor_username | PGSQL | PG_BUSINESS | username | G | postgres monitor username, dbuser_monitor by default |
| 836 | pg_monitor_password | PGSQL | PG_BUSINESS | password | G | postgres monitor password, DBUser.Monitor by default |
| 837 | pg_dbsu_password | PGSQL | PG_BUSINESS | password | G/C | postgres dbsu password, empty string disables it by default |
| 840 | pg_dbsu | PGSQL | PG_INSTALL | username | C | os dbsu name, postgres by default, better not change it |
| 841 | pg_dbsu_uid | PGSQL | PG_INSTALL | int | C | os dbsu uid and gid, 26 for default postgres users and groups |
| 842 | pg_dbsu_sudo | PGSQL | PG_INSTALL | enum | C | dbsu sudo privilege, none,limit,all,nopass. limit by default |
| 843 | pg_dbsu_home | PGSQL | PG_INSTALL | path | C | postgresql home directory, /var/lib/pgsql by default |
| 844 | pg_dbsu_ssh_exchange | PGSQL | PG_INSTALL | bool | C | exchange postgres dbsu ssh key among same pgsql cluster |
| 845 | pg_version | PGSQL | PG_INSTALL | enum | C | postgres major version to be installed, 15 by default |
| 846 | pg_bin_dir | PGSQL | PG_INSTALL | path | C | postgres binary dir, /usr/pgsql/bin by default |
| 847 | pg_log_dir | PGSQL | PG_INSTALL | path | C | postgres log dir, /pg/log/postgres by default |
| 848 | pg_packages | PGSQL | PG_INSTALL | string[] | C | pg packages to be installed, ${pg_version} will be replaced |
| 849 | pg_extensions | PGSQL | PG_INSTALL | string[] | C | pg extensions to be installed, ${pg_version} will be replaced |
| 850 | pg_safeguard | PGSQL | PG_BOOTSTRAP | bool | G/C/A | prevent purging running postgres instance? false by default |
| 851 | pg_clean | PGSQL | PG_BOOTSTRAP | bool | G/C/A | purge existing postgres during pgsql init? true by default |
| 852 | pg_data | PGSQL | PG_BOOTSTRAP | path | C | postgres data directory, /pg/data by default |
| 853 | pg_fs_main | PGSQL | PG_BOOTSTRAP | path | C | mountpoint/path for postgres main data, /data by default |
| 854 | pg_fs_bkup | PGSQL | PG_BOOTSTRAP | path | C | mountpoint/path for pg backup data, /data/backup by default |
| 855 | pg_storage_type | PGSQL | PG_BOOTSTRAP | enum | C | storage type for pg main data, SSD,HDD, SSD by default |
| 856 | pg_dummy_filesize | PGSQL | PG_BOOTSTRAP | size | C | size of /pg/dummy, holds 64MB disk space for emergency use |
| 857 | pg_listen | PGSQL | PG_BOOTSTRAP | ip | C | postgres listen address, 0.0.0.0 (all ipv4 addr) by default |
| 858 | pg_port | PGSQL | PG_BOOTSTRAP | port | C | postgres listen port, 5432 by default |
| 859 | pg_localhost | PGSQL | PG_BOOTSTRAP | path | C | postgres unix socket dir for localhost connection |
| 860 | pg_namespace | PGSQL | PG_BOOTSTRAP | path | C | top level key namespace in etcd, used by patroni & vip |
| 861 | patroni_enabled | PGSQL | PG_BOOTSTRAP | bool | C | if disabled, no postgres cluster will be created during init |
| 862 | patroni_mode | PGSQL | PG_BOOTSTRAP | enum | C | patroni working mode: default,pause,remove |
| 863 | patroni_port | PGSQL | PG_BOOTSTRAP | port | C | patroni listen port, 8008 by default |
| 864 | patroni_log_dir | PGSQL | PG_BOOTSTRAP | path | C | patroni log dir, /pg/log/patroni by default |
| 865 | patroni_ssl_enabled | PGSQL | PG_BOOTSTRAP | bool | G | secure patroni RestAPI communications with SSL? |
| 866 | patroni_watchdog_mode | PGSQL | PG_BOOTSTRAP | enum | C | patroni watchdog mode: automatic,required,off. off by default |
| 867 | patroni_username | PGSQL | PG_BOOTSTRAP | username | C | patroni restapi username, postgres by default |
| 868 | patroni_password | PGSQL | PG_BOOTSTRAP | password | C | patroni restapi password, Patroni.API by default |
| 869 | patroni_citus_db | PGSQL | PG_BOOTSTRAP | string | C | citus database managed by patroni, postgres by default |
| 870 | pg_conf | PGSQL | PG_BOOTSTRAP | enum | C | config template: oltp,olap,crit,tiny. oltp.yml by default |
| 871 | pg_max_conn | PGSQL | PG_BOOTSTRAP | int | C | postgres max connections, auto will use recommended value |
| 872 | pg_shared_buffer_ratio | PGSQL | PG_BOOTSTRAP | float | C | postgres shared buffer memory ratio, 0.25 by default, 0.1~0.4 |
| 873 | pg_rto | PGSQL | PG_BOOTSTRAP | int | C | recovery time objective in seconds, 30s by default |
| 874 | pg_rpo | PGSQL | PG_BOOTSTRAP | int | C | recovery point objective in bytes, 1MiB at most by default |
| 875 | pg_libs | PGSQL | PG_BOOTSTRAP | string | C | preloaded libraries, pg_stat_statements,auto_explain by default |
| 876 | pg_delay | PGSQL | PG_BOOTSTRAP | interval | I | replication apply delay for standby cluster leader |
| 877 | pg_checksum | PGSQL | PG_BOOTSTRAP | bool | C | enable data checksum for postgres cluster? |
| 878 | pg_pwd_enc | PGSQL | PG_BOOTSTRAP | enum | C | passwords encryption algorithm: md5,scram-sha-256 |
| 879 | pg_encoding | PGSQL | PG_BOOTSTRAP | enum | C | database cluster encoding, UTF8 by default |
| 880 | pg_locale | PGSQL | PG_BOOTSTRAP | enum | C | database cluster locale, C by default |
| 881 | pg_lc_collate | PGSQL | PG_BOOTSTRAP | enum | C | database cluster collate, C by default |
| 882 | pg_lc_ctype | PGSQL | PG_BOOTSTRAP | enum | C | database character type, en_US.UTF8 by default |
| 890 | pgbouncer_enabled | PGSQL | PG_BOOTSTRAP | bool | C | if disabled, pgbouncer will not be launched on pgsql host |
| 891 | pgbouncer_port | PGSQL | PG_BOOTSTRAP | port | C | pgbouncer listen port, 6432 by default |
| 892 | pgbouncer_log_dir | PGSQL | PG_BOOTSTRAP | path | C | pgbouncer log dir, /pg/log/pgbouncer by default |
| 893 | pgbouncer_auth_query | PGSQL | PG_BOOTSTRAP | bool | C | query postgres to retrieve unlisted business users? |
| 894 | pgbouncer_poolmode | PGSQL | PG_BOOTSTRAP | enum | C | pooling mode: transaction,session,statement, transaction by default |
| 895 | pgbouncer_sslmode | PGSQL | PG_BOOTSTRAP | enum | C | pgbouncer client ssl mode, disable by default |
| 900 | pg_provision | PGSQL | PG_PROVISION | bool | C | provision postgres cluster after bootstrap |
| 901 | pg_init | PGSQL | PG_PROVISION | string | G/C | provision init script for cluster template, pg-init by default |
| 902 | pg_default_roles | PGSQL | PG_PROVISION | role[] | G/C | default roles and users in postgres cluster |
| 903 | pg_default_privileges | PGSQL | PG_PROVISION | string[] | G/C | default privileges when created by admin user |
| 904 | pg_default_schemas | PGSQL | PG_PROVISION | string[] | G/C | default schemas to be created |
| 905 | pg_default_extensions | PGSQL | PG_PROVISION | extension[] | G/C | default extensions to be created |
| 906 | pg_reload | PGSQL | PG_PROVISION | bool | A | reload postgres after hba changes |
| 907 | pg_default_hba_rules | PGSQL | PG_PROVISION | hba[] | G/C | postgres default host-based authentication rules |
| 908 | pgb_default_hba_rules | PGSQL | PG_PROVISION | hba[] | G/C | pgbouncer default host-based authentication rules |
| 910 | pgbackrest_enabled | PGSQL | PG_BACKUP | bool | C | enable pgbackrest on pgsql host? |
| 911 | pgbackrest_clean | PGSQL | PG_BACKUP | bool | C | remove pg backup data during init? |
| 912 | pgbackrest_log_dir | PGSQL | PG_BACKUP | path | C | pgbackrest log dir, /pg/log/pgbackrest by default |
| 913 | pgbackrest_method | PGSQL | PG_BACKUP | enum | C | pgbackrest repo method: local,minio,etc. |
| 914 | pgbackrest_repo | PGSQL | PG_BACKUP | dict | G/C | pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository |
| 921 | pg_weight | PGSQL | PG_SERVICE | int | I | relative load balance weight in service, 100 by default, 0-255 |
| 922 | pg_service_provider | PGSQL | PG_SERVICE | string | G/C | dedicated haproxy node group name, or empty string for local nodes by default |
| 923 | pg_default_service_dest | PGSQL | PG_SERVICE | enum | G/C | default service destination if svc.dest='default' |
| 924 | pg_default_services | PGSQL | PG_SERVICE | service[] | G/C | postgres default service definitions |
| 931 | pg_vip_enabled | PGSQL | PG_SERVICE | bool | C | enable a l2 vip for pgsql primary? false by default |
| 932 | pg_vip_address | PGSQL | PG_SERVICE | cidr4 | C | vip address in <ipv4>/<mask> format, required if vip is enabled |
| 933 | pg_vip_interface | PGSQL | PG_SERVICE | string | C/I | vip network interface to listen, eth0 by default |
| 934 | pg_dns_suffix | PGSQL | PG_SERVICE | string | C | pgsql dns suffix, '' by default |
| 935 | pg_dns_target | PGSQL | PG_SERVICE | enum | C | auto, primary, vip, none, or ad hoc ip |
| 940 | pg_exporter_enabled | PGSQL | PG_EXPORTER | bool | C | enable pg_exporter on pgsql hosts? |
| 941 | pg_exporter_config | PGSQL | PG_EXPORTER | string | C | pg_exporter configuration file name |
| 942 | pg_exporter_cache_ttls | PGSQL | PG_EXPORTER | string | C | pg_exporter collector ttl stage in seconds, '1,10,60,300' by default |
| 943 | pg_exporter_port | PGSQL | PG_EXPORTER | port | C | pg_exporter listen port, 9630 by default |
| 944 | pg_exporter_params | PGSQL | PG_EXPORTER | string | C | extra url parameters for pg_exporter dsn |
| 945 | pg_exporter_url | PGSQL | PG_EXPORTER | pgurl | C | overwrite auto-generated pg dsn if specified |
| 946 | pg_exporter_auto_discovery | PGSQL | PG_EXPORTER | bool | C | enable auto database discovery? enabled by default |
| 947 | pg_exporter_exclude_database | PGSQL | PG_EXPORTER | string | C | csv of databases that WILL NOT be monitored during auto-discovery |
| 948 | pg_exporter_include_database | PGSQL | PG_EXPORTER | string | C | csv of databases that WILL BE monitored during auto-discovery |
| 949 | pg_exporter_connect_timeout | PGSQL | PG_EXPORTER | int | C | pg_exporter connect timeout in ms, 200 by default |
| 950 | pg_exporter_options | PGSQL | PG_EXPORTER | arg | C | overwrite extra options for pg_exporter |
| 951 | pgbouncer_exporter_enabled | PGSQL | PG_EXPORTER | bool | C | enable pgbouncer_exporter on pgsql hosts? |
| 952 | pgbouncer_exporter_port | PGSQL | PG_EXPORTER | port | C | pgbouncer_exporter listen port, 9631 by default |
| 953 | pgbouncer_exporter_url | PGSQL | PG_EXPORTER | pgurl | C | overwrite auto-generated pgbouncer dsn if specified |
| 954 | pgbouncer_exporter_options | PGSQL | PG_EXPORTER | arg | C | overwrite extra options for pgbouncer_exporter |

INFRA

Parameters about pigsty infrastructure components: local yum repo, nginx, dnsmasq, prometheus, grafana, loki, alertmanager, pushgateway, blackbox_exporter, etc.


META

This section contains metadata for the current pigsty deployment, such as the version string, admin node IP address, repo mirror region, and http(s) proxy used when downloading packages.

```yaml
version: v2.0.2                   # pigsty version string
admin_ip: 10.10.10.10             # admin node ip address
region: default                   # upstream mirror region: default,china,europe
proxy_env:                        # global proxy env when downloading packages
  no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
  # http_proxy:  # set your proxy here: e.g http://user:pass@proxy.xxx.com
  # https_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
  # all_proxy:   # set your proxy here: e.g http://user:pass@proxy.xxx.com
```

version

name: version, type: string, level: G

pigsty version string

default value: v2.0.2

It will be used for pigsty introspection & content rendering.

admin_ip

name: admin_ip, type: ip, level: G

admin node ip address

default value: 10.10.10.10

The node with this IP address will be treated as the admin node, usually pointing to the first node where Pigsty is installed.

The default value 10.10.10.10 is a placeholder that will be replaced during configure.

This parameter is referenced by many other parameters, such as infra_portal, repo_endpoint, dns_records, and node_dns_servers.

The exact string ${admin_ip} will be replaced with the actual admin_ip in those parameters.

region

name: region, type: enum, level: G

upstream mirror region: default,china,europe

default value: default

If a region other than default is set, and there’s a corresponding entry in repo_upstream.[repo].baseurl, it will be used instead of default.

For example, if china is used, pigsty will use China mirrors designated in repo_upstream if applicable.
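A minimal sketch of that behavior (not part of the defaults):

```yaml
region: china   # repo_upstream entries with a `china` baseurl will be used instead of `default`
```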

proxy_env

name: proxy_env, type: dict, level: G

global proxy env when downloading packages

default value:

```yaml
proxy_env:                        # global proxy env when downloading packages
  http_proxy: 'http://username:password@proxy.address.com'
  https_proxy: 'http://username:password@proxy.address.com'
  all_proxy: 'http://username:password@proxy.address.com'
  no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.aliyuncs.com,mirrors.tuna.tsinghua.edu.cn,mirrors.zju.edu.cn"
```

It is quite important to use an HTTP proxy in restricted production environments, or when direct Internet access is blocked (e.g., in mainland China).


CA

Self-Signed CA used by pigsty. It is required to support advanced security features.

```yaml
ca_method: create                 # create,recreate,copy, create by default
ca_cn: pigsty-ca                  # ca common name, fixed as pigsty-ca
cert_validity: 7300d              # cert validity, 20 years by default
```

ca_method

name: ca_method, type: enum, level: G

available options: create,recreate,copy

default value: create

  • create: Create a new CA public-private key pair if none exists; reuse the existing one if present
  • recreate: Always re-create a new CA public-private key pair
  • copy: Copy the existing CA public and private keys from local files/pki/ca; abort if missing

If you already have a pair of ca.crt and ca.key, put them under files/pki/ca and set ca_method to copy.
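A minimal sketch of that copy workflow, run from the pigsty source directory (the cp source paths are illustrative):

```yaml
# put your existing CA under files/pki/ca first, e.g. with ordinary shell:
#   cp /path/to/ca.crt /path/to/ca.key files/pki/ca/
ca_method: copy                   # abort if files/pki/ca/ca.crt or ca.key is missing
```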

ca_cn

name: ca_cn, type: string, level: G

ca common name, which is not recommended to change.

default value: pigsty-ca

you can check that with openssl x509 -text -in /etc/pki/ca.crt

cert_validity

name: cert_validity, type: interval, level: G

cert validity, 20 years by default, which is enough for most scenarios

default value: 7300d


INFRA_ID

Infrastructure identity and portal definition.

  1. #infra_seq: 1 # infra node identity, explicitly required
  2. infra_portal: # infra services exposed via portal
  3. home : { domain: h.pigsty }
  4. grafana : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" ,websocket: true }
  5. prometheus : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  6. alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  7. blackbox : { endpoint: "${admin_ip}:9115" }
  8. loki : { endpoint: "${admin_ip}:3100" }

infra_seq

name: infra_seq, type: int, level: I

infra node identity, REQUIRED, no default value, you have to assign it explicitly.
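Since there is no default value, infra_seq is assigned per host in the inventory; a minimal sketch with placeholder IP addresses:

```yaml
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }   # first infra node
    10.10.10.11: { infra_seq: 2 }   # second infra node
```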

infra_portal

name: infra_portal, type: dict, level: G

infra services exposed via portal

The default value exposes home, grafana, prometheus, and alertmanager via nginx with corresponding domain names:

```yaml
infra_portal:                     # infra services exposed via portal
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" ,websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  loki         : { endpoint: "${admin_ip}:3100" }
```

Each record is a key-value pair: the key is the component name, and the value is a dict containing the external access domain and the internal endpoint.

  • The names of the default records are fixed and referenced by other modules, so do not modify the default entry names.
  • The domain is the domain name used for external access to this upstream server. Domain names will be added to the Nginx SSL cert SAN.
  • The endpoint is an internally reachable TCP address; ${admin_ip} will be replaced with the actual admin_ip at runtime.
  • If websocket is set to true, the http protocol will be automatically upgraded for websocket connections.
  • If scheme is given (http or https), it will be used as part of the proxy_pass URL.
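Putting these rules together, a sketch of extending the default portal with one extra record (the minio entry is illustrative, not a built-in default; it reuses the console port 9001 and domain sss.pigsty mentioned in the MINIO section):

```yaml
infra_portal:
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" ,websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  loki         : { endpoint: "${admin_ip}:3100" }
  minio        : { domain: sss.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }  # illustrative extra record
```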

REPO

This section is about local yum repo, which is used by all other modules.

Pigsty is installed on a meta node, where it pulls up a local yum repo for the current environment to install RPM packages.

During initialization, Pigsty downloads all packages and their dependencies (specified by repo_packages) from the Internet upstream repo (specified by repo_upstream) to {{ nginx_home }}/{{ repo_name }} (/www/pigsty by default). The total size of all dependencies is about 1GB or so.

When creating the local yum repo, Pigsty will skip the software download phase if the directory already exists and contains a marker file named repo_complete.

If some packages download too slowly, you can set a download proxy via the proxy_env config entry to complete the first-time download, or directly download the pre-packaged offline package.

The offline package is a tarball of the {{ nginx_home }}/{{ repo_name }} dir: pkg.tgz. During configure, if Pigsty finds the offline package /tmp/pkg.tgz, it will extract it to {{ nginx_home }}/{{ repo_name }}, skipping the software download step during installation.

The default offline package is based on CentOS 7.9.2011 x86_64; if you use a different OS, you may run into RPM conflicts and dependency errors; please refer to the FAQ for solutions.
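A sketch of how the offline package relates to the repo dir, assuming the default nginx_home (/www) and repo_name (pigsty); these are ordinary tar commands, not a pigsty-specific tool:

```yaml
# pack an existing local repo into an offline package:
#   tar -zcf /tmp/pkg.tgz -C /www pigsty
# configure picks up /tmp/pkg.tgz and extracts it, roughly equivalent to:
#   tar -zxf /tmp/pkg.tgz -C /www
```

The default values of this section are: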

```yaml
repo_enabled: true                # create a yum repo on this infra node?
repo_home: /www                   # repo home dir, `/www` by default
repo_name: pigsty                 # repo name, pigsty by default
repo_endpoint: http://${admin_ip}:80  # access point to this repo by domain or ip:port
repo_remove: true                 # remove existing upstream repo
repo_upstream:                    # where to download upstream packages
  - { name: base ,description: 'EL 7 Base' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/os/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/' , europe: 'https://mirrors.xtom.de/centos/$releasever/os/$basearch/' }}
  - { name: updates ,description: 'EL 7 Updates' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/updates/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/' , europe: 'https://mirrors.xtom.de/centos/$releasever/updates/$basearch/' }}
  - { name: extras ,description: 'EL 7 Extras' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/extras/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/' , europe: 'https://mirrors.xtom.de/centos/$releasever/extras/$basearch/' }}
  - { name: epel ,description: 'EL 7 EPEL' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://download.fedoraproject.org/pub/epel/$releasever/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/$basearch/' , europe: 'https://mirrors.xtom.de/epel/$releasever/$basearch/' }}
  - { name: centos-sclo ,description: 'EL 7 SCLo' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/sclo/$basearch/sclo/' , china: 'https://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/sclo/' , europe: 'https://mirrors.xtom.de/centos/$releasever/sclo/$basearch/sclo/' }}
  - { name: centos-sclo-rh ,description: 'EL 7 SCLo rh' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/sclo/$basearch/rh/' , china: 'https://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/rh/' , europe: 'https://mirrors.xtom.de/centos/$releasever/sclo/$basearch/rh/' }}
  - { name: baseos ,description: 'EL 8+ BaseOS' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/BaseOS/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/BaseOS/$basearch/os/' }}
  - { name: appstream ,description: 'EL 8+ AppStream' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/AppStream/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/AppStream/$basearch/os/' }}
  - { name: extras ,description: 'EL 8+ Extras' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/extras/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/extras/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/extras/$basearch/os/' }}
  - { name: epel ,description: 'EL 8+ EPEL' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'http://download.fedoraproject.org/pub/epel/$releasever/Everything/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/Everything/$basearch/' , europe: 'https://mirrors.xtom.de/epel/$releasever/Everything/$basearch/' }}
  - { name: powertools ,description: 'EL 8 PowerTools' ,module: node ,releases: [ 8 ] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/PowerTools/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/PowerTools/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/PowerTools/$basearch/os/' }}
  - { name: crb ,description: 'EL 9 CRB' ,module: node ,releases: [ 9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/CRB/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/CRB/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/CRB/$basearch/os/' }}
  - { name: grafana ,description: 'Grafana' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://packages.grafana.com/oss/rpm' , china: 'https://mirrors.tuna.tsinghua.edu.cn/grafana/yum/rpm' }}
  - { name: prometheus ,description: 'Prometheus' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://packagecloud.io/prometheus-rpm/release/el/$releasever/$basearch' }}
  - { name: nginx ,description: 'Nginx Repo' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://nginx.org/packages/centos/$releasever/$basearch/' }}
  - { name: docker-ce ,description: 'Docker CE' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable' , china: 'https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/stable' , europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
  - { name: pgdg15 ,description: 'PostgreSQL 15' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-$releasever-$basearch' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' }}
  - { name: pgdg-common ,description: 'PostgreSQL Common' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' }}
  - { name: pgdg-extras ,description: 'PostgreSQL Extra' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' }}
  - { name: pgdg-el8fix ,description: 'PostgreSQL EL8FIX' ,module: pgsql ,releases: [ 8 ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' }}
  - { name: timescaledb ,description: 'TimescaleDB' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/el/$releasever/$basearch' }}
  - { name: citus ,description: 'Citus Community' ,module: pgsql ,releases: [7 ] ,baseurl: { default: 'https://repos.citusdata.com/community/el/$releasever/$basearch' }}
repo_packages:                    # which packages to be included
  - grafana loki logcli promtail prometheus2 alertmanager pushgateway blackbox_exporter node_exporter redis_exporter
  - nginx nginx_exporter wget createrepo_c sshpass ansible python3 python3-pip python3-requests python3-jmespath mtail dnsmasq docker-ce docker-compose-plugin etcd
  - lz4 unzip bzip2 zlib yum dnf-utils pv jq git ncdu make patch bash lsof wget uuid tuned chrony perf flamegraph nvme-cli numactl grubby sysstat iotop htop modulemd-tools
  - netcat socat rsync ftp lrzsz s3cmd net-tools tcpdump ipvsadm bind-utils telnet audit ca-certificates openssl openssh-clients readline vim-minimal haproxy redis
  - postgresql15* postgis33_15* citus_15* pglogical_15* pg_squeeze_15* wal2json_15* pg_repack_15* timescaledb-2-postgresql-15* timescaledb-tools
  - patroni patroni-etcd pgbouncer pgbadger pgbackrest tail_n_mail pgloader pg_activity libuser openldap-compat annobin gcc-plugin-annobin
  - orafce_15* mysqlcompat_15 mongo_fdw_15* tds_fdw_15* mysql_fdw_15 hdfs_fdw_15 sqlite_fdw_15 pgbouncer_fdw_15 pg_dbms_job_15
  - pg_stat_kcache_15* pg_stat_monitor_15* pg_qualstats_15 pg_track_settings_15 pg_wait_sampling_15 system_stats_15 logerrors_15 pg_top_15
  - plprofiler_15* plproxy_15 plsh_15* pldebugger_15 plpgsql_check_15* pgtt_15 pgq_15* pgsql_tweaks_15 count_distinct_15 hypopg_15
  - timestamp9_15* semver_15* prefix_15* rum_15 geoip_15 periods_15 ip4r_15 tdigest_15 hll_15 pgmp_15 extra_window_functions_15 topn_15
  - pg_comparator_15 pg_ivm_15* pgsodium_15* pgfincore_15* ddlx_15 credcheck_15 postgresql_anonymizer_15* postgresql_faker_15 safeupdate_15
  - pg_fkpart_15 pg_jobmon_15 pg_partman_15 pg_permissions_15 pgaudit17_15 pgexportdoc_15 pgimportdoc_15 pg_statement_rollback_15*
  - pg_cron_15 pg_background_15 e-maj_15 pg_catcheck_15 pg_prioritize_15 pgcopydb_15 pg_filedump_15 pgcryptokey_15
repo_url_packages:                # extra packages from url
  - https://github.com/Vonng/pg_exporter/releases/download/v0.5.0/pg_exporter-0.5.0.x86_64.rpm
  - https://github.com/cybertec-postgresql/vip-manager/releases/download/v2.1.0/vip-manager_2.1.0_Linux_x86_64.rpm
  - https://github.com/dalibo/pev2/releases/download/v1.7.0/index.html
  - https://dl.min.io/server/minio/release/linux-amd64/archive/minio-20230222182345.0.0.x86_64.rpm
  - https://dl.min.io/client/mc/release/linux-amd64/archive/mcli-20230216192011.0.0.x86_64.rpm
```

repo_enabled

name: repo_enabled, type: bool, level: G/I

create a yum repo on this infra node? default value: true

If you have multiple infra nodes, you can disable yum repo on other standby nodes to reduce Internet traffic.
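For instance, a sketch of keeping the repo only on the first infra node via a host-level override (IP addresses are placeholders):

```yaml
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }                        # this node builds the local repo
    10.10.10.11: { infra_seq: 2 , repo_enabled: false }  # standby infra node without a repo
```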

repo_home

name: repo_home, type: path, level: G

repo home dir, /www by default

repo_name

name: repo_name, type: string, level: G

repo name, pigsty by default; it is not wise to change this value.

repo_endpoint

name: repo_endpoint, type: url, level: G

access point to this repo by domain or ip:port

default value: http://${admin_ip}:80

repo_remove

name: repo_remove, type: bool, level: G/A

remove existing upstream repo, default value: true

If you want to keep existing upstream repo, set this value to false.

repo_upstream

name: repo_upstream, type: upstream[], level: G

where to download upstream packages

default values:

```yaml
repo_upstream:                    # where to download upstream packages
  - { name: base ,description: 'EL 7 Base' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/os/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/' , europe: 'https://mirrors.xtom.de/centos/$releasever/os/$basearch/' }}
  - { name: updates ,description: 'EL 7 Updates' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/updates/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/' , europe: 'https://mirrors.xtom.de/centos/$releasever/updates/$basearch/' }}
  - { name: extras ,description: 'EL 7 Extras' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/extras/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/' , europe: 'https://mirrors.xtom.de/centos/$releasever/extras/$basearch/' }}
  - { name: epel ,description: 'EL 7 EPEL' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://download.fedoraproject.org/pub/epel/$releasever/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/$basearch/' , europe: 'https://mirrors.xtom.de/epel/$releasever/$basearch/' }}
  - { name: centos-sclo ,description: 'EL 7 SCLo' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/sclo/$basearch/sclo/' , china: 'https://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/sclo/' , europe: 'https://mirrors.xtom.de/centos/$releasever/sclo/$basearch/sclo/' }}
  - { name: centos-sclo-rh ,description: 'EL 7 SCLo rh' ,module: node ,releases: [7 ] ,baseurl: { default: 'http://mirror.centos.org/centos/$releasever/sclo/$basearch/rh/' , china: 'https://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/rh/' , europe: 'https://mirrors.xtom.de/centos/$releasever/sclo/$basearch/rh/' }}
  - { name: baseos ,description: 'EL 8+ BaseOS' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/BaseOS/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/BaseOS/$basearch/os/' }}
  - { name: appstream ,description: 'EL 8+ AppStream' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/AppStream/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/AppStream/$basearch/os/' }}
  - { name: extras ,description: 'EL 8+ Extras' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/extras/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/extras/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/extras/$basearch/os/' }}
  - { name: epel ,description: 'EL 8+ EPEL' ,module: node ,releases: [ 8,9] ,baseurl: { default: 'http://download.fedoraproject.org/pub/epel/$releasever/Everything/$basearch/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/Everything/$basearch/' , europe: 'https://mirrors.xtom.de/epel/$releasever/Everything/$basearch/' }}
  - { name: powertools ,description: 'EL 8 PowerTools' ,module: node ,releases: [ 8 ] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/PowerTools/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/PowerTools/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/PowerTools/$basearch/os/' }}
  - { name: crb ,description: 'EL 9 CRB' ,module: node ,releases: [ 9] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/CRB/$basearch/os/' , china: 'https://mirrors.aliyun.com/rockylinux/$releasever/CRB/$basearch/os/' , europe: 'https://mirrors.xtom.de/rocky/$releasever/CRB/$basearch/os/' }}
  - { name: grafana ,description: 'Grafana' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://packages.grafana.com/oss/rpm' , china: 'https://mirrors.tuna.tsinghua.edu.cn/grafana/yum/rpm' }}
  - { name: prometheus ,description: 'Prometheus' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://packagecloud.io/prometheus-rpm/release/el/$releasever/$basearch' }}
  - { name: nginx ,description: 'Nginx Repo' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://nginx.org/packages/centos/$releasever/$basearch/' }}
  - { name: docker-ce ,description: 'Docker CE' ,module: infra ,releases: [7,8,9] ,baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable' , china: 'https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/stable' , europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
  - { name: pgdg15 ,description: 'PostgreSQL 15' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-$releasever-$basearch' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' }}
  - { name: pgdg-common ,description: 'PostgreSQL Common' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' }}
  - { name: pgdg-extras ,description: 'PostgreSQL Extra' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' }}
  - { name: pgdg-el8fix ,description: 'PostgreSQL EL8FIX' ,module: pgsql ,releases: [ 8 ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' , china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' }}
  - { name: timescaledb ,description: 'TimescaleDB' ,module: pgsql ,releases: [7,8,9] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/el/$releasever/$basearch' }}
  - { name: citus ,description: 'Citus Community' ,module: pgsql ,releases: [7 ] ,baseurl: { default: 'https://repos.citusdata.com/community/el/$releasever/$basearch' }}
```

repo_packages

name: repo_packages, type: string[], level: G

which packages to be included

default values:

```yaml
repo_packages:                    # which packages to be included
  - grafana loki logcli promtail prometheus2 alertmanager pushgateway blackbox_exporter node_exporter redis_exporter
  - nginx nginx_exporter wget createrepo_c sshpass ansible python3 python3-pip python3-requests python3-jmespath mtail dnsmasq docker-ce docker-compose-plugin etcd
  - lz4 unzip bzip2 zlib yum dnf-utils pv jq git ncdu make patch bash lsof wget uuid tuned chrony perf flamegraph nvme-cli numactl grubby sysstat iotop htop modulemd-tools
  - netcat socat rsync ftp lrzsz s3cmd net-tools tcpdump ipvsadm bind-utils telnet audit ca-certificates openssl openssh-clients readline vim-minimal haproxy redis
  - postgresql15* postgis33_15* citus_15* pglogical_15* pg_squeeze_15* wal2json_15* pg_repack_15* timescaledb-2-postgresql-15* timescaledb-tools
  - patroni patroni-etcd pgbouncer pgbadger pgbackrest tail_n_mail pgloader pg_activity libuser openldap-compat annobin gcc-plugin-annobin
  - orafce_15* mysqlcompat_15 mongo_fdw_15* tds_fdw_15* mysql_fdw_15 hdfs_fdw_15 sqlite_fdw_15 pgbouncer_fdw_15 pg_dbms_job_15
  - pg_stat_kcache_15* pg_stat_monitor_15* pg_qualstats_15 pg_track_settings_15 pg_wait_sampling_15 system_stats_15 logerrors_15 pg_top_15
  - plprofiler_15* plproxy_15 plsh_15* pldebugger_15 plpgsql_check_15* pgtt_15 pgq_15* pgsql_tweaks_15 count_distinct_15 hypopg_15
  - timestamp9_15* semver_15* prefix_15* rum_15 geoip_15 periods_15 ip4r_15 tdigest_15 hll_15 pgmp_15 extra_window_functions_15 topn_15
  - pg_comparator_15 pg_ivm_15* pgsodium_15* pgfincore_15* ddlx_15 credcheck_15 postgresql_anonymizer_15* postgresql_faker_15 safeupdate_15
  - pg_fkpart_15 pg_jobmon_15 pg_partman_15 pg_permissions_15 pgaudit17_15 pgexportdoc_15 pgimportdoc_15 pg_statement_rollback_15*
  - pg_cron_15 pg_background_15 e-maj_15 pg_catcheck_15 pg_prioritize_15 pgcopydb_15 pg_filedump_15 pgcryptokey_15
```

Each line is a set of package names separated by spaces; the specified packages will be downloaded via repotrack.

EL 7, 8, and 9 package lists differ slightly; here are some ad hoc packages per release:

  • EL7: docker-compose citus112_15*
  • EL8: modulemd-tools python39-jmespath haproxy redis docker-compose-plugin citus_15* flamegraph
  • EL9: modulemd-tools python3-jmespath haproxy redis docker-compose-plugin citus_15* flamegraph libuser openldap-compat annobin gcc-plugin-annobin

repo_url_packages

name: repo_url_packages, type: string[], level: G

extra packages from url

default value:

```yaml
repo_url_packages:                # extra packages from url
  - https://github.com/Vonng/pg_exporter/releases/download/v0.5.0/pg_exporter-0.5.0.x86_64.rpm
  - https://github.com/cybertec-postgresql/vip-manager/releases/download/v2.1.0/vip-manager_2.1.0_Linux_x86_64.rpm
  - https://github.com/dalibo/pev2/releases/download/v1.7.0/index.html
  - https://dl.min.io/server/minio/release/linux-amd64/archive/minio-20230222182345.0.0.x86_64.rpm
  - https://dl.min.io/client/mc/release/linux-amd64/archive/mcli-20230216192011.0.0.x86_64.rpm
```

Currently, these packages are downloaded via url rather than from an upstream yum repo:

  • pg_exporter: Required, core component of the monitoring system.
  • vip-manager: Required, used to manage the optional L2 VIP.
  • pev2: Optional, PostgreSQL execution plan visualizer.
  • minio/mcli: Optional, used to set up minio clusters for the PostgreSQL backup center.

There are two missing packages in EL7: haproxy & redis:

```yaml
- https://github.com/Vonng/pigsty-pkg/releases/download/misc/redis-6.2.7-1.el7.remi.x86_64.rpm   # redis.el7
- https://github.com/Vonng/haproxy-rpm/releases/download/v2.7.2/haproxy-2.7.2-1.el7.x86_64.rpm   # haproxy.el7
```

INFRA_PACKAGE

These packages are installed on infra nodes only, including common RPM packages and pip packages.

```yaml
infra_packages:                   # packages to be installed on infra nodes
  - grafana,loki,prometheus2,alertmanager,pushgateway,blackbox_exporter,nginx_exporter,redis_exporter,pg_exporter
  - nginx,ansible,python3-requests,redis,mcli,logcli,postgresql15
infra_packages_pip: ''            # pip installed packages for infra nodes
```

infra_packages

name: infra_packages, type: string[], level: G

packages to be installed on infra nodes

default value:

```yaml
infra_packages:                   # packages to be installed on infra nodes
  - grafana,loki,prometheus2,alertmanager,pushgateway,blackbox_exporter,nginx_exporter,redis_exporter,pg_exporter
  - nginx,ansible,python3-requests,redis,mcli,logcli,postgresql15
```

infra_packages_pip

name: infra_packages_pip, type: string, level: G

pip installed packages for infra nodes, default value is empty string


NGINX

Pigsty exposes all web services through Nginx: the home page, Grafana, Prometheus, AlertManager, etc., optional tools such as PGWeb, Jupyter Lab, PgAdmin, and Bytebase, and static resources & reports such as pev, schemaspy & pgbadger.

This nginx also serves as a local yum repo.

```yaml
nginx_enabled: true               # enable nginx on this infra node?
nginx_sslmode: enable             # nginx ssl mode? disable,enable,enforce
nginx_home: /www                  # nginx content dir, `/www` by default
nginx_port: 80                    # nginx listen port, 80 by default
nginx_ssl_port: 443               # nginx ssl listen port, 443 by default
nginx_navbar:                     # nginx index page navigation links
  - { name: CA Cert ,url: '/ca.crt'   ,desc: 'pigsty self-signed ca.crt' }
  - { name: Package ,url: '/pigsty'   ,desc: 'local yum repo packages' }
  - { name: Explain ,url: '/pev.html' ,desc: 'postgres explain visualizer' }
  - { name: PG Logs ,url: '/logs'     ,desc: 'postgres raw csv logs' }
  - { name: Reports ,url: '/report'   ,desc: 'pgbadger summary report' }
```

nginx_enabled

name: nginx_enabled, type: bool, level: G/I

enable nginx on this infra node? default value: true

nginx_sslmode

name: nginx_sslmode, type: enum, level: G

nginx ssl mode? disable,enable,enforce

default value: enable

  • disable: listen on default port only
  • enable: serve both http / https requests
  • enforce: all links are rendered as https://

nginx_home

name: nginx_home, type: path, level: G

nginx content dir, /www by default

Nginx root directory which contains static resources and repo resources. It's wise to set this value the same as repo_home so that local repo content is served automatically.

nginx_port

name: nginx_port, type: port, level: G

nginx listen port, 80 by default

nginx_ssl_port

name: nginx_ssl_port, type: port, level: G

nginx ssl listen port, 443 by default

nginx_navbar

name: nginx_navbar, type: index[], level: G

nginx index page navigation links

default value:

```yaml
nginx_navbar:                     # nginx index page navigation links
  - { name: CA Cert ,url: '/ca.crt'   ,desc: 'pigsty self-signed ca.crt' }
  - { name: Package ,url: '/pigsty'   ,desc: 'local yum repo packages' }
  - { name: Explain ,url: '/pev.html' ,desc: 'postgres explain visualizer' }
  - { name: PG Logs ,url: '/logs'     ,desc: 'postgres raw csv logs' }
  - { name: Reports ,url: '/report'   ,desc: 'pgbadger summary report' }
```

Each record is rendered as a navigation link in the App drop-down menu on the Pigsty home page. The apps are all optional and are mounted by default on the Pigsty default server under http://pigsty/.

The url parameter specifies the URL PATH for the app, with the exception that if the ${grafana} string is present in the URL, it will be automatically replaced with the Grafana domain name defined in infra_portal.
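For example, a hypothetical navbar record using that ${grafana} placeholder (name, path, and description are illustrative):

```yaml
nginx_navbar:
  - { name: MyDash ,url: '${grafana}/d/my-dashboard' ,desc: 'a hypothetical grafana dashboard link' }
```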


DNS

You can set a default DNSMASQ server on infra nodes to serve DNS inquiry.

All records on infra node’s /etc/hosts.d/* will be resolved.

You have to add nameserver {{ admin_ip }} to your /etc/resolv.conf to use this dns server.

For pigsty-managed nodes, the default "${admin_ip}" in node_dns_servers will do the trick.

```yaml
dns_enabled: true                 # setup dnsmasq on this infra node?
dns_port: 53                      # dns server listen port, 53 by default
dns_records:                      # dynamic dns records resolved by dnsmasq
  - "${admin_ip} h.pigsty a.pigsty p.pigsty g.pigsty"
  - "${admin_ip} api.pigsty adm.pigsty cli.pigsty ddl.pigsty lab.pigsty git.pigsty sss.pigsty wiki.pigsty"
```

dns_enabled

name: dns_enabled, type: bool, level: G/I

setup dnsmasq on this infra node? default value: true

dns_port

name: dns_port, type: port, level: G

dns server listen port, 53 by default

dns_records

name: dns_records, type: string[], level: G

dynamic dns records resolved by dnsmasq. Some auxiliary domain names are written to /etc/hosts.d/default by default:

```yaml
dns_records:                      # dynamic dns records resolved by dnsmasq
  - "${admin_ip} h.pigsty a.pigsty p.pigsty g.pigsty"
  - "${admin_ip} api.pigsty adm.pigsty cli.pigsty ddl.pigsty lab.pigsty git.pigsty sss.pigsty wiki.pigsty"
```

PROMETHEUS

Prometheus is used as the time-series database for metrics scraping, storage & analysis.

```yaml
prometheus_enabled: true          # enable prometheus on this infra node?
prometheus_clean: true            # clean prometheus data during init?
prometheus_data: /data/prometheus # prometheus data dir, `/data/prometheus` by default
prometheus_sd_interval: 5s        # prometheus target refresh interval, 5s by default
prometheus_scrape_interval: 10s   # prometheus scrape & eval interval, 10s by default
prometheus_scrape_timeout: 8s     # prometheus global scrape timeout, 8s by default
prometheus_options: '--storage.tsdb.retention.time=15d'  # prometheus extra server options
pushgateway_enabled: true         # setup pushgateway on this infra node?
pushgateway_options: '--persistence.interval=1m'         # pushgateway extra server options
blackbox_enabled: true            # setup blackbox_exporter on this infra node?
blackbox_options: ''              # blackbox_exporter extra server options
alertmanager_enabled: true        # setup alertmanager on this infra node?
alertmanager_options: ''          # alertmanager extra server options
exporter_metrics_path: /metrics   # exporter metric path, `/metrics` by default
exporter_install: none            # how to install exporter? none,yum,binary
exporter_repo_url: ''             # exporter repo file url if install exporter via yum
```

prometheus_enabled

name: prometheus_enabled, type: bool, level: G/I

enable prometheus on this infra node?

default value: true

prometheus_clean

name: prometheus_clean, type: bool, level: G/A

clean prometheus data during init? default value: true

prometheus_data

name: prometheus_data, type: path, level: G

prometheus data dir, /data/prometheus by default

prometheus_sd_interval

name: prometheus_sd_interval, type: interval, level: G

prometheus target refresh interval, 5s by default

prometheus_scrape_interval

name: prometheus_scrape_interval, type: interval, level: G

prometheus scrape & eval interval, 10s by default

prometheus_scrape_timeout

name: prometheus_scrape_timeout, type: interval, level: G

prometheus global scrape timeout, 8s by default

DO NOT set this larger than prometheus_scrape_interval

prometheus_options

name: prometheus_options, type: arg, level: G

prometheus extra server options

default value: --storage.tsdb.retention.time=15d

Extra cli args for prometheus server, the default value will set up a 15-day data retention to limit disk usage.

pushgateway_enabled

name: pushgateway_enabled, type: bool, level: G/I

setup pushgateway on this infra node? default value: true

pushgateway_options

name: pushgateway_options, type: arg, level: G

pushgateway extra server options, default value: --persistence.interval=1m

blackbox_enabled

name: blackbox_enabled, type: bool, level: G/I

setup blackbox_exporter on this infra node? default value: true

blackbox_options

name: blackbox_options, type: arg, level: G

blackbox_exporter extra server options, default value is empty string

alertmanager_enabled

name: alertmanager_enabled, type: bool, level: G/I

setup alertmanager on this infra node? default value: true

alertmanager_options

name: alertmanager_options, type: arg, level: G

alertmanager extra server options, default value is empty string

exporter_metrics_path

name: exporter_metrics_path, type: path, level: G

exporter metric path, /metrics by default

exporter_install

name: exporter_install, type: enum, level: G

how to install exporter? none,yum,binary

default value: none

Specify how to install Exporter:

  • none: No installation; by default, the exporter has already been installed by the node.pkgs task.
  • yum: Install with yum; if enabled, yum will be run to install node_exporter and pg_exporter before deploying the exporters.
  • binary: Install by copying binaries; node_exporter and pg_exporter binaries are copied directly from the meta node (not recommended).

When installing with yum, if exporter_repo_url is specified (not empty), the repo file from that URL will first be installed into /etc/yum.repos.d. This feature allows you to install exporters without initializing the node infrastructure. Binary installation is not recommended for regular users; it is usually reserved for emergency troubleshooting and temporary fixes. In binary mode, the binaries are copied directly from the meta node:

<meta>:<pigsty>/files/node_exporter -> <target>:/usr/bin/node_exporter
<meta>:<pigsty>/files/pg_exporter   -> <target>:/usr/bin/pg_exporter

exporter_repo_url

name: exporter_repo_url, type: url, level: G

exporter repo file url if install exporter via yum

default value is empty string

Default is empty; when exporter_install is yum, the repo specified by this parameter will be added to the node source list.


GRAFANA

Grafana is the visualization platform for Pigsty’s monitoring system.

It can also be used as a low-code data visualization environment.

grafana_enabled: true              # enable grafana on this infra node?
grafana_clean: true                # clean grafana data during init?
grafana_admin_username: admin      # grafana admin username, `admin` by default
grafana_admin_password: pigsty     # grafana admin password, `pigsty` by default
grafana_plugin_cache: /www/pigsty/plugins.tgz # path to grafana plugins cache tarball
grafana_plugin_list:               # grafana plugins to be downloaded with grafana-cli
  - volkovlabs-echarts-panel
  - marcusolsson-treemap-panel
loki_enabled: true                 # enable loki on this infra node?
loki_clean: false                  # whether remove existing loki data?
loki_data: /data/loki              # loki data dir, `/data/loki` by default
loki_retention: 15d                # loki log retention period, 15d by default

grafana_enabled

name: grafana_enabled, type: bool, level: G/I

enable grafana on this infra node? default value: true

grafana_clean

name: grafana_clean, type: bool, level: G/A

clean grafana data during init? default value: true

grafana_admin_username

name: grafana_admin_username, type: username, level: G

grafana admin username, admin by default

grafana_admin_password

name: grafana_admin_password, type: password, level: G

grafana admin password, pigsty by default

default value: pigsty

!> WARNING: Change this to a strong password before deploying to production environment

grafana_plugin_cache

name: grafana_plugin_cache, type: path, level: G

path to grafana plugins cache tarball

default value: /www/pigsty/plugins.tgz

If that cache tarball exists, Pigsty will use it instead of downloading plugins from the Internet.
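As a rough sketch of how such a cache could be produced, assuming plugins live under /var/lib/grafana/plugins (the stock Grafana plugin directory; the exact procedure Pigsty uses may differ):

  # hypothetical: pack installed grafana plugins into the default cache location
  tar -zcf /www/pigsty/plugins.tgz -C /var/lib/grafana plugins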

grafana_plugin_list

name: grafana_plugin_list, type: string[], level: G

grafana plugins to be downloaded with grafana-cli

default value:

  1. ["volkovlabs-echarts-panel", "marcusolsson-treemap-panel"]

This will install the ECharts panel & Treemap panel plugins for Grafana.


LOKI

loki_enabled

name: loki_enabled, type: bool, level: G/I

enable loki on this infra node? default value: true

loki_clean

name: loki_clean, type: bool, level: G/A

whether remove existing loki data? default value: false

loki_data

name: loki_data, type: path, level: G

loki data dir, default value: /data/loki

loki_retention

name: loki_retention, type: interval, level: G

loki log retention period, 15d by default


NODE

The NODE module tunes target nodes into the desired state and brings them into the Pigsty monitoring system.


NODE_ID

Each node has identity parameters that are configured through the parameters in <cluster>.hosts and <cluster>.vars.

Pigsty uses IP as the unique identifier for database nodes. This IP must be the address the database instance listens on and serves from, but using a public IP address would be inappropriate!

This is very important. The IP is the inventory_hostname of the host in the inventory, reflected as the key in the <cluster>.hosts object.

You can use ansible_* parameters to overwrite ssh behavior, e.g. connect via domain name / alias, but the primary IPv4 is still the core identity of the node.

nodename and node_cluster are not mandatory; nodename will use the node’s current hostname by default, while node_cluster will use the fixed default value: nodes.

If node_id_from_pg is enabled, the node will borrow PGSQL identity and use it as Node’s identity, i.e. node_cluster is set to pg_cluster if applicable, and nodename is set to ${pg_cluster}-${pg_seq}. If nodename_overwrite is enabled, node’s hostname will be overwritten by nodename
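For instance, a minimal sketch (hypothetical IPs and names) of how identity borrowing plays out:

  pg-test:
    hosts:
      10.10.10.11: { pg_seq: 1, pg_role: primary }   # with node_id_from_pg: true, this node
    vars:                                            # gets nodename pg-test-1 and
      pg_cluster: pg-test                            # node_cluster pg-test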

Pigsty labels a node with identity parameters in the monitoring system, mapping nodename to ins and node_cluster to cls.

Name                 Type    Level  Necessity  Comment
inventory_hostname   ip      -      Required   Node IP
nodename             string  I      Optional   Node Name
node_cluster         string  C      Optional   Node cluster name

The following cluster config declares a three-node node cluster:

node-test:
  hosts:
    10.10.10.11: { nodename: node-test-1 }
    10.10.10.12: { nodename: node-test-2 }
    10.10.10.13: { nodename: node-test-3 }
  vars:
    node_cluster: node-test

Default values:

#nodename:                 # [INSTANCE] # node instance identity, use hostname if missing, optional
node_cluster: nodes        # [CLUSTER]  # node cluster identity, use 'nodes' if missing, optional
nodename_overwrite: true   # overwrite node's hostname with nodename?
nodename_exchange: false   # exchange nodename among play hosts?
node_id_from_pg: true      # use postgres identity as node identity if applicable?

nodename

name: nodename, type: string, level: I

node instance identity, use hostname if missing, optional

no default value. Null or an empty string means nodename will be set to the node's current hostname.

If node_id_from_pg is true, nodename will try to use ${pg_cluster}-${pg_seq} first, if PGSQL is not defined on this node, it will fall back to default HOSTNAME.

If nodename_overwrite is true, the node name will also be used as the HOSTNAME.

node_cluster

name: node_cluster, type: string, level: C

node cluster identity, use ’nodes’ if missing, optional

default values: nodes

If node_id_from_pg is true, node_cluster will try to use ${pg_cluster} first; if PGSQL is not defined on this node, it will fall back to the default value nodes.

nodename_overwrite

name: nodename_overwrite, type: bool, level: C

overwrite node’s hostname with nodename?

default value is true: a non-empty nodename will override the hostname of the current node.

No changes are made to the hostname if the nodename parameter is undefined, null, or an empty string.

nodename_exchange

name: nodename_exchange, type: bool, level: C

exchange nodename among play hosts?

default value is false

When this parameter is enabled, node names are exchanged among the group of nodes executing the node.yml playbook and written to /etc/hosts.

node_id_from_pg

name: node_id_from_pg, type: bool, level: C

use postgres identity as node identity if applicable?

default value is true

Borrow PostgreSQL cluster & instance identity if applicable.

It's useful to use the same identity for postgres & node when there's a 1:1 relationship.


NODE_DNS

Pigsty configures static DNS records and a dynamic DNS resolver for nodes.

If you already have a DNS server, set node_dns_method to none to disable dynamic DNS setup.

node_default_etc_hosts:    # static dns records in `/etc/hosts`
  - "${admin_ip} h.pigsty a.pigsty p.pigsty g.pigsty"
node_etc_hosts: []         # extra static dns records in `/etc/hosts`
node_dns_method: add       # how to handle dns servers: add,none,overwrite
node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
node_dns_options:          # dns resolv options in `/etc/resolv.conf`
  - options single-request-reopen timeout:1

node_default_etc_hosts

name: node_default_etc_hosts, type: string[], level: G

static dns records in /etc/hosts

default value:

  1. ["${admin_ip} h.pigsty a.pigsty p.pigsty g.pigsty"]

node_default_etc_hosts is an array. Each element is a DNS record with format <ip> <name>.

It is used for global static DNS records. You can use node_etc_hosts for ad hoc records for each cluster.

Make sure a DNS record like 10.10.10.10 h.pigsty a.pigsty p.pigsty g.pigsty is written to /etc/hosts, so the local yum repo can be accessed by domain name before the DNS nameserver starts.

node_etc_hosts

name: node_etc_hosts, type: string[], level: C

extra static dns records in /etc/hosts

default values: []

Same as node_default_etc_hosts, but in addition to it; designed for cluster-specific ad hoc records.
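For example, a cluster could add its own record on top of the global defaults (hypothetical address and name):

  pg-test:
    vars:
      node_etc_hosts: [ "10.10.10.3 pg-test-vip" ]   # extra record for this cluster only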

node_dns_method

name: node_dns_method, type: enum, level: C

how to handle dns servers: add,none,overwrite

default values: add

  • add: Append the records in node_dns_servers to /etc/resolv.conf and keep the existing DNS servers. (default)
  • overwrite: Overwrite /etc/resolv.conf with the record in node_dns_servers
  • none: If a DNS server is provided in the production env, the DNS server config can be skipped.

node_dns_servers

name: node_dns_servers, type: string[], level: C

dynamic nameserver in /etc/resolv.conf

default values: ["${admin_ip}"] , the default nameserver on admin node will be added to /etc/resolv.conf as the first nameserver.

node_dns_options

name: node_dns_options, type: string[], level: C

dns resolv options in /etc/resolv.conf

default value:

  1. ["options single-request-reopen timeout:1"]

NODE_PACKAGE

This section is about upstream yum repos & packages to be installed.

node_repo_method: local    # how to setup node repo: none,local,public
node_repo_remove: true     # remove existing repo on node?
node_repo_local_urls:      # local repo url, if node_repo_method = local
  - http://${admin_ip}/pigsty.repo
node_packages: [ ]         # packages to be installed current nodes
node_default_packages:     # default packages to be installed on all nodes
  - lz4,unzip,bzip2,zlib,yum,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,chrony,perf,nvme-cli,numactl,grubby,sysstat,iotop,htop,yum,yum-utils
  - wget,netcat,socat,rsync,ftp,lrzsz,s3cmd,net-tools,tcpdump,ipvsadm,bind-utils,telnet,dnsmasq,audit,ca-certificates,openssl,openssh-clients,readline,vim-minimal
  - node_exporter,etcd,mtail,python3-idna,python3-requests,haproxy

node_repo_method

name: node_repo_method, type: enum, level: C

how to setup node repo: none, local, public

default values: local

  • local: Use the local Yum repo on the meta node, the default behavior (recommended).
  • public: To install using internet sources, write the public repo in repo_upstream to /etc/yum.repos.d/. (obsolete)
  • none: No config and modification of local repos.

node_repo_remove

name: node_repo_remove, type: bool, level: C

remove existing repo on node?

default value is true, and thus Pigsty will move existing repo file in /etc/yum.repos.d to backup dir: /etc/yum.repos.d/backup before adding upstream repos

node_repo_local_urls

name: node_repo_local_urls, type: string[], level: C

local repo url, if node_repo_method = local

default values: ["http://${admin_ip}/pigsty.repo"]

When node_repo_method = local, the Repo file URLs listed here will be downloaded to /etc/yum.repos.d.

node_packages

name: node_packages, type: string[], level: C

packages to be installed on current nodes

default values: []

Like node_default_packages, but in addition to it; designed for overwriting at cluster/instance level.
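For example, to install extra packages on one cluster without touching the global defaults (package names are illustrative):

  pg-test:
    vars:
      node_packages: [ 'docker-ce,docker-compose' ]   # installed in addition to node_default_packages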

node_default_packages

name: node_default_packages, type: string[], level: G

default packages to be installed on all nodes

default value:

node_default_packages:     # default packages to be installed on all nodes
  - lz4,unzip,bzip2,zlib,yum,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,chrony,perf,nvme-cli,numactl,grubby,sysstat,iotop,htop,yum,yum-utils
  - wget,netcat,socat,rsync,ftp,lrzsz,s3cmd,net-tools,tcpdump,ipvsadm,bind-utils,telnet,dnsmasq,audit,ca-certificates,openssl,openssh-clients,readline,vim-minimal
  - node_exporter,etcd,mtail,python3-idna,python3-requests,haproxy

NODE_TUNE

Configure tuned templates, features, kernel modules, sysctl params on node.

node_disable_firewall: true        # disable node firewall? true by default
node_disable_selinux: true         # disable node selinux? true by default
node_disable_numa: false           # disable node numa, reboot required
node_disable_swap: false           # disable node swap, use with caution
node_static_network: true          # preserve dns resolver settings after reboot
node_disk_prefetch: false          # setup disk prefetch on HDD to increase performance
node_kernel_modules: [ softdog, br_netfilter, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
node_hugepage_count: 0             # number of 2MB hugepage, take precedence over ratio
node_hugepage_ratio: 0             # node mem hugepage ratio, 0 disable it by default
node_overcommit_ratio: 0           # node mem overcommit ratio, 0 disable it by default
node_tune: oltp                    # node tuned profile: none,oltp,olap,crit,tiny
node_sysctl_params: { }            # sysctl parameters in k:v format in addition to tuned

node_disable_firewall

name: node_disable_firewall, type: bool, level: C

disable node firewall? true by default

default value is true

node_disable_selinux

name: node_disable_selinux, type: bool, level: C

disable node selinux? true by default

default value is true

node_disable_numa

name: node_disable_numa, type: bool, level: C

disable node numa, reboot required

default value is false

Boolean flag, off by default. Note that disabling NUMA requires a machine reboot before it takes effect!

If you don’t know how to set the CPU affinity, it is recommended to turn off NUMA.

node_disable_swap

name: node_disable_swap, type: bool, level: C

disable node swap, use with caution

default value is false

Turning off SWAP is not recommended in general, but SWAP should be disabled when your node is used for a Kubernetes deployment.

If there is enough memory and the database is deployed exclusively, disabling swap may slightly improve performance.

node_static_network

name: node_static_network, type: bool, level: C

preserve dns resolver settings after reboot, default value is true

Enabling static networking means that machine reboots will not overwrite your DNS Resolv config with NIC changes. It is recommended to enable it in production environment.

node_disk_prefetch

name: node_disk_prefetch, type: bool, level: C

setup disk prefetch on HDD to increase performance

default value is false. Consider enabling this when using HDDs.

node_kernel_modules

name: node_kernel_modules, type: string[], level: C

kernel modules to be enabled on this node

default value:

node_kernel_modules: [ softdog, br_netfilter, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]

An array consisting of kernel module names declaring the kernel modules that need to be installed on the node.

node_hugepage_count

name: node_hugepage_count, type: int, level: C

number of 2MB hugepage, take precedence over ratio, 0 by default

Take precedence over node_hugepage_ratio. If a non-zero value is given, it will be written to /etc/sysctl.d/hugepage.conf

If node_hugepage_count and node_hugepage_ratio are both 0 (the default), hugepages are disabled entirely.

Negative values will not work, and a count amounting to more than 90% of node memory will be capped at 90% of node memory.

If not zero, it should be slightly larger than the corresponding pg_shared_buffer_ratio.
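As a sketch: with node_hugepage_count: 3000 (a hypothetical value, i.e. 3000 × 2MB ≈ 6GB), the rendered /etc/sysctl.d/hugepage.conf would contain something like:

  # /etc/sysctl.d/hugepage.conf (illustrative)
  vm.nr_hugepages=3000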

node_hugepage_ratio

name: node_hugepage_ratio, type: float, level: C

node mem hugepage ratio, 0 disable it by default, valid range: 0 ~ 0.40

default values: 0, which will set vm.nr_hugepages=0 and not use HugePage at all.

This fraction of node memory will be allocated as hugepages and reserved for PostgreSQL.

It should be equal or slightly larger than pg_shared_buffer_ratio, if not zero.

For example, if you have the default 25% of memory for postgres shared buffers, you can set this value to 0.27 ~ 0.30. Wasted hugepages can be reclaimed later with /pg/bin/pg-tune-hugepage.

node_overcommit_ratio

name: node_overcommit_ratio, type: int, level: C

node mem overcommit ratio, 0 disable it by default. this is an integer from 0 to 100+ .

default values: 0, which will set vm.overcommit_memory=0, otherwise vm.overcommit_memory=2 will be used, and this value will be used as vm.overcommit_ratio.

It is recommended to set vm.overcommit_ratio on dedicated pgsql nodes, e.g. to 50 ~ 100.

node_tune

name: node_tune, type: enum, level: C

node tuned profile: none,oltp,olap,crit,tiny

default values: oltp

  • tiny: Micro Virtual Machine (1 ~ 3 Core, 1 ~ 8 GB Mem)
  • oltp: Regular OLTP templates with optimized latency
  • olap : Regular OLAP templates to optimize throughput
  • crit: Core financial business templates, optimizing the number of dirty pages

Usually, the database tuning template pg_conf should be paired with the node tuning template: node_tune

node_sysctl_params

name: node_sysctl_params, type: dict, level: C

sysctl parameters in k:v format in addition to tuned

default values: {}

A dictionary in K:V structure; the key is the kernel sysctl parameter name and the value is the parameter value.

You can also define sysctl parameters with a tuned profile.
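For example, to tweak a couple of kernel parameters on one cluster (parameter choices are illustrative, not recommendations):

  node_sysctl_params:                  # k: sysctl parameter name, v: value
    net.ipv4.tcp_keepalive_time: 60
    vm.swappiness: 1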


NODE_ADMIN

This section is about admin users and their credentials.

node_data: /data                   # node main data directory, `/data` by default
node_admin_enabled: true           # create a admin user on target node?
node_admin_uid: 88                 # uid and gid for node admin user
node_admin_username: dba           # name of node admin user, `dba` by default
node_admin_ssh_exchange: true      # exchange admin ssh key among node cluster
node_admin_pk_current: true        # add current user's ssh pk to admin authorized_keys
node_admin_pk_list: []             # ssh public keys to be added to admin user

node_data

name: node_data, type: path, level: C

node main data directory, /data by default

default values: /data

If specified, this path will be used as the main data disk mountpoint; the directory will be created if it does not exist, with a warning thrown.

The data dir is owned by root with mode 0777.

node_admin_enabled

name: node_admin_enabled, type: bool, level: C

create an admin user on target node?

default value is true

An admin user with password-free sudo and ssh will be created on each node: named dba (uid=88) by default, it can access other nodes in the environment and perform password-free sudo from the meta node via SSH.

node_admin_uid

name: node_admin_uid, type: int, level: C

uid and gid for node admin user

default values: 88

node_admin_username

name: node_admin_username, type: username, level: C

name of node admin user, dba by default

default values: dba

node_admin_ssh_exchange

name: node_admin_ssh_exchange, type: bool, level: C

exchange admin ssh key among node cluster

default value is true

When enabled, Pigsty will exchange SSH public keys between members during playbook execution, allowing the admin user node_admin_username to access peer nodes from any node in the cluster.

node_admin_pk_current

name: node_admin_pk_current, type: bool, level: C

add current user’s ssh pk to admin authorized_keys

default value is true

When enabled, on the current node, the SSH public key (~/.ssh/id_rsa.pub) of the current user is copied to the authorized_keys of the target node admin user.

When deploying in a production env, be sure to pay attention to this parameter, which installs the default public key of the user currently executing the command to the admin user of all machines.

node_admin_pk_list

name: node_admin_pk_list, type: string[], level: C

ssh public keys to be added to admin user

default values: []

Each element of the array is a string containing the key written to the admin user ~/.ssh/authorized_keys, and the user with the corresponding private key can log in as an admin user.

When deploying in production envs, be sure to note this parameter and add only trusted keys to this list.


NODE_TIME

node_timezone: ''                  # setup node timezone, empty string to skip
node_ntp_enabled: true             # enable chronyd time sync service?
node_ntp_servers:                  # ntp servers in `/etc/chrony.conf`
  - pool pool.ntp.org iburst
node_crontab_overwrite: true       # overwrite or append to `/etc/crontab`?
node_crontab: [ ]                  # crontab entries in `/etc/crontab`

node_timezone

name: node_timezone, type: string, level: C

setup node timezone, empty string to skip

default value is empty string, which will not change the default timezone (usually UTC)

node_ntp_enabled

name: node_ntp_enabled, type: bool, level: C

enable chronyd time sync service?

default value is true, and thus Pigsty will override the node's /etc/chrony.conf with node_ntp_servers.

If you already have an NTP server configured, just set this to false to leave it as is.

node_ntp_servers

name: node_ntp_servers, type: string[], level: C

ntp servers in /etc/chrony.conf

default value: ["pool pool.ntp.org iburst"]

It only takes effect if node_ntp_enabled is true.

You can use ${admin_ip} to sync time with ntp server on admin node rather than public ntp server.

node_ntp_servers: [ 'pool ${admin_ip} iburst' ]

node_crontab_overwrite

name: node_crontab_overwrite, type: bool, level: C

overwrite or append to /etc/crontab?

default value is true, and Pigsty will render node_crontab records in overwrite mode rather than appending to the existing crontab.

node_crontab

name: node_crontab, type: string[], level: C

crontab entries in /etc/crontab

default values: []
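For example, a cluster could schedule a nightly full backup for its postgres superuser (an illustrative entry; the pg-backup script path is an assumption):

  node_crontab:                      # rendered into /etc/crontab
    - '00 01 * * * postgres /pg/bin/pg-backup full'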


HAPROXY

HAProxy is installed on every node by default, exposing services in a NodePort manner.

It is used by PGSQL Service.

haproxy_enabled: true              # enable haproxy on this node?
haproxy_clean: false               # cleanup all existing haproxy config?
haproxy_reload: true               # reload haproxy after config?
haproxy_auth_enabled: true         # enable authentication for haproxy admin page
haproxy_admin_username: admin      # haproxy admin username, `admin` by default
haproxy_admin_password: pigsty     # haproxy admin password, `pigsty` by default
haproxy_exporter_port: 9101        # haproxy admin/exporter port, 9101 by default
haproxy_client_timeout: 24h        # client side connection timeout, 24h by default
haproxy_server_timeout: 24h        # server side connection timeout, 24h by default
haproxy_services: []               # list of haproxy service to be exposed on node

haproxy_enabled

name: haproxy_enabled, type: bool, level: C

enable haproxy on this node?

default value is true

haproxy_clean

name: haproxy_clean, type: bool, level: G/C/A

cleanup all existing haproxy config?

default value is false

haproxy_reload

name: haproxy_reload, type: bool, level: A

reload haproxy after config?

default value is true, it will reload haproxy after config change.

If you wish to check the config before applying it, you can turn this off with CLI args and inspect the rendered config first.
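A sketch of that workflow, assuming the haproxy tasks in node.yml are tagged haproxy (check the playbook for the exact tags):

  ./node.yml -l pg-test -t haproxy -e haproxy_reload=false   # render config without reloading
  # inspect /etc/haproxy/*.cfg on the node, then reload manually or rerun with reload enabled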

haproxy_auth_enabled

name: haproxy_auth_enabled, type: bool, level: G

enable authentication for haproxy admin page

default value is true, which will require a http basic auth for admin page.

Disabling it is not recommended, since your traffic control page would be publicly exposed.

haproxy_admin_username

name: haproxy_admin_username, type: username, level: G

haproxy admin username, admin by default

default values: admin

haproxy_admin_password

name: haproxy_admin_password, type: password, level: G

haproxy admin password, pigsty by default

default values: pigsty

haproxy_exporter_port

name: haproxy_exporter_port, type: port, level: C

haproxy admin/exporter port, 9101 by default

default values: 9101

haproxy_client_timeout

name: haproxy_client_timeout, type: interval, level: C

client side connection timeout, 24h by default

default values: 24h

haproxy_server_timeout

name: haproxy_server_timeout, type: interval, level: C

server side connection timeout, 24h by default

default values: 24h

haproxy_services

name: haproxy_services, type: service[], level: C

list of haproxy service to be exposed on node

default values: [], each element is a service definition, here is an ad hoc haproxy service example:

haproxy_services:                  # list of haproxy service
  # expose pg-test read only replicas
  - name: pg-test-ro               # [REQUIRED] service name, unique
    port: 5440                     # [REQUIRED] service port, unique
    ip: "*"                        # [OPTIONAL] service listen addr, "*" by default
    protocol: tcp                  # [OPTIONAL] service protocol, 'tcp' by default
    balance: leastconn             # [OPTIONAL] load balance algorithm, roundrobin by default (or leastconn)
    maxconn: 20000                 # [OPTIONAL] max allowed front-end connection, 20000 by default
    default: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
    options:
      - option httpchk
      - option http-keep-alive
      - http-check send meth OPTIONS uri /read-only
      - http-check expect status 200
    servers:
      - { name: pg-test-1 ,ip: 10.10.10.11 , port: 5432 , options: check port 8008 , backup: true }
      - { name: pg-test-2 ,ip: 10.10.10.12 , port: 5432 , options: check port 8008 }
      - { name: pg-test-3 ,ip: 10.10.10.13 , port: 5432 , options: check port 8008 }

It will be rendered to /etc/haproxy/<service.name>.cfg and take effect after reload.


NODE_EXPORTER

node_exporter_enabled: true        # setup node_exporter on this node?
node_exporter_port: 9100           # node exporter listen port, 9100 by default
node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.ntp --collector.tcpstat --collector.processes'

node_exporter_enabled

name: node_exporter_enabled, type: bool, level: C

setup node_exporter on this node?

default value is true

node_exporter_port

name: node_exporter_port, type: port, level: C

node exporter listen port, 9100 by default

default values: 9100

node_exporter_options

name: node_exporter_options, type: arg, level: C

extra server options for node_exporter

default value: --no-collector.softnet --no-collector.nvme --collector.ntp --collector.tcpstat --collector.processes

Pigsty enables three extra metrics collectors by default: ntp, tcpstat, and processes, and disables the softnet and nvme collectors by default.


PROMTAIL

Promtail will collect logs from other modules, and send them to LOKI

  • INFRA: Infra logs, collected only on meta nodes.

    • nginx-access: /var/log/nginx/access.log
    • nginx-error: /var/log/nginx/error.log
    • grafana: /var/log/grafana/grafana.log
  • NODES: Host node logs, collected on all nodes.

    • syslog: /var/log/messages
    • dmesg: /var/log/dmesg
    • cron: /var/log/cron
  • PGSQL: PostgreSQL logs, collected when a node is defined with pg_cluster.

    • postgres: /pg/log/postgres/*.csv
    • patroni: /pg/log/patroni.log
    • pgbouncer: /pg/log/pgbouncer/pgbouncer.log
    • pgbackrest: /pg/log/pgbackrest/*.log
  • REDIS: Redis logs, collected when a node is defined with redis_cluster.

    • redis: /var/log/redis/*.log

!> Log directories are customizable according to pg_log_dir, patroni_log_dir, pgbouncer_log_dir, pgbackrest_log_dir

promtail_enabled: true             # enable promtail logging collector?
promtail_clean: false              # purge existing promtail status file during init?
promtail_port: 9080                # promtail listen port, 9080 by default
promtail_positions: /var/log/positions.yaml # promtail position status file path

promtail_enabled

name: promtail_enabled, type: bool, level: C

enable promtail logging collector?

default value is true

promtail_clean

name: promtail_clean, type: bool, level: G/A

purge existing promtail status file during init?

default value is false. If you choose to clean, Pigsty will remove the existing state file defined by promtail_positions, which means Promtail will recollect all logs on the current node and send them to Loki again.

promtail_port

name: promtail_port, type: port, level: C

promtail listen port, 9080 by default

default values: 9080

promtail_positions

name: promtail_positions, type: path, level: C

promtail position status file path

default values: /var/log/positions.yaml

Promtail records the consumption offsets of all logs, which are periodically written to the file specified by promtail_positions.


DOCKER

You can install docker on nodes with docker.yml

docker_enabled: false              # enable docker on this node?
docker_cgroups_driver: systemd     # docker cgroup fs driver: cgroupfs,systemd
docker_registry_mirrors: []        # docker registry mirror list
docker_image_cache: /tmp/docker    # docker image cache dir, `/tmp/docker` by default

docker_enabled

name: docker_enabled, type: bool, level: C

enable docker on this node? default value is false

docker_cgroups_driver

name: docker_cgroups_driver, type: enum, level: C

docker cgroup fs driver, could be cgroupfs or systemd, default values: systemd

docker_registry_mirrors

name: docker_registry_mirrors, type: string[], level: C

docker registry mirror list, default values: [], Example:

  1. [ "https://mirror.ccs.tencentyun.com" ] # tencent cloud mirror, intranet only
  2. ["https://registry.cn-hangzhou.aliyuncs.com"] # aliyun cloud mirror, login required

docker_image_cache

name: docker_image_cache, type: path, level: C

docker image cache dir, /tmp/docker by default.

The local docker image cache with .tgz suffix under this directory will be loaded into docker one by one:

cat {{ docker_image_cache }}/*.tgz | gzip -d -c - | docker load

ETCD

etcd is a distributed, reliable key-value store for the most critical data of a distributed system. Pigsty uses etcd as the DCS (distributed configuration store), which is critical to PostgreSQL high availability.

Pigsty has a hard-coded group name etcd for the etcd cluster; it can be an existing external etcd cluster, or a new etcd cluster created by Pigsty with etcd.yml.

#etcd_seq: 1                       # etcd instance identifier, explicitly required
#etcd_cluster: etcd                # etcd cluster & group name, etcd by default
etcd_safeguard: false              # prevent purging running etcd instance?
etcd_clean: true                   # purging existing etcd during initialization?
etcd_data: /data/etcd              # etcd data directory, /data/etcd by default
etcd_port: 2379                    # etcd client port, 2379 by default
etcd_peer_port: 2380               # etcd peer port, 2380 by default
etcd_init: new                     # etcd initial cluster state, new or existing
etcd_election_timeout: 1000        # etcd election timeout, 1000ms by default
etcd_heartbeat_interval: 100       # etcd heartbeat interval, 100ms by default

etcd_seq

name: etcd_seq, type: int, level: I

etcd instance identifier, REQUIRED

no default value, you have to specify it explicitly. Here is a 3-node etcd cluster example:

etcd: # dcs service for postgres/patroni ha consensus
  hosts: # 1 node for testing, 3 or 5 for production
    10.10.10.10: { etcd_seq: 1 }   # etcd_seq required
    10.10.10.11: { etcd_seq: 2 }   # assign from 1 ~ n
    10.10.10.12: { etcd_seq: 3 }   # odd number please
  vars: # cluster level parameter override roles/etcd
    etcd_cluster: etcd             # mark etcd cluster name etcd
    etcd_safeguard: false          # safeguard against purging
    etcd_clean: true               # purge etcd during init process

etcd_cluster

name: etcd_cluster, type: string, level: C

etcd cluster & group name, etcd by default

default values: etcd, which is a fixed group name; it can be useful when you deploy extra etcd clusters.

etcd_safeguard

name: etcd_safeguard, type: bool, level: G/C/A

prevent purging running etcd instance? default value is false

If enabled, running etcd instance will not be purged by etcd.yml playbook.

etcd_clean

name: etcd_clean, type: bool, level: G/C/A

purging existing etcd during initialization? default value is true

If enabled, running etcd instance will be purged by etcd.yml playbook, which makes etcd.yml a truly idempotent playbook.

But if etcd_safeguard is enabled, it will still abort on any running etcd instance.

etcd_data

name: etcd_data, type: path, level: C

etcd data directory, /data/etcd by default

etcd_port

name: etcd_port, type: port, level: C

etcd client port, 2379 by default

etcd_peer_port

name: etcd_peer_port, type: port, level: C

etcd peer port, 2380 by default

etcd_init

name: etcd_init, type: enum, level: C

etcd initial cluster state, new or existing

default values: new, which will create a standalone new etcd cluster.

The value existing is used when trying to add new member to existing etcd cluster.

etcd_election_timeout

name: etcd_election_timeout, type: int, level: C

etcd election timeout, 1000 (ms) by default

etcd_heartbeat_interval

name: etcd_heartbeat_interval, type: int, level: C

etcd heartbeat interval, 100 (ms) by default


MINIO

MinIO is an S3-compatible object storage service, used as an optional central backup storage repo for PostgreSQL.

You can also use it for other purposes, such as storing large files, documents, pictures & videos.

#minio_seq: 1                      # minio instance identifier, REQUIRED
minio_cluster: minio               # minio cluster name, minio by default
minio_clean: false                 # cleanup minio during init?, false by default
minio_user: minio                  # minio os user, `minio` by default
minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
minio_data: '/data/minio'          # minio data dir(s), use {x...y} to specify multi drivers
minio_domain: sss.pigsty           # minio external domain name, `sss.pigsty` by default
minio_port: 9000                   # minio service port, 9000 by default
minio_admin_port: 9001             # minio console port, 9001 by default
minio_access_key: minioadmin       # root access key, `minioadmin` by default
minio_secret_key: minioadmin       # root secret key, `minioadmin` by default
minio_extra_vars: ''               # extra environment variables
minio_alias: sss                   # alias name for local minio deployment
minio_buckets: [ { name: pgsql }, { name: infra }, { name: redis } ]
minio_users:
  - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
  - { access_key: pgbackrest , secret_key: S3User.Backup, policy: readwrite }

minio_seq

name: minio_seq, type: int, level: I

minio instance identifier, REQUIRED identity parameter. No default value, you have to assign it manually.

minio_cluster

name: minio_cluster, type: string, level: C

minio cluster name, minio by default. This is useful when deploying multiple MinIO clusters

minio_clean

name: minio_clean, type: bool, level: G/C/A

cleanup minio during init?, false by default

minio_user

name: minio_user, type: username, level: C

minio os user name, minio by default

minio_node

name: minio_node, type: string, level: C

minio node name pattern, this is used for multi-node deployment

default values: ${minio_cluster}-${minio_seq}.pigsty

minio_data

name: minio_data, type: path, level: C

minio data dir(s)

default values: /data/minio, which is a common dir for single-node deployment.

For a multi-drive deployment, you can use the {x...y} notation to specify multiple drives.
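For example, a hypothetical four-drive layout would look like this (the {x...y} expansion is standard MinIO notation):

  minio_data: '/data{1...4}'   # four drives: /data1, /data2, /data3, /data4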

minio_domain

name: minio_domain, type: string, level: G

minio service domain name, sss.pigsty by default.

The client can access minio S3 service via this domain name. This name will be registered to local DNSMASQ and included in SSL certs.

minio_port

name: minio_port, type: port, level: C

minio service port, 9000 by default

minio_admin_port

name: minio_admin_port, type: port, level: C

minio console port, 9001 by default

minio_access_key

name: minio_access_key, type: username, level: C

root access key, minioadmin by default

!> PLEASE CHANGE THIS IN YOUR DEPLOYMENT

minio_secret_key

name: minio_secret_key, type: password, level: C

root secret key, minioadmin by default

default values: minioadmin

!> PLEASE CHANGE THIS IN YOUR DEPLOYMENT

minio_extra_vars

name: minio_extra_vars, type: string, level: C

extra environment variables for minio server. Check Minio Server for the complete list.

default value is an empty string; you can use a multiline string to pass multiple environment variables.
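For example, using a YAML block scalar to pass several variables (the variable names here are illustrative; consult the MinIO server docs for the authoritative list):

  minio_extra_vars: |
    MINIO_BROWSER=off
    MINIO_REGION_NAME=my-region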

minio_alias

name: minio_alias, type: string, level: G

MinIO alias name for the local MinIO cluster

default values: sss, which will be written to infra nodes’ / admin users’ client alias profile.

minio_buckets

name: minio_buckets, type: bucket[], level: C

list of minio bucket to be created by default:

minio_buckets: [ { name: pgsql }, { name: infra }, { name: redis } ]

Three default buckets are created for the PGSQL, INFRA, and REDIS modules.

minio_users

name: minio_users, type: user[], level: C

list of minio user to be created, default value:

minio_users:
  - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
  - { access_key: pgbackrest , secret_key: S3User.Backup, policy: readwrite }

Two default users are created for PostgreSQL DBA and pgBackREST.

!> PLEASE ADJUST THESE USERS & CREDENTIALS IN YOUR DEPLOYMENT!


REDIS

#redis_cluster: <CLUSTER>          # redis cluster name, required identity parameter
#redis_node: 1       <NODE>        # redis node sequence number, node int id required
#redis_instances: {} <NODE>        # redis instances definition on this redis node
redis_fs_main: /data               # redis main data mountpoint, `/data` by default
redis_exporter_enabled: true       # install redis exporter on redis nodes?
redis_exporter_port: 9121          # redis exporter listen port, 9121 by default
redis_exporter_options: ''         # cli args and extra options for redis exporter
redis_safeguard: false             # prevent purging running redis instance?
redis_clean: true                  # purging existing redis during init?
redis_rmdata: true                 # remove redis data when purging redis server?
redis_mode: standalone             # redis mode: standalone,cluster,sentinel
redis_conf: redis.conf             # redis config template path, except sentinel
redis_bind_address: '0.0.0.0'      # redis bind address, empty string will use host ip
redis_max_memory: 1GB              # max memory used by each redis instance
redis_mem_policy: allkeys-lru      # redis memory eviction policy
redis_password: ''                 # redis password, empty string will disable password
redis_rdb_save: ['1200 1']         # redis rdb save directives, disable with empty list
redis_aof_enabled: false           # enable redis append only file?
redis_rename_commands: {}          # rename redis dangerous commands
redis_cluster_replicas: 1          # replica number for one master in redis cluster

redis_instances

name: redis_instances, type: dict, level: I

redis instances definition on this redis node

no default value, you have to define redis instances on each redis node using this parameter explicitly.

Here is an example of a native redis cluster definition:

redis-test: # redis native cluster: 3m x 3s
  hosts:
    10.10.10.12: { redis_node: 1 ,redis_instances: { 6501: { } ,6502: { } ,6503: { } } }
    10.10.10.13: { redis_node: 2 ,redis_instances: { 6501: { } ,6502: { } ,6503: { } } }
  vars: { redis_cluster: redis-test ,redis_mode: cluster, redis_max_memory: 32MB }

redis_node

name: redis_node, type: int, level: I

redis node sequence number; a unique integer within the redis cluster is required

You have to explicitly define the node id for each redis node.

redis_cluster

name: redis_cluster, type: string, level: C

redis cluster name, required identity parameter

no default value, you have to define it explicitly.

redis_fs_main

name: redis_fs_main, type: path, level: C

redis main data mountpoint, /data by default

default values: /data, and /data/redis will be used as the redis data directory.

redis_exporter_enabled

name: redis_exporter_enabled, type: bool, level: C

install redis exporter on redis nodes?

default value is true, which will launch a redis_exporter on this redis_node

redis_exporter_port

name: redis_exporter_port, type: port, level: C

redis exporter listen port, 9121 by default

default values: 9121

redis_exporter_options

name: redis_exporter_options, type: string, level: C/I

cli args and extra options for redis exporter

default value is empty string

redis_safeguard

name: redis_safeguard, type: bool, level: C

prevent purging running redis instance?

default value is false; if set to true while a redis instance is running, the init / remove playbooks will abort immediately.

redis_clean

name: redis_clean, type: bool, level: C

purging existing redis during init?

default value is true, which will remove redis server during redis init or remove.

redis_rmdata

name: redis_rmdata, type: bool, level: A

remove redis data when purging redis server?

default value is true, which will remove redis rdb / aof along with redis instance.

redis_mode

name: redis_mode, type: enum, level: C

redis mode: standalone,cluster,sentinel

default values: standalone

  • standalone: setup redis as standalone (master-slave) mode
  • cluster: setup this redis cluster as a redis native cluster
  • sentinel: setup redis as sentinel for standalone redis HA

redis_conf

name: redis_conf, type: string, level: C

redis config template path, except sentinel

default values: redis.conf, which is a template file in roles/redis/templates/redis.conf.

If you want to use your own redis config template, you can put it in templates/ directory and set this parameter to the template file name.

Note that redis sentinel uses a different template file: roles/redis/templates/redis-sentinel.conf

redis_bind_address

name: redis_bind_address, type: ip, level: C

redis bind address, empty string will use inventory hostname

default values: 0.0.0.0, which will bind to all available IPv4 address on this host

!> PLEASE bind to intranet IP only in production environment, i.e. set this value to ''

redis_max_memory

name: redis_max_memory, type: size, level: C/I

max memory used by each redis instance, default values: 1GB

redis_mem_policy

name: redis_mem_policy, type: enum, level: C

redis memory eviction policy

default values: allkeys-lru, check redis eviction policy for more details

  • noeviction: New values aren’t saved when memory limit is reached. When a database uses replication, this applies to the primary database
  • allkeys-lru: Keeps most recently used keys; removes least recently used (LRU) keys
  • allkeys-lfu: Keeps frequently used keys; removes least frequently used (LFU) keys
  • volatile-lru: Removes least recently used keys with the expire field set to true.
  • volatile-lfu: Removes least frequently used keys with the expire field set to true.
  • allkeys-random: Randomly removes keys to make space for the new data added.
  • volatile-random: Randomly removes keys with expire field set to true.
  • volatile-ttl: Removes keys with expire field set to true and the shortest remaining time-to-live (TTL) value.

redis_password

name: redis_password, type: password, level: C/N

redis password, empty string will disable password, which is the default behavior

Note that due to the implementation limitation of redis_exporter, you can only set one redis_password per node. This is usually not a problem, because pigsty does not allow deploying two different redis cluster on the same node.

!> PLEASE use a strong password in production environment

redis_rdb_save

name: redis_rdb_save, type: string[], level: C

redis rdb save directives, disable with empty list, check redis persist for details.

the default value is ["1200 1"]: dump the dataset to disk every 20 minutes if at least 1 key changed:
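  redis_rdb_save: ['1200 1']   # dump to disk every 20 minutes if at least 1 key changed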

redis_aof_enabled

name: redis_aof_enabled, type: bool, level: C

enable redis append only file? default value is false.

redis_rename_commands

name: redis_rename_commands, type: dict, level: C

rename redis dangerous commands, which is a dict of old: new command names

default values: {}, you can hide dangerous commands like FLUSHDB and FLUSHALL by setting this value, here’s an example:

{
  "keys": "op_keys",
  "flushdb": "op_flushdb",
  "flushall": "op_flushall",
  "config": "op_config"
}

redis_cluster_replicas

name: redis_cluster_replicas, type: int, level: C

replica number for one master/primary in redis cluster, default values: 1


PGSQL

PGSQL module requires NODE module to be installed, and you also need a viable ETCD cluster to store cluster meta data.

Installing the PGSQL module on a single node creates a primary instance: a standalone PGSQL server. Installing it on additional nodes creates replicas, which can serve read-only traffic or act as standby backups. You can also create offline instances for ETL/OLAP/interactive queries, use sync standby and quorum commit to increase data consistency, or even form standby clusters and delayed standby clusters for disaster recovery.

You can define multiple PGSQL clusters to form a horizontal sharding cluster: a group of PGSQL clusters running on different nodes. Pigsty has native citus cluster group support, which can extend your PGSQL clusters into a distributed, sharded database.


PG_ID

Here are some common parameters used to identify PGSQL entities: instance, service, etc…

# pg_cluster:           #CLUSTER  # pgsql cluster name, required identity parameter
# pg_seq: 0             #INSTANCE # pgsql instance seq number, required identity parameter
# pg_role: replica      #INSTANCE # pgsql role, required, could be primary,replica,offline
# pg_instances: {}      #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
# pg_upstream:          #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
# pg_shard:             #CLUSTER  # pgsql shard name, optional identity for sharding clusters
# pg_group: 0           #CLUSTER  # pgsql shard index number, optional identity for sharding clusters
# gp_role: master       #CLUSTER  # greenplum role of this cluster, could be master or segment
pg_offline_query: false #INSTANCE # set to true to enable offline query on this instance

You have to assign these identity parameters explicitly, there’s no default value for them.

Name        Type    Level  Description
pg_cluster  string  C      PG database cluster name
pg_seq      number  I      PG database instance id
pg_role     enum    I      PG database instance role
pg_shard    string  C      PG database shard name of cluster
pg_group    number  C      PG database shard index of cluster
  • pg_cluster: It identifies the name of the cluster, which is configured at the cluster level.
  • pg_role: Configured at the instance level, identifies the role of the instance. Only the primary role is handled specially; if not specified, the default is replica. Special roles include delayed and offline.
  • pg_seq: Used to identify the ins within the cluster, usually with an integer number incremented from 0 or 1, which is not changed once it is assigned.
  • {{ pg_cluster }}-{{ pg_seq }} is used to uniquely identify the ins, i.e. pg_instance.
  • {{ pg_cluster }}-{{ pg_role }} is used to identify the services within the cluster, i.e. pg_service.
  • pg_shard and pg_group are used for horizontally sharding clusters, for citus, greenplum, and matrixdb only.

pg_cluster, pg_role, pg_seq are core identity params, which are required for any Postgres cluster, and must be explicitly specified. Here’s an example:

pg-test:
  hosts:
    10.10.10.11: {pg_seq: 1, pg_role: replica}
    10.10.10.12: {pg_seq: 2, pg_role: primary}
    10.10.10.13: {pg_seq: 3, pg_role: replica}
  vars:
    pg_cluster: pg-test

All other params can be inherited from the global config or the default config, but the identity params must be explicitly specified and manually assigned. The current PGSQL identity params are as follows:

pg_mode

name: pg_mode, type: enum, level: C

pgsql cluster mode, could be pgsql, citus, or gpsql; pgsql by default.

If pg_mode is set to citus or gpsql, pg_shard and pg_group will be required for horizontal sharding clusters.

pg_cluster

name: pg_cluster, type: string, level: C

pgsql cluster name, REQUIRED identity parameter

The cluster name will be used as the namespace for PGSQL related resources within that cluster.

The name must follow the pattern [a-z][a-z0-9-]* to be compatible with the requirements of different constraints on identities.

pg_seq

name: pg_seq, type: int, level: I

pgsql instance seq number, REQUIRED identity parameter

A serial number of this instance, unique within its cluster, starting from 0 or 1.

pg_role

name: pg_role, type: enum, level: I

pgsql role, REQUIRED, could be primary,replica,offline

Roles for PGSQL instance, can be: primary, replica, standby or offline.

  • primary: Primary, there is one and only one primary in a cluster.
  • replica: Replica for carrying online read-only traffic; there may be a slight replication lag (10ms~100ms, 100KB).
  • standby: Special replica that is always synced with primary, there’s no replication delay & data loss on this replica. (currently same as replica)
  • offline: Offline replica for taking on offline read-only traffic, such as statistical analysis/ETL/personal queries, etc.

pg_role is an identity parameter: required, and configured at the instance level.

pg_instances

name: pg_instances, type: dict, level: I

define multiple pg instances on node in {port:ins_vars} format.

This parameter is reserved for multi-instance deployment on a single node which is not implemented in Pigsty yet.

pg_upstream

name: pg_upstream, type: ip, level: I

Upstream ip address for standby cluster or cascade replica

Setting pg_upstream on a primary instance indicates that this cluster is a Standby Cluster; the primary will receive changes from the upstream instance, making it effectively a standby leader.

Setting pg_upstream for a non-primary instance explicitly sets a replication upstream; if it differs from the primary's IP addr, the instance becomes a cascade replica. It is the user's responsibility to ensure that the upstream IP addr is another instance in the same cluster.
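A minimal sketch of a standby cluster (hypothetical IPs; pg-test2 replicates from the pg-test primary at 10.10.10.12):

  pg-test2:   # standby cluster of pg-test
    hosts:
      10.10.10.21: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.12 }  # standby leader
      10.10.10.22: { pg_seq: 2, pg_role: replica }                            # cluster-local replica
    vars: { pg_cluster: pg-test2 }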

pg_shard

name: pg_shard, type: string, level: C

pgsql shard name, required identity parameter for sharding clusters (e.g. citus cluster), optional for common pgsql clusters.

When multiple pgsql clusters serve the same business together in a horizontally sharding style, Pigsty will mark this group of clusters as a Sharding Group.

pg_shard is the name of the shard group name. It’s usually the prefix of pg_cluster.

For example, if we have a sharding group pg-citus with 4 clusters in it, their identity params will be:

cls pg_shard: pg-citus
cls pg_group = 0 : pg-citus0
cls pg_group = 1 : pg-citus1
cls pg_group = 2 : pg-citus2
cls pg_group = 3 : pg-citus3

pg_group

name: pg_group, type: int, level: C

pgsql shard index number, required identity for sharding clusters, optional for common pgsql clusters.

Sharding cluster index of sharding group, used in pair with pg_shard. You can use any non-negative integer as the index number.

gp_role

name: gp_role, type: enum, level: C

greenplum/matrixdb role of this cluster, could be master or segment

  • master: mark the postgres cluster as greenplum master, which is the default value
  • segment: mark the postgres cluster as greenplum segment

This parameter is only used for greenplum/matrixdb database, and is ignored for common pgsql cluster.

pg_exporters

name: pg_exporters, type: dict, level: C

additional pg_exporters to monitor remote postgres instances, default values: {}

If you wish to monitor remote postgres instances, define them in pg_exporters and load them with the pgsql-monitor.yml playbook.

pg_exporters: # list all remote instances here, alloc a unique unused local port as k
  20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.10 }
  20004: { pg_cluster: pg-foo, pg_seq: 2, pg_host: 10.10.10.11 }
  20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.12 }
  20003: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.13 }

Check PGSQL Monitoring for details.

pg_offline_query

name: pg_offline_query, type: bool, level: I

set to true to enable offline query on this instance

default value is false

When set to true, members of the user group dbrole_offline can connect to this instance and perform offline queries, regardless of the instance's current role, just like an offline instance.

If you have only one replica, or even just a single primary in your postgres cluster, enabling this marks the instance as accepting ETL, slow queries, and interactive access.
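For example, marking one replica of the pg-test cluster for offline traffic (illustrative):

  pg-test:
    hosts:
      10.10.10.11: { pg_seq: 1, pg_role: replica }
      10.10.10.12: { pg_seq: 2, pg_role: primary }
      10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true }  # also serves ETL / slow queries
    vars:
      pg_cluster: pg-test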


PG_BUSINESS

Database credentials and in-database objects that need to be taken care of by users.

!> WARNING: YOU HAVE TO CHANGE THESE DEFAULT PASSWORDs in production environment.

# postgres business object definition, overwrite in group vars
pg_users: []                       # postgres business users
pg_databases: []                   # postgres business databases
pg_services: []                    # postgres business services
pg_hba_rules: []                   # business hba rules for postgres
pgb_hba_rules: []                  # business hba rules for pgbouncer
# global credentials, overwrite in global vars
pg_dbsu_password: ''               # dbsu password, empty string means no dbsu password by default
pg_replication_username: replicator
pg_replication_password: DBUser.Replicator
pg_admin_username: dbuser_dba
pg_admin_password: DBUser.DBA
pg_monitor_username: dbuser_monitor
pg_monitor_password: DBUser.Monitor

pg_users

name: pg_users, type: user[], level: C

postgres business users, has to be defined at cluster level.

default values: [], each object in the array defines a User/Role. Examples:

pg_users:                          # define business users/roles on this cluster, array of user definition
  - name: dbuser_meta              # REQUIRED, `name` is the only mandatory field of a user definition
    password: DBUser.Meta          # optional, password, can be a scram-sha-256 hash string or plain text
    login: true                    # optional, can log in, true by default (new biz ROLE should be false)
    superuser: false               # optional, is superuser? false by default
    createdb: false                # optional, can create database? false by default
    createrole: false              # optional, can create role? false by default
    inherit: true                  # optional, can this role use inherited privileges? true by default
    replication: false             # optional, can this role do replication? false by default
    bypassrls: false               # optional, can this role bypass row level security? false by default
    pgbouncer: true                # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
    connlimit: -1                  # optional, user connection limit, default -1 disable limit
    expire_in: 3650                # optional, now + n days when this role is expired (OVERWRITE expire_at)
    expire_at: '2030-12-31'        # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
    comment: pigsty admin user     # optional, comment string for this user/role
    roles: [dbrole_admin]          # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
    parameters: {}                 # optional, role level parameters with `ALTER ROLE SET`
    pool_mode: transaction         # optional, pgbouncer pool mode at user level, transaction by default
    pool_connlimit: -1             # optional, max database connections at user level, default -1 disable limit
    search_path: public            # key value config parameters according to postgresql documentation (e.g: use pigsty as default search_path)
  - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database}
  - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database }
  - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database }
  - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway }
  - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service }
  - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service }
  • Each user or role must specify a name and the rest of the fields are optional, a name must be unique in this list.
  • password is optional; if left blank, no password is set. A scram-sha-256 or md5 hash string can be used instead of plain text.
  • login, superuser, createdb, createrole, inherit, replication and bypassrls are all boolean types used to set user attributes. If not set, the system defaults are used.
  • Users are created with CREATE USER, so they have the login attribute by default. To create a role instead, specify login: false.
  • expire_at and expire_in are used to control the user expiration time. expire_at uses a date timestamp in the shape of YYYY-mm-DD. expire_in uses the number of days to expire from now, and overrides the expire_at option if expire_in exists.
  • New users are not added to the Pgbouncer user list by default, and pgbouncer: true must be explicitly defined for the user to be added to the Pgbouncer user list.
  • Users/roles are created sequentially, and users defined later can belong to the roles defined earlier.
  • pool_mode, pool_connlimit are user-level pgbouncer parameters that will override default settings.
  • Users can use pre-defined pg_default_roles with roles field:
    • dbrole_readonly: Default production read-only user with global read-only privileges. (Read-only production access)
    • dbrole_offline: Default offline read-only user with read-only access on a specific ins. (offline query, personal account, ETL)
    • dbrole_readwrite: Default production read/write user with global CRUD privileges. (Regular production use)
    • dbrole_admin: Default production management user with the privilege to execute DDL changes. (Admin User)

Configure pgbouncer: true for production accounts to add the user to pgbouncer; it’s important to use a connection pool if you have thousands of clients.

pg_databases

name: pg_databases, type: database[], level: C

postgres business databases, which have to be defined at cluster level.

default values: [], each object in the array defines a Database. Examples:

  pg_databases:                    # define business databases on this cluster, array of database definition
    - name: meta                   # REQUIRED, `name` is the only mandatory field of a database definition
      baseline: cmdb.sql           # optional, database sql baseline path, (relative path among ansible search path, e.g files/)
      pgbouncer: true              # optional, add this database to pgbouncer database list? true by default
      schemas: [pigsty]            # optional, additional schemas to be created, array of schema names
      extensions: [{name: postgis}] # optional, additional extensions to be installed: array of `{name[,schema]}`
      comment: pigsty meta database # optional, comment string for this database
      owner: postgres              # optional, database owner, postgres by default
      template: template1          # optional, which template to use, template1 by default
      encoding: UTF8               # optional, database encoding, UTF8 by default. (MUST same as template database)
      locale: C                    # optional, database locale, C by default. (MUST same as template database)
      lc_collate: C                # optional, database collate, C by default. (MUST same as template database)
      lc_ctype: C                  # optional, database ctype, C by default. (MUST same as template database)
      tablespace: pg_default       # optional, default tablespace, 'pg_default' by default
      allowconn: true              # optional, allow connection, true by default. false will disable connect at all
      revokeconn: false            # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
      register_datasource: true    # optional, register this database to grafana datasources? true by default
      connlimit: -1                # optional, database connection limit, default -1 disable limit
      pool_auth_user: dbuser_meta  # optional, all connection to this pgbouncer database will be authenticated by this user
      pool_mode: transaction       # optional, pgbouncer pool mode at database level, default transaction
      pool_size: 64                # optional, pgbouncer pool size at database level, default 64
      pool_size_reserve: 32        # optional, pgbouncer pool size reserve at database level, default 32
      pool_size_min: 0             # optional, pgbouncer pool size min at database level, default 0
      pool_max_db_conn: 100        # optional, max database connections at database level, default 100
    - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
    - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
    - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
    - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
    - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }

In each database definition, the DB name is mandatory and the rest are optional.

pg_services

name: pg_services, type: service[], level: C

postgres business services exposed via haproxy, which have to be defined at cluster level.

You can define ad hoc services with pg_services, in addition to the default pg_default_services.

default values: [], each object in the array defines a Service. Examples:

  pg_services:                     # extra services in addition to pg_default_services, array of service definition
    - name: standby                # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
      port: 5435                   # required, service exposed port (work as kubernetes service node port mode)
      ip: "*"                      # optional, service bind ip address, `*` for all ip by default
      selector: "[]"               # required, service member selector, use JMESPath to filter inventory
      dest: pgbouncer              # optional, destination port, postgres|pgbouncer|<port_number>, pgbouncer(6432) by default
      check: /sync                 # optional, health check url path, / by default
      backup: "[? pg_role == `primary` ]"  # backup server selector
      maxconn: 3000                # optional, max allowed front-end connection
      balance: roundrobin          # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
      options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

pg_hba_rules

name: pg_hba_rules, type: hba[], level: C

business hba rules for postgres

default values: [], an array of HBA rule objects. Each rule object may look like:

  # RAW HBA RULES
  - title: allow intranet password access
    role: common
    rules:
      - host all all 10.0.0.0/8 md5
      - host all all 172.16.0.0/12 md5
      - host all all 192.168.0.0/16 md5
  • title: rule title, rendered as a comment in the hba file
  • rules: array of strings, each string is a raw hba rule record
  • role: applied roles, i.e. where to install these hba rules
    • common: apply to all instances
    • primary, replica, standby, offline: apply to instances with the corresponding pg_role
    • special case: HBA rules with role == 'offline' are also installed on instances with the pg_offline_query flag

or you can use another alias form

  - addr: 'intra'                  # world|intra|infra|admin|local|localhost|cluster|<cidr>
    auth: 'pwd'                    # trust|pwd|ssl|cert|deny|<official auth method>
    user: 'all'                    # all|${dbsu}|${repl}|${admin}|${monitor}|<user>|<group>
    db: 'all'                      # all|replication|....
    rules: []                      # raw hba string precedence over above all
    title: allow intranet password access

pg_default_hba_rules is similar to this, but is used for global HBA rule settings

pgb_hba_rules

name: pgb_hba_rules, type: hba[], level: C

business hba rules for pgbouncer, default values: []

Similar to pg_hba_rules: an array of hba rule objects, except these rules apply to pgbouncer.
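For example, a minimal sketch using the alias form described above (the office network CIDR is a placeholder):

  pgb_hba_rules:
    - title: allow office network access to pgbouncer
      role: common
      addr: 192.168.10.0/24        # placeholder office network CIDR
      auth: pwd
      user: all
      db: all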

pg_replication_username

name: pg_replication_username, type: username, level: G

postgres replication username, replicator by default

This parameter is used globally; it is not wise to change it.

pg_replication_password

name: pg_replication_password, type: password, level: G

postgres replication password, DBUser.Replicator by default

!> WARNING: CHANGE THIS IN PRODUCTION ENVIRONMENT!!!!

pg_admin_username

name: pg_admin_username, type: username, level: G

postgres admin username, dbuser_dba by default, which is a global postgres superuser.

default values: dbuser_dba

pg_admin_password

name: pg_admin_password, type: password, level: G

postgres admin password in plain text, DBUser.DBA by default

!> WARNING: CHANGE THIS IN PRODUCTION ENVIRONMENT!!!!

pg_monitor_username

name: pg_monitor_username, type: username, level: G

postgres monitor username, dbuser_monitor by default, which is a global monitoring user.

pg_monitor_password

name: pg_monitor_password, type: password, level: G

postgres monitor password, DBUser.Monitor by default.

!> WARNING: CHANGE THIS IN PRODUCTION ENVIRONMENT!!!!

pg_dbsu_password

name: pg_dbsu_password, type: password, level: G/C

PostgreSQL dbsu password for pg_dbsu, empty string means no dbsu password, which is the default behavior.

!> WARNING: It’s not recommended to set a dbsu password for common PGSQL clusters, except for pg_mode = citus.


PG_INSTALL

This section is responsible for installing PostgreSQL & Extensions.

If you wish to install a different major version, just make sure the repo packages exist and override pg_version at cluster level.

  pg_dbsu: postgres                # os dbsu name, postgres by default, better not change it
  pg_dbsu_uid: 26                  # os dbsu uid and gid, 26 for default postgres users and groups
  pg_dbsu_sudo: limit              # dbsu sudo privilege, none,limit,all,nopass. limit by default
  pg_dbsu_home: /var/lib/pgsql     # postgresql home directory, `/var/lib/pgsql` by default
  pg_dbsu_ssh_exchange: true       # exchange postgres dbsu ssh key among same pgsql cluster
  pg_version: 15                   # postgres major version to be installed, 15 by default
  pg_bin_dir: /usr/pgsql/bin       # postgres binary dir, `/usr/pgsql/bin` by default
  pg_log_dir: /pg/log/postgres     # postgres log dir, `/pg/log/postgres` by default
  pg_packages:                     # pg packages to be installed, `${pg_version}` will be replaced
    - postgresql${pg_version}*
    - pgbouncer pg_exporter pgbadger vip-manager patroni patroni-etcd pgbackrest
  pg_extensions:                   # pg extensions to be installed, `${pg_version}` will be replaced
    - postgis33_${pg_version}* pg_repack_${pg_version} wal2json_${pg_version} timescaledb-2-postgresql-${pg_version} citus*${pg_version}*

pg_dbsu

name: pg_dbsu, type: username, level: C

os dbsu name, postgres by default; it’s not wise to change it.

When installing Greenplum / MatrixDB, set this parameter to the corresponding default value: gpadmin|mxadmin.

pg_dbsu_uid

name: pg_dbsu_uid, type: int, level: C

os dbsu uid and gid, 26 for default postgres users and groups, which is consistent with the official pgdg RPM.

pg_dbsu_sudo

name: pg_dbsu_sudo, type: enum, level: C

dbsu sudo privilege, could be none, limit, all, nopass. limit by default:

  • none: No Sudo privilege
  • limit: Limited sudo privilege to execute systemctl commands for database-related components, default.
  • all: Full sudo privilege, password required.
  • nopass: Full sudo privileges without a password (not recommended).

default values: limit, which only allows sudo systemctl <start|stop|reload> <postgres|patroni|pgbouncer|...>

pg_dbsu_home

name: pg_dbsu_home, type: path, level: C

postgresql home directory, /var/lib/pgsql by default, which is consistent with the official pgdg RPM.

pg_dbsu_ssh_exchange

name: pg_dbsu_ssh_exchange, type: bool, level: C

exchange postgres dbsu ssh key among same pgsql cluster?

default value is true, meaning the dbsu can ssh between members of the same cluster.

pg_version

name: pg_version, type: enum, level: C

postgres major version to be installed, 15 by default

Note that PostgreSQL physical stream replication cannot cross major versions, so do not configure this at instance level.

You can use the parameters in pg_packages and pg_extensions to install rpms for the specific pg major version.
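For example, a hypothetical cluster definition pinning an older major version at cluster level (assuming the corresponding packages exist in your repo):

  pg-v14:
    hosts: { 10.10.10.14: { pg_seq: 1, pg_role: primary } }
    vars:
      pg_cluster: pg-v14
      pg_version: 14               # overwrite the global default on this cluster only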

pg_bin_dir

name: pg_bin_dir, type: path, level: C

postgres binary dir, /usr/pgsql/bin by default

The default value is a soft link created during installation, pointing to the specific Postgres version dir actually installed, e.g. /usr/pgsql -> /usr/pgsql-15. Check PGSQL File Structure for details.

pg_log_dir

name: pg_log_dir, type: path, level: C

postgres log dir, /pg/log/postgres by default.

!> caveat: if pg_log_dir is prefixed with pg_data, it will not be created explicitly (postgres itself will create it).

pg_packages

name: pg_packages, type: string[], level: C

pg packages to be installed, ${pg_version} will be replaced with the actual value of pg_version

PostgreSQL, pgbouncer, pg_exporter, pgbadger, vip-manager, patroni, and pgbackrest are installed by default.

  pg_packages:                     # pg packages to be installed, `${pg_version}` will be replaced
    - postgresql${pg_version}*
    - pgbouncer pg_exporter pgbadger vip-manager patroni patroni-etcd pgbackrest

pg_extensions

name: pg_extensions, type: string[], level: C

pg extensions to be installed, ${pg_version} will be replaced with the actual value of pg_version

PostGIS, TimescaleDB, Citus, pg_repack, and wal2json will be installed by default.

  pg_extensions:                   # pg extensions to be installed, `${pg_version}` will be replaced
    - postgis33_${pg_version}* pg_repack_${pg_version} wal2json_${pg_version} timescaledb-2-postgresql-${pg_version}

PG_BOOTSTRAP

Bootstrap a postgres cluster with patroni, and setup pgbouncer connection pool along with it.

It also inits the cluster template databases with default roles, schemas, extensions, and default privileges.

Then it creates business databases & users, and adds them to pgbouncer & the monitoring system, leaving the node with a full set of ready-to-serve databases.

  pg_safeguard: false              # prevent purging running postgres instance? false by default
  pg_clean: true                   # purging existing postgres during pgsql init? true by default
  pg_data: /pg/data                # postgres data directory, `/pg/data` by default
  pg_fs_main: /data                # mountpoint/path for postgres main data, `/data` by default
  pg_fs_bkup: /data/backups        # mountpoint/path for pg backup data, `/data/backup` by default
  pg_storage_type: SSD             # storage type for pg main data, SSD,HDD, SSD by default
  pg_dummy_filesize: 64MiB         # size of `/pg/dummy`, hold 64MB disk space for emergency use
  pg_listen: '0.0.0.0'             # postgres listen address, `0.0.0.0` (all ipv4 addr) by default
  pg_port: 5432                    # postgres listen port, 5432 by default
  pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
  pg_namespace: /pg                # top level key namespace in etcd, used by patroni & vip
  patroni_enabled: true            # if disabled, no postgres cluster will be created during init
  patroni_mode: default            # patroni working mode: default,pause,remove
  patroni_port: 8008               # patroni listen port, 8008 by default
  patroni_log_dir: /pg/log/patroni # patroni log dir, `/pg/log/patroni` by default
  patroni_ssl_enabled: false       # secure patroni RestAPI communications with SSL?
  patroni_watchdog_mode: off       # patroni watchdog mode: automatic,required,off. off by default
  patroni_username: postgres       # patroni restapi username, `postgres` by default
  patroni_password: Patroni.API    # patroni restapi password, `Patroni.API` by default
  patroni_citus_db: postgres       # citus database managed by patroni, postgres by default
  pg_conf: oltp.yml                # config template: oltp,olap,crit,tiny. `oltp.yml` by default
  pg_max_conn: auto                # postgres max connections, `auto` will use recommended value
  pg_shared_buffer_ratio: 0.25     # postgres shared buffer ratio, 0.25 by default, 0.1~0.4
  pg_rto: 30                       # recovery time objective in seconds, `30s` by default
  pg_rpo: 1048576                  # recovery point objective in bytes, `1MiB` at most by default
  pg_libs: 'timescaledb, pg_stat_statements, auto_explain'  # extensions to be loaded
  pg_delay: 0                      # replication apply delay for standby cluster leader
  pg_checksum: false               # enable data checksum for postgres cluster?
  pg_pwd_enc: scram-sha-256        # passwords encryption algorithm: md5,scram-sha-256
  pg_encoding: UTF8                # database cluster encoding, `UTF8` by default
  pg_locale: C                     # database cluster locale, `C` by default
  pg_lc_collate: C                 # database cluster collate, `C` by default
  pg_lc_ctype: en_US.UTF8          # database character type, `en_US.UTF8` by default
  pgbouncer_enabled: true          # if disabled, pgbouncer will not be launched on pgsql host
  pgbouncer_port: 6432             # pgbouncer listen port, 6432 by default
  pgbouncer_log_dir: /pg/log/pgbouncer # pgbouncer log dir, `/pg/log/pgbouncer` by default
  pgbouncer_auth_query: false      # query postgres to retrieve unlisted business users?
  pgbouncer_poolmode: transaction  # pooling mode: transaction,session,statement, transaction by default
  pgbouncer_sslmode: disable       # pgbouncer client ssl mode, disable by default

pg_safeguard

name: pg_safeguard, type: bool, level: G/C/A

prevent purging running postgres instance? false by default

default value is false. If enabled, pgsql.yml & pgsql-rm.yml will abort immediately if any postgres instance is running.

pg_clean

name: pg_clean, type: bool, level: G/C/A

purging existing postgres during pgsql init? true by default

default value is true: existing postgres instances will be purged during pgsql.yml init, which makes the playbook idempotent.

If set to false, pgsql.yml will abort if there’s already a running postgres instance, and pgsql-rm.yml will NOT remove postgres data (only stop the server).
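For example, a sketch of a hypothetical cluster that opts into maximum protection via group vars:

  pg-meta:
    hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
    vars:
      pg_cluster: pg-meta
      pg_safeguard: true           # pgsql.yml / pgsql-rm.yml will abort if postgres is running
      pg_clean: false              # never purge an existing postgres instance during init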

pg_data

name: pg_data, type: path, level: C

postgres data directory, /pg/data by default

default values: /pg/data, DO NOT CHANGE IT.

It’s a soft link that points to the underlying data directory.

Check PGSQL File Structure for details.

pg_fs_main

name: pg_fs_main, type: path, level: C

mountpoint/path for postgres main data, /data by default

default values: /data, which will be used as parent dir of postgres main data directory: /data/postgres.

It’s recommended to use NVME SSD for postgres main data storage, Pigsty is optimized for SSD storage by default. But HDD is also supported, you can change pg_storage_type to HDD to optimize for HDD storage.

pg_fs_bkup

name: pg_fs_bkup, type: path, level: C

mountpoint/path for pg backup data, /data/backup by default

If you are using the default pgbackrest_method = local, it is recommended to use a separate disk for backup storage; otherwise pigsty will fall back to the main data disk.

The backup disk should be large enough to hold all your backups: at least enough for 3 base backups plus 2 days of WAL archive. This is usually not a problem, since you can use cheap & large HDDs for that.

pg_storage_type

name: pg_storage_type, type: enum, level: C

storage type for pg main data, SSD,HDD, SSD by default

default values: SSD, it will affect some tuning parameters, such as random_page_cost & effective_io_concurrency

pg_dummy_filesize

name: pg_dummy_filesize, type: size, level: C

size of /pg/dummy, default values: 64MiB, which holds 64MB of disk space for emergency use

When the disk is full, removing this placeholder file can free up some space for emergency use; it is recommended to use at least 8GiB for production.

pg_listen

name: pg_listen, type: ip, level: C

postgres listen address, 0.0.0.0 (all ipv4 addr) by default

If you want to include all IPv6 addresses, use * instead. It’s not wise to use this on nodes with public IP addresses.

pg_port

name: pg_port, type: port, level: C

postgres listen port, 5432 by default.

pg_localhost

name: pg_localhost, type: path, level: C

postgres unix socket dir for localhost connection, default values: /var/run/postgresql

The Unix socket dir for PostgreSQL and Pgbouncer local connection, which is used by pg_exporter and patroni.

pg_namespace

name: pg_namespace, type: path, level: C

top level key namespace in etcd, used by patroni & vip, default values is: /pg , and it’s not recommended to change it.

patroni_enabled

name: patroni_enabled, type: bool, level: C

if disabled, no postgres cluster will be created during init

default value is true. If disabled, Pigsty will skip pulling up patroni (and thus postgres).

This option is useful when trying to add some components to an existing postgres instance.

patroni_mode

name: patroni_mode, type: enum, level: C

patroni working mode: default, pause, remove

default values: default

  • default: Bootstrap PostgreSQL cluster with Patroni
  • pause: Just like default, but entering maintenance mode after bootstrap
  • remove: Init the cluster with Patroni, then remove Patroni and use raw PostgreSQL instead.

patroni_port

name: patroni_port, type: port, level: C

patroni listen port, 8008 by default, changing it is not recommended.

The Patroni API server listens on this port for health checking & API requests.

patroni_log_dir

name: patroni_log_dir, type: path, level: C

patroni log dir, /pg/log/patroni by default, which will be collected by promtail.

patroni_ssl_enabled

name: patroni_ssl_enabled, type: bool, level: G

Secure patroni RestAPI communications with SSL? default value is false

This parameter is a global flag that can only be set before deployment.

If SSL is enabled for patroni, you’ll have to perform health checks, metrics scrapes, and API calls with HTTPS instead of HTTP.

patroni_watchdog_mode

name: patroni_watchdog_mode, type: string, level: C

In case of primary failure, patroni can use watchdog to shutdown the old primary node to avoid split-brain.

patroni watchdog mode: automatic, required, off:

  • off: not using watchdog. avoid fencing at all. This is the default value.
  • automatic: Enable watchdog if the kernel has softdog module enabled and watchdog is owned by dbsu
  • required: Force watchdog, refuse to start if softdog is not available

default value is off, you should not enable watchdog on infra nodes to avoid fencing.

For those critical systems where data consistency prevails over availability, it is recommended to enable watchdog.

patroni_username

name: patroni_username, type: username, level: C

patroni restapi username, postgres by default, used in pair with patroni_password

Patroni unsafe RESTAPI is protected by username/password by default, check Config Cluster and Patroni RESTAPI for details.

patroni_password

name: patroni_password, type: password, level: C

patroni restapi password, Patroni.API by default

!> WARNING: CHANGE THIS IN PRODUCTION ENVIRONMENT!!!!

patroni_citus_db

name: patroni_citus_db, type: string, level: C

citus database managed by patroni, postgres by default.

Patroni 3.0’s native citus support requires a managed database for citus, which is created by patroni itself.

pg_conf

name: pg_conf, type: enum, level: C

config template: {oltp,olap,crit,tiny}.yml, oltp.yml by default

  • tiny.yml: optimize for tiny nodes, virtual machines, small demo, (1~8Core, 1~16GB)
  • oltp.yml: optimize for OLTP workloads and latency sensitive applications, (4C8GB+), which is the default template
  • olap.yml: optimize for OLAP workloads and throughput (4C8G+)
  • crit.yml: optimize for data consistency and critical applications (4C8G+)

default values: oltp.yml, but the configure procedure will set this value to tiny.yml if the current node is a tiny node.

You can have your own template, just put it under templates/<mode>.yml and set this value to the template name.
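For example, a hypothetical analytics cluster switching to the OLAP template via group vars:

  pg-olap:
    hosts: { 10.10.10.20: { pg_seq: 1, pg_role: primary } }
    vars:
      pg_cluster: pg-olap
      pg_conf: olap.yml            # use the throughput-oriented template instead of oltp.yml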

pg_max_conn

name: pg_max_conn, type: int, level: C

postgres max connections, you can specify a value between 50 and 5000, or use auto to use the recommended value.

default value is auto, which will set max connections according to the pg_conf and pg_default_service_dest.

  • tiny: 100
  • olap: 200
  • oltp: 200 (pgbouncer) / 1000 (postgres)
    • pg_default_service_dest = pgbouncer : 200
    • pg_default_service_dest = postgres : 1000
  • crit: 200 (pgbouncer) / 1000 (postgres)
    • pg_default_service_dest = pgbouncer : 200
    • pg_default_service_dest = postgres : 1000

It’s not recommended to set this value greater than 5000, otherwise you have to increase the haproxy service connection limit manually as well.

Pgbouncer transaction pooling can alleviate the problem of too many OLTP connections, but it’s not recommended to use it in OLAP scenarios.

pg_shared_buffer_ratio

name: pg_shared_buffer_ratio, type: float, level: C

postgres shared buffer memory ratio, 0.25 by default, 0.1~0.4

default values: 0.25, meaning 25% of node memory will be used as PostgreSQL shared buffers.

Setting this value greater than 0.4 (40%) is usually not a good idea.

Note that shared buffer is only part of shared memory in PostgreSQL, to calculate the total shared memory, use show shared_memory_size_in_huge_pages;.
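For example, on a node with 64GiB of memory, the default ratio of 0.25 yields 64GiB × 0.25 = 16GiB of shared buffers, while the practical upper bound of 0.4 would yield about 25.6GiB.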

pg_rto

name: pg_rto, type: int, level: C

recovery time objective in seconds, This will be used as Patroni TTL value, 30s by default.

If a primary instance is missing for such a long time, a new leader election will be triggered.

Decreasing this value reduces the unavailable window (unable to write) of the cluster during failover, but makes the cluster more sensitive to network jitter, thus increasing the chance of false-positive failover.

Configure this according to your network conditions and expectations, trading off failover chance against failover impact. The default value is 30s, and it will be populated to the following patroni parameters:

  # the TTL to acquire the leader lock (in seconds). Think of it as the length of time before initiation of the automatic failover process. Default value: 30
  ttl: {{ pg_rto }}
  # the number of seconds the loop will sleep. Default value: 10, this is patroni check loop interval
  loop_wait: {{ (pg_rto / 3)|round(0, 'ceil')|int }}
  # timeout for DCS and PostgreSQL operation retries (in seconds). DCS or network issues shorter than this will not cause Patroni to demote the leader. Default value: 10
  retry_timeout: {{ (pg_rto / 3)|round(0, 'ceil')|int }}
  # the amount of time a primary is allowed to recover from failures before failover is triggered (in seconds), Max RTO: 2 loop wait + primary_start_timeout
  primary_start_timeout: {{ (pg_rto / 3)|round(0, 'ceil')|int }}

pg_rpo

name: pg_rpo, type: int, level: C

recovery point objective in bytes, 1MiB at most by default

default values: 1048576, which will tolerate at most 1MiB data loss during failover.

when the primary is down and all replicas are lagged, you have to make a tough choice to trade off between Availability and Consistency:

  • Promote a replica to be the new primary and bring system back online ASAP, with the price of an acceptable data loss (e.g. less than 1MB).
  • Wait for the primary to come back (which may never be) or human intervention to avoid any data loss.

You can use crit.yml conf template to ensure no data loss during failover, but it will sacrifice some performance.

pg_libs

name: pg_libs, type: string, level: C

shared libraries to be preloaded

default value: timescaledb, pg_stat_statements, auto_explain.

If you want to manage citus cluster by your own, add citus to the head of this list. If you are using patroni native citus cluster, patroni will add it automatically for you.

pg_delay

name: pg_delay, type: interval, level: I

replication apply delay for standby cluster leader, default values: 0.

If this value is set to a positive duration, the standby cluster leader will wait that long before applying WAL changes.

Check delayed standby cluster for details.
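For example, a sketch of a hypothetical delayed standby cluster, where pg_upstream points at the source cluster’s primary:

  pg-meta-delay:                   # delayed standby cluster of pg-meta
    hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1d } }
    vars:
      pg_cluster: pg-meta-delay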

pg_checksum

name: pg_checksum, type: bool, level: C

enable data checksum for postgres cluster?, default value is false.

This parameter can only be set before PGSQL deployment. (but you can enable it manually later)

If pg_conf crit.yml template is used, data checksum is always enabled regardless of this parameter to ensure data integrity.

pg_pwd_enc

name: pg_pwd_enc, type: enum, level: C

passwords encryption algorithm: md5,scram-sha-256

default values: scram-sha-256, if you have compatibility issues with old clients, you can set it to md5 instead.

pg_encoding

name: pg_encoding, type: enum, level: C

database cluster encoding, UTF8 by default

pg_locale

name: pg_locale, type: enum, level: C

database cluster locale, C by default

pg_lc_collate

name: pg_lc_collate, type: enum, level: C

database cluster collate, C by default, It’s not recommended to change this value unless you know what you are doing.

pg_lc_ctype

name: pg_lc_ctype, type: enum, level: C

database character type, en_US.UTF8 by default

pgbouncer_enabled

name: pgbouncer_enabled, type: bool, level: C

default value is true; if disabled, pgbouncer will not be launched on the pgsql host.

pgbouncer_port

name: pgbouncer_port, type: port, level: C

pgbouncer listen port, 6432 by default

pgbouncer_log_dir

name: pgbouncer_log_dir, type: path, level: C

pgbouncer log dir, /pg/log/pgbouncer by default, referenced by the logging agent promtail.

pgbouncer_auth_query

name: pgbouncer_auth_query, type: bool, level: C

query postgres to retrieve unlisted business users? default value is false

If enabled, pgbouncer user will be authenticated against postgres database with SELECT username, password FROM monitor.pgbouncer_auth($1), otherwise, only the users in pgbouncer_users will be allowed to connect to pgbouncer.

pgbouncer_poolmode

name: pgbouncer_poolmode, type: enum, level: C

pooling mode: transaction,session,statement, transaction by default

  • session: Session-level pooling with the best compatibility.
  • transaction: Transaction-level pooling with better performance (lots of small conns), which could break some session-level features such as prepared statements, NOTIFY, etc…
  • statement: Statement-level pooling, which is used for simple read-only queries.

pgbouncer_sslmode

name: pgbouncer_sslmode, type: enum, level: C

pgbouncer client ssl mode, disable by default

default values: disable; beware that enabling SSL may have a huge performance impact on your pgbouncer.

  • disable: Plain TCP. If client requests TLS, it’s ignored. Default.
  • allow: If client requests TLS, it is used. If not, plain TCP is used. If the client presents a client certificate, it is not validated.
  • prefer: Same as allow.
  • require: Client must use TLS. If not, the client connection is rejected. If the client presents a client certificate, it is not validated.
  • verify-ca: Client must use TLS with valid client certificate.
  • verify-full: Same as verify-ca.

PG_PROVISION

Init database roles, templates, default privileges, create schemas, extensions, and generate hba rules

  pg_provision: true               # provision postgres cluster after bootstrap
  pg_init: pg-init                 # provision init script for cluster template, `pg-init` by default
  pg_default_roles:                # default roles and users in postgres cluster
    - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
    - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
    - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
    - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
    - { name: postgres   ,superuser: true    ,comment: system superuser }
    - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
    - { name: dbuser_dba ,superuser: true    ,roles: [dbrole_admin] ,pgbouncer: true ,pool_mode: session ,pool_connlimit: 16 ,comment: pgsql admin user }
    - { name: dbuser_monitor ,roles: [pg_monitor, dbrole_readonly] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000} ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
  pg_default_privileges:           # default privileges when created by admin user
    - GRANT USAGE   ON SCHEMAS   TO dbrole_readonly
    - GRANT SELECT  ON TABLES    TO dbrole_readonly
    - GRANT SELECT  ON SEQUENCES TO dbrole_readonly
    - GRANT EXECUTE ON FUNCTIONS TO dbrole_readonly
    - GRANT USAGE   ON SCHEMAS   TO dbrole_offline
    - GRANT SELECT  ON TABLES    TO dbrole_offline
    - GRANT SELECT  ON SEQUENCES TO dbrole_offline
    - GRANT EXECUTE ON FUNCTIONS TO dbrole_offline
    - GRANT INSERT  ON TABLES    TO dbrole_readwrite
    - GRANT UPDATE  ON TABLES    TO dbrole_readwrite
    - GRANT DELETE  ON TABLES    TO dbrole_readwrite
    - GRANT USAGE   ON SEQUENCES TO dbrole_readwrite
    - GRANT UPDATE  ON SEQUENCES TO dbrole_readwrite
    - GRANT TRUNCATE   ON TABLES  TO dbrole_admin
    - GRANT REFERENCES ON TABLES  TO dbrole_admin
    - GRANT TRIGGER    ON TABLES  TO dbrole_admin
    - GRANT CREATE     ON SCHEMAS TO dbrole_admin
  pg_default_schemas: [ monitor ]  # default schemas to be created
  pg_default_extensions:           # default extensions to be created
    - { name: adminpack          ,schema: pg_catalog }
    - { name: pg_stat_statements ,schema: monitor }
    - { name: pgstattuple        ,schema: monitor }
    - { name: pg_buffercache     ,schema: monitor }
    - { name: pageinspect        ,schema: monitor }
    - { name: pg_prewarm         ,schema: monitor }
    - { name: pg_visibility      ,schema: monitor }
    - { name: pg_freespacemap    ,schema: monitor }
    - { name: postgres_fdw       ,schema: public  }
    - { name: file_fdw           ,schema: public  }
    - { name: btree_gist         ,schema: public  }
    - { name: btree_gin          ,schema: public  }
    - { name: pg_trgm            ,schema: public  }
    - { name: intagg             ,schema: public  }
    - { name: intarray           ,schema: public  }
    - { name: pg_repack }
  pg_reload: true                  # reload postgres after hba changes
  pg_default_hba_rules:            # postgres default host-based authentication rules
    - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  }
    - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' }
    - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost'}
    - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' }
    - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' }
    - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' }
    - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password'}
    - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   }
    - {user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'   }
    - {user: '+dbrole_readonly' ,db: all   ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket'}
    - {user: '+dbrole_readonly' ,db: all   ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     }
    - {user: '+dbrole_offline'  ,db: all   ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet'}
  pgb_default_hba_rules:           # pgbouncer default host-based authentication rules
    - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident'}
    - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' }
    - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' }
    - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' }
    - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   }
    - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   }
    - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' }

pg_provision

name: pg_provision, type: bool, level: C

provision postgres cluster after bootstrap, default value is true.

If disabled, postgres cluster will not be provisioned after bootstrap.

pg_init

name: pg_init, type: string, level: G/C

Provision init script for cluster template, pg-init by default, which is located in roles/pgsql/templates/pg-init

You can add your own logic in the init script, or provide a new one in templates/ and set pg_init to the new script name.

pg_default_roles

name: pg_default_roles, type: role[], level: G/C

default roles and users in postgres cluster.

Pigsty has a built-in role system, check PGSQL Access Control for details.

  pg_default_roles:                # default roles and users in postgres cluster
    - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
    - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
    - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
    - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
    - { name: postgres   ,superuser: true    ,comment: system superuser }
    - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
    - { name: dbuser_dba ,superuser: true    ,roles: [dbrole_admin] ,pgbouncer: true ,pool_mode: session ,pool_connlimit: 16 ,comment: pgsql admin user }
    - { name: dbuser_monitor ,roles: [pg_monitor, dbrole_readonly] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000} ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }

pg_default_privileges

name: pg_default_privileges, type: string[], level: G/C

default privileges for each database:

  pg_default_privileges:           # default privileges when created by admin user
    - GRANT USAGE   ON SCHEMAS   TO dbrole_readonly
    - GRANT SELECT  ON TABLES    TO dbrole_readonly
    - GRANT SELECT  ON SEQUENCES TO dbrole_readonly
    - GRANT EXECUTE ON FUNCTIONS TO dbrole_readonly
    - GRANT USAGE   ON SCHEMAS   TO dbrole_offline
    - GRANT SELECT  ON TABLES    TO dbrole_offline
    - GRANT SELECT  ON SEQUENCES TO dbrole_offline
    - GRANT EXECUTE ON FUNCTIONS TO dbrole_offline
    - GRANT INSERT  ON TABLES    TO dbrole_readwrite
    - GRANT UPDATE  ON TABLES    TO dbrole_readwrite
    - GRANT DELETE  ON TABLES    TO dbrole_readwrite
    - GRANT USAGE   ON SEQUENCES TO dbrole_readwrite
    - GRANT UPDATE  ON SEQUENCES TO dbrole_readwrite
    - GRANT TRUNCATE   ON TABLES  TO dbrole_admin
    - GRANT REFERENCES ON TABLES  TO dbrole_admin
    - GRANT TRIGGER    ON TABLES  TO dbrole_admin
    - GRANT CREATE     ON SCHEMAS TO dbrole_admin

Pigsty has a built-in privilege model based on the default role system, check PGSQL Privileges for details.

pg_default_schemas

name: pg_default_schemas, type: string[], level: G/C

default schemas to be created, default values is: [ monitor ], which will create a monitor schema on all databases.

pg_default_extensions

name: pg_default_extensions, type: extension[], level: G/C

default extensions to be created, default value:

  pg_default_extensions:           # default extensions to be created
    - { name: adminpack          ,schema: pg_catalog }
    - { name: pg_stat_statements ,schema: monitor }
    - { name: pgstattuple        ,schema: monitor }
    - { name: pg_buffercache     ,schema: monitor }
    - { name: pageinspect        ,schema: monitor }
    - { name: pg_prewarm         ,schema: monitor }
    - { name: pg_visibility      ,schema: monitor }
    - { name: pg_freespacemap    ,schema: monitor }
    - { name: postgres_fdw       ,schema: public  }
    - { name: file_fdw           ,schema: public  }
    - { name: btree_gist         ,schema: public  }
    - { name: btree_gin          ,schema: public  }
    - { name: pg_trgm            ,schema: public  }
    - { name: intagg             ,schema: public  }
    - { name: intarray           ,schema: public  }
    - { name: pg_repack }

The only 3rd party extension is pg_repack, which is important for database maintenance, all other extensions are built-in postgres contrib extensions.

Monitor related extensions are installed in monitor schema, which is created by pg_default_schemas.

pg_reload

name: pg_reload, type: bool, level: A

reload postgres after hba changes, default value is true

This is useful when you want to check before applying HBA changes, set it to false to disable reload.

pg_default_hba_rules

name: pg_default_hba_rules, type: hba[], level: G/C

postgres default host-based authentication rules, array of hba rule object.

default value provides a fair enough security level for common scenarios, check PGSQL Authentication for details.

  pg_default_hba_rules:            # postgres default host-based authentication rules
    - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  }
    - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' }
    - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost'}
    - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' }
    - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' }
    - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' }
    - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password'}
    - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   }
    - {user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'   }
    - {user: '+dbrole_readonly' ,db: all   ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket'}
    - {user: '+dbrole_readonly' ,db: all   ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     }
    - {user: '+dbrole_offline'  ,db: all   ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet'}

pgb_default_hba_rules

name: pgb_default_hba_rules, type: hba[], level: G/C

pgbouncer default host-based authentication rules, array of hba rule objects.

default value provides a fair enough security level for common scenarios, check PGSQL Authentication for details.

  pgb_default_hba_rules:           # pgbouncer default host-based authentication rules
    - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident'}
    - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' }
    - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' }
    - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' }
    - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   }
    - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   }
    - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' }

PG_BACKUP

This section defines variables for pgBackRest, which is used for PGSQL PITR (Point-In-Time-Recovery).

Check PGSQL Backup & PITR for details.

  pgbackrest_enabled: true         # enable pgbackrest on pgsql host?
  pgbackrest_clean: true           # remove pg backup data during init?
  pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
  pgbackrest_method: local         # pgbackrest repo method: local,minio,[user-defined...]
  pgbackrest_repo:                 # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
    local:                         # default pgbackrest repo with local posix fs
      path: /pg/backup             # local backup directory, `/pg/backup` by default
      retention_full_type: count   # retention full backups by count
      retention_full: 2            # keep 2, at most 3 full backup when using local fs repo
    minio:                         # optional minio repo for pgbackrest
      type: s3                     # minio is s3-compatible, so s3 is used
      s3_endpoint: sss.pigsty      # minio endpoint domain name, `sss.pigsty` by default
      s3_region: us-east-1         # minio region, us-east-1 by default, useless for minio
      s3_bucket: pgsql             # minio bucket name, `pgsql` by default
      s3_key: pgbackrest           # minio user access key for pgbackrest
      s3_key_secret: S3User.Backup # minio user secret key for pgbackrest
      s3_uri_style: path           # use path style uri for minio rather than host style
      path: /pgbackrest            # minio backup path, default is `/pgbackrest`
      storage_port: 9000           # minio port, 9000 by default
      storage_ca_file: /etc/pki/ca.crt # minio ca file path, `/etc/pki/ca.crt` by default
      bundle: y                    # bundle small files into a single file
      cipher_type: aes-256-cbc     # enable AES encryption for remote backup repo
      cipher_pass: pgBackRest      # AES encryption password, default is 'pgBackRest'
      retention_full_type: time    # retention full backup by time on minio repo
      retention_full: 14           # keep full backup for last 14 days

pgbackrest_enabled

name: pgbackrest_enabled, type: bool, level: C

enable pgBackRest on pgsql host? default value is true

pgbackrest_clean

name: pgbackrest_clean, type: bool, level: C

remove pg backup data during init? default value is true

pgbackrest_log_dir

name: pgbackrest_log_dir, type: path, level: C

pgBackRest log dir, /pg/log/pgbackrest by default, which is referenced by the logging agent promtail.

pgbackrest_method

name: pgbackrest_method, type: enum, level: C

pgBackRest repo method: local, minio, or other user-defined methods, local by default

This parameter is used to determine which repo to use for pgBackRest, all available repo methods are defined in pgbackrest_repo.

Pigsty will use local backup repo by default, which will create a backup repo on primary instance’s /pg/backup directory. The underlying storage is specified by pg_fs_bkup.

pgbackrest_repo

name: pgbackrest_repo, type: dict, level: G/C

pgBackRest repo document: https://pgbackrest.org/configuration.html#section-repository

default value includes two repo methods: local and minio, which are defined as follows:

  pgbackrest_repo:                 # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
    local:                         # default pgbackrest repo with local posix fs
      path: /pg/backup             # local backup directory, `/pg/backup` by default
      retention_full_type: count   # retention full backups by count
      retention_full: 2            # keep 2, at most 3 full backup when using local fs repo
    minio:                         # optional minio repo for pgbackrest
      type: s3                     # minio is s3-compatible, so s3 is used
      s3_endpoint: sss.pigsty      # minio endpoint domain name, `sss.pigsty` by default
      s3_region: us-east-1         # minio region, us-east-1 by default, useless for minio
      s3_bucket: pgsql             # minio bucket name, `pgsql` by default
      s3_key: pgbackrest           # minio user access key for pgbackrest
      s3_key_secret: S3User.Backup # minio user secret key for pgbackrest
      s3_uri_style: path           # use path style uri for minio rather than host style
      path: /pgbackrest            # minio backup path, default is `/pgbackrest`
      storage_port: 9000           # minio port, 9000 by default
      storage_ca_file: /etc/pki/ca.crt # minio ca file path, `/etc/pki/ca.crt` by default
      bundle: y                    # bundle small files into a single file
      cipher_type: aes-256-cbc     # enable AES encryption for remote backup repo
      cipher_pass: pgBackRest      # AES encryption password, default is 'pgBackRest'
      retention_full_type: time    # retention full backup by time on minio repo
      retention_full: 14           # keep full backup for last 14 days

PG_SERVICE

This section is about exposing PostgreSQL services to the outside world, including:

  • Exposing different PostgreSQL services on different ports with haproxy
  • Binding an optional L2 VIP to the primary instance with vip-manager
  • Registering cluster/instance DNS records with dnsmasq on infra nodes
  pg_weight: 100                   #INSTANCE # relative load balance weight in service, 100 by default, 0-255
  pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
  pg_default_services:             # postgres default service definitions
    - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
    - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
    - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
    - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]" }
  pg_vip_enabled: false            # enable a l2 vip for pgsql primary? false by default
  pg_vip_address: 127.0.0.1/24     # vip address in `<ipv4>/<mask>` format, required if vip is enabled
  pg_vip_interface: eth0           # vip network interface to listen, eth0 by default
  pg_dns_suffix: ''                # pgsql dns suffix, '' by default
  pg_dns_target: auto              # auto, primary, vip, none, or ad hoc ip

pg_weight

name: pg_weight, type: int, level: G

relative load balance weight in service, 100 by default, 0-255

default values: 100. You have to define it in instance vars, and reload the service for it to take effect.
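For example, a minimal sketch (hypothetical cluster) that drains one replica by setting its weight to zero, then reloading the service:

  pg-test:
    hosts:
      10.10.10.11: { pg_seq: 1, pg_role: primary }
      10.10.10.12: { pg_seq: 2, pg_role: replica, pg_weight: 100 }
      10.10.10.13: { pg_seq: 3, pg_role: replica, pg_weight: 0 }  # receives no service traffic
    vars:
      pg_cluster: pg-test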

pg_service_provider

name: pg_service_provider, type: string, level: G/C

dedicate haproxy node group name, or empty string for local nodes by default.

If specified, PostgreSQL Services will be registered to the dedicated haproxy node group instead of this pgsql cluster nodes.

Do remember to allocate unique ports on the dedicated haproxy nodes for each service!

For example, if we define following parameters on 3-node pg-test cluster:

  pg_service_provider: infra       # use load balancer on group `infra`
  pg_default_services:             # alloc port 10001 and 10002 for pg-test primary/replica service
    - { name: primary ,port: 10001 ,dest: postgres ,check: /primary   ,selector: "[]" }
    - { name: replica ,port: 10002 ,dest: postgres ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }

pg_default_service_dest

name: pg_default_service_dest, type: enum, level: G/C

When defining a service, if svc.dest='default', this parameter will be used as the actual destination.

default values: pgbouncer, meaning the 5433 primary service and the 5434 replica service will route traffic to pgbouncer by default.

If you don’t want to use pgbouncer, set it to postgres instead: traffic will then be routed to postgres directly.

pg_default_services

name: pg_default_services, type: service[], level: G/C

postgres default service definitions

default value consists of four default service definitions, which are explained in PGSQL Service

  pg_default_services:             # postgres default service definitions
    - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
    - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
    - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
    - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]" }

pg_vip_enabled

name: pg_vip_enabled, type: bool, level: C

enable a l2 vip for pgsql primary?

default value is false, means no L2 VIP is created for this cluster.

L2 VIP can only be used in the same L2 network, which may incur extra restrictions on your network topology.

pg_vip_address

name: pg_vip_address, type: cidr4, level: C

vip address in <ipv4>/<mask> format, if vip is enabled, this parameter is required.

default values: 127.0.0.1/24. This value consists of two parts: ipv4 and mask, separated by /.

pg_vip_interface

name: pg_vip_interface, type: string, level: C/I

vip network interface to listen, eth0 by default.

It should be the primary intranet interface of your node, i.e. the one holding the IP address you used in the inventory file.

If your nodes have different interface names, you can override it in instance vars:

  pg-test:
    hosts:
      10.10.10.11: {pg_seq: 1, pg_role: replica ,pg_vip_interface: eth0 }
      10.10.10.12: {pg_seq: 2, pg_role: primary ,pg_vip_interface: eth1 }
      10.10.10.13: {pg_seq: 3, pg_role: replica ,pg_vip_interface: eth2 }
    vars:
      pg_vip_enabled: true         # enable L2 VIP for this cluster, bind to primary instance by default
      pg_vip_address: 10.10.10.3/24 # the L2 network CIDR: 10.10.10.0/24, the vip address: 10.10.10.3
      # pg_vip_interface: eth1     # if your nodes have a uniform interface name, you can define it here

pg_dns_suffix

name: pg_dns_suffix, type: string, level: C

pgsql dns suffix, '' by default; the cluster DNS name is defined as {{ pg_cluster }}{{ pg_dns_suffix }}

For example, if you set pg_dns_suffix to .db.vip.company.tld for cluster pg-test, then the cluster DNS name will be pg-test.db.vip.company.tld

pg_dns_target

name: pg_dns_target, type: enum, level: C

Could be: auto, primary, vip, none, or an ad hoc ip address, which will be the target IP address of cluster DNS record.

default values: auto , which will bind to pg_vip_address if pg_vip_enabled, or fallback to cluster primary instance ip address.

  • vip: bind to pg_vip_address
  • primary: resolve to cluster primary instance ip address
  • auto: resolve to pg_vip_address if pg_vip_enabled, or fallback to cluster primary instance ip address.
  • none: do not bind to any ip address
  • <ipv4>: bind to the given IP address

PG_EXPORTER

  pg_exporter_enabled: true        # enable pg_exporter on pgsql hosts?
  pg_exporter_config: pg_exporter.yml # pg_exporter configuration file name
  pg_exporter_cache_ttls: '1,10,60,300' # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
  pg_exporter_port: 9630           # pg_exporter listen port, 9630 by default
  pg_exporter_params: 'sslmode=disable' # extra url parameters for pg_exporter dsn
  pg_exporter_url: ''              # overwrite auto-generate pg dsn if specified
  pg_exporter_auto_discovery: true # enable auto database discovery? enabled by default
  pg_exporter_exclude_database: 'template0,template1,postgres' # csv of database that WILL NOT be monitored during auto-discovery
  pg_exporter_include_database: '' # csv of database that WILL BE monitored during auto-discovery
  pg_exporter_connect_timeout: 200 # pg_exporter connect timeout in ms, 200 by default
  pg_exporter_options: ''          # overwrite extra options for pg_exporter
  pgbouncer_exporter_enabled: true # enable pgbouncer_exporter on pgsql hosts?
  pgbouncer_exporter_port: 9631    # pgbouncer_exporter listen port, 9631 by default
  pgbouncer_exporter_url: ''       # overwrite auto-generate pgbouncer dsn if specified
  pgbouncer_exporter_options: ''   # overwrite extra options for pgbouncer_exporter

pg_exporter_enabled

name: pg_exporter_enabled, type: bool, level: C

enable pg_exporter on pgsql hosts?

default value is true; if you don't want to install pg_exporter, set it to false.

pg_exporter_config

name: pg_exporter_config, type: string, level: C

pg_exporter configuration file name

default value: pg_exporter.yml. If you want to use a custom configuration file, you can specify its name here.

Your config file should be placed in roles/files/<filename>.
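For example, a sketch with a hypothetical custom config (the filename is an assumption for illustration):

  # cluster-level override; pg_exporter_custom.yml must exist under roles/files/
  pg_exporter_config: pg_exporter_custom.yml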

pg_exporter_cache_ttls

name: pg_exporter_cache_ttls, type: string, level: C

pg_exporter collector ttl stages in seconds, '1,10,60,300' by default

default value: '1,10,60,300', which uses 1s, 10s, 60s, and 300s TTLs for the different classes of metric collectors:

  ttl_fast: "{{ pg_exporter_cache_ttls.split(',')[0]|int }}"    # critical queries
  ttl_norm: "{{ pg_exporter_cache_ttls.split(',')[1]|int }}"    # common queries
  ttl_slow: "{{ pg_exporter_cache_ttls.split(',')[2]|int }}"    # slow queries (e.g. table size)
  ttl_slowest: "{{ pg_exporter_cache_ttls.split(',')[3]|int }}" # very slow queries (e.g. bloat)
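If your queries are expensive or your instance is busy, you may relax these TTLs; the values below are an illustrative override, not a recommendation:

  pg_exporter_cache_ttls: '2,20,120,600'   # hypothetical: 2s fast, 20s norm, 120s slow, 600s slowest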

pg_exporter_port

name: pg_exporter_port, type: port, level: C

pg_exporter listen port, 9630 by default

pg_exporter_params

name: pg_exporter_params, type: string, level: C

extra url parameters for pg_exporter dsn

default value: sslmode=disable, which disables SSL for the monitoring connection (it goes through a local unix socket by default).
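The parameters are appended to the generated DSN as a URL query string. A sketch with illustrative values (both are standard libpq connection parameters):

  pg_exporter_params: 'sslmode=require&connect_timeout=2'   # hypothetical: enforce SSL, 2s libpq connect timeout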

pg_exporter_url

name: pg_exporter_url, type: pgurl, level: C

overwrite auto-generated pg dsn if specified

default value is empty string. This could be useful if you want to monitor a remote pgsql instance, or use a different user/password for monitoring. If specified, this URL replaces the auto-generated DSN, which is constructed as:

  'postgres://{{ pg_monitor_username }}:{{ pg_monitor_password }}@{{ pg_host }}:{{ pg_port }}/postgres{% if pg_exporter_params != '' %}?{{ pg_exporter_params }}{% endif %}'
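For instance, a sketch for monitoring a remote instance; the address is an illustrative placeholder, and the credentials shown are Pigsty's default monitor user:

  pg_exporter_url: 'postgres://dbuser_monitor:DBUser.Monitor@10.10.10.99:5432/postgres?sslmode=disable'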

pg_exporter_auto_discovery

name: pg_exporter_auto_discovery, type: bool, level: C

enable auto database discovery? enabled by default

default value is true, which will auto-discover all databases on the postgres server and spawn a new pg_exporter connection for each database.

pg_exporter_exclude_database

name: pg_exporter_exclude_database, type: string, level: C

csv of databases that WILL NOT be monitored during auto-discovery

default value: template0,template1,postgres; these databases are excluded from database auto-discovery.

pg_exporter_include_database

name: pg_exporter_include_database, type: string, level: C

csv of databases that WILL BE monitored during auto-discovery

default value is empty string. If this value is set, only the databases in this list will be monitored during auto-discovery.
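A sketch that narrows auto-discovery to an explicit allow-list; the database names are illustrative:

  pg_exporter_auto_discovery: true
  pg_exporter_exclude_database: 'template0,template1,postgres'
  pg_exporter_include_database: 'meta,app'   # hypothetical: only meta and app will be monitored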

pg_exporter_connect_timeout

name: pg_exporter_connect_timeout, type: int, level: C

pg_exporter connect timeout in ms, 200 by default

default value: 200 (ms), which is enough for most cases.

If your remote pgsql server is on another continent, you may want to increase this value to avoid connection timeouts.
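For example, an illustrative override for a distant target (the value is a guess, not a recommendation):

  pg_exporter_connect_timeout: 1000   # hypothetical: allow up to 1s to establish the monitoring connection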

pg_exporter_options

name: pg_exporter_options, type: arg, level: C

overwrite extra options for pg_exporter

default value is empty string, which falls back to the following default options:

  PG_EXPORTER_OPTS='--log.level=info --log.format="logger:syslog?appname=pg_exporter&local=7"'

If you want to customize logging options or other pg_exporter options, you can set it here.
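For instance, a sketch that raises the log level for troubleshooting while keeping the stock syslog format from the default shown above:

  pg_exporter_options: '--log.level=debug --log.format="logger:syslog?appname=pg_exporter&local=7"'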

pgbouncer_exporter_enabled

name: pgbouncer_exporter_enabled, type: bool, level: C

enable pgbouncer_exporter on pgsql hosts?

default value is true, which enables pg_exporter for the pgbouncer connection pooler.

pgbouncer_exporter_port

name: pgbouncer_exporter_port, type: port, level: C

pgbouncer_exporter listen port, 9631 by default

default value: 9631

pgbouncer_exporter_url

name: pgbouncer_exporter_url, type: pgurl, level: C

overwrite auto-generated pgbouncer dsn if specified

default value is empty string. If specified, this URL replaces the auto-generated DSN, which is constructed as:

  'postgres://{{ pg_monitor_username }}:{{ pg_monitor_password }}@:{{ pgbouncer_port }}/pgbouncer?host={{ pg_localhost }}&sslmode=disable'

This could be useful if you want to monitor a remote pgbouncer instance, or use a different user/password for monitoring.
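A sketch for a remote pgbouncer target; the address and credentials are illustrative placeholders, and 6432 is Pigsty's default pgbouncer port:

  pgbouncer_exporter_url: 'postgres://dbuser_monitor:DBUser.Monitor@10.10.10.99:6432/pgbouncer?sslmode=disable'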

pgbouncer_exporter_options

name: pgbouncer_exporter_options, type: arg, level: C

overwrite extra options for pgbouncer_exporter

default value is empty string, which falls back to the following default options:

  '--log.level=info --log.format="logger:syslog?appname=pgbouncer_exporter&local=7"'

If you want to customize logging options or other pgbouncer_exporter options, you can set it here.
