Infra as Code Toolbox

HashiCorp for Databases! Infra as Code, Database as Data

A database is software that manages data, and a control system is software that manages databases.

Pigsty adopts an Infra-as-Data design philosophy: a declarative, Kubernetes-like configuration, with a large number of optional parameters, describes the database and its operating environment, and idempotent provisioning playbooks automatically create the required database clusters, providing a private-cloud-like experience.

Based on the user's config file, Pigsty creates the required database clusters on bare-metal nodes within minutes.

For example, creating a one-primary-two-replica database cluster `pg-test` on three machines requires only a few lines of config and a single command, `pgsql.yml -l pg-test`, to bring up a highly available database cluster.
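A minimal declaration for such a `pg-test` cluster might look like the following sketch (the IP addresses are illustrative; the parameter names are the same identity parameters used in the pg-meta example below):

```yaml
# hypothetical one-primary-two-replica cluster (IPs are illustrative)
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-test
```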

Figure 1: Infra as Code Toolbox

Example: Customize PGSQL Clusters

```yaml
#----------------------------------#
# cluster: pg-meta (on meta node)  #
#----------------------------------#
# pg-meta is the default SINGLE-NODE pgsql cluster deployed on the meta node (10.10.10.10)
# if you have n meta nodes, consider deploying pg-meta as an n-node cluster too
pg-meta:                            # required, ansible group name & pgsql cluster name, should be unique among the environment
  hosts:                            # `<cluster>.hosts` holds the instance definitions of this cluster
    10.10.10.10:                    # INSTANCE-LEVEL CONFIG: ip address is the key, values are instance-level config entries (dict)
      pg_seq: 1                     # required, unique identity parameter (+integer) among pg_cluster
      pg_role: primary              # required, mandatory identity parameter, primary|replica|offline|delayed
      pg_offline_query: true        # instances with `pg_offline_query: true` will take offline traffic (saga, etl, ...)
      # some variables can be overwritten on instance level, e.g. pg_upstream, pg_weight, etc...
      #---------------
      # mandatory    # all configuration above (`ip`, `pg_seq`, `pg_role`) and `pg_cluster` are mandatory
      #---------------
  vars:                             # `<cluster>.vars` holds CLUSTER-LEVEL CONFIG of this pgsql cluster
    pg_cluster: pg-meta             # required, pgsql cluster name, unique among clusters, used as namespace of cluster resources
    #---------------
    # optional     # all configuration below is OPTIONAL for a pgsql cluster (overwrites global defaults)
    #---------------
    pg_version: 14                  # pgsql version to be installed (use global version if missing)
    node_tune: tiny                 # node optimization profile: {oltp|olap|crit|tiny}, use tiny for vm sandbox
    pg_conf: tiny.yml               # pgsql template: {oltp|olap|crit|tiny}, use tiny for sandbox
    patroni_mode: default           # enter patroni pause mode after bootstrap {default|pause|remove}
    patroni_watchdog_mode: off      # disable patroni watchdog on meta node {off|require|automatic}
    pg_lc_ctype: en_US.UTF8         # use en_US.UTF8 locale for i18n char support (required by `pg_trgm`)
    #---------------
    # biz databases # defining business databases (optional)
    #---------------
    pg_databases:                   # define business databases on this cluster, array of database definitions
      # define the default `meta` database
      - name: meta                  # required, `name` is the only mandatory field of a database definition
        baseline: cmdb.sql          # optional, database sql baseline path (relative path among ansible search path, e.g. files/)
        # owner: postgres           # optional, database owner, postgres by default
        # template: template1       # optional, which template to use, template1 by default
        # encoding: UTF8            # optional, database encoding, UTF8 by default (MUST be same as template database)
        # locale: C                 # optional, database locale, C by default (MUST be same as template database)
        # lc_collate: C             # optional, database collate, C by default (MUST be same as template database)
        # lc_ctype: C               # optional, database ctype, C by default (MUST be same as template database)
        # tablespace: pg_default    # optional, default tablespace, 'pg_default' by default
        # allowconn: true           # optional, allow connection, true by default; false will disable connect entirely
        # revokeconn: false         # optional, revoke public connection privilege, false by default (leave connect with grant option to owner)
        # pgbouncer: true           # optional, add this database to pgbouncer database list? true by default
        comment: pigsty meta database # optional, comment string for this database
        connlimit: -1               # optional, database connection limit, default -1 disables the limit
        schemas: [pigsty]           # optional, additional schemas to be created, array of schema names
        extensions:                 # optional, additional extensions to be installed, array of extension definitions `{name,schema}`
          - { name: adminpack, schema: pg_catalog }   # install adminpack to pg_catalog
          - { name: postgis, schema: public }         # if schema is omitted, extension will be installed according to search_path
          - { name: timescaledb }                     # some extensions are not relocatable, you can just omit the schema part
      # define additional databases named grafana & prometheus (optional)
      # - { name: grafana,    owner: dbuser_grafana,    revokeconn: true, comment: grafana primary database }
      # - { name: prometheus, owner: dbuser_prometheus, revokeconn: true, comment: prometheus primary database, extensions: [{ name: timescaledb }] }
    #---------------
    # biz users     # defining business users (optional)
    #---------------
    pg_users:                       # define business users/roles on this cluster, array of user definitions
      # define admin user for meta database (this user is used for pigsty app deployment by default)
      - name: dbuser_meta           # required, `name` is the only mandatory field of a user definition
        password: md5d3d10d8cad606308bdb180148bf663e1 # md5 salted password of 'DBUser.Meta'
                                    # optional, plain text and md5 passwords are both acceptable (md5 hashes prefixed with `md5`)
        login: true                 # optional, can login, true by default (new biz ROLEs should be false)
        superuser: false            # optional, is superuser? false by default
        createdb: false             # optional, can create database? false by default
        createrole: false           # optional, can create role? false by default
        inherit: true               # optional, can this role use inherited privileges? true by default
        replication: false         # optional, can this role do replication? false by default
        bypassrls: false            # optional, can this role bypass row level security? false by default
        pgbouncer: true             # optional, add this user to pgbouncer user-list? false by default (production users should set true explicitly)
        connlimit: -1               # optional, user connection limit, default -1 disables the limit
        expire_in: 3650             # optional, now + n days when this role expires (OVERWRITES expire_at)
        expire_at: '2030-12-31'     # optional, YYYY-MM-DD 'timestamp' when this role expires (OVERWRITTEN by expire_in)
        comment: pigsty admin user  # optional, comment string for this user/role
        roles: [dbrole_admin]       # optional, roles this user belongs to; default roles are dbrole_{admin,readonly,readwrite,offline}
        parameters: {}              # optional, role-level parameters set with `ALTER ROLE SET`
        # search_path: public       # key-value config parameters per postgresql documentation (e.g. use pigsty as default search_path)
      - { name: dbuser_view, password: DBUser.Viewer, pgbouncer: true, roles: [dbrole_readonly], comment: read-only viewer for meta database }
      # define additional business users for prometheus & grafana (optional)
      - { name: dbuser_grafana,    password: DBUser.Grafana,    pgbouncer: true, roles: [dbrole_admin], comment: admin user for grafana database }
      - { name: dbuser_prometheus, password: DBUser.Prometheus, pgbouncer: true, roles: [dbrole_admin], comment: admin user for prometheus database, createrole: true }
    #---------------
    # hba rules     # defining extra HBA rules on this cluster (optional)
    #---------------
    pg_hba_rules_extra:             # extra HBA rules to be installed on this cluster
      - title: reject grafana non-local access # required, rule title (used as hba description & comment string)
        role: common                # required, which roles will this rule be applied to? ('common' applies to all roles)
        rules:                      # required, rule content, array of hba strings
          - local   grafana   dbuser_grafana                  md5
          - host    grafana   dbuser_grafana  127.0.0.1/32    md5
          - host    grafana   dbuser_grafana  10.10.10.10/32  md5
    vip_mode: l2                    # set up a layer-2 vip for cluster pg-meta
    vip_address: 10.10.10.2         # virtual ip address bound to the primary instance of cluster pg-meta
    vip_cidrmask: 8                 # cidr network mask length
    vip_interface: eth1             # interface to bind the virtual ip to
```
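The `password` field of a user definition accepts either plain text or a PostgreSQL-style md5 hash, which is `'md5' + md5(password + username)`. A minimal sketch of generating such a hash (the helper name `pg_md5_password` is ours, not Pigsty's):

```python
import hashlib

def pg_md5_password(password: str, username: str) -> str:
    """Build a PostgreSQL md5 auth string: 'md5' + hex(md5(password + username))."""
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

# e.g. for the dbuser_meta definition above:
print(pg_md5_password("DBUser.Meta", "dbuser_meta"))
```

The result is a 35-character string (the `md5` prefix plus a 32-character hex digest), which can be dropped directly into the `password` field.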

In addition to PostgreSQL, Pigsty has supported Redis deployment and monitoring since v1.3.

Example: Redis Cache Cluster

```yaml
#----------------------------------#
# redis sentinel example           #
#----------------------------------#
redis-meta:
  hosts:
    10.10.10.10:
      redis_node: 1
      redis_instances: { 6001: {}, 6002: {}, 6003: {} }
  vars:
    redis_cluster: redis-meta
    redis_mode: sentinel
    redis_max_memory: 128MB

#----------------------------------#
# redis cluster example            #
#----------------------------------#
redis-test:
  hosts:
    10.10.10.11:
      redis_node: 1
      redis_instances: { 6501: {}, 6502: {}, 6503: {}, 6504: {}, 6505: {}, 6506: {} }
    10.10.10.12:
      redis_node: 2
      redis_instances: { 6501: {}, 6502: {}, 6503: {}, 6504: {}, 6505: {}, 6506: {} }
  vars:
    redis_cluster: redis-test       # name of this redis 'cluster'
    redis_mode: cluster             # standalone,cluster,sentinel
    redis_max_memory: 64MB          # max memory used by each redis instance
    redis_mem_policy: allkeys-lru   # memory eviction policy

#----------------------------------#
# redis standalone example         #
#----------------------------------#
redis-common:
  hosts:
    10.10.10.13:
      redis_node: 1
      redis_instances:
        6501: {}
        6502: { replica_of: '10.10.10.13 6501' }
        6503: { replica_of: '10.10.10.13 6501' }
  vars:
    redis_cluster: redis-common     # name of this redis 'cluster'
    redis_mode: standalone          # standalone,cluster,sentinel
    redis_max_memory: 64MB          # max memory used by each redis instance
```

Starting with Pigsty v1.4, initial support for MatrixDB (Greenplum 7) is provided.

Example: MatrixDB Data WareHouse

```yaml
#----------------------------------#
# cluster: mx-mdw (gp master)      #
#----------------------------------#
mx-mdw:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary, nodename: mx-mdw-1 }
  vars:
    gp_role: master                 # this cluster is used as the greenplum master
    pg_shard: mx                    # pgsql sharding name & gpsql deployment name
    pg_cluster: mx-mdw              # this master cluster's name is mx-mdw
    pg_databases:
      - { name: matrixmgr, extensions: [{ name: matrixdbts }] }
      - { name: meta }
    pg_users:
      - { name: meta, password: DBUser.Meta, pgbouncer: true }
      - { name: dbuser_monitor, password: DBUser.Monitor, roles: [dbrole_readonly], superuser: true }
    pgbouncer_enabled: true         # enable pgbouncer for greenplum master
    pgbouncer_exporter_enabled: false # disable pgbouncer_exporter for greenplum master
    pg_exporter_params: 'host=127.0.0.1&sslmode=disable' # use 127.0.0.1 as local monitor host

#----------------------------------#
# cluster: mx-sdw (gp segments)    #
#----------------------------------#
mx-sdw:
  hosts:
    10.10.10.11:
      nodename: mx-sdw-1            # greenplum segment node
      pg_instances:                 # greenplum segment instances
        6000: { pg_cluster: mx-seg1, pg_seq: 1, pg_role: primary, pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg2, pg_seq: 2, pg_role: replica, pg_exporter_port: 9634 }
    10.10.10.12:
      nodename: mx-sdw-2
      pg_instances:
        6000: { pg_cluster: mx-seg2, pg_seq: 1, pg_role: primary, pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg3, pg_seq: 2, pg_role: replica, pg_exporter_port: 9634 }
    10.10.10.13:
      nodename: mx-sdw-3
      pg_instances:
        6000: { pg_cluster: mx-seg3, pg_seq: 1, pg_role: primary, pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg1, pg_seq: 2, pg_role: replica, pg_exporter_port: 9634 }
  vars:
    gp_role: segment                # these nodes are used for gp segments
    pg_shard: mx                    # pgsql sharding name & gpsql deployment name
    pg_cluster: mx-sdw              # these segment clusters' name is mx-sdw
    pg_preflight_skip: true         # skip preflight check (since pg_seq, pg_role & pg_cluster do not exist)
    pg_exporter_config: pg_exporter_basic.yml # use basic config to avoid crashing segment servers
    pg_exporter_params: 'options=-c%20gp_role%3Dutility&sslmode=disable' # use gp_role=utility to connect to segments
```
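The `pg_exporter_params` value on the segment cluster is a URL-encoded query string: once decoded, it passes `-c gp_role=utility` as the connection `options`. A quick check with Python's standard library shows what the exporter actually receives:

```python
from urllib.parse import parse_qs

# the raw value from the config above
params = 'options=-c%20gp_role%3Dutility&sslmode=disable'

# parse_qs splits on '&'/'=' and percent-decodes the values
decoded = {k: v[0] for k, v in parse_qs(params).items()}
print(decoded)  # {'options': '-c gp_role=utility', 'sslmode': 'disable'}
```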

Last modified 2022-06-04: fill en docs (5a858d3)