Highlights

An overview of Pigsty's feature highlights.

The battery-included, auto-piloting, handy & thrifty distribution for open-source databases.


High Availability / Ultimate Observability / Handy Toolbox / Database as Code / Versatile Scenario / Safety & Thrifty

PostgreSQL Distribution

RedHat for Linux!

  • Pigsty deeply integrates the latest PostgreSQL kernel (14) with powerful extensions: TimescaleDB 2.6, PostGIS 3.2, Citus 10, and hundreds of other extensions, all battery-included.

  • Pigsty packs the infrastructure needed for large-scale production environments: Grafana, Prometheus, Loki, Ansible, Consul, Docker, etc. It can also be used to deploy and monitor other databases and application runtimes.

  • Pigsty integrates common tools from the data analysis ecosystem: Jupyter, ECharts, Grafana, PostgREST, and PostgreSQL itself. It can serve as a data analysis environment or as a low-code data visualization application development platform.


SRE Solution

Auto-Pilot for Postgres! Taking users from merely having a database to running it well. Use it for fun!


Developer Toolbox

HashiCorp for Database!

  • Pigsty upholds the Infra-as-Data design philosophy: users describe the database they want in just a few lines of declarative config and create it in one click with an idempotent playbook. Just like Kubernetes!
  • Pigsty delivers an easy-to-use database toolkit to developers: one-click download and installation with automatic configuration; one-click deployment of various open-source databases; one-click migration, backup, scale-out, and scale-in. It greatly lowers the bar for database management, mass-producing DBA capability!
  • Pigsty simplifies database deployment and delivery and solves the problem of unified environment configuration: it runs in full on anything from production environments with thousands of databases and tens of thousands of cores down to a local 1C1G laptop. With the Vagrant-based local sandbox and Terraform-based multi-cloud deployment, on-cloud or off-cloud, pull it up with one click!


Open Source RDS

Alternative for RDS!

  • Compared to cloud vendor RDS, Pigsty saves 50%–80% of database hardware and software costs, with a lower usage threshold and richer features; junior R&D staff can manage hundreds of databases on their own.

  • Pigsty is modular and can be freely combined and extended on demand. It can deploy and manage various databases in a production environment, or simply monitor existing hosts; it can be used to develop database visualization demos or support various SaaS applications.

  • Open source and free, a production-grade database solution that fills in the last missing piece of the cloud-native ecosystem. Stable and reliable, proven over time in large-scale production deployments, with optional professional technical support services.



High Availability

Self-healing & Auto-Piloting.

Taking PostgreSQL as an example, Pigsty creates a distributed, highly available database cluster. As long as any instance in the cluster survives, the cluster can provide complete read-write and read-only services to the outside world.

Pigsty’s high availability architecture has been tested in production environments. Pigsty uses Patroni + Consul for fault detection, fencing, and automatic failover, and HAProxy, VIP, or DNS for automatic traffic switching, achieving a complete high availability solution at very low complexity cost and letting a primary-replica database architecture feel as smooth to use as a standalone database.

The database cluster performs fault detection and primary-replica switchover automatically, and common faults self-heal within seconds to tens of seconds: RTO < 1 min for primary failure, read-only traffic is almost unaffected, and with a synchronous standby, RPO = 0 with no data loss.

Each database instance in the cluster is idempotent in use: any instance can provide full read-write service through the built-in load balancing component HAProxy. Any one or more HAProxy instances can act as a load balancer for the cluster, distributing traffic based on health checks and shielding cluster membership from the outside world. Users can flexibly define services through config and access them in various optional ways.
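As an illustration of how such a service can be declared, the sketch below defines an extra "standby" service routed only to synchronous standbys. The exact field names follow Pigsty's service definition format as understood at the time of writing; treat them as assumptions to verify against the parameter reference for your Pigsty version.

```yaml
pg_services_extra:                # additional services to expose on this cluster
  - name: standby                 # service will be named {pg_cluster}-standby
    src_ip: "*"                   # bind to all local addresses
    src_port: 5435                # port this service listens on
    dst_port: postgres            # forward traffic to the postgres port on backends
    check_method: http            # haproxy health check via patroni REST API
    check_url: /sync              # only the synchronous standby passes this check
    selector: "[]"                # candidate members: all instances in the cluster
    selector_backup: "[? pg_role == `primary`]"  # fall back to primary if no sync standby
```

With a definition like this, read traffic that must not lag can be pointed at port 5435 while regular read-write traffic keeps using the primary service.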


Ultimate Observability

You can’t manage what you don’t measure.

Monitoring systems provide metrics on the state of the system and are the cornerstone of operations and maintenance management. [DEMO]

Pigsty comes with a professional-grade monitoring system designed for large-scale database cluster management, based on industry best practices, using Prometheus, Alertmanager, Grafana, and Loki as the monitoring infrastructure. Open source, easy to customize, reusable, portable, no vendor lock-in.

Pigsty is unmatched in PostgreSQL monitoring, presenting roughly 1200 categories of metrics through 30+ monitoring dashboards containing thousands of panels, covering everything from the global picture down to individual objects. Compared with similar products, its metric coverage and dashboard richness are unparalleled, providing irreplaceable value for professional users, while the layered level of detail offers an intuitive, convenient management experience for casual users.

Pigsty’s monitoring system can monitor all kinds of natively deployed database instances: PGSQL, REDIS, GPSQL, etc. It can also be used standalone to monitor existing database instances or remote cloud-vendor RDS, serve purely as host monitoring, or act as a showcase for data visualization work.


Handy Toolbox

Every additional command line in the install script halves the number of users.

Pigsty takes ease of use to the extreme: one command installs and pulls up all components, ready in 10 minutes, with no dependency on containers or Kubernetes and no Internet access required when using offline packages. The barrier to getting started is very low.

Pigsty has two typical usage modes: Standalone and Cluster. It can run entirely on a local single-core virtual machine, yet also manage databases for large-scale production environments. Simple to operate and maintain, it is a one-stop solution to all kinds of problems in both production environments and personal PG use.

In standalone mode, Pigsty deploys a complete infrastructure runtime plus a single-node PostgreSQL database cluster on that node. For individual users, simple scenarios, and small or micro businesses, this database is usable right out of the box. The single-node mode is fully functional and self-manageable, and comes with a fully armed, ready-to-use PG database for software development, testing, experimentation, and demonstration; for data cleansing, analysis, visualization, and storage; or to directly support upper-tier applications: Gitlab, Jira, Confluence, UFIDA, Kingdee, Synology, etc.

Pigsty has a built-in database management solution with Ansible at its core, and packages command-line tools and a graphical interface on top of it. It integrates the core functions of database management: cluster creation and destruction, scaling out and in, and the creation of users, databases, and services.

What’s more, Pigsty packages and provides a complete application runtime, which allows users to manage any number of database clusters from one node. You can initiate control from the node where Pigsty is installed (the “meta node”) to bring more nodes under Pigsty’s management. You can use it to monitor existing database instances (including cloud-vendor RDS) or deploy your own highly available, failure-resilient PostgreSQL database clusters directly on the nodes, as well as other kinds of applications and databases such as Redis and MatrixDB, and get real-time insights about nodes, databases, and applications.


In addition, Pigsty provides templates for Local Sandbox and Multi-Cloud Deployment based on Vagrant and Terraform, so you can prepare the resources you need for your Pigsty deployment with one click.

Database as Code

A database is software that manages data, and a control system is software that manages databases.

Pigsty adopts the Infra-as-Data design philosophy, using declarative configuration similar to Kubernetes: a large number of optional parameters describe the database and its operating environment, and idempotent playbooks automatically create the required database clusters, providing a private-cloud-like experience.

Pigsty creates the required database clusters on bare-metal nodes in minutes, based on the user's config inventory.

For example, creating a one-primary-two-replica database cluster pg-test on three machines requires only a few lines of config and a single command, pgsql.yml -l pg-test, to create a highly available database cluster.
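Those "few lines of config" for such a pg-test cluster look like the following (IP addresses match the four-node sandbox; only the identity parameters are required, everything else falls back to global defaults):

```yaml
pg-test:                          # cluster name, also the ansible group name
  hosts:                          # one primary and two replicas
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-test           # cluster-level identity parameter
```

Running pgsql.yml -l pg-test against this inventory is all it takes to materialize the cluster.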


Example: Customize PGSQL Clusters

#----------------------------------#
# cluster: pg-meta (on meta node)  #
#----------------------------------#
# pg-meta is the default SINGLE-NODE pgsql cluster deployed on the meta node (10.10.10.10)
# if you have n meta nodes, consider deploying pg-meta as an n-node cluster too
pg-meta:                          # required, ansible group name & pgsql cluster name, unique within the environment
  hosts:                          # `<cluster>.hosts` holds the instance definitions of this cluster
    10.10.10.10:                  # INSTANCE-LEVEL CONFIG: ip address is the key, value is a dict of instance-level config entries
      pg_seq: 1                   # required, unique identity parameter (+integer) within pg_cluster
      pg_role: primary            # required, mandatory identity parameter: primary|replica|offline|delayed
      pg_offline_query: true      # instance with `pg_offline_query: true` will take offline traffic (saga, etl, ...)
      # some variables can be overwritten on the instance level, e.g. pg_upstream, pg_weight, etc...
      # all configuration above (`ip`, `pg_seq`, `pg_role`) and `pg_cluster` are mandatory
  vars:                           # `<cluster>.vars` holds CLUSTER-LEVEL CONFIG of this pgsql cluster
    pg_cluster: pg-meta           # required, pgsql cluster name, unique among clusters, used as namespace of cluster resources

    # all configuration below is OPTIONAL for a pgsql cluster (overwrites global defaults)
    pg_version: 14                # pgsql version to be installed (use global version if missing)
    node_tune: tiny               # node optimization profile: {oltp|olap|crit|tiny}, use tiny for vm sandbox
    pg_conf: tiny.yml             # pgsql template: {oltp|olap|crit|tiny}, use tiny for sandbox
    patroni_mode: default         # enter patroni pause mode after bootstrap {default|pause|remove}
    patroni_watchdog_mode: off    # disable patroni watchdog on meta node {off|require|automatic}
    pg_lc_ctype: en_US.UTF8       # use en_US.UTF8 locale for i18n char support (required by `pg_trgm`)

    # biz databases: defining business databases (optional)
    pg_databases:                 # define business databases on this cluster, array of database definitions
      # define the default `meta` database
      - name: meta                # required, `name` is the only mandatory field of a database definition
        baseline: cmdb.sql        # optional, database sql baseline path (relative to ansible search path, e.g. files/)
        # owner: postgres         # optional, database owner, postgres by default
        # template: template1     # optional, which template to use, template1 by default
        # encoding: UTF8          # optional, database encoding, UTF8 by default (MUST match template database)
        # locale: C               # optional, database locale, C by default (MUST match template database)
        # lc_collate: C           # optional, database collate, C by default (MUST match template database)
        # lc_ctype: C             # optional, database ctype, C by default (MUST match template database)
        # tablespace: pg_default  # optional, default tablespace, 'pg_default' by default
        # allowconn: true         # optional, allow connection, true by default; false disables connect entirely
        # revokeconn: false       # optional, revoke public connection privilege, false by default (leaves connect with grant option to owner)
        # pgbouncer: true         # optional, add this database to pgbouncer database list? true by default
        comment: pigsty meta database # optional, comment string for this database
        connlimit: -1             # optional, database connection limit, -1 (default) disables the limit
        schemas: [pigsty]         # optional, additional schemas to be created, array of schema names
        extensions:               # optional, additional extensions to be installed: array of extension definitions `{name,schema}`
          - { name: adminpack, schema: pg_catalog }   # install adminpack to pg_catalog
          - { name: postgis, schema: public }         # if schema is omitted, extension is installed according to search_path
          - { name: timescaledb }                     # some extensions are not relocatable, you can just omit the schema part
      # define additional databases named grafana & prometheus (optional)
      # - { name: grafana, owner: dbuser_grafana, revokeconn: true, comment: grafana primary database }
      # - { name: prometheus, owner: dbuser_prometheus, revokeconn: true, comment: prometheus primary database, extensions: [{ name: timescaledb }] }

    # biz users: defining business users (optional)
    pg_users:                     # define business users/roles on this cluster, array of user definitions
      # admin user for the meta database (used by default for pigsty app deployment)
      - name: dbuser_meta         # required, `name` is the only mandatory field of a user definition
        password: md5d3d10d8cad606308bdb180148bf663e1 # md5 salted password of 'DBUser.Meta'
        # optional, plain text and md5 passwords are both acceptable (the latter prefixed with `md5`)
        login: true               # optional, can login, true by default (new biz ROLEs should be false)
        superuser: false          # optional, is superuser? false by default
        createdb: false           # optional, can create database? false by default
        createrole: false         # optional, can create role? false by default
        inherit: true             # optional, can this role use inherited privileges? true by default
        replication: false       # optional, can this role do replication? false by default
        bypassrls: false          # optional, can this role bypass row level security? false by default
        pgbouncer: true           # optional, add this user to pgbouncer user-list? false by default (production users should set this explicitly)
        connlimit: -1             # optional, user connection limit, -1 (default) disables the limit
        expire_in: 3650           # optional, now + n days when this role expires (OVERWRITES expire_at)
        expire_at: '2030-12-31'   # optional, YYYY-MM-DD 'timestamp' when this role expires (OVERWRITTEN by expire_in)
        comment: pigsty admin user # optional, comment string for this user/role
        roles: [dbrole_admin]     # optional, roles this user belongs to; default roles are dbrole_{admin,readonly,readwrite,offline}
        parameters: {}            # optional, role-level parameters set with `ALTER ROLE SET`
        # search_path: public     # key-value config parameters per postgresql documentation (e.g. use pigsty as default search_path)
      - { name: dbuser_view, password: DBUser.Viewer, pgbouncer: true, roles: [dbrole_readonly], comment: read-only viewer for meta database }
      # define additional business users for prometheus & grafana (optional)
      - { name: dbuser_grafana, password: DBUser.Grafana, pgbouncer: true, roles: [dbrole_admin], comment: admin user for grafana database }
      - { name: dbuser_prometheus, password: DBUser.Prometheus, pgbouncer: true, roles: [dbrole_admin], comment: admin user for prometheus database, createrole: true }

    # hba rules: defining extra HBA rules on this cluster (optional)
    pg_hba_rules_extra:           # extra HBA rules to be installed on this cluster
      - title: reject grafana non-local access # required, rule title (used as hba description & comment string)
        role: common              # required, which roles does this apply to? ('common' applies to all roles)
        rules:                    # required, rule content: array of hba strings
          - local grafana dbuser_grafana md5
          - host grafana dbuser_grafana 127.0.0.1/32 md5
          - host grafana dbuser_grafana 10.10.10.10/32 md5

    vip_mode: l2                  # set up a level-2 vip for cluster pg-meta
    vip_address: 10.10.10.2       # virtual ip address that binds to the primary instance of cluster pg-meta
    vip_cidrmask: 8               # cidr network mask length
    vip_interface: eth1           # interface to add the virtual ip to

In addition to PostgreSQL, Pigsty has provided Redis deployment and monitoring support since v1.3.

Example: Redis Cache Cluster

#----------------------------------#
# redis sentinel example           #
#----------------------------------#
redis-meta:
  hosts:
    10.10.10.10:
      redis_node: 1
      redis_instances: { 6001: {}, 6002: {}, 6003: {} }
  vars:
    redis_cluster: redis-meta
    redis_mode: sentinel
    redis_max_memory: 128MB

#----------------------------------#
# redis cluster example            #
#----------------------------------#
redis-test:
  hosts:
    10.10.10.11:
      redis_node: 1
      redis_instances: { 6501: {}, 6502: {}, 6503: {}, 6504: {}, 6505: {}, 6506: {} }
    10.10.10.12:
      redis_node: 2
      redis_instances: { 6501: {}, 6502: {}, 6503: {}, 6504: {}, 6505: {}, 6506: {} }
  vars:
    redis_cluster: redis-test     # name of this redis 'cluster'
    redis_mode: cluster           # standalone,cluster,sentinel
    redis_max_memory: 64MB        # max memory used by each redis instance
    redis_mem_policy: allkeys-lru # memory eviction policy

#----------------------------------#
# redis standalone example         #
#----------------------------------#
redis-common:
  hosts:
    10.10.10.13:
      redis_node: 1
      redis_instances:
        6501: {}
        6502: { replica_of: '10.10.10.13 6501' }
        6503: { replica_of: '10.10.10.13 6501' }
  vars:
    redis_cluster: redis-common   # name of this redis 'cluster'
    redis_mode: standalone        # standalone,cluster,sentinel
    redis_max_memory: 64MB        # max memory used by each redis instance

Starting with Pigsty v1.4, initial support for MatrixDB (Greenplum 7) is provided.

Example: MatrixDB Data WareHouse

#----------------------------------#
# cluster: mx-mdw (gp master)      #
#----------------------------------#
mx-mdw:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary, nodename: mx-mdw-1 }
  vars:
    gp_role: master               # this cluster is used as the greenplum master
    pg_shard: mx                  # pgsql sharding name & gpsql deployment name
    pg_cluster: mx-mdw            # this master cluster's name is mx-mdw
    pg_databases:
      - { name: matrixmgr, extensions: [ { name: matrixdbts } ] }
      - { name: meta }
    pg_users:
      - { name: meta, password: DBUser.Meta, pgbouncer: true }
      - { name: dbuser_monitor, password: DBUser.Monitor, roles: [ dbrole_readonly ], superuser: true }
    pgbouncer_enabled: true       # enable pgbouncer for greenplum master
    pgbouncer_exporter_enabled: false # disable pgbouncer_exporter for greenplum master
    pg_exporter_params: 'host=127.0.0.1&sslmode=disable' # use 127.0.0.1 as local monitor host

#----------------------------------#
# cluster: mx-sdw (gp segments)    #
#----------------------------------#
mx-sdw:
  hosts:
    10.10.10.11:
      nodename: mx-sdw-1          # greenplum segment node
      pg_instances:               # greenplum segment instances
        6000: { pg_cluster: mx-seg1, pg_seq: 1, pg_role: primary, pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg2, pg_seq: 2, pg_role: replica, pg_exporter_port: 9634 }
    10.10.10.12:
      nodename: mx-sdw-2
      pg_instances:
        6000: { pg_cluster: mx-seg2, pg_seq: 1, pg_role: primary, pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg3, pg_seq: 2, pg_role: replica, pg_exporter_port: 9634 }
    10.10.10.13:
      nodename: mx-sdw-3
      pg_instances:
        6000: { pg_cluster: mx-seg3, pg_seq: 1, pg_role: primary, pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg1, pg_seq: 2, pg_role: replica, pg_exporter_port: 9634 }
  vars:
    gp_role: segment              # these are nodes for gp segments
    pg_shard: mx                  # pgsql sharding name & gpsql deployment name
    pg_cluster: mx-sdw            # these segment clusters are named mx-sdw
    pg_preflight_skip: true       # skip preflight check (since pg_seq & pg_role & pg_cluster do not exist)
    pg_exporter_config: pg_exporter_basic.yml # use basic config to avoid segment server crash
    pg_exporter_params: 'options=-c%20gp_role%3Dutility&sslmode=disable' # use gp_role=utility to connect to segments

Ubiquitous Deployment

Pigsty can use Vagrant and VirtualBox to pull up the required virtual machine environment on your own laptop, or use Terraform to automatically request ECS/VPC resources from your cloud provider, creating and destroying them with a single click.

The virtual machines in the sandbox environment have fixed resource names and IP addresses, making them very suitable for software development testing and experimental demonstrations.

The default sandbox configuration is a single node with 2 cores and 4GB, IP address 10.10.10.10, with a single database instance named pg-meta-1 deployed.

A full four-node version of the sandbox is also available, with three additional database nodes, which can fully demonstrate the capabilities of Pigsty's high availability architecture and monitoring system.


System Requirements

  • Linux kernel, x86_64 processor
  • Use CentOS 7 / RedHat 7 / Oracle Linux 7 or other equivalent operating system distribution
  • CentOS 7.8.2003 x86_64 is highly recommended and has been tested in production environments for a long time

Single Node Basic Specifications

  • Min specification: 1 core, 1GB (OOM prone, at least 2GB of RAM recommended)
  • Recommended specifications: 2 cores, 4GB (sandbox default configuration)
  • A single PostgreSQL instance pg-meta-1 will be deployed
  • In the sandbox, the IP of this node is fixed to 10.10.10.10

Four-Node Basic Specifications

  • The meta node requirements are the same as described for a single node

  • Deploy an additional three-node PostgreSQL database cluster pg-test

  • Common database node with min specs: 1 core, 1GB, 2GB RAM recommended.

  • Three nodes with fixed IP addresses: 10.10.10.11, 10.10.10.12, 10.10.10.13

Versatile Scenario

Pull up production-ready SaaS applications with one click, perform rapid data analysis, and build low-code visualization dashboards.

SaaS Software

Pigsty installs Docker on the meta node by default, and you can pull up all kinds of SaaS applications with one click: Gitlab, an open-source code hosting platform; Discourse, an open-source forum; Mastodon, an open-source social network; Odoo, an open-source ERP; and software such as UFIDA and Kingdee.

You can use Docker to pull up the stateless parts, modify their database connection strings to point at an external database, and get a silky-smooth cloud-native management experience with production-grade data persistence. For more details, see Tutorial: Docker Application.
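As a sketch of this pattern (the image name, port, and environment variable below are illustrative; each application documents its own connection setting), a stateless container can be pointed at the Pigsty-managed PostgreSQL instead of a bundled database:

```yaml
# docker-compose.yml (illustrative): stateless app in a container, stateful data in Pigsty PG
version: "3"
services:
  app:
    image: example/app:latest     # hypothetical application image
    environment:
      # connection string of the external Pigsty-managed database (sandbox defaults shown)
      DATABASE_URL: "postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta"
    ports:
      - "8000:8000"
```

The container can then be destroyed and recreated freely, while the data stays in the highly available PostgreSQL cluster.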

Data Analysis

Pigsty is both a battery-included PostgreSQL distribution and a data analysis environment for building low-code visualization applications. You can go directly from SQL data processing to ECharts plotting in one step, or use a more elaborate workflow: for example, use PG as the primary database, storing data and implementing business logic in SQL; use the built-in PostgREST to automatically generate back-end APIs; use the built-in JupyterLab for complex data analysis in Python; and use ECharts for data visualization with Grafana for interactivity.

Pigsty comes with several sample applications for reference.

  • pglog: analysis of PostgreSQL CSV log samples
  • covid: visualization of COVID-19 outbreak data
  • isd: global surface weather station data query
  • dbeng: database popularity ranking trends
  • worktime: query of work-hour and commuting schedules at big tech companies


Safety and Thrifty

Pigsty can reduce the total cost of ownership of a database by 50% to 80% and put data back in the hands of the users themselves!

Public cloud databases/RDS are so-called "out-of-the-box" solutions, yet they fall far short of satisfying users: expensive compared to self-hosting, with many features requiring superuser privileges neutered, clumsy UIs and take-it-or-leave-it feature sets. But among all the problems, the most important are the safety and cost of cloud software.

Safety

  • Software that runs on your own computer keeps running even if its vendor goes out of business. But if the company or department providing a cloud service folds or decides to stop supporting it, that software stops working, and the data you created with it is locked up: it lives only in the cloud, not on your own server's disk, and the only compensation you can usually expect is a token voucher.
  • The inability to customize or extend is further exacerbated in cloud databases. Cloud databases typically do not give users a database superuser, which locks out a whole host of advanced features as well as the ability to install extensions on your own. Meanwhile, "streaming replication" and "high availability", which ought to be standard database features, are often sold to users as value-added items.
  • Cloud services may suddenly suspend your account without warning or recourse. You could be judged in violation of the TOS by an automated system while completely innocent: undocumented use of ports 80 & 53, or an account compromised and used to send malware or phishing emails, triggering a TOS breach. Or you could be hammered by a cloud vendor for political reasons, as Parler was.
  • The domestic habit of avoiding SaaS and insisting on self-built or open-source solutions is a lesson the industry paid for in real money under a poor software ecosystem. Putting your core asset, your data, on someone else's storage is like leaving gold at someone else's counter. There is nothing you can do to prevent, monitor, or even become aware of cloud vendors, or simply malicious or curious OPS and DBAs, snooping around and stealing your precious data.

Not so with Pigsty, which can be deployed anywhere, including on your own servers. It is open source and free, requires no license and no Internet access, and collects no user data. You can run it on your own servers until the seas run dry.

Thrifty

The cost of cloud databases is the other issue: saving money is an immediate need for users. Public cloud vendors' RDS may have advantages over traditional commercial databases, but it is still far more expensive than self-hosting open-source databases. Statistically, the total cost of ownership of RDS is 2~3x that of self-hosting on cloud servers, and as much as 5~10x that of self-hosting in an IDC.

52C / 400GB / 3TB x 2     Price (5Y)    Cost / Year
IDC & Your own            150K ¥        30K ¥
ECS                       310K ¥        63K ¥
RDS                       810K ¥        160K ¥

Pigsty has significant cost advantages over cloud databases. For example, you can buy cloud servers of the same size for half the cost of a cloud database and deploy the databases yourself with Pigsty, enjoying most of the ease and convenience of a managed public cloud (IaaS) while instantly saving more than half the overhead.

What’s more, Pigsty can significantly improve productivity: it allows one or two senior DBAs to leave the trivial chores to software and easily manage hundreds of database clusters; it also allows a junior R&D engineer, after brief training, to quickly reach roughly 70% of a senior DBA's proficiency.

Pigsty is open source and free. While providing an experience similar to, or even exceeding, cloud vendor RDS, it can reduce the total cost of ownership of a database by 50%~80% and put data truly back under the user's control.


Open Source Distribution for PostgreSQL

RedHat for Linux! Battery-Included!

Unparalleled Observability

Open source monitoring best practice! Bring vision & insight to users

Auto Piloting

Self-Healing HA Architecture, Cold backups & WAL Archive, Steady as she goes!

Infra as Code Toolbox

HashiCorp for Database! Infra as Code, Database as Data

Multi-Cloud Deploy

Vagrant & Terraform, Deploy to anywhere you want!

Versatile Apps

Launch production-grade software, perform data analysis & visualization demo in low-code style

RDS Alternative

Safe and thrifty: up to 50%-80% cost savings compared to RDS!

Last modified 2022-06-04: fill en docs (5a858d3)