Module: MINIO

A local, open-source, S3-compatible alternative for the PostgreSQL backup repository

MinIO is an S3-compatible object storage server. It is designed to be scalable, secure, and easy to use. It has native multi-node, multi-drive HA support and can store documents, pictures, videos, and backups.

Pigsty uses MinIO as an optional PostgreSQL backup storage repo, in addition to the default local posix FS repo. If the MinIO repo is used, the MINIO module should be installed before any PGSQL modules.

MinIO requires a trusted CA to work, so you have to install the NODE module (which distributes and trusts the CA certificate) in addition to this module.


Playbook

There is a built-in playbook, minio.yml, for installing a MinIO cluster, but you have to define the cluster in the inventory first.

```bash
./minio.yml -l minio   # install minio cluster on group 'minio'
```
  • minio-id : generate minio identity
  • minio_os_user : create os user minio
  • minio_install : install minio/mcli rpm
  • minio_clean : remove minio data (disabled by default)
  • minio_dir : create minio directories
  • minio_config : generate minio config
    • minio_conf : minio main config
    • minio_cert : minio ssl cert
    • minio_dns : write minio dns records
  • minio_launch : launch minio service
  • minio_register : register minio to prometheus
  • minio_provision : create minio aliases/buckets/users
    • minio_alias : create minio client alias
    • minio_bucket : create minio buckets
    • minio_user : create minio biz users
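Each of these subtasks can also be invoked selectively with standard Ansible tags; a sketch using the task names listed above:

```bash
./minio.yml -l minio -t minio_config,minio_launch   # re-render minio config and restart the service
```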

The trusted CA file /etc/pki/ca.crt should already exist on all nodes; it is generated in role: ca and loaded & trusted by default in role: node.

You should install the MINIO module on Pigsty-managed nodes (i.e., install NODE first).


Configuration

You have to define a MinIO cluster in the inventory before deploying it; see the Parameters section for the full list of MinIO parameters.

And here are three typical deployment scenarios:

Single-Node Single-Drive

Reference: deploy-minio-single-node-single-drive

Defining a singleton MinIO instance is straightforward:

```yaml
# 1 Node 1 Drive (DEFAULT)
minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }
```

The only required params are minio_seq and minio_cluster, which generate a unique identity for each MinIO instance.

Single-Node Single-Drive mode is for development purposes, so you can use a plain directory as the data dir, which is /data/minio by default. Beware that in multi-drive or multi-node mode, MinIO will refuse to start if the data dir is a plain directory rather than a mount point.
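Since MinIO insists on real mount points in multi-drive / multi-node mode, it can be worth verifying each data dir before launch. A minimal sketch using the mountpoint utility from util-linux (the /dataN paths are illustrative):

```bash
# warn about any data dir that is a plain directory instead of a mount point
for dir in /data1 /data2 /data3 /data4; do
  mountpoint -q "$dir" || echo "WARNING: $dir is not a mount point"
done
```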

Single-Node Multi-Drive

Reference: deploy-minio-single-node-multi-drive

To use multiple disks on a single node, you have to specify minio_data in the format {{ prefix }}{x...y}, which defines a series of disk mount points.
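MinIO's {x...y} notation is analogous to shell brace expansion (which uses two dots instead of three); a quick illustration of the drive list such a pattern denotes:

```bash
echo /data{1..4}   # bash brace expansion prints: /data1 /data2 /data3 /data4
```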

```yaml
minio:
  hosts: { 10.10.10.10: { minio_seq: 1 } }
  vars:
    minio_cluster: minio         # minio cluster name, minio by default
    minio_data: '/data{1...4}'   # minio data dir(s), use {x...y} to specify multi drives
```

This example defines a single-node MinIO cluster with 4 drives: /data1, /data2, /data3, /data4. You have to mount them properly before launching MinIO:

```bash
mkfs.xfs /dev/sdb; mkdir /data1; mount -t xfs /dev/sdb /data1   # mount the 1st drive, ...
```
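To make these mounts persist across reboots, they should also be recorded in /etc/fstab; a sketch assuming the drives are /dev/sdb through /dev/sde (device names are illustrative):

```
# /etc/fstab (append): persist the four data mounts across reboots
/dev/sdb  /data1  xfs  defaults,noatime  0 0
/dev/sdc  /data2  xfs  defaults,noatime  0 0
/dev/sdd  /data3  xfs  defaults,noatime  0 0
/dev/sde  /data4  xfs  defaults,noatime  0 0
```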

Multi-Node Multi-Drive

Reference: deploy-minio-multi-node-multi-drive

The extra minio_node param will be used for a multi-node deployment:

```yaml
minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...2}'                         # use two disks per node
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
```

The ${minio_cluster} and ${minio_seq} placeholders will be replaced with the values of minio_cluster and minio_seq respectively, and the result is used as the MinIO node name.
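The substitution can be illustrated with a tiny shell sketch (using the pattern and values from the example above):

```bash
minio_cluster=minio
minio_node='${minio_cluster}-${minio_seq}.pigsty'   # the node name pattern, unexpanded
for minio_seq in 1 2 3; do
  eval echo "$minio_node"   # prints minio-1.pigsty, minio-2.pigsty, minio-3.pigsty
done
```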

Expose Service

MinIO serves on port 9000 by default. If a multi-node MinIO cluster is deployed, you can access its service via any node. It is better to expose the MinIO service through a load balancer, such as the default haproxy on NODE.

To do so, you have to define an extra service with haproxy_services:

```yaml
minio:
  hosts:
    10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
    10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
    10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
  vars:
    minio_cluster: minio
    node_cluster: minio
    minio_data: '/data{1...2}'                         # use two disks per node
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    haproxy_services:                                  # EXPOSING MINIO SERVICE WITH HAPROXY
      - name: minio                                    # [REQUIRED] service name, unique
        port: 9002                                     # [REQUIRED] service port, unique
        options:                                       # [OPTIONAL] minio health check
          - option httpchk
          - option http-keep-alive
          - http-check send meth OPTIONS uri /minio/health/live
          - http-check expect status 200
        servers:
          - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
```

Access Service

To use the exposed service, you have to update/append the MinIO credential in the pgbackrest_repo section:

```yaml
# This is the newly added HA MinIO repo definition, USE THIS INSTEAD!
minio_ha:
  type: s3
  s3_endpoint: minio-1.pigsty   # s3_endpoint could be any load balancer (10.10.10.1{0,1,2}), or a domain name pointing to any of the 3 nodes
  s3_region: us-east-1          # you could use the external domain name sss.pigsty, which resolves to any member (`minio_domain`)
  s3_bucket: pgsql              # instance names & node names can also be used: minio-1.pigsty, minio-2.pigsty, minio-3.pigsty, minio-1, minio-2, minio-3
  s3_key: pgbackrest            # better to use a new password for the MinIO pgbackrest user
  s3_key_secret: S3User.SomeNewPassWord
  s3_uri_style: path
  path: /pgbackrest
  storage_port: 9002            # use the load balancer port 9002 instead of the default 9000 (direct access)
  storage_ca_file: /etc/pki/ca.crt
  bundle: y
  cipher_type: aes-256-cbc      # better to use a new cipher password in your production environment
  cipher_pass: pgBackRest.With.Some.Extra.PassWord.And.Salt.${pg_cluster}
  retention_full_type: time
  retention_full: 14
```
Expose Admin

MinIO will serve an admin web portal on port 9001 by default.

It’s not wise to expose the admin portal to the public, but if you wish to do so, add MinIO to the infra_portal and refresh the nginx server:

```yaml
infra_portal:   # domain names and upstream servers
  # ...        # MinIO admin page requires HTTPS / WebSocket to work
  minio1 : { domain: sss.pigsty  ,endpoint: 10.10.10.10:9001 ,scheme: https ,websocket: true }
  minio2 : { domain: sss2.pigsty ,endpoint: 10.10.10.11:9001 ,scheme: https ,websocket: true }
  minio3 : { domain: sss3.pigsty ,endpoint: 10.10.10.12:9001 ,scheme: https ,websocket: true }
```

Check the MinIO demo config and special Vagrantfile for more details.


Administration

Here are some common MinIO mcli commands for reference; check the MinIO Client docs for more details.

Set Alias

```bash
mcli alias ls                                                                # list minio aliases (there is an sss by default)
mcli alias set sss https://sss.pigsty:9000 minioadmin minioadmin             # root user
mcli alias set pgbackrest https://sss.pigsty:9000 pgbackrest S3User.Backup   # backup user
```

User Admin

```bash
mcli admin user list sss   # list all users on sss
set +o history             # hide passwords in history and create minio users
mcli admin user add sss dba S3User.DBA
mcli admin user add sss pgbackrest S3User.Backup
set -o history
```

Bucket CRUD

```bash
mcli ls sss/                          # list buckets of alias 'sss'
mcli mb --ignore-existing sss/hello   # create a bucket named 'hello'
mcli rb --force sss/hello             # remove bucket 'hello' with force
```

Object CRUD

```bash
mcli cp -r /www/pigsty/*.rpm sss/infra/repo/                  # upload files to bucket 'infra' with prefix 'repo'
mcli cp sss/infra/repo/pg_exporter-0.5.0.x86_64.rpm /tmp/     # download file from minio to local
```

Dashboards

There are two dashboards for the MINIO module.


Parameters

There are 15 parameters in the MINIO module.

| Parameter        | Type     | Level | Comment                                                  |
|------------------|----------|-------|----------------------------------------------------------|
| minio_seq        | int      | I     | minio instance identifier, REQUIRED                      |
| minio_cluster    | string   | C     | minio cluster name, minio by default                     |
| minio_clean      | bool     | G/C/A | cleanup minio during init? false by default              |
| minio_user       | username | C     | minio os user, minio by default                          |
| minio_node       | string   | C     | minio node name pattern                                  |
| minio_data       | path     | C     | minio data dir(s), use {x...y} to specify multi drives   |
| minio_domain     | string   | G     | minio external domain name, sss.pigsty by default        |
| minio_port       | port     | C     | minio service port, 9000 by default                      |
| minio_admin_port | port     | C     | minio console port, 9001 by default                      |
| minio_access_key | username | C     | root access key, minioadmin by default                   |
| minio_secret_key | password | C     | root secret key, minioadmin by default                   |
| minio_extra_vars | string   | C     | extra environment variables for minio server             |
| minio_alias      | string   | G     | alias name for local minio deployment                    |
| minio_buckets    | bucket[] | C     | list of minio buckets to be created                      |
| minio_users      | user[]   | C     | list of minio users to be created                        |

Last modified 2023-02-27: add v2.0 images and docs (5b09f12)