pg_dumpall

Extracts all databases in a HAWQ system to a single script file or other archive file.

Synopsis

  pg_dumpall [<options>] ...

where:

  <general options> =
      [-f | --filespaces]
      [-i | --ignore-version]
      [--help]
      [--version]
  <options controlling output content> =
      [-a | --data-only]
      [-c | --clean]
      [-d | --inserts]
      [-D | --column-inserts]
      [-F | --filespaces]
      [-g | --globals-only]
      [-o | --oids]
      [-O | --no-owner]
      [-r | --resource-queues]
      [-s | --schema-only]
      [-S <username> | --superuser=<username>]
      [-v | --verbose]
      [-x | --no-privileges]
      [--disable-dollar-quoting]
      [--disable-triggers]
      [--use-set-session-authorization]
      [--gp-syntax]
      [--no-gp-syntax]
  <connection_options> =
      [-h <host> | --host <host>]
      [-l <dbname> | --database <dbname>]
      [-p <port> | --port <port>]
      [-U <username> | --username <username>]
      [-w | --no-password]
      [-W | --password]

Description

pg_dumpall is a standard PostgreSQL utility, also supported in HAWQ, for backing up all databases in a HAWQ (or PostgreSQL) instance. It creates a single (non-parallel) dump file.

pg_dumpall creates a single script file that contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database. pg_dumpall also dumps global objects that are common to all databases. (pg_dump does not save these objects.) This currently includes information about database users and groups, and access permissions that apply to databases as a whole.

Since pg_dumpall reads tables from all databases, you will most likely have to connect as a database superuser in order to produce a complete dump. Superuser privileges are also needed to execute the saved script, so that users and groups can be added and databases created.

The SQL script will be written to the standard output. Shell operators should be used to redirect it into a file.

pg_dumpall needs to connect to the HAWQ master server several times (once per database). If you use password authentication, a password could be requested for each connection, so using a ~/.pgpass file is recommended.
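
As a point of reference, a minimal sketch of a ~/.pgpass entry follows; the host name, port, and role shown are hypothetical placeholders. Each line uses the format hostname:port:database:username:password, an asterisk matches any value in that field, and the file must be readable only by its owner (for example, chmod 600 ~/.pgpass).

  # ~/.pgpass -- one connection entry per line (hypothetical values)
  mdw:5432:*:gpadmin:changeme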

Options

General Options

-f | --filespaces

Dump filespace definitions.

-i | --ignore-version

Ignore version mismatch between pg_dump and the database server. pg_dump can dump from servers running previous releases of HAWQ (or PostgreSQL), but some older versions may not be supported. Use this option if you need to override the version check.

--help

Displays this help, then exits.

--version

Displays the version of this utility, then exits.

Output Control Options

-a | --data-only

Dump only the data, not the schema (data definitions). This option is only meaningful for the plain-text format. For the archive formats, you can specify this option when you call pg_restore.

-c | --clean

Output commands to clean (DROP) database objects prior to (the commands for) creating them. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call pg_restore.

-d | --inserts

Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. Also, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore may fail altogether if you have rearranged column order. The -D option is safe against column order changes, though even slower.

-D | --column-inserts

Dump data as INSERT commands with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. Also, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
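
As a rough sketch of the difference between the formats, assuming a hypothetical table named sales with columns id and amount, and an arbitrary output file name:

  # by default, table data is dumped as COPY blocks:
  #   COPY sales (id, amount) FROM stdin;
  # with -d, each row becomes INSERT INTO sales VALUES (...);
  # with -D, the column list is spelled out explicitly:
  #   INSERT INTO sales (id, amount) VALUES (1, 19.99);
  $ pg_dumpall -D > db_column_inserts.sql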

-g | --globals-only

Dump only global objects (roles and tablespaces), no databases.

-o | --oids

Dump object identifiers (OIDs) as part of the data for every table. Use of this option is not recommended for files to be restored into HAWQ.

-O | --no-owner

Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call pg_restore.

-r | --resource-queues

Dump resource queue definitions.

-s | --schema-only

Dump only the object definitions (schema), not data.
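
One common split, shown here only as an illustration with arbitrary output file names, is to dump definitions and data separately:

  # object definitions only
  $ pg_dumpall -s > schema.sql
  # data only, to be loaded after the definitions exist
  $ pg_dumpall -a > data.sql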

-S <username> | --superuser=<username>

Specify the superuser user name to use when disabling triggers. This option is only relevant if --disable-triggers is used. Starting the resulting script as a superuser is preferred.

Note: HAWQ does not support user-defined triggers.

-x | --no-privileges | --no-acl

Prevent dumping of access privileges (GRANT/REVOKE commands).

--disable-dollar-quoting

This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.

--disable-triggers

This option is only relevant when creating a data-only dump. It instructs pg_dumpall to include commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you do not want to invoke triggers on the tables during data reload. You need superuser permissions to perform commands issued for --disable-triggers. Either specify a superuser name with the -S option, or start the resulting script as a superuser.

Note: HAWQ does not support user-defined triggers.

--use-set-session-authorization

Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards compatible, but depending on the history of the objects in the dump, may not restore properly. A dump using SET SESSION AUTHORIZATION will require superuser privileges to restore correctly, whereas ALTER OWNER requires lesser privileges.
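
As a rough sketch of the difference in the generated script (the table and role names here are hypothetical):

  # default: ownership is set with ALTER ... OWNER statements, e.g.
  #   ALTER TABLE sales OWNER TO sales_owner;
  # with --use-set-session-authorization, the script instead switches the
  # session role before creating each object:
  #   SET SESSION AUTHORIZATION sales_owner;
  $ pg_dumpall --use-set-session-authorization > db.out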

--gp-syntax

Output HAWQ syntax in the CREATE TABLE statements. This allows the distribution policy (DISTRIBUTED BY or DISTRIBUTED RANDOMLY clauses) of a HAWQ table to be dumped, which is useful for restoring into other HAWQ systems.
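
For example, with a hypothetical table, the dumped definition keeps its distribution clause roughly as follows:

  # CREATE TABLE statements retain the distribution policy, e.g.
  #   CREATE TABLE sales (id int, amount numeric) DISTRIBUTED BY (id);
  $ pg_dumpall --gp-syntax > db.out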

--no-gp-syntax

Do not use HAWQ syntax in the dump. This is the default when connected to a regular PostgreSQL (non-HAWQ) server.

Connection Options

-h <host> | --host <host>

The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable PGHOST or defaults to localhost.

-l <dbname> | --database <dbname>

Specifies the name of an alternate database to use for the initial connection, from which global objects are dumped and the list of databases to dump is determined.

-p <port> | --port <port>

The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable PGPORT or defaults to 5432.

-U <username> | --username <username>

The database role name to connect as. If not specified, reads from the environment variable PGUSER or defaults to the current system role name.

-w | --no-password

Do not prompt for a password.

-W | --password

Force a password prompt.
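
A hedged example of combining the connection options; the master host name and role shown here are placeholders, not required values:

  # explicit connection options (hypothetical host and role)
  $ pg_dumpall -h mdw -p 5432 -U gpadmin > db.out
  # equivalent, using the environment variables described above
  $ PGHOST=mdw PGPORT=5432 PGUSER=gpadmin pg_dumpall > db.out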

Notes

Since pg_dumpall calls pg_dump internally, some diagnostic messages will refer to pg_dump.

Once restored, it is wise to run ANALYZE on each database so the query planner has useful statistics. You can also run vacuumdb -a -z to analyze all databases.
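
For example, assuming hypothetical connection values:

  # vacuum and analyze every database after the restore
  $ vacuumdb -a -z -h mdw -p 5432 -U gpadmin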

All tablespace (filespace) directories used by pg_dumpall must exist before the restore. Otherwise, database creation will fail for databases in non-default locations.

Examples

To dump all databases:

  $ pg_dumpall > db.out

To reload this file:

  $ psql template1 -f db.out

To dump only global objects (including filespaces and resource queues):

  $ pg_dumpall -g -f -r

See Also

pg_dump