Greenplum Database tables are similar to tables in any relational database, except that table rows are distributed across the different segments in the system. When you create a table, you specify the table’s distribution policy.

Creating a Table

The CREATE TABLE command creates a table and defines its structure. When you create a table, you define:

  • The columns of the table and their associated data types. See Choosing Column Data Types.
  • Any table or column constraints to limit the data that a column or table can contain. See Setting Table and Column Constraints.
  • The distribution policy of the table, which determines how Greenplum Database divides the table rows across the segments. See Choosing the Table Distribution Policy.

Choosing Column Data Types

The data type of a column determines the types of data values the column can contain. Choose the data type that uses the least possible space but can still accommodate your data and that best constrains the data. For example, use character data types for strings, date or timestamp data types for dates, and numeric data types for numbers.

For table columns that contain textual data, specify the data type VARCHAR or TEXT. Specifying the data type CHAR is not recommended. In Greenplum Database, the data types VARCHAR and TEXT handle padding added to the data (space characters added after the last non-space character) as significant characters, whereas the data type CHAR does not. For information on the character data types, see the CREATE TABLE command in the Greenplum Database Reference Guide.
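A quick way to see the difference is to compare values with and without trailing padding; the behavior follows PostgreSQL character-type semantics:

  => SELECT 'abc '::varchar(5) = 'abc'::varchar(5);  -- false: the trailing space is significant
  => SELECT 'abc '::char(5) = 'abc'::char(5);        -- true: CHAR ignores trailing padding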

Use the smallest numeric data type that will accommodate your numeric data and allow for future expansion. For example, using BIGINT for data that fits in INT or SMALLINT wastes storage space. If you expect that your data values will expand over time, consider that changing from a smaller data type to a larger data type after loading large amounts of data is costly. For example, if your current data values fit in a SMALLINT but it is likely that the values will expand, INT is the better long-term choice.

Use the same data types for columns that you plan to use in cross-table joins. Cross-table joins usually use the primary key in one table and a foreign key in the other table. When the data types are different, the database must convert one of them so that the data values can be compared correctly, which adds unnecessary overhead.
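For example, a minimal sketch of two tables that share an integer join key; the table and column names are hypothetical, and the point is only that both customer_id columns use the same type, so joins on them need no implicit cast:

  => CREATE TABLE customers
     ( customer_id integer,
       name text )
     DISTRIBUTED BY (customer_id);

  => CREATE TABLE transactions
     ( txn_id bigint,
       customer_id integer,    -- same type as customers.customer_id
       amount numeric )
     DISTRIBUTED BY (txn_id);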

Greenplum Database has a rich set of native data types available to users. See the Greenplum Database Reference Guide for information about the built-in data types.

Setting Table and Column Constraints

You can define constraints on columns and tables to restrict the data in your tables. Greenplum Database support for constraints is the same as in PostgreSQL, with some limitations, including:

  • CHECK constraints can refer only to the table on which they are defined.

  • UNIQUE and PRIMARY KEY constraints must be compatible with their table’s distribution key and partitioning key, if any.

    Note

    UNIQUE and PRIMARY KEY constraints are not allowed on append-optimized tables because the UNIQUE indexes that are created by the constraints are not allowed on append-optimized tables.

  • FOREIGN KEY constraints are allowed, but not enforced.

  • Constraints that you define on partitioned tables apply to the partitioned table as a whole. You cannot define constraints on the individual parts of the table.

Check Constraints

Check constraints allow you to specify that the value in a certain column must satisfy a Boolean (truth-value) expression. For example, to require positive product prices:

  => CREATE TABLE products
     ( product_no integer,
       name text,
       price numeric CHECK (price > 0) );

Not-Null Constraints

Not-null constraints specify that a column must not assume the null value. A not-null constraint is always written as a column constraint. For example:

  => CREATE TABLE products
     ( product_no integer NOT NULL,
       name text NOT NULL,
       price numeric );

Unique Constraints

Unique constraints ensure that the data contained in a column or a group of columns is unique with respect to all the rows in the table. The table must be hash-distributed or replicated (not DISTRIBUTED RANDOMLY). If the table is hash-distributed, the constraint columns must be the same as (or a superset of) the table’s distribution key columns. For example:

  => CREATE TABLE products
     ( product_no integer UNIQUE,
       name text,
       price numeric )
     DISTRIBUTED BY (product_no);

Primary Keys

A primary key constraint is a combination of a UNIQUE constraint and a NOT NULL constraint. The table must be hash-distributed (not DISTRIBUTED RANDOMLY), and the primary key columns must be the same as (or a superset of) the table’s distribution key columns. If a table has a primary key, this column (or group of columns) is chosen as the distribution key for the table by default. For example:

  => CREATE TABLE products
     ( product_no integer PRIMARY KEY,
       name text,
       price numeric )
     DISTRIBUTED BY (product_no);

Foreign Keys

Foreign keys are not supported. You can declare them, but referential integrity is not enforced.

Foreign key constraints specify that the values in a column or a group of columns must match the values appearing in some row of another table to maintain referential integrity between two related tables. Referential integrity checks cannot be enforced between the distributed table segments of a Greenplum database.
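For example, a sketch that declares (but does not enforce) a foreign key against the products table from the primary key example above; the orders table and its columns are hypothetical:

  => CREATE TABLE orders
     ( order_id integer,
       product_no integer REFERENCES products (product_no),
       quantity integer )
     DISTRIBUTED BY (order_id);

Greenplum Database accepts the declaration, but it does not check that every orders.product_no value exists in products.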

Choosing the Table Distribution Policy

All Greenplum Database tables are distributed. When you create or alter a table, you optionally specify DISTRIBUTED BY (hash distribution), DISTRIBUTED RANDOMLY (round-robin distribution), or DISTRIBUTED REPLICATED (fully distributed) to determine the table row distribution.

Note

The Greenplum Database server configuration parameter gp_create_table_random_default_distribution controls the table distribution policy if the DISTRIBUTED BY clause is not specified when you create a table.

For information about the parameter, see “Server Configuration Parameters” of the Greenplum Database Reference Guide.
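For example, you can check the current setting for a session with SHOW:

  => SHOW gp_create_table_random_default_distribution;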

Consider the following points when deciding on a table distribution policy.

  • Even Data Distribution — For the best possible performance, all segments should contain equal portions of data. If the data is unbalanced or skewed, the segments with more data must work harder to perform their portion of the query processing. Choose a distribution key that is unique for each record, such as the primary key. (A query for checking per-segment row counts is shown after this list.)
  • Local and Distributed Operations — Local operations are faster than distributed operations. Query processing is fastest if the work associated with join, sort, or aggregation operations is done locally, at the segment level. Work done at the system level requires distributing tuples across the segments, which is less efficient. When tables share a common distribution key, the work of joining or sorting on their shared distribution key columns is done locally. With a random distribution policy, local join operations are not an option.
  • Even Query Processing — For best performance, all segments should handle an equal share of the query workload. Query workload can be skewed if a table’s data distribution policy and the query predicates are not well matched. For example, suppose that a sales transactions table is distributed on the customer ID column (the distribution key). If a predicate in a query references a single customer ID, the query processing work is concentrated on just one segment.
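To check how evenly a table’s rows are spread across the segments, you can count rows per segment using the gp_segment_id system column; the sales table name here is hypothetical:

  => SELECT gp_segment_id, count(*)
     FROM sales
     GROUP BY gp_segment_id
     ORDER BY gp_segment_id;

A roughly equal count on every segment indicates an even distribution.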

The replicated table distribution policy (DISTRIBUTED REPLICATED) should be used only for small tables. Replicating data to every segment is costly in both storage and maintenance, and prohibitive for large fact tables. The primary use cases for replicated tables are to:

  • remove restrictions on operations that user-defined functions can perform on segments, and
  • improve query performance by making it unnecessary to broadcast frequently used tables to all segments.

Note

The hidden system columns (ctid, cmin, cmax, xmin, xmax, and gp_segment_id) cannot be referenced in user queries on replicated tables because they have no single, unambiguous value. Greenplum Database returns a "column does not exist" error for the query.
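For example, assuming a replicated table named zip_codes (hypothetical), a query that references one of these columns is rejected:

  => SELECT gp_segment_id, count(*) FROM zip_codes GROUP BY gp_segment_id;
  -- fails with a "column does not exist" error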

Declaring Distribution Keys

CREATE TABLE’s optional clauses DISTRIBUTED BY, DISTRIBUTED RANDOMLY, and DISTRIBUTED REPLICATED specify the distribution policy for a table. The default is a hash distribution policy that uses either the PRIMARY KEY (if the table has one) or the first column of the table as the distribution key. Columns with geometric or user-defined data types are not eligible as Greenplum Database distribution key columns. If a table does not have an eligible column, Greenplum Database distributes the rows randomly or in round-robin fashion.

Replicated tables have no distribution key because every row is distributed to every Greenplum Database segment instance.
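For example, a minimal sketch of a small lookup table created with the replicated policy; the table name is hypothetical:

  => CREATE TABLE country_codes
     ( code varchar(2),
       name text )
     DISTRIBUTED REPLICATED;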

To ensure even distribution of hash-distributed data, choose a distribution key that is unique for each record. If that is not possible, choose DISTRIBUTED RANDOMLY. For example:

  => CREATE TABLE products
     ( name varchar(40),
       prod_id integer,
       supplier_id integer )
     DISTRIBUTED BY (prod_id);

  => CREATE TABLE random_stuff
     ( things text,
       doodads text,
       etc text )
     DISTRIBUTED RANDOMLY;

Important

If a primary key exists, it is the default distribution key for the table. If no primary key exists, but a unique key exists, this is the default distribution key for the table.

Custom Distribution Key Hash Functions

The hash function used for a hash distribution policy is defined by the hash operator class for the column’s data type. By default, Greenplum Database uses the data type’s default hash operator class, the same operator class used for hash joins and hash aggregates, which is suitable for most use cases. However, you can declare a non-default hash operator class in the DISTRIBUTED BY clause.

Using a custom hash operator class can be useful to support co-located joins on a different operator than the default equality operator (=).

Example Custom Hash Operator Class

This example creates a custom hash operator class for the integer data type and uses it to improve query performance. The operator class compares the absolute values of integers.

Create a function and an equality operator that returns true if the absolute values of two integers are equal.

  CREATE FUNCTION abseq(int, int) RETURNS BOOL AS
  $$
    begin return abs($1) = abs($2); end;
  $$ LANGUAGE plpgsql STRICT IMMUTABLE;

  CREATE OPERATOR |=| (
    PROCEDURE = abseq,
    LEFTARG = int,
    RIGHTARG = int,
    COMMUTATOR = |=|,
    hashes, merges);

Now, create a hash function and operator class that uses the operator.

  CREATE FUNCTION abshashfunc(int) RETURNS int AS
  $$
    begin return hashint4(abs($1)); end;
  $$ LANGUAGE plpgsql STRICT IMMUTABLE;

  CREATE OPERATOR CLASS abs_int_hash_ops FOR TYPE int4
    USING hash AS
    OPERATOR 1 |=|,
    FUNCTION 1 abshashfunc(int);

Also, create less-than and greater-than operators, and a btree operator class that uses them. They are not needed for these queries, but the Postgres Planner will not consider co-location of joins without them.

  CREATE FUNCTION abslt(int, int) RETURNS BOOL AS
  $$
    begin return abs($1) < abs($2); end;
  $$ LANGUAGE plpgsql STRICT IMMUTABLE;

  CREATE OPERATOR |<| (
    PROCEDURE = abslt,
    LEFTARG = int,
    RIGHTARG = int);

  CREATE FUNCTION absgt(int, int) RETURNS BOOL AS
  $$
    begin return abs($1) > abs($2); end;
  $$ LANGUAGE plpgsql STRICT IMMUTABLE;

  CREATE OPERATOR |>| (
    PROCEDURE = absgt,
    LEFTARG = int,
    RIGHTARG = int);

  CREATE FUNCTION abscmp(int, int) RETURNS int AS
  $$
    begin return btint4cmp(abs($1), abs($2)); end;
  $$ LANGUAGE plpgsql STRICT IMMUTABLE;

  CREATE OPERATOR CLASS abs_int_btree_ops FOR TYPE int4
    USING btree AS
    OPERATOR 1 |<|,
    OPERATOR 3 |=|,
    OPERATOR 5 |>|,
    FUNCTION 1 abscmp(int, int);

Now, you can use the custom hash operator class in tables.

  CREATE TABLE atab (a int) DISTRIBUTED BY (a abs_int_hash_ops);
  CREATE TABLE btab (b int) DISTRIBUTED BY (b abs_int_hash_ops);
  INSERT INTO atab VALUES (-1), (0), (1);
  INSERT INTO btab VALUES (-1), (0), (1), (2);

Queries that join on the custom equality operator |=| can take advantage of the co-location.

With the default integer opclass, this query requires Redistribute Motion nodes.

  EXPLAIN (COSTS OFF) SELECT a, b FROM atab, btab WHERE a = b;
                             QUERY PLAN
  ------------------------------------------------------------------
   Gather Motion 4:1  (slice3; segments: 4)
     ->  Hash Join
           Hash Cond: (atab.a = btab.b)
           ->  Redistribute Motion 4:4  (slice1; segments: 4)
                 Hash Key: atab.a
                 ->  Seq Scan on atab
           ->  Hash
                 ->  Redistribute Motion 4:4  (slice2; segments: 4)
                       Hash Key: btab.b
                       ->  Seq Scan on btab
   Optimizer: Postgres query optimizer
  (11 rows)

With the custom opclass, a more efficient plan is possible.

  EXPLAIN (COSTS OFF) SELECT a, b FROM atab, btab WHERE a |=| b;
                   QUERY PLAN
  ------------------------------------------------------------------
   Gather Motion 4:1  (slice1; segments: 4)
     ->  Hash Join
           Hash Cond: (atab.a |=| btab.b)
           ->  Seq Scan on atab
           ->  Hash
                 ->  Seq Scan on btab
   Optimizer: Postgres query optimizer
  (7 rows)