CREATE DATASOURCE TABLE

Description

The CREATE TABLE statement defines a new table using a Data Source.

Syntax

  CREATE TABLE [ IF NOT EXISTS ] table_identifier
      [ ( col_name1 col_type1 [ COMMENT col_comment1 ], ... ) ]
      USING data_source
      [ OPTIONS ( key1=val1, key2=val2, ... ) ]
      [ PARTITIONED BY ( col_name1, col_name2, ... ) ]
      [ CLUSTERED BY ( col_name3, col_name4, ... )
          [ SORTED BY ( col_name [ ASC | DESC ], ... ) ]
          INTO num_buckets BUCKETS ]
      [ LOCATION path ]
      [ COMMENT table_comment ]
      [ TBLPROPERTIES ( key1=val1, key2=val2, ... ) ]
      [ AS select_statement ]

Note that the clauses between the USING clause and the AS SELECT clause can appear in any order. For example, you can write COMMENT table_comment after TBLPROPERTIES.

Parameters

  • table_identifier

    Specifies a table name, which may be optionally qualified with a database name.

    Syntax: [ database_name. ] table_name

  • USING data_source

    Data source is the input format used to create the table. The data source can be CSV, TXT, ORC, JDBC, PARQUET, etc.

  • OPTIONS

    Options of the data source, which are injected into the table's storage properties.

  • PARTITIONED BY

    Partitions are created on the table based on the specified columns.

  • CLUSTERED BY

    Partitions created on the table will be bucketed into a fixed number of buckets based on the columns specified for bucketing.

    NOTE: Bucketing is an optimization technique that uses buckets (and bucketing columns) to determine data partitioning and avoid data shuffle.

  • SORTED BY

    Specifies an ordering of bucket columns. Optionally, one can use ASC for an ascending order or DESC for a descending order after any column name in the SORTED BY clause. If not specified, ASC is assumed by default; see the example following this parameter list.

  • INTO num_buckets BUCKETS

    Specifies the number of buckets, which is used in the CLUSTERED BY clause.

  • LOCATION

    Path to the directory where table data is stored, which could be a path on distributed storage like HDFS, etc.

  • COMMENT

    A string literal to describe the table.

  • TBLPROPERTIES

    A list of key-value pairs that is used to tag the table definition.

  • AS select_statement

    The table is populated using the data from the select statement.
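
To illustrate SORTED BY together with CLUSTERED BY, a bucketed table whose rows are sorted within each bucket could be defined as follows (the table and column names here are purely illustrative):

  --Create a bucketed table whose buckets are sorted by name (illustrative names)
  CREATE TABLE student_sorted_bucket (id INT, name STRING, age INT)
      USING PARQUET
      CLUSTERED BY (id)
      SORTED BY (name ASC)
      INTO 4 BUCKETS;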

Data Source Interaction

A Data Source table acts like a pointer to the underlying data source. For example, you can create a table “foo” in Spark which points to a table “bar” in MySQL using JDBC Data Source. When you read/write table “foo”, you actually read/write table “bar”.
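
A sketch of such a JDBC-backed table is shown below; the connection URL, database, user, and password are placeholders and must refer to an existing MySQL table:

  --Hypothetical JDBC table 'foo' pointing at an existing MySQL table 'bar'
  CREATE TABLE foo
      USING JDBC
      OPTIONS (
          'url'='jdbc:mysql://localhost:3306/mydb',
          'dbtable'='bar',
          'user'='spark',
          'password'='secret'
      );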

In general, CREATE TABLE creates a “pointer”, and you need to make sure it points to something that exists. An exception is file sources such as Parquet and JSON. If you don’t specify the LOCATION, Spark will create a default table location for you.
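
For example, a file-source table can be pointed at an existing directory of Parquet files via LOCATION (the path below is hypothetical):

  --Hypothetical directory containing existing parquet files
  CREATE TABLE student_external (id INT, name STRING, age INT)
      USING PARQUET
      LOCATION '/data/warehouse/student';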

For CREATE TABLE AS SELECT, Spark will overwrite the underlying data source with the data of the input query, to make sure the table that gets created contains exactly the same data as the input query.

Examples

  --Use data source
  CREATE TABLE student (id INT, name STRING, age INT) USING CSV;

  --Use data from another table
  CREATE TABLE student_copy USING CSV
      AS SELECT * FROM student;

  --Omit the USING clause, which uses the default data source (parquet by default)
  CREATE TABLE student (id INT, name STRING, age INT);

  --Use parquet data source with parquet storage options
  --The bloom filter is enabled for columns 'id' and 'name' when writing the parquet file,
  --but not for column 'age'
  CREATE TABLE student_parquet(id INT, name STRING, age INT) USING PARQUET
      OPTIONS (
          'parquet.bloom.filter.enabled'='true',
          'parquet.bloom.filter.enabled#age'='false'
      );

  --Specify table comment and properties
  CREATE TABLE student (id INT, name STRING, age INT) USING CSV
      COMMENT 'this is a comment'
      TBLPROPERTIES ('foo'='bar');

  --Specify table comment and properties with different clauses order
  CREATE TABLE student (id INT, name STRING, age INT) USING CSV
      TBLPROPERTIES ('foo'='bar')
      COMMENT 'this is a comment';

  --Create partitioned and bucketed table
  CREATE TABLE student (id INT, name STRING, age INT)
      USING CSV
      PARTITIONED BY (age)
      CLUSTERED BY (id) INTO 4 buckets;

  --Create partitioned and bucketed table through CTAS
  CREATE TABLE student_partition_bucket
      USING parquet
      PARTITIONED BY (age)
      CLUSTERED BY (id) INTO 4 buckets
      AS SELECT * FROM student;

  --Create bucketed table through CTAS and CTE
  CREATE TABLE student_bucket
      USING parquet
      CLUSTERED BY (id) INTO 4 buckets (
          WITH tmpTable AS (
              SELECT * FROM student WHERE id > 100
          )
          SELECT * FROM tmpTable
      );