s3 Table Function

Provides a table-like interface to select/insert files in Amazon S3. This table function is similar to hdfs, but provides S3-specific features.

Syntax

  s3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])

Arguments

  • path — Bucket URL with a path to the file. Supports the following wildcards in read-only mode: *, ?, {abc,def} and {N..M} where N, M are numbers and 'abc', 'def' are strings. For more information see here.
  • format — The format of the file.
  • structure — Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...'.
  • compression — Optional parameter. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, compression is autodetected from the file extension.
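When the bucket is not public, the optional credential arguments are passed positionally between path and format. A minimal sketch, reusing the example bucket from the section below with purely illustrative placeholder keys:

  SELECT *
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', '<aws_access_key_id>', '<aws_secret_access_key>', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
  LIMIT 2;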

Returned value

A table with the specified structure for reading or writing data in the specified file.

Examples

Selecting the first two rows from the table in the S3 file https://storage.yandexcloud.net/my-test-bucket-768/data.csv:

  SELECT *
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
  LIMIT 2;

  ┌─column1─┬─column2─┬─column3─┐
  │       1 │       2 │       3 │
  │       3 │       2 │       1 │
  └─────────┴─────────┴─────────┘

The same query, but reading from a file with gzip compression:

  SELECT *
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', 'gzip')
  LIMIT 2;

  ┌─column1─┬─column2─┬─column3─┐
  │       1 │       2 │       3 │
  │       3 │       2 │       1 │
  └─────────┴─────────┴─────────┘

Usage

Suppose that we have several files with the following URIs on S3:

  • https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv
  • https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv
  • https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv
  • https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_4.csv
  • https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv
  • https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv
  • https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv
  • https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_4.csv

Count the number of rows in the files ending with numbers from 1 to 3:

  SELECT count(*)
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV', 'name String, value UInt32')

  ┌─count()─┐
  │      18 │
  └─────────┘

Count the total number of rows in all the files in these two directories:

  SELECT count(*)
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV', 'name String, value UInt32')

  ┌─count()─┐
  │      24 │
  └─────────┘

Warning

If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use ?.

Count the total number of rows in files named file-000.csv, file-001.csv, …, file-999.csv:

  SELECT count(*)
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32');

  ┌─count()─┐
  │      12 │
  └─────────┘
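Following the warning above, the same set of files can also be matched without a leading-zero range by expanding the braces for each digit separately, or by using ? for each digit. A sketch under the same assumptions (same hypothetical bucket and file layout):

  SELECT count(*)
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{0..9}{0..9}{0..9}.csv', 'CSV', 'name String, value UInt32');

  SELECT count(*)
  FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-???.csv', 'CSV', 'name String, value UInt32');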

Insert data into the file test-data.csv.gz:

  INSERT INTO FUNCTION s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
  VALUES ('test-data', 1), ('test-data-2', 2);

Insert data into the file test-data.csv.gz from an existing table:

  INSERT INTO FUNCTION s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
  SELECT name, value FROM existing_table;

Partitioned Write

If you specify a PARTITION BY expression when inserting data into an S3 table, a separate file is created for each partition value. Splitting the data into separate files helps to improve the efficiency of read operations.

Examples

  1. Using partition ID in a key creates separate files:

  INSERT INTO TABLE FUNCTION
      s3('http://bucket.amazonaws.com/my_bucket/file_{_partition_id}.csv', 'CSV', 'a String, b UInt32, c UInt32')
      PARTITION BY a VALUES ('x', 2, 3), ('x', 4, 5), ('y', 11, 12), ('y', 13, 14), ('z', 21, 22), ('z', 23, 24);

As a result, the data is written into three files: file_x.csv, file_y.csv, and file_z.csv.

  2. Using partition ID in a bucket name creates files in different buckets:

  INSERT INTO TABLE FUNCTION
      s3('http://bucket.amazonaws.com/my_bucket_{_partition_id}/file.csv', 'CSV', 'a UInt32, b UInt32, c UInt32')
      PARTITION BY a VALUES (1, 2, 3), (1, 4, 5), (10, 11, 12), (10, 13, 14), (20, 21, 22), (20, 23, 24);

As a result, the data is written into three files in different buckets: my_bucket_1/file.csv, my_bucket_10/file.csv, and my_bucket_20/file.csv.
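The files produced by a partitioned write can be read back with the same table function. A minimal sketch for the first example above, assuming the written objects are readable at the same URL and using the {abc,def} wildcard to match all three files:

  SELECT *
  FROM s3('http://bucket.amazonaws.com/my_bucket/file_{x,y,z}.csv', 'CSV', 'a String, b UInt32, c UInt32');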

See Also