File Analysis

With the Table Value Function (TVF) feature, Doris can query files in object storage or HDFS as easily as it queries internal tables. It also supports automatic column type inference.

Usage

For more usage details, please see the documentation:

  • S3: supports file analysis on object storage compatible with S3
  • HDFS: supports file analysis on HDFS (a brief sketch follows this list)

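As a quick reference, here is a minimal sketch of the hdfs() TVF; the URI, fs.defaultFS, and hadoop.username values below are placeholders, and the full parameter list is in the HDFS documentation linked above:

    DESC FUNCTION hdfs(
        "uri" = "hdfs://127.0.0.1:8020/test2/test.snappy.parquet",
        "fs.defaultFS" = "hdfs://127.0.0.1:8020",
        "hadoop.username" = "hadoop",
        "format" = "parquet"
    );
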
The following sections illustrate how file analysis is conducted, using the S3 Table Value Function as an example.

Automatic Column Type Inference

    DESC FUNCTION s3 (
        "uri" = "http://127.0.0.1:9312/test2/test.snappy.parquet",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "parquet",
        "use_path_style" = "true"
    );
    +---------------+--------------+------+-------+---------+-------+
    | Field         | Type         | Null | Key   | Default | Extra |
    +---------------+--------------+------+-------+---------+-------+
    | p_partkey     | INT          | Yes  | false | NULL    | NONE  |
    | p_name        | TEXT         | Yes  | false | NULL    | NONE  |
    | p_mfgr        | TEXT         | Yes  | false | NULL    | NONE  |
    | p_brand       | TEXT         | Yes  | false | NULL    | NONE  |
    | p_type        | TEXT         | Yes  | false | NULL    | NONE  |
    | p_size        | INT          | Yes  | false | NULL    | NONE  |
    | p_container   | TEXT         | Yes  | false | NULL    | NONE  |
    | p_retailprice | DECIMAL(9,0) | Yes  | false | NULL    | NONE  |
    | p_comment     | TEXT         | Yes  | false | NULL    | NONE  |
    +---------------+--------------+------+-------+---------+-------+

An S3 Table Value Function is defined as follows:

    s3(
        "uri" = "http://127.0.0.1:9312/test2/test.snappy.parquet",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "parquet",
        "use_path_style" = "true")

It specifies the file path, connection information, and authentication credentials.

Once defined, you can view the schema of the file using the DESC FUNCTION statement.

As can be seen, Doris is able to automatically infer column types based on the metadata of the Parquet file.

Besides Parquet, Doris supports analysis and automatic column type inference for ORC, CSV, and JSON files.
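For example, here is a minimal sketch of inspecting a CSV file's schema (the bucket path is a placeholder); without a csv_schema, Doris assigns column names such as c1, c2, ... and treats every column as the String type, as described in the next section:

    DESC FUNCTION s3 (
        "uri" = "https://bucket1/inventory.dat",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "csv",
        "column_separator" = "|",
        "use_path_style" = "true"
    );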

CSV Schema

By default, all columns in a CSV file are treated as the String type. Column names and types can be specified individually via the csv_schema property, and Doris will read the file using the specified types. The format is as follows:

name1:type1;name2:type2;...

For columns whose type does not match the file content (for example, the file contains a string but the schema defines int), or for columns missing from the file (for example, the file has 4 columns but the schema defines 5), these columns will return null.
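For instance, a sketch (assuming a hypothetical 4-column file inventory.dat, also used in the example further below): declaring a fifth column k5 in csv_schema causes k5 to be returned as NULL for every row:

    SELECT k1, k5 FROM s3 (
        "uri" = "https://bucket1/inventory.dat",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "csv",
        "column_separator" = "|",
        "csv_schema" = "k1:int;k2:int;k3:int;k4:decimal(38,10);k5:int",
        "use_path_style" = "true"
    ) LIMIT 3;
    -- k5 is expected to be NULL in every returned row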

Currently supported column types are:

    name          mapping type
    ----          ------------
    tinyint       tinyint
    smallint      smallint
    int           int
    bigint        bigint
    largeint      largeint
    float         float
    double        double
    decimal(p,s)  decimalv3(p,s)
    date          datev2
    datetime      datetimev2
    char          string
    varchar       string
    string        string
    boolean       boolean

Example:

    s3 (
        "uri" = "https://bucket1/inventory.dat",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "csv",
        "column_separator" = "|",
        "csv_schema" = "k1:int;k2:int;k3:int;k4:decimal(38,10)",
        "use_path_style" = "true"
    )
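Once the schema is specified, the typed columns can be queried directly. A sketch using the definition above:

    SELECT k1, SUM(k4) AS total
    FROM s3 (
        "uri" = "https://bucket1/inventory.dat",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "csv",
        "column_separator" = "|",
        "csv_schema" = "k1:int;k2:int;k3:int;k4:decimal(38,10)",
        "use_path_style" = "true"
    )
    GROUP BY k1;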

Query and Analysis

You can query and analyze this Parquet file with arbitrary SQL statements:

    SELECT * FROM s3(
        "uri" = "http://127.0.0.1:9312/test2/test.snappy.parquet",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "parquet",
        "use_path_style" = "true")
    LIMIT 5;
    +-----------+------------------------------------------+----------------+----------+-------------------------+--------+-------------+---------------+---------------------+
    | p_partkey | p_name                                   | p_mfgr         | p_brand  | p_type                  | p_size | p_container | p_retailprice | p_comment           |
    +-----------+------------------------------------------+----------------+----------+-------------------------+--------+-------------+---------------+---------------------+
    |         1 | goldenrod lavender spring chocolate lace | Manufacturer#1 | Brand#13 | PROMO BURNISHED COPPER  |      7 | JUMBO PKG   |           901 | ly. slyly ironi     |
    |         2 | blush thistle blue yellow saddle         | Manufacturer#1 | Brand#13 | LARGE BRUSHED BRASS     |      1 | LG CASE     |           902 | lar accounts amo    |
    |         3 | spring green yellow purple cornsilk      | Manufacturer#4 | Brand#42 | STANDARD POLISHED BRASS |     21 | WRAP CASE   |           903 | egular deposits hag |
    |         4 | cornflower chocolate smoke green pink    | Manufacturer#3 | Brand#34 | SMALL PLATED BRASS      |     14 | MED DRUM    |           904 | p furiously r       |
    |         5 | forest brown coral puff cream            | Manufacturer#3 | Brand#32 | STANDARD POLISHED TIN   |     15 | SM PKG      |           905 | wake carefully      |
    +-----------+------------------------------------------+----------------+----------+-------------------------+--------+-------------+---------------+---------------------+

You can place a Table Value Function anywhere a table can appear in a SQL statement, such as in the WITH clause of a CTE or in the FROM clause. In this way, you can treat the file as a normal table and analyze it conveniently.
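For example, a sketch that wraps the TVF in a CTE and aggregates over it, reusing the Parquet file above:

    WITH parts AS (
        SELECT * FROM s3(
            "uri" = "http://127.0.0.1:9312/test2/test.snappy.parquet",
            "s3.access_key" = "ak",
            "s3.secret_key" = "sk",
            "format" = "parquet",
            "use_path_style" = "true")
    )
    SELECT p_brand, COUNT(*) AS part_count
    FROM parts
    GROUP BY p_brand;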

You can also create a logical view over a Table Value Function using the CREATE VIEW statement. You can then access the view like any other view, manage privileges on it, and let other users access the Table Value Function through it.

    CREATE VIEW v1 AS
    SELECT * FROM s3(
        "uri" = "http://127.0.0.1:9312/test2/test.snappy.parquet",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "parquet",
        "use_path_style" = "true");

    DESC v1;
    SELECT * FROM v1;
    GRANT SELECT_PRIV ON db1.v1 TO user1;

Data Ingestion

Users can ingest file data into Doris tables via INSERT INTO SELECT, so that subsequent analysis runs against the internal table instead of re-reading the remote files:

    -- 1. Create a Doris internal table
    CREATE TABLE IF NOT EXISTS test_table
    (
        id int,
        name varchar(50),
        age int
    )
    DISTRIBUTED BY HASH(id) BUCKETS 4
    PROPERTIES("replication_num" = "1");

    -- 2. Insert data using the S3 Table Value Function
    INSERT INTO test_table (id, name, age)
    SELECT CAST(id AS INT) AS id, name, CAST(age AS INT) AS age
    FROM s3(
        "uri" = "http://127.0.0.1:9312/test2/test.snappy.parquet",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "parquet",
        "use_path_style" = "true");
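After the INSERT completes, queries can run directly against the internal table; for example:

    SELECT age, COUNT(*) FROM test_table GROUP BY age;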

Note

  1. If the URI specified by the S3 / HDFS TVF does not match any files, or all matched files are empty, the TVF returns an empty result set. In this case, using DESC FUNCTION to view the schema of the file yields a single dummy column __dummy_col, which can be ignored (see the sketch after this list).

  2. If the format of the TVF is specified as CSV, and the file to be read is not empty but its first line is empty, the error The first line is empty, can not parse column numbers is reported, because the schema cannot be parsed from the first line of the file.
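To illustrate note 1, here is a sketch where the URI matches no files (the wildcard path is a placeholder):

    DESC FUNCTION s3 (
        "uri" = "http://127.0.0.1:9312/test2/no_match_*.parquet",
        "s3.access_key" = "ak",
        "s3.secret_key" = "sk",
        "format" = "parquet",
        "use_path_style" = "true"
    );
    -- expected: a single placeholder column named __dummy_col, which can be ignored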