OUTFILE

description

  The `SELECT INTO OUTFILE` statement exports query results to a file. It currently supports exporting to remote storage such as HDFS, S3, BOS, and COS (Tencent Cloud) through the Broker process, or writing directly through the S3 or HDFS protocol. The syntax is as follows:

  Grammar:

  ```
  query_stmt
  INTO OUTFILE "file_path"
  [format_as]
  [properties]
  ```
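
  For orientation, here is a minimal sketch that maps onto the grammar above; the table name, path, and broker name are illustrative placeholders taken from the examples further down.

  ```
  SELECT k1, k2 FROM tbl                      -- query_stmt
  INTO OUTFILE "hdfs://path/to/result_"       -- file_path
  FORMAT AS CSV                               -- format_as
  PROPERTIES ("broker.name" = "my_broker");   -- properties
  ```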

  1. file_path

     `file_path` specifies the file path and file name prefix, e.g. `hdfs://path/to/my_file_`.
     The final file name is assembled from the prefix `my_file_`, a file sequence number, and the format suffix. The sequence number starts from 0 and is determined by the number of file splits, for example:

     ```
     my_file_abcdefg_0.csv
     my_file_abcdefg_1.csv
     my_file_abcdefg_2.csv
     ```

  2. format_as

     ```
     FORMAT AS CSV
     ```

     Specifies the export format. The default is CSV.

  3. properties

     Specifies the relevant properties. Exporting is currently supported through the Broker process, or directly through the S3 or HDFS protocol.

     Grammar:

     ```
     [PROPERTIES ("key"="value", ...)]
     ```

     The following parameters can be specified:

     column_separator: the exported column separator, defaulting to `\t`. Invisible characters are supported, such as `'\x07'`.
     line_delimiter: the exported line delimiter, defaulting to `\n`. Invisible characters are supported, such as `'\x07'`.
     max_file_size: the maximum size of a single exported file; a result larger than this is split into multiple files.

     Broker-related properties must be prefixed with `broker.`:

     broker.name: the broker name
     broker.hadoop.security.authentication: specify the authentication method as kerberos
     broker.kerberos_principal: the Kerberos principal
     broker.kerberos_keytab: the path of the Kerberos keytab file; this is a path on the broker host

     When exporting directly through the HDFS protocol, the HDFS configuration can be specified directly:

     hdfs.fs.defaultFS: the namenode IP and port
     hdfs.hdfs_user: the HDFS user name

     When exporting directly through the S3 protocol, the S3 configuration can be specified directly:

     AWS_ENDPOINT
     AWS_ACCESS_KEY
     AWS_SECRET_KEY
     AWS_REGION
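
  Two properties that appear only in the examples below are also worth noting: `schema`, which must be provided when `FORMAT AS PARQUET` is used and lists a repetition/type/name triple for each exported column, and `success_file_name`, which creates a marker file with the given name once the export finishes. The following minimal sketch (not an authoritative reference) combines them with the S3 protocol settings; the endpoint, keys, bucket, and table are placeholders.

  ```
  -- Illustrative sketch only: endpoint, keys, bucket, and table are placeholders.
  SELECT c1, c2 FROM tbl
  INTO OUTFILE "s3://my_bucket/export/my_file_"
  FORMAT AS PARQUET
  PROPERTIES
  (
      "AWS_ENDPOINT" = "http://s3.bd.bcebos.com",
      "AWS_ACCESS_KEY" = "xxxx",
      "AWS_SECRET_KEY" = "xxxx",
      "AWS_REGION" = "bd",
      -- one repetition,type,name triple per exported column (required for PARQUET)
      "schema" = "required,int32,c1;required,byte_array,c2",
      -- write a marker file named SUCCESS when the export completes
      "success_file_name" = "SUCCESS"
  );
  ```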

example

  1. Export simple query results to the file `hdfs://path/to/result.txt`. Specify the export format as CSV. Use `my_broker` and set Kerberos authentication information. Specify the column separator as `,` and the line delimiter as `\n`.

     ```
     SELECT * FROM tbl
     INTO OUTFILE "hdfs://path/to/result_"
     FORMAT AS CSV
     PROPERTIES
     (
         "broker.name" = "my_broker",
         "broker.hadoop.security.authentication" = "kerberos",
         "broker.kerberos_principal" = "doris@YOUR.COM",
         "broker.kerberos_keytab" = "/home/doris/my.keytab",
         "column_separator" = ",",
         "line_delimiter" = "\n",
         "max_file_size" = "100MB"
     );
     ```

     If the result is less than 100MB, the file will be: `result_0.csv`.
     If it is larger than 100MB, there may be several files: `result_0.csv, result_1.csv, ...`.

  2. Export simple query results to the file `hdfs://path/to/result.parquet`. Specify the export format as PARQUET. Use `my_broker` and set Kerberos authentication information.

     ```
     SELECT c1, c2, c3 FROM tbl
     INTO OUTFILE "hdfs://path/to/result_"
     FORMAT AS PARQUET
     PROPERTIES
     (
         "broker.name" = "my_broker",
         "broker.hadoop.security.authentication" = "kerberos",
         "broker.kerberos_principal" = "doris@YOUR.COM",
         "broker.kerberos_keytab" = "/home/doris/my.keytab",
         "schema" = "required,int32,c1;required,byte_array,c2;required,byte_array,c3"
     );
     ```

     If the exported file format is PARQUET, `schema` must be specified.

  3. Export the query result of a CTE statement to the file `hdfs://path/to/result.txt`. The default export format is CSV. Use `my_broker` and set the HDFS high-availability information. Use the default column separator and line delimiter.

     ```
     WITH
     x1 AS
     (SELECT k1, k2 FROM tbl1),
     x2 AS
     (SELECT k3 FROM tbl2)
     SELECT k1 FROM x1 UNION SELECT k3 FROM x2
     INTO OUTFILE "hdfs://path/to/result_"
     PROPERTIES
     (
         "broker.name" = "my_broker",
         "broker.username" = "user",
         "broker.password" = "passwd",
         "broker.dfs.nameservices" = "my_ha",
         "broker.dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
         "broker.dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
         "broker.dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
         "broker.dfs.client.failover.proxy.provider" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
     );
     ```

     If the result is less than 1GB (the default `max_file_size`), the file will be: `result_0.csv`.
     If it is larger than 1GB, there may be several files: `result_0.csv, result_1.csv, ...`.

  4. Export the query results of a UNION statement to the file `bos://bucket/result.parquet`. Specify the export format as PARQUET. Use `my_broker` and set the BOS connection information. The PARQUET format does not require a column separator or line delimiter.

     ```
     SELECT k1 FROM tbl1 UNION SELECT k2 FROM tbl1
     INTO OUTFILE "bos://bucket/result_"
     FORMAT AS PARQUET
     PROPERTIES
     (
         "broker.name" = "my_broker",
         "broker.bos_endpoint" = "http://bj.bcebos.com",
         "broker.bos_accesskey" = "xxxxxxxxxxxxxxxxxxxxxxxxxx",
         "broker.bos_secret_accesskey" = "yyyyyyyyyyyyyyyyyyyyyyyyyy",
         "schema" = "required,int32,k1;required,byte_array,k2"
     );
     ```

  5. Export simple query results to the file `cos://${bucket_name}/path/result.txt`. Specify the export format as CSV, and create a marker file after the export finishes.

     ```
     select k1, k2, v1 from tbl1 limit 100000
     into outfile "s3a://my_bucket/export/my_file_"
     FORMAT AS CSV
     PROPERTIES
     (
         "broker.name" = "hdfs_broker",
         "broker.fs.s3a.access.key" = "xxx",
         "broker.fs.s3a.secret.key" = "xxxx",
         "broker.fs.s3a.endpoint" = "https://cos.xxxxxx.myqcloud.com/",
         "column_separator" = ",",
         "line_delimiter" = "\n",
         "max_file_size" = "1024MB",
         "success_file_name" = "SUCCESS"
     )
     ```

     Please note:
     1. Paths that do not exist are created automatically.
     2. These parameters (access.key / secret.key / endpoint) need to be confirmed with Tencent Cloud COS. In particular, the endpoint value should not include the bucket_name.

  6. Use the S3 protocol to export to BOS, with concurrent export enabled.

     ```
     set enable_parallel_outfile = true;
     select k1 from tb1 limit 1000
     into outfile "s3://my_bucket/export/my_file_"
     format as csv
     properties
     (
         "AWS_ENDPOINT" = "http://s3.bd.bcebos.com",
         "AWS_ACCESS_KEY" = "xxxx",
         "AWS_SECRET_KEY" = "xxx",
         "AWS_REGION" = "bd"
     )
     ```

     The prefix of the generated files is `my_file_{fragment_instance_id}_`.

  7. Use the S3 protocol to export to BOS, with the concurrent-export session variable enabled.

     ```
     set enable_parallel_outfile = true;
     select k1 from tb1 order by k1 limit 1000
     into outfile "s3://my_bucket/export/my_file_"
     format as csv
     properties
     (
         "AWS_ENDPOINT" = "http://s3.bd.bcebos.com",
         "AWS_ACCESS_KEY" = "xxxx",
         "AWS_SECRET_KEY" = "xxx",
         "AWS_REGION" = "bd"
     )
     ```

     However, because the query has a top-level sort node, it cannot be exported concurrently even though the concurrent-export session variable is enabled.

  8. Use libhdfs to export to an HDFS cluster. Export the query results to the file `hdfs://path/to/result.txt`. Specify the export format as CSV, the user name as `work`, the column separator as `,`, and the line delimiter as `\n`.

     ```
     SELECT * FROM tbl
     INTO OUTFILE "hdfs://path/to/result_"
     FORMAT AS CSV
     PROPERTIES
     (
         "hdfs.fs.defaultFS" = "hdfs://ip:port",
         "hdfs.hdfs_user" = "work",
         "column_separator" = ",",
         "line_delimiter" = "\n"
     );
     ```

     If the result is less than 1GB, the file will be: `result_0.csv`.
     If it is larger than 1GB, there may be several files: `result_0.csv, result_1.csv, ...`.

keyword

  OUTFILE