INSERT

Name

INSERT

Description

This statement is used to complete the data insertion operation.

```sql
INSERT INTO table_name
    [ PARTITION (p1, ...) ]
    [ WITH LABEL label ]
    [ (column [, ...]) ]
    [ [ hint [, ...] ] ]
    { VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }
```

Parameters

table_name: The destination table for importing data. It can be of the form db_name.table_name

partitions: Specify the partitions to be imported, which must be partitions that exist in table_name. Multiple partition names are separated by commas

label: specify a label for the Insert task

column_name: The specified destination column, must be a column that exists in table_name

expression: the corresponding expression that needs to be assigned to a column

DEFAULT: let the corresponding column use the default value

query: an ordinary query; the result of the query will be written to the target table

hint: an indicator used to specify the execution behavior of INSERT. Both the streaming hint and the default non-streaming mode execute the INSERT statement synchronously. The non-streaming mode returns a label after execution completes, which is convenient for users to query the import status through SHOW LOAD.

Notice:

When executing the INSERT statement, the default behavior is to filter out data that does not conform to the target table format, for example, a string that is too long. For business scenarios that require data not to be filtered, you can set the session variable enable_insert_strict to true to ensure that INSERT will not execute successfully when any data is filtered out.
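A minimal sketch of enabling strict mode for the current session (enable_insert_strict is the session variable named above; the test table is the illustrative table used in the examples below):

```sql
-- Enable strict mode so the INSERT fails if any row would be filtered out.
SET enable_insert_strict = true;

-- With strict mode on, this statement will not report success if rows are filtered.
INSERT INTO test VALUES (1, 2);
```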

Example

The test table contains two columns, c1 and c2.

  1. Import a row of data into the test table

    ```sql
    INSERT INTO test VALUES (1, 2);
    INSERT INTO test (c1, c2) VALUES (1, 2);
    INSERT INTO test (c1, c2) VALUES (1, DEFAULT);
    INSERT INTO test (c1) VALUES (1);
    ```

The first and second statements have the same effect. When no target column is specified, the column order in the table is used as the default target columns. The third and fourth statements express the same meaning: the default value of the c2 column is used to complete the data import.

  2. Import multiple rows of data into the test table at one time

    ```sql
    INSERT INTO test VALUES (1, 2), (3, 2 + 2);
    INSERT INTO test (c1, c2) VALUES (1, 2), (3, 2 * 2);
    INSERT INTO test (c1) VALUES (1), (3);
    INSERT INTO test (c1, c2) VALUES (1, DEFAULT), (3, DEFAULT);
    ```

The first and second statements have the same effect: they import two rows of data into the test table at one time. The effect of the third and fourth statements is the same: the default value of the c2 column is used to import two rows of data into the test table.

  3. Import a query result into the test table

    ```sql
    INSERT INTO test SELECT * FROM test2;
    INSERT INTO test (c1, c2) SELECT * from test2;
    ```

  4. Import a query result into the test table, specifying the partition and label

    ```sql
    INSERT INTO test PARTITION(p1, p2) WITH LABEL `label1` SELECT * FROM test2;
    INSERT INTO test WITH LABEL `label1` (c1, c2) SELECT * from test2;
    ```

Asynchronous import is actually a synchronous import wrapped as asynchronous; specifying streaming or not makes no difference in execution efficiency.

Since the previous import methods of Doris were all asynchronous, the INSERT statement without streaming will still return a label in order to stay compatible with old usage habits. Users need to view the status of the label's import job through the SHOW LOAD command.
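For example, a sketch of checking the status of the import job created by the labeled INSERT above (label1 is the label used in that example):

```sql
-- Look up the import job by its label and check its state.
show load where label="label1";
```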

Keywords

  1. INSERT

Best Practice

  1. View the returned results

    The INSERT operation is a synchronous operation, and the return of the result indicates the end of the operation. Users need to perform corresponding processing according to the different returned results.

    1. The execution is successful, the result set is empty

      If the result set of the insert corresponding to the select statement is empty, it will return as follows:

      ```sql
      mysql> insert into tbl1 select * from empty_tbl;
      Query OK, 0 rows affected (0.02 sec)
      ```

      `Query OK` indicates successful execution. `0 rows affected` means that no data was imported.

    2. The execution is successful, the result set is not empty

      In the case where the result set is not empty, the returned results are divided into the following situations:

      1. Insert executes successfully and is visible:

      ```sql
      mysql> insert into tbl1 select * from tbl2;
      Query OK, 4 rows affected (0.38 sec)
      {'label':'insert_8510c568-9eda-4173-9e36-6adc7d35291c', 'status':'visible', 'txnId':'4005'}

      mysql> insert into tbl1 with label my_label1 select * from tbl2;
      Query OK, 4 rows affected (0.38 sec)
      {'label':'my_label1', 'status':'visible', 'txnId':'4005'}

      mysql> insert into tbl1 select * from tbl2;
      Query OK, 2 rows affected, 2 warnings (0.31 sec)
      {'label':'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status':'visible', 'txnId':'4005'}

      mysql> insert into tbl1 select * from tbl2;
      Query OK, 2 rows affected, 2 warnings (0.31 sec)
      {'label':'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status':'committed', 'txnId':'4005'}
      ```

      `Query OK` indicates successful execution. `4 rows affected` means that a total of 4 rows of data were imported. `2 warnings` indicates the number of rows that were filtered.

      A json string is also returned:

      ```json
      {'label':'my_label1', 'status':'visible', 'txnId':'4005'}
      {'label':'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status':'committed', 'txnId':'4005'}
      {'label':'my_label1', 'status':'visible', 'txnId':'4005', 'err':'some other error'}
      ```

      `label` is a user-specified label or an automatically generated label. The label is the ID of this Insert Into import job, and each import job has a label that is unique within a single database.

      `status` indicates whether the imported data is visible: `visible` is shown if it is visible, `committed` if it is not yet visible.

      `txnId` is the id of the import transaction corresponding to this insert.

      The `err` field shows some other unexpected errors.

      When you need to view the filtered rows, you can use the following statement:

      ```sql
      show load where label="xxx";
      ```

      The URL in the returned result can be used to query the wrong data. For details, see the summary of **Viewing Error Lines** later.

      **Invisibility of data is a temporary state; this batch of data will eventually become visible.**

      You can view the visible status of this batch of data with the following statement:

      ```sql
      show transaction where id=4005;
      ```

      If the `TransactionStatus` column in the returned result is `visible`, the data is visible.
    3. Execution failed

      Execution failure indicates that no data was successfully imported, and the following is returned:

      ```sql
      mysql> insert into tbl1 select * from tbl2 where k1 = "a";
      ERROR 1064 (HY000): all partitions have no load data. url: http://10.74.167.16:8042/api/_load_error_log?file=__shard_2/error_log_insert_stmt_ba8bb9e158e4879-ae8de8507c0bf8a2_ba8bb9e158e4879_ae8de8507c0
      ```

      Where `ERROR 1064 (HY000): all partitions have no load data` shows the reason for the failure. The url at the end of the message can be used to query the wrong data:

      ```sql
      show load warnings on "url";
      ```

      You can view the specific error line.

  2. Timeout time

    The timeout for INSERT operations is controlled by the session variable query_timeout. The default is 5 minutes. If it times out, the job will be canceled.
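    A minimal sketch of raising the timeout for the current session (this assumes query_timeout is set in seconds, with the 5-minute default corresponding to 300):

    ```sql
    -- Allow queries in this session, including INSERT, to run for up to 10 minutes.
    SET query_timeout = 600;
    ```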

  3. Label and atomicity

    The INSERT operation also guarantees the atomicity of imports, see the Import Transactions and Atomicity documentation.

    When using CTE (Common Table Expressions) as the query part in an insert operation, the WITH LABEL and column parts must be specified.
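    A hedged sketch of an INSERT whose query part is a CTE, with both WITH LABEL and the column list specified as required; the table and column names reuse the illustrative test tables above, and the label name is hypothetical:

    ```sql
    -- CTE as the query part: WITH LABEL and the target column list are both given.
    INSERT INTO test WITH LABEL cte_label1 (c1, c2)
    WITH src AS (SELECT c1, c2 FROM test2)
    SELECT c1, c2 FROM src;
    ```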

  4. Filter Threshold

    Unlike other import methods, INSERT operations cannot specify a filter threshold (max_filter_ratio). The default filter threshold is 1, which means that rows with errors can be ignored.

    For business scenarios that require data not to be filtered, you can set the session variable enable_insert_strict to true to ensure that INSERT will not execute successfully when data is filtered out.

  5. Performance issues

    Single-row insertion using the VALUES method is not recommended. If you must use it this way, combine multiple rows of data into one INSERT statement for bulk commit.
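    A minimal sketch of batching rows into a single statement (values are arbitrary; the test table is the illustrative one used above):

    ```sql
    -- Instead of issuing many single-row statements such as:
    --   INSERT INTO test VALUES (1, 2);
    --   INSERT INTO test VALUES (3, 4);
    -- combine the rows into one bulk INSERT:
    INSERT INTO test VALUES (1, 2), (3, 4), (5, 6);
    ```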