Query Analysis

Doris provides a graphical command to help users analyze a specific query or import job more easily. This article describes how to use it.

Query Plan Tree

SQL is a declarative language: the user describes the desired data with a SQL statement, while how that statement is actually executed depends on the database implementation. The query planner determines how the database executes a SQL statement.

For example, if the user specifies a Join operator, the query planner must decide the specific Join algorithm, such as Hash Join or Merge Sort Join; whether to use Shuffle or Broadcast; whether the Join order should be adjusted to avoid a Cartesian product; on which nodes to execute; and so on.

Doris' query planning process first converts a SQL statement into a single-machine execution plan tree:

```text
        ┌────┐
        │Sort│
        └────┘
           │
    ┌──────────────┐
    │ Aggregation  │
    └──────────────┘
           │
        ┌────┐
        │Join│
        └────┘
      ┌────┴────┐
  ┌──────┐   ┌──────┐
  │Scan-1│   │Scan-2│
  └──────┘   └──────┘
```

After that, the query planner converts the single-machine plan into a distributed query plan according to each operator's execution mode and the distribution of the data. The distributed plan is composed of multiple Fragments; each Fragment is responsible for a part of the query plan, and data is transmitted between Fragments through the ExchangeNode operator.

```text
          ┌────┐
          │Sort│
          │ F1 │
          └────┘
             │
      ┌──────────────┐
      │ Aggregation  │
      │      F1      │
      └──────────────┘
             │
          ┌────┐
          │Join│
          │ F1 │
          └────┘
       ┌─────┴─────┐
   ┌──────┐  ┌────────────┐
   │Scan-1│  │ExchangeNode│
   │  F1  │  │     F1     │
   └──────┘  └────────────┘
                    │
           ┌────────────────┐
           │ DataStreamSink │
           │       F2       │
           └────────────────┘
                    │
               ┌──────┐
               │Scan-2│
               │  F2  │
               └──────┘
```

As shown above, the single-machine plan is divided into two Fragments, F1 and F2, and data is transmitted between them through an ExchangeNode.

A Fragment is further divided into multiple Instances. An Instance is the final, concrete execution unit. Dividing a Fragment into multiple Instances helps make full use of machine resources and increases its execution concurrency.
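Instance parallelism can typically be tuned per session. As a hedged sketch (the session variable `parallel_fragment_exec_instance_num` is used in many Doris versions, but the name varies across releases, so check the documentation for your version):

```sql
-- Sketch only: raise the number of Instances each Fragment is split into
-- for the current session. The variable name is version-dependent.
SET parallel_fragment_exec_instance_num = 8;
```

A larger value increases concurrency per Fragment at the cost of more scheduling and memory overhead per query.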

View Query Plan

You can view the execution plan of a SQL statement through the following three commands:

  • `EXPLAIN GRAPH select ...;` or `DESC GRAPH select ...;`
  • `EXPLAIN select ...;`
  • `EXPLAIN VERBOSE select ...;`
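The examples in this article query two small tables, `tbl1` and `tbl2`, whose schemas are not shown. The following DDL is a hypothetical sketch, consistent with the column types that appear in the verbose plan (`tbl1.k1` DATE, `tbl1.k2` INT, `tbl2.k1` DATETIME) and the 3 tablets per table; the key model, bucket count, and properties are assumptions:

```sql
-- Hypothetical schemas matching the plans shown in this article.
CREATE TABLE tbl1 (
    k1 DATE,
    k2 INT
) DUPLICATE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 3
PROPERTIES ("replication_num" = "1");

CREATE TABLE tbl2 (
    k1 DATETIME
) DUPLICATE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 3
PROPERTIES ("replication_num" = "1");
```

Creating similar tables lets you reproduce plans of the same shape, though cardinalities and tablet IDs will differ.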

The first command displays the query plan graphically. It shows the tree structure of the plan and the division into Fragments more intuitively:

```sql
mysql> explain graph select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1;
+---------------------------------------------------------------------------------------------------------------------------------+
| Explain String |
+---------------------------------------------------------------------------------------------------------------------------------+
| |
| ┌───────────────┐ |
| │[9: ResultSink]│ |
| │[Fragment: 4] |
| RESULT SINK |
| └───────────────┘ |
| |
| ┌─────────────────────┐ |
| │[9: MERGING-EXCHANGE]│ |
| │[Fragment: 4] |
| └─────────────────────┘ |
| |
| ┌───────────────────┐ |
| │[9: DataStreamSink]│ |
| │[Fragment: 3] |
| STREAM DATA SINK |
| EXCHANGE ID: 09 |
| UNPARTITIONED |
| └───────────────────┘ |
| |
| ┌─────────────┐ |
| │[4: TOP-N] |
| │[Fragment: 3]│ |
| └─────────────┘ |
| |
| ┌───────────────────────────────┐ |
| │[8: AGGREGATE (merge finalize)]│ |
| │[Fragment: 3] |
| └───────────────────────────────┘ |
| |
| ┌─────────────┐ |
| │[7: EXCHANGE]│ |
| │[Fragment: 3]│ |
| └─────────────┘ |
| |
| ┌───────────────────┐ |
| │[7: DataStreamSink]│ |
| │[Fragment: 2] |
| STREAM DATA SINK |
| EXCHANGE ID: 07 |
| HASH_PARTITIONED |
| └───────────────────┘ |
| |
| ┌─────────────────────────────────┐ |
| │[3: AGGREGATE (update serialize)]│ |
| │[Fragment: 2] |
| STREAMING |
| └─────────────────────────────────┘ |
| |
| ┌─────────────────────────────────┐ |
| │[2: HASH JOIN] |
| │[Fragment: 2] |
| join op: INNER JOIN (PARTITIONED)│ |
| └─────────────────────────────────┘ |
| ┌──────────┴──────────┐ |
| ┌─────────────┐ ┌─────────────┐ |
| │[5: EXCHANGE]│ │[6: EXCHANGE]│ |
| │[Fragment: 2]│ │[Fragment: 2]│ |
| └─────────────┘ └─────────────┘ |
| |
| ┌───────────────────┐ ┌───────────────────┐ |
| │[5: DataStreamSink]│ │[6: DataStreamSink]│ |
| │[Fragment: 0] │[Fragment: 1] |
| STREAM DATA SINK STREAM DATA SINK |
| EXCHANGE ID: 05 EXCHANGE ID: 06 |
| HASH_PARTITIONED HASH_PARTITIONED |
| └───────────────────┘ └───────────────────┘ |
| |
| ┌─────────────────┐ ┌─────────────────┐ |
| │[0: OlapScanNode]│ │[1: OlapScanNode]│ |
| │[Fragment: 0] │[Fragment: 1] |
| TABLE: tbl1 TABLE: tbl2 |
| └─────────────────┘ └─────────────────┘ |
+---------------------------------------------------------------------------------------------------------------------------------+
```

As can be seen from the figure, the query plan tree is divided into 5 Fragments: 0, 1, 2, 3, and 4. For example, `[Fragment: 0]` on the OlapScanNode indicates that the node belongs to Fragment 0. Data is transferred between Fragments through DataStreamSink and ExchangeNode.

The graphical command displays only simplified node information. If you need more specific details, such as the filter conditions pushed down to a node, view the more detailed text version through the second command:

```sql
mysql> explain select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1;
+----------------------------------------------------------------------------------+
| Explain String |
+----------------------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
| OUTPUT EXPRS:<slot 5> <slot 3> `tbl1`.`k1` | <slot 6> <slot 4> sum(`tbl1`.`k2`) |
| PARTITION: UNPARTITIONED |
| |
| RESULT SINK |
| |
| 9:MERGING-EXCHANGE |
| limit: 65535 |
| |
| PLAN FRAGMENT 1 |
| OUTPUT EXPRS: |
| PARTITION: HASH_PARTITIONED: <slot 3> `tbl1`.`k1` |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 09 |
| UNPARTITIONED |
| |
| 4:TOP-N |
| | order by: <slot 5> <slot 3> `tbl1`.`k1` ASC |
| | offset: 0 |
| | limit: 65535 |
| | |
| 8:AGGREGATE (merge finalize) |
| | output: sum(<slot 4> sum(`tbl1`.`k2`)) |
| | group by: <slot 3> `tbl1`.`k1` |
| | cardinality=-1 |
| | |
| 7:EXCHANGE |
| |
| PLAN FRAGMENT 2 |
| OUTPUT EXPRS: |
| PARTITION: HASH_PARTITIONED: `tbl1`.`k1` |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 07 |
| HASH_PARTITIONED: <slot 3> `tbl1`.`k1` |
| |
| 3:AGGREGATE (update serialize) |
| | STREAMING |
| | output: sum(`tbl1`.`k2`) |
| | group by: `tbl1`.`k1` |
| | cardinality=-1 |
| | |
| 2:HASH JOIN |
| | join op: INNER JOIN (PARTITIONED) |
| | runtime filter: false |
| | hash predicates: |
| | colocate: false, reason: table not in the same group |
| | equal join conjunct: `tbl1`.`k1` = `tbl2`.`k1` |
| | cardinality=2 |
| | |
| |----6:EXCHANGE |
| | |
| 5:EXCHANGE |
| |
| PLAN FRAGMENT 3 |
| OUTPUT EXPRS: |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 06 |
| HASH_PARTITIONED: `tbl2`.`k1` |
| |
| 1:OlapScanNode |
| TABLE: tbl2 |
| PREAGGREGATION: ON |
| partitions=1/1 |
| rollup: tbl2 |
| tabletRatio=3/3 |
| tabletList=105104776,105104780,105104784 |
| cardinality=1 |
| avgRowSize=4.0 |
| numNodes=6 |
| |
| PLAN FRAGMENT 4 |
| OUTPUT EXPRS: |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 05 |
| HASH_PARTITIONED: `tbl1`.`k1` |
| |
| 0:OlapScanNode |
| TABLE: tbl1 |
| PREAGGREGATION: ON |
| partitions=1/1 |
| rollup: tbl1 |
| tabletRatio=3/3 |
| tabletList=105104752,105104763,105104767 |
| cardinality=2 |
| avgRowSize=8.0 |
| numNodes=6 |
+----------------------------------------------------------------------------------+
```

The third command, `EXPLAIN VERBOSE select ...;`, provides more detail than the second:

```sql
mysql> explain verbose select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1;
+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String |
+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
| OUTPUT EXPRS:<slot 5> <slot 3> `tbl1`.`k1` | <slot 6> <slot 4> sum(`tbl1`.`k2`) |
| PARTITION: UNPARTITIONED |
| |
| VRESULT SINK |
| |
| 6:VMERGING-EXCHANGE |
| limit: 65535 |
| tuple ids: 3 |
| |
| PLAN FRAGMENT 1 |
| |
| PARTITION: HASH_PARTITIONED: `default_cluster:test`.`tbl1`.`k2` |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 06 |
| UNPARTITIONED |
| |
| 4:VTOP-N |
| | order by: <slot 5> <slot 3> `tbl1`.`k1` ASC |
| | offset: 0 |
| | limit: 65535 |
| | tuple ids: 3 |
| | |
| 3:VAGGREGATE (update finalize) |
| | output: sum(<slot 8>) |
| | group by: <slot 7> |
| | cardinality=-1 |
| | tuple ids: 2 |
| | |
| 2:VHASH JOIN |
| | join op: INNER JOIN(BROADCAST)[Tables are not in the same group] |
| | equal join conjunct: CAST(`tbl1`.`k1` AS DATETIME) = `tbl2`.`k1` |
| | runtime filters: RF000[in_or_bloom] <- `tbl2`.`k1` |
| | cardinality=0 |
| | vec output tuple id: 4 | tuple ids: 0 1 |
| | |
| |----5:VEXCHANGE |
| | tuple ids: 1 |
| | |
| 0:VOlapScanNode |
| TABLE: tbl1(null), PREAGGREGATION: OFF. Reason: the type of agg on StorageEngine's Key column should only be MAX or MIN.agg expr: sum(`tbl1`.`k2`) |
| runtime filters: RF000[in_or_bloom] -> CAST(`tbl1`.`k1` AS DATETIME) |
| partitions=0/1, tablets=0/0, tabletList= |
| cardinality=0, avgRowSize=20.0, numNodes=1 |
| tuple ids: 0 |
| |
| PLAN FRAGMENT 2 |
| |
| PARTITION: HASH_PARTITIONED: `default_cluster:test`.`tbl2`.`k2` |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 05 |
| UNPARTITIONED |
| |
| 1:VOlapScanNode |
| TABLE: tbl2(null), PREAGGREGATION: OFF. Reason: null |
| partitions=0/1, tablets=0/0, tabletList= |
| cardinality=0, avgRowSize=16.0, numNodes=1 |
| tuple ids: 1 |
| |
| Tuples: |
| TupleDescriptor{id=0, tbl=tbl1, byteSize=32, materialized=true} |
| SlotDescriptor{id=0, col=k1, type=DATE} |
| parent=0 |
| materialized=true |
| byteSize=16 |
| byteOffset=16 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=1 |
| |
| SlotDescriptor{id=2, col=k2, type=INT} |
| parent=0 |
| materialized=true |
| byteSize=4 |
| byteOffset=0 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=0 |
| |
| |
| TupleDescriptor{id=1, tbl=tbl2, byteSize=16, materialized=true} |
| SlotDescriptor{id=1, col=k1, type=DATETIME} |
| parent=1 |
| materialized=true |
| byteSize=16 |
| byteOffset=0 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=0 |
| |
| |
| TupleDescriptor{id=2, tbl=null, byteSize=32, materialized=true} |
| SlotDescriptor{id=3, col=null, type=DATE} |
| parent=2 |
| materialized=true |
| byteSize=16 |
| byteOffset=16 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=1 |
| |
| SlotDescriptor{id=4, col=null, type=BIGINT} |
| parent=2 |
| materialized=true |
| byteSize=8 |
| byteOffset=0 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=0 |
| |
| |
| TupleDescriptor{id=3, tbl=null, byteSize=32, materialized=true} |
| SlotDescriptor{id=5, col=null, type=DATE} |
| parent=3 |
| materialized=true |
| byteSize=16 |
| byteOffset=16 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=1 |
| |
| SlotDescriptor{id=6, col=null, type=BIGINT} |
| parent=3 |
| materialized=true |
| byteSize=8 |
| byteOffset=0 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=0 |
| |
| |
| TupleDescriptor{id=4, tbl=null, byteSize=48, materialized=true} |
| SlotDescriptor{id=7, col=k1, type=DATE} |
| parent=4 |
| materialized=true |
| byteSize=16 |
| byteOffset=16 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=1 |
| |
| SlotDescriptor{id=8, col=k2, type=INT} |
| parent=4 |
| materialized=true |
| byteSize=4 |
| byteOffset=0 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=0 |
| |
| SlotDescriptor{id=9, col=k1, type=DATETIME} |
| parent=4 |
| materialized=true |
| byteSize=16 |
| byteOffset=32 |
| nullIndicatorByte=0 |
| nullIndicatorBit=-1 |
| slotIdx=2 |
+---------------------------------------------------------------------------------------------------------------------------------------------------------+
160 rows in set (0.00 sec)
```

The information displayed in the query plan is still being standardized and improved; we will describe it in detail in subsequent articles.

View Query Profile

The user can enable the session variable `is_report_success` with the following command:

```sql
SET is_report_success=true;
```

Then execute the query, and Doris will generate a Profile for it. The Profile records the concrete execution of the query on each node, which helps us analyze query bottlenecks.

After executing the query, we can first get the Profile list with the following command:

```sql
mysql> show query profile "/"\G
*************************** 1. row ***************************
   QueryId: c257c52f93e149ee-ace8ac14e8c9fef9
      User: root
 DefaultDb: default_cluster:db1
       SQL: select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1
 QueryType: Query
 StartTime: 2021-04-08 11:30:50
   EndTime: 2021-04-08 11:30:50
 TotalTime: 9ms
QueryState: EOF
```

This command lists all currently saved Profiles, one row per query. We can pick the QueryId of the Profile we want to inspect to see its details.

Viewing a Profile is divided into 3 steps:

  1. View the overall execution plan tree

    This step is mainly used to analyze the execution plan as a whole and view the execution time of each Fragment.

    ```sql
    mysql> show query profile "/c257c52f93e149ee-ace8ac14e8c9fef9"\G
    *************************** 1. row ***************************
    Fragments:
    ┌──────────────────────┐
    │[-1: DataBufferSender]│
    Fragment: 0
    MaxActiveTime: 6.626ms
    └──────────────────────┘
    ┌──────────────────┐
    │[9: EXCHANGE_NODE]│
    Fragment: 0
    └──────────────────┘
    ┌──────────────────────┐
    │[9: DataStreamSender]
    Fragment: 1
    MaxActiveTime: 5.449ms
    └──────────────────────┘
    ┌──────────────┐
    │[4: SORT_NODE]│
    Fragment: 1
    └──────────────┘
    ┌┘
    ┌─────────────────────┐
    │[8: AGGREGATION_NODE]│
    Fragment: 1
    └─────────────────────┘
    └┐
    ┌──────────────────┐
    │[7: EXCHANGE_NODE]│
    Fragment: 1
    └──────────────────┘
    ┌──────────────────────┐
    │[7: DataStreamSender]
    Fragment: 2
    MaxActiveTime: 3.505ms
    └──────────────────────┘
    ┌┘
    ┌─────────────────────┐
    │[3: AGGREGATION_NODE]│
    Fragment: 2
    └─────────────────────┘
    ┌───────────────────┐
    │[2: HASH_JOIN_NODE]│
    Fragment: 2
    └───────────────────┘
    ┌────────────┴────────────┐
    ┌──────────────────┐ ┌──────────────────┐
    │[5: EXCHANGE_NODE]│ │[6: EXCHANGE_NODE]│
    Fragment: 2 Fragment: 2
    └──────────────────┘ └──────────────────┘
    ┌─────────────────────┐ ┌────────────────────────┐
    │[5: DataStreamSender]│ │[6: DataStreamSender]
    Fragment: 4 Fragment: 3
    MaxActiveTime: 1.87ms MaxActiveTime: 636.767us
    └─────────────────────┘ └────────────────────────┘
    ┌┘
    ┌───────────────────┐ ┌───────────────────┐
    │[0: OLAP_SCAN_NODE]│ │[1: OLAP_SCAN_NODE]│
    Fragment: 4 Fragment: 3
    └───────────────────┘ └───────────────────┘
    ┌─────────────┐ ┌─────────────┐
    │[OlapScanner]│ │[OlapScanner]│
    Fragment: 4 Fragment: 3
    └─────────────┘ └─────────────┘
    ┌─────────────────┐ ┌─────────────────┐
    │[SegmentIterator]│ │[SegmentIterator]│
    Fragment: 4 Fragment: 3
    └─────────────────┘ └─────────────────┘
    1 row in set (0.02 sec)
    ```

    As shown above, each node is marked with the Fragment it belongs to, and the Sender node of each Fragment is marked with that Fragment's execution time. This time is the maximum of the execution times of all Instances under the Fragment, which helps us find the most time-consuming Fragment from an overall perspective.

  2. View the Instance list under the specific Fragment

    For example, if we find that Fragment 1 takes the longest time, we can continue to view the Instance list of Fragment 1:

    ```sql
    mysql> show query profile "/c257c52f93e149ee-ace8ac14e8c9fef9/1";
    +-----------------------------------+-------------------+------------+
    | Instances                         | Host              | ActiveTime |
    +-----------------------------------+-------------------+------------+
    | c257c52f93e149ee-ace8ac14e8c9ff03 | 10.200.00.01:9060 | 5.449ms    |
    | c257c52f93e149ee-ace8ac14e8c9ff05 | 10.200.00.02:9060 | 5.367ms    |
    | c257c52f93e149ee-ace8ac14e8c9ff04 | 10.200.00.03:9060 | 5.358ms    |
    +-----------------------------------+-------------------+------------+
    ```

    This shows the execution nodes and time consumption of all three Instances of Fragment 1.
  3. View the specific Instance

    We can continue to view the detailed profile of each operator on a specific Instance:

    ```sql
    mysql> show query profile "/c257c52f93e149ee-ace8ac14e8c9fef9/1/c257c52f93e149ee-ace8ac14e8c9ff03"\G
    *************************** 1. row ***************************
    Instance:
    ┌────────────────────────────────────────────┐
    │[9: DataStreamSender]
    │(Active: 37.222us, non-child: 0.40)
    - Counters:
    - BytesSent: 0.00
    - IgnoreRows: 0
    - OverallThroughput: 0.0 /sec
    - PeakMemoryUsage: 8.00 KB
    - SerializeBatchTime: 0ns
    - UncompressedRowBatchSize: 0.00
    └──────────────────────────────────────────┘
    └┐
    ┌──────────────────────────────────────┐
    │[4: SORT_NODE]
    │(Active: 5.421ms, non-child: 0.71)│
    - Counters:
    - PeakMemoryUsage: 12.00 KB
    - RowsReturned: 0
    - RowsReturnedRate: 0
    └──────────────────────────────────────┘
    ┌┘
    ┌──────────────────────────────────────┐
    │[8: AGGREGATION_NODE]
    │(Active: 5.355ms, non-child: 10.68)│
    - Counters:
    - BuildTime: 3.701us
    - GetResultsTime: 0ns
    - HTResize: 0
    - HTResizeTime: 1.211us
    - HashBuckets: 0
    - HashCollisions: 0
    - HashFailedProbe: 0
    - HashFilledBuckets: 0
    - HashProbe: 0
    - HashTravelLength: 0
    - LargestPartitionPercent: 0
    - MaxPartitionLevel: 0
    - NumRepartitions: 0
    - PartitionsCreated: 16
    - PeakMemoryUsage: 34.02 MB
    - RowsProcessed: 0
    - RowsRepartitioned: 0
    - RowsReturned: 0
    - RowsReturnedRate: 0
    - SpilledPartitions: 0
    └──────────────────────────────────────┘
    └┐
    ┌────────────────────────────────────────────────────┐
    │[7: EXCHANGE_NODE]
    │(Active: 4.360ms, non-child: 46.84)
    - Counters:
    - BytesReceived: 0.00
    - ConvertRowBatchTime: 387ns
    - DataArrivalWaitTime: 4.357ms
    - DeserializeRowBatchTimer: 0ns
    - FirstBatchArrivalWaitTime: 4.356ms
    - PeakMemoryUsage: 0.00
    - RowsReturned: 0
    - RowsReturnedRate: 0
    - SendersBlockedTotalTimer(*): 0ns
    └────────────────────────────────────────────────────┘
    ```

    The above output shows the detailed profile of each operator of Instance c257c52f93e149ee-ace8ac14e8c9ff03 in Fragment 1.

Through the above three steps, we can progressively narrow down the performance bottleneck of a SQL query.