Query Analysis

Doris provides a graphical command to help users analyze a specific query or load job more easily. This article describes how to use this feature.

Query Plan Tree

SQL is a declarative language: a user describes the data they want with a SQL statement, while how that statement is actually executed depends on the database implementation. The query planner is the component that decides how the database executes a given SQL statement.

For example, if the user specifies a Join operator, the query planner must decide on the concrete join algorithm (such as Hash Join or Sort Merge Join), whether to use Shuffle or Broadcast, whether the join order needs to be adjusted to avoid a Cartesian product, which nodes the plan will finally run on, and so on.

Doris first converts a SQL statement into a single-node execution plan tree:

            ┌────┐
            │Sort│
            └────┘
               │
         ┌───────────┐
         │Aggregation│
         └───────────┘
               │
            ┌────┐
            │Join│
            └────┘
          ┌───┴────┐
     ┌──────┐  ┌──────┐
     │Scan-1│  │Scan-2│
     └──────┘  └──────┘

The query planner then converts the single-node plan into a distributed query plan, based on the execution strategy of each operator and the actual distribution of the data. A distributed plan consists of multiple Fragments, each responsible for a part of the plan; Fragments transfer data between each other through ExchangeNode operators.

              ┌────┐
              │Sort│
              │ F1 │
              └────┘
                 │
           ┌───────────┐
           │Aggregation│
           │    F1     │
           └───────────┘
                 │
              ┌────┐
              │Join│
              │ F1 │
              └────┘
           ┌─────┴─────┐
      ┌──────┐  ┌────────────┐
      │Scan-1│  │ExchangeNode│
      │  F1  │  │     F1     │
      └──────┘  └────────────┘
                       │
               ┌──────────────┐
               │DataStreamSink│
               │      F2      │
               └──────────────┘
                       │
                   ┌──────┐
                   │Scan-2│
                   │  F2  │
                   └──────┘

As shown above, the single-node plan is split into two Fragments, F1 and F2, which transfer data through an ExchangeNode.

A Fragment is further divided into multiple Instances. An Instance is the actual execution unit. Splitting a Fragment into multiple Instances helps make full use of machine resources and increases the Fragment's execution parallelism.
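The number of Instances a Fragment is split into can typically be tuned per session. The sketch below assumes a Doris version that still exposes the `parallel_fragment_exec_instance_num` session variable; check `SHOW VARIABLES` on your cluster first:

```sql
-- Raise the per-Fragment Instance parallelism for this session
-- (variable name assumed; verify it exists in your Doris version).
SET parallel_fragment_exec_instance_num = 8;
```

A larger value can speed up heavy queries on idle machines, at the cost of more concurrent resource usage per query.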

Viewing the Query Plan

The execution plan of a SQL statement can be inspected with the following three commands:

  • EXPLAIN GRAPH select ...; or DESC GRAPH select ...;
  • EXPLAIN select ...;
  • EXPLAIN VERBOSE select ...;

The first command shows the query plan graphically. It gives an intuitive view of the plan's tree structure and of how the plan is divided into Fragments:

    mysql> explain graph select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1;
    +---------------------------------------------------------------------------------------------------------------------------------+
    | Explain String |
    +---------------------------------------------------------------------------------------------------------------------------------+
    | |
    | ┌───────────────┐ |
    | │[9: ResultSink]│ |
    | │[Fragment: 4] |
    | RESULT SINK |
    | └───────────────┘ |
    | |
    | ┌─────────────────────┐ |
    | │[9: MERGING-EXCHANGE]│ |
    | │[Fragment: 4] |
    | └─────────────────────┘ |
    | |
    | ┌───────────────────┐ |
    | │[9: DataStreamSink]│ |
    | │[Fragment: 3] |
    | STREAM DATA SINK |
    | EXCHANGE ID: 09 |
    | UNPARTITIONED |
    | └───────────────────┘ |
    | |
    | ┌─────────────┐ |
    | │[4: TOP-N] |
    | │[Fragment: 3]│ |
    | └─────────────┘ |
    | |
    | ┌───────────────────────────────┐ |
    | │[8: AGGREGATE (merge finalize)]│ |
    | │[Fragment: 3] |
    | └───────────────────────────────┘ |
    | |
    | ┌─────────────┐ |
    | │[7: EXCHANGE]│ |
    | │[Fragment: 3]│ |
    | └─────────────┘ |
    | |
    | ┌───────────────────┐ |
    | │[7: DataStreamSink]│ |
    | │[Fragment: 2] |
    | STREAM DATA SINK |
    | EXCHANGE ID: 07 |
    | HASH_PARTITIONED |
    | └───────────────────┘ |
    | |
    | ┌─────────────────────────────────┐ |
    | │[3: AGGREGATE (update serialize)]│ |
    | │[Fragment: 2] |
    | STREAMING |
    | └─────────────────────────────────┘ |
    | |
    | ┌─────────────────────────────────┐ |
    | │[2: HASH JOIN] |
    | │[Fragment: 2] |
    | join op: INNER JOIN (PARTITIONED)│ |
    | └─────────────────────────────────┘ |
    | ┌──────────┴──────────┐ |
    | ┌─────────────┐ ┌─────────────┐ |
    | │[5: EXCHANGE]│ │[6: EXCHANGE]│ |
    | │[Fragment: 2]│ │[Fragment: 2]│ |
    | └─────────────┘ └─────────────┘ |
    | |
    | ┌───────────────────┐ ┌───────────────────┐ |
    | │[5: DataStreamSink]│ │[6: DataStreamSink]│ |
    | │[Fragment: 0] │[Fragment: 1] |
    | STREAM DATA SINK STREAM DATA SINK |
    | EXCHANGE ID: 05 EXCHANGE ID: 06 |
    | HASH_PARTITIONED HASH_PARTITIONED |
    | └───────────────────┘ └───────────────────┘ |
    | |
    | ┌─────────────────┐ ┌─────────────────┐ |
    | │[0: OlapScanNode]│ │[1: OlapScanNode]│ |
    | │[Fragment: 0] │[Fragment: 1] |
    | TABLE: tbl1 TABLE: tbl2 |
    | └─────────────────┘ └─────────────────┘ |
    +---------------------------------------------------------------------------------------------------------------------------------+

As the output shows, the query plan tree is divided into 5 Fragments: 0, 1, 2, 3, and 4. For example, [Fragment: 0] on the OlapScanNode means that the node belongs to Fragment 0. Fragments transfer data between each other through DataStreamSink and ExchangeNode.

The graphical command only shows simplified node information. To see more detailed node information, such as the filter conditions pushed down to a node, use the second command to get the full text version:

    mysql> explain select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1;
    +----------------------------------------------------------------------------------+
    | Explain String |
    +----------------------------------------------------------------------------------+
    | PLAN FRAGMENT 0 |
    | OUTPUT EXPRS:<slot 5> <slot 3> `tbl1`.`k1` | <slot 6> <slot 4> sum(`tbl1`.`k2`) |
    | PARTITION: UNPARTITIONED |
    | |
    | RESULT SINK |
    | |
    | 9:MERGING-EXCHANGE |
    | limit: 65535 |
    | |
    | PLAN FRAGMENT 1 |
    | OUTPUT EXPRS: |
    | PARTITION: HASH_PARTITIONED: <slot 3> `tbl1`.`k1` |
    | |
    | STREAM DATA SINK |
    | EXCHANGE ID: 09 |
    | UNPARTITIONED |
    | |
    | 4:TOP-N |
    | | order by: <slot 5> <slot 3> `tbl1`.`k1` ASC |
    | | offset: 0 |
    | | limit: 65535 |
    | | |
    | 8:AGGREGATE (merge finalize) |
    | | output: sum(<slot 4> sum(`tbl1`.`k2`)) |
    | | group by: <slot 3> `tbl1`.`k1` |
    | | cardinality=-1 |
    | | |
    | 7:EXCHANGE |
    | |
    | PLAN FRAGMENT 2 |
    | OUTPUT EXPRS: |
    | PARTITION: HASH_PARTITIONED: `tbl1`.`k1` |
    | |
    | STREAM DATA SINK |
    | EXCHANGE ID: 07 |
    | HASH_PARTITIONED: <slot 3> `tbl1`.`k1` |
    | |
    | 3:AGGREGATE (update serialize) |
    | | STREAMING |
    | | output: sum(`tbl1`.`k2`) |
    | | group by: `tbl1`.`k1` |
    | | cardinality=-1 |
    | | |
    | 2:HASH JOIN |
    | | join op: INNER JOIN (PARTITIONED) |
    | | runtime filter: false |
    | | hash predicates: |
    | | colocate: false, reason: table not in the same group |
    | | equal join conjunct: `tbl1`.`k1` = `tbl2`.`k1` |
    | | cardinality=2 |
    | | |
    | |----6:EXCHANGE |
    | | |
    | 5:EXCHANGE |
    | |
    | PLAN FRAGMENT 3 |
    | OUTPUT EXPRS: |
    | PARTITION: RANDOM |
    | |
    | STREAM DATA SINK |
    | EXCHANGE ID: 06 |
    | HASH_PARTITIONED: `tbl2`.`k1` |
    | |
    | 1:OlapScanNode |
    | TABLE: tbl2 |
    | PREAGGREGATION: ON |
    | partitions=1/1 |
    | rollup: tbl2 |
    | tabletRatio=3/3 |
    | tabletList=105104776,105104780,105104784 |
    | cardinality=1 |
    | avgRowSize=4.0 |
    | numNodes=6 |
    | |
    | PLAN FRAGMENT 4 |
    | OUTPUT EXPRS: |
    | PARTITION: RANDOM |
    | |
    | STREAM DATA SINK |
    | EXCHANGE ID: 05 |
    | HASH_PARTITIONED: `tbl1`.`k1` |
    | |
    | 0:OlapScanNode |
    | TABLE: tbl1 |
    | PREAGGREGATION: ON |
    | partitions=1/1 |
    | rollup: tbl1 |
    | tabletRatio=3/3 |
    | tabletList=105104752,105104763,105104767 |
    | cardinality=2 |
    | avgRowSize=8.0 |
    | numNodes=6 |
    +----------------------------------------------------------------------------------+

The third command, EXPLAIN VERBOSE select ...;, shows even more detailed execution plan information than the second:

    mysql> explain verbose select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1;
    +---------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Explain String |
    +---------------------------------------------------------------------------------------------------------------------------------------------------------+
    | PLAN FRAGMENT 0 |
    | OUTPUT EXPRS:<slot 5> <slot 3> `tbl1`.`k1` | <slot 6> <slot 4> sum(`tbl1`.`k2`) |
    | PARTITION: UNPARTITIONED |
    | |
    | VRESULT SINK |
    | |
    | 6:VMERGING-EXCHANGE |
    | limit: 65535 |
    | tuple ids: 3 |
    | |
    | PLAN FRAGMENT 1 |
    | |
    | PARTITION: HASH_PARTITIONED: `default_cluster:test`.`tbl1`.`k2` |
    | |
    | STREAM DATA SINK |
    | EXCHANGE ID: 06 |
    | UNPARTITIONED |
    | |
    | 4:VTOP-N |
    | | order by: <slot 5> <slot 3> `tbl1`.`k1` ASC |
    | | offset: 0 |
    | | limit: 65535 |
    | | tuple ids: 3 |
    | | |
    | 3:VAGGREGATE (update finalize) |
    | | output: sum(<slot 8>) |
    | | group by: <slot 7> |
    | | cardinality=-1 |
    | | tuple ids: 2 |
    | | |
    | 2:VHASH JOIN |
    | | join op: INNER JOIN(BROADCAST)[Tables are not in the same group] |
    | | equal join conjunct: CAST(`tbl1`.`k1` AS DATETIME) = `tbl2`.`k1` |
    | | runtime filters: RF000[in_or_bloom] <- `tbl2`.`k1` |
    | | cardinality=0 |
    | | vec output tuple id: 4 | tuple ids: 0 1 |
    | | |
    | |----5:VEXCHANGE |
    | | tuple ids: 1 |
    | | |
    | 0:VOlapScanNode |
    | TABLE: tbl1(null), PREAGGREGATION: OFF. Reason: the type of agg on StorageEngine's Key column should only be MAX or MIN.agg expr: sum(`tbl1`.`k2`) |
    | runtime filters: RF000[in_or_bloom] -> CAST(`tbl1`.`k1` AS DATETIME) |
    | partitions=0/1, tablets=0/0, tabletList= |
    | cardinality=0, avgRowSize=20.0, numNodes=1 |
    | tuple ids: 0 |
    | |
    | PLAN FRAGMENT 2 |
    | |
    | PARTITION: HASH_PARTITIONED: `default_cluster:test`.`tbl2`.`k2` |
    | |
    | STREAM DATA SINK |
    | EXCHANGE ID: 05 |
    | UNPARTITIONED |
    | |
    | 1:VOlapScanNode |
    | TABLE: tbl2(null), PREAGGREGATION: OFF. Reason: null |
    | partitions=0/1, tablets=0/0, tabletList= |
    | cardinality=0, avgRowSize=16.0, numNodes=1 |
    | tuple ids: 1 |
    | |
    | Tuples: |
    | TupleDescriptor{id=0, tbl=tbl1, byteSize=32, materialized=true} |
    | SlotDescriptor{id=0, col=k1, type=DATE} |
    | parent=0 |
    | materialized=true |
    | byteSize=16 |
    | byteOffset=16 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=1 |
    | |
    | SlotDescriptor{id=2, col=k2, type=INT} |
    | parent=0 |
    | materialized=true |
    | byteSize=4 |
    | byteOffset=0 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=0 |
    | |
    | |
    | TupleDescriptor{id=1, tbl=tbl2, byteSize=16, materialized=true} |
    | SlotDescriptor{id=1, col=k1, type=DATETIME} |
    | parent=1 |
    | materialized=true |
    | byteSize=16 |
    | byteOffset=0 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=0 |
    | |
    | |
    | TupleDescriptor{id=2, tbl=null, byteSize=32, materialized=true} |
    | SlotDescriptor{id=3, col=null, type=DATE} |
    | parent=2 |
    | materialized=true |
    | byteSize=16 |
    | byteOffset=16 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=1 |
    | |
    | SlotDescriptor{id=4, col=null, type=BIGINT} |
    | parent=2 |
    | materialized=true |
    | byteSize=8 |
    | byteOffset=0 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=0 |
    | |
    | |
    | TupleDescriptor{id=3, tbl=null, byteSize=32, materialized=true} |
    | SlotDescriptor{id=5, col=null, type=DATE} |
    | parent=3 |
    | materialized=true |
    | byteSize=16 |
    | byteOffset=16 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=1 |
    | |
    | SlotDescriptor{id=6, col=null, type=BIGINT} |
    | parent=3 |
    | materialized=true |
    | byteSize=8 |
    | byteOffset=0 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=0 |
    | |
    | |
    | TupleDescriptor{id=4, tbl=null, byteSize=48, materialized=true} |
    | SlotDescriptor{id=7, col=k1, type=DATE} |
    | parent=4 |
    | materialized=true |
    | byteSize=16 |
    | byteOffset=16 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=1 |
    | |
    | SlotDescriptor{id=8, col=k2, type=INT} |
    | parent=4 |
    | materialized=true |
    | byteSize=4 |
    | byteOffset=0 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=0 |
    | |
    | SlotDescriptor{id=9, col=k1, type=DATETIME} |
    | parent=4 |
    | materialized=true |
    | byteSize=16 |
    | byteOffset=32 |
    | nullIndicatorByte=0 |
    | nullIndicatorBit=-1 |
    | slotIdx=2 |
    +---------------------------------------------------------------------------------------------------------------------------------------------------------+
    160 rows in set (0.00 sec)

The information shown in query plans is still being standardized and improved, and will be covered in detail in subsequent articles.

Viewing a Query Profile

Users can enable the session variable is_report_success with the following command:

    SET is_report_success=true;

Then execute the query, and Doris will generate a Profile for it. The Profile contains the concrete execution details of every node involved in the query, which helps analyze query bottlenecks.

After the query finishes, we can first list the available Profiles with the following command:

    mysql> show query profile "/"\G
    *************************** 1. row ***************************
    QueryId: c257c52f93e149ee-ace8ac14e8c9fef9
    User: root
    DefaultDb: default_cluster:db1
    SQL: select tbl1.k1, sum(tbl1.k2) from tbl1 join tbl2 on tbl1.k1 = tbl2.k1 group by tbl1.k1 order by tbl1.k1
    QueryType: Query
    StartTime: 2021-04-08 11:30:50
    EndTime: 2021-04-08 11:30:50
    TotalTime: 9ms
    QueryState: EOF

This command lists all currently saved Profiles, one query per row. We can pick the QueryId of the Profile we want to examine and inspect it in detail.

Viewing a Profile involves 3 steps:

  1. View the overall execution plan tree

    This step is mainly for analyzing the execution plan as a whole and checking the execution time of each Fragment.

    mysql> show query profile "/c257c52f93e149ee-ace8ac14e8c9fef9"\G
    *************************** 1. row ***************************
    Fragments:
    ┌──────────────────────┐
    │[-1: DataBufferSender]│
    Fragment: 0
    MaxActiveTime: 6.626ms
    └──────────────────────┘
    ┌──────────────────┐
    │[9: EXCHANGE_NODE]│
    Fragment: 0
    └──────────────────┘
    ┌──────────────────────┐
    │[9: DataStreamSender]
    Fragment: 1
    MaxActiveTime: 5.449ms
    └──────────────────────┘
    ┌──────────────┐
    │[4: SORT_NODE]│
    Fragment: 1
    └──────────────┘
    ┌┘
    ┌─────────────────────┐
    │[8: AGGREGATION_NODE]│
    Fragment: 1
    └─────────────────────┘
    └┐
    ┌──────────────────┐
    │[7: EXCHANGE_NODE]│
    Fragment: 1
    └──────────────────┘
    ┌──────────────────────┐
    │[7: DataStreamSender]
    Fragment: 2
    MaxActiveTime: 3.505ms
    └──────────────────────┘
    ┌┘
    ┌─────────────────────┐
    │[3: AGGREGATION_NODE]│
    Fragment: 2
    └─────────────────────┘
    ┌───────────────────┐
    │[2: HASH_JOIN_NODE]│
    Fragment: 2
    └───────────────────┘
    ┌────────────┴────────────┐
    ┌──────────────────┐ ┌──────────────────┐
    │[5: EXCHANGE_NODE]│ │[6: EXCHANGE_NODE]│
    Fragment: 2 Fragment: 2
    └──────────────────┘ └──────────────────┘
    ┌─────────────────────┐ ┌────────────────────────┐
    │[5: DataStreamSender]│ │[6: DataStreamSender]
    Fragment: 4 Fragment: 3
    MaxActiveTime: 1.87ms MaxActiveTime: 636.767us
    └─────────────────────┘ └────────────────────────┘
    ┌┘
    ┌───────────────────┐ ┌───────────────────┐
    │[0: OLAP_SCAN_NODE]│ │[1: OLAP_SCAN_NODE]│
    Fragment: 4 Fragment: 3
    └───────────────────┘ └───────────────────┘
    ┌─────────────┐ ┌─────────────┐
    │[OlapScanner]│ │[OlapScanner]│
    Fragment: 4 Fragment: 3
    └─────────────┘ └─────────────┘
    ┌─────────────────┐ ┌─────────────────┐
    │[SegmentIterator]│ │[SegmentIterator]│
    Fragment: 4 Fragment: 3
    └─────────────────┘ └─────────────────┘
    1 row in set (0.02 sec)

    As shown above, each node is labeled with the Fragment it belongs to, and the Sender node of each Fragment is labeled with that Fragment's execution time. This time is the longest execution time among all Instances of the Fragment, which helps us spot the most time-consuming Fragment at a glance.

  2. View the Instance list of a specific Fragment

    For example, if we find that Fragment 1 takes the longest, we can list the Instances of Fragment 1:

    mysql> show query profile "/c257c52f93e149ee-ace8ac14e8c9fef9/1";
    +-----------------------------------+-------------------+------------+
    | Instances | Host | ActiveTime |
    +-----------------------------------+-------------------+------------+
    | c257c52f93e149ee-ace8ac14e8c9ff03 | 10.200.00.01:9060 | 5.449ms |
    | c257c52f93e149ee-ace8ac14e8c9ff05 | 10.200.00.02:9060 | 5.367ms |
    | c257c52f93e149ee-ace8ac14e8c9ff04 | 10.200.00.03:9060 | 5.358ms |
    +-----------------------------------+-------------------+------------+

    This shows the execution node and elapsed time of each of the 3 Instances of Fragment 1.

  3. View a specific Instance

    We can then inspect the detailed Profile of each operator on a specific Instance:

    mysql> show query profile "/c257c52f93e149ee-ace8ac14e8c9fef9/1/c257c52f93e149ee-ace8ac14e8c9ff03"\G
    *************************** 1. row ***************************
    Instance:
    ┌───────────────────────────────────────┐
    │[9: DataStreamSender]
    │(Active: 37.222us, non-child: 0.40)
    - Counters:
    - BytesSent: 0.00
    - IgnoreRows: 0
    - OverallThroughput: 0.0 /sec
    - PeakMemoryUsage: 8.00 KB
    - SerializeBatchTime: 0ns
    - UncompressedRowBatchSize: 0.00
    └───────────────────────────────────────┘
    └┐
    ┌──────────────────────────────────┐
    │[4: SORT_NODE]
    │(Active: 5.421ms, non-child: 0.71)│
    - Counters:
    - PeakMemoryUsage: 12.00 KB
    - RowsReturned: 0
    - RowsReturnedRate: 0
    └──────────────────────────────────┘
    ┌┘
    ┌───────────────────────────────────┐
    │[8: AGGREGATION_NODE]
    │(Active: 5.355ms, non-child: 10.68)│
    - Counters:
    - BuildTime: 3.701us
    - GetResultsTime: 0ns
    - HTResize: 0
    - HTResizeTime: 1.211us
    - HashBuckets: 0
    - HashCollisions: 0
    - HashFailedProbe: 0
    - HashFilledBuckets: 0
    - HashProbe: 0
    - HashTravelLength: 0
    - LargestPartitionPercent: 0
    - MaxPartitionLevel: 0
    - NumRepartitions: 0
    - PartitionsCreated: 16
    - PeakMemoryUsage: 34.02 MB
    - RowsProcessed: 0
    - RowsRepartitioned: 0
    - RowsReturned: 0
    - RowsReturnedRate: 0
    - SpilledPartitions: 0
    └───────────────────────────────────┘
    └┐
    ┌──────────────────────────────────────────┐
    │[7: EXCHANGE_NODE]
    │(Active: 4.360ms, non-child: 46.84)
    - Counters:
    - BytesReceived: 0.00
    - ConvertRowBatchTime: 387ns
    - DataArrivalWaitTime: 4.357ms
    - DeserializeRowBatchTimer: 0ns
    - FirstBatchArrivalWaitTime: 4.356ms
    - PeakMemoryUsage: 0.00
    - RowsReturned: 0
    - RowsReturnedRate: 0
    - SendersBlockedTotalTimer(*): 0ns
    └──────────────────────────────────────────┘

    The output above shows the detailed operator Profiles of Instance c257c52f93e149ee-ace8ac14e8c9ff03 in Fragment 1.

With these 3 steps, we can progressively track down the performance bottleneck of a SQL query.
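The whole drill-down can be summarized in one session; the statements below simply replay the commands used above, with the query and instance IDs taken from this article's example (substitute your own IDs):

```sql
-- Step 0: enable profile reporting for this session, then run the query.
SET is_report_success = true;

-- Step 1: list saved profiles and pick the QueryId of interest.
SHOW QUERY PROFILE "/";

-- Step 2: view the plan tree with per-Fragment MaxActiveTime,
-- then list the Instances of the slowest Fragment (Fragment 1 here).
SHOW QUERY PROFILE "/c257c52f93e149ee-ace8ac14e8c9fef9";
SHOW QUERY PROFILE "/c257c52f93e149ee-ace8ac14e8c9fef9/1";

-- Step 3: inspect the per-operator counters of one Instance.
SHOW QUERY PROFILE "/c257c52f93e149ee-ace8ac14e8c9fef9/1/c257c52f93e149ee-ace8ac14e8c9ff03";
```

At each level, the path argument simply appends one more component: query ID, then Fragment ID, then Instance ID.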