Command-Line Interface

Flink provides a Command-Line Interface (CLI) to run programs that are packaged as JAR files, and to control their execution. The CLI is part of any Flink setup, available in local single-node setups and in distributed setups. It is located under <flink-home>/bin/flink and connects by default to the running Flink master (JobManager) that was started from the same installation directory.

The command line can be used to

  • submit jobs for execution,
  • cancel a running job,
  • provide information about a job,
  • list running and waiting jobs,
  • trigger and dispose savepoints.

A prerequisite to using the command line interface is that the Flink master (JobManager) has been started (via <flink-home>/bin/start-cluster.sh) or that another deployment target such as YARN or Kubernetes is available.
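
For a local single-node setup, this typically just means starting the cluster scripts shipped with the distribution and checking that the CLI can reach the JobManager (a minimal sketch):

  1. # start a local Flink cluster from the installation directory
  2. ./bin/start-cluster.sh
  3. # verify connectivity by listing the (initially empty) set of jobs
  4. ./bin/flink list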

Deployment targets

Flink has the concept of executors for defining available deployment targets. You can see the available executors in the output of bin/flink --help, for example:

  1. Options for executor mode:
  2. -D <property=value> Generic configuration options for
  3. execution/deployment and for the configured executor.
  4. The available options can be found at
  5. https://ci.apache.org/projects/flink/flink-docs-stabl
  6. e/ops/config.html
  7. -e,--executor <arg> The name of the executor to be used for executing the
  8. given job, which is equivalent to the
  9. "execution.target" config option. The currently
  10. available executors are: "remote", "local",
  11. "kubernetes-session", "yarn-per-job", "yarn-session".

When running one of the bin/flink actions, the executor is specified using the --executor option.
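
For example, to submit the WordCount example with an explicitly chosen executor (a minimal sketch; any of the executors listed above can be substituted, and additional configuration can be passed with -D):

  1. # select the "local" executor explicitly; equivalent to setting the "execution.target" option
  2. ./bin/flink run -e local ./examples/batch/WordCount.jar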

Examples

Job Submission Examples


These examples show how to submit a job via the command line script.

  • Run example program with no arguments:
  1. ./bin/flink run ./examples/batch/WordCount.jar
  • Run example program with arguments for input and result files:
  1. ./bin/flink run ./examples/batch/WordCount.jar \
  2. --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
  • Run example program with parallelism 16 and arguments for input and result files:
  1. ./bin/flink run -p 16 ./examples/batch/WordCount.jar \
  2. --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
  • Run example program with flink log output disabled:
  1. ./bin/flink run -q ./examples/batch/WordCount.jar
  • Run example program in detached mode:
  1. ./bin/flink run -d ./examples/batch/WordCount.jar
  • Run example program on a specific JobManager:
  1. ./bin/flink run -m myJMHost:8081 \
  2. ./examples/batch/WordCount.jar \
  3. --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
  • Run example program with a specific class as an entry point:
  1. ./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
  2. ./examples/batch/WordCount.jar \
  3. --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
  • Run example program using a per-job YARN cluster:
  1. ./bin/flink run -m yarn-cluster \
  2. ./examples/batch/WordCount.jar \
  3. --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out

Note: When submitting Python jobs via flink run, Flink invokes the "python" command. Run the following command to confirm that the "python" command in the current environment points to Python 3.5 or higher:

  1. $ python --version
  2. # the version printed here must be 3.5+
  • Submit a Python Table job:
  1. ./bin/flink run -py WordCount.py
  • Submit a Python Table job with multiple dependencies:
  1. ./bin/flink run -py examples/python/table/batch/word_count.py \
  2. -pyfs file:///user.txt,hdfs:///$namenode_address/username.txt
  • Submit a Python Table job and specify dependent JAR files:
  1. ./bin/flink run -py examples/python/table/batch/word_count.py -j <jarFile>
  • Submit a Python Table job with multiple dependencies, with the main entry module of the Python job specified via the --pym option:
  1. ./bin/flink run -pym batch.word_count -pyfs examples/python/table/batch
  • Submit a Python Table job with parallelism 16:
  1. ./bin/flink run -p 16 -py examples/python/table/batch/word_count.py
  • Submit a Python Table job with Flink log output disabled:
  1. ./bin/flink run -q -py examples/python/table/batch/word_count.py
  • Submit a Python Table job in detached mode:
  1. ./bin/flink run -d -py examples/python/table/batch/word_count.py
  • Submit a Python Table job to a specific JobManager:
  1. ./bin/flink run -m myJMHost:8081 \
  2. -py examples/python/table/batch/word_count.py
  • Submit a Python Table job to run on a YARN cluster:
  1. ./bin/flink run -m yarn-cluster \
  2. -py examples/python/table/batch/word_count.py

Job Management Examples


  • Display the optimized execution plan for the WordCount example program as JSON:
  1. ./bin/flink info ./examples/batch/WordCount.jar \
  2. --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
  • List scheduled and running jobs (including their JobIDs):
  1. ./bin/flink list
  • List scheduled jobs (including their JobIDs):
  1. ./bin/flink list -s
  • List running jobs (including their JobIDs):
  1. ./bin/flink list -r
  • List all existing jobs (including their JobIDs):
  1. ./bin/flink list -a
  • List running Flink jobs inside Flink YARN session:
  1. ./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r
  • Cancel a job:
  1. ./bin/flink cancel <jobID>
  • Cancel a job with a savepoint (deprecated; use “stop” instead):
  1. ./bin/flink cancel -s [targetDirectory] <jobID>
  • Gracefully stop a job with a savepoint (streaming jobs only):
  1. ./bin/flink stop [-p targetDirectory] [-d] <jobID>

Savepoints

Savepoints are controlled via the command line client:

Trigger a Savepoint

  1. ./bin/flink savepoint <jobId> [savepointDirectory]

This will trigger a savepoint for the job with ID jobId, and return the path of the created savepoint. You need this path to restore and dispose savepoints.

Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.

If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
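
A default directory is configured via the state.savepoints.dir option; a minimal sketch of the corresponding entry in conf/flink-conf.yaml (the HDFS path is only an illustrative placeholder):

  1. # conf/flink-conf.yaml: used when no [savepointDirectory] is passed on the command line
  2. state.savepoints.dir: hdfs:///flink/savepoints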

Trigger a Savepoint with YARN

  1. ./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>

This will trigger a savepoint for the job with ID jobId and YARN application ID yarnAppId, and return the path of the created savepoint.

Everything else is the same as described in the above Trigger a Savepoint section.

Stop

Use the stop action to gracefully stop a running streaming job with a savepoint.

  1. ./bin/flink stop [-p targetDirectory] [-d] <jobID>

A “stop” call is a more graceful way of stopping a running streaming job, as the “stop” signal flows from source to sink. When the user requests to stop a job, all sources will be requested to send the last checkpoint barrier that will trigger a savepoint, and after the successful completion of that savepoint, they will finish by calling their cancel() method. If the -d flag is specified, then a MAX_WATERMARK will be emitted before the last checkpoint barrier. This will cause all registered event-time timers to fire, thus flushing out any state that is waiting for a specific watermark, e.g. windows. The job will keep running until all sources have properly shut down. This allows the job to finish processing all in-flight data.
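
As a concrete sketch, the following drains a running job (emitting MAX_WATERMARK first) and writes the resulting savepoint to an explicit target directory; the job ID and HDFS path are placeholders:

  1. # -d drains the pipeline, -p sets the savepoint target directory
  2. ./bin/flink stop -d -p hdfs:///flink/savepoints <jobID>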

Cancel with a savepoint (deprecated)

You can atomically trigger a savepoint and cancel a job.

  1. ./bin/flink cancel -s [savepointDirectory] <jobID>

If no savepoint directory is configured, you need to configure a default savepoint directory for the Flink installation (see Savepoints).

The job will only be cancelled if the savepoint succeeds.

Note: Cancelling a job with savepoint is deprecated. Use "stop" instead.

Restore a savepoint

  1. ./bin/flink run -s <savepointPath> ...

The run command has a savepoint flag to submit a job, which restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.

By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.

  1. ./bin/flink run -s <savepointPath> -n ...

This is useful if your program dropped an operator that was part of the savepoint.

Dispose a savepoint

  1. ./bin/flink savepoint -d <savepointPath>

Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.

If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:

  1. ./bin/flink savepoint -d <savepointPath> -j <jarFile>

Otherwise, you will run into a ClassNotFoundException.

Usage

The command line syntax is as follows:

  1. ./flink <ACTION> [OPTIONS] [ARGUMENTS]
  2. The following actions are available:
  3. Action "run" compiles and runs a program.
  4. Syntax: run [OPTIONS] <jar-file> <arguments>
  5. "run" action options:
  6. -c,--class <classname> Class with the program entry point
  7. ("main()" method). Only needed if the
  8. JAR file does not specify the class in
  9. its manifest.
  10. -C,--classpath <url> Adds a URL to each user code
  11. classloader on all nodes in the
  12. cluster. The paths must specify a
  13. protocol (e.g. file://) and be
  14. accessible on all nodes (e.g. by means
  15. of a NFS share). You can use this
  16. option multiple times for specifying
  17. more than one URL. The protocol must
  18. be supported by the {@link
  19. java.net.URLClassLoader}.
  20. -d,--detached If present, runs the job in detached
  21. mode
  22. -n,--allowNonRestoredState Allow to skip savepoint state that
  23. cannot be restored. You need to allow
  24. this if you removed an operator from
  25. your program that was part of the
  26. program when the savepoint was
  27. triggered.
  28. -p,--parallelism <parallelism> The parallelism with which to run the
  29. program. Optional flag to override the
  30. default value specified in the
  31. configuration.
  32. -py,--python <pythonFile> Python script with the program entry
  33. point. The dependent resources can be
  34. configured with the `--pyFiles`
  35. option.
  36. -pyarch,--pyArchives <arg> Add python archive files for job. The
  37. archive files will be extracted to the
  38. working directory of python UDF
  39. worker. Currently only zip-format is
  40. supported. For each archive file, a
  41. target directory can be specified. If the
  42. target directory name is specified,
  43. the archive file will be extracted to
  44. a directory with the
  45. specified name. Otherwise, the archive
  46. file will be extracted to a directory
  47. with the same name of the archive
  48. file. The files uploaded via this
  49. option are accessible via relative
  50. path. '#' could be used as the
  51. separator of the archive file path and
  52. the target directory name. Comma (',')
  53. could be used as the separator to
  54. specify multiple archive files. This
  55. option can be used to upload the
  56. virtual environment, the data files
  57. used in Python UDF (e.g.: --pyArchives
  58. file:///tmp/py37.zip,file:///tmp/data.
  59. zip#data --pyExecutable
  60. py37.zip/py37/bin/python). The data
  61. files could be accessed in Python UDF,
  62. e.g.: f = open('data/data.txt', 'r').
  63. -pyexec,--pyExecutable <arg> Specify the path of the python
  64. interpreter used to execute the python
  65. UDF worker (e.g.: --pyExecutable
  66. /usr/local/bin/python3). The python
  67. UDF worker depends on Python 3.5+,
  68. Apache Beam (version == 2.15.0), Pip
  69. (version >= 7.1.0) and SetupTools
  70. (version >= 37.0.0). Please ensure
  71. that the specified environment meets
  72. the above requirements.
  73. -pyfs,--pyFiles <pythonFiles> Attach custom python files for job.
  74. These files will be added to the
  75. PYTHONPATH of both the local client
  76. and the remote python UDF worker. The
  77. standard python resource file suffixes
  78. such as .py/.egg/.zip or directory are
  79. all supported. Comma (',') could be
  80. used as the separator to specify
  81. multiple files (e.g.: --pyFiles
  82. file:///tmp/myresource.zip,hdfs:///$na
  83. menode_address/myresource2.zip).
  84. -pym,--pyModule <pythonModule> Python module with the program entry
  85. point. This option must be used in
  86. conjunction with `--pyFiles`.
  87. -pyreq,--pyRequirements <arg> Specify a requirements.txt file which
  88. defines the third-party dependencies.
  89. These dependencies will be installed
  90. and added to the PYTHONPATH of the
  91. python UDF worker. A directory which
  92. contains the installation packages of
  93. these dependencies could be specified
  94. optionally. Use '#' as the separator
  95. if the optional parameter exists
  96. (e.g.: --pyRequirements
  97. file:///tmp/requirements.txt#file:///t
  98. mp/cached_dir).
  99. -s,--fromSavepoint <savepointPath> Path to a savepoint to restore the job
  100. from (for example
  101. hdfs:///flink/savepoint-1537).
  102. -sae,--shutdownOnAttachedExit If the job is submitted in attached
  103. mode, perform a best-effort cluster
  104. shutdown when the CLI is terminated
  105. abruptly, e.g., in response to a user
  106. interrupt, such as typing Ctrl + C.
  107. Options for yarn-cluster mode:
  108. -d,--detached If present, runs the job in detached
  109. mode
  110. -m,--jobmanager <arg> Address of the JobManager (master) to
  111. which to connect. Use this flag to
  112. connect to a different JobManager than
  113. the one specified in the
  114. configuration.
  115. -yat,--yarnapplicationType <arg> Set a custom application type for the
  116. application on YARN
  117. -yD <property=value> use value for given property
  118. -yd,--yarndetached If present, runs the job in detached
  119. mode (deprecated; use non-YARN
  120. specific option instead)
  121. -yh,--yarnhelp Help for the Yarn session CLI.
  122. -yid,--yarnapplicationId <arg> Attach to running YARN session
  123. -yj,--yarnjar <arg> Path to Flink jar file
  124. -yjm,--yarnjobManagerMemory <arg> Memory for JobManager Container with
  125. optional unit (default: MB)
  126. -ynl,--yarnnodeLabel <arg> Specify YARN node label for the YARN
  127. application
  128. -ynm,--yarnname <arg> Set a custom name for the application
  129. on YARN
  130. -yq,--yarnquery Display available YARN resources
  131. (memory, cores)
  132. -yqu,--yarnqueue <arg> Specify YARN queue.
  133. -ys,--yarnslots <arg> Number of slots per TaskManager
  134. -yt,--yarnship <arg> Ship files in the specified directory
  135. (t for transfer)
  136. -ytm,--yarntaskManagerMemory <arg> Memory per TaskManager Container with
  137. optional unit (default: MB)
  138. -yz,--yarnzookeeperNamespace <arg> Namespace to create the Zookeeper
  139. sub-paths for high availability mode
  140. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
  141. sub-paths for high availability mode
  142. Options for executor mode:
  143. -D <property=value> Generic configuration options for
  144. execution/deployment and for the configured executor.
  145. The available options can be found at
  146. https://ci.apache.org/projects/flink/flink-docs-stabl
  147. e/ops/config.html
  148. -e,--executor <arg> The name of the executor to be used for executing the
  149. given job, which is equivalent to the
  150. "execution.target" config option. The currently
  151. available executors are: "remote", "local",
  152. "kubernetes-session", "yarn-per-job", "yarn-session".
  153. Options for default mode:
  154. -m,--jobmanager <arg> Address of the JobManager (master) to which
  155. to connect. Use this flag to connect to a
  156. different JobManager than the one specified
  157. in the configuration.
  158. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
  159. for high availability mode
  160. Action "info" shows the optimized execution plan of the program (JSON).
  161. Syntax: info [OPTIONS] <jar-file> <arguments>
  162. "info" action options:
  163. -c,--class <classname> Class with the program entry point
  164. ("main()" method). Only needed if the JAR
  165. file does not specify the class in its
  166. manifest.
  167. -p,--parallelism <parallelism> The parallelism with which to run the
  168. program. Optional flag to override the
  169. default value specified in the
  170. configuration.
  171. Action "list" lists running and scheduled programs.
  172. Syntax: list [OPTIONS]
  173. "list" action options:
  174. -a,--all Show all programs and their JobIDs
  175. -r,--running Show only running programs and their JobIDs
  176. -s,--scheduled Show only scheduled programs and their JobIDs
  177. Options for yarn-cluster mode:
  178. -m,--jobmanager <arg> Address of the JobManager (master) to
  179. which to connect. Use this flag to connect
  180. to a different JobManager than the one
  181. specified in the configuration.
  182. -yid,--yarnapplicationId <arg> Attach to running YARN session
  183. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
  184. sub-paths for high availability mode
  185. Options for executor mode:
  186. -D <property=value> Generic configuration options for
  187. execution/deployment and for the configured executor.
  188. The available options can be found at
  189. https://ci.apache.org/projects/flink/flink-docs-stabl
  190. e/ops/config.html
  191. -e,--executor <arg> The name of the executor to be used for executing the
  192. given job, which is equivalent to the
  193. "execution.target" config option. The currently
  194. available executors are: "remote", "local",
  195. "kubernetes-session", "yarn-per-job", "yarn-session".
  196. Options for default mode:
  197. -m,--jobmanager <arg> Address of the JobManager (master) to which
  198. to connect. Use this flag to connect to a
  199. different JobManager than the one specified
  200. in the configuration.
  201. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
  202. for high availability mode
  203. Action "stop" stops a running program with a savepoint (streaming jobs only).
  204. Syntax: stop [OPTIONS] <Job ID>
  205. "stop" action options:
  206. -d,--drain Send MAX_WATERMARK before taking the
  207. savepoint and stopping the pipeline.
  208. -p,--savepointPath <savepointPath> Path to the savepoint (for example
  209. hdfs:///flink/savepoint-1537). If no
  210. directory is specified, the configured
  211. default will be used
  212. ("state.savepoints.dir").
  213. Options for yarn-cluster mode:
  214. -m,--jobmanager <arg> Address of the JobManager (master) to
  215. which to connect. Use this flag to connect
  216. to a different JobManager than the one
  217. specified in the configuration.
  218. -yid,--yarnapplicationId <arg> Attach to running YARN session
  219. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
  220. sub-paths for high availability mode
  221. Options for executor mode:
  222. -D <property=value> Generic configuration options for
  223. execution/deployment and for the configured executor.
  224. The available options can be found at
  225. https://ci.apache.org/projects/flink/flink-docs-stabl
  226. e/ops/config.html
  227. -e,--executor <arg> The name of the executor to be used for executing the
  228. given job, which is equivalent to the
  229. "execution.target" config option. The currently
  230. available executors are: "remote", "local",
  231. "kubernetes-session", "yarn-per-job", "yarn-session".
  232. Options for default mode:
  233. -m,--jobmanager <arg> Address of the JobManager (master) to which
  234. to connect. Use this flag to connect to a
  235. different JobManager than the one specified
  236. in the configuration.
  237. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
  238. for high availability mode
  239. Action "cancel" cancels a running program.
  240. Syntax: cancel [OPTIONS] <Job ID>
  241. "cancel" action options:
  242. -s,--withSavepoint <targetDirectory> **DEPRECATION WARNING**: Cancelling
  243. a job with savepoint is deprecated.
  244. Use "stop" instead.
  245. Trigger savepoint and cancel job.
  246. The target directory is optional. If
  247. no directory is specified, the
  248. configured default directory
  249. (state.savepoints.dir) is used.
  250. Options for yarn-cluster mode:
  251. -m,--jobmanager <arg> Address of the JobManager (master) to
  252. which to connect. Use this flag to connect
  253. to a different JobManager than the one
  254. specified in the configuration.
  255. -yid,--yarnapplicationId <arg> Attach to running YARN session
  256. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
  257. sub-paths for high availability mode
  258. Options for executor mode:
  259. -D <property=value> Generic configuration options for
  260. execution/deployment and for the configured executor.
  261. The available options can be found at
  262. https://ci.apache.org/projects/flink/flink-docs-stabl
  263. e/ops/config.html
  264. -e,--executor <arg> The name of the executor to be used for executing the
  265. given job, which is equivalent to the
  266. "execution.target" config option. The currently
  267. available executors are: "remote", "local",
  268. "kubernetes-session", "yarn-per-job", "yarn-session".
  269. Options for default mode:
  270. -m,--jobmanager <arg> Address of the JobManager (master) to which
  271. to connect. Use this flag to connect to a
  272. different JobManager than the one specified
  273. in the configuration.
  274. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
  275. for high availability mode
  276. Action "savepoint" triggers savepoints for a running job or disposes existing ones.
  277. Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
  278. "savepoint" action options:
  279. -d,--dispose <arg> Path of savepoint to dispose.
  280. -j,--jarfile <jarfile> Flink program JAR file.
  281. Options for yarn-cluster mode:
  282. -m,--jobmanager <arg> Address of the JobManager (master) to
  283. which to connect. Use this flag to connect
  284. to a different JobManager than the one
  285. specified in the configuration.
  286. -yid,--yarnapplicationId <arg> Attach to running YARN session
  287. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
  288. sub-paths for high availability mode
  289. Options for executor mode:
  290. -D <property=value> Generic configuration options for
  291. execution/deployment and for the configured executor.
  292. The available options can be found at
  293. https://ci.apache.org/projects/flink/flink-docs-stabl
  294. e/ops/config.html
  295. -e,--executor <arg> The name of the executor to be used for executing the
  296. given job, which is equivalent to the
  297. "execution.target" config option. The currently
  298. available executors are: "remote", "local",
  299. "kubernetes-session", "yarn-per-job", "yarn-session".
  300. Options for default mode:
  301. -m,--jobmanager <arg> Address of the JobManager (master) to which
  302. to connect. Use this flag to connect to a
  303. different JobManager than the one specified
  304. in the configuration.
  305. -z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
  306. for high availability mode