Command-Line Interface

Flink provides a Command-Line Interface (CLI) to run programs that are packaged as JAR files, and control their execution. The CLI is part of any Flink setup, available in local single node setups and in distributed setups. It is located under <flink-home>/bin/flink and connects by default to the running Flink master (JobManager) that was started from the same installation directory.
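
Running the client without any arguments should print the usage text reproduced in the Usage section at the end of this page, which is a quick way to verify the setup (a minimal check, assuming a standard installation):

      # Print the available actions and their options (see "Usage" below)
      ./bin/flink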

The command line can be used to

  • submit jobs for execution,
  • cancel a running job,
  • provide information about a job,
  • list running and waiting jobs,
  • trigger and dispose savepoints.

A prerequisite to using the command line interface is that the Flink master (JobManager) has been started (via <flink-home>/bin/start-cluster.sh) or that a YARN environment is available.
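
For a local setup this means starting the standalone cluster first and then submitting a job from the same installation directory, for example:

      # Start a local standalone cluster (Flink master plus a TaskManager)
      ./bin/start-cluster.sh

      # Submit one of the packaged example jobs
      ./bin/flink run ./examples/batch/WordCount.jar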

Examples

Job Submission Examples


These examples show how to submit a job using the CLI.

  • Run example program with no arguments:

      ./bin/flink run ./examples/batch/WordCount.jar

  • Run example program with arguments for input and result files:

      ./bin/flink run ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

  • Run example program with parallelism 16 and arguments for input and result files:

      ./bin/flink run -p 16 ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

  • Run example program with flink log output disabled:

      ./bin/flink run -q ./examples/batch/WordCount.jar

  • Run example program in detached mode:

      ./bin/flink run -d ./examples/batch/WordCount.jar

  • Run example program on a specific JobManager:

      ./bin/flink run -m myJMHost:8081 \
        ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

  • Run example program with a specific class as an entry point:

      ./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
        ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
  • Run example program using a per-job YARN cluster with 2 TaskManagers:

      ./bin/flink run -m yarn-cluster -yn 2 \
        ./examples/batch/WordCount.jar \
        --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out
  • Run Python Table program:

      ./bin/flink run -py examples/python/table/batch/word_count.py

  • Run Python Table program with pyFiles:

      ./bin/flink run -py examples/python/table/batch/word_count.py \
        -pyfs file:///user.txt,hdfs:///$namenode_address/username.txt

  • Run Python Table program with pyFiles and pyModule:

      ./bin/flink run -pym batch.word_count -pyfs examples/python/table/batch

  • Run Python Table program with parallelism 16:

      ./bin/flink run -p 16 -py examples/python/table/batch/word_count.py

  • Run Python Table program with flink log output disabled:

      ./bin/flink run -q -py examples/python/table/batch/word_count.py

  • Run Python Table program in detached mode:

      ./bin/flink run -d -py examples/python/table/batch/word_count.py

  • Run Python Table program on a specific JobManager:

      ./bin/flink run -m myJMHost:8081 \
        -py examples/python/table/batch/word_count.py
  • Run Python Table program using a per-job YARN cluster with 2 TaskManagers:

      ./bin/flink run -m yarn-cluster -yn 2 \
        -py examples/python/table/batch/word_count.py

Job Management Examples


These examples show how to manage a job using the CLI.

  • Display the optimized execution plan for the WordCount example program as JSON:

      ./bin/flink info ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

  • List scheduled and running jobs (including their JobIDs):

      ./bin/flink list

  • List scheduled jobs (including their JobIDs):

      ./bin/flink list -s

  • List running jobs (including their JobIDs):

      ./bin/flink list -r

  • List all existing jobs (including their JobIDs):

      ./bin/flink list -a

  • List running Flink jobs inside Flink YARN session:

      ./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r

  • Cancel a job:

      ./bin/flink cancel <jobID>

  • Cancel a job with a savepoint (deprecated; use "stop" instead):

      ./bin/flink cancel -s [targetDirectory] <jobID>

  • Gracefully stop a job with a savepoint (streaming jobs only):

      ./bin/flink stop [-p targetDirectory] [-d] <jobID>

Savepoints

Savepoints are controlled via the command line client:

Trigger a Savepoint

      ./bin/flink savepoint <jobId> [savepointDirectory]

This triggers a savepoint for the job with ID jobId and returns the path of the created savepoint. You need this path to restore and dispose savepoints.

Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.

If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
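
As a minimal sketch, the default directory is set via the state.savepoints.dir key in conf/flink-conf.yaml; the path below is an example value only and should point at storage the JobManager can access:

      # conf/flink-conf.yaml -- example value only
      state.savepoints.dir: hdfs:///flink/savepoints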

Trigger a Savepoint with YARN

      ./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>

This triggers a savepoint for the job with ID jobId running in the YARN application with ID yarnAppId, and returns the path of the created savepoint.

Everything else is the same as described in the above Trigger a Savepoint section.
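
For illustration, a concrete call might look like the following; the job ID and YARN application ID are placeholders, not real values:

      # Placeholder IDs -- substitute the output of "flink list" and "yarn application -list"
      ./bin/flink savepoint 5e20cb6b0f357591171dfcca2eea09de \
        hdfs:///flink/savepoints -yid application_1546543416304_0001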

Stop

Use the stop action to gracefully stop a running streaming job with a savepoint.

      ./bin/flink stop [-p targetDirectory] [-d] <jobID>

A “stop” call is a more graceful way of stopping a running streaming job, as the “stop” signal flows from source to sink. When the user requests to stop a job, all sources will be requested to send the last checkpoint barrier that will trigger a savepoint, and after the successful completion of that savepoint, they will finish by calling their cancel() method. If the -d flag is specified, then a MAX_WATERMARK will be emitted before the last checkpoint barrier. This will cause all registered event-time timers to fire, thus flushing out any state that is waiting for a specific watermark, e.g. windows. The job will keep running until all sources properly shut down. This allows the job to finish processing all in-flight data.
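
For example, a drained stop that writes its savepoint to an explicit directory might look like this (the job ID and target directory are placeholders):

      # Emit MAX_WATERMARK, take a savepoint, then stop the job
      ./bin/flink stop -p hdfs:///flink/savepoints -d 5e20cb6b0f357591171dfcca2eea09de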

Cancel with a savepoint (deprecated)

You can atomically trigger a savepoint and cancel a job.

      ./bin/flink cancel -s [savepointDirectory] <jobID>

If no savepoint directory is specified on the command line, a default savepoint directory must be configured for the Flink installation (see Savepoints).

The job will only be cancelled if the savepoint succeeds.

Note: Cancelling a job with savepoint is deprecated. Use "stop" instead.

Restore a savepoint

      ./bin/flink run -s <savepointPath> ...

The run command has a savepoint flag (-s) to submit a job that restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.

By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.

      ./bin/flink run -s <savepointPath> -n ...

This is useful if your program dropped an operator that was part of the savepoint.
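
Putting both flags together, a restore that tolerates dropped operators might look like this; hdfs:///flink/savepoint-1537 is the example path used elsewhere on this page and myJob.jar is a placeholder for your own program:

      # Restore from a savepoint, skipping state that no longer maps to an operator
      ./bin/flink run -s hdfs:///flink/savepoint-1537 -n ./path/to/myJob.jar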

Dispose a savepoint

      ./bin/flink savepoint -d <savepointPath>

Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.

If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:

      ./bin/flink savepoint -d <savepointPath> -j <jarFile>

Otherwise, you will run into a ClassNotFoundException.

Usage

The command line syntax is as follows:

  ./flink <ACTION> [OPTIONS] [ARGUMENTS]

  The following actions are available:

  Action "run" compiles and runs a program.

    Syntax: run [OPTIONS] <jar-file> <arguments>
    "run" action options:
      -c,--class <classname>               Class with the program entry point
                                           ("main()" method or "getPlan()"
                                           method). Only needed if the JAR file
                                           does not specify the class in its
                                           manifest.
      -C,--classpath <url>                 Adds a URL to each user code
                                           classloader on all nodes in the
                                           cluster. The paths must specify a
                                           protocol (e.g. file://) and be
                                           accessible on all nodes (e.g. by means
                                           of a NFS share). You can use this
                                           option multiple times for specifying
                                           more than one URL. The protocol must
                                           be supported by the {@link
                                           java.net.URLClassLoader}.
      -d,--detached                        If present, runs the job in detached
                                           mode
      -n,--allowNonRestoredState           Allow to skip savepoint state that
                                           cannot be restored. You need to allow
                                           this if you removed an operator from
                                           your program that was part of the
                                           program when the savepoint was
                                           triggered.
      -p,--parallelism <parallelism>       The parallelism with which to run the
                                           program. Optional flag to override the
                                           default value specified in the
                                           configuration.
      -py,--python <python-file>           Python script with the program entry
                                           point. The dependent resources can be
                                           configured with the `--pyFiles` option.
      -pyfs,--pyFiles <python-files>       Attach custom python files for job.
                                           Comma can be used as the separator to
                                           specify multiple files. The standard
                                           python resource file suffixes such as
                                           .py/.egg/.zip are all supported. (e.g.
                                           --pyFiles file:///tmp/myresource.zip,
                                           hdfs:///$namenode_address/myresource2.zip)
      -pym,--pyModule <python-module>      Python module with the program entry
                                           point. This option must be used in
                                           conjunction with `--pyFiles`.
      -q,--sysoutLogging                   If present, suppress logging output to
                                           standard out.
      -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job
                                           from (for example
                                           hdfs:///flink/savepoint-1537).
      -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                           mode, perform a best-effort cluster
                                           shutdown when the CLI is terminated
                                           abruptly, e.g., in response to a user
                                           interrupt, such as typing Ctrl + C.
    Options for yarn-cluster mode:
      -d,--detached                        If present, runs the job in detached
                                           mode
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                           mode, perform a best-effort cluster
                                           shutdown when the CLI is terminated
                                           abruptly, e.g., in response to a user
                                           interrupt, such as typing Ctrl + C.
      -yat,--yarnapplicationType <arg>     Set a custom application type for the
                                           application on YARN
      -yD <property=value>                 use value for given property
      -yd,--yarndetached                   If present, runs the job in detached
                                           mode (deprecated; use non-YARN
                                           specific option instead)
      -yh,--yarnhelp                       Help for the Yarn session CLI.
      -yid,--yarnapplicationId <arg>       Attach to running YARN session
      -yj,--yarnjar <arg>                  Path to Flink jar file
      -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with
                                           optional unit (default: MB)
      -yn,--yarncontainer <arg>            Number of YARN containers to allocate
                                           (=Number of Task Managers)
      -ynm,--yarnname <arg>                Set a custom name for the application
                                           on YARN
      -yq,--yarnquery                      Display available YARN resources
                                           (memory, cores)
      -yqu,--yarnqueue <arg>               Specify YARN queue.
      -ys,--yarnslots <arg>                Number of slots per TaskManager
      -yst,--yarnstreaming                 Start Flink in streaming mode
      -yt,--yarnship <arg>                 Ship files in the specified directory
                                           (t for transfer), multiple options are
                                           supported.
      -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with
                                           optional unit (default: MB)
      -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper
                                           sub-paths for high availability mode
      -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN
                                           application
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode
    Options for default mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode

  Action "info" shows the optimized execution plan of the program (JSON).

    Syntax: info [OPTIONS] <jar-file> <arguments>
    "info" action options:
      -c,--class <classname>               Class with the program entry point
                                           ("main()" method or "getPlan()"
                                           method). Only needed if the JAR file
                                           does not specify the class in its
                                           manifest.
      -p,--parallelism <parallelism>       The parallelism with which to run the
                                           program. Optional flag to override the
                                           default value specified in the
                                           configuration.

  Action "list" lists running and scheduled programs.

    Syntax: list [OPTIONS]
    "list" action options:
      -r,--running                         Show only running programs and their
                                           JobIDs
      -s,--scheduled                       Show only scheduled programs and their
                                           JobIDs
    Options for yarn-cluster mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -yid,--yarnapplicationId <arg>       Attach to running YARN session
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode
    Options for default mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode

  Action "stop" stops a running program with a savepoint (streaming jobs only).

    Syntax: stop [OPTIONS] <Job ID>
    "stop" action options:
      -d,--drain                           Send MAX_WATERMARK before taking the
                                           savepoint and stopping the pipeline.
      -p,--savepointPath <savepointPath>   Path to the savepoint (for example
                                           hdfs:///flink/savepoint-1537). If no
                                           directory is specified, the configured
                                           default will be used
                                           ("state.savepoints.dir").
    Options for yarn-cluster mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -yid,--yarnapplicationId <arg>       Attach to running YARN session
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode
    Options for default mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode

  Action "cancel" cancels a running program.

    Syntax: cancel [OPTIONS] <Job ID>
    "cancel" action options:
      -s,--withSavepoint <targetDirectory>   **DEPRECATION WARNING**: Cancelling
                                             a job with savepoint is deprecated.
                                             Use "stop" instead. Trigger
                                             savepoint and cancel job. The target
                                             directory is optional. If no
                                             directory is specified, the
                                             configured default directory
                                             (state.savepoints.dir) is used.
    Options for yarn-cluster mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -yid,--yarnapplicationId <arg>       Attach to running YARN session
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode
    Options for default mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode

  Action "savepoint" triggers savepoints for a running job or disposes existing ones.

    Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
    "savepoint" action options:
      -d,--dispose <arg>                   Path of savepoint to dispose.
      -j,--jarfile <jarfile>               Flink program JAR file.
    Options for yarn-cluster mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -yid,--yarnapplicationId <arg>       Attach to running YARN session
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode
    Options for default mode:
      -m,--jobmanager <arg>                Address of the JobManager (master) to
                                           which to connect. Use this flag to
                                           connect to a different JobManager than
                                           the one specified in the configuration.
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode