Coroutines and Tasks

This section outlines high-level asyncio APIs to work with coroutines and Tasks.

Coroutines

Coroutines declared with the async/await syntax are the preferred way of writing asyncio applications. For example, the following snippet of code prints "hello", waits 1 second, and then prints "world":

  >>> import asyncio

  >>> async def main():
  ...     print('hello')
  ...     await asyncio.sleep(1)
  ...     print('world')

  >>> asyncio.run(main())
  hello
  world

Note that simply calling a coroutine will not schedule it to be executed:

  >>> main()
  <coroutine object main at 0x1053bb7c8>

To actually run a coroutine, asyncio provides the following mechanisms:

  • The asyncio.run() function to run the top-level entry point "main()" function (see the above example.)

  • Awaiting on a coroutine. The following snippet of code will print "hello" after waiting for 1 second, and then print "world" after waiting for another 2 seconds:

    import asyncio
    import time

    async def say_after(delay, what):
        await asyncio.sleep(delay)
        print(what)

    async def main():
        print(f"started at {time.strftime('%X')}")

        await say_after(1, 'hello')
        await say_after(2, 'world')

        print(f"finished at {time.strftime('%X')}")

    asyncio.run(main())

    Expected output:

    started at 17:13:52
    hello
    world
    finished at 17:13:55
  • The asyncio.create_task() function to run coroutines concurrently as asyncio Tasks.

    Let's modify the above example and run two say_after coroutines concurrently:

    async def main():
        task1 = asyncio.create_task(
            say_after(1, 'hello'))

        task2 = asyncio.create_task(
            say_after(2, 'world'))

        print(f"started at {time.strftime('%X')}")

        # Wait until both tasks are completed (should take
        # around 2 seconds.)
        await task1
        await task2

        print(f"finished at {time.strftime('%X')}")

    Note that the expected output now shows that the snippet runs 1 second faster than before:

    started at 17:14:32
    hello
    world
    finished at 17:14:34
  • The asyncio.TaskGroup class provides a more modern alternative to create_task(). Using this API, the last example becomes:

    async def main():
        async with asyncio.TaskGroup() as tg:
            task1 = tg.create_task(
                say_after(1, 'hello'))

            task2 = tg.create_task(
                say_after(2, 'world'))

            print(f"started at {time.strftime('%X')}")

        # The wait is implicit when the context manager exits.

        print(f"finished at {time.strftime('%X')}")

    The timing and output should be the same as for the previous version.

    New in version 3.11: asyncio.TaskGroup.

Awaitables

We say that an object is an awaitable object if it can be used in an await expression. Many asyncio APIs are designed to accept awaitables.

There are three main types of awaitable objects: coroutines, Tasks, and Futures.

Coroutines

Python coroutines are awaitables and therefore can be awaited from other coroutines:

  import asyncio

  async def nested():
      return 42

  async def main():
      # Nothing happens if we just call "nested()".
      # A coroutine object is created but not awaited,
      # so it *won't run at all*.
      nested()

      # Let's do it differently now and await it:
      print(await nested())  # will print "42".

  asyncio.run(main())

Important

In this documentation the term "coroutine" can be used for two closely related concepts:

  • a coroutine function: an async def function;

  • a coroutine object: an object returned by calling a coroutine function.

Tasks

Tasks are used to schedule coroutines concurrently.

When a coroutine is wrapped into a Task with functions like asyncio.create_task() the coroutine is automatically scheduled to run soon:

  import asyncio

  async def nested():
      return 42

  async def main():
      # Schedule nested() to run soon concurrently
      # with "main()".
      task = asyncio.create_task(nested())

      # "task" can now be used to cancel "nested()", or
      # can simply be awaited to wait until it is complete:
      await task

  asyncio.run(main())

Futures

A Future is a special low-level awaitable object that represents an eventual result of an asynchronous operation.

When a Future object is awaited it means that the coroutine will wait until the Future is resolved in some other place.

Future objects in asyncio are needed to allow callback-based code to be used with async/await.

Normally there is no need to create Future objects at the application level code.

Future objects, sometimes exposed by libraries and some asyncio APIs, can be awaited:

  async def main():
      await function_that_returns_a_future_object()

      # this is also valid:
      await asyncio.gather(
          function_that_returns_a_future_object(),
          some_python_coroutine()
      )

A good example of a low-level function that returns a Future object is loop.run_in_executor().

Creating Tasks

asyncio.create_task(coro, *, name=None, context=None)

Wrap the coro coroutine into a Task and schedule its execution. Return the Task object.

If name is not None, it is set as the name of the task using Task.set_name().

An optional keyword-only context argument allows specifying a custom contextvars.Context for the coro to run in. The current context copy is created when no context is provided.
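
A minimal sketch of how a custom context might be passed in, assuming Python 3.11+ for the context parameter; the request_id variable and the handler() coroutine are hypothetical names used only for illustration:

  import asyncio
  import contextvars

  request_id = contextvars.ContextVar("request_id", default=None)

  async def handler():
      # The task runs in the context supplied to create_task(),
      # so it sees the value set in that context.
      print("request_id:", request_id.get())

  async def main():
      ctx = contextvars.copy_context()
      ctx.run(request_id.set, "abc123")  # set only inside the copied context
      task = asyncio.create_task(handler(), context=ctx)
      await task

  asyncio.run(main())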

The task is executed in the loop returned by get_running_loop(); RuntimeError is raised if there is no running loop in the current thread.

Note

asyncio.TaskGroup.create_task() is a newer alternative that allows for convenient waiting for a group of related tasks.

Important

Save a reference to the result of this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn’t referenced elsewhere may get garbage collected at any time, even before it’s done. For reliable “fire-and-forget” background tasks, gather them in a collection:

  background_tasks = set()

  for i in range(10):
      task = asyncio.create_task(some_coro(param=i))

      # Add task to the set. This creates a strong reference.
      background_tasks.add(task)

      # To prevent keeping references to finished tasks forever,
      # make each task remove its own reference from the set after
      # completion:
      task.add_done_callback(background_tasks.discard)

New in version 3.7.

Changed in version 3.8: Added the name parameter.

Changed in version 3.11: Added the context parameter.

Task Cancellation

Tasks can easily and safely be cancelled. When a task is cancelled, asyncio.CancelledError will be raised in the task at the next opportunity.

It is recommended that coroutines use try/finally blocks to robustly perform clean-up logic. In case asyncio.CancelledError is explicitly caught, it should generally be propagated when clean-up is complete. Most code can safely ignore asyncio.CancelledError.

The asyncio components that enable structured concurrency, like asyncio.TaskGroup and asyncio.timeout(), are implemented using cancellation internally and might misbehave if a coroutine swallows asyncio.CancelledError. Similarly, user code should not call uncancel.
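
A minimal sketch of the recommended pattern, using a hypothetical worker() coroutine that performs clean-up in finally and re-raises asyncio.CancelledError after catching it:

  import asyncio

  async def worker():
      try:
          await asyncio.sleep(3600)  # stands in for some long operation
      except asyncio.CancelledError:
          print("worker(): cancellation requested, cleaning up")
          raise  # propagate the cancellation once clean-up is done
      finally:
          print("worker(): releasing resources")

  async def main():
      task = asyncio.create_task(worker())
      await asyncio.sleep(1)
      task.cancel()
      try:
          await task
      except asyncio.CancelledError:
          print("main(): worker was cancelled")

  asyncio.run(main())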

Task Groups

Task groups combine a task creation API with a convenient and reliable way to wait for all tasks in the group to finish.

class asyncio.TaskGroup

An asynchronous context manager holding a group of tasks. Tasks can be added to the group using create_task(). All tasks are awaited when the context manager exits.

New in version 3.11.

  • create_task(coro, *, name=None, context=None)

    Create a task in this task group. The signature matches that of asyncio.create_task().

Example:

  async def main():
      async with asyncio.TaskGroup() as tg:
          task1 = tg.create_task(some_coro(...))
          task2 = tg.create_task(another_coro(...))
      print("Both tasks have completed now.")

The async with statement will wait for all tasks in the group to finish. While waiting, new tasks may still be added to the group (for example, by passing tg into one of the coroutines and calling tg.create_task() in that coroutine). Once the last task has finished and the async with block is exited, no new tasks may be added to the group.

The first time any of the tasks belonging to the group fails with an exception other than asyncio.CancelledError, the remaining tasks in the group are cancelled. No further tasks can then be added to the group. At this point, if the body of the async with statement is still active (i.e., __aexit__() hasn’t been called yet), the task directly containing the async with statement is also cancelled. The resulting asyncio.CancelledError will interrupt an await, but it will not bubble out of the containing async with statement.

Once all tasks have finished, if any tasks have failed with an exception other than asyncio.CancelledError, those exceptions are combined in an ExceptionGroup or BaseExceptionGroup (as appropriate; see their documentation) which is then raised.

Two base exceptions are treated specially: If any task fails with KeyboardInterrupt or SystemExit, the task group still cancels the remaining tasks and waits for them, but then the initial KeyboardInterrupt or SystemExit is re-raised instead of ExceptionGroup or BaseExceptionGroup.

If the body of the async with statement exits with an exception (so __aexit__() is called with an exception set), this is treated the same as if one of the tasks failed: the remaining tasks are cancelled and then waited for, and non-cancellation exceptions are grouped into an exception group and raised. The exception passed into __aexit__(), unless it is asyncio.CancelledError, is also included in the exception group. The same special case is made for KeyboardInterrupt and SystemExit as in the previous paragraph.
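
A minimal sketch of catching such an exception group with except*, assuming Python 3.11+; ok() and failing() are hypothetical coroutines used only for illustration:

  import asyncio

  async def ok():
      await asyncio.sleep(1)

  async def failing():
      raise ValueError("boom")

  async def main():
      try:
          async with asyncio.TaskGroup() as tg:
              tg.create_task(ok())
              tg.create_task(failing())
      except* ValueError as eg:
          # eg is an ExceptionGroup holding the ValueError raised by failing();
          # ok() was cancelled when failing() raised.
          print(f"caught {len(eg.exceptions)} ValueError(s)")

  asyncio.run(main())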

Sleeping

coroutine asyncio.sleep(delay, result=None)

Block for delay seconds.

If result is provided, it is returned to the caller when the coroutine completes.

sleep() always suspends the current task, allowing other tasks to run.

Setting the delay to 0 provides an optimized path to allow other tasks to run. This can be used by long-running functions to avoid blocking the event loop for the full duration of the function call.
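
For illustration, a minimal sketch of periodically yielding to the event loop with await asyncio.sleep(0) inside a long-running function; crunch() is a hypothetical name:

  import asyncio

  async def crunch(items):
      total = 0
      for i, item in enumerate(items):
          total += item * item          # stands in for some lengthy work
          if i % 1000 == 0:
              await asyncio.sleep(0)    # yield control to the event loop
      return total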

Example of coroutine displaying the current date every second for 5 seconds:

  import asyncio
  import datetime

  async def display_date():
      loop = asyncio.get_running_loop()
      end_time = loop.time() + 5.0
      while True:
          print(datetime.datetime.now())
          if (loop.time() + 1.0) >= end_time:
              break
          await asyncio.sleep(1)

  asyncio.run(display_date())

Changed in version 3.10: Removed the loop parameter.

Running Tasks Concurrently

awaitable asyncio.gather(*aws, return_exceptions=False)

Run awaitable objects in the aws sequence concurrently.

If any awaitable in aws is a coroutine, it is automatically scheduled as a Task.

If all awaitables are completed successfully, the result is an aggregate list of returned values. The order of result values corresponds to the order of awaitables in aws.

If return_exceptions is False (default), the first raised exception is immediately propagated to the task that awaits on gather(). Other awaitables in the aws sequence won't be cancelled and will continue to run.

If return_exceptions is True, exceptions are treated the same as successful results, and aggregated in the result list.

If gather() is cancelled, all submitted awaitables (that have not completed yet) are also cancelled.

If any Task or Future from the aws sequence is cancelled, it is treated as if it raised CancelledError; the gather() call is not cancelled in this case. This is to prevent the cancellation of one submitted Task/Future from causing other Tasks/Futures to be cancelled.
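
As an illustration of return_exceptions=True, a minimal sketch in which failing() is a hypothetical coroutine that raises; the exception shows up in the result list instead of propagating:

  import asyncio

  async def failing():
      raise RuntimeError("boom")

  async def main():
      results = await asyncio.gather(
          asyncio.sleep(1, result="done"),
          failing(),
          return_exceptions=True,
      )
      print(results)  # ['done', RuntimeError('boom')]

  asyncio.run(main())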

Note

A more modern way to create and run tasks concurrently and wait for their completion is asyncio.TaskGroup.

Example:

  import asyncio

  async def factorial(name, number):
      f = 1
      for i in range(2, number + 1):
          print(f"Task {name}: Compute factorial({number}), currently i={i}...")
          await asyncio.sleep(1)
          f *= i
      print(f"Task {name}: factorial({number}) = {f}")
      return f

  async def main():
      # Schedule three calls *concurrently*:
      L = await asyncio.gather(
          factorial("A", 2),
          factorial("B", 3),
          factorial("C", 4),
      )
      print(L)

  asyncio.run(main())

  # Expected output:
  #
  #     Task A: Compute factorial(2), currently i=2...
  #     Task B: Compute factorial(3), currently i=2...
  #     Task C: Compute factorial(4), currently i=2...
  #     Task A: factorial(2) = 2
  #     Task B: Compute factorial(3), currently i=3...
  #     Task C: Compute factorial(4), currently i=3...
  #     Task B: factorial(3) = 6
  #     Task C: Compute factorial(4), currently i=4...
  #     Task C: factorial(4) = 24
  #     [2, 6, 24]

Note

If return_exceptions is False, cancelling gather() after it has been marked done won't cancel any submitted awaitables. For instance, gather can be marked done after propagating an exception to the caller, therefore, calling gather.cancel() after catching an exception (raised by one of the awaitables) from gather won't cancel any other awaitables.

Changed in version 3.7: If the gather itself is cancelled, the cancellation is propagated regardless of return_exceptions.

Changed in version 3.10: Removed the loop parameter.

Deprecated since version 3.10: Deprecation warning is emitted if no positional arguments are provided or not all positional arguments are Future-like objects and there is no running event loop.

Shielding From Cancellation

awaitable asyncio.shield(aw)

Protect an awaitable object from being cancelled.

If aw is a coroutine it is automatically scheduled as a Task.

The statement:

  task = asyncio.create_task(something())
  res = await shield(task)

is equivalent to:

  res = await something()

except that if the coroutine containing it is cancelled, the Task running in something() is not cancelled. From the point of view of something(), the cancellation did not happen. Although its caller is still cancelled, so the "await" expression still raises a CancelledError.

If something() is cancelled by other means (i.e. from within itself) that would also cancel shield().

If it is desired to completely ignore cancellation (not recommended) the shield() function should be combined with a try/except clause, as follows:

  task = asyncio.create_task(something())
  try:
      res = await shield(task)
  except CancelledError:
      res = None

Important

Save a reference to tasks passed to this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn’t referenced elsewhere may get garbage collected at any time, even before it’s done.

Changed in version 3.10: Removed the loop parameter.

Deprecated since version 3.10: Deprecation warning is emitted if aw is not a Future-like object and there is no running event loop.

Timeouts

coroutine asyncio.timeout(delay)

An asynchronous context manager that can be used to limit the amount of time spent waiting on something.

delay can either be None, or a float/int number of seconds to wait. If delay is None, no time limit will be applied; this can be useful if the delay is unknown when the context manager is created.

In either case, the context manager can be rescheduled after creation using Timeout.reschedule().

Example:

  async def main():
      async with asyncio.timeout(10):
          await long_running_task()

If long_running_task takes more than 10 seconds to complete, the context manager will cancel the current task and handle the resulting asyncio.CancelledError internally, transforming it into an asyncio.TimeoutError which can be caught and handled.

Note

The asyncio.timeout() context manager is what transforms the asyncio.CancelledError into an asyncio.TimeoutError, which means the asyncio.TimeoutError can only be caught outside of the context manager.

Example of catching asyncio.TimeoutError:

  async def main():
      try:
          async with asyncio.timeout(10):
              await long_running_task()
      except TimeoutError:
          print("The long operation timed out, but we've handled it.")

      print("This statement will run regardless.")

The context manager produced by asyncio.timeout() can be rescheduled to a different deadline and inspected.

  • class asyncio.Timeout

    An asynchronous context manager that limits time spent inside of it.

    New in version 3.11.

    • when() → float | None

      Return the current deadline, or None if the current deadline is not set.

      The deadline is a float, consistent with the time returned by loop.time().

    • reschedule(when: float | None)

      Change the time the timeout will trigger.

      If when is None, any current deadline will be removed, and the context manager will wait indefinitely.

      If when is a float, it is set as the new deadline.

      If when is in the past, the timeout will trigger on the next iteration of the event loop.

    • expired() → bool

      Return whether the context manager has exceeded its deadline (expired).

Example:

  async def main():
      try:
          # We do not know the timeout when starting, so we pass ``None``.
          async with asyncio.timeout(None) as cm:
              # We know the timeout now, so we reschedule it.
              new_deadline = get_running_loop().time() + 10
              cm.reschedule(new_deadline)

              await long_running_task()
      except TimeoutError:
          pass

      if cm.expired():
          print("Looks like we haven't finished on time.")

Timeout context managers can be safely nested.

New in version 3.11.

coroutine asyncio.timeout_at(when)

Similar to asyncio.timeout(), except when is the absolute time to stop waiting, or None.

Example:

  async def main():
      loop = get_running_loop()
      deadline = loop.time() + 20
      try:
          async with asyncio.timeout_at(deadline):
              await long_running_task()
      except TimeoutError:
          print("The long operation timed out, but we've handled it.")

      print("This statement will run regardless.")

New in version 3.11.

coroutine asyncio.wait_for(aw, timeout)

Wait for the aw awaitable to complete with a timeout.

If aw is a coroutine it is automatically scheduled as a Task.

timeout can either be None or a float or int number of seconds to wait for. If timeout is None, block until the future completes.

If a timeout occurs, it cancels the task and raises TimeoutError.

To avoid the task cancellation, wrap it in shield().

The function will wait until the future is actually cancelled, so the total wait time may exceed the timeout. If an exception happens during cancellation, it is propagated.

If the wait is cancelled, the future aw is also cancelled.

Changed in version 3.10: Removed the loop parameter.

Example:

  async def eternity():
      # Sleep for one hour
      await asyncio.sleep(3600)
      print('yay!')

  async def main():
      # Wait for at most 1 second
      try:
          await asyncio.wait_for(eternity(), timeout=1.0)
      except TimeoutError:
          print('timeout!')

  asyncio.run(main())

  # Expected output:
  #
  #     timeout!

Changed in version 3.7: When aw is cancelled due to a timeout, wait_for waits for aw to be cancelled. Previously, it raised TimeoutError immediately.

Changed in version 3.10: Removed the loop parameter.

Waiting Primitives

coroutine asyncio.wait(aws, *, timeout=None, return_when=ALL_COMPLETED)

Run Future and Task instances in the aws iterable concurrently and block until the condition specified by return_when.

The aws iterable must not be empty.

Returns two sets of Tasks/Futures: (done, pending).

Usage:

  done, pending = await asyncio.wait(aws)

timeout (a float or int), if specified, can be used to control the maximum number of seconds to wait before returning.

Note that this function does not raise TimeoutError. Futures or Tasks that aren’t done when the timeout occurs are simply returned in the second set.

return_when indicates when this function should return. It must be one of the following constants:

  FIRST_COMPLETED
      The function will return when any future finishes or is cancelled.

  FIRST_EXCEPTION
      The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to ALL_COMPLETED.

  ALL_COMPLETED
      The function will return when all futures finish or are cancelled.

wait_for() 不同,wait() 在超时发生时不会取消可等待对象。

在 3.10 版更改: 移除了 loop 形参。

在 3.11 版更改: Passing coroutine objects to wait() directly is forbidden.
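
For illustration, a minimal sketch using return_when=FIRST_COMPLETED; the two sleeping tasks are placeholders for real work:

  import asyncio

  async def main():
      fast = asyncio.create_task(asyncio.sleep(1, result="fast"))
      slow = asyncio.create_task(asyncio.sleep(2, result="slow"))

      done, pending = await asyncio.wait(
          {fast, slow},
          return_when=asyncio.FIRST_COMPLETED,
      )
      print([t.result() for t in done])  # ['fast']

      for t in pending:
          t.cancel()  # clean up the task that is still running

  asyncio.run(main())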

asyncio.as_completed(aws, *, timeout=None)

Run awaitable objects in the aws iterable concurrently. Return an iterator of coroutines. Each coroutine returned can be awaited to get the earliest next result from the iterable of the remaining awaitables.

Raises TimeoutError if the timeout occurs before all Futures are done.

Changed in version 3.10: Removed the loop parameter.

Example:

  for coro in as_completed(aws):
      earliest_result = await coro
      # ...

Changed in version 3.10: Removed the loop parameter.

Deprecated since version 3.10: Deprecation warning is emitted if not all awaitable objects in the aws iterable are Future-like objects and there is no running event loop.

Running in Threads

coroutine asyncio.to_thread(func, /, *args, **kwargs)

Asynchronously run function func in a separate thread.

Any *args and **kwargs supplied for this function are directly passed to func. Also, the current contextvars.Context is propagated, allowing context variables from the event loop thread to be accessed in the separate thread.

Return a coroutine that can be awaited to get the eventual result of func.

This coroutine function is primarily intended to be used for executing IO-bound functions/methods that would otherwise block the event loop if they were run in the main thread. For example:

  def blocking_io():
      print(f"start blocking_io at {time.strftime('%X')}")
      # Note that time.sleep() can be replaced with any blocking
      # IO-bound operation, such as file operations.
      time.sleep(1)
      print(f"blocking_io complete at {time.strftime('%X')}")

  async def main():
      print(f"started main at {time.strftime('%X')}")

      await asyncio.gather(
          asyncio.to_thread(blocking_io),
          asyncio.sleep(1))

      print(f"finished main at {time.strftime('%X')}")

  asyncio.run(main())

  # Expected output:
  #
  # started main at 19:50:53
  # start blocking_io at 19:50:53
  # blocking_io complete at 19:50:54
  # finished main at 19:50:54

Directly calling blocking_io() in any coroutine would block the event loop for its duration, resulting in an additional 1 second of run time. Instead, by using asyncio.to_thread(), we can run it in a separate thread without blocking the event loop.

Note

Due to the GIL, asyncio.to_thread() can typically only be used to make IO-bound functions non-blocking. However, for extension modules that release the GIL or alternative Python implementations that don’t have one, asyncio.to_thread() can also be used for CPU-bound functions.

New in version 3.9.

Scheduling From Other Threads

asyncio.run_coroutine_threadsafe(coro, loop)

Submit a coroutine to the given event loop. Thread-safe.

Return a concurrent.futures.Future to wait for the result from another OS thread.

This function is meant to be called from a different OS thread than the one where the event loop is running. Example:

  # Create a coroutine
  coro = asyncio.sleep(1, result=3)

  # Submit the coroutine to a given loop
  future = asyncio.run_coroutine_threadsafe(coro, loop)

  # Wait for the result with an optional timeout argument
  assert future.result(timeout) == 3

If an exception is raised in the coroutine, the returned Future will be notified. It can also be used to cancel the task in the event loop:

  try:
      result = future.result(timeout)
  except TimeoutError:
      print('The coroutine took too long, cancelling the task...')
      future.cancel()
  except Exception as exc:
      print(f'The coroutine raised an exception: {exc!r}')
  else:
      print(f'The coroutine returned: {result!r}')

See the concurrency and multithreading section of the documentation.

Unlike other asyncio functions this function requires the loop argument to be passed explicitly.

New in version 3.5.1.

Introspection

asyncio.current_task(loop=None)

Return the currently running Task instance, or None if no task is running.

If loop is None get_running_loop() is used to get the current loop.

New in version 3.7.

asyncio.all_tasks(loop=None)

Return the set of not yet finished Task objects run by the loop.

If loop is None, get_running_loop() is used for getting current loop.

New in version 3.7.
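
A minimal sketch of both helpers in use; the task name "worker-1" and the worker() coroutine are illustrative only:

  import asyncio

  async def worker():
      # Inspect the currently running task from inside a coroutine.
      me = asyncio.current_task()
      print(f"running inside task {me.get_name()!r}")

  async def main():
      task = asyncio.create_task(worker(), name="worker-1")
      await asyncio.sleep(0)  # give the worker a chance to run
      print(f"{len(asyncio.all_tasks())} task(s) not yet finished")
      await task

  asyncio.run(main())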

Task Object

class asyncio.Task(coro, *, loop=None, name=None)

A Future-like object that runs a Python coroutine. Not thread-safe.

Tasks are used to run coroutines in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the completion of the Future. When the Future is done, the execution of the wrapped coroutine resumes.

Event loops use cooperative scheduling: an event loop runs one Task at a time. While a Task awaits for the completion of a Future, the event loop runs other Tasks, callbacks, or performs IO operations.

Use the high-level asyncio.create_task() function to create Tasks, or the low-level loop.create_task() or ensure_future() functions. Manual instantiation of Tasks is discouraged.

To cancel a running Task use the cancel() method. Calling it will cause the Task to throw a CancelledError exception into the wrapped coroutine. If a coroutine is awaiting on a Future object during cancellation, the Future object will be cancelled.

cancelled() can be used to check if the Task was cancelled. The method returns True if the wrapped coroutine did not suppress the CancelledError exception and was actually cancelled.

asyncio.Task inherits from Future all of its APIs except Future.set_result() and Future.set_exception().

Tasks support the contextvars module. When a Task is created it copies the current context and later runs its coroutine in the copied context.

Changed in version 3.7: Added support for the contextvars module.

Changed in version 3.8: Added the name parameter.

Deprecated since version 3.10: Deprecation warning is emitted if loop is not specified and there is no running event loop.

  • done()

    Return True if the Task is done.

    A Task is done when the wrapped coroutine either returned a value, raised an exception, or the Task was cancelled.

  • result()

    Return the result of the Task.

    If the Task is done, the result of the wrapped coroutine is returned (or if the coroutine raised an exception, that exception is re-raised.)

    If the Task has been cancelled, this method raises a CancelledError exception.

    If the Task's result isn't yet available, this method raises an InvalidStateError exception.

  • exception()

    Return the exception of the Task.

    If the wrapped coroutine raised an exception, that exception is returned. If the wrapped coroutine returned normally this method returns None.

    If the Task has been cancelled, this method raises a CancelledError exception.

    If the Task isn't done yet, this method raises an InvalidStateError exception.

  • add_done_callback(callback, *, context=None)

    Add a callback to be run when the Task is done.

    This method should only be used in low-level callback-based code.

    See the documentation of Future.add_done_callback() for more details.

  • remove_done_callback(callback)

    Remove callback from the callbacks list.

    This method should only be used in low-level callback-based code.

    See the documentation of Future.remove_done_callback() for more details.

  • get_stack(*, limit=None)

    Return the list of stack frames for this Task.

    If the wrapped coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames.

    The frames are always ordered from oldest to newest.

    Only one stack frame is returned for a suspended coroutine.

    The optional limit argument sets the maximum number of frames to return; by default all available frames are returned. The ordering of the returned list differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. (This matches the behavior of the traceback module.)

  • print_stack(*, limit=None, file=None)

    Print the stack or traceback for this Task.

    This produces output similar to that of the traceback module for the frames retrieved by get_stack().

    The limit argument is passed to get_stack() directly.

    The file argument is an I/O stream to which the output is written; by default output is written to sys.stderr.

  • get_coro()

    Return the coroutine object wrapped by the Task.

    New in version 3.8.

  • get_name()

    Return the name of the Task.

    If no name has been explicitly assigned to the Task, the default asyncio Task implementation generates a default name during instantiation.

    New in version 3.8.

  • set_name(value)

    Set the name of the Task.

    The value argument can be any object, which is then converted to a string.

    In the default Task implementation, the name will be visible in the repr() output of a task object.

    New in version 3.8.

  • cancel(msg=None)

    Request the Task to be cancelled.

    This arranges for a CancelledError exception to be thrown to the wrapped coroutine on the next cycle of the event loop.

    The task then has a chance to clean up or even deny the request by suppressing the exception with a try ... except CancelledError ... finally block. Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged.

    Changed in version 3.9: Added the msg parameter.

    Changed in version 3.11: The msg parameter is propagated from cancelled task to its awaiter.

    The following example illustrates how coroutines can intercept the cancellation request:

    async def cancel_me():
        print('cancel_me(): before sleep')

        try:
            # Wait for 1 hour
            await asyncio.sleep(3600)
        except asyncio.CancelledError:
            print('cancel_me(): cancel sleep')
            raise
        finally:
            print('cancel_me(): after sleep')

    async def main():
        # Create a "cancel_me" Task
        task = asyncio.create_task(cancel_me())

        # Wait for 1 second
        await asyncio.sleep(1)

        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            print("main(): cancel_me is cancelled now")

    asyncio.run(main())

    # Expected output:
    #
    #     cancel_me(): before sleep
    #     cancel_me(): cancel sleep
    #     cancel_me(): after sleep
    #     main(): cancel_me is cancelled now
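
    A minimal sketch of the msg parameter reaching the awaiter, assuming Python 3.11+ where the message is propagated; the message text is arbitrary:

    async def main():
        task = asyncio.create_task(asyncio.sleep(3600))
        await asyncio.sleep(1)

        task.cancel("operation timed out")  # pass a cancellation message
        try:
            await task
        except asyncio.CancelledError as exc:
            print(exc.args)  # ('operation timed out',)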
  • cancelled()

    Return True if the Task is cancelled.

    The Task is cancelled when the cancellation was requested with cancel() and the wrapped coroutine propagated the CancelledError exception thrown into it.

  • uncancel()

    Decrement the count of cancellation requests to this Task.

    Returns the remaining number of cancellation requests.

    Note that once execution of a cancelled task completed, further calls to uncancel() are ineffective.

    New in version 3.11.

    This method is used by asyncio’s internals and isn’t expected to be used by end-user code. In particular, if a Task gets successfully uncancelled, this allows for elements of structured concurrency like Task Groups and asyncio.timeout() to continue running, isolating cancellation to the respective structured block. For example:

    async def make_request_with_timeout():
        try:
            async with asyncio.timeout(1):
                # Structured block affected by the timeout:
                await make_request()
                await make_another_request()
        except TimeoutError:
            log("There was a timeout")
        # Outer code not affected by the timeout:
        await unrelated_code()

    While the block with make_request() and make_another_request() might get cancelled due to the timeout, unrelated_code() should continue running even in case of the timeout. This is implemented with uncancel(). TaskGroup context managers use uncancel() in a similar fashion.

  • cancelling()

    Return the number of pending cancellation requests to this Task, i.e., the number of calls to cancel() less the number of uncancel() calls.

    Note that if this number is greater than zero but the Task is still executing, cancelled() will still return False. This is because this number can be lowered by calling uncancel(), which can lead to the task not being cancelled after all if the cancellation requests go down to zero.

    This method is used by asyncio’s internals and isn’t expected to be used by end-user code. See uncancel() for more details.

    New in version 3.11.