Coroutines

New in version 2.0.

Scrapy has partial support for the coroutine syntax.

Supported callables

The following callables may be defined as coroutines using async def, and hence use coroutine syntax (e.g. await, async for, async with):

  • Request callbacks.

  • The process_item() method of item pipelines.

  • The process_request(), process_response(), and process_exception() methods of downloader middlewares.

  • Signal handlers that support deferreds.

Usage

There are several use cases for coroutines in Scrapy. Code that would return Deferreds when written for previous Scrapy versions, such as downloader middlewares and signal handlers, can be rewritten to be shorter and cleaner:

    from itemadapter import ItemAdapter

    class DbPipeline:
        def _update_item(self, data, item):
            adapter = ItemAdapter(item)
            adapter['field'] = data
            return item

        def process_item(self, item, spider):
            adapter = ItemAdapter(item)
            dfd = db.get_some_data(adapter['id'])
            dfd.addCallback(self._update_item, item)
            return dfd

becomes:

    from itemadapter import ItemAdapter

    class DbPipeline:
        async def process_item(self, item, spider):
            adapter = ItemAdapter(item)
            adapter['field'] = await db.get_some_data(adapter['id'])
            return item
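To make the pattern above concrete, here is a runnable, stdlib-only sketch of the same pipeline shape. FakeDb is a hypothetical stand-in for an asynchronous database client (it is not a Scrapy or real database API), and a plain dict stands in for the item:

    import asyncio

    class FakeDb:
        """Hypothetical stand-in for an asynchronous database client."""
        async def get_some_data(self, key):
            await asyncio.sleep(0)  # simulate asynchronous I/O
            return f"data-for-{key}"

    class DbPipeline:
        db = FakeDb()

        async def process_item(self, item, spider):
            # Await the asynchronous lookup directly, as in the example above.
            item['field'] = await self.db.get_some_data(item['id'])
            return item

    async def main():
        pipeline = DbPipeline()
        return await pipeline.process_item({'id': 7}, spider=None)

    result = asyncio.run(main())
    print(result['field'])  # data-for-7

Because process_item is a coroutine, the Deferred plumbing (addCallback and a separate _update_item helper) disappears; the lookup and the field assignment read as one sequential step.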

Coroutines may be used to call asynchronous code. This includes other coroutines, functions that return Deferreds, and functions that return awaitable objects such as asyncio.Future. This means you can use many useful Python libraries providing such code:

    import aiohttp
    import treq
    from scrapy import Spider

    class MySpiderDeferred(Spider):
        # ...
        async def parse(self, response):
            additional_response = await treq.get('https://additional.url')
            additional_data = await treq.content(additional_response)
            # ... use response and additional_data to yield items and requests

    class MySpiderAsyncio(Spider):
        # ...
        async def parse(self, response):
            async with aiohttp.ClientSession() as session:
                async with session.get('https://additional.url') as additional_response:
                    additional_data = await additional_response.text()
            # ... use response and additional_data to yield items and requests
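Awaiting a plain asyncio.Future works the same way as awaiting a coroutine. A minimal stdlib-only sketch (no Scrapy objects involved; produce is a hypothetical task that resolves the future):

    import asyncio

    async def produce(fut):
        # Resolve the future after yielding control back to the loop once.
        await asyncio.sleep(0)
        fut.set_result(42)

    async def main():
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        asyncio.ensure_future(produce(fut))
        # Awaiting a Future suspends this coroutine until it is resolved.
        return await fut

    print(asyncio.run(main()))  # 42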

Note

Many libraries that use coroutines, such as aio-libs, require the asyncio event loop; to use them you need to enable asyncio support in Scrapy.
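Enabling asyncio support is done through the TWISTED_REACTOR setting, which tells Scrapy to install Twisted's asyncio-based reactor (a settings fragment, shown here as a sketch):

    # settings.py
    # Install the asyncio reactor so that asyncio-based libraries can be used.
    TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

This must be set before the reactor is installed, e.g. in the project settings rather than at runtime.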

Note

If you want to await on Deferreds while using the asyncio reactor, you need to wrap them.
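One way to wrap a Deferred, assuming Twisted is installed, is Twisted's own Deferred.asFuture(), which converts it into an asyncio Future that can be awaited. A minimal sketch, with the Deferred fired manually here for illustration (in real code Scrapy or Twisted fires it for you):

    import asyncio
    from twisted.internet.defer import Deferred

    async def main():
        loop = asyncio.get_running_loop()
        dfd = Deferred()
        future = dfd.asFuture(loop)   # wrap the Deferred into an asyncio Future
        dfd.callback('some value')    # normally fired by Scrapy/Twisted
        return await future

    print(asyncio.run(main()))  # some value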

Common use cases for asynchronous code include:

  • requesting data from websites, databases and other services (in callbacks, pipelines and middlewares);

  • storing data in databases (in pipelines and middlewares);

  • delaying the spider initialization until some external event (in the spider_opened handler);

  • calling asynchronous Scrapy methods like ExecutionEngine.download (see the screenshot pipeline example).
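The "delaying spider initialization until some external event" use case can be sketched with stdlib asyncio alone. SpiderLike and external_setup below are hypothetical illustrations, not Scrapy APIs; the point is that a spider_opened-style handler can simply await an event:

    import asyncio

    class SpiderLike:
        """Stdlib-only sketch: waits for an external event before proceeding."""
        def __init__(self):
            self.ready = asyncio.Event()

        async def spider_opened(self):
            await self.ready.wait()   # block until external setup completes
            return "initialized"

    async def external_setup(spider):
        await asyncio.sleep(0)        # simulate external work
        spider.ready.set()

    async def main():
        spider = SpiderLike()
        asyncio.ensure_future(external_setup(spider))
        return await spider.spider_opened()

    print(asyncio.run(main()))  # initialized

In Scrapy, the same shape applies to a coroutine connected to the spider_opened signal: crawling proceeds only once the awaited setup has finished.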