Downloader Middleware

The downloader middleware is a framework of hooks into Scrapy’s request/response processing. It’s a light, low-level system for globally altering Scrapy’s requests and responses.

Activating a downloader middleware

To activate a downloader middleware component, add it to the DOWNLOADER_MIDDLEWARES setting, which is a dict whose keys are the middleware class paths and their values are the middleware orders.

Here’s an example:

    DOWNLOADER_MIDDLEWARES = {
        'myproject.middlewares.CustomDownloaderMiddleware': 543,
    }

The DOWNLOADER_MIDDLEWARES setting is merged with the DOWNLOADER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant to be overridden) and then sorted by order to get the final sorted list of enabled middlewares: the first middleware is the one closer to the engine and the last is the one closer to the downloader. In other words, the process_request() method of each middleware will be invoked in increasing middleware order (100, 200, 300, …) and the process_response() method of each middleware will be invoked in decreasing order.

To decide which order to assign to your middleware see the DOWNLOADER_MIDDLEWARES_BASE setting and pick a value according to where you want to insert the middleware. The order does matter because each middleware performs a different action and your middleware could depend on some previous (or subsequent) middleware being applied.

If you want to disable a built-in middleware (the ones defined in DOWNLOADER_MIDDLEWARES_BASE and enabled by default) you must define it in your project’s DOWNLOADER_MIDDLEWARES setting and assign None as its value. For example, if you want to disable the user-agent middleware:

    DOWNLOADER_MIDDLEWARES = {
        'myproject.middlewares.CustomDownloaderMiddleware': 543,
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    }

Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware documentation for more info.

Writing your own downloader middleware

Each downloader middleware is a Python class that defines one or more of the methods described below. A minimal sketch that combines them follows the method reference.

The main entry point is the from_crawler class method, which receives a Crawler instance. The Crawler object gives you access, for example, to the settings.

class scrapy.downloadermiddlewares.DownloaderMiddleware

Note

Any of the downloader middleware methods may also return a deferred.

  • process_request(request, spider)

    This method is called for each request that goes through the downloader middleware.

    process_request() should either: return None, return a Response object, return a Request object, or raise IgnoreRequest.

    If it returns None, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called and the request performed (and its response downloaded).

    If it returns a Response object, Scrapy won’t bother calling any other process_request() or process_exception() methods, or the appropriate download function; it’ll return that response. The process_response() methods of installed middleware are always called on every response.

    If it returns a Request object, Scrapy will stop calling process_request methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.

    If it raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).

    • Parameters

      • request (Request object) – the request being processed

      • spider (Spider object) – the spider for which this request is intended

  • process_response(request, response, spider)

    process_response() should either: return a Response object, return a Request object or raise an IgnoreRequest exception.

    If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.

    If it returns a Request object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as if a request is returned from process_request().

    If it raises an IgnoreRequest exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).

    • Parameters

      • request (Request object) – the request that originated the response

      • response (Response object) – the response being processed

      • spider (Spider object) – the spider for which this response is intended

  • process_exception(request, exception, spider)

    Scrapy calls process_exception() when a download handler or a process_request() (from a downloader middleware) raises an exception (including an IgnoreRequest exception).

    process_exception() should return: either None, a Response object, or a Request object.

    If it returns None, Scrapy will continue processing this exception, executing any other process_exception() methods of installed middleware, until no middleware is left and the default exception handling kicks in.

    If it returns a Response object, the process_response() method chain of installed middleware is started, and Scrapy won’t bother calling any other process_exception() methods of middleware.

    If it returns a Request object, the returned request is rescheduled to be downloaded in the future. This stops the execution of process_exception() methods of the middleware the same as returning a response would.

    • Parameters

      • request (Request object) – the request that generated the exception

      • exception (an Exception object) – the raised exception

      • spider (Spider object) – the spider for which this request is intended

  • from_crawler(cls, crawler)

    If present, this classmethod is called to create a middleware instance from a Crawler. It must return a new instance of the middleware. The Crawler object provides access to all Scrapy core components like settings and signals; it is a way for middleware to access them and hook its functionality into Scrapy.

    • Parameters

      crawler (Crawler object) – crawler that uses this middleware
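
To illustrate the methods above, here is a minimal sketch of a custom downloader middleware. The class name, the BLOCKED_DOMAINS setting and the blocking logic are assumptions made for this example only, not part of Scrapy’s API:

    import logging

    from scrapy.exceptions import IgnoreRequest

    logger = logging.getLogger(__name__)


    class CustomDownloaderMiddleware:

        def __init__(self, blocked_domains):
            self.blocked_domains = blocked_domains

        @classmethod
        def from_crawler(cls, crawler):
            # Read configuration from the crawler settings; BLOCKED_DOMAINS is
            # a hypothetical setting used only in this sketch.
            return cls(crawler.settings.getlist('BLOCKED_DOMAINS'))

        def process_request(self, request, spider):
            # Raising IgnoreRequest hands the request to process_exception()
            # methods; returning None lets it continue through the chain.
            if any(domain in request.url for domain in self.blocked_domains):
                raise IgnoreRequest(f'Blocked domain in {request.url}')
            return None

        def process_response(self, request, response, spider):
            # Must return a Response, return a Request, or raise IgnoreRequest.
            logger.debug('Got status %s for %s', response.status, request.url)
            return response

        def process_exception(self, request, exception, spider):
            # Returning None lets other middlewares (and the default handling)
            # deal with the exception.
            logger.debug('Exception %r for %s', exception, request.url)
            return None

Enable it through the DOWNLOADER_MIDDLEWARES setting, as shown in the activation example at the top of this page.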

Built-in downloader middleware reference

This page describes all downloader middleware components that come with Scrapy. For information on how to use them and how to write your own downloader middleware, see the downloader middleware usage guide.

For a list of the components enabled by default (and their orders) see the DOWNLOADER_MIDDLEWARES_BASE setting.

CookiesMiddleware

class scrapy.downloadermiddlewares.cookies.CookiesMiddleware

This middleware enables working with sites that require cookies, such as those that use sessions. It keeps track of cookies sent by web servers, and sends them back on subsequent requests (from that spider), just like web browsers do.

The following settings can be used to configure the cookie middleware:

Multiple cookie sessions per spider

New in version 0.15.

There is support for keeping multiple cookie sessions per spider by using the cookiejar Request meta key. By default it uses a single cookie jar (session), but you can pass an identifier to use different ones.

For example:

    for i, url in enumerate(urls):
        yield scrapy.Request(url, meta={'cookiejar': i},
                             callback=self.parse_page)

Keep in mind that the cookiejar meta key is not “sticky”. You need to keep passing it along on subsequent requests. For example:

    def parse_page(self, response):
        # do some processing
        return scrapy.Request("http://www.example.com/otherpage",
                              meta={'cookiejar': response.meta['cookiejar']},
                              callback=self.parse_other_page)

COOKIES_ENABLED

Default: True

Whether to enable the cookies middleware. If disabled, no cookies will be sent to web servers.

Notice that, regardless of the value of the COOKIES_ENABLED setting, if Request.meta['dont_merge_cookies'] evaluates to True the request cookies will not be sent to the web server and the cookies received in the Response will not be merged with the existing cookies.

For more detailed information see the cookies parameter in Request.
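
For example, the following sketch keeps a single request from using the cookie jar at all; the URL and the parse_page callback are illustrative, and it assumes import scrapy inside a spider:

    yield scrapy.Request(
        "http://www.example.com/",
        meta={'dont_merge_cookies': True},
        callback=self.parse_page,
    )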

COOKIES_DEBUG

Default: False

If enabled, Scrapy will log all cookies sent in requests (i.e. Cookie header) and all cookies received in responses (i.e. Set-Cookie header).

Here’s an example of a log with COOKIES_DEBUG enabled:

    2011-04-06 14:35:10-0300 [scrapy.core.engine] INFO: Spider opened
    2011-04-06 14:35:10-0300 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <GET http://www.diningcity.com/netherlands/index.html>
            Cookie: clientlanguage_nl=en_EN
    2011-04-06 14:35:14-0300 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 http://www.diningcity.com/netherlands/index.html>
            Set-Cookie: JSESSIONID=B~FA4DC0C496C8762AE4F1A620EAB34F38; Path=/
            Set-Cookie: ip_isocode=US
            Set-Cookie: clientlanguage_nl=en_EN; Expires=Thu, 07-Apr-2011 21:21:34 GMT; Path=/
    2011-04-06 14:49:50-0300 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.diningcity.com/netherlands/index.html> (referer: None)
    [...]

DefaultHeadersMiddleware

class scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware

This middleware sets all default requests headers specified in the DEFAULT_REQUEST_HEADERS setting.

DownloadTimeoutMiddleware

class scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware

This middleware sets the download timeout for requests specified in the DOWNLOAD_TIMEOUT setting or download_timeout spider attribute.

Note

You can also set download timeout per-request using download_timeout Request.meta key; this is supported even when DownloadTimeoutMiddleware is disabled.
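
For example, the following sketch (URL and callback are illustrative) gives a single request a 30-second timeout from inside a spider callback:

    yield scrapy.Request("http://www.example.com/slow-page",
                         meta={'download_timeout': 30},
                         callback=self.parse_page)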

HttpAuthMiddleware

class scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware

This middleware authenticates all requests generated from certain spiders using Basic access authentication (aka. HTTP auth).

To enable HTTP authentication from certain spiders, set the http_user and http_pass attributes of those spiders.

Example:

    from scrapy.spiders import CrawlSpider

    class SomeIntranetSiteSpider(CrawlSpider):

        http_user = 'someuser'
        http_pass = 'somepass'
        name = 'intranet.example.com'

        # .. rest of the spider code omitted ...

HttpCacheMiddleware

class scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware

This middleware provides a low-level cache for all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.

Scrapy ships with three HTTP cache storage backends:

You can change the HTTP cache storage backend with the HTTPCACHE_STORAGE setting, or implement your own storage backend.

Scrapy ships with two HTTP cache policies:

You can change the HTTP cache policy with the HTTPCACHE_POLICY setting, or implement your own policy.

You can also avoid caching a response with any policy by setting the dont_cache meta key to True.
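
For example, the following sketch (URL and callback are illustrative) keeps one response out of the cache regardless of the active policy:

    yield scrapy.Request("http://www.example.com/volatile-page",
                         meta={'dont_cache': True},
                         callback=self.parse_page)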

Dummy policy (default)

class scrapy.extensions.httpcache.DummyPolicy

This policy has no awareness of any HTTP Cache-Control directives. Every request and its corresponding response are cached. When the same request is seen again, the response is returned without transferring anything from the Internet.

The Dummy policy is useful for testing spiders faster (without having to wait for downloads every time) and for trying your spider offline, when an Internet connection is not available. The goal is to be able to “replay” a spider run exactly as it ran before.

RFC2616 policy

class scrapy.extensions.httpcache.RFC2616Policy

This policy provides an RFC2616-compliant HTTP cache, i.e. with HTTP Cache-Control awareness, aimed at production and used in continuous runs to avoid downloading unmodified data (to save bandwidth and speed up crawls).

What is implemented:

  • Do not attempt to store responses/requests with no-store cache-control directive set

  • Do not serve responses from cache if no-cache cache-control directive is set even for fresh responses

  • Compute freshness lifetime from max-age cache-control directive

  • Compute freshness lifetime from Expires response header

  • Compute freshness lifetime from Last-Modified response header (heuristic used by Firefox)

  • Compute current age from Age response header

  • Compute current age from Date header

  • Revalidate stale responses based on Last-Modified response header

  • Revalidate stale responses based on ETag response header

  • Set Date header for any received response missing it

  • Support max-stale cache-control directive in requests

This allows spiders to be configured with the full RFC2616 cache policy, but avoid revalidation on a request-by-request basis, while remaining conformant with the HTTP spec.

Example:

Add Cache-Control: max-stale=600 to Request headers to accept responses that have exceeded their expiration time by no more than 600 seconds.

See also: RFC2616, 14.9.3
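
A minimal sketch of such a request from a spider callback (URL and callback are illustrative):

    yield scrapy.Request("http://www.example.com/",
                         headers={'Cache-Control': 'max-stale=600'},
                         callback=self.parse_page)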

What is missing:

Filesystem storage backend (default)

class scrapy.extensions.httpcache.FilesystemCacheStorage

A file system storage backend is available for the HTTP cache middleware.

Each request/response pair is stored in a different directory containing the following files:

  • request_body - the plain request body

  • request_headers - the request headers (in raw HTTP format)

  • response_body - the plain response body

  • response_headers - the response headers (in raw HTTP format)

  • meta - some metadata of this cache resource in Python repr() format (grep-friendly format)

  • pickled_meta - the same metadata in meta but pickled for more efficient deserialization

The directory name is made from the request fingerprint (see scrapy.utils.request.fingerprint), and one level of subdirectories is used to avoid creating too many files into the same directory (which is inefficient in many file systems). An example directory could be:

    /path/to/cache/dir/example.com/72/72811f648e718090f041317756c03adb0ada46c7

DBM storage backend

class scrapy.extensions.httpcache.DbmCacheStorage

New in version 0.13.

A DBM storage backend is also available for the HTTP cache middleware.

By default, it uses the dbm module, but you can change it with the HTTPCACHE_DBM_MODULE setting.

Writing your own storage backend

You can implement a cache storage backend by creating a Python class that defines the methods described below.

class scrapy.extensions.httpcache.CacheStorage

  • open_spider(spider)

    This method gets called after a spider has been opened for crawling. It handles the open_spider signal.

    • Parameters

      spider (Spider object) – the spider which has been opened

  • close_spider(spider)

    This method gets called after a spider has been closed. It handles the close_spider signal.

    • Parameters

      spider (Spider object) – the spider which has been closed

  • retrieve_response(spider, request)

    Return response if present in cache, or None otherwise.

    • Parameters

      • spider (Spider object) – the spider which generated the request

      • request (Request object) – the request to find cached response for

  • store_response(spider, request, response)

    Store the given response in the cache.

    • Parameters

      • spider (Spider object) – the spider for which the response is intended

      • request (Request object) – the corresponding request the spider generated

      • response (Response object) – the response to store in the cache

In order to use your storage backend, set HTTPCACHE_STORAGE to the Python import path of your custom storage class.
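
The following sketch shows the shape of such a backend. The in-memory dict is an illustrative stand-in for a real store, and the constructor assumes the backend is instantiated with the project settings, as the built-in backends are:

    from scrapy.utils.request import fingerprint


    class InMemoryCacheStorage:

        def __init__(self, settings):
            self.cache = {}

        def open_spider(self, spider):
            pass

        def close_spider(self, spider):
            pass

        def retrieve_response(self, spider, request):
            # Return the cached Response, or None on a cache miss.
            return self.cache.get(fingerprint(request))

        def store_response(self, spider, request, response):
            self.cache[fingerprint(request)] = response

With a class like this saved in, say, myproject/cache.py (a hypothetical path), you would set HTTPCACHE_STORAGE = 'myproject.cache.InMemoryCacheStorage'.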

HTTPCache middleware settings

The HttpCacheMiddleware can be configured through the following settings:

HTTPCACHE_ENABLED

New in version 0.11.

Default: False

Whether the HTTP cache will be enabled.

Changed in version 0.11: Before 0.11, HTTPCACHE_DIR was used to enable cache.

HTTPCACHE_EXPIRATION_SECS

Default: 0

Expiration time for cached requests, in seconds.

Cached requests older than this time will be re-downloaded. If zero, cached requests will never expire.

Changed in version 0.11: Before 0.11, zero meant cached requests always expire.

HTTPCACHE_DIR

Default: 'httpcache'

The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP cache will be disabled. If a relative path is given, it is taken relative to the project data dir. For more info see: Default structure of Scrapy projects.

HTTPCACHE_IGNORE_HTTP_CODES

New in version 0.10.

Default: []

Don’t cache responses with these HTTP codes.

HTTPCACHE_IGNORE_MISSING

Default: False

If enabled, requests not found in the cache will be ignored instead of downloaded.

HTTPCACHE_IGNORE_SCHEMES

New in version 0.10.

Default: ['file']

Don’t cache responses with these URI schemes.

HTTPCACHE_STORAGE

Default: 'scrapy.extensions.httpcache.FilesystemCacheStorage'

The class which implements the cache storage backend.

HTTPCACHE_DBM_MODULE

New in version 0.13.

Default: 'dbm'

The database module to use in the DBM storage backend. This setting is specific to the DBM backend.

HTTPCACHE_POLICY

New in version 0.18.

Default: 'scrapy.extensions.httpcache.DummyPolicy'

The class which implements the cache policy.

HTTPCACHE_GZIP

New in version 1.0.

Default: False

If enabled, will compress all cached data with gzip. This setting is specific to the Filesystem backend.

HTTPCACHE_ALWAYS_STORE

New in version 1.1.

Default: False

If enabled, will cache pages unconditionally.

A spider may wish to have all responses available in the cache, for future use with Cache-Control: max-stale, for instance. The DummyPolicy caches all responses but never revalidates them, and sometimes a more nuanced policy is desirable.

This setting still respects Cache-Control: no-store directives in responses. If you don’t want that, filter no-store out of the Cache-Control headers in responses you feed to the cache middleware.
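
One hedged way to do that is a small downloader middleware that rewrites the header before the cache middleware sees the response. The class name and the order are assumptions; the order must be higher than HttpCacheMiddleware’s (900 by default) so that this process_response() runs first, and for simplicity only the last Cache-Control header value is considered:

    class StripNoStoreMiddleware:

        def process_response(self, request, response, spider):
            cache_control = response.headers.get('Cache-Control')
            if cache_control:
                directives = [d.strip() for d in cache_control.split(b',')
                              if d.strip().lower() != b'no-store']
                if directives:
                    response.headers['Cache-Control'] = b', '.join(directives)
                else:
                    del response.headers['Cache-Control']
            return response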

HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS

New in version 1.1.

Default: []

List of Cache-Control directives in responses to be ignored.

Sites often set “no-store”, “no-cache”, “must-revalidate”, etc., but get upset at the traffic a spider can generate if it actually respects those directives. This setting allows you to selectively ignore Cache-Control directives that are known to be unimportant for the sites being crawled.

We assume that the spider will not issue Cache-Control directives in requests unless it actually needs them, so directives in requests are not filtered.
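
For example, a settings.py entry like the following (the chosen directives are illustrative) lets responses be cached even when sites mark them no-cache or no-store:

    HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = ['no-cache', 'no-store']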

HttpCompressionMiddleware

class scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware

This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.

This middleware also supports decoding brotli-compressed responses, provided brotlipy is installed.

HttpCompressionMiddleware Settings

COMPRESSION_ENABLED

Default: True

Whether the Compression middleware will be enabled.

HttpProxyMiddleware

New in version 0.8.

class scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware

This middleware sets the HTTP proxy to use for requests, by setting the proxy meta value for Request objects.

Like the Python standard library module urllib.request, it obeys the following environment variables:

  • http_proxy

  • https_proxy

  • no_proxy

You can also set the meta key proxy per-request, to a value like http://some_proxy_server:port or http://username:password@some_proxy_server:port. Keep in mind that this value takes precedence over the http_proxy/https_proxy environment variables, and that it also ignores the no_proxy environment variable.
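
A minimal sketch of a per-request proxy from a spider callback (proxy address, credentials, URL and callback are illustrative):

    yield scrapy.Request("http://www.example.com/",
                         meta={'proxy': 'http://username:password@some_proxy_server:8080'},
                         callback=self.parse_page)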

RedirectMiddleware

class scrapy.downloadermiddlewares.redirect.RedirectMiddleware

This middleware handles redirection of requests based on response status.

The urls which the request goes through (while being redirected) can be found in the redirect_urls Request.meta key.

The reason behind each redirect in redirect_urls can be found in the redirect_reasons Request.meta key. For example: [301, 302, 307, 'meta refresh'].

The format of a reason depends on the middleware that handled the corresponding redirect. For example, RedirectMiddleware indicates the triggering response status code as an integer, while MetaRefreshMiddleware always uses the 'meta refresh' string as reason.

The RedirectMiddleware can be configured through the following settings (see the settings documentation for more info):

If Request.meta has dont_redirect key set to True, the request will be ignored by this middleware.

If you want to handle some redirect status codes in your spider, you can specify these in the handle_httpstatus_list spider attribute.

For example, if you want the redirect middleware to ignore 301 and 302 responses (and pass them through to your spider) you can do this:

    class MySpider(CrawlSpider):
        handle_httpstatus_list = [301, 302]

The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to allow on a per-request basis. You can also set the meta key handle_httpstatus_all to True if you want to allow any response code for a request.
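
A minimal sketch of the per-request form (URL and callback are illustrative):

    yield scrapy.Request("http://www.example.com/maybe-redirects",
                         meta={'handle_httpstatus_list': [301, 302]},
                         callback=self.parse_page)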

RedirectMiddleware settings

REDIRECT_ENABLED

New in version 0.13.

Default: True

Whether the Redirect middleware will be enabled.

REDIRECT_MAX_TIMES

Default: 20

The maximum number of redirections that will be followed for a single request. After this maximum, the request’s response is returned as is.

MetaRefreshMiddleware

class scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware

This middleware handles redirection of requests based on the meta-refresh html tag.

The MetaRefreshMiddleware can be configured through the following settings (see the settings documentation for more info):

This middleware obeys the REDIRECT_MAX_TIMES setting and the dont_redirect, redirect_urls and redirect_reasons request meta keys, as described for RedirectMiddleware.

MetaRefreshMiddleware settings

METAREFRESH_ENABLED

New in version 0.17.

Default: True

Whether the Meta Refresh middleware will be enabled.

METAREFRESH_IGNORE_TAGS

Default: []

Meta tags within these tags are ignored.

Changed in version 2.0: The default value of METAREFRESH_IGNORE_TAGS changed from ['script', 'noscript'] to [].

METAREFRESH_MAXDELAY

Default: 100

The maximum meta-refresh delay (in seconds) to follow the redirection. Some sites use meta-refresh for redirecting to a session expired page, so we restrict automatic redirection to the maximum delay.

RetryMiddleware

class scrapy.downloadermiddlewares.retry.RetryMiddleware

A middleware to retry failed requests that are potentially caused by temporary problems such as a connection timeout or HTTP 500 error.

Failed pages are collected during the scraping process and rescheduled at the end, once the spider has finished crawling all regular (non-failed) pages.

The RetryMiddleware can be configured through the following settings (see the settings documentation for more info):

If Request.meta has dont_retry key set to True, the request will be ignored by this middleware.

RetryMiddleware Settings

RETRY_ENABLED

New in version 0.13.

Default: True

Whether the Retry middleware will be enabled.

RETRY_TIMES

Default: 2

Maximum number of times to retry, in addition to the first download.

The maximum number of retries can also be specified per-request using the max_retry_times key of Request.meta. When set, the max_retry_times meta key takes precedence over the RETRY_TIMES setting.
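
A minimal sketch of the per-request form (URL and callback are illustrative):

    yield scrapy.Request("http://www.example.com/flaky-page",
                         meta={'max_retry_times': 5},
                         callback=self.parse_page)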

RETRY_HTTP_CODES

Default: [500, 502, 503, 504, 522, 524, 408, 429]

Which HTTP response codes to retry. Other errors (DNS lookup issues, connections lost, etc) are always retried.

In some cases you may want to add 400 to RETRY_HTTP_CODES because it is a common code used to indicate server overload. It is not included by default because HTTP specs say so.

RobotsTxtMiddleware

class scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware

This middleware filters out requests forbidden by the robots.txt exclusion standard.

To make sure Scrapy respects robots.txt, ensure the middleware is enabled and the ROBOTSTXT_OBEY setting is enabled.

The ROBOTSTXT_USER_AGENT setting can be used to specify the user agent string to use for matching in the robots.txt file. If it is None, the User-Agent header you are sending with the request or the USER_AGENT setting (in that order) will be used for determining the user agent to use in the robots.txt file.

This middleware has to be combined with a robots.txt parser.

Scrapy ships with support for the following robots.txt parsers:

You can change the robots.txt parser with the ROBOTSTXT_PARSER setting, or implement support for a new parser.

If Request.meta has dont_obey_robotstxt key set to True the request will be ignored by this middleware even if ROBOTSTXT_OBEY is enabled.

Parsers vary in several aspects:

  • Language of implementation

  • Supported specification

  • Support for wildcard matching

  • Usage of length based rule: in particular for Allow and Disallow directives, where the most specific rule based on the length of the path trumps the less specific (shorter) rule

A performance comparison of the different parsers is available at the following link.

Protego parser

Based on Protego:

Scrapy uses this parser by default.

RobotFileParser

Based on RobotFileParser:

It is faster than Protego and backward-compatible with versions of Scrapy before 1.8.0.

In order to use this parser, set the ROBOTSTXT_PARSER setting to scrapy.robotstxt.PythonRobotParser.

Reppy parser

Based on Reppy:

Native implementation, provides better speed than Protego.

In order to use this parser:

  • Install Reppy by running pip install reppy

  • Set ROBOTSTXT_PARSER setting to scrapy.robotstxt.ReppyRobotParser

Robotexclusionrulesparser

Based on Robotexclusionrulesparser:

In order to use this parser:

  • Install Robotexclusionrulesparser by running pip install robotexclusionrulesparser

  • Set the ROBOTSTXT_PARSER setting to scrapy.robotstxt.RerpRobotParser

Implementing support for a new parser

You can implement support for a new robots.txt parser by subclassing the abstract base class RobotParser and implementing the methods described below.

class scrapy.robotstxt.RobotParser

  • abstract allowed(url, user_agent)

    Return True if user_agent is allowed to crawl url, otherwise return False.

    • Parameters

      • url (string) – Absolute URL

      • user_agent (string) – User agent

  • abstract classmethod from_crawler(crawler, robotstxt_body)

    Parse the content of a robots.txt file as bytes. This must be a class method. It must return a new instance of the parser backend.

    • Parameters

      • crawler (Crawler instance) – crawler which made the request

      • robotstxt_body (bytes) – content of a robots.txt file.
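
The following sketch implements the interface on top of the standard library’s RobotFileParser; Scrapy already ships an equivalent backend, so this is only meant to illustrate the shape of a custom parser:

    from urllib.robotparser import RobotFileParser

    from scrapy.robotstxt import RobotParser
    from scrapy.utils.python import to_unicode


    class MyRobotParser(RobotParser):

        def __init__(self, robotstxt_body):
            # Decode the raw robots.txt bytes and feed them to the stdlib parser.
            self.rp = RobotFileParser()
            self.rp.parse(to_unicode(robotstxt_body, errors='ignore').splitlines())

        @classmethod
        def from_crawler(cls, crawler, robotstxt_body):
            return cls(robotstxt_body)

        def allowed(self, url, user_agent):
            return self.rp.can_fetch(to_unicode(user_agent), to_unicode(url))

To use it, point the ROBOTSTXT_PARSER setting at the class, e.g. 'myproject.robotsparsers.MyRobotParser' (a hypothetical module path).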

DownloaderStats

class scrapy.downloadermiddlewares.stats.DownloaderStats

Middleware that stores stats of all requests, responses and exceptions that pass through it.

To use this middleware you must enable the DOWNLOADER_STATS setting.

UserAgentMiddleware

class scrapy.downloadermiddlewares.useragent.UserAgentMiddleware

Middleware that allows spiders to override the default user agent.

In order for a spider to override the default user agent, its user_agent attribute must be set.
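
A minimal sketch (spider name and user agent string are illustrative):

    import scrapy


    class ExampleSpider(scrapy.Spider):
        name = 'example'
        user_agent = 'MyCustomBot/1.0 (+http://www.example.com/bot)'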

AjaxCrawlMiddleware

class scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware

Middleware that finds ‘AJAX crawlable’ page variants based on meta-fragment html tag. See https://developers.google.com/search/docs/ajax-crawling/docs/getting-started for more info.

Note

Scrapy finds ‘AJAX crawlable’ pages for URLs like 'http://example.com/!#foo=bar' even without this middleware. AjaxCrawlMiddleware is necessary when a URL doesn’t contain '!#'. This is often the case for ‘index’ or ‘main’ website pages.

AjaxCrawlMiddleware Settings

AJAXCRAWL_ENABLED

New in version 0.21.

Default: False

Whether the AjaxCrawlMiddleware will be enabled. You may want to enable it for broad crawls.

HttpProxyMiddleware settings

HTTPPROXY_ENABLED

Default: True

Whether or not to enable the HttpProxyMiddleware.

HTTPPROXY_AUTH_ENCODING

Default: "latin-1"

The default encoding for proxy authentication on HttpProxyMiddleware.