Downloader Middleware

The downloader middleware is a framework of hooks into Scrapy's request/response processing. It's a light, low-level system for globally altering Scrapy's requests and responses.

Activating a downloader middleware

To activate a downloader middleware component, add it to the DOWNLOADER_MIDDLEWARES setting, which is a dict whose keys are the middleware class paths and their values are the middleware orders.

Here’s an example:

    DOWNLOADER_MIDDLEWARES = {
        'myproject.middlewares.CustomDownloaderMiddleware': 543,
    }

The DOWNLOADER_MIDDLEWARES setting is merged with the DOWNLOADER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant to be overridden) and then sorted by order to get the final sorted list of enabled middlewares: the first middleware is the one closer to the engine and the last is the one closer to the downloader. In other words, the process_request() method of each middleware will be invoked in increasing middleware order (100, 200, 300, …) and the process_response() method of each middleware will be invoked in decreasing order.

To decide which order to assign to your middleware see the DOWNLOADER_MIDDLEWARES_BASE setting and pick a value according to where you want to insert the middleware. The order does matter because each middleware performs a different action and your middleware could depend on some previous (or subsequent) middleware being applied.

If you want to disable a built-in middleware (the ones defined in DOWNLOADER_MIDDLEWARES_BASE and enabled by default) you must define it in your project's DOWNLOADER_MIDDLEWARES setting and assign None as its value. For example, if you want to disable the user-agent middleware:

    DOWNLOADER_MIDDLEWARES = {
        'myproject.middlewares.CustomDownloaderMiddleware': 543,
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    }

Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware documentation for more info.

Writing your own downloader middleware

Each downloader middleware is a Python class that defines one or more of the methods defined below.

The main entry point is the from_crawler class method, which receives a Crawler instance. The Crawler object gives you access, for example, to the settings.

  • class scrapy.downloadermiddlewares.DownloaderMiddleware


Any of the downloader middleware methods may also return a deferred.

  • process_request(request, spider)
  • This method is called for each request that goes through the download middleware.

process_request() should either: return None, return a Response object, return a Request object, or raise IgnoreRequest.

If it returns None, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called and the request is performed (and its response downloaded).

If it returns a Response object, Scrapy won't bother calling any other process_request() or process_exception() methods, or the appropriate download function; it'll return that response. The process_response() methods of installed middleware are always called on every response.

If it returns a Request object, Scrapy will stop calling process_request methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.

If it raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).


Parameters:

  • request (Request object) – the request being processed
  • spider (Spider object) – the spider for which this request is intended

  • process_response(request, response, spider)
  • process_response() should either: return a Response object, return a Request object or raise an IgnoreRequest exception.

If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.

If it returns a Request object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as if a request is returned from process_request().

If it raises an IgnoreRequest exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).


Parameters:

  • request (Request object) – the request that originated the response
  • response (Response object) – the response being processed
  • spider (Spider object) – the spider for which this response is intended

  • process_exception(request, exception, spider)
  • Scrapy calls process_exception() when a download handler or a process_request() (from a downloader middleware) raises an exception (including an IgnoreRequest exception).

process_exception() should return: either None, a Response object, or a Request object.

If it returns None, Scrapy will continue processing this exception, executing any other process_exception() methods of installed middleware, until no middleware is left and the default exception handling kicks in.

If it returns a Response object, the process_response() method chain of installed middleware is started, and Scrapy won't bother calling any other process_exception() methods of middleware.

If it returns a Request object, the returned request is rescheduled to be downloaded in the future. This stops the execution of process_exception() methods of the middleware the same as returning a response would.


Parameters:

  • request (Request object) – the request that generated the exception
  • exception (an Exception object) – the raised exception
  • spider (Spider object) – the spider for which this request is intended
  • from_crawler(cls, crawler)
  • If present, this classmethod is called to create a middleware instance from a Crawler. It must return a new instance of the middleware. The Crawler object provides access to all Scrapy core components like settings and signals; it is a way for middleware to access them and hook its functionality into Scrapy.

Parameters: crawler (Crawler object) – crawler that uses this middleware
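
To make these hooks concrete, here is a minimal sketch of a custom downloader middleware. The class name and the logged behavior are illustrative assumptions, not part of Scrapy; only the hook signatures come from the API described above:

    import logging

    logger = logging.getLogger(__name__)

    class CustomDownloaderMiddleware:
        # Hypothetical middleware showing each hook described above.

        def __init__(self, user_agent):
            self.user_agent = user_agent

        @classmethod
        def from_crawler(cls, crawler):
            # Build the instance from the crawler settings.
            return cls(user_agent=crawler.settings.get('USER_AGENT', 'mybot'))

        def process_request(self, request, spider):
            # Returning None lets the request continue through the chain.
            request.headers.setdefault('User-Agent', self.user_agent)
            return None

        def process_response(self, request, response, spider):
            # Must return a Response or Request, or raise IgnoreRequest.
            if response.status >= 500:
                logger.debug('Server error %d for %s', response.status, request.url)
            return response

        def process_exception(self, request, exception, spider):
            # Returning None defers to other process_exception() methods.
            logger.debug('Exception %r for %s', exception, request.url)
            return None

The middleware would then be enabled through DOWNLOADER_MIDDLEWARES, as shown in the activation example above.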

Built-in downloader middleware reference

This page describes all downloader middleware components that come with Scrapy. For information on how to use them and how to write your own downloader middleware, see the downloader middleware usage guide.

For a list of the components enabled by default (and their orders) see the DOWNLOADER_MIDDLEWARES_BASE setting.


  • class scrapy.downloadermiddlewares.cookies.CookiesMiddleware[source]
  • This middleware enables working with sites that require cookies, such as those that use sessions. It keeps track of cookies sent by web servers, and sends them back on subsequent requests (from that spider), just like web browsers do.

The following settings can be used to configure the cookie middleware:

New in version 0.15.

There is support for keeping multiple cookie sessions per spider by using the cookiejar Request meta key. By default it uses a single cookie jar (session), but you can pass an identifier to use different ones.

For example:

    for i, url in enumerate(urls):
        yield scrapy.Request(url, meta={'cookiejar': i},
            callback=self.parse_page)

Keep in mind that the cookiejar meta key is not "sticky". You need to keep passing it along on subsequent requests. For example:

    def parse_page(self, response):
        # do some processing
        return scrapy.Request("http://www.example.com/otherpage",
            meta={'cookiejar': response.meta['cookiejar']},
            callback=self.parse_other_page)


COOKIES_ENABLED

Default: True

Whether to enable the cookies middleware. If disabled, no cookies will be sent to web servers.

Notice that despite the value of the COOKIES_ENABLED setting, if Request.meta['dont_merge_cookies'] evaluates to True the request cookies will not be sent to the web server and received cookies in Response will not be merged with the existing cookies.
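
For example, a minimal sketch of turning cookie handling off for one request (the URL is a placeholder):

    import scrapy

    # Neither send stored cookies nor merge the ones this response sets.
    request = scrapy.Request('http://www.example.com',
                             meta={'dont_merge_cookies': True})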

For more detailed information see the cookies parameter in Request.


COOKIES_DEBUG

Default: False

If enabled, Scrapy will log all cookies sent in requests (i.e. Cookie header) and all cookies received in responses (i.e. Set-Cookie header).

Here’s an example of a log with COOKIES_DEBUG enabled:

    2011-04-06 14:35:10-0300 [scrapy.core.engine] INFO: Spider opened
    2011-04-06 14:35:10-0300 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <GET>
    Cookie: clientlanguage_nl=en_EN
    2011-04-06 14:35:14-0300 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200>
    Set-Cookie: JSESSIONID=B~FA4DC0C496C8762AE4F1A620EAB34F38; Path=/
    Set-Cookie: ip_isocode=US
    Set-Cookie: clientlanguage_nl=en_EN; Expires=Thu, 07-Apr-2011 21:21:34 GMT; Path=/
    2011-04-06 14:49:50-0300 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)
    [...]


  • class scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware[source]
  • This middleware sets all default request headers specified in the DEFAULT_REQUEST_HEADERS setting.


  • class scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware[source]
  • This middleware sets the download timeout for requests specified in the DOWNLOAD_TIMEOUT setting or the download_timeout spider attribute.


You can also set download timeout per-request using the download_timeout Request.meta key; this is supported even when DownloadTimeoutMiddleware is disabled.
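
For example, a minimal sketch of a per-request timeout (placeholder URL):

    import scrapy

    # Allow this single request 30 seconds, whatever DOWNLOAD_TIMEOUT says.
    request = scrapy.Request('http://www.example.com',
                             meta={'download_timeout': 30})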


  • class scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware[source]
  • This middleware authenticates all requests generated from certain spiders using Basic access authentication (aka. HTTP auth).

To enable HTTP authentication from certain spiders, set the http_user and http_pass attributes of those spiders.


    from scrapy.spiders import CrawlSpider

    class SomeIntranetSiteSpider(CrawlSpider):

        http_user = 'someuser'
        http_pass = 'somepass'
        name = 'intranet.example.com'

        # .. rest of the spider code omitted ...


  • class scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware[source]
  • This middleware provides low-level cache to all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.

Scrapy ships with the following HTTP cache storage backends:

  • Filesystem storage backend (default)
  • DBM storage backend

You can change the HTTP cache storage backend with the HTTPCACHE_STORAGE setting. Or you can also implement your own storage backend.

Scrapy ships with two HTTP cache policies:

  • Dummy policy (default)
  • RFC2616 policy

You can change the HTTP cache policy with the HTTPCACHE_POLICY setting. Or you can also implement your own policy.

You can also avoid caching a response with any policy by setting the dont_cache meta key to True.
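
For example, a minimal sketch that keeps a single response out of the cache (placeholder URL):

    import scrapy

    # The response to this request will not be stored by any cache policy.
    request = scrapy.Request('http://www.example.com',
                             meta={'dont_cache': True})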

Dummy policy (default)

  • class scrapy.extensions.httpcache.DummyPolicy[source]
  • This policy has no awareness of any HTTP Cache-Control directives. Every request and its corresponding response are cached. When the same request is seen again, the response is returned without transferring anything from the Internet.

The Dummy policy is useful for testing spiders faster (without having to wait for downloads every time) and for trying your spider offline, when an Internet connection is not available. The goal is to be able to "replay" a spider run exactly as it ran before.

RFC2616 policy

  • class scrapy.extensions.httpcache.RFC2616Policy[source]
  • This policy provides a RFC2616 compliant HTTP cache, i.e. with HTTP Cache-Control awareness, aimed at production and used in continuous runs to avoid downloading unmodified data (to save bandwidth and speed up crawls).

What is implemented:

  • Do not attempt to store responses/requests with no-store cache-control directive set
  • Do not serve responses from cache if no-cache cache-control directive is set even for fresh responses
  • Compute freshness lifetime from max-age cache-control directive
  • Compute freshness lifetime from Expires response header
  • Compute freshness lifetime from Last-Modified response header (heuristic used by Firefox)
  • Compute current age from Age response header
  • Compute current age from Date header
  • Revalidate stale responses based on Last-Modified response header
  • Revalidate stale responses based on ETag response header
  • Set Date header for any received response missing it
  • Support max-stale cache-control directive in requests

This allows spiders to be configured with the full RFC2616 cache policy, but avoid revalidation on a request-by-request basis, while remaining conformant with the HTTP spec.


Add Cache-Control: max-stale=600 to Request headers to accept responses that have exceeded their expiration time by no more than 600 seconds.

See also: RFC2616, 14.9.3
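
For example, a minimal sketch of sending that header on a single request (placeholder URL):

    import scrapy

    # Accept cached responses up to 600 seconds past their expiration time.
    request = scrapy.Request('http://www.example.com',
                             headers={'Cache-Control': 'max-stale=600'})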

What is missing:

  • Pragma: no-cache support
  • Vary header support
  • Invalidation after updates or deletes

Filesystem storage backend (default)

  • class scrapy.extensions.httpcache.FilesystemCacheStorage[source]
  • A file system storage backend is available for the HTTP cache middleware.

Each request/response pair is stored in a different directory containing the following files:

  • request_body - the plain request body
  • request_headers - the request headers (in raw HTTP format)
  • response_body - the plain response body
  • response_headers - the response headers (in raw HTTP format)
  • meta - some metadata of this cache resource in Python repr() format (grep-friendly format)
  • pickled_meta - the same metadata in meta but pickled for more efficient deserialization

The directory name is made from the request fingerprint (see scrapy.utils.request.fingerprint), and one level of subdirectories is used to avoid creating too many files into the same directory (which is inefficient in many file systems). An example directory could be:

    /path/to/cache/dir/

DBM storage backend

  • class scrapy.extensions.httpcache.DbmCacheStorage[source]

New in version 0.13.

A DBM storage backend is also available for the HTTP cache middleware.

By default, it uses the dbm module, but you can change it with the HTTPCACHE_DBM_MODULE setting.

Writing your own storage backend

You can implement a cache storage backend by creating a Python class that defines the methods described below.

  • class scrapy.extensions.httpcache.CacheStorage
    • open_spider(spider)
    • This method gets called after a spider has been opened for crawling. It handles the open_spider signal.

Parameters: spider (Spider object) – the spider which has been opened

  • close_spider(spider)
  • This method gets called after a spider has been closed. It handles the close_spider signal.

Parameters: spider (Spider object) – the spider which has been closed

  • retrieve_response(spider, request)
  • Return response if present in cache, or None otherwise.


Parameters:

  • spider (Spider object) – the spider which generated the request
  • request (Request object) – the request to find cached response for

  • store_response(spider, request, response)
  • Store the given response in the cache.


Parameters:

  • spider (Spider object) – the spider for which the response is intended
  • request (Request object) – the corresponding request the spider generated
  • response (Response object) – the response to store in the cache

In order to use your storage backend, set HTTPCACHE_STORAGE to the Python import path of your custom storage class.
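
As an illustration, here is a minimal sketch of a custom backend that keeps responses in memory; the class and module names are hypothetical, and a real backend would persist data somewhere durable:

    from scrapy.utils.request import fingerprint

    class InMemoryCacheStorage:
        # Hypothetical backend: cached responses live in a dict.

        def __init__(self, settings):
            self._cache = {}

        def open_spider(self, spider):
            pass

        def close_spider(self, spider):
            pass

        def retrieve_response(self, spider, request):
            # Return the cached Response, or None on a cache miss.
            return self._cache.get(fingerprint(request))

        def store_response(self, spider, request, response):
            self._cache[fingerprint(request)] = response

    # settings.py (hypothetical import path):
    # HTTPCACHE_STORAGE = 'myproject.cachestorage.InMemoryCacheStorage'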

HTTPCache middleware settings

The HttpCacheMiddleware can be configured through the followingsettings:


HTTPCACHE_ENABLED

New in version 0.11.

Default: False

Whether the HTTP cache will be enabled.

Changed in version 0.11: Before 0.11, HTTPCACHE_DIR was used to enable cache.


HTTPCACHE_EXPIRATION_SECS

Default: 0

Expiration time for cached requests, in seconds.

Cached requests older than this time will be re-downloaded. If zero, cached requests will never expire.

Changed in version 0.11: Before 0.11, zero meant cached requests always expire.


HTTPCACHE_DIR

Default: 'httpcache'

The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP cache will be disabled. If a relative path is given, it is taken relative to the project data dir. For more info see: Default structure of Scrapy projects.


HTTPCACHE_IGNORE_HTTP_CODES

New in version 0.10.

Default: []

Don’t cache responses with these HTTP codes.


HTTPCACHE_IGNORE_MISSING

Default: False

If enabled, requests not found in the cache will be ignored instead of downloaded.


HTTPCACHE_IGNORE_SCHEMES

New in version 0.10.

Default: ['file']

Don’t cache responses with these URI schemes.


HTTPCACHE_STORAGE

Default: 'scrapy.extensions.httpcache.FilesystemCacheStorage'

The class which implements the cache storage backend.


HTTPCACHE_DBM_MODULE

New in version 0.13.

Default: 'dbm'

The database module to use in the DBM storage backend. This setting is specific to the DBM backend.


HTTPCACHE_POLICY

New in version 0.18.

Default: 'scrapy.extensions.httpcache.DummyPolicy'

The class which implements the cache policy.


HTTPCACHE_GZIP

New in version 1.0.

Default: False

If enabled, will compress all cached data with gzip. This setting is specific to the Filesystem backend.


HTTPCACHE_ALWAYS_STORE

New in version 1.1.

Default: False

If enabled, will cache pages unconditionally.

A spider may wish to have all responses available in the cache, for future use with Cache-Control: max-stale, for instance. The DummyPolicy caches all responses but never revalidates them, and sometimes a more nuanced policy is desirable.

This setting still respects Cache-Control: no-store directives in responses. If you don't want that, filter no-store out of the Cache-Control headers in responses you feed to the cache middleware.


HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS

New in version 1.1.

Default: []

List of Cache-Control directives in responses to be ignored.

Sites often set "no-store", "no-cache", "must-revalidate", etc., but get upset at the traffic a spider can generate if it actually respects those directives. This setting allows you to selectively ignore Cache-Control directives that are known to be unimportant for the sites being crawled.

We assume that the spider will not issue Cache-Control directives in requests unless it actually needs them, so directives in requests are not filtered.


  • class scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware[source]
  • This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.

This middleware also supports decoding brotli-compressed responses, provided brotlipy is installed.

HttpCompressionMiddleware Settings


COMPRESSION_ENABLED

Default: True

Whether the Compression middleware will be enabled.


New in version 0.8.

  • class scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware[source]
  • This middleware sets the HTTP proxy to use for requests, by setting the proxy meta value for Request objects.

Like the Python standard library modules urllib and urllib2, it obeys the following environment variables:

  • http_proxy
  • https_proxy
  • no_proxy
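
You can also set the proxy explicitly per request through the proxy meta key; a minimal sketch (the proxy address and URL are placeholders):

    import scrapy

    # Route this request through an explicit proxy, bypassing the
    # environment variables above.
    request = scrapy.Request('http://www.example.com',
                             meta={'proxy': 'http://proxy.example.com:8080'})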


  • class scrapy.downloadermiddlewares.redirect.RedirectMiddleware[source]
  • This middleware handles redirection of requests based on response status.

The urls which the request goes through (while being redirected) can be found in the redirect_urls Request.meta key.

The reason behind each redirect in redirect_urls can be found in the redirect_reasons Request.meta key. For example: [301, 302, 307, 'meta refresh'].

The format of a reason depends on the middleware that handled the corresponding redirect. For example, RedirectMiddleware indicates the triggering response status code as an integer, while MetaRefreshMiddleware always uses the 'meta refresh' string as reason.

The RedirectMiddleware can be configured through the following settings (see the settings documentation for more info):

  • REDIRECT_ENABLED
  • REDIRECT_MAX_TIMES

If Request.meta has the dont_redirect key set to True, the request will be ignored by this middleware.

If you want to handle some redirect status codes in your spider, you can specify these in the handle_httpstatus_list spider attribute.

For example, if you want the redirect middleware to ignore 301 and 302 responses (and pass them through to your spider) you can do this:

    class MySpider(CrawlSpider):
        handle_httpstatus_list = [301, 302]

The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to allow on a per-request basis. You can also set the meta key handle_httpstatus_all to True if you want to allow any response code for a request.
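
For example, a minimal sketch of allowing redirect codes through for one request only (the spider name and URL are placeholders):

    import scrapy

    class MySpider(scrapy.Spider):
        name = 'redirect_example'  # hypothetical

        def start_requests(self):
            # 301/302 responses will reach parse() for this request only.
            yield scrapy.Request('http://www.example.com',
                                 meta={'handle_httpstatus_list': [301, 302]})

        def parse(self, response):
            self.logger.info('Got status %d', response.status)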

RedirectMiddleware settings


REDIRECT_ENABLED

New in version 0.13.

Default: True

Whether the Redirect middleware will be enabled.


REDIRECT_MAX_TIMES

Default: 20

The maximum number of redirections that will be followed for a single request.


  • class scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware[source]
  • This middleware handles redirection of requests based on meta-refresh html tag.

The MetaRefreshMiddleware can be configured through the following settings (see the settings documentation for more info):

  • METAREFRESH_ENABLED
  • METAREFRESH_MAXDELAY

This middleware obeys the REDIRECT_MAX_TIMES setting and the dont_redirect, redirect_urls and redirect_reasons request meta keys, as described for RedirectMiddleware.

MetaRefreshMiddleware settings


METAREFRESH_ENABLED

New in version 0.17.

Default: True

Whether the Meta Refresh middleware will be enabled.


METAREFRESH_IGNORE_TAGS

Default: []

Meta tags within these tags are ignored.

Changed in version 2.0: The default value of METAREFRESH_IGNORE_TAGS changed from['script', 'noscript'] to [].


METAREFRESH_MAXDELAY

Default: 100

The maximum meta-refresh delay (in seconds) to follow the redirection. Some sites use meta-refresh for redirecting to a session expired page, so we restrict automatic redirection to the maximum delay.


  • class scrapy.downloadermiddlewares.retry.RetryMiddleware[source]
  • A middleware to retry failed requests that are potentially caused by temporary problems such as a connection timeout or HTTP 500 error.

Failed pages are collected during the scraping process and rescheduled at the end, once the spider has finished crawling all regular (non-failed) pages.

The RetryMiddleware can be configured through the following settings (see the settings documentation for more info):

  • RETRY_ENABLED
  • RETRY_TIMES
  • RETRY_HTTP_CODES

If Request.meta has the dont_retry key set to True, the request will be ignored by this middleware.

RetryMiddleware Settings


RETRY_ENABLED

New in version 0.13.

Default: True

Whether the Retry middleware will be enabled.


RETRY_TIMES

Default: 2

Maximum number of times to retry, in addition to the first download.

Maximum number of retries can also be specified per-request using the max_retry_times attribute of Request.meta. When initialized, the max_retry_times meta key takes higher precedence over the RETRY_TIMES setting.
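
For example, a minimal sketch of raising the retry budget for one request (placeholder URL):

    import scrapy

    # Allow up to 5 retries for this request, overriding RETRY_TIMES.
    request = scrapy.Request('http://www.example.com',
                             meta={'max_retry_times': 5})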


RETRY_HTTP_CODES

Default: [500, 502, 503, 504, 522, 524, 408, 429]

Which HTTP response codes to retry. Other errors (DNS lookup issues, connections lost, etc) are always retried.

In some cases you may want to add 400 to RETRY_HTTP_CODES because it is a common code used to indicate server overload. It is not included by default because HTTP specs say so.


  • class scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware[source]
  • This middleware filters out requests forbidden by the robots.txt exclusion standard.

To make sure Scrapy respects robots.txt make sure the middleware is enabled and the ROBOTSTXT_OBEY setting is enabled.

The ROBOTSTXT_USER_AGENT setting can be used to specify the user agent string to use for matching in the robots.txt file. If it is None, the User-Agent header you are sending with the request or the USER_AGENT setting (in that order) will be used for determining the user agent to use in the robots.txt file.

This middleware has to be combined with a robots.txt parser.

Scrapy ships with support for the following robots.txt parsers:

  • Protego (default)
  • RobotFileParser
  • Reppy
  • Robotexclusionrulesparser

If Request.meta has the dont_obey_robotstxt key set to True, the request will be ignored by this middleware even if ROBOTSTXT_OBEY is enabled.

Parsers vary in several aspects:

  • Language of implementation
  • Supported specification
  • Support for wildcard matching
  • Usage of length based rule: in particular for Allow and Disallow directives, where the most specific rule based on the length of the path trumps the less specific (shorter) rule

Performance comparison of different parsers is available at the following link.

Protego parser

Based on Protego:

  • implemented in Python
  • is compliant with Google's Robots.txt Specification
  • supports wildcard matching
  • uses the length based rule

Scrapy uses this parser by default.


RobotFileParser parser

Based on RobotFileParser:

  • is Python's built-in robots.txt parser
  • is compliant with Martijn Koster's 1996 draft specification
  • lacks support for wildcard matching
  • doesn't use the length based rule

It is faster than Protego and backward-compatible with versions of Scrapy before 1.8.0.

In order to use this parser, set the ROBOTSTXT_PARSER setting to scrapy.robotstxt.PythonRobotParser.

Reppy parser

Based on Reppy:

  • is a Python wrapper around Robots Exclusion Protocol Parser for C++
  • is compliant with Martijn Koster's 1996 draft specification
  • supports wildcard matching
  • uses the length based rule

Native implementation, provides better speed than Protego.

In order to use this parser:

  • Install Reppy by running pip install reppy
  • Set ROBOTSTXT_PARSER setting toscrapy.robotstxt.ReppyRobotParser


Robotexclusionrulesparser

Based on Robotexclusionrulesparser:

  • implemented in Python
  • is compliant with Martijn Koster's 1996 draft specification
  • supports wildcard matching
  • doesn't use the length based rule

In order to use this parser:

  • Install Robotexclusionrulesparser by running pip install robotexclusionrulesparser
  • Set ROBOTSTXT_PARSER setting to scrapy.robotstxt.RerpRobotParser

Implementing support for a new parser

You can implement support for a new robots.txt parser by subclassing the abstract base class RobotParser and implementing the methods described below.

  • class scrapy.robotstxt.RobotParser[source]
    • abstract allowed(url, user_agent)[source]
    • Return True if user_agent is allowed to crawl url, otherwise return False.


Parameters:

  • url (string) – Absolute URL
  • user_agent (string) – User agent
  • abstract classmethod from_crawler(cls, crawler, robotstxt_body)[source]
  • Parse the content of a robots.txt file as bytes. This must be a class method. It must return a new instance of the parser backend.


Parameters:

  • crawler (Crawler instance) – crawler which made the request
  • robotstxt_body (bytes) – content of a robots.txt file
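
For illustration, a minimal sketch of a parser backend that allows every URL; the class name is hypothetical and a real implementation would actually parse robotstxt_body:

    from scrapy.robotstxt import RobotParser

    class AllowAllRobotParser(RobotParser):
        # Hypothetical backend: never disallows anything.

        def __init__(self, robotstxt_body):
            # A real parser would decode and parse robotstxt_body here.
            self.body = robotstxt_body

        @classmethod
        def from_crawler(cls, crawler, robotstxt_body):
            return cls(robotstxt_body)

        def allowed(self, url, user_agent):
            return True

    # settings.py (hypothetical import path):
    # ROBOTSTXT_PARSER = 'myproject.robotsparsers.AllowAllRobotParser'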


  • class scrapy.downloadermiddlewares.stats.DownloaderStats[source]
  • Middleware that stores stats of all requests, responses and exceptions that pass through it.

To use this middleware you must enable the DOWNLOADER_STATS setting.


  • class scrapy.downloadermiddlewares.useragent.UserAgentMiddleware[source]
  • Middleware that allows spiders to override the default user agent.

In order for a spider to override the default user agent, its user_agent attribute must be set.



  • class scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware[source]
  • Middleware that finds 'AJAX crawlable' page variants based on the meta-fragment html tag.

Scrapy finds 'AJAX crawlable' pages for URLs like 'http://example.com/!#foo=bar' even without this middleware. AjaxCrawlMiddleware is necessary when a URL doesn't contain '!#'. This is often a case for 'index' or 'main' website pages.

AjaxCrawlMiddleware Settings


AJAXCRAWL_ENABLED

New in version 0.21.

Default: False

Whether the AjaxCrawlMiddleware will be enabled. You may want to enable it for broad crawls.

HttpProxyMiddleware settings


HTTPPROXY_ENABLED

Default: True

Whether or not to enable the HttpProxyMiddleware.


Default: "latin-1"

The default encoding for proxy authentication on HttpProxyMiddleware.