Spiders Contracts

New in version 0.15.

Testing spiders can get particularly annoying and, while nothing prevents you from writing unit tests, the task gets cumbersome quickly. Scrapy offers an integrated way of testing your spiders by means of contracts.

This allows you to test each callback of your spider by hardcoding a sample url and checking various constraints for how the callback processes the response. Each contract is prefixed with an @ and included in the docstring. See the following example:

def parse(self, response):
    """ This function parses a sample response. Some contracts are mingled
    with this docstring.

    @url http://www.amazon.com/s?field-keywords=selfish+gene
    @returns items 1 16
    @returns requests 0 0
    @scrapes Title Author Year Price
    """

This callback can be tested using the following built-in contracts:

  • class scrapy.contracts.default.UrlContract[source]
  • This contract (@url) sets the sample URL used when checking other contract conditions for this spider. This contract is mandatory. All callbacks lacking this contract are ignored when running the checks:

@url url

  • class scrapy.contracts.default.CallbackKeywordArgumentsContract[source]
  • This contract (@cb_kwargs) sets the cb_kwargs attribute for the sample request. It must be a valid JSON dictionary (a combined example follows this list):

@cb_kwargs {"arg1": "value1", "arg2": "value2", …}

  • class scrapy.contracts.default.ReturnsContract[source]
  • This contract (@returns) sets lower and upper bounds for the items and requests returned by the spider. The upper bound is optional:

@returns item(s)|request(s) [min [max]]

  • class scrapy.contracts.default.ScrapesContract[source]
  • This contract (@scrapes) checks that all the items returned by the callback have the specified fields:

@scrapes field_1 field_2
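
Since the opening example does not use @cb_kwargs, here is a minimal sketch of a callback combining it with the other contracts. The spider name, URL, selectors and field names are placeholders chosen for illustration, not part of the Scrapy API:

import scrapy

class BookSpider(scrapy.Spider):
    name = 'books'  # placeholder spider name

    def parse_book(self, response, category):
        """ Parses a single book page; 'category' arrives via cb_kwargs.

        @url http://www.example.com/book/1
        @cb_kwargs {"category": "science"}
        @returns items 1 1
        @returns requests 0 0
        @scrapes title price
        """
        yield {
            'title': response.css('h1::text').get(),
            'price': response.css('.price::text').get(),
            'category': category,
        }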

Use the check command to run the contract checks.

Custom Contracts

If you find you need more power than the built-in Scrapy contracts, you can create and load your own contracts in the project by using the SPIDER_CONTRACTS setting:

SPIDER_CONTRACTS = {
    'myproject.contracts.ResponseCheck': 10,
    'myproject.contracts.ItemValidate': 10,
}

Each contract must inherit from Contract and can override three methods:

  • class scrapy.contracts.Contract(method, *args)[source]

Parameters:

  • method (function) – callback function to which the contract is associated
  • args (list) – list of arguments passed into the docstring (whitespace separated)
  • adjust_request_args(args)[source]
  • This receives a dict as an argument containing default arguments for the request object. Request is used by default, but this can be changed with the request_cls attribute. If multiple contracts in a chain have this attribute defined, the last one is used.

Must return the same or a modified version of it.

  • pre_process(response)
  • This allows hooking in various checks on the response received from the sample request, before it is passed to the callback.

  • post_process(output)
  • This allows processing the output of the callback. Iterators are converted to lists before being passed to this hook.

Raise ContractFail from pre_process or post_process if expectations are not met:

  • class scrapy.exceptions.ContractFail[source]
  • Error raised in case of a failing contract
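
As a sketch of adjust_request_args in use, the hypothetical contract below forces the sample request to be made as a POST; the contract name and the @post_request tag are illustrative, not built into Scrapy:

from scrapy.contracts import Contract

class PostRequestContract(Contract):
    """ Hypothetical contract that makes the sample request use POST.
    @post_request
    """

    name = 'post_request'

    def adjust_request_args(self, args):
        # 'args' is the dict of keyword arguments used to build the Request
        args['method'] = 'POST'
        return args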

Here is a demo contract which checks the presence of a custom header in the response received:

from scrapy.contracts import Contract
from scrapy.exceptions import ContractFail

class HasHeaderContract(Contract):
    """ Demo contract which checks the presence of a custom header
    @has_header X-CustomHeader
    """

    name = 'has_header'

    def pre_process(self, response):
        for header in self.args:
            if header not in response.headers:
                raise ContractFail('X-CustomHeader not present')
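
To put the demo contract to work, it would be registered through SPIDER_CONTRACTS and then referenced by its name in a callback docstring; the module path and URL below are placeholders:

# settings.py -- the module path is a placeholder
SPIDER_CONTRACTS = {
    'myproject.contracts.HasHeaderContract': 10,
}

# In a spider callback:
def parse(self, response):
    """ Check that the sample response carries the custom header.

    @url http://www.example.com/
    @has_header X-CustomHeader
    """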

Detecting check runs

When scrapy check is running, the SCRAPY_CHECK environment variable is set to the string 'true'. You can use os.environ to perform any change to your spiders or your settings when scrapy check is used:

import os
import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'

    def __init__(self):
        if os.environ.get('SCRAPY_CHECK'):
            pass  # Do some scraper adjustments when a check is running
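
The same variable can be consulted in the project settings module; a minimal sketch, assuming a settings.py file (the specific setting adjusted here is only an example):

# settings.py
import os

if os.environ.get('SCRAPY_CHECK'):
    # Illustrative adjustment for check runs; pick whatever settings
    # make sense for your project.
    CLOSESPIDER_TIMEOUT = 10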