Contributing to Scrapy

Important

Double check that you are reading the most recent version of this document at https://docs.scrapy.org/en/master/contributing.html

There are many ways to contribute to Scrapy. Here are some of them:

  • Blog about Scrapy. Tell the world how you’re using Scrapy. This will help newcomers with more examples and will help the Scrapy project to increase its visibility.
  • Report bugs and request features in the issue tracker, trying to follow the guidelines detailed in Reporting bugs below.
  • Submit patches for new functionality and/or bug fixes. Please read Writing patches and Submitting patches below for details on how to write and submit a patch.
  • Join the Scrapy subreddit and share your ideas on how to improve Scrapy. We’re always open to suggestions.
  • Answer Scrapy questions at Stack Overflow.

Reporting bugs

Note

Please report security issues only to scrapy-security@googlegroups.com. This is a private list only open to trusted Scrapy developers, and its archives are not public.

Well-written bug reports are very helpful, so keep in mind the following guidelines when reporting a new bug.

  • check the FAQ first to see if your issue is addressed in a well-known question
  • if you have a general question about Scrapy usage, please ask it at Stack Overflow (use the “scrapy” tag).
  • check the open issues to see if the issue has already been reported. If it has, don’t dismiss the report, but check the ticket history and comments. If you have additional useful information, please leave a comment, or consider sending a pull request with a fix.
  • search the scrapy-users list and Scrapy subreddit to see if it has been discussed there, or if you’re not sure if what you’re seeing is a bug. You can also ask in the #scrapy IRC channel.
  • write complete, reproducible, specific bug reports. The smaller the test case, the better. Remember that other developers won’t have your project to reproduce the bug, so please include all relevant files required to reproduce it. See for example Stack Overflow’s guide on creating a Minimal, Complete, and Verifiable example exhibiting the issue.
  • the most awesome way to provide a complete reproducible example is to send a pull request which adds a failing test case to the Scrapy testing suite (see Submitting patches). This is helpful even if you don’t have an intention to fix the issue yourself.
  • include the output of scrapy version -v so developers working on your bug know exactly which version and platform it occurred on, which is often very helpful for reproducing it, or knowing if it was already fixed.

Writing patches

The better a patch is written, the higher the chances that it’ll get accepted and the sooner it will be merged.

Well-written patches should:

  • contain the minimum amount of code required for the specific change. Small patches are easier to review and merge. So, if you’re making more than one change (or bug fix), please consider submitting one patch per change. Do not collapse multiple changes into a single patch. For big changes consider using a patch queue.

  • pass all unit-tests. See Running tests below.

  • include one (or more) test cases that check the bug fixed or the new functionality added. See Writing tests below.

  • if you’re adding or changing a public (documented) API, please include the documentation changes in the same patch. See Documentation policies below.

  • if you’re adding a private API, please add a regular expression to the coverage_ignore_pyobjects variable of docs/conf.py to exclude the new private API from documentation coverage checks.
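
For example, such an entry in docs/conf.py might look like this (a minimal sketch; the module pattern below is hypothetical):

    # docs/conf.py
    coverage_ignore_pyobjects = [
        # Regular expressions matched against fully-qualified object names;
        # this pattern excludes a hypothetical private helpers module.
        r"^scrapy\.utils\._private_helpers\.",
    ]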

To see if your private API is skipped properly, generate a documentation coverage report as follows:

    tox -e docs-coverage

Submitting patches

The best way to submit a patch is to issue a pull request on GitHub, optionally creating a new issue first.

Remember to explain what was fixed or what the new functionality is (what it is, why it’s needed, etc.). The more info you include, the easier it will be for core developers to understand and accept your patch.

You can also discuss the new functionality (or bug fix) before creating the patch, but it’s always good to have a patch ready to illustrate your arguments and show that you have put some additional thought into the subject. A good starting point is to send a pull request on GitHub. It can be simple enough to illustrate your idea, and leave documentation/tests for later, after the idea has been validated and proven useful. Alternatively, you can start a conversation in the Scrapy subreddit to discuss your idea first.

Sometimes there is an existing pull request for the problem you’d like to solve, which is stalled for some reason. Often the pull request is headed in the right direction, but changes were requested by Scrapy maintainers, and the original pull request author hasn’t had time to address them. In this case consider picking up this pull request: open a new pull request with all commits from the original pull request, as well as additional changes to address the raised issues. Doing so helps a lot; it is not considered rude as long as the original author is acknowledged by keeping their commits.

You can pull an existing pull request to a local branch by running:

    git fetch upstream pull/$PR_NUMBER/head:$BRANCH_NAME_TO_CREATE

(replace ‘upstream’ with a remote name for the scrapy repository, $PR_NUMBER with the ID of the pull request, and $BRANCH_NAME_TO_CREATE with the name of the branch you want to create locally). See also: https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally#modifying-an-inactive-pull-request-locally.

When writing GitHub pull requests, try to keep titles short but descriptive. For example, for bug #411 (“Scrapy hangs if an exception raises in start_requests”), prefer “Fix hanging when exception occurs in start_requests (#411)” over “Fix for #411”. Complete titles make it easy to skim through the issue tracker.

Finally, try to keep aesthetic changes (PEP 8 compliance, removal of unused imports, etc.) in separate commits from functional changes. This will make pull requests easier to review and more likely to get merged.

Coding style

Please follow these coding conventions when writing code for inclusion in Scrapy:

  • Unless otherwise specified, follow PEP 8.
  • It’s OK to use lines longer than 80 chars if it improves the code readability.
  • Don’t put your name in the code you contribute; git provides enough metadata to identify the author of the code.

Documentation policies

For reference documentation of API members (classes, methods, etc.) use docstrings and make sure that the Sphinx documentation uses the autodoc extension to pull the docstrings. API reference documentation should follow docstring conventions (PEP 257) and be IDE-friendly: short, to the point, and it may provide short examples.

Other types of documentation, such as tutorials or topics, should be covered in files within the docs/ directory. This includes documentation that is specific to an API member, but goes beyond API reference documentation.

In any case, if something is covered in a docstring, use the autodoc extension to pull the docstring into the documentation instead of duplicating the docstring in files within the docs/ directory.
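
For illustration, here is a short, PEP 257-style docstring on a hypothetical helper function (not part of the Scrapy API):

    def normalize_title(title):
        """Return *title* stripped of surrounding whitespace and lowercased.

        >>> normalize_title("  Hello World ")
        'hello world'
        """
        return title.strip().lower()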

Tests

Tests are implemented using the Twisted unit-testing framework. Running tests requires tox.

Running tests

To run all tests:

    tox

To run a specific test (say tests/test_loader.py) use:

    tox -- tests/test_loader.py

To run the tests on a specific tox environment, use -e <name> with an environment name from tox.ini. For example, to run the tests with Python 3.6 use:

    tox -e py36

You can also specify a comma-separated list of environments, and use tox’s parallel mode to run the tests on multiple environments in parallel:

    tox -e py36,py38 -p auto

To pass command-line options to pytest, add them after -- in your call to tox. Using -- overrides the default positional arguments defined in tox.ini, so you must include those default positional arguments (scrapy tests) after -- as well:

    tox -- scrapy tests -x  # stop after first failure

You can also use the pytest-xdist plugin. For example, to run all tests on the Python 3.6 tox environment using all your CPU cores:

    tox -e py36 -- scrapy tests -n auto

To see the coverage report, install coverage (pip install coverage) and run:

    coverage report

See the output of coverage --help for more options, like HTML or XML reports.

Writing tests

All functionality (including new features and bug fixes) must include a test case to check that it works as expected, so please include tests for your patches if you want them to get accepted sooner.

Scrapy uses unit-tests, which are located in the tests/ directory. Their module name typically resembles the full path of the module they’re testing. For example, the item loaders code is in:

    scrapy.loader

And their unit-tests are in:

    tests/test_loader.py
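
As a minimal sketch, a test added to tests/test_loader.py could look like the following (the item class and test names are illustrative, not taken from the existing suite):

    import unittest

    from scrapy.item import Field, Item
    from scrapy.loader import ItemLoader


    class _NameItem(Item):
        name = Field()


    class NameItemLoaderTest(unittest.TestCase):
        def test_add_value_collects_list(self):
            loader = ItemLoader(item=_NameItem())
            loader.add_value("name", "Scrapy")
            item = loader.load_item()
            # With the default Identity processors, values are collected into a list.
            self.assertEqual(item["name"], ["Scrapy"])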