- Frequently Asked Questions
- How does Scrapy compare to BeautifulSoup or lxml?
- Can I use Scrapy with BeautifulSoup?
- What Python versions does Scrapy support?
- Did Scrapy “steal” X from Django?
- Does Scrapy work with HTTP proxies?
- How can I scrape an item with attributes in different pages?
- Scrapy crashes with: ImportError: No module named win32api
- How can I simulate a user login in my spider?
- Does Scrapy crawl in breadth-first or depth-first order?
- My Scrapy crawler has memory leaks. What can I do?
- How can I make Scrapy consume less memory?
- Can I use Basic HTTP Authentication in my spiders?
- Why does Scrapy download pages in English instead of my native language?
- Where can I find some example Scrapy projects?
- Can I run a spider without creating a project?
- I get “Filtered offsite request” messages. How can I fix them?
- What is the recommended way to deploy a Scrapy crawler in production?
- Can I use JSON for large exports?
- Can I return (Twisted) deferreds from signal handlers?
- What does the response status code 999 mean?
- Can I call pdb.set_trace() from my spiders to debug them?
- Simplest way to dump all my scraped items into a JSON/CSV/XML file?
- What’s this huge cryptic __VIEWSTATE parameter used in some forms?
- What’s the best way to parse big XML/CSV data feeds?
- Does Scrapy manage cookies automatically?
- How can I see the cookies being sent and received from Scrapy?
- How can I instruct a spider to stop itself?
- How can I prevent my Scrapy bot from getting banned?
- Should I use spider arguments or settings to configure my spider?
- I’m scraping a XML document and my XPath selector doesn’t return any items
- How to split an item into multiple items in an item pipeline?
- Does Scrapy support IPv6 addresses?
- How to deal with <class 'ValueError'>: filedescriptor out of range in select() exceptions?
- How can I cancel the download of a given response?
Scrapy provides a built-in mechanism for extracting data (called selectors) but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them. After all, they’re just parsing libraries which can be imported and used from any Python code.
Yes, you can. As mentioned above, BeautifulSoup can be used for parsing HTML responses in Scrapy callbacks. You just have to feed the response’s body into a BeautifulSoup object and extract whatever data you need from it.
Here’s an example spider using the BeautifulSoup API, with lxml as the HTML parser:
from bs4 import BeautifulSoup
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ("http://www.example.com/",)

    def parse(self, response):
        # use lxml to get decent HTML parsing speed
        soup = BeautifulSoup(response.text, 'lxml')
        yield {"url": response.url, "title": soup.h1.string}
BeautifulSoup supports several HTML/XML parsers. See BeautifulSoup’s official documentation on which ones are available.
Scrapy is supported under Python 3.5.2+, on CPython (the default Python implementation) and PyPy (starting with PyPy 5.9). Python 3 support was added in Scrapy 1.1. PyPy support was added in Scrapy 1.4, and PyPy3 support was added in Scrapy 1.5. Python 2 support was dropped in Scrapy 2.0.
For Python 3 support on Windows, it is recommended to use Anaconda/Miniconda as outlined in the installation guide.
Probably, but we don’t like that word. We think Django is a great open source project and an example to follow, so we’ve used it as an inspiration for Scrapy.
We believe that, if something is already done well, there’s no need to reinvent it. This concept, besides being one of the foundations for open source and free software, not only applies to software but also to documentation, procedures, policies, etc. So, instead of going through each problem ourselves, we choose to copy ideas from those projects that have already solved them properly, and focus on the real problems we need to solve.
We’d be proud if Scrapy serves as an inspiration for other projects. Feel free to steal from us!
Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP Proxy downloader middleware. See HttpProxyMiddleware.
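As a minimal sketch (the spider name and proxy URL below are placeholders), the middleware picks up a per-request proxy from the proxy key in the request meta:
import scrapy

class ProxiedSpider(scrapy.Spider):
    name = 'proxied'

    def start_requests(self):
        # HttpProxyMiddleware (enabled by default) routes this request through the proxy
        yield scrapy.Request(
            'http://www.example.com/',
            meta={'proxy': 'http://proxy.example.com:8080'},  # placeholder proxy URL
        )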
By default, Scrapy uses a LIFO queue for storing pending requests, which basically means that it crawls in DFO (depth-first) order. If you do want to crawl in true BFO (breadth-first) order, you can do it by setting the following settings:
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'
While pending requests are below the configured values of CONCURRENT_REQUESTS, CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP, those requests are sent concurrently. As a result, the first few requests of a crawl rarely follow the desired order. Lowering those settings to 1 enforces the desired order, but it significantly slows down the crawl as a whole.
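For example, a sketch of project settings that trade throughput for strict ordering, assuming you accept the slowdown:
# settings.py: enforce strict request ordering at the cost of crawl speed
CONCURRENT_REQUESTS = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1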
Also, Python has a built-in memory leak issue, which is described in Leaks without leaks.
See previous question.
Yes. You can use the runspider command. For example, if you have a spider written in a my_spider.py file, you can run it with:
scrapy runspider my_spider.py
See the runspider command for more info.
Those messages (logged with DEBUG level) don’t necessarily mean there is a problem, so you may not need to fix them.
Those messages are thrown by the Offsite Spider Middleware, which is a spider middleware (enabled by default) whose purpose is to filter out requests to domains outside the ones covered by the spider.
For more info, see the OffsiteMiddleware documentation.
See Deploying Spiders.
Some signals support returning deferreds from their handlers, others don’t. See the Built-in signals reference to know which ones.
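For instance, here is a minimal sketch of an extension whose spider_closed handler returns a Deferred (spider_closed is one of the signals that supports deferred handlers); the CleanupExtension name is hypothetical:
from twisted.internet import defer, reactor

from scrapy import signals

class CleanupExtension:
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider, reason):
        # Returning a Deferred makes Scrapy wait for it before finishing shutdown,
        # e.g. to flush buffers or close external connections asynchronously.
        d = defer.Deferred()
        reactor.callLater(1, d.callback, None)
        return d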
999 is a custom response status code used by Yahoo sites to throttle requests. Try slowing down the crawling speed by using a download delay of 2 (or higher) in your spider:
from scrapy.spiders import CrawlSpider

class MySpider(CrawlSpider):
    name = 'myspider'
    download_delay = 2
    # [ ... rest of the spider code ... ]
Or by setting a global download delay in your project with the DOWNLOAD_DELAY setting.
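For example, in your project’s settings.py:
# settings.py: wait at least 2 seconds between requests
DOWNLOAD_DELAY = 2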
Yes, but you can also use the Scrapy shell which allows you to quickly analyze (and even modify) the response being processed by your spider, which is, quite often, more useful than plain old pdb.set_trace().
For more info see Invoking the shell from spiders to inspect responses.
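A minimal sketch of invoking the shell from a callback (the spider name and the XPath check are illustrative):
import scrapy
from scrapy.shell import inspect_response

class DebugSpider(scrapy.Spider):
    name = 'debug'

    def parse(self, response):
        if not response.xpath('//title'):
            # drops you into an interactive shell with this response preloaded
            inspect_response(response, self)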
To dump into a JSON file:
scrapy crawl myspider -o items.json
To dump into a CSV file:
scrapy crawl myspider -o items.csv
To dump into an XML file:
scrapy crawl myspider -o items.xml
For more information see Feed exports.
Parsing big feeds with XPath selectors can be problematic since they need to build the DOM of the entire feed in memory, and this can be quite slow and consume a lot of memory.
In order to avoid parsing the entire feed at once in memory, you can use the xmliter and csviter functions from the scrapy.utils.iterators module. In fact, this is what the feed spiders (see Spiders) use under the cover.
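For example, a sketch using xmliter to iterate over the nodes of a hypothetical <product> feed one at a time (spider name, URL and node names are placeholders):
import scrapy
from scrapy.utils.iterators import xmliter

class ProductsSpider(scrapy.Spider):
    name = 'products'
    start_urls = ['http://www.example.com/products.xml']

    def parse(self, response):
        # xmliter yields one Selector per <product> node instead of building the whole DOM
        for product in xmliter(response, 'product'):
            yield {'name': product.xpath('./name/text()').get()}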
Yes, Scrapy receives and keeps track of cookies sent by servers, and sends them back on subsequent requests, like any regular web browser does.
Both spider arguments and settings can be used to configure your spider. There is no strict rule that mandates using one or the other, but settings are better suited for parameters that, once set, don’t change much, while spider arguments are meant to change more often, even on each spider run, and are sometimes required for the spider to run at all (for example, to set the start url of a spider).
To illustrate with an example, suppose you have a spider that needs to log into a site to scrape data, and you only want to scrape data from a certain section of the site (which varies each time). In that case, the credentials to log in would be settings, while the url of the section to scrape would be a spider argument.
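A sketch of that split (the spider, setting and argument names here are hypothetical):
import scrapy

class SectionSpider(scrapy.Spider):
    name = 'section'

    def __init__(self, section_url=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # spider argument: changes on every run
        self.start_urls = [section_url] if section_url else []

    def parse(self, response):
        # settings: credentials that rarely change between runs
        yield scrapy.FormRequest.from_response(
            response,
            formdata={
                'user': self.settings.get('SITE_USERNAME'),
                'pass': self.settings.get('SITE_PASSWORD'),
            },
            callback=self.after_login,
        )

    def after_login(self, response):
        # scrape the chosen section here
        pass
It could then be run with something like scrapy crawl section -a section_url="http://www.example.com/some-section/", with the credentials defined in the project settings.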
You may need to remove namespaces. See Removing namespaces.
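For instance, a sketch of a callback calling Selector.remove_namespaces() so that plain XPath expressions match (the <link> node name is illustrative):
def parse(self, response):
    # strip all namespaces so that //link matches regardless of the feed's namespace
    response.selector.remove_namespaces()
    for href in response.xpath('//link/@href').getall():
        yield {'url': href}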
You cannot split an item into multiple items from an item pipeline, but you can achieve the same effect with a spider middleware, yielding multiple copies of the item from its process_spider_output() method:
from copy import deepcopy

from itemadapter import is_item, ItemAdapter

class MultiplyItemsMiddleware:
    def process_spider_output(self, response, result, spider):
        for item in result:
            if is_item(item):
                adapter = ItemAdapter(item)
                for _ in range(adapter['multiply_by']):
                    yield deepcopy(item)
            else:
                yield item
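Such a middleware would then be enabled through the SPIDER_MIDDLEWARES setting (the module path below is hypothetical):
# settings.py
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.MultiplyItemsMiddleware': 543,
}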
Yes, by setting DNS_RESOLVER to scrapy.resolver.CachingHostnameResolver. Note that by doing so, you lose the ability to set a specific timeout for DNS requests (the value of the DNS_TIMEOUT setting is ignored).
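For example, in settings.py:
# settings.py: use the IPv6-capable, caching hostname resolver
DNS_RESOLVER = 'scrapy.resolver.CachingHostnameResolver'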
This issue has been reported to appear when running broad crawls in macOS, where the default Twisted reactor is twisted.internet.selectreactor.SelectReactor. Switching to a different reactor is possible by using the TWISTED_REACTOR setting.
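For example, one possibility is switching to the asyncio-based reactor in settings.py, which does not rely on select():
# settings.py: avoid the select()-based reactor and its file-descriptor limit
TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'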
In some situations, it might be useful to stop the download of a certain response. For instance, you may only need the first part of a large response and want to save resources by avoiding the download of the whole body. In that case, you can attach a handler to the bytes_received signal and raise a StopDownload exception. Please refer to the Stopping the download of a Response topic for additional information and examples.
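As a minimal sketch (the spider name and URL are placeholders), a handler for bytes_received can stop the download after the first chunk while still passing the partial response to the callback:
import scrapy
from scrapy import signals
from scrapy.exceptions import StopDownload

class FirstChunkSpider(scrapy.Spider):
    name = 'first_chunk'
    start_urls = ['http://www.example.com/very-large-page']

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.on_bytes_received, signal=signals.bytes_received)
        return spider

    def on_bytes_received(self, data, request, spider):
        # fail=False hands the partially downloaded response to the request callback
        raise StopDownload(fail=False)

    def parse(self, response):
        self.logger.info('Downloaded only %d bytes', len(response.body))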