Scrapy shell

The Scrapy shell is an interactive shell where you can try and debug your scraping code very quickly, without having to run the spider. It’s meant to be used for testing data extraction code, but you can actually use it for testing any kind of code, as it is also a regular Python shell.

The shell is used for testing XPath or CSS expressions and seeing how they work and what data they extract from the web pages you’re trying to scrape. It allows you to interactively test your expressions while you’re writing your spider, without having to run the spider to test every change.
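You can also experiment with the same kind of expression outside any shell; here is a minimal sketch using only the Python standard library (note that xml.etree.ElementTree supports just a small XPath subset, unlike Scrapy’s full selectors, and the markup below is a made-up example):

```python
import xml.etree.ElementTree as ET

# A hypothetical fragment of the kind of markup you might scrape.
html = "<html><head><title>Example Domain</title></head><body></body></html>"

root = ET.fromstring(html)
# ElementTree understands simple path expressions like the one below;
# Scrapy's selectors accept much richer XPath and CSS expressions.
title = root.find("./head/title").text
print(title)  # Example Domain
```

Inside the Scrapy shell you would write the equivalent as `response.xpath('//title/text()').get()`, as shown in the session examples later in this document.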

Once you get familiar with the Scrapy shell, you’ll see that it’s an invaluable tool for developing and debugging your spiders.

Configuring the shell

If you have IPython installed, the Scrapy shell will use it (instead of the standard Python console). The IPython console is much more powerful and provides smart auto-completion and colorized output, among other things.

We highly recommend you install IPython, especially if you’re working on Unix systems (where IPython excels). See the IPython installation guide for more info.

Scrapy also has support for bpython, and will try to use it where IPython is unavailable.

Through Scrapy’s settings you can configure it to use any one of ipython, bpython or the standard python shell, regardless of which are installed. This is done by setting the SCRAPY_PYTHON_SHELL environment variable; or by defining it in your scrapy.cfg:

  [settings]
  shell = bpython
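Alternatively, the same choice can be made per-session through the environment variable mentioned above; for example, in a POSIX shell:

```shell
# Select the plain Python console for this session only.
export SCRAPY_PYTHON_SHELL=python
```

Any subsequent scrapy shell invocation in that session will then use the plain Python console, even if IPython or bpython is installed.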

Launch the shell

To launch the Scrapy shell you can use the shell command like this:

  scrapy shell <url>

Where <url> is the URL you want to scrape.

shell also works for local files. This can be handy if you want to play around with a local copy of a web page. shell understands the following syntaxes for local files:

  # UNIX-style
  scrapy shell ./path/to/file.html
  scrapy shell ../other/path/to/file.html
  scrapy shell /absolute/path/to/file.html

  # File URI
  scrapy shell file:///absolute/path/to/file.html


When using relative file paths, be explicit and prepend them with ./ (or ../ when relevant). scrapy shell index.html will not work as one might expect (and this is by design, not a bug).

Because shell favors HTTP URLs over File URIs, and index.html is syntactically indistinguishable from a domain name, shell will treat index.html as a domain name and trigger a DNS lookup error:

  $ scrapy shell index.html
  [ ... scrapy shell starts ... ]
  [ ... traceback ... ]
  twisted.internet.error.DNSLookupError: DNS lookup failed:
  address 'index.html' not found: [Errno -5] No address associated with hostname.

shell will not test beforehand if a file called index.html exists in the current directory. Again, be explicit.
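If you’d rather sidestep the ambiguity altogether, you can build an explicit File URI from any local path before passing it to shell; a small sketch using only the Python standard library (the file name here is just an example):

```python
from pathlib import Path

# Resolve a relative path against the current directory and turn it
# into an unambiguous file:// URI that shell cannot mistake for a domain.
uri = Path("index.html").resolve().as_uri()
print(uri)  # e.g. file:///current/working/dir/index.html
```

The resulting URI can then be passed to scrapy shell exactly like the file:/// example above.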

Using the shell

The Scrapy shell is just a regular Python console (or IPython console if you have it available) which provides some additional shortcut functions for convenience.

Available Shortcuts

  • shelp() - print a help with the list of available objects and shortcuts
  • fetch(url[, redirect=True]) - fetch a new response from the given URL and update all related objects accordingly. You can optionally ask for HTTP 3xx redirections not to be followed by passing redirect=False
  • fetch(request) - fetch a new response from the given request and update all related objects accordingly.
  • view(response) - open the given response in your local web browser, for inspection. This will add a <base> tag to the response body in order for external links (such as images and style sheets) to display properly. Note, however, that this will create a temporary file on your computer, which won’t be removed automatically.

Available Scrapy objects

The Scrapy shell automatically creates some convenient objects from the downloaded page, like the Response object and the Selector objects (for both HTML and XML content).

Those objects are:

  • crawler - the current Crawler object.
  • spider - the Spider which is known to handle the URL, or a DefaultSpider object if there is no spider found for the current URL
  • request - a Request object of the last fetched page. You can modify this request using replace() or fetch a new request (without leaving the shell) using the fetch shortcut.
  • response - a Response object containing the last fetched page
  • settings - the current Scrapy settings

Example of shell session

Here’s an example of a typical shell session where we start by scraping the Scrapy homepage, and then proceed to scrape the Reddit front page. Finally, we modify the Reddit request method to POST and re-fetch it, getting an error. We end the session by typing Ctrl-D (in Unix systems) or Ctrl-Z in Windows.

Keep in mind that the data extracted here may not be the same when you try it, as those pages are not static and could have changed by the time you test this. The only purpose of this example is to get you familiarized with how the Scrapy shell works.

First, we launch the shell:

  scrapy shell '' --nolog

Then, the shell fetches the URL (using the Scrapy downloader) and prints the list of available objects and useful shortcuts (you’ll notice that these lines all start with the [s] prefix):

  [s] Available Scrapy objects:
  [s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
  [s]   crawler    <scrapy.crawler.Crawler object at 0x7f07395dd690>
  [s]   item       {}
  [s]   request    <GET>
  [s]   response   <200>
  [s]   settings   <scrapy.settings.Settings object at 0x7f07395dd710>
  [s]   spider     <DefaultSpider 'default' at 0x7f0735891690>
  [s] Useful shortcuts:
  [s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
  [s]   fetch(req)                  Fetch a scrapy.Request and update local objects
  [s]   shelp()                     Shell help (print this help)
  [s]   view(response)              View response in a browser

  >>>

After that, we can start playing with the objects:

  >>> response.xpath('//title/text()').get()
  'Scrapy | A Fast and Powerful Scraping and Web Crawling Framework'
  >>> fetch("")
  >>> response.xpath('//title/text()').get()
  'reddit: the front page of the internet'
  >>> request = request.replace(method="POST")
  >>> fetch(request)
  >>> response.status
  404
  >>> from pprint import pprint
  >>> pprint(response.headers)
  {'Accept-Ranges': ['bytes'],
   'Cache-Control': ['max-age=0, must-revalidate'],
   'Content-Type': ['text/html; charset=UTF-8'],
   'Date': ['Thu, 08 Dec 2016 16:21:19 GMT'],
   'Server': ['snooserv'],
   'Set-Cookie': ['loid=KqNLou0V9SKMX4qb4n;; Max-Age=63071999; Path=/; expires=Sat, 08-Dec-2018 16:21:19 GMT; secure',
                  'loidcreated=2016-12-08T16%3A21%3A19.445Z;; Max-Age=63071999; Path=/; expires=Sat, 08-Dec-2018 16:21:19 GMT; secure',
                  'loid=vi0ZVe4NkxNWdlH7r7;; Max-Age=63071999; Path=/; expires=Sat, 08-Dec-2018 16:21:19 GMT; secure',
                  'loidcreated=2016-12-08T16%3A21%3A19.459Z;; Max-Age=63071999; Path=/; expires=Sat, 08-Dec-2018 16:21:19 GMT; secure'],
   'Vary': ['accept-encoding'],
   'Via': ['1.1 varnish'],
   'X-Cache': ['MISS'],
   'X-Cache-Hits': ['0'],
   'X-Content-Type-Options': ['nosniff'],
   'X-Frame-Options': ['SAMEORIGIN'],
   'X-Moose': ['majestic'],
   'X-Served-By': ['cache-cdg8730-CDG'],
   'X-Timer': ['S1481214079.394283,VS0,VE159'],
   'X-Ua-Compatible': ['IE=edge'],
   'X-Xss-Protection': ['1; mode=block']}

Invoking the shell from spiders to inspect responses

Sometimes you want to inspect the responses that are being processed at a certain point of your spider, if only to check that the response you expect is getting there.

This can be achieved by using the scrapy.shell.inspect_response function.

Here’s an example of how you would call it from your spider:

  import scrapy


  class MySpider(scrapy.Spider):
      name = "myspider"
      start_urls = [
          "",
          "",
          "",
      ]

      def parse(self, response):
          # We want to inspect one specific response.
          if ".org" in response.url:
              from scrapy.shell import inspect_response
              inspect_response(response, self)

          # Rest of parsing code.

When you run the spider, you will get something similar to this:

  2014-01-23 17:48:31-0400 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)
  2014-01-23 17:48:31-0400 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)
  [s] Available Scrapy objects:
  [s]   crawler    <scrapy.crawler.Crawler object at 0x1e16b50>
  ...
  >>> response.url
  ''

Then, you can check if the extraction code is working:

  >>> response.xpath('//h1[@class="fn"]')
  []

Nope, it doesn’t. So you can open the response in your web browser and see if it’s the response you were expecting:

  >>> view(response)
  True

Finally you hit Ctrl-D (or Ctrl-Z in Windows) to exit the shell and resume the crawling:

  >>> ^D
  2014-01-23 17:50:03-0400 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)
  ...

Note that you can’t use the fetch shortcut here since the Scrapy engine is blocked by the shell. However, after you leave the shell, the spider will continue crawling where it stopped, as shown above.