Release notes

1.0.5 (2016-02-04)

1.0.4 (2015-12-30)

1.0.3 (2015-08-11)

1.0.2 (2015-08-06)

1.0.1 (2015-07-01)

1.0.0 (2015-06-19)

You will find a lot of new features and bugfixes in this major release. Make sure to check our updated overview to get a glance of some of the changes, along with our brushed-up tutorial.

Support for returning dictionaries in spiders

Declaring and returning Scrapy Items is no longer necessary to collect the scraped data from your spider; you can now return explicit dictionaries instead.

Classic version

    import scrapy

    class MyItem(scrapy.Item):
        url = scrapy.Field()

    class MySpider(scrapy.Spider):
        def parse(self, response):
            return MyItem(url=response.url)

New version

    import scrapy

    class MySpider(scrapy.Spider):
        def parse(self, response):
            return {'url': response.url}

Per-spider settings (GSoC 2014)

The last Google Summer of Code project accomplished an important redesign of the mechanism used for populating settings, introducing explicit priorities to override any given setting. As an extension of that goal, we included a new priority level for settings that act exclusively for a single spider, allowing them to redefine project settings.

Start using it by defining a custom_settings class variable in your spider:

    import scrapy

    class MySpider(scrapy.Spider):
        custom_settings = {
            "DOWNLOAD_DELAY": 5.0,
            "RETRY_ENABLED": False,
        }

Read more about settings population: Settings

Python Logging

Scrapy 1.0 has moved away from Twisted logging to use Python's built-in logging as the default logging system. We're maintaining backward compatibility for most of the old custom interface for calling logging functions, but you'll get warnings to switch to the Python logging API entirely.

Old version

    from scrapy import log
    log.msg('MESSAGE', log.INFO)

New version

    import logging
    logging.info('MESSAGE')

Logging with spiders remains the same, but on top of the log() method you'll have access to a custom logger created for the spider to issue log events:

    import scrapy

    class MySpider(scrapy.Spider):
        def parse(self, response):
            self.logger.info('Response received')

Read more in the logging documentation: Logging

Crawler API refactoring (GSoC 2014)

Another milestone for last Google Summer of Code was a refactoring of the internal API, seeking simpler and easier usage. Check the new core interface in: Core API

A common situation where you will face these changes is while running Scrapy from scripts. Here's a quick example of how to run a spider manually with the new API:

    from scrapy.crawler import CrawlerProcess

    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })
    process.crawl(MySpider)  # MySpider: a spider class like the ones above
    process.start()

Bear in mind this feature is still under development and its API may change until it reaches a stable status.

See more examples of scripts running Scrapy: Common Practices

Module Relocations

There's been a large rearrangement of modules to improve the general structure of Scrapy. The main changes were separating various subpackages into new projects and dissolving both scrapy.contrib and scrapy.contrib_exp into top-level packages. Backward compatibility was kept among internal relocations, while importing deprecated modules triggers warnings indicating their new place.
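For example, importing from an old location keeps working but emits a deprecation warning pointing at the new one. A minimal sketch, assuming Scrapy 1.0 is installed (the module pair is taken from the relocation tables below; the exact warning text is not guaranteed):

    import warnings

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        # Old (deprecated) location: still importable, but warns.
        from scrapy.contrib.loader import ItemLoader  # noqa: F401

    print([str(w.message) for w in caught])

    # New location: preferred going forward.
    from scrapy.loader import ItemLoader  # noqa: F401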

Full list of relocations

Outsourced packages

Note

These extensions went through some minor changes, e.g. some setting names were changed. Please check the documentation in each new repository to get familiar with the new usage.

Old location → New location
scrapy.commands.deploy → scrapyd-client (see other alternatives here: Deploying Spiders)
scrapy.contrib.djangoitem → scrapy-djangoitem
scrapy.webservice → scrapy-jsonrpc

scrapy.contrib_exp and scrapy.contrib dissolutions

Old location → New location
scrapy.contrib_exp.downloadermiddleware.decompression → scrapy.downloadermiddlewares.decompression
scrapy.contrib_exp.iterators → scrapy.utils.iterators
scrapy.contrib.downloadermiddleware → scrapy.downloadermiddlewares
scrapy.contrib.exporter → scrapy.exporters
scrapy.contrib.linkextractors → scrapy.linkextractors
scrapy.contrib.loader → scrapy.loader
scrapy.contrib.loader.processor → scrapy.loader.processors
scrapy.contrib.pipeline → scrapy.pipelines
scrapy.contrib.spidermiddleware → scrapy.spidermiddlewares
scrapy.contrib.spiders → scrapy.spiders

Each of the following moved to the corresponding module under scrapy.extensions.* (e.g. scrapy.contrib.closespider → scrapy.extensions.closespider):

  - scrapy.contrib.closespider
  - scrapy.contrib.corestats
  - scrapy.contrib.debug
  - scrapy.contrib.feedexport
  - scrapy.contrib.httpcache
  - scrapy.contrib.logstats
  - scrapy.contrib.memdebug
  - scrapy.contrib.memusage
  - scrapy.contrib.spiderstate
  - scrapy.contrib.statsmailer
  - scrapy.contrib.throttle

Plural renames and Modules unification

Old location → New location
scrapy.command → scrapy.commands
scrapy.dupefilter → scrapy.dupefilters
scrapy.linkextractor → scrapy.linkextractors
scrapy.spider → scrapy.spiders
scrapy.squeue → scrapy.squeues
scrapy.statscol → scrapy.statscollectors
scrapy.utils.decorator → scrapy.utils.decorators

Class renames

Old location → New location
scrapy.spidermanager.SpiderManager → scrapy.spiderloader.SpiderLoader

Settings renames

Old location → New location
SPIDER_MANAGER_CLASS → SPIDER_LOADER_CLASS

Changelog

New Features and Enhancements

0.24.6 (2015-04-20)

0.24.5 (2015-02-25)

0.24.4 (2014-08-09)

0.24.3 (2014-08-09)

0.24.2 (2014-07-08)

  • Use a mutable mapping to proxy deprecated settings.overrides and settings.defaults attribute (commit e5e8133)
  • there is no support for Python 3 yet (commit 3cd6146)
  • Update the Python compatible version set in Debian packages (commit fa5d76b)
  • DOC fix formatting in release notes (commit c6a9e20)

0.24.1 (2014-06-27)

  • Fix deprecated CrawlerSettings and increase backwards compatibility with .defaults attribute (commit 8e3f20a)

0.24.0 (2014-06-26)

Enhancements

Bugfixes

  • Encode unicode URL value when creating Links in RegexLinkExtractor (issue 561)
  • Ignore None values in ItemLoader processors (issue 556)
  • Fix link text when there is an inner tag in SGMLLinkExtractor and HtmlParserLinkExtractor (issue 485, issue 574)
  • Fix wrong checks on subclassing of deprecated classes (issue 581, issue 584)
  • Handle errors caused by inspect.stack() failures (issue 582)
  • Fix a reference to a non-existent engine attribute (issue 593, issue 594)
  • Fix dynamic itemclass example usage of type() (issue 603)
  • Use lucasdemarchi/codespell to fix typos (issue 628)
  • Fix default value of attrs argument in SgmlLinkExtractor to be tuple (issue 661)
  • Fix XXE flaw in sitemap reader (issue 676)
  • Fix engine to support filtered start requests (issue 707)
  • Fix offsite middleware case on urls with no hostnames (issue 745)
  • Testsuite doesn’t require PIL anymore (issue 585)

0.22.2 (released 2014-02-14)

0.22.1 (released 2014-02-08)

  • localhost666 can resolve under certain circumstances (commit 2ec2279)
  • test inspect.stack failure (commit cc3eda3)
  • Handle cases when inspect.stack() fails (commit 8cb44f9)
  • Fix wrong checks on subclassing of deprecated classes. closes #581 (commit 46d98d6)
  • Docs: 4-space indent for final spider example (commit 13846de)
  • Fix HtmlParserLinkExtractor and tests after #485 merge (commit 368a946)
  • BaseSgmlLinkExtractor: Fixed the missing space when the link has an inner tag (commit b566388)
  • BaseSgmlLinkExtractor: Added unit test of a link with an inner tag (commit c1cb418)
  • BaseSgmlLinkExtractor: Fixed unknown_endtag() so that it only set current_link=None when the end tag match the opening tag (commit 7e4d627)
  • Fix tests for Travis-CI build (commit 76c7e20)
  • replace unencodeable codepoints with html entities. fixes #562 and #285 (commit 5f87b17)
  • RegexLinkExtractor: encode URL unicode value when creating Links (commit d0ee545)
  • Updated the tutorial crawl output with latest output. (commit 8da65de)
  • Updated shell docs with the crawler reference and fixed the actual shell output. (commit 875b9ab)
  • PEP8 minor edits. (commit f89efaf)
  • Expose current crawler in the scrapy shell. (commit 5349cec)
  • Unused re import and PEP8 minor edits. (commit 387f414)
  • Ignore None’s values when using the ItemLoader. (commit 0632546)
  • DOC Fixed HTTPCACHE_STORAGE typo in the default value which is now Filesystem instead Dbm. (commit cde9a8c)
  • show ubuntu setup instructions as literal code (commit fb5c9c5)
  • Update Ubuntu installation instructions (commit 70fb105)
  • Merge pull request #550 from stray-leone/patch-1 (commit 6f70b6a)
  • modify the version of scrapy ubuntu package (commit 725900d)
  • fix 0.22.0 release date (commit af0219a)
  • fix typos in news.rst and remove (not released yet) header (commit b7f58f4)

0.22.0 (released 2014-01-17)

Enhancements

Fixes

  • Update Selector class imports in CrawlSpider template (issue 484)
  • Fix unexistent reference to engine.slots (issue 464)
  • Do not try to call body_as_unicode() on a non-TextResponse instance (issue 462)
  • Warn when subclassing XPathItemLoader, previously it only warned on instantiation. (issue 523)
  • Warn when subclassing XPathSelector, previously it only warned on instantiation. (issue 537)
  • Multiple fixes to memory stats (issue 531, issue 530, issue 529)
  • Fix overriding url in FormRequest.from_response() (issue 507)
  • Fix tests runner under pip 1.5 (issue 513)
  • Fix logging error when spider name is unicode (issue 479)

0.20.2 (released 2013-12-09)

0.20.1 (released 2013-11-28)

  • include_package_data is required to build wheels from published sources (commit 5ba1ad5)
  • process_parallel was leaking the failures on its internal deferreds. closes #458 (commit 419a780)

0.20.0 (released 2013-11-08)

Enhancements

  • New Selector API including CSS selectors (issue 395 and issue 426); see the sketch after this list,
  • Request/Response url/body attributes are now immutable (modifying them had been deprecated for a long time)
  • ITEM_PIPELINES is now defined as a dict (instead of a list)
  • Sitemap spider can fetch alternate URLs (issue 360)
  • Selector.remove_namespaces() now removes namespaces from element attributes. (issue 416)
  • Paved the road for Python 3.3+ (issue 435, issue 436, issue 431, issue 452)
  • New item exporter using native python types with nesting support (issue 366)
  • Tune HTTP1.1 pool size so it matches concurrency defined by settings (commit b43b5f575)
  • scrapy.mail.MailSender now can connect over TLS or upgrade using STARTTLS (issue 327)
  • New FilesPipeline with functionality factored out from ImagesPipeline (issue 370, issue 409)
  • Recommend Pillow instead of PIL for image handling (issue 317)
  • Added debian packages for Ubuntu quantal and raring (commit 86230c0)
  • Mock server (used for tests) can listen for HTTPS requests (issue 410)
  • Remove multi spider support from multiple core components (issue 422, issue 421, issue 420, issue 419, issue 423, issue 418)
  • Travis-CI now tests Scrapy changes against development versions of w3lib and queuelib python packages.
  • Add pypy 2.1 to continuous integration tests (commit ecfa7431)
  • Pylinted, pep8 and removed old-style exceptions from source (issue 430, issue 432)
  • Use importlib for parametric imports (issue 445)
  • Handle a regression introduced in Python 2.7.5 that affects XmlItemExporter (issue 372)
  • Bugfix crawling shutdown on SIGINT (issue 450)
  • Do not submit reset type inputs in FormRequest.from_response (commit b326b87)
  • Do not silence download errors when request errback raises an exception (commit 684cfc0)
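As referenced above, a minimal sketch of the CSS selector support (constructing a Selector from raw text for illustration; extract() is the API of this era):

    from scrapy.selector import Selector

    html = '<html><body><a href="http://example.com">link</a></body></html>'
    sel = Selector(text=html)
    # CSS selectors work alongside XPath:
    print(sel.css('a::attr(href)').extract())  # ['http://example.com']
    print(sel.xpath('//a/text()').extract())   # ['link']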

Bugfixes

Other

  • Dropped Python 2.6 support (issue 448)
  • Add cssselect python package as install dependency
  • Drop libxml2 and the multi selector backend support; lxml is required from now on.
  • Minimum Twisted version increased to 10.0.0, dropped Twisted 8.0 support.
  • Running test suite now requires mock python library (issue 390)

Thanks

Thanks to everyone who contributed to this release!

List of contributors sorted by number of commits:

  1. 69 Daniel Graña <dangra@...>
  2. 37 Pablo Hoffman <pablo@...>
  3. 13 Mikhail Korobov <kmike84@...>
  4. 9 Alex Cepoi <alex.cepoi@...>
  5. 9 alexanderlukanin13 <alexander.lukanin.13@...>
  6. 8 Rolando Espinoza La fuente <darkrho@...>
  7. 8 Lukasz Biedrycki <lukasz.biedrycki@...>
  8. 6 Nicolas Ramirez <nramirez.uy@...>
  9. 3 Paul Tremberth <paul.tremberth@...>
  10. 2 Martin Olveyra <molveyra@...>
  11. 2 Stefan <misc@...>
  12. 2 Rolando Espinoza <darkrho@...>
  13. 2 Loren Davie <loren@...>
  14. 2 irgmedeiros <irgmedeiros@...>
  15. 1 Stefan Koch <taikano@...>
  16. 1 Stefan <cct@...>
  17. 1 scraperdragon <dragon@...>
  18. 1 Kumara Tharmalingam <ktharmal@...>
  19. 1 Francesco Piccinno <stack.box@...>
  20. 1 Marcos Campal <duendex@...>
  21. 1 Dragon Dave <dragon@...>
  22. 1 Capi Etheriel <barraponto@...>
  23. 1 cacovsky <amarquesferraz@...>
  24. 1 Berend Iwema <berend@...>

0.18.4 (released 2013-10-10)

  • IPython refuses to update the namespace. fix #396 (commit 3d32c4f)
  • Fix AlreadyCalledError replacing a request in shell command. closes #407 (commit b1d8919)
  • Fix start_requests laziness and early hangs (commit 89faf52)

0.18.3 (released 2013-10-03)

0.18.2 (released 2013-09-03)

  • Backport scrapy check command fixes and backward compatible multi crawler process (issue 339)

0.18.1 (released 2013-08-27)

0.18.0 (released 2013-08-09)

  • Lots of improvements to the testsuite run using Tox, including a way to test on PyPy
  • Handle GET parameters for AJAX crawleable urls (commit 3fe2a32)
  • Use lxml recover option to parse sitemaps (issue 347)
  • Bugfix cookie merging by hostname and not by netloc (issue 352)
  • Support disabling HttpCompressionMiddleware using a flag setting (issue 359)
  • Support xml namespaces using iternodes parser in XMLFeedSpider (issue 12)
  • Support dont_cache request meta flag (issue 19)
  • Bugfix scrapy.utils.gz.gunzip broken by changes in python 2.7.4 (commit 4dc76e)
  • Bugfix url encoding on SgmlLinkExtractor (issue 24)
  • Bugfix TakeFirst processor shouldn’t discard zero (0) value (issue 59)
  • Support nested items in xml exporter (issue 66)
  • Improve cookies handling performance (issue 77)
  • Log dupe filtered requests once (issue 105)
  • Split redirection middleware into status and meta based middlewares (issue 78)
  • Use HTTP1.1 as default downloader handler (issue 109 and issue 318)
  • Support xpath form selection on FormRequest.from_response (issue 185)
  • Bugfix unicode decoding error on SgmlLinkExtractor (issue 199)
  • Bugfix signal dispatching on PyPy interpreter (issue 205)
  • Improve request delay and concurrency handling (issue 206)
  • Add RFC2616 cache policy to HttpCacheMiddleware (issue 212)
  • Allow customization of messages logged by engine (issue 214)
  • Multiple improvements to DjangoItem (issue 217, issue 218, issue 221)
  • Extend Scrapy commands using setuptools entry points (issue 260)
  • Allow spider allowed_domains value to be a set/tuple (issue 261)
  • Support settings.getdict (issue 269)
  • Simplify internal scrapy.core.scraper slot handling (issue 271)
  • Added Item.copy (issue 290)
  • Collect idle downloader slots (issue 297)
  • Add ftp:// scheme downloader handler (issue 329)
  • Added downloader benchmark webserver and spider tools (see Benchmarking)
  • Moved persistent (on disk) queues to a separate project (queuelib) which scrapy now depends on
  • Add scrapy commands using external libraries (issue 260)
  • Added --pdb option to scrapy command line tool
  • Added XPathSelector.remove_namespaces(), which allows removing all namespaces from XML documents for convenience (to work with namespace-less XPaths). Documented in Selectors; see the sketch after this list.
  • Several improvements to spider contracts
  • New default middleware named MetaRefreshMiddleware that handles meta-refresh html tag redirections,
  • MetaRefreshMiddleware and RedirectMiddleware have different priorities to address #62
  • added from_crawler method to spiders
  • added system tests with mock server
  • more improvements to Mac OS compatibility (thanks Alex Cepoi)
  • several more cleanups to singletons and multi-spider support (thanks Nicolas Ramirez)
  • support custom download slots
  • added --spider option to the shell command.
  • log overridden settings when scrapy starts
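As referenced above, a minimal sketch of remove_namespaces(), written against the 0.18-era selector names (XmlXPathSelector and select(); the selector is built from a TextResponse for illustration):

    from scrapy.http import TextResponse
    from scrapy.selector import XmlXPathSelector

    response = TextResponse(
        url='http://example.com/feed.xml',
        body='<root xmlns="http://example.com/ns"><item>1</item></root>',
        encoding='utf-8')
    xxs = XmlXPathSelector(response)
    xxs.remove_namespaces()  # strip all namespaces in place
    print(xxs.select('//item/text()').extract())  # ['1']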
Thanks to everyone who contributed to this release. Here is a list of contributors sorted by number of commits:
  1. 130 Pablo Hoffman <pablo@...>
  2. 97 Daniel Graña <dangra@...>
  3. 20 Nicolás Ramírez <nramirez.uy@...>
  4. 13 Mikhail Korobov <kmike84@...>
  5. 12 Pedro Faustino <pedrobandim@...>
  6. 11 Steven Almeroth <sroth77@...>
  7. 5 Rolando Espinoza La fuente <darkrho@...>
  8. 4 Michal Danilak <mimino.coder@...>
  9. 4 Alex Cepoi <alex.cepoi@...>
  10. 4 Alexandr N Zamaraev (aka tonal) <tonal@...>
  11. 3 paul <paul.tremberth@...>
  12. 3 Martin Olveyra <molveyra@...>
  13. 3 Jordi Llonch <llonchj@...>
  14. 3 arijitchakraborty <myself.arijit@...>
  15. 2 Shane Evans <shane.evans@...>
  16. 2 joehillen <joehillen@...>
  17. 2 Hart <HartSimha@...>
  18. 2 Dan <ellisd23@...>
  19. 1 Zuhao Wan <wanzuhao@...>
  20. 1 whodatninja <blake@...>
  21. 1 vkrest <v.krestiannykov@...>
  22. 1 tpeng <pengtaoo@...>
  23. 1 Tom Mortimer-Jones <tom@...>
  24. 1 Rocio Aramberri <roschegel@...>
  25. 1 Pedro <pedro@...>
  26. 1 notsobad <wangxiaohugg@...>
  27. 1 Natan L <kuyanatan.nlao@...>
  28. 1 Mark Grey <mark.grey@...>
  29. 1 Luan <luanpab@...>
  30. 1 Libor Nenadál <libor.nenadal@...>
  31. 1 Juan M Uys <opyate@...>
  32. 1 Jonas Brunsgaard <jonas.brunsgaard@...>
  33. 1 Ilya Baryshev <baryshev@...>
  34. 1 Hasnain Lakhani <m.hasnain.lakhani@...>
  35. 1 Emanuel Schorsch <emschorsch@...>
  36. 1 Chris Tilden <chris.tilden@...>
  37. 1 Capi Etheriel <barraponto@...>
  38. 1 cacovsky <amarquesferraz@...>
  39. 1 Berend Iwema <berend@...>

0.16.5 (released 2013-05-30)

  • obey request method when scrapy deploy is redirected to a new endpoint (commit 8c4fcee)
  • fix inaccurate downloader middleware documentation. refs #280 (commit 40667cb)
  • doc: remove links to diveintopython.org, which is no longer available. closes #246 (commit bd58bfa)
  • Find form nodes in invalid html5 documents (commit e3d6945)
  • Fix typo labeling attrs type bool instead of list (commit a274276)

0.16.4 (released 2013-01-23)

  • fixes spelling errors in documentation (commit 6d2b3aa)
  • add doc about disabling an extension. refs #132 (commit c90de33)
  • Fixed error message formatting. log.err() doesn’t support cool formatting and when error occurred, the message was: “ERROR: Error processing %(item)s” (commit c16150c)
  • lint and improve images pipeline error logging (commit 56b45fc)
  • fixed doc typos (commit 243be84)
  • add documentation topics: Broad Crawls & Common Practices (commit 1fbb715)
  • fix bug in scrapy parse command when spider is not specified explicitly. closes #209 (commit c72e682)
  • Update docs/topics/commands.rst (commit 28eac7a)

0.16.3 (released 2012-12-07)

0.16.2 (released 2012-11-09)

0.16.1 (released 2012-10-26)

  • fixed LogStats extension, which got broken after a wrong merge before the 0.16 release (commit 8c780fd)
  • better backwards compatibility for scrapy.conf.settings (commit 3403089)
  • extended documentation on how to access crawler stats from extensions (commit c4da0b5)
  • removed .hgtags (no longer needed now that scrapy uses git) (commit d52c188)
  • fix dashes under rst headers (commit fa4f7f9)
  • set release date for 0.16.0 in news (commit e292246)

0.16.0 (released 2012-10-18)

Scrapy changes:

  • added Spiders Contracts, a mechanism for testing spiders in a formal/reproducible way
  • added options -o and -t to the runspider command
  • documented the AutoThrottle extension and added it to extensions installed by default. You still need to enable it with AUTOTHROTTLE_ENABLED
  • major Stats Collection refactoring: removed separation of global/per-spider stats, removed stats-related signals (stats_spider_opened, etc). Stats are much simpler now, backwards compatibility is kept on the Stats Collector API and signals.
  • added process_start_requests() method to spider middlewares
  • dropped Signals singleton. Signals should now be accessed through the Crawler.signals attribute. See the signals documentation for more info.
  • dropped Stats Collector singleton. Stats can now be accessed through the Crawler.stats attribute. See the stats collection documentation for more info. (A short sketch of both follows this list.)
  • documented the Core API
  • lxml is now the default selectors backend instead of libxml2
  • ported FormRequest.from_response() to use lxml instead of ClientForm
  • removed modules: scrapy.xlib.BeautifulSoup and scrapy.xlib.ClientForm
  • SitemapSpider: added support for sitemap urls ending in .xml and .xml.gz, even if they advertise a wrong content type (commit 10ed28b)
  • StackTraceDump extension: also dump trackref live references (commit fe2ce93)
  • nested items now fully supported in JSON and JSONLines exporters
  • added cookiejar Request meta key to support multiple cookie sessions per spider
  • decoupled encoding detection code to w3lib.encoding, and ported Scrapy code to use that module
  • dropped support for Python 2.5. See http://blog.scrapinghub.com/2012/02/27/scrapy-0-15-dropping-support-for-python-2-5/
  • dropped support for Twisted 2.5
  • added REFERER_ENABLED setting, to control referer middleware
  • changed default user agent to: Scrapy/VERSION (+http://scrapy.org)
  • removed (undocumented) HTMLImageLinkExtractor class from scrapy.contrib.linkextractors.image
  • removed per-spider settings (to be replaced by instantiating multiple crawler objects)
  • USER_AGENT spider attribute will no longer work, use user_agent attribute instead
  • DOWNLOAD_TIMEOUT spider attribute will no longer work, use download_timeout attribute instead
  • removed ENCODING_ALIASES setting, as encoding auto-detection has been moved to the w3lib library
  • promoted DjangoItem to main contrib
  • LogFormatter methods now return dicts (instead of strings) to support lazy formatting (issue 164, commit dcef7b0)
  • downloader handlers (DOWNLOAD_HANDLERS setting) now receive settings as the first argument of the constructor
  • replaced memory usage accounting with the (more portable) resource module, removed scrapy.utils.memory module
  • removed signal: scrapy.mail.mail_sent
  • removed TRACK_REFS setting, now trackrefs is always enabled
  • DBM is now the default storage backend for HTTP cache middleware
  • number of log messages (per level) are now tracked through Scrapy stats (stat name: log_count/LEVEL)
  • number of received responses is now tracked through Scrapy stats (stat name: response_received_count)
  • removed scrapy.log.started attribute
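As referenced in the list above, a minimal sketch of reaching signals and stats through the crawler. The extension class and stat key are illustrative:

    from scrapy import signals

    class MyExtension(object):

        def __init__(self, crawler):
            self.crawler = crawler
            # Signals are reached through crawler.signals ...
            crawler.signals.connect(self.spider_closed,
                                    signal=signals.spider_closed)

        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler)

        def spider_closed(self, spider):
            # ... and stats through crawler.stats.
            self.crawler.stats.inc_value('myextension/spiders_closed')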

0.14.4

0.14.3

  • forgot to include pydispatch license. #118 (commit fd85f9c)
  • include egg files used by testsuite in source distribution. #118 (commit c897793)
  • update docstring in project template to avoid confusion with genspider command, which may be considered as an advanced feature. refs #107 (commit 2548dcc)
  • added note to docs/topics/firebug.rst about google directory being shut down (commit 668e352)
  • dont discard slot when empty, just save in another dict in order to recycle if needed again. (commit 8e9f607)
  • do not fail handling unicode xpaths in libxml2 backed selectors (commit b830e95)
  • fixed minor mistake in Request objects documentation (commit bf3c9ee)
  • fixed minor defect in link extractors documentation (commit ba14f38)
  • removed some obsolete remaining code related to sqlite support in scrapy (commit 0665175)

0.14.2

  • move buffer pointing to start of file before computing checksum. refs #92 (commit 6a5bef2)
  • Compute image checksum before persisting images. closes #92 (commit 9817df1)
  • remove leaking references in cached failures (commit 673a120)
  • fixed bug in MemoryUsage extension: get_engine_status() takes exactly 1 argument (0 given) (commit 11133e9)
  • fixed struct.error on http compression middleware. closes #87 (commit 1423140)
  • ajax crawling wasn’t expanding for unicode urls (commit 0de3fb4)
  • Catch start_requests iterator errors. refs #83 (commit 454a21d)
  • Speed-up libxml2 XPathSelector (commit 2fbd662)
  • updated versioning doc according to recent changes (commit 0a070f5)
  • scrapyd: fixed documentation link (commit 2b4e4c3)
  • extras/makedeb.py: no longer obtaining version from git (commit caffe0e)

0.14.1

  • extras/makedeb.py: no longer obtaining version from git (commit caffe0e)
  • bumped version to 0.14.1 (commit 6cb9e1c)
  • fixed reference to tutorial directory (commit 4b86bd6)
  • doc: removed duplicated callback argument from Request.replace() (commit 1aeccdd)
  • fixed formatting of scrapyd doc (commit 8bf19e6)
  • Dump stacks for all running threads and fix engine status dumped by StackTraceDump extension (commit 14a8e6e)
  • added comment about why we disable ssl on boto images upload (commit 5223575)
  • SSL handshaking hangs when doing too many parallel connections to S3 (commit 63d583d)
  • change tutorial to follow changes on dmoz site (commit bcb3198)
  • Avoid _disconnectedDeferred AttributeError exception in Twisted>=11.1.0 (commit 98f3f87)
  • allow spider to set autothrottle max concurrency (commit 175a4b5)

0.14

New features and settings

  • Support for AJAX crawlable urls

  • New persistent scheduler that stores requests on disk, allowing crawls to be suspended and resumed (r2737)

  • added -o option to scrapy crawl, a shortcut for dumping scraped items into a file (or standard output using -)

  • Added support for passing custom settings to Scrapyd schedule.json api (r2779, r2783)

  • New ChunkedTransferMiddleware (enabled by default) to support chunked transfer encoding (r2769)

  • Add boto 2.0 support for S3 downloader handler (r2763)

  • Added marshal to formats supported by feed exports (r2744)

  • In request errbacks, offending requests are now received in the failure.request attribute (r2738); see the sketch after this list

  • Big downloader refactoring to support per domain/ip concurrency limits (r2732)

  • Added builtin caching DNS resolver (r2728)

  • Moved Amazon AWS-related components/extensions (SQS spider queue, SimpleDB stats collector) to a separate project: scaws (r2706, r2714)

  • Moved spider queues to scrapyd: scrapy.spiderqueue -> scrapyd.spiderqueue (r2708)

  • Moved sqlite utils to scrapyd: scrapy.utils.sqlite -> scrapyd.sqlite (r2781)

  • Real support for returning iterators on start_requests() method. The iterator is now consumed during the crawl when the spider is getting idle (r2704)

  • Added REDIRECT_ENABLED setting to quickly enable/disable the redirect middleware (r2697)

  • Added RETRY_ENABLED setting to quickly enable/disable the retry middleware (r2694)

  • Added CloseSpider exception to manually close spiders (r2691)

  • Improved encoding detection by adding support for HTML5 meta charset declaration (r2690)

  • Refactored close spider behavior to wait for all downloads to finish and be processed by spiders, before closing the spider (r2688)

  • Added SitemapSpider (see documentation in Spiders page) (r2658)

  • Added LogStats extension for periodically logging basic stats (like crawled pages and scraped items) (r2657)

  • Make handling of gzipped responses more robust (#319, r2643). Now Scrapy will try and decompress as much as possible from a gzipped response, instead of failing with an IOError.

  • Simplified MemoryDebugger extension to use stats for dumping memory debugging info (r2639)

  • Added new command to edit spiders: scrapy edit (r2636) and -e flag to genspider command that uses it (r2653)

  • Changed default representation of items to pretty-printed dicts. (r2631). This improves default logging by making log more readable in the default case, for both Scraped and Dropped lines.

  • Added spider_error signal (r2628)

  • Added COOKIES_ENABLED setting (r2625)

  • Stats are now dumped to Scrapy log (default value of STATS_DUMP setting has been changed to True). This is to make Scrapy users more aware of Scrapy stats and the data that is collected there.

  • Added support for dynamically adjusting download delay and maximum concurrent requests (r2599)

  • Added new DBM HTTP cache storage backend (r2576)

  • Added listjobs.json API to Scrapyd (r2571)

  • CsvItemExporter: added join_multivalued parameter (r2578)

  • Added namespace support to xmliter_lxml (r2552)

  • Improved cookies middleware by making COOKIES_DEBUG nicer and documenting it (r2579)

  • Several improvements to Scrapyd and Link extractors
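As referenced above, a minimal sketch of reading the offending request inside an errback (written against the modern scrapy.Spider API for brevity; the spider name and URL are illustrative):

    import scrapy

    class ErrbackSpider(scrapy.Spider):
        name = 'errback_example'

        def start_requests(self):
            yield scrapy.Request('http://example.com/might-fail',
                                 callback=self.parse,
                                 errback=self.on_error)

        def parse(self, response):
            pass

        def on_error(self, failure):
            # The request that caused the failure is attached to it:
            request = failure.request
            self.logger.error('Request failed: %s', request.url)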

Code rearranged and removed

  • Merged item passed and item scraped concepts, as they have often proved confusing in the past. This means: (r2630)
    • original item_scraped signal was removed
    • original item_passed signal was renamed to item_scraped
    • old log lines Scraped Item… were removed
    • old log lines Passed Item… were renamed to Scraped Item… lines and downgraded to DEBUG level
  • Reduced Scrapy codebase by stripping part of Scrapy code into two new libraries:
    • w3lib (several functions from scrapy.utils.{http,markup,multipart,response,url}, done in r2584)
    • scrapely (was scrapy.contrib.ibl, done in r2586)
  • Removed unused function: scrapy.utils.request.request_info() (r2577)

  • Removed googledir project from examples/googledir. There’s now a new example project called dirbot available on github: https://github.com/scrapy/dirbot

  • Removed support for default field values in Scrapy items (r2616)

  • Removed experimental crawlspider v2 (r2632)

  • Removed scheduler middleware to simplify architecture. Duplicates filter is now done in the scheduler itself, using the same dupe filtering class as before (DUPEFILTER_CLASS setting) (r2640)

  • Removed support for passing urls to scrapy crawl command (use scrapy parse instead) (r2704)

  • Removed deprecated Execution Queue (r2704)

  • Removed (undocumented) spider context extension (from scrapy.contrib.spidercontext) (r2780)

  • removed CONCURRENT_SPIDERS setting (use scrapyd maxproc instead) (r2789)

  • Renamed attributes of core components: downloader.sites -> downloader.slots, scraper.sites -> scraper.slots (r2717, r2718)

  • Renamed setting CLOSESPIDER_ITEMPASSED to CLOSESPIDER_ITEMCOUNT (r2655). Backwards compatibility kept.

0.12

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features and improvements

  • Passed item is now sent in the item argument of the item_passed signal (#273)
  • Added verbose option to scrapy version command, useful for bug reports (#298)
  • HTTP cache now stored by default in the project data dir (#279)
  • Added project data storage directory (#276, #277)
  • Documented file structure of Scrapy projects (see command-line tool doc)
  • New lxml backend for XPath selectors (#147)
  • Per-spider settings (#245)
  • Support exit codes to signal errors in Scrapy commands (#248)
  • Added -c argument to scrapy shell command
  • Made libxml2 optional (#260)
  • New deploy command (#261)
  • Added CLOSESPIDER_PAGECOUNT setting (#253)
  • Added CLOSESPIDER_ERRORCOUNT setting (#254)

Scrapyd changes

  • Scrapyd now uses one process per spider
  • It stores one log file per spider run, and rotates them keeping the latest 5 logs per spider (by default)
  • A minimal web ui was added, available at http://localhost:6800 by default
  • There is now a scrapy server command to start a Scrapyd server of the current project

Changes to settings

  • added HTTPCACHE_ENABLED setting (False by default) to enable HTTP cache middleware
  • changed HTTPCACHE_EXPIRATION_SECS semantics: now zero means “never expire”.
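A minimal sketch of these two settings in a project's settings.py:

    # Enable the HTTP cache middleware (disabled by default).
    HTTPCACHE_ENABLED = True

    # Zero now means "never expire".
    HTTPCACHE_EXPIRATION_SECS = 0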

Deprecated/obsoleted functionality

  • Deprecated runserver command in favor of server command which starts a Scrapyd server. See also: Scrapyd changes
  • Deprecated queue command in favor of using Scrapyd schedule.json API. See also: Scrapyd changes
  • Removed the LxmlItemLoader (experimental contrib which never graduated to main contrib)

0.10

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features and improvements

  • New Scrapy service called scrapyd for deploying Scrapy crawlers in production (#218) (documentation available)
  • Simplified Images pipeline usage, which no longer requires subclassing your own images pipeline (#217)
  • Scrapy shell now shows the Scrapy log by default (#206)
  • Refactored the execution queue into common base code with pluggable backends called “spider queues” (#220)
  • New persistent spider queue (based on SQLite) (#198), available by default, which allows starting Scrapy in server mode and then scheduling spiders to run.
  • Added documentation for Scrapy command-line tool and all its available sub-commands. (documentation available)
  • Feed exporters with pluggable backends (#197) (documentation available)
  • Deferred signals (#193)
  • Added two new methods to item pipelines, open_spider() and close_spider(), with deferred support (#195)
  • Support for overriding default request headers per spider (#181)
  • Replaced default Spider Manager with one with similar functionality but not depending on Twisted Plugins (#186)
  • Split Debian package into two packages - the library and the service (#187)
  • Scrapy log refactoring (#188)
  • New extension for keeping persistent spider contexts among different runs (#203)
  • Added dont_redirect request.meta key for avoiding redirects (#233)
  • Added dont_retry request.meta key for avoiding retries (#234)

Command-line tool changes

  • New scrapy command which replaces the old scrapy-ctl.py (#199)
    • there is only one global scrapy command now, instead of one scrapy-ctl.py per project
    • Added scrapy.bat script for running more conveniently from Windows
  • Added bash completion to command-line tool (#210)
  • Renamed command start to runserver (#209)

API changes

  • url and body attributes of Request objects are now read-only (#230)

  • Request.copy() and Request.replace() now also copies their callback and errback attributes (#231)

  • Removed UrlFilterMiddleware from scrapy.contrib (already disabled by default)

  • Offsite middleware doesn’t filter out any request coming from a spider that doesn’t have an allowed_domains attribute (#225)

  • Removed Spider Manager load() method. Now spiders are loaded in the constructor itself.

  • Changes to Scrapy Manager (now called “Crawler”):

    • scrapy.core.manager.ScrapyManager class renamed to scrapy.crawler.Crawler
    • scrapy.core.manager.scrapymanager singleton moved to scrapy.project.crawler
  • Moved module: scrapy.contrib.spidermanager to scrapy.spidermanager

  • Spider Manager singleton moved from scrapy.spider.spiders to the spiders attribute of the scrapy.project.crawler singleton.

  • moved Stats Collector classes: (#204)

    • scrapy.stats.collector.StatsCollector to scrapy.statscol.StatsCollector
    • scrapy.stats.collector.SimpledbStatsCollector to scrapy.contrib.statscol.SimpledbStatsCollector
  • default per-command settings are now specified in the default_settings attribute of command object class (#201)

  • changed arguments of Item pipeline process_item() method from (spider, item) to (item, spider); see the sketch after this list

    • backwards compatibility kept (with deprecation warning)
  • moved scrapy.core.signals module to scrapy.signals
    • backwards compatibility kept (with deprecation warning)
  • moved scrapy.core.exceptions module to scrapy.exceptions
    • backwards compatibility kept (with deprecation warning)
  • added handles_request() class method to BaseSpider

  • dropped scrapy.log.exc() function (use scrapy.log.err() instead)

  • dropped component argument of scrapy.log.msg() function

  • dropped scrapy.log.log_level attribute

  • Added from_settings() class methods to Spider Manager, and Item Pipeline Manager
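As referenced above, a minimal sketch of an item pipeline using the new (item, spider) argument order (the pipeline name is illustrative):

    class ExamplePipeline(object):

        def process_item(self, item, spider):
            # New argument order: the item comes first, then the spider.
            return item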

Changes to settings

  • Added HTTPCACHE_IGNORE_SCHEMES setting to ignore certain schemes on HttpCacheMiddleware (#225)
  • Added SPIDER_QUEUE_CLASS setting which defines the spider queue to use (#220)
  • Added KEEP_ALIVE setting (#220)
  • Removed SERVICE_QUEUE setting (#220)
  • Removed COMMANDS_SETTINGS_MODULE setting (#201)
  • Renamed REQUEST_HANDLERS to DOWNLOAD_HANDLERS and made download handlers classes (instead of functions)

0.9

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features and improvements

  • Added SMTP-AUTH support to scrapy.mail
  • New settings added: MAIL_USER, MAIL_PASS (r2065 | #149)
  • Added new scrapy-ctl view command - to view a URL in the browser, as seen by Scrapy (r2039)
  • Added web service for controlling Scrapy process (this also deprecates the web console) (r2053 | #167)
  • Support for running Scrapy as a service, for production systems (r1988, r2054, r2055, r2056, r2057 | #168)
  • Added wrapper induction library (documentation only available in source code for now). (r2011)
  • Simplified and improved response encoding support (r1961, r1969)
  • Added LOG_ENCODING setting (r1956, documentation available)
  • Added RANDOMIZE_DOWNLOAD_DELAY setting (enabled by default) (r1923, doc available)
  • MailSender is no longer IO-blocking (r1955 | #146)
  • Link extractors and the new CrawlSpider now handle relative base tag urls (r1960 | #148)
  • Several improvements to Item Loaders and processors (r2022, r2023, r2024, r2025, r2026, r2027, r2028, r2029, r2030)
  • Added support for adding variables to telnet console (r2047 | #165)
  • Support for requests without callbacks (r2050 | #166)

API changes

  • Change Spider.domain_name to Spider.name (SEP-012, r1975)
  • Response.encoding is now the detected encoding (r1961)
  • HttpErrorMiddleware now returns None or raises an exception (r2006 | #157)
  • scrapy.command modules relocation (r2035, r2036, r2037)
  • Added ExecutionQueue for feeding spiders to scrape (r2034)
  • Removed ExecutionEngine singleton (r2039)
  • Ported S3ImagesStore (images pipeline) to use boto and threads (r2033)
  • Moved module: scrapy.management.telnet to scrapy.telnet (r2047)

Changes to default settings

  • Changed default SCHEDULER_ORDER to DFO (r1939)

0.8

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features

  • Added DEFAULT_RESPONSE_ENCODING setting (r1809)
  • Added dont_click argument to FormRequest.from_response() method (r1813, r1816)
  • Added clickdata argument to FormRequest.from_response() method (r1802, r1803); see the sketch after this list
  • Added support for HTTP proxies (HttpProxyMiddleware) (r1781, r1785)
  • Offsite spider middleware now logs messages when filtering out requests (r1841)
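As referenced above, a minimal sketch of the dont_click and clickdata arguments (written against the modern spider API for brevity; the URL, form fields and button name are illustrative):

    import scrapy
    from scrapy.http import FormRequest

    class LoginSpider(scrapy.Spider):
        name = 'login_example'
        start_urls = ['http://example.com/login']

        def parse(self, response):
            return FormRequest.from_response(
                response,
                formdata={'user': 'john', 'pass': 'secret'},
                clickdata={'name': 'login_button'},  # which clickable element to "click"
                dont_click=False,                    # True submits without clicking anything
                callback=self.after_login,
            )

        def after_login(self, response):
            self.logger.info('Logged in: %s', response.url)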

Backwards-incompatible changes

  • Changed scrapy.utils.response.get_meta_refresh() signature (r1804)

  • Removed deprecated scrapy.item.ScrapedItem class - use scrapy.item.Item instead (r1838)

  • Removed deprecated scrapy.xpath module - use scrapy.selector instead. (r1836)

  • Removed deprecated core.signals.domain_open signal - use core.signals.domain_opened instead (r1822)

  • log.msg() now receives a spider argument (r1822)

    • Old domain argument has been deprecated and will be removed in 0.9. For spiders, you should always use the spider argument and pass spider references. If you really want to pass a string, use the component argument instead.
  • Changed core signals domain_opened, domain_closed, domain_idle

  • Changed Item pipeline to use spiders instead of domains

    • The domain argument of process_item() item pipeline method was changed to spider, the new signature is: process_item(spider, item) (r1827 | #105)
    • To quickly port your code (to work with Scrapy 0.8) just use spider.domain_name where you previously used domain.
  • Changed Stats API to use spiders instead of domains (r1849 | #113)
    • StatsCollector was changed to receive spider references (instead of domains) in its methods (set_value, inc_value, etc).
    • added StatsCollector.iter_spider_stats() method
    • removed StatsCollector.list_domains() method
    • Also, Stats signals were renamed and now pass around spider references (instead of domains). Here’s a summary of the changes:
    • To quickly port your code (to work with Scrapy 0.8) just use spider.domain_name where you previously used domain. spider_stats contains exactly the same data as domain_stats.
  • CloseDomain extension moved to scrapy.contrib.closespider.CloseSpider (r1833)
    • Its settings were also renamed:
      • CLOSEDOMAIN_TIMEOUT to CLOSESPIDER_TIMEOUT
      • CLOSEDOMAIN_ITEMCOUNT to CLOSESPIDER_ITEMCOUNT
  • Removed deprecated SCRAPYSETTINGS_MODULE environment variable - use SCRAPY_SETTINGS_MODULE instead (r1840)

  • Renamed setting: REQUESTS_PER_DOMAIN to CONCURRENT_REQUESTS_PER_SPIDER (r1830, r1844)

  • Renamed setting: CONCURRENT_DOMAINS to CONCURRENT_SPIDERS (r1830)

  • Refactored HTTP Cache middleware

  • HTTP Cache middleware has been heavily refactored, retaining the same functionality except for the domain sectorization, which was removed. (r1843)

  • Renamed exception: DontCloseDomain to DontCloseSpider (r1859 | #120)

  • Renamed extension: DelayedCloseDomain to SpiderCloseDelay (r1861 | #121)

  • Removed obsolete scrapy.utils.markup.remove_escape_chars function - use scrapy.utils.markup.replace_escape_chars instead (r1865)

0.7

First release of Scrapy.