Feed exports

One of the most frequently required features when implementing scrapers is being able to store the scraped data properly and, quite often, that means generating an “export file” with the scraped data (commonly called “export feed”) to be consumed by other systems.

Scrapy provides this functionality out of the box with the Feed Exports, which allows you to generate feeds with the scraped items, using multiple serialization formats and storage backends.

Serialization formats

For serializing the scraped data, the feed exports use the Item exporters. These formats are supported out of the box:

But you can also extend the supported formats through the FEED_EXPORTERS setting.

JSON

  • Value for the format key in the FEEDS setting: json

  • Exporter used: JsonItemExporter

JSON lines

  • Value for the format key in the FEEDS setting: jsonlines

  • Exporter used: JsonLinesItemExporter

CSV

  • Value for the format key in the FEEDS setting: csv

  • Exporter used: CsvItemExporter

  • To specify the columns to export and their order, use FEED_EXPORT_FIELDS. Other feed exporters can also use this option, but it is important for CSV because, unlike many other export formats, CSV uses a fixed header. See the example below.
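
For instance, a minimal FEEDS configuration for a CSV feed with an explicit column order might look like the following sketch (the file name and field names are placeholders):

FEEDS = {
    'products.csv': {
        'format': 'csv',
        # per-feed equivalent of FEED_EXPORT_FIELDS: defines the CSV header and column order
        'fields': ['name', 'price', 'url'],
    },
}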

XML

  • Value for the format key in the FEEDS setting: xml

  • Exporter used: XmlItemExporter

Pickle

  • Value for the format key in the FEEDS setting: pickle

  • Exporter used: PickleItemExporter

Marshal

  • Value for the format key in the FEEDS setting: marshal

  • Exporter used: MarshalItemExporter

Storages

When using the feed exports you define where to store the feed using one or multiple URIs (through the FEEDS setting). The feed exports support multiple storage backend types, which are defined by the URI scheme.

The storage backends supported out of the box are:

  • Local filesystem

  • FTP

  • S3 (requires botocore)

  • Google Cloud Storage (requires google-cloud-storage)

  • Standard output

Some storage backends may be unavailable if the required external libraries are not available. For example, the S3 backend is only available if the botocore library is installed.

Storage URI parameters

The storage URI can also contain parameters that get replaced when the feed is being created. These parameters are:

  • %(time)s - gets replaced by a timestamp when the feed is being created

  • %(name)s - gets replaced by the spider name

Any other named parameter gets replaced by the spider attribute of the same name. For example, %(site_id)s would get replaced by the spider.site_id attribute the moment the feed is being created.

Here are some examples to illustrate:

  • Store in FTP using one directory per spider:

    • ftp://user:password@ftp.example.com/scraping/feeds/%(name)s/%(time)s.json

  • Store in S3 using one directory per spider:

    • s3://mybucket/scraping/feeds/%(name)s/%(time)s.json

Note

Spider arguments become spider attributes, hence they can also be used as storage URI parameters.
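
For instance, a sketch of a spider carrying the site_id attribute mentioned above (the spider and attribute values are illustrative):

# myspider.py (illustrative sketch)
import scrapy

class MySpider(scrapy.Spider):
    name = 'example'
    site_id = 42  # available in storage URIs as %(site_id)s

A feed URI such as feeds/%(name)s/%(site_id)s/%(time)s.json would then have all three placeholders resolved when the feed is created.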

Storage backends

Local filesystem

The feeds are stored in the local filesystem.

  • URI scheme: file

  • Example URI: file:///tmp/export.csv

  • Required external libraries: none

Note that for the local filesystem storage (only) you can omit the scheme if you specify an absolute path like /tmp/export.csv. This only works on Unix systems though.
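
For example, a FEEDS entry writing to the local filesystem might look like this sketch:

FEEDS = {
    'file:///tmp/export.csv': {'format': 'csv'},
    # on Unix, an absolute path without a scheme also works:
    # '/tmp/export.csv': {'format': 'csv'},
}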

FTP

The feeds are stored on an FTP server.

  • URI scheme: ftp

  • Example URI: ftp://user:pass@ftp.example.com/path/to/export.csv

  • Required external libraries: none

FTP supports two different connection modes: active or passive. Scrapy uses the passive connection mode by default. To use the active connection mode instead, set the FEED_STORAGE_FTP_ACTIVE setting to True.
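
For example, a settings sketch that exports to FTP using the active connection mode (the credentials, host and path are placeholders):

FEED_STORAGE_FTP_ACTIVE = True
FEEDS = {
    'ftp://user:password@ftp.example.com/feeds/%(name)s/%(time)s.json': {
        'format': 'json',
    },
}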

This storage backend uses delayed file delivery.

S3

The feeds are stored on Amazon S3.

  • URI scheme: s3

  • Example URIs:

    • s3://mybucket/path/to/export.csv

    • s3://aws_key:aws_secret@mybucket/path/to/export.csv

  • Required external libraries: botocore >= 1.4.87

The AWS credentials can be passed as user/password in the URI, or they can be passed through the following settings:

  • AWS_ACCESS_KEY_ID

  • AWS_SECRET_ACCESS_KEY

You can also define a custom ACL and a custom endpoint for exported feeds using the following settings:

  • FEED_STORAGE_S3_ACL

  • AWS_ENDPOINT_URL
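
For example, a settings sketch for S3 delivery (the credential values, ACL and bucket are placeholders):

AWS_ACCESS_KEY_ID = 'my-access-key'
AWS_SECRET_ACCESS_KEY = 'my-secret-key'
FEED_STORAGE_S3_ACL = 'bucket-owner-full-control'  # optional custom ACL
FEEDS = {
    's3://mybucket/scraping/feeds/%(name)s/%(time)s.jl': {'format': 'jsonlines'},
}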

This storage backend uses delayed file delivery.

Google Cloud Storage (GCS)

New in version 2.3.

The feeds are stored on Google Cloud Storage.

  • URI scheme: gs

  • Example URIs:

    • gs://mybucket/path/to/export.csv

  • Required external libraries: google-cloud-storage

For more information about authentication, please refer to Google Cloud documentation.

You can set a Project ID and Access Control List (ACL) through the following settings:

  • GCS_PROJECT_ID

  • FEED_STORAGE_GCS_ACL
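
For example, a settings sketch for GCS delivery (the project ID, ACL and bucket are placeholders):

GCS_PROJECT_ID = 'my-project-id'
FEED_STORAGE_GCS_ACL = 'publicRead'  # optional ACL applied to the uploaded feed
FEEDS = {
    'gs://mybucket/feeds/%(name)s/%(time)s.csv': {'format': 'csv'},
}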

This storage backend uses delayed file delivery.

Standard output

The feeds are written to the standard output of the Scrapy process.

  • URI scheme: stdout

  • Example URI: stdout:

  • Required external libraries: none

Delayed file delivery

As indicated above, some of the described storage backends use delayed file delivery.

These storage backends do not upload items to the feed URI as those items are scraped. Instead, Scrapy writes items into a temporary local file, and only once all the file contents have been written (i.e. at the end of the crawl) is that file uploaded to the feed URI.

If you want item delivery to start earlier when using one of these storage backends, use FEED_EXPORT_BATCH_ITEM_COUNT to split the output items in multiple files, with the specified maximum item count per file. That way, as soon as a file reaches the maximum item count, that file is delivered to the feed URI, allowing item delivery to start way before the end of the crawl.

Item filtering

New in version 2.6.0.

You can filter the items that you want to allow for a particular feed by using the item_classes option in the feed options. Only items of the specified types will be added to the feed.
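
For example, a sketch of a feed that only accepts one item type (myproject.items.ProductItem is a hypothetical item class):

FEEDS = {
    'products.json': {
        'format': 'json',
        'item_classes': ['myproject.items.ProductItem'],
    },
}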

The item_classes option is implemented by the ItemFilter class, which is the default value of the item_filter feed option.

You can create your own custom filtering class by implementing ItemFilter's accepts method and accepting feed_options as a constructor argument.

For instance:

class MyCustomFilter:
    def __init__(self, feed_options):
        self.feed_options = feed_options

    def accepts(self, item):
        if "field1" in item and item["field1"] == "expected_data":
            return True
        return False

You can assign your custom filtering class to the item_filter option of a feed. See FEEDS for examples.

ItemFilter

class scrapy.extensions.feedexport.ItemFilter(feed_options: Optional[dict])[source]

This will be used by FeedExporter to decide if an item should be allowed to be exported to a particular feed.

  • Parameters

    feed_options (dict) – feed specific options passed from FeedExporter

  • accepts(item: Any) → bool[source]

    Return True if item should be exported or False otherwise.

    • Parameters

      item (Scrapy items) – the scraped item to check for acceptance

      Returns

      True if accepted, False otherwise

      Return type

      bool

Post-Processing

New in version 2.6.0.

Scrapy provides an option to activate plugins to post-process feeds before they are exported to feed storages. In addition to using built-in plugins, you can create your own plugins.

These plugins can be activated through the postprocessing option of a feed. The option must be passed a list of post-processing plugins in the order you want the feed to be processed. These plugins can be declared either as an import string or with the imported class of the plugin. Parameters to plugins can be passed through the feed options. See feed options for examples.
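
For instance, a sketch of a feed that is gzip-compressed by the built-in GzipPlugin before being stored:

FEEDS = {
    'items.jl.gz': {
        'format': 'jsonlines',
        'postprocessing': ['scrapy.extensions.postprocessing.GzipPlugin'],
        'gzip_compresslevel': 5,  # forwarded to the plugin through the feed options
    },
}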

Built-in Plugins

class scrapy.extensions.postprocessing.GzipPlugin(file: BinaryIO, feed_options: Dict[str, Any])[source]

Compresses received data using gzip.

Accepted feed_options parameters:

  • gzip_compresslevel

  • gzip_mtime

  • gzip_filename

See gzip.GzipFile for more info about parameters.

class scrapy.extensions.postprocessing.LZMAPlugin(file: BinaryIO, feed_options: Dict[str, Any])[source]

Compresses received data using lzma.

Accepted feed_options parameters:

  • lzma_format

  • lzma_check

  • lzma_preset

  • lzma_filters

Note

lzma_filters cannot be used in pypy version 7.3.1 and older.

See lzma.LZMAFile for more info about parameters.

class scrapy.extensions.postprocessing.Bz2Plugin(file: BinaryIO, feed_options: Dict[str, Any])[source]

Compresses received data using bz2.

Accepted feed_options parameters:

  • bz2_compresslevel

See bz2.BZ2File for more info about parameters.

Custom Plugins

Each plugin is a class that must implement the following methods:

__init__(self, file, feed_options)

Initialize the plugin.

  • Parameters

    • file – file-like object having at least the write, tell and close methods implemented

    • feed_options (dict) – feed-specific options

write(self, data)

Process and write data (bytes or memoryview) into the plugin's target file. It must return the number of bytes written.

close(self)

Close the target file object.

To pass a parameter to your plugin, use feed options. You can then access those parameters from the __init__ method of your plugin.
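
A minimal sketch of such a plugin, which uppercases the bytes it receives before writing them (the class name and behaviour are purely illustrative):

# myproject/plugins.py (illustrative sketch, not part of Scrapy)
class UppercasePlugin:
    def __init__(self, file, feed_options):
        self.file = file
        self.feed_options = feed_options

    def write(self, data):
        # data is bytes or a memoryview; return the number of bytes written
        return self.file.write(bytes(data).upper())

    def close(self):
        self.file.close()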

Settings

These are the settings used for configuring the feed exports:

FEEDS

New in version 2.1.

Default: {}

A dictionary in which every key is a feed URI (or a pathlib.Path object) and each value is a nested dictionary containing configuration parameters for the specific feed.

This setting is required for enabling the feed export feature.

See Storage backends for supported URI schemes.

For instance:

{
    'items.json': {
        'format': 'json',
        'encoding': 'utf8',
        'store_empty': False,
        'item_classes': [MyItemClass1, 'myproject.items.MyItemClass2'],
        'fields': None,
        'indent': 4,
        'item_export_kwargs': {
            'export_empty_fields': True,
        },
    },
    '/home/user/documents/items.xml': {
        'format': 'xml',
        'fields': ['name', 'price'],
        'item_filter': MyCustomFilter1,
        'encoding': 'latin1',
        'indent': 8,
    },
    pathlib.Path('items.csv.gz'): {
        'format': 'csv',
        'fields': ['price', 'name'],
        'item_filter': 'myproject.filters.MyCustomFilter2',
        'postprocessing': [MyPlugin1, 'scrapy.extensions.postprocessing.GzipPlugin'],
        'gzip_compresslevel': 5,
    },
}

The following is a list of the accepted keys and the setting that is used as a fallback value if that key is not provided for a specific feed definition:

FEED_EXPORT_ENCODING

Default: None

The encoding to be used for the feed.

If unset or set to None (default) it uses UTF-8 for everything except JSON output, which uses safe numeric encoding (\uXXXX sequences) for historic reasons.

Use utf-8 if you want UTF-8 for JSON too.
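
For example, to force UTF-8 for all feeds, including JSON:

FEED_EXPORT_ENCODING = 'utf-8'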

FEED_EXPORT_FIELDS

Default: None

A list of fields to export, optional. Example: FEED_EXPORT_FIELDS = ["foo", "bar", "baz"].

Use the FEED_EXPORT_FIELDS option to define the fields to export and their order.

When FEED_EXPORT_FIELDS is empty or None (default), Scrapy uses the fields defined in item objects yielded by your spider.

If an exporter requires a fixed set of fields (this is the case for CSV export format) and FEED_EXPORT_FIELDS is empty or None, then Scrapy tries to infer field names from the exported data - currently it uses field names from the first item.

FEED_EXPORT_INDENT

Default: 0

Amount of spaces used to indent the output on each level. If FEED_EXPORT_INDENT is a non-negative integer, then array elements and object members will be pretty-printed with that indent level. An indent level of 0 (the default), or negative, will put each item on a new line. None selects the most compact representation.

Currently implemented only by JsonItemExporter and XmlItemExporter, i.e. when you are exporting to .json or .xml.

FEED_STORE_EMPTY

Default: False

Whether to export empty feeds (i.e. feeds with no items).

FEED_STORAGES

Default: {}

A dict containing additional feed storage backends supported by your project. The keys are URI schemes and the values are paths to storage classes.
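
For example, a sketch registering a hypothetical project storage class for a custom URI scheme:

FEED_STORAGES = {
    'myscheme': 'myproject.storages.MyCustomStorage',  # hypothetical storage class
}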

FEED_STORAGE_FTP_ACTIVE

Default: False

Whether to use the active connection mode when exporting feeds to an FTP server (True) or use the passive connection mode instead (False, default).

For information about FTP connection modes, see What is the difference between active and passive FTP?.

FEED_STORAGE_S3_ACL

Default: '' (empty string)

A string containing a custom ACL for feeds exported to Amazon S3 by your project.

For a complete list of available values, access the Canned ACL section on Amazon S3 docs.

FEED_STORAGES_BASE

Default:

{
    '': 'scrapy.extensions.feedexport.FileFeedStorage',
    'file': 'scrapy.extensions.feedexport.FileFeedStorage',
    'stdout': 'scrapy.extensions.feedexport.StdoutFeedStorage',
    's3': 'scrapy.extensions.feedexport.S3FeedStorage',
    'ftp': 'scrapy.extensions.feedexport.FTPFeedStorage',
}

A dict containing the built-in feed storage backends supported by Scrapy. You can disable any of these backends by assigning None to their URI scheme in FEED_STORAGES. E.g., to disable the built-in FTP storage backend (without replacement), place this in your settings.py:

FEED_STORAGES = {
    'ftp': None,
}

FEED_EXPORTERS

Default: {}

A dict containing additional exporters supported by your project. The keys are serialization formats and the values are paths to Item exporter classes.
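
For example, a sketch registering a hypothetical exporter class for a new serialization format:

FEED_EXPORTERS = {
    'yaml': 'myproject.exporters.YamlItemExporter',  # hypothetical exporter class
}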

FEED_EXPORTERS_BASE

Default:

{
    'json': 'scrapy.exporters.JsonItemExporter',
    'jsonlines': 'scrapy.exporters.JsonLinesItemExporter',
    'jl': 'scrapy.exporters.JsonLinesItemExporter',
    'csv': 'scrapy.exporters.CsvItemExporter',
    'xml': 'scrapy.exporters.XmlItemExporter',
    'marshal': 'scrapy.exporters.MarshalItemExporter',
    'pickle': 'scrapy.exporters.PickleItemExporter',
}

A dict containing the built-in feed exporters supported by Scrapy. You can disable any of these exporters by assigning None to their serialization format in FEED_EXPORTERS. E.g., to disable the built-in CSV exporter (without replacement), place this in your settings.py:

FEED_EXPORTERS = {
    'csv': None,
}

FEED_EXPORT_BATCH_ITEM_COUNT

New in version 2.3.0.

Default: 0

If assigned an integer number higher than 0, Scrapy generates multiple output files storing up to the specified number of items in each output file.

When generating multiple output files, you must use at least one of the following placeholders in the feed URI to indicate how the different output file names are generated:

  • %(batch_time)s - gets replaced by a timestamp when the feed is being created (e.g. 2020-03-28T14-45-08.237134)

  • %(batch_id)d - gets replaced by the 1-based sequence number of the batch.

    Use printf-style string formatting to alter the number format. For example, to make the batch ID a 5-digit number by introducing leading zeroes as needed, use %(batch_id)05d (e.g. 3 becomes 00003, 123 becomes 00123).

For instance, if your settings include:

FEED_EXPORT_BATCH_ITEM_COUNT = 100

And your crawl command line is:

scrapy crawl spidername -o "dirname/%(batch_id)d-filename%(batch_time)s.json"

The command line above can generate a directory tree like:

->projectname
-->dirname
--->1-filename2020-03-28T14-45-08.237134.json
--->2-filename2020-03-28T14-45-09.148903.json
--->3-filename2020-03-28T14-45-10.046092.json

Where the first and second files contain exactly 100 items. The last one contains 100 items or fewer.

FEED_URI_PARAMS

Default: None

A string with the import path of a function to set the parameters to apply with printf-style string formatting to the feed URI.

The function signature should be as follows:

scrapy.extensions.feedexport.uri_params(params, spider)

Return a dict of key-value pairs to apply to the feed URI using printf-style string formatting.

Caution

The function should return a new dictionary; modifying the received params in-place is deprecated.

For example, to include the name of the source spider in the feed URI:

  1. Define the following function somewhere in your project:

     # myproject/utils.py
     def uri_params(params, spider):
         return {**params, 'spider_name': spider.name}

  2. Point FEED_URI_PARAMS to that function in your settings:

     # myproject/settings.py
     FEED_URI_PARAMS = 'myproject.utils.uri_params'

  3. Use %(spider_name)s in your feed URI:

     scrapy crawl <spider_name> -o "%(spider_name)s.jl"