Item Exporters

Once you have scraped your items, you often want to persist or export those items, to use the data in some other application. That is, after all, the whole purpose of the scraping process.

For this purpose Scrapy provides a collection of Item Exporters for different output formats, such as XML, CSV or JSON.

Using Item Exporters

If you are in a hurry, and just want to use an Item Exporter to output scraped data, see the Feed exports. Otherwise, if you want to know how Item Exporters work or need more custom functionality (not covered by the default exports), continue reading below.

In order to use an Item Exporter, you must instantiate it with its required arguments. Each Item Exporter requires different arguments, so check the documentation of each exporter in the Built-in Item Exporters reference to be sure. After you have instantiated your exporter, you have to:

  1. call the method start_exporting() in order to signal the beginning of the exporting process

  2. call the export_item() method for each item you want to export

  3. and finally call the finish_exporting() to signal the end of the exporting process

Here you can see an Item Pipeline which uses multiple Item Exporters to group scraped items into different files according to the value of one of their fields:

```python
from scrapy.exporters import XmlItemExporter

class PerYearXmlExportPipeline(object):
    """Distribute items across multiple XML files according to their 'year' field"""

    def open_spider(self, spider):
        self.year_to_exporter = {}

    def close_spider(self, spider):
        for exporter in self.year_to_exporter.values():
            exporter.finish_exporting()

    def _exporter_for_item(self, item):
        year = item['year']
        if year not in self.year_to_exporter:
            f = open('{}.xml'.format(year), 'wb')
            exporter = XmlItemExporter(f)
            exporter.start_exporting()
            self.year_to_exporter[year] = exporter
        return self.year_to_exporter[year]

    def process_item(self, item, spider):
        exporter = self._exporter_for_item(item)
        exporter.export_item(item)
        return item
```

Serialization of item fields

By default, the field values are passed unmodified to the underlying serialization library, and the decision of how to serialize them is delegated to each particular serialization library.

However, you can customize how each field value is serialized before it is passed to the serialization library.

There are two ways to customize how a field will be serialized, which are described next.

1. Declaring a serializer in the field

If you use Item you can declare a serializer in the field metadata. The serializer must be a callable which receives a value and returns its serialized form.

Example:

```python
import scrapy

def serialize_price(value):
    return '$ %s' % str(value)

class Product(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field(serializer=serialize_price)
```

2. Overriding the serialize_field() method

You can also override the serialize_field() method to customize how your field value will be exported.

Make sure you call the base class serialize_field() method after your custom code.

Example:

```python
from scrapy.exporters import XmlItemExporter

class ProductXmlExporter(XmlItemExporter):

    def serialize_field(self, field, name, value):
        if name == 'price':
            return '$ %s' % str(value)
        return super(ProductXmlExporter, self).serialize_field(field, name, value)
```

Built-in Item Exporters reference

Here is a list of the Item Exporters bundled with Scrapy. Some of them contain output examples, which assume you’re exporting these two items:

```python
Item(name='Color TV', price='1200')
Item(name='DVD player', price='200')
```

BaseItemExporter

  • class scrapy.exporters.BaseItemExporter(fields_to_export=None, export_empty_fields=False, encoding='utf-8', indent=0, dont_fail=False)[source]
  • This is the (abstract) base class for all Item Exporters. It provides support for common features used by all (concrete) Item Exporters, such as defining what fields to export, whether to export empty fields, or which encoding to use.

These features can be configured through the __init__ method arguments which populate their respective instance attributes: fields_to_export, export_empty_fields, encoding, indent.

New in version 2.0: The dont_fail parameter.

  • export_item(item)[source]
  • Exports the given item. This method must be implemented in subclasses.

  • serialize_field(field, name, value)[source]

  • Return the serialized value for the given field. You can override this method (in your custom Item Exporters) if you want to control how a particular field or value will be serialized/exported.

By default, this method looks for a serializer declared in the item field and returns the result of applying that serializer to the value. If no serializer is found, it returns the value unchanged, except for unicode values, which are encoded to str using the encoding declared in the encoding attribute.

Parameters:

  • field (Field object or an empty dict) – the field being serialized. If a raw dict is being exported (not Item), the field value is an empty dict.
  • name (str) – the name of the field being serialized
  • value – the value being serialized
  • start_exporting()[source]
  • Signal the beginning of the exporting process. Some exporters may use this to generate some required header (for example, the XmlItemExporter). You must call this method before exporting any items.

  • finish_exporting()[source]

  • Signal the end of the exporting process. Some exporters may use this to generate some required footer (for example, the XmlItemExporter). You must always call this method after you have no more items to export.

  • fields_to_export

  • A list with the name of the fields that will be exported, or None if you want to export all fields. Defaults to None.

Some exporters (like CsvItemExporter) respect the order of the fields defined in this attribute.

Some exporters may require the fields_to_export list in order to export the data properly when spiders return dicts (not Item instances).

  • export_empty_fields
  • Whether to include empty/unpopulated item fields in the exported data. Defaults to False. Some exporters (like CsvItemExporter) ignore this attribute and always export all empty fields.

This option is ignored for dict items.

  • encoding
  • The encoding that will be used to encode unicode values. This only affects unicode values (which are always serialized to str using this encoding). Other value types are passed unchanged to the specific serialization library.

  • indent

  • Amount of spaces used to indent the output on each level. Defaults to 0.

    • indent=None selects the most compact representation, all items in the same line with no indentation
    • indent<=0 each item on its own line, no indentation
    • indent>0 each item on its own line, indented with the provided numeric value

PythonItemExporter

  • class scrapy.exporters.PythonItemExporter(*, dont_fail=False, **kwargs)[source]
  • This is a base class for item exporters that extends BaseItemExporter with support for nested items.

It serializes items to built-in Python types, so that any serialization library (e.g. json or msgpack) can be used on top of it.

XmlItemExporter

  • class scrapy.exporters.XmlItemExporter(file, item_element='item', root_element='items', **kwargs)[source]
  • Exports Items in XML format to the specified file object.

Parameters:

  • file – the file-like object to use for exporting the data. Its write method should accept bytes (a disk file opened in binary mode, an io.BytesIO object, etc.)
  • root_element (str) – the name of the root element in the exported XML.
  • item_element (str) – the name of each item element in the exported XML.

The additional keyword arguments of this __init__ method are passed to the BaseItemExporter __init__ method.

A typical output of this exporter would be:

```xml
<?xml version="1.0" encoding="utf-8"?>
<items>
  <item>
    <name>Color TV</name>
    <price>1200</price>
  </item>
  <item>
    <name>DVD player</name>
    <price>200</price>
  </item>
</items>
```

Unless overridden in the serialize_field() method, multi-valued fields are exported by serializing each value inside a <value> element. This is for convenience, as multi-valued fields are very common.

For example, the item:

```python
Item(name=['John', 'Doe'], age='23')
```

Would be serialized as:

```xml
<?xml version="1.0" encoding="utf-8"?>
<items>
  <item>
    <name>
      <value>John</value>
      <value>Doe</value>
    </name>
    <age>23</age>
  </item>
</items>
```

CsvItemExporter

  • class scrapy.exporters.CsvItemExporter(file, include_headers_line=True, join_multivalued=', ', **kwargs)[source]
  • Exports Items in CSV format to the given file-like object. If the fields_to_export attribute is set, it will be used to define the CSV columns and their order. The export_empty_fields attribute has no effect on this exporter.

Parameters:

  • file – the file-like object to use for exporting the data. Its write method should accept bytes (a disk file opened in binary mode, an io.BytesIO object, etc.)
  • include_headers_line (bool) – If enabled, makes the exporter output a header line with the field names taken from BaseItemExporter.fields_to_export or the first exported item's fields.
  • join_multivalued – The char (or chars) that will be used for joining multi-valued fields, if found.

The additional keyword arguments of this __init__ method are passed to the BaseItemExporter __init__ method, and the leftover arguments to the csv.writer __init__ method, so you can use any csv.writer __init__ method argument to customize this exporter.

A typical output of this exporter would be:

```
name,price
Color TV,1200
DVD player,200
```

PickleItemExporter

  • class scrapy.exporters.PickleItemExporter(file, protocol=0, **kwargs)[source]
  • Exports Items in pickle format to the given file-like object.

Parameters:

  • file – the file-like object to use for exporting the data. Its write method should accept bytes (a disk file opened in binary mode, an io.BytesIO object, etc.)
  • protocol (int) – The pickle protocol to use.

For more information, refer to the pickle module documentation.

The additional keyword arguments of this __init__ method are passed to the BaseItemExporter __init__ method.

Pickle isn’t a human readable format, so no output examples are provided.

PprintItemExporter

  • class scrapy.exporters.PprintItemExporter(file, **kwargs)[source]
  • Exports Items in pretty print format to the specified file object.

Parameters:

  • file – the file-like object to use for exporting the data. Its write method should accept bytes (a disk file opened in binary mode, an io.BytesIO object, etc.)

The additional keyword arguments of this __init__ method are passed to the BaseItemExporter __init__ method.

A typical output of this exporter would be:

```
{'name': 'Color TV', 'price': '1200'}
{'name': 'DVD player', 'price': '200'}
```

Longer lines (when present) are pretty-formatted.

JsonItemExporter

  • class scrapy.exporters.JsonItemExporter(file, **kwargs)[source]
  • Exports Items in JSON format to the specified file-like object, writing all objects as a list of objects. The additional __init__ method arguments are passed to the BaseItemExporter __init__ method, and the leftover arguments to the JSONEncoder __init__ method, so you can use any JSONEncoder __init__ method argument to customize this exporter.

Parameters:

  • file – the file-like object to use for exporting the data. Its write method should accept bytes (a disk file opened in binary mode, an io.BytesIO object, etc.)

A typical output of this exporter would be:

```json
[{"name": "Color TV", "price": "1200"},
{"name": "DVD player", "price": "200"}]
```

Warning

JSON is a very simple and flexible serialization format, but it doesn’t scale well for large amounts of data, since incremental (a.k.a. stream-mode) parsing is not well supported (if at all) among JSON parsers (in any language), and most of them just parse the entire object in memory. If you want the power and simplicity of JSON with a more stream-friendly format, consider using JsonLinesItemExporter instead, or splitting the output in multiple chunks.
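
To see why a line-based format streams well, here is a stdlib-only sketch (no Scrapy involved) that parses one item at a time, keeping memory use constant regardless of file size:

```python
import json
from io import StringIO

# Stand-in for a large .jl file of one JSON object per line
json_lines = StringIO(
    '{"name": "Color TV", "price": "1200"}\n'
    '{"name": "DVD player", "price": "200"}\n'
)

names = []
for line in json_lines:      # one item in memory at a time
    item = json.loads(line)
    names.append(item['name'])
```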

JsonLinesItemExporter

  • class scrapy.exporters.JsonLinesItemExporter(file, **kwargs)[source]
  • Exports Items in JSON format to the specified file-like object, writing one JSON-encoded item per line. The additional __init__ method arguments are passed to the BaseItemExporter __init__ method, and the leftover arguments to the JSONEncoder __init__ method, so you can use any JSONEncoder __init__ method argument to customize this exporter.

Parameters:

  • file – the file-like object to use for exporting the data. Its write method should accept bytes (a disk file opened in binary mode, an io.BytesIO object, etc.)

A typical output of this exporter would be:

```json
{"name": "Color TV", "price": "1200"}
{"name": "DVD player", "price": "200"}
```

Unlike the one produced by JsonItemExporter, the format produced by this exporter is well suited for serializing large amounts of data.

MarshalItemExporter

  • class scrapy.exporters.MarshalItemExporter(file, **kwargs)[source]
  • Exports items in a Python-specific binary format (see marshal).

Parameters:

  • file – The file-like object to use for exporting the data. Its write method should accept bytes (a disk file opened in binary mode, a BytesIO object, etc.)