The main goal of scraping is to extract structured data from unstructured sources, typically web pages. Scrapy spiders can return the extracted data as Python dicts. While convenient and familiar, Python dicts lack structure: it is easy to make a typo in a field name or return inconsistent data, especially in a larger project with many spiders.

To define a common output data format, Scrapy provides the Item class. Item objects are simple containers used to collect the scraped data. They provide an API similar to the dict API, with a convenient syntax for declaring their available fields.

Various Scrapy components use the extra information provided by Items: exporters look at the declared fields to figure out which columns to export, serialization can be customized using Item field metadata, trackref tracks Item instances to help find memory leaks (see Debugging memory leaks with trackref), and so on.

Declaring Items

Items are declared using a simple class definition syntax and Field objects. Here is an example:

  import scrapy

  class Product(scrapy.Item):
      name = scrapy.Field()
      price = scrapy.Field()
      stock = scrapy.Field()
      tags = scrapy.Field()
      last_updated = scrapy.Field(serializer=str)


Those familiar with Django will notice that Scrapy Items are declared similarly to Django Models, except that Scrapy Items are much simpler, as there is no concept of different field types.

Item Fields

Field objects are used to specify metadata for each field; for example, the serializer function specified for the last_updated field in the example above.

You can specify any kind of metadata for each field; there is no restriction on the values accepted by Field objects. For this same reason, there is no reference list of all available metadata keys. Each key defined in Field objects could be used by a different component, and only those components know about it. You can also define and use any other Field key in your project, for your own needs.

The main goal of Field objects is to provide a way to define all field metadata in one place. Typically, those components whose behaviour depends on each field use certain field keys to configure that behaviour. You must refer to their documentation to see which metadata keys are used by each component.

It’s important to note that the Field objects used to declare the item do not stay assigned as class attributes. Instead, they can be accessed through the Item.fields attribute.
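Since Field objects are plain dicts (see Field objects below), a component can read per-field metadata with ordinary dict lookups. Here is a minimal sketch, not using Scrapy itself, of how an exporter-like component might pick a serializer per field; the field names and the identity fallback are illustrative assumptions:

```python
# Stand-in for Item.fields: maps each field name to its metadata dict
# (Field is just a dict alias in Scrapy).
fields = {
    'name': {},
    'last_updated': {'serializer': str},
}

def serialize(field_name, value):
    # Fall back to the identity function when no 'serializer' key was declared.
    serializer = fields[field_name].get('serializer', lambda v: v)
    return serializer(value)

print(serialize('last_updated', 20200101))  # serialized with str -> '20200101'
print(serialize('name', 'Desktop PC'))      # no serializer declared -> unchanged
```

This is the general pattern the section describes: the component, not the Item, decides what a given metadata key means.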

Working with Items

Here are some examples of common tasks performed with items, using the Product item declared above. You will notice the API is very similar to the dict API.

Creating items

  >>> product = Product(name='Desktop PC', price=1000)
  >>> print(product)
  Product(name='Desktop PC', price=1000)

Getting field values

  >>> product['name']
  'Desktop PC'
  >>> product.get('name')
  'Desktop PC'

  >>> product['price']
  1000

  >>> product['last_updated']
  Traceback (most recent call last):
      ...
  KeyError: 'last_updated'

  >>> product.get('last_updated', 'not set')
  'not set'

  >>> product['lala'] # getting unknown field
  Traceback (most recent call last):
      ...
  KeyError: 'lala'

  >>> product.get('lala', 'unknown field')
  'unknown field'

  >>> 'name' in product # is name field populated?
  True

  >>> 'last_updated' in product # is last_updated populated?
  False

  >>> 'last_updated' in product.fields # is last_updated a declared field?
  True

  >>> 'lala' in product.fields # is lala a declared field?
  False

Setting field values

  >>> product['last_updated'] = 'today'
  >>> product['last_updated']
  'today'

  >>> product['lala'] = 'test' # setting unknown field
  Traceback (most recent call last):
      ...
  KeyError: 'Product does not support field: lala'

Accessing all populated values

To access all populated values, just use the typical dict API:

  >>> product.keys()
  ['price', 'name']
  >>> product.items()
  [('price', 1000), ('name', 'Desktop PC')]

Copying items

To copy an item, you must first decide whether you want a shallow copy or a deep copy.

If your item contains mutable values like lists or dictionaries, a shallow copy will keep references to the same mutable values across all different copies.

For example, if you have an item with a list of tags, and you create a shallow copy of that item, both the original item and the copy have the same list of tags. Adding a tag to the list of one of the items will add the tag to the other item as well.

If that is not the desired behavior, use a deep copy instead.

See copy for more information.
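The difference can be demonstrated with the standard copy module on a plain dict standing in for an item with a mutable tags field (a sketch, not using Scrapy itself):

```python
import copy

product = {'name': 'Desktop PC', 'tags': ['electronics']}

# Shallow copy: both dicts reference the same tags list.
shallow = copy.copy(product)
shallow['tags'].append('hardware')
print(product['tags'])  # ['electronics', 'hardware'] -- the list is shared

# Deep copy: the tags list is copied too, so the original is unaffected.
deep = copy.deepcopy(product)
deep['tags'].append('refurbished')
print(product['tags'])  # still ['electronics', 'hardware']
```

Scrapy's Item.copy() and Item.deepcopy(), described next, give you the same two behaviors for items.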

To create a shallow copy of an item, you can either call copy() on an existing item (product2 = product.copy()) or instantiate your item class from an existing item (product2 = Product(product)).

To create a deep copy, call deepcopy() instead (product2 = product.deepcopy()).

Other common tasks

Creating dicts from items:

  >>> dict(product) # create a dict from all populated values
  {'price': 1000, 'name': 'Desktop PC'}

Creating items from dicts:

  >>> Product({'name': 'Laptop PC', 'price': 1500})
  Product(price=1500, name='Laptop PC')

  >>> Product({'name': 'Laptop PC', 'lala': 1500}) # warning: unknown field in dict
  Traceback (most recent call last):
      ...
  KeyError: 'Product does not support field: lala'

Extending Items

You can extend Items (to add more fields or to change some metadata for some fields) by declaring a subclass of your original Item.

For example:

  class DiscountedProduct(Product):
      discount_percent = scrapy.Field(serializer=str)
      discount_expiration_date = scrapy.Field()

You can also extend field metadata by using the previous field metadata and appending more values, or changing existing values, like this:

  class SpecificProduct(Product):
      name = scrapy.Field(Product.fields['name'], serializer=my_serializer)

That adds (or replaces) the serializer metadata key for the name field, keeping all the previously existing metadata values.

Item objects

class scrapy.item.Item([arg])

Return a new Item optionally initialized from the given argument.

Items replicate the standard dict API, including its __init__ method, and also provide the following additional API members:

  • copy()

    Return a shallow copy of this item.

  • deepcopy()

    Return a deep copy (copy.deepcopy()) of this item.

  • fields

    A dictionary containing all declared fields for this Item, not only those populated. The keys are the field names and the values are the Field objects used in the Item declaration.

Field objects

class scrapy.item.Field([arg])

The Field class is just an alias to the built-in dict class and doesn’t provide any extra functionality or attributes. In other words, Field objects are plain-old Python dicts. A separate class is used to support the item declaration syntax based on class attributes.

Other classes related to Item

class scrapy.item.BaseItem(*args, **kwargs)

Base class for all scraped items.

In Scrapy, an object is considered an item if it is an instance of either BaseItem or dict. For example, when the output of a spider callback is evaluated, only instances of BaseItem or dict are passed to item pipelines.

If you need instances of a custom class to be considered items by Scrapy, you must inherit from either BaseItem or dict.
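For example, a plain dict subclass is already enough for Scrapy to treat instances as items; this minimal sketch shows the isinstance check that matters (the class name is illustrative):

```python
class CustomItem(dict):
    """A custom item class: inheriting from dict is sufficient
    for Scrapy to recognize instances as items."""
    pass

item = CustomItem(name='Desktop PC')
print(isinstance(item, dict))  # True, so it would be passed to item pipelines
```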

Unlike instances of dict, instances of BaseItem may be tracked to debug memory leaks.

class scrapy.item.ItemMeta(class_name, bases, attrs)

Metaclass of Item that handles field definitions.
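The general idea can be illustrated with a simplified metaclass that, like ItemMeta, moves Field class attributes into a fields dictionary instead of leaving them assigned as regular class attributes. This is a sketch of the mechanism, not Scrapy's actual implementation:

```python
class Field(dict):
    """Plain dict used only to mark field declarations."""

class ItemMeta(type):
    def __new__(mcs, class_name, bases, attrs):
        fields = {}
        # Inherit fields already collected on base classes.
        for base in bases:
            fields.update(getattr(base, 'fields', {}))
        # Move Field attributes out of the class namespace into `fields`.
        for name, value in list(attrs.items()):
            if isinstance(value, Field):
                fields[name] = attrs.pop(name)
        attrs['fields'] = fields
        return super().__new__(mcs, class_name, bases, attrs)

class Item(metaclass=ItemMeta):
    pass

class Product(Item):
    name = Field()
    price = Field(serializer=str)

print(sorted(Product.fields))    # ['name', 'price']
print(hasattr(Product, 'name'))  # False: moved into Product.fields
```

This also explains the behavior noted earlier: declared Field objects are reachable only through the fields attribute, and subclasses keep their parents' field declarations.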