Objects


The Objects pipeline reads a list of images and returns a list of detected objects.

Example

The following shows a simple example using this pipeline.

```python
from txtai.pipeline import Objects

# Create and run pipeline
objects = Objects()
objects("path to image file")
```
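The pipeline also accepts a list of images and returns one result list per image. A minimal sketch with placeholder file paths:

```python
from txtai.pipeline import Objects

objects = Objects()

# A list of images returns one list of (label, score) tuples per image
results = objects(["image1.jpg", "image2.jpg"])

for result in results:
    print(result)
```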

See the link below for a more detailed example.

| Notebook | Description |
|----------|-------------|
| Generate image captions and detect objects | Captions and object detection for images |

Configuration-driven example

Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.

config.yml

```yaml
# Create pipeline using lower case class name
objects:

# Run pipeline with workflow
workflow:
  objects:
    tasks:
      - action: objects
```

Run with Workflows

```python
from txtai.app import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("objects", ["path to image file"]))
```

Run with API

```bash
CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"objects", "elements":["path to image file"]}'
```
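The same workflow endpoint can also be called from Python. A minimal sketch using the requests library, assuming the API is running on localhost:8000 as started above:

```python
import requests

# Call the workflow endpoint (assumes the API started above is running on localhost:8000)
response = requests.post(
    "http://localhost:8000/workflow",
    json={"name": "objects", "elements": ["path to image file"]},
)

print(response.json())
```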

Methods

Python documentation for the pipeline.

Source code in txtai/pipeline/image/objects.py

```python
def __init__(self, path=None, quantize=False, gpu=True, model=None, classification=False, threshold=0.9, **kwargs):
    if not PIL:
        raise ImportError('Objects pipeline is not available - install pipeline extra to enable')

    # Call parent constructor with the appropriate task
    super().__init__("image-classification" if classification else "object-detection", path, quantize, gpu, model, **kwargs)

    self.classification = classification
    self.threshold = threshold
```
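As a sketch of how these constructor options combine, the following shows a hypothetical setup that tightens the detection threshold and another that switches the pipeline to image classification mode; model selection is left to the underlying defaults.

```python
from txtai.pipeline import Objects

# Object detection (default task) with a stricter confidence threshold
detector = Objects(threshold=0.95)

# Image classification mode instead of object detection
classifier = Objects(classification=True, threshold=0.5)
```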

__call__(images, flatten=False, workers=0)

Applies object detection/image classification models to images. Returns a list of (label, score).

This method supports a single image or a list of images. If the input is an image, the return type is a 1D list of (label, score). If the input is a list, a 2D list of (label, score) is returned with a row per image.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| images | image\|list | image or list of images | required |
| flatten | | flatten output to a list of objects | False |
| workers | | number of concurrent workers to use for processing data, defaults to None | 0 |

Returns:

| Type | Description |
|------|-------------|
| list | list of (label, score) |
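As a sketch of how the flatten and workers parameters are used, based on the signature above (file paths are placeholders):

```python
from txtai.pipeline import Objects

objects = Objects()

# Default: (label, score) tuples sorted by score, deduplicated by label
detections = objects("image1.jpg")

# flatten=True returns labels only, dropping the scores
labels = objects("image1.jpg", flatten=True)

# workers sets the number of concurrent workers for batch processing
batch = objects(["image1.jpg", "image2.jpg"], workers=2)
```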

Source code in txtai/pipeline/image/objects.py

```python
def __call__(self, images, flatten=False, workers=0):
    """
    Applies object detection/image classification models to images. Returns a list of (label, score).

    This method supports a single image or a list of images. If the input is an image, the return
    type is a 1D list of (label, score). If the input is a list, a 2D list of (label, score) is
    returned with a row per image.

    Args:
        images: image|list
        flatten: flatten output to a list of objects
        workers: number of concurrent workers to use for processing data, defaults to None

    Returns:
        list of (label, score)
    """

    # Convert single element to list
    values = [images] if not isinstance(images, list) else images

    # Open images if file strings
    values = [Image.open(image) if isinstance(image, str) else image for image in values]

    # Run pipeline
    results = (
        self.pipeline(values, num_workers=workers)
        if self.classification
        else self.pipeline(values, threshold=self.threshold, num_workers=workers)
    )

    # Build list of (label, score)
    outputs = []
    for result in results:
        # Convert to (label, score) tuples
        result = [(x["label"], x["score"]) for x in result if x["score"] > self.threshold]

        # Sort by score descending
        result = sorted(result, key=lambda x: x[1], reverse=True)

        # Deduplicate labels
        unique = set()
        elements = []
        for label, score in result:
            if label not in unique:
                elements.append(label if flatten else (label, score))
                unique.add(label)

        outputs.append(elements)

    # Return single element if single element passed in
    return outputs[0] if not isinstance(images, list) else outputs
```