Find Small Images Inside Large Images

Alaeddine @ Jina AI

October 29, 2021

The purpose of this tutorial is to build an image search engine capable of finding small images inside bigger ones. This requires a different architecture than typical image search engines since we need to perform object detection.

Tip

The full source code of this tutorial is available in this Google Colab notebook.

Understanding and Formulating the Problem

As we want to find small images inside big images, simply encoding both the indexed images and the query image and then matching will not work. Imagine that you have the following big image:

[Image: ../../../_images/cat-bird.jpg]

It shows a scene with a cat in the background, a bird, and a few other objects.

Now let’s suppose that the query image is a simple bird:

[Image: ../../../_images/bird.jpg]

Encoding the query image will generate embeddings that effectively represent it. However, it's not easy to build an encoder that effectively represents the big image, since it contains a complex scene with different objects. The embeddings will not be representative enough, so we need a better approach.

Can you think of another solution?

Hint

Encoding a complex image is not easy, but what if we could encode the objects inside it? Imagine that we can identify these objects inside the big image like so:

[Image: ../../../_images/cat-bird-detections.jpg (objects identified with bounding boxes)]

Right: identifying the objects inside the big image and then encoding each one of them will produce better, more representative embeddings. Now we should ask two questions:

  1. How can we identify objects?

  2. How can we retrieve the big image if we match the query against the identified objects?

The first question is easy: the answer is object detection. There are many models that can perform object detection, and in this tutorial we will be using YOLOv5. Detected objects can be easily represented as chunks of the original indexed documents.

See Also

If you're not familiar with chunks in Jina, check out this section.
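To illustrate the idea, here is a minimal sketch of a parent document with chunks (the blobs are random arrays, for illustration only):

```python
# A minimal sketch of chunks: sub-documents attached to a parent document
# (the blobs here are random arrays, for illustration only)
import numpy as np
from jina import Document

big_image = Document(blob=np.random.random((512, 512, 3)))
crop = Document(blob=np.random.random((64, 64, 3)))
big_image.chunks.append(crop)  # the crop becomes a chunk of the big image

# Chunks keep a reference to their parent via parent_id
chunk = big_image.chunks[0]
print(chunk.parent_id == big_image.id)  # True
```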

The second question is a bit more complex. We will match query documents against the chunks of the original documents, but we need to return the original documents (the big images). We can solve this problem with a ranker executor, which roughly does the following:

  1. Retrieve the parent document IDs from the matched chunks along with their scores

  2. For each parent ID, aggregate the scores of that parent

  3. Replace the matches with the parent documents instead of the child documents (aka chunks).

  4. Sort the new matches by their aggregated scores.

Cool, this seems like complex logic, but no worries, we will build our ranker executor step by step later. Note, however, that since the ranker is not a storage executor, it's not capable of retrieving the parent documents from chunks. Instead, it creates empty documents that contain only the IDs. This implies that, in a later step, we need to retrieve those documents by ID.
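Before we get there, here is a minimal, framework-free sketch of the aggregation logic described above (rank_by_parent is a hypothetical helper, not one of the tutorial's executors):

```python
# A minimal, framework-free sketch of the ranking logic above.
# `chunk_matches` is assumed to be a list of (parent_id, score) pairs,
# where lower scores mean better matches (e.g. cosine distance).
from collections import defaultdict

def rank_by_parent(chunk_matches):
    parent_scores = defaultdict(list)
    for parent_id, score in chunk_matches:
        parent_scores[parent_id].append(score)
    # Aggregate each parent's chunk scores with min, then sort best-first
    aggregated = {pid: min(scores) for pid, scores in parent_scores.items()}
    return sorted(aggregated.items(), key=lambda item: item[1])

print(rank_by_parent([('img1', 0.2), ('img2', 0.5), ('img1', 0.7)]))
# [('img1', 0.2), ('img2', 0.5)]
```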

Now let’s try to imagine and design our Flows given what we’ve discussed so far:

Index Flow:

[Image: ../../../_images/index_flow_brainstorming.svg]

Query Flow:

[Image: ../../../_images/query_flow_brainstorming.svg]

Because we use the ranker, we will need something to help us retrieve the original parent documents by ID. That can be any storage executor. Jina Hub includes many storage executors, but in this tutorial we will build our own. Since this executor stores parent documents, we will call it the root_indexer. Also, since we need it in the query Flow, we have to add it to the index Flow as well. One more note: this root_indexer indexes documents as they are, so it makes sense to put it in parallel to the other processing steps (segmenting, encoding, ...).

Now, the technology behind this executor will be LMDB.

See Also

Jina natively supports complex Flow topologies where you can put executors in parallel. Check out this section to learn more.
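As a teaser, here is a minimal sketch of a Flow with a parallel branch (the executor names are illustrative placeholders): needs='gateway' attaches an executor directly to the gateway, and a final executor with multiple needs joins the branches.

```python
# A minimal sketch of a Flow with a parallel branch; executor names are
# illustrative placeholders, not the tutorial's actual executors
from jina import Flow

f = (
    Flow()
    .add(name='branch_a')                     # e.g. segmenting and encoding
    .add(name='branch_b', needs='gateway')    # runs in parallel to branch_a
    .add(name='join', needs=['branch_a', 'branch_b'])  # waits for both branches
)
f.plot()  # visualize the topology
```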

Cool, but what about the other indexer?

Well, it should support matching and indexing the chunks of images after they are segmented. Therefore, it needs to support vector search along with indexing. Jina Hub already includes such indexers (for example, SimpleIndexer); however, we will create our own version of SimpleIndexer. It will be convenient to call this indexer chunks_indexer.

Alright, before seeing the final architecture, let’s agree on names for our executors:

  • chunks_indexer: SimpleIndexer

  • root_indexer: LMDBStorage (well because we use LMDB)

  • encoder: CLIPImageEncoder (yes we will be using the CLIP model to encode images)

  • segmenter: YoloV5Segmenter. We could have called it object-detector, but segmenter aligns better with Jina's terminology

  • ranker: SimpleRanker (trust me it’s going to be simple)

Finally, here is what our Flows will look like. Index Flow:

[Image: ../../../_images/index_flow.svg]

Query Flow:

[Image: ../../../_images/query_flow.svg]

Prerequisites

In this tutorial, we will need the following dependencies installed:

```bash
pip install Pillow jina==2.1.13 torch==1.9.0 torchvision==0.10.0 transformers==4.9.1 yolov5==5.0.7 lmdb==1.2.1 matplotlib git+https://github.com/jina-ai/jina-commons.git#egg=jina-commons
```

We also need to download the dataset and unzip it.

You can use the link or the following commands:

```bash
wget https://open-images.s3.eu-central-1.amazonaws.com/data.zip
unzip data.zip
```

You should find two folders after unzipping:

  • images: this folder contains the images that we will index

  • query: this folder contains small images that we will use as search queries

Building Executors

In this section, we will start developing the necessary executors for both the index and query Flows.

CLIPImageEncoder

This encoder encodes an image into embeddings using the CLIP model. We want an executor that loads the CLIP model and uses it to encode documents during both the index and query Flows.

Our executor should:

  • support both GPU and CPU: that's why we will provision the device parameter and use it when encoding.

  • be able to process documents in batches in order to use our resources effectively: to do so, we will use the parameter batch_size.

  • be able to encode the full image during the query Flow and encode only the chunks during the index Flow: this can be achieved with traversal_paths and the method DocumentArray.batch.

```python
from typing import Optional, Tuple

import torch
from jina import DocumentArray, Executor, requests
from jina.logging.logger import JinaLogger
from transformers import CLIPFeatureExtractor, CLIPModel


class CLIPImageEncoder(Executor):
    """Encode image into embeddings using the CLIP model."""

    def __init__(
        self,
        pretrained_model_name_or_path: str = "openai/clip-vit-base-patch32",
        device: str = "cpu",
        batch_size: int = 32,
        traversal_paths: Tuple = ("r",),
        *args,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        self.batch_size = batch_size
        self.traversal_paths = traversal_paths
        self.pretrained_model_name_or_path = pretrained_model_name_or_path
        self.device = device
        self.preprocessor = CLIPFeatureExtractor.from_pretrained(
            pretrained_model_name_or_path
        )
        self.model = CLIPModel.from_pretrained(self.pretrained_model_name_or_path)
        self.model.to(self.device).eval()

    @requests
    def encode(self, docs: Optional[DocumentArray], parameters: dict, **kwargs):
        if docs is None:
            return
        traversal_paths = parameters.get("traversal_paths", self.traversal_paths)
        batch_size = parameters.get("batch_size", self.batch_size)
        # Batch documents at the requested traversal paths; only documents
        # carrying an image blob are encoded
        document_batches_generator = docs.batch(
            traversal_paths=traversal_paths,
            batch_size=batch_size,
            require_attr="blob",
        )
        with torch.inference_mode():
            for batch_docs in document_batches_generator:
                blob_batch = [d.blob for d in batch_docs]
                tensor = self._generate_input_features(blob_batch)
                embeddings = self.model.get_image_features(**tensor)
                embeddings = embeddings.cpu().numpy()
                for doc, embed in zip(batch_docs, embeddings):
                    doc.embedding = embed

    def _generate_input_features(self, images):
        # Preprocess raw image arrays into model-ready tensors on the right device
        input_tokens = self.preprocessor(
            images=images,
            return_tensors="pt",
        )
        input_tokens = {
            k: v.to(torch.device(self.device)) for k, v in input_tokens.items()
        }
        return input_tokens
```

YoloV5Segmenter

Since we want to retrieve small images inside bigger images, the technique we will rely on heavily is segmenting. Basically, we want to perform object detection on the indexed images. This generates bounding boxes around the objects detected inside the images. The detected objects are extracted and added as chunks to the original documents. By the way, can you guess what the state-of-the-art object detection model is?

Right, we will use YoloV5.

Our YoloV5Segmenter should be able to load the ultralytics/yolov5 model from Torch Hub, or otherwise load a custom model. To achieve this, the executor accepts the parameter model_name_or_path, which is used when loading. We will implement the method _load, which checks whether the model exists in Torch Hub and otherwise loads it as a custom model.

For our use case, we will simply rely on yolov5s (the small version of YOLOv5). Of course, for better quality you can choose a bigger model, or use your own custom model.

Furthermore, we want YoloV5Segmenter to support both GPU and CPU, and it should be able to process documents in batches. Again, this is as simple as adding the parameters device and batch_size and using them during segmenting.

To perform segmenting, we will implement the method _segment_docs, which performs the following steps:

  1. For each batch (a batch consists of several images), use the model to get predictions for each image.

  2. Each prediction for an image can contain several detections (YOLOv5 extracts as many bounding boxes as possible, along with their confidence scores). We filter out detections whose scores fall below the confidence_threshold to keep good quality. Each detection actually consists of two points, top left (x1, y1) and bottom right (x2, y2), plus a confidence score and a class. We will not use the class of the detection, but it can be useful in other search applications.

  3. With the remaining detections, we create crops (using the two points returned). Finally, we add these crops to the image documents as chunks.
```python
from typing import Dict, Iterable, Optional

import torch
from jina import Document, DocumentArray, Executor, requests
from jina_commons.batching import get_docs_batch_generator


class YoloV5Segmenter(Executor):
    def __init__(
        self,
        model_name_or_path: str = 'yolov5s',
        confidence_threshold: float = 0.3,
        batch_size: int = 32,
        device: str = 'cpu',
        *args,
        **kwargs
    ):
        super().__init__(*args, **kwargs)
        self.model_name_or_path = model_name_or_path
        self.confidence_threshold = confidence_threshold
        self.batch_size = batch_size
        if device != 'cpu' and not device.startswith('cuda'):
            self.logger.error('Torch device not supported. Must be cpu or cuda!')
            raise RuntimeError('Torch device not supported. Must be cpu or cuda!')
        if device == 'cuda' and not torch.cuda.is_available():
            self.logger.warning(
                'You tried to use GPU but torch did not detect your '
                'GPU correctly. Defaulting to CPU. Check your CUDA installation!'
            )
            device = 'cpu'
        self.device = torch.device(device)
        self.model = self._load(self.model_name_or_path)

    @requests
    def segment(
        self, docs: Optional[DocumentArray] = None, parameters: Dict = {}, **kwargs
    ):
        if docs:
            document_batches_generator = get_docs_batch_generator(
                docs,
                traversal_path=['r'],
                batch_size=parameters.get('batch_size', self.batch_size),
                needs_attr='blob',
            )
            self._segment_docs(document_batches_generator, parameters=parameters)

    def _segment_docs(self, document_batches_generator: Iterable, parameters: Dict):
        with torch.no_grad():
            for document_batch in document_batches_generator:
                images = [d.blob for d in document_batch]
                predictions = self.model(
                    images,
                    size=640,  # inference size (pixels)
                    augment=False,
                ).pred
                for doc, prediction in zip(document_batch, predictions):
                    for det in prediction:
                        # Each detection: bounding box corners, confidence, class
                        x1, y1, x2, y2, conf, cls = det
                        if conf < parameters.get(
                            'confidence_threshold', self.confidence_threshold
                        ):
                            continue
                        # Crop the detected object and attach it as a chunk
                        crop = doc.blob[int(y1) : int(y2), int(x1) : int(x2), :]
                        doc.chunks.append(Document(blob=crop))

    def _load(self, model_name_or_path):
        # Load an official YOLOv5 model from Torch Hub if the name matches,
        # otherwise treat it as a path to a custom model
        if model_name_or_path in torch.hub.list('ultralytics/yolov5'):
            return torch.hub.load(
                'ultralytics/yolov5', model_name_or_path, device=self.device
            )
        else:
            return torch.hub.load(
                'ultralytics/yolov5', 'custom', model_name_or_path, device=self.device
            )
```

Indexers

After developing the encoder, we will need two kinds of indexers:

  1. SimpleIndexer: This indexer takes care of storing the chunks of images. It should also support vector similarity search, which is important for matching small query images against the segments of the original images.

  2. LMDBStorage: LMDB is a simple memory-mapped transactional key-value store. It is convenient for this example because we can use it to store the original images (so that we can retrieve them later). We will use it to create LMDBStorage, which offers two functionalities: indexing documents and retrieving documents by ID.
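If you haven't used LMDB before, here is a minimal standalone sketch of its key-value API (the file name and data are illustrative only):

```python
# A minimal sketch of LMDB's transactional key-value API; file name and
# data are illustrative, not part of the tutorial code
import lmdb

env = lmdb.open('example.lmdb', map_size=10 * 1024 * 1024, subdir=False)
with env.begin(write=True) as txn:
    txn.put(b'doc-id-1', b'serialized document bytes')
with env.begin() as txn:
    print(txn.get(b'doc-id-1'))  # b'serialized document bytes' (None if absent)
env.close()
```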

SimpleIndexer

To implement SimpleIndexer, we can leverage Jina’s DocumentArrayMemmap. You can read about this data type here.

Our indexer creates an instance of DocumentArrayMemmap when it's initialized. We want to store indexed documents inside the workspace folder, which is why we pass the executor's workspace attribute to DocumentArrayMemmap.

To index, we implement the method index, which is bound to the index Flow. It's as simple as extending the DocumentArrayMemmap instance with the received docs.

On the other hand, for search, we implement the method search. We bind it to the query flow using the decorator @requests(on='/search').

In Jina, searching for query documents can be done by adding the results to the matches attribute of each query document. Since docs is a DocumentArray, we can use the method match to match the queries against the indexed documents. Read more about match here. There's one more detail: we indexed whole documents, but we need to match the query documents against the chunks of the indexed images. Luckily, DocumentArray.match lets us specify the traversal paths of the right-hand-side parameter with the parameter traversal_rdarray. Since we want to match the left-side docs (the queries) against the chunks of the right-side docs (the indexed docs), we specify traversal_rdarray=['c'].

```python
from typing import Dict, Optional

from jina import DocumentArray, Executor, requests
from jina.types.arrays.memmap import DocumentArrayMemmap


class SimpleIndexer(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Persist indexed documents on disk inside the executor workspace
        self._storage = DocumentArrayMemmap(
            self.workspace, key_length=kwargs.get('key_length', 64)
        )

    @requests(on='/index')
    def index(
        self,
        docs: Optional['DocumentArray'] = None,
        **kwargs,
    ):
        if docs:
            self._storage.extend(docs)

    @requests(on='/search')
    def search(
        self,
        docs: Optional['DocumentArray'] = None,
        parameters: Optional[Dict] = None,
        **kwargs,
    ):
        if not docs:
            return
        # Match query docs against the *chunks* of the stored docs
        docs.match(self._storage, traversal_rdarray=['c'])
```

LMDBStorage

In order to implement the LMDBStorage, we need the following parts:

I. Handler

This will be a context manager that we will use when we access our LMDB database. We will create it as a standalone class.

II. LMDBStorage constructor

The constructor should initialize a few attributes:

  • the map_size of the database

  • the default_traversal_paths. We need traversal paths because we will not traverse documents the same way in the index and query Flows: during index, we want to store the root documents, whereas during query, we need to get the matches of documents by ID.

  • the index file: again, to keep things clean, we will store the index file inside the workspace folder. Therefore we can use the workspace attribute.

III. LMDBStorage.index

In order to index documents, we first start a transaction (so that our storage executor is ACID-compliant). Then we traverse them according to the traversal_paths (the root, in the index Flow). Finally, each document is serialized to a string and then added to the database (the key is the document ID).

IV. LMDBStorage.search

Unlike search in the SimpleIndexer, here we only wish to get the matched documents by ID and return them. The matched documents arriving at this executor are empty and contain only IDs; the goal is to return full matched documents. To accomplish this, we again start a transaction, traverse the matched documents, get each matched document by ID, and use the result to fill in our documents.

```python
import os
from typing import Dict, List

import lmdb
from jina import Document, DocumentArray, Executor, requests


class _LMDBHandler:
    def __init__(self, file, map_size):
        # see https://lmdb.readthedocs.io/en/release/#environment-class for usage
        self.file = file
        self.map_size = map_size

    @property
    def env(self):
        return self._env

    def __enter__(self):
        self._env = lmdb.Environment(
            self.file,
            map_size=self.map_size,
            subdir=False,
            readonly=False,
            metasync=True,
            sync=True,
            map_async=False,
            mode=493,
            create=True,
            readahead=True,
            writemap=False,
            meminit=True,
            max_readers=126,
            max_dbs=0,  # means only one db
            max_spare_txns=1,
            lock=True,
        )
        return self._env

    def __exit__(self, exc_type, exc_val, exc_tb):
        if hasattr(self, '_env'):
            self._env.close()


class LMDBStorage(Executor):
    def __init__(
        self,
        map_size: int = 1048576000,  # in bytes, 1000 MB
        default_traversal_paths: List[str] = ['r'],
        *args,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        self.map_size = map_size
        self.default_traversal_paths = default_traversal_paths
        self.file = os.path.join(self.workspace, 'db.lmdb')
        if not os.path.exists(self.workspace):
            os.makedirs(self.workspace)

    def _handler(self):
        return _LMDBHandler(self.file, self.map_size)

    @requests(on='/index')
    def index(self, docs: DocumentArray, parameters: Dict, **kwargs):
        traversal_paths = parameters.get(
            'traversal_paths', self.default_traversal_paths
        )
        if docs is None:
            return
        with self._handler() as env:
            with env.begin(write=True) as transaction:
                for d in docs.traverse_flat(traversal_paths):
                    # Store each document under its ID
                    transaction.put(d.id.encode(), d.SerializeToString())

    @requests(on='/search')
    def search(self, docs: DocumentArray, parameters: Dict, **kwargs):
        traversal_paths = parameters.get(
            'traversal_paths', self.default_traversal_paths
        )
        if docs is None:
            return
        docs_to_get = docs.traverse_flat(traversal_paths)
        with self._handler() as env:
            with env.begin(write=True) as transaction:
                for d in docs_to_get:
                    # Fetch the stored document by ID and fill in its data,
                    # keeping the original ID
                    id = d.id
                    serialized_doc = Document(transaction.get(d.id.encode()))
                    d.update(serialized_doc)
                    d.id = id
```

SimpleRanker

You might wonder: why do we need a ranker at all?

A ranker is needed because we will be matching small query images against the chunks of parent documents. But how can we get back to the parent documents (aka the full images) given the chunks? And what if two chunks belonging to the same parent are matched? We can solve this by aggregating the similarity scores of chunks that belong to the same parent (using an aggregation method; in our case, the min value). So, for each query document, we perform the following:

  1. We create an empty collection of parent scores. This collection will store, for each parent, a list of scores of its chunk documents.

  2. For each match, since it's a chunk document, we can retrieve its parent_id. And since it's also a match document, we get its match score and add that value to the parent scores collection.

  3. After processing all matches, we need to aggregate the scores of each parent using the min metric.

  4. Finally, using the aggregated score values of parents, we can create a new list of matches (this time consisting of parents, not chunks). We also need to sort the matches list by aggregated scores.

When query documents exit the SimpleRanker, they have matches consisting of parent documents. However, these parent documents contain just IDs. That's why, in the previous steps, we created LMDBStorage: to retrieve the parent documents by ID and fill them with data.

```python
from collections import defaultdict
from typing import Dict, Iterable, Optional

from jina import Document, DocumentArray, Executor, requests


class SimpleRanker(Executor):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.metric = 'cosine'

    @requests(on='/search')
    def rank(
        self, docs: Optional[DocumentArray] = None, parameters: Dict = {}, **kwargs
    ):
        if docs is None:
            return
        for doc in docs:
            parents_scores = defaultdict(list)
            for m in DocumentArray([doc]).traverse_flat(['m']):
                parents_scores[m.parent_id].append(m.scores[self.metric].value)
            # Aggregate match scores for parent document and
            # create doc's match based on parent document of matched chunks
            new_matches = []
            for match_parent_id, scores in parents_scores.items():
                score = min(scores)
                new_matches.append(
                    Document(id=match_parent_id, scores={self.metric: score})
                )
            # Sort the matches by aggregated score, best (lowest) first
            doc.matches = new_matches
            doc.matches.sort(key=lambda d: d.scores[self.metric].value)
```
Building Flows

Indexing

Now, after creating executors, it’s time to use them in order to build an index Flow and index our data.

Building the index Flow

We create a Flow object and add executors one after the other with the right parameters:

  1. YoloV5Segmenter: We should also specify the device.

  2. CLIPImageEncoder: It also receives the device parameter. Since we only encode the chunks, we specify 'traversal_paths': ['c'].

  3. SimpleIndexer: We need to specify the workspace parameter.

  4. LMDBStorage: We also need to specify the workspace parameter. Furthermore, this executor can run in parallel to the other branch; we achieve this with needs='gateway'. Finally, we set default_traversal_paths to ['r'].

  5. A final executor which just waits for both branches.

After building the index Flow, we can plot it to verify that we’re using the correct architecture.

```python
from jina import Flow

device = 'cpu'  # or 'cuda' if a GPU is available

index_flow = (
    Flow()
    .add(uses=YoloV5Segmenter, name='segmenter', uses_with={'device': device})
    .add(
        uses=CLIPImageEncoder,
        name='encoder',
        uses_with={'device': device, 'traversal_paths': ['c']},
    )
    .add(uses=SimpleIndexer, name='chunks_indexer', workspace='workspace')
    .add(
        uses=LMDBStorage,
        name='root_indexer',
        workspace='workspace',
        needs='gateway',  # runs in parallel to the segment -> encode -> index branch
        uses_with={'default_traversal_paths': ['r']},
    )
    .add(name='wait_both', needs=['root_indexer', 'chunks_indexer'])
)
index_flow.plot()
```

[Image: ../../../_images/index_flow.svg]

Now it's time to index the dataset that we downloaded; specifically, we will index the images inside the images folder. This helper function converts image files into Jina Documents and yields them:

```python
from glob import glob

from jina import Document


def input_generator():
    # Convert each image file into a Jina Document carrying the image blob
    for filename in glob('images/*.jpg'):
        doc = Document(uri=filename, tags={'filename': filename})
        doc.load_uri_to_image_blob()
        yield doc
```

The final step in this section is to send the input documents to the index Flow. Note that indexing can take a while:

```python
with index_flow:
    input_docs = input_generator()
    index_flow.post(on='/index', inputs=input_docs, show_progress=True)
```

```text
Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master
Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master
4/6 waiting segmenter encoder to be ready...YOLOv5 🚀 2021-10-29 torch 1.9.0+cu111 CPU
4/6 waiting segmenter encoder to be ready...Fusing layers...
4/6 waiting segmenter encoder to be ready...Model Summary: 213 layers, 7225885 parameters, 0 gradients
Adding AutoShape...
Flow@…[I]:🎉 Flow is ready to use!
    🔗 Protocol:         GRPC
    🏠 Local access:     0.0.0.0:44619
    🔒 Private network:  172.28.0.2:44619
    🌐 Public address:   34.73.118.227:44619
DONE ━━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0:01:11 0.0 step/s 2 steps done in 1 minute and 11 seconds
```

Searching

Now, let’s build the search Flow and use it in order to find sample query images.

Our Flow contains the following executors:

  1. CLIPImageEncoder: It receives the device parameter. This time, since we want to encode the root query documents, we specify 'traversal_paths': ['r'].

  2. SimpleIndexer: We need to specify the workspace parameter

  3. SimpleRanker

  4. LMDBStorage: First we specify the workspace parameter. Then we need different traversal paths: this time we will be traversing matches, so 'default_traversal_paths': ['m'].

```python
from jina import Flow

device = 'cpu'
query_flow = (
    Flow()
    .add(
        uses=CLIPImageEncoder,
        name='encoder',
        uses_with={'device': device, 'traversal_paths': ['r']},
    )
    .add(uses=SimpleIndexer, name='chunks_indexer', workspace='workspace')
    .add(uses=SimpleRanker, name='ranker')
    .add(
        uses=LMDBStorage,
        workspace='workspace',
        name='root_indexer',
        uses_with={'default_traversal_paths': ['m']},
    )
)
```

Let's plot our Flow:

```python
query_flow.plot()
```

[Image: ../../../_images/query_flow.svg]

Finally, we can start querying. We will use images inside the query folder. For each image, we will create a Jina Document. Then we send our documents to the query Flow and receive the response.

For each query document, we can print the image and its top 3 search results:

```python
import glob

import matplotlib.pyplot as plt

with query_flow:
    docs = [Document(uri=filename) for filename in glob.glob('query/*.jpg')]
    for doc in docs:
        doc.load_uri_to_image_blob()
    resp = query_flow.post('/search', docs, return_results=True)
    for doc in resp[0].docs:
        print('query:')
        plt.imshow(doc.blob)
        plt.show()
        print('results:')
        show_docs(doc.matches)  # display the top matches (helper sketched below)
```
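Note that show_docs is not defined in this excerpt; a minimal sketch using matplotlib (assuming each matched document carries an image blob, which is the case after LMDBStorage fills them in) could look like this:

```python
import matplotlib.pyplot as plt


def show_docs(docs, top_k=3):
    # Hypothetical helper: display the image blob of the top-k matched documents
    for doc in docs[:top_k]:
        plt.imshow(doc.blob)
        plt.show()
```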

Sample results:

query:

[Image: ../../../_images/query.png]

results:

[Image: ../../../_images/result_1.png]

[Image: ../../../_images/result_2.png]

[Image: ../../../_images/result_3.png]

Congratulations!

The approach we've adopted can effectively match the small bird image against bigger images containing birds.

Again, the full source code of this tutorial is available in this Google Colab notebook.

Feel free to try it!