Full Text Search via Feature Hashing

Tip

Find the full source code and run FeatureHasher on Jina Hub.

Full-text search often implies solutions based on good old term frequency. Can Jina do that? Yes! And you know you have come to the right community when we skip the question of why and go straight to how. Jokes aside, there are real-world use cases with exactly this requirement. In practice, not all text needs to be embedded with a heavy DNN; some text, such as keywords, phrases, simple sentences, source code and commands, especially semi-structured text, is often better searched as-is.

This article introduces the basic idea of feature hashing and how to use it for full-text search.

Good-old term frequency

Let’s look at an example and recap what term frequency is about. Say you have two sentences:

  1. i love jina
  2. but does jina love me

If you apply term-frequency methodology, you first have to build a dictionary.

{'i': 1, 'love': 2, 'jina': 3, 'but': 4, 'does': 5, 'me': 6}

And then convert the original sentences into 6-dimensional vectors:

  1. [1, 1, 1, 0, 0, 0]
  2. [0, 1, 1, 1, 1, 1]

Note that this vector does not need to be 0-1 only; you can use the term frequency as the value of each element. At search time, you simply compute the cosine distance between your query vector and the indexed vectors.
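To make the recipe concrete, here is a minimal sketch of it in plain Python (the variable and helper names are mine, and the indices here are 0-based rather than the 1-based dictionary above):

    import numpy as np

    sentences = ['i love jina', 'but does jina love me']

    # build the vocabulary from all indexed sentences
    vocab = {}
    for s in sentences:
        for w in s.split():
            vocab.setdefault(w, len(vocab))

    # one dimension per vocabulary entry, value = term frequency
    def tf_vector(s):
        v = np.zeros(len(vocab))
        for w in s.split():
            if w in vocab:
                v[vocab[w]] += 1
        return v

    q = tf_vector('i love jina')
    d = tf_vector('but does jina love me')
    cos_sim = q @ d / (np.linalg.norm(q) * np.linalg.norm(d))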

The problem with this approach is that the dimension of the final vector is unbounded and proportional to the vocabulary size, which you cannot really guarantee to be consistent between index and search time. In practice, this approach easily results in 10K-dimensional sparse vectors, which are not easy to store or compute with.

This is basically the methodology we used in the first tutorial.

Feature hashing

Feature hashing is a fast and space-efficient way of turning arbitrary features into fixed-length vectors. It works by applying a hash function to the features and using their hash values as indices directly, rather than looking the indices up in an associative array.

Compared to term frequency, feature hashing defines a bounded embedding space that stays fixed and does not grow with the dataset. When using feature hashing, you can completely forget about the vocabulary or feature set; they are irrelevant to the algorithm.

Let’s see how it works. We first define the number of dimensions we want to embed our text into, say 256.

Then we need a function that maps any word into [0, 255] so that each word will correspond to one column. For example,

    import hashlib

    h = lambda x: int(hashlib.md5(str(x).encode('utf-8')).hexdigest(), base=16) % 256

    h('i')     # 65
    h('love')  # 126
    h('jina')  # 7

Here h() is our hash function; it is the essence of feature hashing. You are free to construct other hash functions, as long as they are fast, deterministic and yield few collisions.
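For instance, here is one possible alternative built on a different digest (a sketch; blake2b is just one choice among many):

    import hashlib

    # same recipe as h() above, only the underlying digest differs
    h2 = lambda x: int(hashlib.blake2b(str(x).encode('utf-8')).hexdigest(), base=16) % 256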

Now that we have the indices, we can encode the sentence i love jina as:

    import numpy as np

    embed = np.zeros(256)
    embed[65] = 1
    embed[126] = 1
    embed[7] = 1

Again, embed does not need to be 0-1 only; you can use the term frequency of each word as the value of the corresponding element. You may also use a sparse array to store the embedding for better space efficiency, as sketched below.
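For example, a minimal sketch of the same embedding stored sparsely with SciPy (the exact storage format is up to you):

    from scipy.sparse import csr_matrix

    # one row, 256 columns, only the three non-zero entries are stored
    embed_sparse = csr_matrix(([1, 1, 1], ([0, 0, 0], [65, 126, 7])), shape=(1, 256))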

That’s it. Very simple right?

Build FeatureHasher executor

Let’s write everything we learned into an Executor. The full source code can be found here.

Let’s first add the basic arguments to the __init__ function:

    import hashlib
    from typing import Tuple

    from jina import Executor


    class FeatureHasher(Executor):
        def __init__(self, n_dim: int = 256, sparse: bool = False, text_attrs: Tuple[str, ...] = ('text',), **kwargs):
            super().__init__(**kwargs)
            self.n_dim = n_dim
            self.hash = hashlib.md5
            self.text_fields = text_attrs
            self.sparse = sparse

n_dim controls the trade-off between space and hash effectiveness. An extreme case such as n_dim=1 forces all words to map to the same index, which is really bad. A very large n_dim avoids most collisions but is not very space-efficient. If you don’t know what the best option is, just go with 256; it is often good enough.
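If you want to eyeball that trade-off on your own vocabulary, a quick sketch like the following counts how many words collide for a given n_dim (the helper name is mine):

    import hashlib

    def n_collisions(words, n_dim):
        # number of words that end up sharing a column with another word
        cols = [int(hashlib.md5(w.encode('utf-8')).hexdigest(), base=16) % n_dim for w in words]
        return len(cols) - len(set(cols))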

Next we add our embedding algorithm to encode() and bind it with @requests. This is the core logic of the FeatureHasher.

    import numpy as np
    from jina import DocumentArray, requests

    # ...

    @requests
    def encode(self, docs: DocumentArray, **kwargs):
        if self.sparse:
            from scipy.sparse import csr_matrix

        for idx, doc in enumerate(docs):
            all_tokens = doc.get_vocabulary(self.text_fields)
            if all_tokens:
                idxs, data = [], []  # sparse
                table = np.zeros(self.n_dim)  # dense
                for f_id, val in all_tokens.items():
                    h = int(self.hash(f_id.encode('utf-8')).hexdigest(), base=16)
                    col = h % self.n_dim
                    idxs.append((0, col))
                    data.append(np.sign(h) * val)
                    table[col] += np.sign(h) * val
                if self.sparse:
                    doc.embedding = csr_matrix((data, zip(*idxs)), shape=(1, self.n_dim))
                else:
                    doc.embedding = table

Here we use the Document API doc.get_vocabulary to get all tokens and their counts as a dict. We then use the count, i.e. the term frequency, as the value at the corresponding index.
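To make the intermediate data structure concrete, the result of get_vocabulary is a plain token-to-count dict, roughly of this shape (the printed output here is illustrative):

    from jina import Document

    d = Document(text='i love jina but does jina love me')
    print(d.get_vocabulary(('text',)))
    # something like: {'i': 1, 'love': 2, 'jina': 2, 'but': 1, 'does': 1, 'me': 1}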

Result

Let’s take a look at how to use it for full-text search. We first download Pride and Prejudice by Jane Austen and then cut it into non-empty sentences.

    from jina import Document, DocumentArray, Executor

    # load <Pride and Prejudice by Jane Austen>
    d = Document(uri='https://www.gutenberg.org/files/1342/1342-0.txt').load_uri_to_text()

    # cut into non-empty sentences, store in a DA
    da = DocumentArray(Document(text=s.strip()) for s in d.text.split('\n') if s.strip())

Here we use the Document API load_uri_to_text and store the sentences in da as one DocumentArray.

Embed all of them with our FeatureHasher, do a self-matching, and take the top-5 results:

    exec = Executor.from_hub('jinahub://FeatureHasher')
    exec.encode(da)

    da.match(da, exclude_self=True, limit=5, normalization=(1, 0))

Let’s print them

    for d in da:
        print(d.text)
        for m in d.matches:
            print(m.scores['cosine'], m.text)
        input()
    matching...
    total sentences: 12153
    The Project Gutenberg eBook of Pride and Prejudice, by Jane Austen
    <jina.types.score.NamedScore ('value',) at 5846290384> *** END OF THE PROJECT GUTENBERG EBOOK PRIDE AND PREJUDICE ***
    <jina.types.score.NamedScore ('value',) at 5846288464> *** START OF THE PROJECT GUTENBERG EBOOK PRIDE AND PREJUDICE ***
    <jina.types.score.NamedScore ('value',) at 5846289872> production, promotion and distribution of Project Gutenberg-tm
    <jina.types.score.NamedScore ('value',) at 5846290000> Pride and Prejudice
    <jina.types.score.NamedScore ('value',) at 5846289744> By Jane Austen
    This eBook is for the use of anyone anywhere in the United States and
    <jina.types.score.NamedScore ('value',) at 5846290000> This eBook is for the use of anyone anywhere in the United States and
    <jina.types.score.NamedScore ('value',) at 5846289744> by the awkwardness of the application, and at length wholly
    <jina.types.score.NamedScore ('value',) at 5846290000> Elizabeth passed the chief of the night in her sisters room, and
    <jina.types.score.NamedScore ('value',) at 5846289744> the happiest memories in the world. Nothing of the past was
    <jina.types.score.NamedScore ('value',) at 5846290000> charities and charitable donations in all 50 states of the United
    most other parts of the world at no cost and with almost no restrictions
    <jina.types.score.NamedScore ('value',) at 5845950032> most other parts of the world at no cost and with almost no
    <jina.types.score.NamedScore ('value',) at 5843094160> Pride and Prejudice
    <jina.types.score.NamedScore ('value',) at 5845950032> Title: Pride and Prejudice
    <jina.types.score.NamedScore ('value',) at 5845950352> With no expectation of pleasure, but with the strongest
    <jina.types.score.NamedScore ('value',) at 5845763088> *** END OF THE PROJECT GUTENBERG EBOOK PRIDE AND PREJUDICE ***
    whatsoever. You may copy it, give it away or re-use it under the terms
    <jina.types.score.NamedScore ('value',) at 5845762960> restrictions whatsoever. You may copy it, give it away or re-use it
    <jina.types.score.NamedScore ('value',) at 5845763088> man.
    <jina.types.score.NamedScore ('value',) at 5845764624> Mr. Bennet came from it. The coach, therefore, took them the
    <jina.types.score.NamedScore ('value',) at 5845713168> I almost envy you the pleasure, and yet I believe it would be
    <jina.types.score.NamedScore ('value',) at 5845764624> therefore, I shall be uniformly silent; and you may assure

In practice, you would implement matching and storing via an indexer inside a Flow. This tutorial is only for demo purposes, so all operations not related to feature hashing are implemented without the Flow to avoid distraction.
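If you do want to run the FeatureHasher inside a Flow, a minimal sketch could look like this (reusing the da from above; the indexer is omitted, and which one you pick is up to you):

    from jina import Flow

    # a Flow with only the FeatureHasher; add your indexer of choice after it
    f = Flow().add(uses='jinahub://FeatureHasher')

    with f:
        f.post('/index', da)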

Feature hashing is a simple yet elegant method, and not only for full-text search. It can also be used on tabular data, as we shall see in this tutorial.