Inverted Index

Elasticsearch uses a structure called an inverted index, which is designed
to allow very fast full-text searches. An inverted index consists of a list
of all the unique words that appear in any document, and for each word, a list
of the documents in which it appears.

For example, let’s say we have two documents, each with a content field
containing:

  1. "The quick brown fox jumped over the lazy dog"
  2. "Quick brown foxes leap over lazy dogs in summer"

To create an inverted index, we first split the content field of each
document into separate words (which we call terms or tokens), create a
sorted list of all the unique terms, then list in which document each term
appears. The result looks something like this:

  Term    | Doc_1 | Doc_2
  --------+-------+------
  Quick   |       |   X
  The     |   X   |
  brown   |   X   |   X
  dog     |   X   |
  dogs    |       |   X
  fox     |   X   |
  foxes   |       |   X
  in      |       |   X
  jumped  |   X   |
  lazy    |   X   |   X
  leap    |       |   X
  over    |   X   |   X
  quick   |   X   |
  summer  |       |   X
  the     |   X   |
  --------+-------+------

Now, if we want to search for "quick brown" we just need to find the
documents in which each term appears:

  Term    | Doc_1 | Doc_2
  --------+-------+------
  brown   |   X   |   X
  quick   |   X   |
  --------+-------+------
  Total   |   2   |   1

Both documents match, but the first document has more matches than the second.
If we apply a naive similarity algorithm which just counts the number of
matching terms, then we can say that the first document is a better match —
is more relevant to our query — than the second document.
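
That naive similarity algorithm can be sketched as follows (the index is rebuilt inline so the snippet stands alone; the scoring is plain term-match counting, not Elasticsearch's actual relevance formula):

```python
# Score each document by counting how many of the query terms it contains.
docs = {
    "Doc_1": "The quick brown fox jumped over the lazy dog",
    "Doc_2": "Quick brown foxes leap over lazy dogs in summer",
}

index = {}
for doc_id, content in docs.items():
    for term in content.split():
        index.setdefault(term, set()).add(doc_id)

def score(query):
    """Naive similarity: one point per matching query term."""
    counts = {doc_id: 0 for doc_id in docs}
    for term in query.split():
        for doc_id in index.get(term, ()):
            counts[doc_id] += 1
    return counts

print(score("quick brown"))  # → {'Doc_1': 2, 'Doc_2': 1}
```

Doc_1 matches both terms while Doc_2 matches only "brown", so Doc_1 scores higher, mirroring the Total row in the table.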

But there are a few problems with our current inverted index:

  1. "Quick" and "quick" appear as separate terms, while the user probably
    thinks of them as the same word.

  2. "fox" and "foxes" are pretty similar, as are "dog" and "dogs"
    — they share the same root word.

  3. "jumped" and "leap", while not from the same root word, are similar
    in meaning — they are synonyms.

With the above index, a search for "+Quick +fox" wouldn’t match any
documents. (Remember, a preceding + means that the word must be present).
Both the term "Quick" and the term "fox" have to be in the same document
in order to satisfy the query, but the first doc contains "quick fox" and
the second doc contains "Quick foxes".

Our user could reasonably expect both documents to match the query. We can do
better.

If we normalize the terms into a standard format, then we can find documents
that contain terms that are not exactly the same as the user requested, but
are similar enough to still be relevant. For instance:

  1. "Quick" can be lowercased to become "quick".

  2. "foxes" can be stemmed — reduced to its root form — to
    become "fox". Similarly "dogs" could be stemmed to "dog".

  3. "jumped" and "leap" are synonyms and can be indexed as just the
    single term "jump".

Now the index looks like this:

  Term    | Doc_1 | Doc_2
  --------+-------+------
  brown   |   X   |   X
  dog     |   X   |   X
  fox     |   X   |   X
  in      |       |   X
  jump    |   X   |   X
  lazy    |   X   |   X
  over    |   X   |   X
  quick   |   X   |   X
  summer  |       |   X
  the     |   X   |   X
  --------+-------+------
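
The three normalization rules can be sketched as a toy analyzer. The stemming and synonym tables below are hand-written illustrations covering only the words in our example; real analyzers use proper stemming algorithms and configurable synonym filters:

```python
# Toy analyzer: lowercase, then apply tiny hand-written stemming and
# synonym tables. Illustrative only -- not Elasticsearch's analysis chain.
STEMS = {"foxes": "fox", "dogs": "dog"}        # hypothetical stemming table
SYNONYMS = {"jumped": "jump", "leap": "jump"}  # collapse synonyms to one term

def analyze(text):
    terms = []
    for token in text.lower().split():
        token = STEMS.get(token, token)      # rule 2: stem to root form
        token = SYNONYMS.get(token, token)   # rule 3: unify synonyms
        terms.append(token)                  # rule 1 was the lowercasing
    return terms

print(analyze("Quick brown foxes leap over lazy dogs in summer"))
# → ['quick', 'brown', 'fox', 'jump', 'over', 'lazy', 'dog', 'in', 'summer']
```

Indexing the output of `analyze()` instead of the raw tokens produces the normalized index shown above.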

But we’re not there yet. Our search for "+Quick +fox" would still fail,
because we no longer have the exact term "Quick" in our index. However, if
we apply the same normalization rules that we used on the content field to
our query string, it would become a query for "+quick +fox", which would
match both documents!
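
Putting both halves together: index the analyzed terms, then run the same analysis over the query string before looking terms up. The `analyze()` function below repeats the toy lowercase/stem/synonym rules (hand-written tables, illustrative only) so the snippet stands alone:

```python
# Analyze both the documents at index time and the query at search time,
# so "+Quick +fox" is looked up as "+quick +fox" and matches both docs.
STEMS = {"foxes": "fox", "dogs": "dog"}        # hypothetical stemming table
SYNONYMS = {"jumped": "jump", "leap": "jump"}  # collapse synonyms to one term

def analyze(text):
    out = []
    for token in text.lower().split():
        token = STEMS.get(token, token)
        out.append(SYNONYMS.get(token, token))
    return out

docs = {
    "Doc_1": "The quick brown fox jumped over the lazy dog",
    "Doc_2": "Quick brown foxes leap over lazy dogs in summer",
}

index = {}
for doc_id, content in docs.items():
    for term in analyze(content):            # normalize at index time
        index.setdefault(term, set()).add(doc_id)

def must_match(query):
    """Every query term must be present (the '+' operator):
    intersect the posting lists of the analyzed terms."""
    postings = [index.get(term, set()) for term in analyze(query)]
    return set.intersection(*postings) if postings else set()

print(sorted(must_match("Quick fox")))  # → ['Doc_1', 'Doc_2']
```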

IMPORTANT: You can only find terms that actually exist in your index, so
both the indexed text and the query string must be normalized into the same
form.

This process of tokenization and normalization is called analysis, which we
discuss in the next section.