Edge n-gram token filter

Forms an n-gram of a specified length from the beginning of a token.

For example, you can use the edge_ngram token filter to change quick to qu.

When not customized, the filter creates 1-character edge n-grams by default.

This filter uses Lucene’s EdgeNGramTokenFilter.

The edge_ngram filter is similar to the ngram token filter. However, the edge_ngram filter only outputs n-grams that start at the beginning of a token. These edge n-grams are useful for search-as-you-type queries.

Example

The following analyze API request uses the edge_ngram filter to convert the quick brown fox jumps to 1-character and 2-character edge n-grams:

  GET _analyze
  {
    "tokenizer": "standard",
    "filter": [
      {
        "type": "edge_ngram",
        "min_gram": 1,
        "max_gram": 2
      }
    ],
    "text": "the quick brown fox jumps"
  }

The filter produces the following tokens:

  [ t, th, q, qu, b, br, f, fo, j, ju ]

Add to an analyzer

The following create index API request uses the edge_ngram filter to configure a new custom analyzer.

  PUT edge_ngram_example
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "standard_edge_ngram": {
            "tokenizer": "standard",
            "filter": [ "edge_ngram" ]
          }
        }
      }
    }
  }
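
To check the new analyzer, you can run the analyze API against the index. The following request is a minimal sketch that assumes the edge_ngram_example index created above:

  GET edge_ngram_example/_analyze
  {
    "analyzer": "standard_edge_ngram",
    "text": "the quick brown fox jumps"
  }

Because the built-in edge_ngram filter defaults to 1-character grams, this request should produce the tokens [ t, q, b, f, j ].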

Configurable parameters

max_gram

(Optional, integer) Maximum character length of a gram. For custom token filters, defaults to 2. For the built-in edge_ngram filter, defaults to 1.

See Limitations of the max_gram parameter.

min_gram

(Optional, integer) Minimum character length of a gram. Defaults to 1.

preserve_original

(Optional, boolean) Emits the original token in addition to its n-grams when set to true. Defaults to false.
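
For example, the following analyze API request is a minimal sketch of this behavior with preserve_original enabled:

  GET _analyze
  {
    "tokenizer": "standard",
    "filter": [
      {
        "type": "edge_ngram",
        "min_gram": 1,
        "max_gram": 2,
        "preserve_original": true
      }
    ],
    "text": "quick"
  }

This should produce the tokens q, qu, and quick, rather than only q and qu.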

side

(Optional, string) Deprecated. Indicates whether to truncate tokens from the front or back. Defaults to front.

Instead of using the back value, you can use the reverse token filter before and after the edge_ngram filter to achieve the same results.
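
For example, an analyzer along the following lines wraps the edge_ngram filter with the reverse filter; the index and analyzer names here are placeholders:

  PUT reverse_edge_ngram_example
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "back_edge_ngram": {
            "tokenizer": "standard",
            "filter": [ "reverse", "edge_ngram", "reverse" ]
          }
        }
      }
    }
  }

The first reverse filter flips each token, the edge_ngram filter takes grams from what is now the front, and the second reverse filter restores the original character order, so the resulting grams come from the back of each token.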

Customize

To customize the edge_ngram filter, duplicate it to create the basis for a new custom token filter. You can modify the filter using its configurable parameters.

For example, the following request creates a custom edge_ngram filter that forms n-grams between 3 and 5 characters in length.

  PUT edge_ngram_custom_example
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "default": {
            "tokenizer": "whitespace",
            "filter": [ "3_5_edgegrams" ]
          }
        },
        "filter": {
          "3_5_edgegrams": {
            "type": "edge_ngram",
            "min_gram": 3,
            "max_gram": 5
          }
        }
      }
    }
  }
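
As a quick check, you can test the new default analyzer with the analyze API. This is a sketch assuming the edge_ngram_custom_example index from the request above:

  GET edge_ngram_custom_example/_analyze
  {
    "analyzer": "default",
    "text": "elasticsearch"
  }

With a min_gram of 3 and a max_gram of 5, this should return the tokens ela, elas, and elast.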

Limitations of the max_gram parameter

The edge_ngram filter’s max_gram value limits the character length of tokens. When the edge_ngram filter is used with an index analyzer, this means search terms longer than the max_gram length may not match any indexed terms.

For example, if the max_gram is 3, searches for apple won’t match the indexed term app.

To account for this, you can use the truncate filter with a search analyzer to shorten search terms to the max_gram character length. However, this could return irrelevant results.

For example, if the max_gram is 3 and search terms are truncated to three characters, the search term apple is shortened to app. This means searches for apple return any indexed terms matching app, such as apply, approximate, and apple.
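
One way to wire up the truncate approach described above, sketched here with placeholder index, filter, and field names, is to pair an index analyzer that uses a 3-character edge_ngram filter with a search analyzer that uses a 3-character truncate filter:

  PUT truncate_example
  {
    "settings": {
      "analysis": {
        "filter": {
          "3_char_edge_ngram": {
            "type": "edge_ngram",
            "min_gram": 1,
            "max_gram": 3
          },
          "3_char_truncate": {
            "type": "truncate",
            "length": 3
          }
        },
        "analyzer": {
          "index_edge_ngram": {
            "tokenizer": "standard",
            "filter": [ "3_char_edge_ngram" ]
          },
          "search_truncate": {
            "tokenizer": "standard",
            "filter": [ "3_char_truncate" ]
          }
        }
      }
    },
    "mappings": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "index_edge_ngram",
          "search_analyzer": "search_truncate"
        }
      }
    }
  }

Here the title field indexes edge n-grams of up to three characters, and search terms are truncated to three characters before matching, so a query for apple effectively becomes a query for app.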

We recommend testing both approaches to see which best fits your use case and desired search experience.