Remove duplicates token filter

Removes duplicate tokens in the same position.

The remove_duplicates filter uses Lucene’s RemoveDuplicatesTokenFilter.

Example

To see how the remove_duplicates filter works, you first need to produce a token stream containing duplicate tokens in the same position.

The following analyze API request uses the keyword_repeat and stemmer filters to create stemmed and unstemmed tokens for jumping dog.

```console
GET _analyze
{
  "tokenizer": "whitespace",
  "filter": [
    "keyword_repeat",
    "stemmer"
  ],
  "text": "jumping dog"
}
```

The API returns the following response. Note that the dog token in position 1 is duplicated.

```console-result
{
  "tokens": [
    {
      "token": "jumping",
      "start_offset": 0,
      "end_offset": 7,
      "type": "word",
      "position": 0
    },
    {
      "token": "jump",
      "start_offset": 0,
      "end_offset": 7,
      "type": "word",
      "position": 0
    },
    {
      "token": "dog",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 1
    },
    {
      "token": "dog",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 1
    }
  ]
}
```

To remove one of the duplicate dog tokens, add the remove_duplicates filter to the previous analyze API request.

```console
GET _analyze
{
  "tokenizer": "whitespace",
  "filter": [
    "keyword_repeat",
    "stemmer",
    "remove_duplicates"
  ],
  "text": "jumping dog"
}
```

The API returns the following response. There is now only one dog token in position 1.

```console-result
{
  "tokens": [
    {
      "token": "jumping",
      "start_offset": 0,
      "end_offset": 7,
      "type": "word",
      "position": 0
    },
    {
      "token": "jump",
      "start_offset": 0,
      "end_offset": 7,
      "type": "word",
      "position": 0
    },
    {
      "token": "dog",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 1
    }
  ]
}
```
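The filter's observable behavior can be sketched in a few lines of Python: a token is dropped when an identical term has already been emitted at the same position. This is only an illustration of the rule, not Lucene's actual implementation.

```python
def remove_duplicates(tokens):
    """Keep one token per (term, position) pair, preserving stream order.

    `tokens` is a list of dicts shaped like the _analyze response,
    e.g. {"token": "dog", "position": 1}. Mirrors what the
    remove_duplicates filter does to a token stream; not Lucene's code.
    """
    seen = set()
    out = []
    for tok in tokens:
        key = (tok["token"], tok["position"])
        if key not in seen:
            seen.add(key)
            out.append(tok)
    return out


# The stream produced by keyword_repeat + stemmer for "jumping dog":
stream = [
    {"token": "jumping", "position": 0},
    {"token": "jump", "position": 0},
    {"token": "dog", "position": 1},
    {"token": "dog", "position": 1},  # duplicate: same term, same position
]
print(remove_duplicates(stream))
```

Note that `jumping` and `jump` both survive: they share position 0 but differ in term text, so they are not duplicates under this rule.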

Add to an analyzer

The following create index API request uses the remove_duplicates filter to configure a new custom analyzer.

This custom analyzer uses the keyword_repeat and stemmer filters to create a stemmed and unstemmed version of each token in a stream. The remove_duplicates filter then removes any duplicate tokens in the same position.

```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "keyword_repeat",
            "stemmer",
            "remove_duplicates"
          ]
        }
      }
    }
  }
}
```
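To see how the three filters compose inside this analyzer, here is a hedged Python sketch of the whole chain. The tokenizer and stemmer are simplified stand-ins (whitespace splitting and crude suffix stripping rather than the real `standard` tokenizer and `stemmer` filter); the point is the pipeline shape, not linguistic accuracy.

```python
def toy_stem(word):
    # Crude stand-in for the `stemmer` filter: strip a common suffix.
    for suffix in ("ing", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word


def analyze(text):
    # Simplified tokenizer: split on whitespace, assign positions.
    tokens = [{"token": w, "position": i} for i, w in enumerate(text.split())]

    # keyword_repeat + stemmer: emit each token twice at the same
    # position, with the second copy stemmed.
    stream = []
    for tok in tokens:
        stream.append(tok)
        stream.append({"token": toy_stem(tok["token"]),
                       "position": tok["position"]})

    # remove_duplicates: keep one token per (term, position) pair.
    seen, out = set(), []
    for tok in stream:
        key = (tok["token"], tok["position"])
        if key not in seen:
            seen.add(key)
            out.append(tok)
    return out


print(analyze("jumping dog"))
```

For `"jumping dog"` this yields `jumping` and `jump` at position 0 plus a single `dog` at position 1, matching the _analyze responses above: words the stemmer changes keep both variants, while unchanged words are collapsed to one token.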