Keep words token filter

Keeps only tokens contained in a specified word list.

This filter uses Lucene’s KeepWordFilter.

To remove a list of words from a token stream, use the stop filter.
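For comparison, a minimal stop filter request might look like the following sketch; the inline word list here is illustrative:

  GET _analyze
  {
    "tokenizer": "whitespace",
    "filter": [
      {
        "type": "stop",
        "stopwords": [ "the", "over" ]
      }
    ],
    "text": "the quick fox jumps over the lazy dog"
  }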

Example

The following analyze API request uses the keep filter to keep only the fox and dog tokens from the quick fox jumps over the lazy dog:

  GET _analyze
  {
    "tokenizer": "whitespace",
    "filter": [
      {
        "type": "keep",
        "keep_words": [ "dog", "elephant", "fox" ]
      }
    ],
    "text": "the quick fox jumps over the lazy dog"
  }

The filter produces the following tokens:

  [ fox, dog ]

Configurable parameters

keep_words

(Required*, array of strings) List of words to keep. Only tokens that match words in this list are included in the output.

Either this parameter or keep_words_path must be specified.

keep_words_path

(Required*, string) Path to a file that contains a list of words to keep. Only tokens that match words in this list are included in the output.

This path must be absolute or relative to the config location, and the file must be UTF-8 encoded. Each word in the file must be separated by a line break.

Either this parameter or keep_words must be specified.
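For example, a UTF-8 encoded keep words file with one word per line might look like the following; the contents are illustrative:

  fox
  dog
  elephant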

keep_words_case

(Optional, boolean) If true, lowercase all keep words. Defaults to false.
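For example, the following filter definition, a sketch using the parameters above, lowercases the keep words Fox and Dog so they match lowercase tokens such as fox and dog:

  {
    "type": "keep",
    "keep_words": [ "Fox", "Dog" ],
    "keep_words_case": true
  }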

Customize and add to an analyzer

To customize the keep filter, duplicate it to create the basis for a new custom token filter. You can modify the filter using its configurable parameters.

For example, the following create index API request uses custom keep filters to configure two new custom analyzers:

  • standard_keep_word_array, which uses a custom keep filter with an inline array of keep words
  • standard_keep_word_file, which uses a custom keep filter with a keep words file
  PUT keep_words_example
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "standard_keep_word_array": {
            "tokenizer": "standard",
            "filter": [ "keep_word_array" ]
          },
          "standard_keep_word_file": {
            "tokenizer": "standard",
            "filter": [ "keep_word_file" ]
          }
        },
        "filter": {
          "keep_word_array": {
            "type": "keep",
            "keep_words": [ "one", "two", "three" ]
          },
          "keep_word_file": {
            "type": "keep",
            "keep_words_path": "analysis/example_word_list.txt"
          }
        }
      }
    }
  }
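To verify the configuration, you can test one of the new analyzers with the analyze API. The following sketch uses standard_keep_word_array, which needs no word list file; given the keep_word_array filter above, only the one and two tokens should remain:

  GET keep_words_example/_analyze
  {
    "analyzer": "standard_keep_word_array",
    "text": "one two four"
  }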