Trim token filter

Removes leading and trailing whitespace from each token in a stream. While this can change the length of a token, the trim filter does not change a token’s offsets.

The trim filter uses Lucene’s TrimFilter.
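The filter's behavior can be sketched in a few lines of Python. This is an illustrative model only, not Lucene's actual implementation: each token's text is stripped of leading and trailing whitespace, while its recorded offsets are left untouched.

```python
# Illustrative model of the trim token filter (not Lucene's code):
# strip each token's text, but keep its offsets unchanged.
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    start_offset: int
    end_offset: int

def trim_filter(tokens):
    # Only the token text changes; start_offset and end_offset
    # still point at the original positions in the input.
    return [Token(t.text.strip(), t.start_offset, t.end_offset) for t in tokens]

tokens = trim_filter([Token(" fox ", 0, 5)])
print(tokens[0].text)  # -> fox
print(tokens[0].start_offset, tokens[0].end_offset)  # -> 0 5
```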

Many commonly used tokenizers, such as the standard or whitespace tokenizer, remove whitespace by default. When using these tokenizers, you don’t need to add a separate trim filter.

Example

To see how the trim filter works, you first need to produce a token containing whitespace.

The following analyze API request uses the keyword tokenizer to produce a token for " fox ".

GET _analyze
{
  "tokenizer" : "keyword",
  "text" : " fox "
}

The API returns the following response. Note that the returned " fox " token contains the original text's whitespace.

{
  "tokens": [
    {
      "token": " fox ",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    }
  ]
}

To remove the whitespace, add the trim filter to the previous analyze API request.

GET _analyze
{
  "tokenizer" : "keyword",
  "filter" : ["trim"],
  "text" : " fox "
}

The API returns the following response. The returned fox token does not include any leading or trailing whitespace. Note that despite the change in the token's length, the start_offset and end_offset remain the same.

{
  "tokens": [
    {
      "token": "fox",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    }
  ]
}

Add to an analyzer

The following create index API request uses the trim filter to configure a new custom analyzer.

PUT trim_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "keyword_trim": {
          "tokenizer": "keyword",
          "filter": [ "trim" ]
        }
      }
    }
  }
}
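As a rough mental model, the keyword_trim analyzer emits the entire input as a single token (the keyword tokenizer), then strips surrounding whitespace (the trim filter). A minimal Python sketch of that two-step pipeline, not the actual Elasticsearch implementation:

```python
# Rough Python model of the keyword_trim analyzer above:
# keyword tokenizer -> one token spanning the whole input;
# trim filter -> whitespace stripped, offsets preserved.
def keyword_trim_analyze(text):
    token = {"token": text, "start_offset": 0, "end_offset": len(text)}
    token["token"] = token["token"].strip()
    return [token]

print(keyword_trim_analyze(" fox "))
# -> [{'token': 'fox', 'start_offset': 0, 'end_offset': 5}]
```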