# Token filters

Token filters receive the stream of tokens from the tokenizer and add, remove, or modify the tokens. For example, a token filter may lowercase the tokens so that `Actions` becomes `actions`, remove stop words like `than`, or add synonyms like `talk` for the word `speak`.
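
You can observe this behavior directly with the `_analyze` API. The following request, a minimal sketch, runs the `standard` tokenizer and then applies the `lowercase` token filter to its output:

```json
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "lowercase" ],
  "text": "Actions SPEAK Louder"
}
```

The response contains the tokens `actions`, `speak`, and `louder`.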

The following table lists all token filters that OpenSearch supports. Example requests showing how to apply and configure these filters follow the table.

Token filter | Underlying Lucene token filter | Description
:--- | :--- | :---
`apostrophe` | `ApostropheFilter` | In each token that contains an apostrophe, removes the apostrophe and all characters following it.
`asciifolding` | `ASCIIFoldingFilter` | Converts alphabetic, numeric, and symbolic characters that are not in the Basic Latin Unicode block to their ASCII equivalents, if they exist.
`cjk_bigram` | `CJKBigramFilter` | Forms bigrams of Chinese, Japanese, and Korean (CJK) tokens.
`cjk_width` | `CJKWidthFilter` | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules:<br>- Folds full-width ASCII character variants into the equivalent basic Latin characters.<br>- Folds half-width Katakana character variants into the equivalent Kana characters.
`classic` | `ClassicFilter` | Performs optional post-processing on the tokens generated by the classic tokenizer: removes possessives (`'s`) and removes `.` from acronyms.
`common_grams` | `CommonGramsFilter` | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams.
`conditional` | `ConditionalTokenFilter` | Applies an ordered list of token filters to tokens that match the conditions provided in a script.
`decimal_digit` | `DecimalDigitFilter` | Converts all digits in the Unicode decimal number general category to basic Latin digits (0–9).
`delimited_payload` | `DelimitedPayloadTokenFilter` | Separates a token stream into tokens with corresponding payloads, based on a provided delimiter. A token consists of all characters before the delimiter, and a payload consists of all characters after the delimiter. For example, if the delimiter is `\|`, then for the string `foo\|bar`, `foo` is the token and `bar` is the payload.
`delimited_term_freq` | `DelimitedTermFrequencyTokenFilter` | Separates a token stream into tokens with corresponding term frequencies, based on a provided delimiter. A token consists of all characters before the delimiter, and a term frequency is the integer after the delimiter. For example, if the delimiter is `\|`, then for the string `foo\|5`, `foo` is the token and `5` is the term frequency.
`dictionary_decompounder` | `DictionaryCompoundWordTokenFilter` | Decomposes compound words found in many Germanic languages.
`edge_ngram` | `EdgeNGramTokenFilter` | Tokenizes the given token into edge n-grams (n-grams that start at the beginning of the token) of lengths between `min_gram` and `max_gram`. Optionally, keeps the original token. For configuring these parameters, see the example following the table.
`elision` | `ElisionFilter` | Removes the specified elisions from the beginning of tokens. For example, changes `l’avion` (the plane) to `avion` (plane).
`fingerprint` | `FingerprintFilter` | Sorts and deduplicates the token list and concatenates tokens into a single token.
`flatten_graph` | `FlattenGraphFilter` | Flattens a token graph produced by a graph token filter, such as `synonym_graph` or `word_delimiter_graph`, making the graph suitable for indexing.
`hunspell` | `HunspellStemFilter` | Uses Hunspell rules to stem tokens. Because Hunspell allows a word to have multiple stems, this filter can emit multiple tokens for each consumed token. Requires the configuration of one or more language-specific Hunspell dictionaries.
`hyphenation_decompounder` | `HyphenationCompoundWordTokenFilter` | Uses XML-based hyphenation patterns to find potential subwords in compound words and checks the subwords against the specified word list. The token output contains only the subwords found in the word list.
`keep_types` | `TypeTokenFilter` | Keeps or removes tokens of a specific type.
`keep_words` | `KeepWordFilter` | Checks the tokens against the specified word list and keeps only those that are in the list.
`keyword_marker` | `KeywordMarkerFilter` | Marks specified tokens as keywords, preventing them from being stemmed.
`keyword_repeat` | `KeywordRepeatFilter` | Emits each incoming token twice: once as a keyword and once as a non-keyword.
`kstem` | `KStemFilter` | Provides kstem-based stemming for the English language. Combines algorithmic stemming with a built-in dictionary.
`length` | `LengthFilter` | Removes tokens whose lengths are shorter or longer than the length range specified by `min` and `max`.
`limit` | `LimitTokenCountFilter` | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count.
`lowercase` | `LowerCaseFilter` | Converts tokens to lowercase. The default `LowerCaseFilter` is for the English language. You can set the `language` parameter to `greek` (uses `GreekLowerCaseFilter`), `irish` (uses `IrishLowerCaseFilter`), or `turkish` (uses `TurkishLowerCaseFilter`).
`min_hash` | `MinHashFilter` | Uses the MinHash technique to estimate document similarity. Performs the following operations on a token stream sequentially:<br>1. Hashes each token in the stream.<br>2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket.<br>3. Outputs the smallest hash from each bucket as a token stream.
`multiplexer` | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens.
`ngram` | `NGramTokenFilter` | Tokenizes the given token into n-grams of lengths between `min_gram` and `max_gram`.
`normalization` | `arabic_normalization`: `ArabicNormalizer`<br>`german_normalization`: `GermanNormalizationFilter`<br>`hindi_normalization`: `HindiNormalizer`<br>`indic_normalization`: `IndicNormalizer`<br>`sorani_normalization`: `SoraniNormalizer`<br>`persian_normalization`: `PersianNormalizer`<br>`scandinavian_normalization`: `ScandinavianNormalizationFilter`<br>`scandinavian_folding`: `ScandinavianFoldingFilter`<br>`serbian_normalization`: `SerbianNormalizationFilter` | Normalizes the characters of one of the listed languages.
`pattern_capture` | N/A | Generates a token for every capture group in the provided regular expression. Uses Java regular expression syntax.
`pattern_replace` | N/A | Matches a pattern in the provided regular expression and replaces matching substrings. Uses Java regular expression syntax.
`phonetic` | N/A | Uses a phonetic encoder to emit a metaphone token for each token in the token stream. Requires installing the `analysis-phonetic` plugin.
`porter_stem` | `PorterStemFilter` | Uses the Porter stemming algorithm to perform algorithmic stemming for the English language.
`predicate_token_filter` | N/A | Removes tokens that don’t match the specified predicate script. Supports only inline Painless scripts.
`remove_duplicates` | `RemoveDuplicatesTokenFilter` | Removes duplicate tokens that are in the same position.
`reverse` | `ReverseStringFilter` | Reverses the string corresponding to each token in the token stream. For example, the token `dog` becomes `god`.
`shingle` | `ShingleFilter` | Generates shingles of lengths between `min_shingle_size` and `max_shingle_size` for tokens in the token stream. Shingles are similar to n-grams but apply to words instead of letters. For example, two-word shingles added to the list of unigrams [`contribute`, `to`, `opensearch`] are [`contribute to`, `to opensearch`].
`snowball` | N/A | Stems words using a Snowball-generated stemmer. You can use the `snowball` token filter with the following languages in the `language` field: `Arabic`, `Armenian`, `Basque`, `Catalan`, `Danish`, `Dutch`, `English`, `Estonian`, `Finnish`, `French`, `German`, `German2`, `Hungarian`, `Irish`, `Italian`, `Kp`, `Lithuanian`, `Lovins`, `Norwegian`, `Porter`, `Portuguese`, `Romanian`, `Russian`, `Spanish`, `Swedish`, `Turkish`.
`stemmer` | N/A | Provides algorithmic stemming for the following languages in the `language` field: `arabic`, `armenian`, `basque`, `bengali`, `brazilian`, `bulgarian`, `catalan`, `czech`, `danish`, `dutch`, `dutch_kp`, `english`, `light_english`, `lovins`, `minimal_english`, `porter2`, `possessive_english`, `estonian`, `finnish`, `light_finnish`, `french`, `light_french`, `minimal_french`, `galician`, `minimal_galician`, `german`, `german2`, `light_german`, `minimal_german`, `greek`, `hindi`, `hungarian`, `light_hungarian`, `indonesian`, `irish`, `italian`, `light_italian`, `latvian`, `lithuanian`, `norwegian`, `light_norwegian`, `minimal_norwegian`, `light_nynorsk`, `minimal_nynorsk`, `portuguese`, `light_portuguese`, `minimal_portuguese`, `portuguese_rslp`, `romanian`, `russian`, `light_russian`, `sorani`, `spanish`, `light_spanish`, `swedish`, `light_swedish`, `turkish`.
`stemmer_override` | N/A | Overrides stemming algorithms by applying a custom mapping so that the provided terms are not stemmed.
`stop` | `StopFilter` | Removes stop words from a token stream.
`synonym` | N/A | Supplies a synonym list for the analysis process. The synonym list is provided using a configuration file.
`synonym_graph` | N/A | Supplies a synonym list, including multiword synonyms, for the analysis process.
`trim` | `TrimFilter` | Trims leading and trailing whitespace from each token in a stream.
`truncate` | `TruncateTokenFilter` | Truncates tokens whose length exceeds the specified character limit.
`unique` | N/A | Ensures that each token is unique by removing duplicate tokens from a stream.
`uppercase` | `UpperCaseFilter` | Converts tokens to uppercase.
`word_delimiter` | `WordDelimiterFilter` | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules.
`word_delimiter_graph` | `WordDelimiterGraphFilter` | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns a `positionLength` attribute to multi-position tokens.
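
Token filters are applied by listing them in an analyzer definition, where they run in order after the tokenizer. As a minimal sketch (the index and analyzer names are placeholders), the following request creates an index whose custom analyzer chains the `lowercase`, `asciifolding`, and `stop` filters from the preceding table:

```json
PUT /my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_folding_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "asciifolding", "stop" ]
        }
      }
    }
  }
}
```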
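
Filters that take parameters, such as `edge_ngram` with its `min_gram` and `max_gram` settings, are first defined as named custom filters under `analysis.filter` and then referenced by name in the analyzer. The following sketch, again with placeholder index, filter, and analyzer names, defines an edge n-gram filter that emits prefixes of 2 to 10 characters:

```json
PUT /autocomplete-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_edge_ngram": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10
        }
      },
      "analyzer": {
        "autocomplete_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_edge_ngram" ]
        }
      }
    }
  }
}
```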