CJK bigram token filter

Forms bigrams out of CJK (Chinese, Japanese, and Korean) tokens.

This filter is included in Elasticsearch’s built-in CJK language analyzer. It uses Lucene’s CJKBigramFilter.

Example

The following analyze API request demonstrates how the CJK bigram token filter works.

  GET /_analyze
  {
    "tokenizer": "standard",
    "filter": [ "cjk_bigram" ],
    "text": "東京都は、日本の首都であり"
  }

The filter produces the following tokens:

  [ 東京, 京都, 都は, 日本, 本の, の首, 首都, 都で, であ, あり ]
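Conceptually, the punctuation 、 separates the text into two runs of adjacent CJK characters, and the filter forms an overlapping character bigram at each position within a run; bigrams never span the break. A minimal Python sketch of that sliding window (not Lucene's implementation, which also handles script boundaries and token positions):

```python
def bigrams(run):
    """Form overlapping character bigrams from one run of adjacent
    CJK characters. A lone character is passed through as a unigram."""
    if len(run) < 2:
        return [run]
    return [run[i:i + 2] for i in range(len(run) - 1)]

# The 、 breaks the example text into two runs:
tokens = []
for run in ["東京都は", "日本の首都であり"]:
    tokens.extend(bigrams(run))

print(tokens)
# ['東京', '京都', '都は', '日本', '本の', 'の首', '首都', '都で', 'であ', 'あり']
```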

Add to an analyzer

The following create index API request uses the CJK bigram token filter to configure a new custom analyzer.

  PUT /cjk_bigram_example
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "standard_cjk_bigram": {
            "tokenizer": "standard",
            "filter": [ "cjk_bigram" ]
          }
        }
      }
    }
  }
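Once the index exists, you can verify the new analyzer with the analyze API, for example using the sample text from earlier:

  GET /cjk_bigram_example/_analyze
  {
    "analyzer": "standard_cjk_bigram",
    "text": "東京都は、日本の首都であり"
  }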

Configurable parameters

ignored_scripts

(Optional, array of character scripts) Array of character scripts for which to disable bigrams. Possible values:

  • han
  • hangul
  • hiragana
  • katakana

All non-CJK input is passed through unmodified.

output_unigrams

(Optional, Boolean) If true, emit tokens in both bigram and unigram form. If false, a CJK character is output in unigram form when it has no adjacent characters. Defaults to false.
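As an illustrative sketch of the output_unigrams behavior (the exact token order and position handling in Lucene differ):

```python
def cjk_bigrams(run, output_unigrams=False):
    """Emit bigrams for one run of adjacent CJK characters.
    With output_unigrams=True, each character is also emitted as a
    unigram; a lone character is always emitted as a unigram."""
    out = []
    for i, ch in enumerate(run):
        if output_unigrams or len(run) == 1:
            out.append(ch)
        if i + 1 < len(run):
            out.append(run[i:i + 2])
    return out

print(cjk_bigrams("東京都"))                        # ['東京', '京都']
print(cjk_bigrams("東"))                            # ['東']
print(cjk_bigrams("東京都", output_unigrams=True))  # ['東', '東京', '京', '京都', '都']
```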

Customize

To customize the CJK bigram token filter, duplicate it to create the basis for a new custom token filter. You can modify the filter using its configurable parameters.

  PUT /cjk_bigram_example
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "han_bigrams": {
            "tokenizer": "standard",
            "filter": [ "han_bigrams_filter" ]
          }
        },
        "filter": {
          "han_bigrams_filter": {
            "type": "cjk_bigram",
            "ignored_scripts": [
              "hangul",
              "hiragana",
              "katakana"
            ],
            "output_unigrams": true
          }
        }
      }
    }
  }
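You can check the customized analyzer the same way. With hiragana in the ignored scripts, the hiragana characters in the sample text should no longer be combined into bigrams, while the Han characters still are, with unigrams emitted alongside the bigrams:

  GET /cjk_bigram_example/_analyze
  {
    "analyzer": "han_bigrams",
    "text": "東京都は、日本の首都であり"
  }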