Commit ebb749f

Authored by AntonEliatra, vagimeli, kolchfa-aws, and natebower
Add common_gram token filter page opensearch-project#7923 (opensearch-project#7933)
* adding common_gram token filter page opensearch-project#7923
* Update common_gram.md
* addressing the PR comments
* addressing the PR comments
* addressing the PR comments
* Apply suggestions from code review
* addressing the PR comments
* updating parameter table structure
* Apply suggestions from code review

Signed-off-by: AntonEliatra <anton.rubin@eliatra.com>
Signed-off-by: Anton Rubin <anton.rubin@eliatra.com>
Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Melissa Vagi <vagimeli@amazon.com>
Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Nathan Bower <nbower@amazon.com>
1 parent 4770a47 commit ebb749f

File tree

2 files changed: +95 −1 lines changed
_analyzers/token-filters/common_gram.md (new file, +94)

@@ -0,0 +1,94 @@
---
layout: default
title: Common grams
parent: Token filters
nav_order: 60
---
<!-- vale off -->
# Common grams token filter
<!-- vale on -->

The `common_grams` token filter improves search relevance by keeping commonly occurring phrases (common grams) in the text. This is useful when dealing with languages or datasets in which certain word combinations frequently occur as a unit and can impact search relevance if treated as separate tokens. If any common words are present in the input string, this token filter generates both their unigrams and bigrams.

Using this token filter improves search relevance by keeping common phrases intact. This can help match queries more accurately, particularly for frequent word combinations. It also improves search precision by reducing the number of irrelevant matches.

When using this filter, you must carefully select and maintain the `common_words` list.
{: .warning}

## Parameters

The `common_grams` token filter can be configured with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`common_words` | Required | List of strings | A list of words that should be treated as words that commonly appear together. These words are used to generate common grams. If the `common_words` parameter is an empty list, the `common_grams` token filter becomes a no-op filter, meaning that it doesn't modify the input tokens at all.
`ignore_case` | Optional | Boolean | Indicates whether the filter should ignore case differences when matching common words. Default is `false`.
`query_mode` | Optional | Boolean | When set to `true`, the following rules are applied:<br>- Unigrams that are generated from `common_words` are not included in the output.<br>- Bigrams in which a non-common word is followed by a common word are retained in the output.<br>- Unigrams of non-common words are excluded if they are immediately followed by a common word.<br>- If a non-common word appears at the end of the text and is preceded by a common word, its unigram is not included in the output.
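The `query_mode` rules above can be illustrated with a small Python sketch that mimics how unigrams and bigrams are emitted. This is a simplified simulation for illustration only, not the actual Lucene `CommonGramsFilter` implementation; the `common_grams` helper name and the exact emission order are assumptions.

```python
# Illustrative simulation of the common_grams filter (not the actual
# Lucene implementation). A bigram joins adjacent tokens with "_"
# whenever at least one of the pair is a common word.

def common_grams(tokens, common, query_mode=False):
    out = []
    n = len(tokens)
    for i, tok in enumerate(tokens):
        # Emit a bigram with the next token if either of the pair is common.
        if i + 1 < n and (tok in common or tokens[i + 1] in common):
            out.append(f"{tok}_{tokens[i + 1]}")
        if not query_mode:
            out.append(tok)  # default mode keeps every unigram
            continue
        # Query-mode rules: drop unigrams of common words, unigrams
        # immediately followed by a common word, and the final unigram
        # when the preceding token is common.
        followed_by_common = i + 1 < n and tokens[i + 1] in common
        preceded_by_common = i > 0 and tokens[i - 1] in common
        if tok in common or followed_by_common or (i == n - 1 and preceded_by_common):
            continue
        out.append(tok)
    return out

# Tokens as produced by the standard tokenizer plus the lowercase filter
text = "A quick black cat jumps over the lazy dog in the park"
tokens = text.lower().split()
print(common_grams(tokens, common={"a", "in", "for"}, query_mode=True))
# → ['a_quick', 'quick', 'black', 'cat', 'jumps', 'over', 'the',
#    'lazy', 'dog_in', 'in_the', 'the', 'park']
```

With `query_mode=False`, the same input additionally yields the unigrams `a`, `dog`, and `in`, reflecting the filter's default behavior of emitting both unigrams and bigrams.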
## Example

The following example request creates a new index named `my_common_grams_index` and configures an analyzer with the `common_grams` filter:

```json
PUT /my_common_grams_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_common_grams_filter": {
          "type": "common_grams",
          "common_words": ["a", "in", "for"],
          "ignore_case": true,
          "query_mode": true
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "my_common_grams_filter"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
GET /my_common_grams_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "A quick black cat jumps over the lazy dog in the park"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {"token": "a_quick","start_offset": 0,"end_offset": 7,"type": "gram","position": 0},
    {"token": "quick","start_offset": 2,"end_offset": 7,"type": "<ALPHANUM>","position": 1},
    {"token": "black","start_offset": 8,"end_offset": 13,"type": "<ALPHANUM>","position": 2},
    {"token": "cat","start_offset": 14,"end_offset": 17,"type": "<ALPHANUM>","position": 3},
    {"token": "jumps","start_offset": 18,"end_offset": 23,"type": "<ALPHANUM>","position": 4},
    {"token": "over","start_offset": 24,"end_offset": 28,"type": "<ALPHANUM>","position": 5},
    {"token": "the","start_offset": 29,"end_offset": 32,"type": "<ALPHANUM>","position": 6},
    {"token": "lazy","start_offset": 33,"end_offset": 37,"type": "<ALPHANUM>","position": 7},
    {"token": "dog_in","start_offset": 38,"end_offset": 44,"type": "gram","position": 8},
    {"token": "in_the","start_offset": 42,"end_offset": 48,"type": "gram","position": 9},
    {"token": "the","start_offset": 45,"end_offset": 48,"type": "<ALPHANUM>","position": 10},
    {"token": "park","start_offset": 49,"end_offset": 53,"type": "<ALPHANUM>","position": 11}
  ]
}
```

_analyzers/token-filters/index.md

+1 −1

```diff
@@ -20,7 +20,7 @@ Token filter | Underlying Lucene token filter| Description
 `cjk_bigram` | [CJKBigramFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html) | Forms bigrams of Chinese, Japanese, and Korean (CJK) tokens.
 [`cjk_width`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/cjk-width/) | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules: <br> - Folds full-width ASCII character variants into their equivalent basic Latin characters. <br> - Folds half-width katakana character variants into their equivalent kana characters.
 [`classic`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/classic) | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms.
-`common_grams` | [CommonGramsFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html) | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams.
+[`common_grams`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/common_gram/) | [CommonGramsFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html) | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams.
 `conditional` | [ConditionalTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ConditionalTokenFilter.html) | Applies an ordered list of token filters to tokens that match the conditions provided in a script.
 `decimal_digit` | [DecimalDigitFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/DecimalDigitFilter.html) | Converts all digits in the Unicode decimal number general category to basic Latin digits (0--9).
 `delimited_payload` | [DelimitedPayloadTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/payloads/DelimitedPayloadTokenFilter.html) | Separates a token stream into tokens with corresponding payloads, based on a provided delimiter. A token consists of all characters before the delimiter, and a payload consists of all characters after the delimiter. For example, if the delimiter is `|`, then for the string `foo|bar`, `foo` is the token and `bar` is the payload.
```
