Approximate k-NN search

The approximate k-NN search method uses nearest neighbor algorithms from nmslib and faiss to power k-NN search. To see the algorithms that the plugin currently supports, check out the k-NN Index documentation. In this case, approximate means that for a given search, the neighbors returned are an estimate of the true k-nearest neighbors. Of the three search methods the plugin provides, this method offers the best search scalability for large data sets. Generally speaking, once the data set gets into the hundreds of thousands of vectors, this approach is preferred.

The k-NN plugin builds a native library index of the vectors for each knn_vector field/Lucene segment pair during indexing, which can be used to efficiently find the k-nearest neighbors to a query vector during search. To learn more about Lucene segments, see the Apache Lucene documentation. These native library indices are loaded into native memory during search and managed by a cache. To learn more about preloading native library indices into memory, refer to the warmup API. Additionally, you can see which native library indices are already loaded into memory using the stats API.
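As a minimal sketch of those two endpoints, the following requests warm up and then inspect the cache; the index name my-knn-index-1 refers to the example index created later in this section:

  GET /_plugins/_knn/warmup/my-knn-index-1?pretty

  GET /_plugins/_knn/stats?pretty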

Because the native library indices are constructed during indexing, it is not possible to apply a filter on an index and then use this search method. All filters are applied on the results produced by the approximate nearest neighbor search.

Get started with approximate k-NN

To use the k-NN plugin’s approximate search functionality, you must first create a k-NN index by setting index.knn to true. This setting tells the plugin to create native library indices for the index.

Next, you must add one or more fields of the knn_vector data type. This example creates an index with two knn_vector fields, one using nmslib and the other using faiss:

  PUT my-knn-index-1
  {
    "settings": {
      "index": {
        "knn": true,
        "knn.algo_param.ef_search": 100
      }
    },
    "mappings": {
      "properties": {
        "my_vector1": {
          "type": "knn_vector",
          "dimension": 2,
          "method": {
            "name": "hnsw",
            "space_type": "l2",
            "engine": "nmslib",
            "parameters": {
              "ef_construction": 128,
              "m": 24
            }
          }
        },
        "my_vector2": {
          "type": "knn_vector",
          "dimension": 4,
          "method": {
            "name": "hnsw",
            "space_type": "innerproduct",
            "engine": "faiss",
            "parameters": {
              "ef_construction": 256,
              "m": 48
            }
          }
        }
      }
    }
  }

In the example above, both knn_vector fields are configured using method definitions. A knn_vector field can also be configured using a model, as described in Building a k-NN index from a model below.

The knn_vector data type supports a vector of floats that can have a dimension of up to 10,000, as set by the dimension mapping parameter.

In OpenSearch, codecs handle the storage and retrieval of indices. The k-NN plugin uses a custom codec to write vector data to native library indices so that the underlying k-NN search library can read it.

After you create the index, you can add some data to it:

  POST _bulk
  { "index": { "_index": "my-knn-index-1", "_id": "1" } }
  { "my_vector1": [1.5, 2.5], "price": 12.2 }
  { "index": { "_index": "my-knn-index-1", "_id": "2" } }
  { "my_vector1": [2.5, 3.5], "price": 7.1 }
  { "index": { "_index": "my-knn-index-1", "_id": "3" } }
  { "my_vector1": [3.5, 4.5], "price": 12.9 }
  { "index": { "_index": "my-knn-index-1", "_id": "4" } }
  { "my_vector1": [5.5, 6.5], "price": 1.2 }
  { "index": { "_index": "my-knn-index-1", "_id": "5" } }
  { "my_vector1": [4.5, 5.5], "price": 3.7 }
  { "index": { "_index": "my-knn-index-1", "_id": "6" } }
  { "my_vector2": [1.5, 5.5, 4.5, 6.4], "price": 10.3 }
  { "index": { "_index": "my-knn-index-1", "_id": "7" } }
  { "my_vector2": [2.5, 3.5, 5.6, 6.7], "price": 5.5 }
  { "index": { "_index": "my-knn-index-1", "_id": "8" } }
  { "my_vector2": [4.5, 5.5, 6.7, 3.7], "price": 4.4 }
  { "index": { "_index": "my-knn-index-1", "_id": "9" } }
  { "my_vector2": [1.5, 5.5, 4.5, 6.4], "price": 8.9 }

Then you can execute an approximate nearest neighbor search on the data using the knn query type:

  GET my-knn-index-1/_search
  {
    "size": 2,
    "query": {
      "knn": {
        "my_vector2": {
          "vector": [2, 3, 5, 6],
          "k": 2
        }
      }
    }
  }

k is the number of neighbors that the search of each graph will return. You must also include the size option, which indicates how many results the query actually returns. The plugin returns k results for each shard (and each segment) and size results for the entire query. For example, if an index has three segments and k is 2, up to six candidate neighbors are gathered, and only the top size of them are returned. The plugin supports a maximum k value of 10,000.
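To illustrate how k and size interact, the following request (the query vector is illustrative) gathers up to 10 candidates per segment and returns at most 10 final results from the index created above:

  GET my-knn-index-1/_search
  {
    "size": 10,
    "query": {
      "knn": {
        "my_vector1": {
          "vector": [3, 4],
          "k": 10
        }
      }
    }
  }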

Building a k-NN index from a model

For some of the algorithms that we support, the native library index needs to be trained before it can be used. Because training every time a segment is created would be very expensive, we instead introduce the concept of a model, which is used to initialize the native library index during segment creation. A model is created by calling the Train API and passing in the source of the training data as well as the method definition of the model. Once training is complete, the model is serialized to a k-NN model system index. Then, during indexing, the model is pulled from this index to initialize the segments.

In order to train a model, we first need an OpenSearch index with training data in it. Training data can come from any knn_vector field that has a dimension matching the dimension of the model you want to create. Training data can be the same data that you are going to index or a separate set. Let’s create a training index:

  PUT /train-index
  {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 0
    },
    "mappings": {
      "properties": {
        "train-field": {
          "type": "knn_vector",
          "dimension": 4
        }
      }
    }
  }

Notice that index.knn is not set in the index settings. This ensures that we do not create native library indices for this index.

Next, let’s add some data to it:

  POST _bulk
  { "index": { "_index": "train-index", "_id": "1" } }
  { "train-field": [1.5, 5.5, 4.5, 6.4]}
  { "index": { "_index": "train-index", "_id": "2" } }
  { "train-field": [2.5, 3.5, 5.6, 6.7]}
  { "index": { "_index": "train-index", "_id": "3" } }
  { "train-field": [4.5, 5.5, 6.7, 3.7]}
  { "index": { "_index": "train-index", "_id": "4" } }
  { "train-field": [1.5, 5.5, 4.5, 6.4]}
  ...

After indexing into the training index completes, we can call the Train API:

  POST /_plugins/_knn/models/_train/my-model
  {
    "training_index": "train-index",
    "training_field": "train-field",
    "dimension": 4,
    "description": "My model's description",
    "search_size": 500,
    "method": {
      "name": "hnsw",
      "engine": "faiss",
      "parameters": {
        "encoder": {
          "name": "pq",
          "parameters": {
            "code_size": 8,
            "m": 8
          }
        }
      }
    }
  }

The Train API will return as soon as the training job is started. To check its status, we can use the Get Model API:

  GET /_plugins/_knn/models/my-model?filter_path=state&pretty

  {
    "state": "training"
  }
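Training runs asynchronously, so you may need to poll this endpoint a few times. Once the job finishes, the same request should return a response like the following (shown for illustration):

  {
    "state": "created"
  }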

Once the model enters the “created” state, we can create an index that will use this model to initialize its native library indices:

  PUT /target-index
  {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "index.knn": true
    },
    "mappings": {
      "properties": {
        "target-field": {
          "type": "knn_vector",
          "model_id": "my-model"
        }
      }
    }
  }

Lastly, we can add the documents we want to be searched to the index:

  POST _bulk
  { "index": { "_index": "target-index", "_id": "1" } }
  { "target-field": [1.5, 5.5, 4.5, 6.4]}
  { "index": { "_index": "target-index", "_id": "2" } }
  { "target-field": [2.5, 3.5, 5.6, 6.7]}
  { "index": { "_index": "target-index", "_id": "3" } }
  { "target-field": [4.5, 5.5, 6.7, 3.7]}
  { "index": { "_index": "target-index", "_id": "4" } }
  { "target-field": [1.5, 5.5, 4.5, 6.4]}
  ...

After data is ingested, it can be searched just like any other knn_vector field!
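For example, the following query (the query vector is illustrative) runs an approximate k-NN search against target-field:

  GET target-index/_search
  {
    "size": 2,
    "query": {
      "knn": {
        "target-field": {
          "vector": [2.0, 3.0, 5.0, 6.0],
          "k": 2
        }
      }
    }
  }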

Using approximate k-NN with filters

If you use the knn query alongside filters or other clauses (e.g. bool, must, match), you might receive fewer than k results. In this example, post_filter reduces the number of results from 2 to 1:

  GET my-knn-index-1/_search
  {
    "size": 2,
    "query": {
      "knn": {
        "my_vector2": {
          "vector": [2, 3, 5, 6],
          "k": 2
        }
      }
    },
    "post_filter": {
      "range": {
        "price": {
          "gte": 5,
          "lte": 10
        }
      }
    }
  }

Spaces

A space corresponds to the function used to measure the distance between two points in order to determine the k-nearest neighbors. From the k-NN perspective, a lower score equates to a closer and better result. This is the opposite of how OpenSearch scores results, where a higher score equates to a better result. To convert distances to OpenSearch scores, we take 1 / (1 + distance). The spaces the k-NN plugin supports are listed in the table below. Not every method supports each of these spaces, so be sure to check the method documentation to confirm that the space you are interested in is supported.

| spaceType | Distance Function | OpenSearch Score |
| :--- | :--- | :--- |
| l2 | \( Distance(X, Y) = \sum_{i=1}^n (X_i - Y_i)^2 \) | 1 / (1 + Distance Function) |
| l1 | \( Distance(X, Y) = \sum_{i=1}^n \lvert X_i - Y_i \rvert \) | 1 / (1 + Distance Function) |
| linf | \( Distance(X, Y) = \max_i \lvert X_i - Y_i \rvert \) | 1 / (1 + Distance Function) |
| cosinesimil | \( Distance(X, Y) = 1 - \frac{X \cdot Y}{\lVert X \rVert \cdot \lVert Y \rVert} = 1 - \frac{\sum_{i=1}^n X_i Y_i}{\sqrt{\sum_{i=1}^n X_i^2} \cdot \sqrt{\sum_{i=1}^n Y_i^2}} \), where \( \lVert X \rVert \) and \( \lVert Y \rVert \) are the norms of vectors X and Y | 1 / (1 + Distance Function) |
| innerproduct | \( Distance(X, Y) = -X \cdot Y \) | if (Distance Function >= 0) 1 / (1 + Distance Function) else -Distance Function + 1 |
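To make the conversion concrete, here is a small worked example (the distance value is chosen for illustration): an l2 distance of 0.25 between a query vector and a document vector yields

\[ \text{score} = \frac{1}{1 + 0.25} = 0.8 \]

so smaller distances map to higher OpenSearch scores.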

The cosine similarity formula does not include the 1 - prefix. However, because similarity search libraries equate smaller scores with closer results, they return 1 - cosineSimilarity for the cosine similarity space. That is why 1 - is included in the distance function.
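As a worked example (vectors chosen for illustration), take \( X = (1, 0) \) and \( Y = (0, 1) \): their cosine similarity is 0, so the library reports a distance of \( 1 - 0 = 1 \), which converts to an OpenSearch score of \( 1 / (1 + 1) = 0.5 \). Identical directions give a distance of 0 and the maximum score of 1.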