This version of the OpenSearch documentation is no longer maintained. For the latest version, see the current documentation. For information about OpenSearch version maintenance, see Release Schedule and Maintenance Policy.

Search pipelines

You can use search pipelines to build new or reuse existing result rerankers, query rewriters, and other components that operate on queries or results. Search pipelines make it easier for you to process search queries and search results within OpenSearch. Moving some of your application functionality into an OpenSearch search pipeline reduces the overall complexity of your application. As part of a search pipeline, you specify a list of processors that perform modular tasks. You can then easily add or reorder these processors to customize search results for your application.

Terminology

The following is a list of search pipeline terminology:

  • Search request processor: A component that intercepts a search request (the query and the metadata passed in the request), performs an operation with or on the search request, and returns the search request.
  • Search response processor: A component that intercepts a search response and search request (the query, results, and metadata passed in the request), performs an operation with or on the search response, and returns the search response.
  • Search phase results processor: A component that runs between search phases at the coordinating node level. A search phase results processor intercepts the results retrieved from one search phase and transforms them before passing them to the next search phase.
  • Processor: Either a search request processor or a search response processor.
  • Search pipeline: An ordered list of processors that is integrated into OpenSearch. The pipeline intercepts a query, performs processing on the query, sends it to OpenSearch, intercepts the results, performs processing on the results, and returns them to the calling application, as shown in the following diagram.

Search processor diagram

Both request and response processing for the pipeline are performed on the coordinating node, so there is no shard-level processing.
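As a conceptual sketch only (not OpenSearch's actual implementation), the flow described above can be modeled as ordered lists of processor functions applied around a search call on the coordinating node. The `fake_search` stand-in and the simplified `hits` shape are illustrative assumptions:

```python
# Conceptual model of a search pipeline: an ordered list of request
# processors, a search call, then an ordered list of response processors.
# This is an illustration, not OpenSearch's implementation.

def filter_query(request):
    # Request processor: restrict results to publicly visible documents.
    request = dict(request)
    request["query"] = {
        "bool": {
            "must": [request["query"]],
            "filter": [{"term": {"visibility": "public"}}],
        }
    }
    return request

def rename_field(response):
    # Response processor: rename "message" to "notification" in each hit.
    for hit in response["hits"]:
        if "message" in hit:
            hit["notification"] = hit.pop("message")
    return response

def run_pipeline(request, search, request_processors, response_processors):
    for proc in request_processors:
        request = proc(request)      # transform the request, in order
    response = search(request)       # execute the (rewritten) search
    for proc in response_processors:
        response = proc(response)    # transform the response, in order
    return response

# Stand-in for the search engine; a simplified hit shape for illustration.
def fake_search(request):
    return {"hits": [{"message": "hello", "visibility": "public"}]}

result = run_pipeline({"query": {"match_all": {}}}, fake_search,
                      [filter_query], [rename_field])
print(result["hits"][0]["notification"])  # hello
```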

Processors

To learn more about available search processors, see Search processors.

Creating a search pipeline

Search pipelines are stored in the cluster state. To create a search pipeline, you configure an ordered list of processors in your OpenSearch cluster. A pipeline can contain more than one processor of the same type. To distinguish them, you can give each processor a tag identifier. Tagging processors is especially helpful for debugging error messages when you add multiple processors of the same type.

Example request

The following request creates a search pipeline with a filter_query request processor that uses a term query to return only public messages and a response processor that renames the field message to notification:

PUT /_search/pipeline/my_pipeline
{
  "request_processors": [
    {
      "filter_query" : {
        "tag" : "tag1",
        "description" : "This processor is going to restrict to publicly visible documents",
        "query" : {
          "term": {
            "visibility": "public"
          }
        }
      }
    }
  ],
  "response_processors": [
    {
      "rename_field": {
        "field": "message",
        "target_field": "notification"
      }
    }
  ]
}

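If you create pipelines from a script, the same request body can be assembled programmatically. The following sketch (plain Python, no OpenSearch client assumed) builds the definition above and checks that processors of the same type carry distinct tags, which keeps error messages unambiguous:

```python
import json

# The same pipeline definition as above, as a Python dict.
pipeline = {
    "request_processors": [
        {"filter_query": {
            "tag": "tag1",
            "description": "This processor is going to restrict to publicly visible documents",
            "query": {"term": {"visibility": "public"}},
        }}
    ],
    "response_processors": [
        {"rename_field": {"field": "message", "target_field": "notification"}}
    ],
}

def tags_are_unique(processors):
    # Collect (processor_type, tag) pairs; two processors of the same type
    # need different tags to be distinguishable in error messages.
    seen = set()
    for entry in processors:
        for ptype, config in entry.items():
            key = (ptype, config.get("tag"))
            if key in seen:
                return False
            seen.add(key)
    return True

ok = tags_are_unique(pipeline["request_processors"])
print(ok)  # True

# Serialize the body to send with PUT /_search/pipeline/my_pipeline.
body = json.dumps(pipeline)
```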

Ignoring processor failures

By default, a search pipeline stops if one of its processors fails. If you want the pipeline to continue running when a processor fails, you can set the ignore_failure parameter for that processor to true when creating the pipeline:

"filter_query" : {
  "tag" : "tag1",
  "description" : "This processor is going to restrict to publicly visible documents",
  "ignore_failure": true,
  "query" : {
    "term": {
      "visibility": "public"
    }
  }
}

If the processor fails, OpenSearch logs the failure and continues to run all remaining processors in the search pipeline. To check whether there were any failures, you can use search pipeline metrics.

Using search pipelines

To use a pipeline with a query, specify the pipeline name in the search_pipeline query parameter:

GET /my_index/_search?search_pipeline=my_pipeline


Alternatively, you can use a temporary pipeline with a request or set a default pipeline for an index. To learn more, see Using a search pipeline.
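For example, a sketch of both options (see Using a search pipeline for the full details): a temporary pipeline can be defined inline in the `search_pipeline` field of a search request body, and a default pipeline for an index can be set through the `index.search.default_pipeline` index setting:

```json
POST /my_index/_search
{
  "query": { "match_all": {} },
  "search_pipeline": {
    "request_processors": [
      {
        "filter_query": {
          "query": { "term": { "visibility": "public" } }
        }
      }
    ]
  }
}

PUT /my_index/_settings
{
  "index.search.default_pipeline": "my_pipeline"
}
```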

Retrieving search pipelines

To retrieve the details of an existing search pipeline, use the Search Pipeline API.

To view all search pipelines, use the following request:

GET /_search/pipeline


The response contains the pipeline that you set up in the previous section:

Response

{
  "my_pipeline" : {
    "request_processors" : [
      {
        "filter_query" : {
          "tag" : "tag1",
          "description" : "This processor is going to restrict to publicly visible documents",
          "query" : {
            "term" : {
              "visibility" : "public"
            }
          }
        }
      }
    ]
  }
}

To view a particular pipeline, specify the pipeline name as a path parameter:

GET /_search/pipeline/my_pipeline


You can also use wildcard patterns to view a subset of pipelines, for example:

GET /_search/pipeline/my*


Updating a search pipeline

To update a search pipeline dynamically, replace the search pipeline using the Search Pipeline API.

Example request

The following request upserts my_pipeline by adding a filter_query request processor and a rename_field response processor:

PUT /_search/pipeline/my_pipeline
{
  "request_processors": [
    {
      "filter_query": {
        "tag": "tag1",
        "description": "This processor returns only publicly visible documents",
        "query": {
          "term": {
            "visibility": "public"
          }
        }
      }
    }
  ],
  "response_processors": [
    {
      "rename_field": {
        "field": "message",
        "target_field": "notification"
      }
    }
  ]
}


Search pipeline versions

When creating your pipeline, you can specify a version for it in the version parameter:

PUT _search/pipeline/my_pipeline
{
  "version": 1234,
  "request_processors": [
    {
      "script": {
        "source": """
          if (ctx._source['size'] > 100) {
            ctx._source['explain'] = false;
          }
        """
      }
    }
  ]
}


The version is provided in all subsequent responses to get pipeline requests:

GET _search/pipeline/my_pipeline

The response contains the pipeline version:

Response

{
  "my_pipeline": {
    "version": 1234,
    "request_processors": [
      {
        "script": {
          "source": """
            if (ctx._source['size'] > 100) {
              ctx._source['explain'] = false;
            }
          """
        }
      }
    ]
  }
}

Search pipeline metrics

For information about retrieving search pipeline statistics, see Search pipeline metrics.
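For example (assuming the `search_pipeline` metric group of the Nodes Stats API described in Search pipeline metrics), per-processor request and failure counts can be retrieved with:

```json
GET /_nodes/stats/search_pipeline
```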