CSV

The csv processor parses comma-separated values (CSV) data and stores the values as individual fields in a document. The processor ignores empty fields. The following is the syntax for the csv processor:

{
  "csv": {
    "field": "field_name",
    "target_fields": ["field1", "field2", ...]
  }
}


Configuration parameters

The following table lists the required and optional parameters for the csv processor.

Parameter | Required/Optional | Description
--- | --- | ---
field | Required | The name of the field that contains the data to be parsed. Supports template snippets.
target_fields | Required | The names of the fields in which to store the parsed values.
description | Optional | A brief description of the processor.
empty_value | Optional | The value used to fill empty fields. If this parameter is not specified, empty fields are skipped.
if | Optional | A condition for running the processor.
ignore_failure | Optional | If set to true, failures are ignored. Default is false.
ignore_missing | Optional | If set to true, the processor does not fail if the field does not exist. Default is true.
on_failure | Optional | A list of processors to run if the processor fails.
quote | Optional | The character used to quote fields in the CSV data. Default is " (a double quotation mark).
separator | Optional | The delimiter used to separate the fields in the CSV data. Default is , (a comma).
tag | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type.
trim | Optional | If set to true, the processor trims white space from the beginning and end of the text. Default is false.
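
The optional parameters can be combined in a single processor definition. The following sketch (the log_line field, the target field names, and the pipeline name are illustrative assumptions, not taken from the example in the next section) parses a semicolon-delimited, single-quoted string, trims white space, and substitutes N/A for empty fields:

PUT _ingest/pipeline/csv-options-example
{
  "description": "Illustrative sketch; pipeline and field names are hypothetical",
  "processors": [
    {
      "csv": {
        "field": "log_line",
        "target_fields": ["status", "latency", "host"],
        "separator": ";",
        "quote": "'",
        "trim": true,
        "empty_value": "N/A",
        "ignore_missing": true
      }
    }
  ]
}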

Using the processor

Follow these steps to use the processor in a pipeline.

Step 1: Create a pipeline.

The following query creates a pipeline named csv-processor that splits resource_usage into three new fields named cpu_usage, memory_usage, and disk_usage:

PUT _ingest/pipeline/csv-processor
{
  "description": "Split resource usage into individual fields",
  "processors": [
    {
      "csv": {
        "field": "resource_usage",
        "target_fields": ["cpu_usage", "memory_usage", "disk_usage"],
        "separator": ","
      }
    }
  ]
}
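
If the pipeline is created successfully, OpenSearch responds with an acknowledgment similar to the following:

{
  "acknowledged": true
}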


Step 2 (Optional): Test the pipeline.

It is recommended that you test your pipeline before you ingest documents.

To test the pipeline, run the following query:

POST _ingest/pipeline/csv-processor/_simulate
{
  "docs": [
    {
      "_index": "testindex1",
      "_id": "1",
      "_source": {
        "resource_usage": "25,4096,10",
        "memory_usage": "4096",
        "disk_usage": "10",
        "cpu_usage": "25"
      }
    }
  ]
}


Response

The following example response confirms that the pipeline is working as expected:

{
  "docs": [
    {
      "doc": {
        "_index": "testindex1",
        "_id": "1",
        "_source": {
          "memory_usage": "4096",
          "disk_usage": "10",
          "resource_usage": "25,4096,10",
          "cpu_usage": "25"
        },
        "_ingest": {
          "timestamp": "2023-08-22T16:40:45.024796379Z"
        }
      }
    }
  ]
}

Step 3: Ingest a document.

The following query ingests a document into an index named testindex1:

PUT testindex1/_doc/1?pipeline=csv-processor
{
  "resource_usage": "25,4096,10"
}
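
A successful request returns standard index metadata similar to the following (version, sequence number, and shard counts will vary):

{
  "_index": "testindex1",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1
}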


Step 4 (Optional): Retrieve the document.

To retrieve the document, run the following query:

GET testindex1/_doc/1

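The response contains the original resource_usage field alongside the three fields parsed by the pipeline, similar to the following (version and sequence numbers will vary):

{
  "_index": "testindex1",
  "_id": "1",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "found": true,
  "_source": {
    "memory_usage": "4096",
    "disk_usage": "10",
    "resource_usage": "25,4096,10",
    "cpu_usage": "25"
  }
}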