Opensearch SQL Connector

Sink: Batch | Sink: Streaming Append & Upsert Mode

The Opensearch connector allows for writing into an index of the Opensearch engine. This document describes how to set up the Opensearch connector to run SQL queries against Opensearch.

The connector can operate in upsert mode for exchanging UPDATE/DELETE messages with the external system using the primary key defined on the DDL.

If no primary key is defined on the DDL, the connector can only operate in append mode, exchanging INSERT-only messages with the external system.

Dependencies

There is no connector (yet) available for Flink version 1.19.

The Opensearch connector is not part of the binary distribution. See how to link with it for cluster execution here.

How to create an Opensearch table

The example below shows how to create an Opensearch sink table:

  CREATE TABLE myUserTable (
    user_id STRING,
    user_name STRING,
    uv BIGINT,
    pv BIGINT,
    PRIMARY KEY (user_id) NOT ENFORCED
  ) WITH (
    'connector' = 'opensearch',
    'hosts' = 'http://localhost:9200',
    'index' = 'users'
  );
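
Once created, the table can be used as the target of a regular INSERT INTO statement. Below is a minimal sketch, assuming a hypothetical source table user_behavior with user_id, user_name, and page_id columns; because a primary key is declared on myUserTable, the changelog produced by the aggregation is written in upsert mode:

  -- Continuously maintain per-user counts in Opensearch.
  -- 'user_behavior' and its columns are illustrative, not part of this document.
  INSERT INTO myUserTable
  SELECT
    user_id,
    MAX(user_name) AS user_name,     -- pick a name per user
    COUNT(DISTINCT page_id) AS uv,   -- distinct pages visited
    COUNT(*) AS pv                   -- total page views
  FROM user_behavior
  GROUP BY user_id;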

Connector Options

connector
  Required | Forwarded: no | Default: (none) | Type: String
  Specify what connector to use; the only valid value is: opensearch.

hosts
  Required | Forwarded: yes | Default: (none) | Type: String
  One or more Opensearch hosts to connect to, e.g. 'http://host_name:9200;http://host_name:9201'.

index
  Required | Forwarded: yes | Default: (none) | Type: String
  Opensearch index for every record. Can be a static index (e.g. 'myIndex') or a dynamic index (e.g. 'index-{logts|yyyy-MM-dd}'). See the Dynamic Index section below for more details.

allow-insecure
  Optional | Forwarded: yes | Default: (none) | Type: Boolean
  Allow insecure connections to HTTPS endpoints (disables certificate validation).

document-id.key-delimiter
  Optional | Forwarded: yes | Default: _ | Type: String
  Delimiter for composite keys ("_" by default), e.g., "$" would result in IDs such as "KEY1$KEY2$KEY3".

username
  Optional | Forwarded: yes | Default: (none) | Type: String
  Username used to connect to the Opensearch instance. Note that Opensearch comes with a pre-bundled security feature; you can disable it by following the guidelines on how to configure security for your Opensearch cluster.

password
  Optional | Forwarded: yes | Default: (none) | Type: String
  Password used to connect to the Opensearch instance. If 'username' is configured, this option must be configured with a non-empty string as well.

sink.delivery-guarantee
  Optional | Forwarded: no | Default: AT_LEAST_ONCE | Type: String
  Optional delivery guarantee when committing. Valid values are:
  • EXACTLY_ONCE: records are delivered exactly once, even under failover scenarios.
  • AT_LEAST_ONCE: records are guaranteed to be delivered, but the same record may be delivered multiple times.
  • NONE: records are delivered on a best-effort basis.

sink.flush-on-checkpoint
  Optional | Forwarded: no | Default: true | Type: Boolean
  Whether to flush on checkpoint. When disabled, the sink will not wait for all pending action requests to be acknowledged by Opensearch on checkpoints, and thus does NOT provide any strong guarantee of at-least-once delivery of action requests.

sink.bulk-flush.max-actions
  Optional | Forwarded: yes | Default: 1000 | Type: Integer
  Maximum number of buffered actions per bulk request. Can be set to '0' to disable it.

sink.bulk-flush.max-size
  Optional | Forwarded: yes | Default: 2mb | Type: MemorySize
  Maximum size in memory of buffered actions per bulk request. Must be in MB granularity. Can be set to '0' to disable it.

sink.bulk-flush.interval
  Optional | Forwarded: yes | Default: 1s | Type: Duration
  The interval at which to flush buffered actions. Can be set to '0' to disable it. Note that both 'sink.bulk-flush.max-size' and 'sink.bulk-flush.max-actions' can be set to '0' while the flush interval remains set, allowing for completely asynchronous processing of buffered actions.

sink.bulk-flush.backoff.strategy
  Optional | Forwarded: yes | Default: DISABLED | Type: String
  Specify how to perform retries if any flush actions fail due to a temporary request error. Valid strategies are:
  • DISABLED: no retry is performed, i.e. fail after the first request error.
  • CONSTANT: wait for the backoff delay between retries.
  • EXPONENTIAL: initially wait for the backoff delay and increase it exponentially between retries.

sink.bulk-flush.backoff.max-retries
  Optional | Forwarded: yes | Default: (none) | Type: Integer
  Maximum number of backoff retries.

sink.bulk-flush.backoff.delay
  Optional | Forwarded: yes | Default: (none) | Type: Duration
  Delay between each backoff attempt. For CONSTANT backoff, this is simply the delay between retries. For EXPONENTIAL backoff, this is the initial base delay.

connection.path-prefix
  Optional | Forwarded: yes | Default: (none) | Type: String
  Prefix string to be added to every REST communication, e.g., '/v1'.

connection.request-timeout
  Optional | Forwarded: yes | Default: (none) | Type: Duration
  The timeout for requesting a connection from the connection manager.

connection.timeout
  Optional | Forwarded: yes | Default: (none) | Type: Duration
  The timeout for establishing a connection.

socket.timeout
  Optional | Forwarded: yes | Default: (none) | Type: Duration
  The socket timeout (SO_TIMEOUT) for waiting for data, i.e. a maximum period of inactivity between two consecutive data packets.

format
  Optional | Forwarded: no | Default: json | Type: String
  The Opensearch connector supports specifying a format. The format must produce a valid JSON document. By default, the built-in 'json' format is used. Please refer to the JSON Format page for more details.
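
To illustrate how several of these options combine, here is a sketch of a sink definition that enables basic authentication and tunes bulk flushing; the host, credentials, and tuning values are illustrative only:

  CREATE TABLE tunedUserTable (
    user_id STRING,
    user_name STRING,
    PRIMARY KEY (user_id) NOT ENFORCED
  ) WITH (
    'connector' = 'opensearch',
    'hosts' = 'https://localhost:9200',
    'username' = 'admin',                            -- illustrative credentials
    'password' = 'admin-password',
    'index' = 'users',
    'sink.bulk-flush.max-actions' = '2000',          -- flush after 2000 buffered actions ...
    'sink.bulk-flush.interval' = '5s',               -- ... or every 5 seconds, whichever comes first
    'sink.bulk-flush.backoff.strategy' = 'CONSTANT', -- retry failed flushes ...
    'sink.bulk-flush.backoff.max-retries' = '3',     -- ... up to 3 times ...
    'sink.bulk-flush.backoff.delay' = '1s'           -- ... waiting 1 second between attempts
  );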

Features

Key Handling

The Opensearch sink can work in either upsert mode or append mode, depending on whether a primary key is defined. If a primary key is defined, the Opensearch sink works in upsert mode, which can consume queries containing UPDATE/DELETE messages. If a primary key is not defined, the Opensearch sink works in append mode, which can only consume queries containing INSERT-only messages.

In the Opensearch connector, the primary key is used to calculate the Opensearch document ID, which is a string of up to 512 bytes that must not contain whitespace. The Opensearch connector generates a document ID string for every row by concatenating all primary key fields in the order defined in the DDL, using the key delimiter specified by document-id.key-delimiter. Certain types are not allowed as primary key fields because they do not have a good string representation, e.g. BYTES, ROW, ARRAY, MAP, etc. If no primary key is specified, Opensearch generates a document ID automatically.

See CREATE TABLE DDL for more details about the PRIMARY KEY syntax.
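
For example, the following sketch declares a composite primary key together with a custom delimiter; a row with user_id = 'u1' and user_name = 'alice' would then get the document ID 'u1$alice' (table name and values are illustrative):

  CREATE TABLE keyedUserTable (
    user_id STRING,
    user_name STRING,
    pv BIGINT,
    PRIMARY KEY (user_id, user_name) NOT ENFORCED  -- concatenated in DDL order
  ) WITH (
    'connector' = 'opensearch',
    'hosts' = 'http://localhost:9200',
    'index' = 'users',
    'document-id.key-delimiter' = '$'              -- IDs look like 'u1$alice'
  );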

Dynamic Index

The Opensearch sink supports both static index and dynamic index.

If you want a static index, the index option value should be a plain string, e.g. 'myusers'; all records will be consistently written into the "myusers" index.

If you want a dynamic index, you can use {field_name} to reference a field value in the record to dynamically generate the target index. You can also use '{field_name|date_format_string}' to convert a field value of TIMESTAMP/DATE/TIME type into the format specified by the date_format_string. The date_format_string is compatible with Java's DateTimeFormatter. For example, if the option value is 'myusers-{log_ts|yyyy-MM-dd}', then a record with log_ts field value 2020-03-27 12:25:55 will be written into the "myusers-2020-03-27" index, as shown in the sketch below.
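
A minimal sketch of a field-based dynamic index (table and column names are illustrative):

  CREATE TABLE dynamicIndexTable (
    log_ts TIMESTAMP(3),
    message STRING
  ) WITH (
    'connector' = 'opensearch',
    'hosts' = 'http://localhost:9200',
    -- rows whose log_ts falls on 2020-03-27 go to index 'myusers-2020-03-27'
    'index' = 'myusers-{log_ts|yyyy-MM-dd}'
  );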

You can also use '{now()|date_format_string}' to convert the current system time to the format specified by date_format_string. The return type of now() is TIMESTAMP_LTZ (timestamp with local time zone); when formatting the system time as a string, the time zone configured in the session through table.local-time-zone is used. You can write NOW(), now(), CURRENT_TIMESTAMP, or current_timestamp.

NOTE: When using a dynamic index generated from the current system time, there is no guarantee for a changelog stream that records with the same primary key generate the same index name. Therefore, a dynamic index based on the system time can only be used with append-only streams.
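
A sketch using the system time instead of a record field (illustrative names; remember this is only safe for append-only streams):

  CREATE TABLE systemTimeIndexTable (
    message STRING
  ) WITH (
    'connector' = 'opensearch',
    'hosts' = 'http://localhost:9200',
    -- index name derived from the current system time in the session time zone
    'index' = 'logs-{now()|yyyy-MM-dd}'
  );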

Data Type Mapping

Opensearch stores documents as JSON strings, so the data type mapping is between Flink data types and JSON data types. Flink uses the built-in 'json' format for the Opensearch connector. Please refer to the JSON Format page for more type mapping details.