Introduction

Using plain tables, Manticore Search can fetch data from databases using specialized drivers or ODBC. Current drivers include:

  • mysql - for MySQL/MariaDB/Percona MySQL databases
  • pgsql - for PostgreSQL database
  • mssql - for Microsoft SQL database
  • odbc - for any database that accepts connections using ODBC

To fetch data from a database, a source must be configured with its type set to one of the above. The source requires information about how to connect to the database and the query that will be used to fetch the data. Additional queries can also be set to run before and after the main query - either to configure session settings or to perform pre/post-fetch tasks. The source must also contain definitions of the data types for the columns that are fetched.

Database connection

The source definition must contain the connection settings; these include the host, port, user credentials, and any driver-specific settings.

sql_host

The database server host to connect to. Note that the MySQL client library chooses whether to connect over TCP/IP or over a UNIX socket based on the host name. Specifically, “localhost” will force it to use a UNIX socket (this is the default and generally recommended mode), while “127.0.0.1” will force TCP/IP usage.
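For example, to force a TCP/IP connection to a database running on the same machine:

  sql_host = 127.0.0.1

With sql_host = localhost instead, the MySQL client library would connect over the UNIX socket.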

sql_port

The server IP port to connect to. The default is 3306 for mysql and 5432 for pgsql.
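An explicit port only needs to be set when the server listens on a non-default one; the value below is illustrative:

  sql_port = 3307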

sql_db

The SQL database to use after the connection is established; all further queries will be performed within it.

sql_user

The username used for connecting.

sql_pass

The user password to use when connecting. If the password includes # (which can otherwise start a comment in the configuration file), you can escape it with \.
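For instance, a password containing # (the password itself is hypothetical) would be written as:

  sql_pass = my\#pass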

sql_sock

UNIX socket name to connect to for local database servers. Note that it depends on sql_host setting whether this value will actually be used.

  sql_sock = /var/lib/mysql/mysql.sock

Specific settings for drivers

MySQL

mysql_connect_flags

MySQL client connection flags. Optional, default value is 0 (do not set any flags).

This option must contain an integer value with the sum of the flags. The value will be passed to mysql_real_connect() verbatim. The flags are enumerated in mysql_com.h include file. Flags that are especially interesting in regard to indexing, with their respective values, are as follows:

  • CLIENT_COMPRESS = 32; can use compression protocol
  • CLIENT_SSL = 2048; switch to SSL after handshake
  • CLIENT_SECURE_CONNECTION = 32768; new 4.1 authentication

For instance, you can specify 2080 (2048+32) to use both compression and SSL, or 32768 to use new authentication only. Initially, this option was introduced to be able to use compression when the indexer and mysqld are on different hosts. Compression on 1 Gbps links is most likely to hurt indexing time, though it reduces network traffic, both in theory and in practice. However, enabling compression on 100 Mbps links may improve indexing time significantly (up to 20-30% of the total indexing time improvement was reported). Your mileage may vary.

  mysql_connect_flags = 32 # enable compression

SSL certificate settings

  • mysql_ssl_cert - path to SSL certificate
  • mysql_ssl_key - path to SSL key file
  • mysql_ssl_ca - path to CA certificate
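When connecting over SSL, these directives can be pointed at the certificate and key files; the paths below are illustrative:

  mysql_ssl_cert = /etc/ssl/client-cert.pem
  mysql_ssl_key = /etc/ssl/client-key.pem
  mysql_ssl_ca = /etc/ssl/ca-cert.pem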

unpack_mysqlcompress

Columns to unpack using MySQL UNCOMPRESS() algorithm. Multi-value, optional, default value is an empty list of columns.

Columns specified using this directive will be unpacked by indexer using the modified zlib algorithm used by the MySQL COMPRESS() and UNCOMPRESS() functions. When indexing on a different box than the database, this lets you offload the database and save on network traffic. The feature is only available if zlib and zlib-devel were both available during build time.

  unpack_mysqlcompress = body_compressed
  unpack_mysqlcompress = description_compressed

By default, a buffer of 16M is used for uncompressing the data. This can be changed by setting unpack_mysqlcompress_maxsize:

  unpack_mysqlcompress_maxsize = 1M

When using unpack_mysqlcompress, due to implementation intricacies it is not possible to deduce the required buffer size from the compressed data. So the buffer must be preallocated in advance, and unpacked data cannot go over the buffer size.

unpack_zlib

  unpack_zlib = col1
  unpack_zlib = col2

Columns to unpack using zlib (aka deflate, aka gunzip). Multi-value, optional, default value is an empty list of columns. Applies to source types mysql and pgsql only.

Columns specified using this directive will be unpacked by the indexer using the standard zlib algorithm (called deflate and also implemented by gunzip). When indexing on a different box than the database, this lets you offload the database and save on network traffic. The feature is only available if zlib and zlib-devel were both available during build time.

MSSQL

mssql_winauth

MS SQL Windows authentication flag. Whether to use the currently logged-in Windows account credentials for authentication when connecting to MS SQL Server.

  mssql_winauth = 1

ODBC

Sources using ODBC require the presence of a DSN (Data Source Name) string, which can be set with odbc_dsn.

  odbc_dsn = Driver={Oracle ODBC Driver};Dbq=myDBName;Uid=myUsername;Pwd=myPassword

Please note that the format depends on the specific ODBC driver used.

Execution of fetch queries

With all the SQL drivers, building a plain table generally works as follows.

  • a connection to the database is established;
  • the pre-query, sql_query_pre, is executed to perform any necessary initial setup, such as setting per-connection encoding with MySQL;
  • the main query, sql_query, is executed and the rows it returns are processed;
  • the post-query, sql_query_post, is executed to perform any necessary cleanup;
  • the connection to the database is closed;
  • indexer does the sorting phase (to be pedantic, table-type specific post-processing);
  • the connection to the database is established again;
  • the post-processing query, sql_query_post_index, is executed to perform any necessary final cleanup;
  • the connection to the database is closed again.

Example of a source fetching data from MySQL:

  source mysource {
      type = mysql
      sql_host = localhost
      sql_user = myuser
      sql_pass = mypass
      sql_db = mydb
      sql_query_pre = SET CHARACTER_SET_RESULTS=utf8
      sql_query_pre = SET NAMES utf8
      sql_query = SELECT id, title, description, category_id FROM mytable
      sql_query_post = DROP TABLE view_table
      sql_query_post_index = REPLACE INTO counters ( id, val ) \
          VALUES ( 'max_indexed_id', $maxid )
      sql_attr_uint = category_id
      sql_field_string = title
  }

  table mytable {
      type = plain
      source = mysource
      path = /path/to/mytable
      ...
  }

sql_query

This is the query used to retrieve documents from the SQL server. There can be only one sql_query declared, and it is mandatory to have one. See also Processing fetched data.
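A typical main query simply selects the document ID along with the fields and attributes to index; the table and column names below are illustrative:

  sql_query = SELECT id, title, content, category_id FROM documents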

sql_query_pre

Pre-fetch query, or pre-query. Multi-value, optional, default is an empty list of queries. They are executed before sql_query, exactly in their order of appearance in the configuration file. Pre-query results are ignored.

Pre-queries are useful in a lot of ways. They are used to set up encoding, mark records that are going to be indexed, update internal counters, set various per-connection SQL server options and variables, and so on.

Perhaps the most frequent pre-query usage is specifying the encoding that the server will use for the rows it returns. Note that Manticore accepts only UTF-8 texts. Two MySQL-specific examples of setting the encoding are:

  sql_query_pre = SET CHARACTER_SET_RESULTS=utf8
  sql_query_pre = SET NAMES utf8

Also specific to MySQL sources, it is useful to disable query cache (for indexer connection only) in pre-query, because indexing queries are not going to be re-run frequently anyway, and there’s no sense in caching their results. That could be achieved with:

  sql_query_pre = SET SESSION query_cache_type=OFF

sql_query_post

Post-fetch query. Optional, default value is empty.

This query is executed immediately after sql_query completes successfully. When the post-fetch query produces errors, they are reported as warnings, but indexing is not terminated. Its result set is ignored. Note that indexing is not yet completed at the point when this query gets executed, and further indexing may still fail. Therefore, any permanent updates should not be done from here. For instance, updates on a helper table that permanently change the last successfully indexed ID should not be run from the sql_query_post query; they should be run from the sql_query_post_index query instead.
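One safe use of sql_query_post is cleaning up transient objects created by a pre-query, since nothing is lost if indexing later fails; the table names below are hypothetical:

  sql_query_pre = CREATE TEMPORARY TABLE tmp_docs AS SELECT id, title FROM documents
  sql_query = SELECT id, title FROM tmp_docs
  sql_query_post = DROP TEMPORARY TABLE tmp_docs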

sql_query_post_index

Post-processing query. Optional, default value is empty.

This query is executed when indexing is fully and successfully completed. If this query produces errors, they are reported as warnings, but indexing is not terminated. Its result set is ignored. The $maxid macro can be used in its text; it will be expanded to the maximum document ID that was actually fetched from the database during indexing. If no documents were indexed, $maxid will be expanded to 0.

Example:

  sql_query_post_index = REPLACE INTO counters ( id, val ) \
      VALUES ( 'max_indexed_id', $maxid )

The difference between sql_query_post and sql_query_post_index is that sql_query_post runs immediately after Manticore has received all the documents, when further indexing may still fail for some other reason. By contrast, by the time the sql_query_post_index query gets executed, it is guaranteed that the table was created successfully. The database connection is dropped and re-established because the sorting phase can be very lengthy and would otherwise just time out.