Compiling Manticore from source

Compiling Manticore Search from sources allows you to create custom build configurations, such as disabling certain features or adding new patches for testing. For example, you may want to compile from sources and disable the embedded ICU in order to use a different version installed on your system that can be upgraded independently of Manticore. This can also be useful if you are interested in contributing to the Manticore Search project.

Building using CI docker

To prepare official release and dev packages, we use Docker and a special building image. This image includes an essential toolchain and is designed to be used with external sysroots, so one container can build packages for all operating systems. You can build the image using the Dockerfile and README. This is the easiest way to create binaries for any supported operating system and architecture. Once you have built the image, you need to specify three or more environment variables when you run a container from it: DISTR (the target platform), arch (the architecture), and SYSROOT_URL (the URL of the sysroot repository); the example below also sets boost and sysroot to pick specific prebuilt bundles.

To find out possible values for DISTR and arch, you can use the directory https://repo.manticoresearch.com/repository/sysroots/roots_with_zstd/ as a reference, as it includes sysroots for all the supported combinations.

After that, building packages inside the Docker container is as easy as calling:

  1. cmake -DPACK=1 /path/to/sources
  2. cmake --build .

For example, to create the same RedHat 7 package as the official one, but without the embedded ICU and its large data file, you can execute the following (assuming the sources are placed in /manticore/sources/ on the host):

  1. docker run -it --rm -e SYSROOT_URL=https://repo.manticoresearch.com/repository/sysroots \
  2. -e arch=x86_64 \
  3. -e DISTR=rhel7 \
  4. -e boost=boost_nov22 \
  5. -e sysroot=roots_nov22 \
  6. -v /manticore/sources:/manticore_aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa \
  7. <docker image> bash
  8. # following is to be run inside docker shell
  9. cd /manticore_aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/
  10. RELEASE_TAG="noicu"
  11. mkdir build && cd build
  12. cmake -DPACK=1 -DBUILD_TAG=$RELEASE_TAG -DWITH_ICU_FORCE_STATIC=0 ..
  13. cmake --build . --target package

The long source directory path is required; without it, building the packages may fail (see the Caveats section below for the underlying reason).

The same process can be used to build binaries/packages not only for popular Linux distributions, but also for FreeBSD, Windows, and macOS.

Building manually

Compiling Manticore without the building docker is not recommended, but if you need to do it, here's what you may need to know:

Required tools

  • C++ compiler
    • on Linux - GNU (GCC 4.7.2 and above) or Clang can be used
    • on Windows - Microsoft Visual Studio 2019 and above (the Community Edition is enough)
    • on macOS - Clang (from the command line tools of Xcode; use xcode-select --install to install them).
  • Bison and Flex - available as packages on most systems; on Windows they are available in the Cygwin framework.
  • CMake - used on all platforms (version 3.19 or above is required)

Fetching sources

From git

Manticore source code is hosted on GitHub. Clone the repo, then check out the desired branch or tag. The master branch represents the main development branch. Upon release we create a versioned tag, like 3.6.0, and start a new branch for that release, in this case manticore-3.6.0. The head of the versioned branch, after all changes, is used as the source to build all binary releases. For example, to take the sources of version 3.6.0 you can run:

  1. git clone https://github.com/manticoresoftware/manticoresearch.git
  2. cd manticoresearch
  3. git checkout manticore-3.6.0

From archive

You can download the desired code from GitHub using the 'Download ZIP' button. Both .zip and .tar.gz formats are suitable.

  1. wget -c https://github.com/manticoresoftware/manticoresearch/archive/refs/tags/3.6.0.tar.gz
  2. tar -zxf 3.6.0.tar.gz
  3. cd manticoresearch-3.6.0

Configuring

Manticore uses CMake. The following assumes you're inside the root directory of the cloned repository:

  1. mkdir build && cd build
  2. cmake ..

CMake will investigate the available features and configure the build according to them. By default all features are considered enabled if they're available. The script also downloads and builds some external libraries, assuming you want to use them. Implicitly, you thus get support for the maximal number of features.

You can also control the configuration explicitly, with flags and options. To demand feature FOO, add -DFOO=1 to the cmake call; to disable it, use -DFOO=0 the same way. Unless noted otherwise, enabling a feature that is not available (say, WITH_GALERA on a Windows build) will cause configuration to fail with an error. Disabling a feature, apart from excluding it from the build, also disables its investigation on the system and skips its downloading/building, as would otherwise be done for some external libraries with an implicit configuration.
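For example, to explicitly demand RE2 support and disable ICU in one configuration run:

  1. cmake -DWITH_RE2=1 -DWITH_ICU=0 ..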

Configuration flags and options

  • USE_SYSLOG - allows the use of syslog in query logging.
  • WITH_GALERA - support replication on the search daemon. Support will be configured for the build; the sources of the Galera library will also be downloaded and built, and the final module will be included in the distribution/installation. It is usually safe to build with Galera but not distribute the library itself (no galera module - no replication). But sometimes you may need to disable it explicitly - say, if you want to build a static binary, which by design can't load any libraries, so that even the presence of a call to the 'dlopen' function inside the daemon would cause a link error.
  • WITH_RE2 - build with the RE2 regular expression library. It is necessary for functions like REGEX() and the regexp_filter feature.
  • WITH_RE2_FORCE_STATIC - download the sources of RE2, compile them and link with them statically, so that the final binaries will not depend on the presence of a shared RE2 library on your system.
  • WITH_STEMMER - build with the Snowball stemming library.
  • WITH_STEMMER_FORCE_STATIC - download the Snowball sources, compile them and link with them statically, so that the final binaries will not depend on the presence of a shared libstemmer library on your system.
  • WITH_ICU - build with ICU, the International Components for Unicode library. It is used in tokenization of Chinese, for text segmentation, and comes into play when a morphology like icu_chinese is in use.
  • WITH_ICU_FORCE_STATIC - download the ICU sources, compile them and link with them statically, so that the final binaries will not depend on the presence of a shared ICU library on your system. This also includes the ICU data file in the installation/distribution. The purpose of statically linked ICU is to have a library of a known version, so that the behaviour is deterministic and does not depend on any system libraries. You would most probably prefer to use the system ICU instead, because it can be updated over time without the need to recompile the Manticore daemon; in that case you need to explicitly disable this option. That will also save you some space occupied by the ICU data file (about 30M), as it will NOT be included in the distribution then.
  • WITH_SSL - used to support HTTPS, as well as encrypted MySQL connections to the daemon. The system OpenSSL library will be linked to the daemon, which implies that OpenSSL will be required to start it. This is mandatory for HTTPS support, but not strictly mandatory for the server (i.e. no SSL means no possibility to connect over HTTPS, but the other protocols will still work). SSL library versions from 1.0.2 to 1.1.1 may be used by Manticore, however note that for the sake of security it's highly recommended to use the freshest possible SSL library. For now only v1.1.1 is supported, the rest are outdated (see the OpenSSL release strategy).
  • WITH_ZLIB - used by the indexer to work with compressed columns from MySQL, and by the daemon to provide support for the compressed MySQL protocol.
  • WITH_ODBC - used by the indexer to support indexing sources from ODBC providers (typically UnixODBC and iODBC). On Windows, ODBC is the proper way to work with MS SQL sources, so indexing of MSSQL also implies this flag.
  • DL_ODBC - don't link with the ODBC library. If ODBC is linked but not available, you can't start the indexer tool even if you want to process something not related to ODBC. This option tells the indexer to load the library at runtime, and only when you want to deal with an ODBC source.
  • ODBC_LIB - the name of the ODBC library file. The indexer will try to load that file when you want to process an ODBC source. This name is filled in automatically from investigation of the available shared ODBC library. You can also override it at runtime by providing the environment variable ODBC_LIB with the proper path to an alternative library before running the indexer.
  • WITH_EXPAT - used by the indexer to support indexing xmlpipe sources.
  • DL_EXPAT - don't link with the EXPAT library. If EXPAT is linked but not available, you can't start the indexer tool even if you want to process something not related to xmlpipe. This option tells the indexer to load the library at runtime, and only when you want to deal with an xmlpipe source.
  • EXPAT_LIB - the name of the EXPAT library file. The indexer will try to load that file when you want to process an xmlpipe source. This name is filled in automatically from investigation of the available shared EXPAT library. You can also override it at runtime by providing the environment variable EXPAT_LIB with the proper path to an alternative library before running the indexer.
  • WITH_ICONV - for supporting different encodings when indexing xmlpipe sources with the indexer.
  • DL_ICONV - don't link with the iconv library. If iconv is linked but not available, you can't start the indexer tool even if you want to process something not related to xmlpipe. This option tells the indexer to load the library at runtime, and only when you want to deal with an xmlpipe source.
  • ICONV_LIB - the name of the iconv library file. The indexer will try to load that file when you want to process an xmlpipe source. This name is filled in automatically from investigation of the available shared iconv library. You can also override it at runtime by providing the environment variable ICONV_LIB with the proper path to an alternative library before running the indexer.
  • WITH_MYSQL - used by the indexer to support indexing MySQL sources.
  • DL_MYSQL - don't link with the MySQL library. If MySQL is linked but not available, you can't start the indexer tool even if you want to process something not related to MySQL. This option tells the indexer to load the library at runtime, and only when you want to deal with a MySQL source.
  • MYSQL_LIB - the name of the MySQL library file. The indexer will try to load that file when you want to process a MySQL source. This name is filled in automatically from investigation of the available shared MySQL library. You can also override it at runtime by providing the environment variable MYSQL_LIB with the proper path to an alternative library before running the indexer.
  • WITH_POSTGRESQL - used by the indexer to support indexing PostgreSQL sources.
  • DL_POSTGRESQL - don't link with the PostgreSQL library. If PostgreSQL is linked but not available, you can't start the indexer tool even if you want to process something not related to PostgreSQL. This option tells the indexer to load the library at runtime, and only when you want to deal with a PostgreSQL source.
  • POSTGRESQL_LIB - the name of the PostgreSQL library file. The indexer will try to load that file when you want to process a PostgreSQL source. This name is filled in automatically from investigation of the available shared PostgreSQL library. You can also override it at runtime by providing the environment variable POSTGRESQL_LIB with the proper path to an alternative library before running the indexer.
  • LOCALDATADIR - the default path where the daemon stores binlogs. If that path is not provided or is explicitly disabled in the daemon's runtime config (that is, the file manticore.conf, which is in no way related to this build configuration), binlogs will be placed in this path. It is assumed to be absolute, although that is not strictly necessary and you may play with relative values as well. You would most probably not change the default value defined by the configuration, which, depending on the target system, might be something like /var/data, /var/lib/manticore/data, or /usr/local/var/lib/manticore/data.
  • FULL_SHARE_DIR - the default path where all assets are stored. It may be overridden by the environment variable FULL_SHARE_DIR before starting any tool that uses files from that folder. This is quite an important path, as many things are expected there by default: predefined charset tables, stopwords, manticore modules and the ICU data files are all placed in that folder. The configuration script usually determines that path to be something like /usr/share/manticore or /usr/local/share/manticore.
  • DISTR_BUILD - a shortcut for the options used when releasing packages. It is a string value with the name of the target platform, and it may be used instead of configuring everything manually. On Debian and RedHat Linuxes the default value might be determined by light introspection and set to a generic 'debian' or 'rhel'; otherwise the value is not defined.
  • PACK - an even shorter shortcut. It reads the DISTR environment variable, assigns it to the DISTR_BUILD param, and then works as usual. This is very useful when building in prepared build systems, like docker containers, where the DISTR variable is set at the system level and reflects the target system for which the container is intended.
  • CMAKE_INSTALL_PREFIX (path) - where Manticore expects to be installed. Building installs nothing, but it prepares installation rules which are executed once you run the cmake --install command, or create a package and then install it. The prefix may be freely changed at any time, even during install - by invoking cmake --install . --prefix /path/to/installation. However, at configure time this variable is used once to initialize the default values of LOCALDATADIR and FULL_SHARE_DIR. So, for example, setting it to /my/custom at configure time will hardcode LOCALDATADIR as /my/custom/var/lib/manticore/data, and FULL_SHARE_DIR as /my/custom/usr/share/manticore.
  • BUILD_TESTING (bool) - whether to support testing. If enabled, after the build you can run 'ctest' to test the build. Note that testing implies additional dependencies, like at least the presence of the PHP CLI, Python, and an available MySQL server with a test database. By default this param is on; so, for a 'just build', you might want to disable it by explicitly specifying the 'off' value.
  • LIBS_BUNDLE - path to a folder with different libraries. This is mostly relevant for building on Windows, but it may also be helpful if you have to build often, in order to avoid downloading the third-party sources each time. This path is never modified by the configuration script; you should put everything there manually. When, say, we want stemmer support, the sources will be downloaded from the Snowball homepage, then extracted, configured, built, etc. Instead, you can store the original source tarball (which is libstemmer_c.tgz) in this folder. The next time you want to build from scratch, the configuration script will first look in the bundle, and if it finds the stemmer there, it will not download it again from the Internet.
  • CACHEB - path to a folder with stored builds of third-party libraries. Usually features like galera, re2, icu, etc. are first downloaded or taken from the bundle, then unpacked, built and installed into a temporary internal folder. When building Manticore, that folder is then used as the place where the things required to support the requested features live. Finally, they are either linked with Manticore, if they are libraries, or go directly into the distribution/installation (like the galera module or the icu data). When CACHEB is defined, either as a cmake config param or as a system environment variable, it is used as the target folder for these builds. The folder may be kept across builds, so that the libraries stored there will not be rebuilt anymore, making the whole build process much shorter.

Note that some options are organized in triples: WITH_XXX, DL_XXX and XXX_LIB - like the support of mysql, odbc, etc. WITH_XXX determines whether the next two have any effect. I.e., if you set WITH_ODBC to 0, there is no sense in providing DL_ODBC and ODBC_LIB, and those two will have no effect while the whole feature is disabled. Also, XXX_LIB makes no sense without DL_XXX, because if you don't want the DL_XXX option, dynamic loading will not be used, and the name provided by XXX_LIB is useless. The default introspection follows these rules.
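For illustration, a complete triple for MySQL might be provided explicitly like this (the library file name is an assumption for one particular system; yours may differ):

  1. cmake -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmariadb.so.3 ..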

Also, using the iconv library assumes expat, so it is useless if the latter is disabled.

Also, some libraries may always be available, and so there is no sense in avoiding linkage with them. For example, on Windows that is ODBC; on macOS that is Expat, iconv and maybe others. The default introspection detects such libraries and effectively emits only WITH_XXX for them, without DL_XXX and XXX_LIB, which makes things simpler.

With some options in play, configuring might look like this:

  1. mkdir build && cd build
  2. cmake -DWITH_MYSQL=1 -DWITH_RE2=1 ..

Apart from the general configuration values, you may also investigate the file CMakeCache.txt, which is left in the build folder right after you run the configuration. Any values defined there may be redefined explicitly when running cmake. For example, you may run cmake -DHAVE_GETADDRINFO_A=FALSE ..., and that configuration run will not use the investigated value of that variable, but the one you've provided.
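For example, to check which value the introspection has cached for that variable before overriding it:

  1. grep HAVE_GETADDRINFO_A CMakeCache.txt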

Specific environment variables

Environment variables are useful for providing global settings which are stored apart from the build configuration and are just 'always' present. For persistence they may be set globally on the system in different ways - by adding them to the .bashrc file, embedding them into a Dockerfile if you produce a docker-based build system, or writing them into the system preferences environment variables on Windows. You may also set them short-lived, using export VAR=value in the shell. Or even shorter, by prepending values to a cmake call, like CACHEB=/my/cache cmake ... - this way it will only affect that call and will not be visible on the next.
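For example, a one-shot setting of two such variables for a single configure run might look like this (the paths are illustrative):

  1. CACHEB=$HOME/manticore-cache LIBS_BUNDLE=$HOME/manticore-bundle cmake ..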

Some of these variables are known to be used in general by cmake and some other tools - things like CXX, which determines the current C++ compiler, or CXX_FLAGS, for providing compiler flags, etc.

However, we have some variables specific to the Manticore configuration, invented solely for our builds.

  • CACHEB - same as config CACHEB option
  • LIBS_BUNDLE - same as config LIBS_BUNDLE option
  • DISTR - used to initialize DISTR_BUILD option when -DPACK=1 is used.
  • DIAGNOSTIC - makes the output of the cmake configuration much more verbose, explaining everything that happens
  • WRITEB - assumes LIBS_BUNDLE, and if set, will download source archive files for the different tools into the LIBS_BUNDLE folder. That is, if a fresh version of the stemmer comes out, you can manually remove libstemmer_c.tgz from the bundle and then run a one-shot WRITEB=1 cmake ... - it will not find the stemmer's sources in the bundle and will then download them from the vendor's site into the bundle (without WRITEB they would be downloaded into a temporary folder inside the build and would disappear when you wipe the build folder).

At the end of the configuration you may see what is available and will be used in a list like this one:

  1. -- Enabled features compiled in:
  2. * Galera, replication of tables
  3. * re2, a regular expression library
  4. * stemmer, stemming library (Snowball)
  5. * icu, International Components for Unicode
  6. * OpenSSL, for encrypted networking
  7. * ZLIB, for compressed data and networking
  8. * ODBC, for indexing MSSQL (windows) and generic ODBC sources with indexer
  9. * EXPAT, for indexing xmlpipe sources with indexer
  10. * Iconv, for support different encodings when indexing xmlpipe sources with indexer
  11. * Mysql, for indexing mysql sources with indexer
  12. * PostgreSQL, for indexing postgresql sources with indexer

Building

  1. cmake --build . --config RelWithDebInfo

Installation

To install run:

  1. cmake --install . --config RelWithDebInfo

To install into a custom (non-default) folder, run:

  1. cmake --install . --prefix path/to/build --config RelWithDebInfo

Building packages

For building a package, use the target package. It will build the package according to the selection provided by the -DDISTR_BUILD option. By default it will be a simple .zip or .tgz archive with all the binaries and supplementary files.

  1. cmake --build . --target package --config RelWithDebInfo

Some advanced things about building

Recompilation (update) on single-config

If you didn't change the path for the sources and build, just move to your build folder and run:

  1. cmake .
  2. cmake --build . --clean-first --config RelWithDebInfo

If for any reason it doesn't work, you can delete the file CMakeCache.txt located in the build folder. After this step you have to run cmake again, pointing to the source folder and configuring the options.

If that also doesn't help, just wipe out your build folder and start from scratch.

Build types

In short - just use --config RelWithDebInfo as written above; you can't go wrong with it.

We use two build types. For development it is Debug - it assigns compiler flags for optimization and other things in a way that is very friendly for development, meaning debug runs with step-by-step execution. However, the binaries produced are quite large and slow for production.

For releases we use another type - RelWithDebInfo - which means 'release build with debug info'. It produces production binaries with embedded debug info. The latter is then split away into separate debuginfo packages, which are stored alongside the release packages and may be used in case of issues like crashes - for investigation and bugfixing. CMake also provides the Release and MinSizeRel types, but we don't use them. If a build type is not available, CMake will make a noconfig build.

Build system generators

There are two types of generators: single-config and multi-config.

  • single-config generators need the build type provided at configuration time, via the CMAKE_BUILD_TYPE parameter. If it is not defined, the build falls back to the RelWithDebInfo type, which is quite fine if you just want to build Manticore from sources and are not going to participate in development. For an explicit build you should provide the build type, like -DCMAKE_BUILD_TYPE=Debug.
  • multi-config generators select the build type during the build, so it should be provided with the --config option; otherwise a kind of noconfig will be built, which is quite strange and not desirable. So you should always specify the build type, like --config Debug.

If you want to specify the build type but don't want to care about whether it is a 'single' or 'multi' config generator - just provide the necessary keys in both places. I.e., configure with -DCMAKE_BUILD_TYPE=Debug, and then build with --config Debug. Just make sure that both values are the same. If the target builder is single-config, it will consume the configuration param. If it is multi-config, the configuration param will be ignored, but the correct build configuration will then be selected by the --config key.
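For example, a generator-agnostic Debug build provides the type in both places:

  1. cmake -DCMAKE_BUILD_TYPE=Debug ..
  2. cmake --build . --config Debug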

If you want RelWithDebInfo (i.e. just a build for production) and know you're on a single-config platform (that is, everything except Windows), you can omit the --config flag on the cmake invocation. The default CMAKE_BUILD_TYPE=RelWithDebInfo will then be configured and used, and all the commands for 'building', 'installation' and 'building package' become shorter.

Explicitly select build system generators

CMake is a tool that does not perform the building by itself; instead, it generates rules for a local build system. Usually it determines the available build system well, but sometimes you may need to provide a generator explicitly. You can run cmake -G to review the list of available generators.

  • on Windows, if you have more than one version of Visual Studio installed, you might need to specify which one to use, as in:

    1. cmake -G "Visual Studio 16 2019" ....
  • on all other platforms - usually Unix Makefiles are used, but you can specify another generator, such as Ninja or Ninja Multi-Config, as in:

    1. cmake -GNinja ...

    or

    1. cmake -G"Ninja Multi-Config" ...

    Ninja Multi-Config is quite useful, as it is really 'multi-config' and available on Linux/macOS/BSD. With this generator you may shift the choice of configuration type to build time, and you may also build several configurations in one and the same build folder, changing only the --config param, as sketched below.
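    For example, a sketch of building two configurations from one build folder:

    1. cmake -G"Ninja Multi-Config" ..
    2. cmake --build . --config Debug
    3. cmake --build . --config RelWithDebInfo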

Caveats

  1. If you want to finally build a full-featured RPM package, the path to the build directory must be long enough in order to correctly build the debug symbols - like /manticore012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789, for example. That is because RPM tools modify the path over the compiled binaries when building the debug info, and they can only overwrite the existing room, not allocate more. The above-mentioned long path is 100 characters long, and that is quite enough for such a case.

External dependencies

Some libraries must be available on the system if you want to use the features that rely on them.

  • for indexing (the indexer tool): expat, iconv, mysql, odbc, postgresql. Without them, you can only process tsv and csv sources.
  • for serving queries (the searchd daemon): openssl might be necessary.
  • for all of them (required, mandatory!) we need the Boost library. The minimal version is 1.61.0, however we build the binaries with the fresher 1.75.0. Even fresher versions (like 1.76) should also be OK. On Windows you can download pre-built Boost from its site (boost.org) and install it into the default suggested path (that is, C:\boost...). On macOS the one provided in brew is OK. On Linuxes you can check the version available in the official repositories, and if it doesn't match the requirements, you can build it from sources. We need the component 'context'; you can also build the components 'system' and 'program_options', which will be necessary if you also want to build the Galera library from the sources. Look into dist/build_dockers/xxx/boost_175/Dockerfile for a short self-documented script/instruction on how to do it.

On the build system you need the 'dev' or 'devel' versions of those packages installed (i.e. libmysqlclient-devel, unixodbc-devel, etc. Look at our dockerfiles for the names of the concrete packages).
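For example, on Debian/Ubuntu a typical set of development packages might look like the following (the package names vary across distributions, so treat this as a sketch and consult our dockerfiles for the exact lists):

  1. sudo apt install libboost-context-dev libmysqlclient-dev unixodbc-dev libexpat1-dev libpq-dev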

On the systems where the binaries run, these packages should be present at least in their final (non-dev) variants (the devel variants are usually larger, as they include not only the target binaries but also various development stuff like include headers, etc.).

Building on Windows

Apart from the necessary prerequisites, you might need prebuilt expat, iconv, mysql and postgresql client libraries. You have to either build them yourself or contact us to get our build bundle (a simple zip archive with a folder where these targets are located).

See what is compiled

Run indexer -h. It will say which features were configured and built (whether they were set explicitly or found by investigation doesn't matter):

  1. Built on Linux x86_64 by GNU 8.3.1 compiler.
  2. Configured with these definitions: -DDISTR_BUILD=rhel8 -DUSE_SYSLOG=1 -DWITH_GALERA=1 -DWITH_RE2=1 -DWITH_RE2_FORCE_STATIC=1
  3. -DWITH_STEMMER=1 -DWITH_STEMMER_FORCE_STATIC=1 -DWITH_ICU=1 -DWITH_ICU_FORCE_STATIC=1 -DWITH_SSL=1 -DWITH_ZLIB=1 -DWITH_ODBC=1 -DDL_ODBC=1
  4. -DODBC_LIB=libodbc.so.2 -DWITH_EXPAT=1 -DDL_EXPAT=1 -DEXPAT_LIB=libexpat.so.1 -DWITH_ICONV=1 -DWITH_MYSQL=1 -DDL_MYSQL=1
  5. -DMYSQL_LIB=libmariadb.so.3 -DWITH_POSTGRESQL=1 -DDL_POSTGRESQL=1 -DPOSTGRESQL_LIB=libpq.so.5 -DLOCALDATADIR=/var/lib/manticore/data
  6. -DFULL_SHARE_DIR=/usr/share/manticore

Migration from Sphinx Search

Sphinx 2.x -> Manticore 2.x

Manticore Search 2.x maintains compatibility with Sphinxsearch 2.x and can load existing tables created by Sphinxsearch. In most cases, upgrading is just a matter of replacing the binaries.

Instead of sphinx.conf (on Linux normally located at /etc/sphinxsearch/sphinx.conf), Manticore by default uses /etc/manticoresearch/manticore.conf. It also runs under a different user and uses different folders.

The systemd service name has changed from sphinx/sphinxsearch to manticore, and the service runs under the user manticore (Sphinx was using sphinx or sphinxsearch). It also uses a different folder for the PID file.

The folders used by default are /var/lib/manticore, /var/log/manticore, /var/run/manticore. You can still use the existing Sphinx config, but you need to manually change the permissions of the /var/lib/sphinxsearch and /var/log/sphinxsearch folders - or just globally rename 'sphinx' to 'manticore' in the system files. If you use other folders (for data, wordforms files etc.), their ownership must also be switched to the user manticore. The pid_file location should be changed to match manticore.service: /var/run/manticore/searchd.pid.
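For example, switching the ownership of the existing Sphinx folders might look like this (assuming the manticore user and group created by the package):

  1. sudo chown -R manticore:manticore /var/lib/sphinxsearch /var/log/sphinxsearch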

If you want to use the Manticore folders instead, the table files need to be moved to the new data folder (/var/lib/manticore) and the permissions changed to the user manticore.

Sphinx 2.x / Manticore 2.x -> Manticore 3.x

Upgrading from Sphinx / Manticore 2.x to 3.x is not straightforward, because the table storage engine received a massive upgrade and the new searchd can't load older tables and upgrade them to the new format on the fly.

Manticore Search 3 has a redesigned table storage. Tables created with Manticore/Sphinx 2.x cannot be loaded by Manticore Search 3 without conversion. Because of the 4GB limitation, a real-time table in 2.x could still have several disk chunks after an optimize operation. After upgrading to 3.x, these tables can now be optimized to one disk chunk with the usual OPTIMIZE command. Index files also changed. The only component that didn't get any structural changes is the .spp file (hitlists). .sps (strings/json) and .spm (MVA) are now held by .spb (var-length attributes). The new format still has an .spm file, but it is now used for the row map (previously it was dedicated to MVA attributes). The newly added extensions are .spt (docid lookup), .sphi (secondary index histograms) and .spds (document storage). In case you are using scripts that manipulate table files, they should be adapted to the new file extensions.

The upgrade procedure may differ depending on your setup (the number of servers in the cluster, whether you have HA or not, etc.), but in general it's about creating new 3.x table versions and replacing your existing ones with them, along with replacing the older 2.x binaries with the new ones.

There are two special requirements to take care of:

  • Real-time tables need to be flushed using FLUSH RAMCHUNK
  • Plain tables with kill-lists require adding a new directive in table configuration (see killlist_target)

Manticore Search 3 includes a new tool - index_converter - that can convert Sphinx 2.x / Manticore 2.x tables to the 3.x format. index_converter comes in a separate package, which should be installed first. Using the conversion tool, create 3.x versions of your tables. index_converter can write the new files in the existing data folder while backing up the old files, or it can write the new files to a chosen folder.

Basic upgrade instruction

If you have a single server:

  • install the manticore-converter package
  • use index_converter to create new versions of the tables in a folder different from the existing data folder (using the --output-dir option)
  • stop the existing Manticore/Sphinx, upgrade to 3.0, move the new tables to the data folder, and start Manticore

To minimize downtime, you can copy the 2.x tables, config (you'll need to edit the paths for tables and logs, and use different ports) and binaries to a separate location, start this copy on a separate port, and point your application to it. After the upgrade to 3.0 is made and the new server is started, you can point the application back to the normal ports. If all is good, stop the 2.x copy and delete the files to free up the space.

If you have a spare box (like a testing or staging server), you can first do the table upgrade there, and even install Manticore 3 to perform several tests, and if everything is OK, copy the new table files to the production server. If you have multiple servers which can be pulled out of production, do it one by one and perform the upgrade on each. For distributed setups, a 2.x searchd can work as a master with 3.x nodes, so you can upgrade the data nodes first and the master node at the end.

No changes have been made to how clients connect to the engine, nor to the querying modes or query behavior.

kill-lists in Sphinx / Manticore 2.x vs Manticore 3.x

Kill-lists have been redesigned in Manticore Search 3. In previous versions, kill-lists were applied at query time to the result set provided by each previously searched table.

Thus, in 2.x the table order at query time mattered. For example, if a delta table had a kill-list, then in order to apply it against the main table the order had to be main, delta (either in a distributed table or in the FROM clause).

In Manticore 3, kill-lists are applied to a table when it is loaded during searchd startup or gets rotated. The new directive killlist_target in the table configuration specifies the target tables and defines which doc ids from the source table should be used for suppression. These can be ids from the defined kill-list, actual doc ids of the table, or both.
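For illustration, a minimal config sketch (the table names are hypothetical; the :kl suffix means "take ids from the kill-list of the source table"; in older configs the section keyword is index rather than table):

  1. table delta {
  2.     ...
  3.     killlist_target = main:kl
  4. }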

Documents from the kill-lists are deleted from the target tables; they are not returned in results even if the search doesn't include the table that provided the kill-list. Because of that, the order of tables for searching no longer matters. Now delta,main and main,delta will provide the same results.

In previous versions, tables were rotated following the order in the configuration file. In Manticore 3, the table rotation order is much smarter and works in accordance with the kill-list targets. Before starting to rotate tables, the server looks for chains of tables formed by the killlist_target definitions. It will first rotate tables that are not referenced anywhere as kill-list targets. Next it will rotate tables targeted by the already rotated tables, and so on. For example, if we do indexer --all and have 3 tables: main, delta_big (which targets main) and delta_small (which targets delta_big), then delta_small is rotated first, then delta_big, and finally main. This is to ensure that when a dependent table is rotated, it gets the most up-to-date kill-lists from the other tables.

Configuration keys removed in Manticore 3.x

  • docinfo - everything is now extern
  • inplace_docinfo_gap - not needed anymore
  • mva_updates_pool - MVAs no longer have a dedicated pool for updates, as they can now be updated directly in the blob (see below).

Updating var-length attributes in Manticore 3.x

String, JSON and MVA attributes can be updated in Manticore 3.x using the UPDATE statement.

In 2.x, string attributes required REPLACE; for JSON it was only possible to update scalar properties (as they were fixed-width); and MVAs could be updated using the MVA pool. Now updates are performed directly on the blob component. One setting that may require tuning is attr_update_reserve, which allows changing the extra space allocated at the end of the blob, used to avoid frequent resizes in case the new values are bigger than the existing values in the blob.
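For example, over the MySQL protocol such updates are now plain UPDATE statements (a sketch; the table and attribute names are illustrative):

  1. mysql -h0 -P9306 -e "UPDATE mytable SET tags=(3,6,4) WHERE id=1;"
  2. mysql -h0 -P9306 -e "UPDATE mytable SET j.price=18.5 WHERE id=1;"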

Document IDs in Manticore 3.x

Doc ids used to be UNSIGNED 64-bit integers. Now they are POSITIVE SIGNED 64-bit integers.

RT mode in Manticore 3.x

Read here about the RT mode

Special suffixes since Manticore 3.x

Manticore 3.x recognizes and parses special suffixes, which make it easier to use numeric values with a special meaning. The common form for them is an integer number + a literal, like 10k or 100d, but not 40.3s (since 40.3 is not an integer), and not 2d 4h (since there are two values, not one). Literals are case-insensitive, so 10W is the same as 10w. There are 2 types of such suffixes currently supported:

  • Size suffixes - can be used in parameters that define the size of something (a memory buffer, a disk file, a limit of RAM, etc.) in bytes. "Naked" numbers in those places literally mean the size in bytes (octets). Size values take the suffix k for kilobytes (1k=1024), m for megabytes (1m=1024k), g for gigabytes (1g=1024m) and t for terabytes (1t=1024g).
  • Time suffixes - can be used in parameters defining time interval values like delays, timeouts, etc. "Naked" values for those parameters usually have a documented scale, and you must know whether their numbers, say 100, mean '100 seconds' or '100 milliseconds'. Instead of guessing, you can just write a suffixed value, and its meaning will be fully determined by the suffix. Time values take the suffix us for useconds (microseconds), ms for milliseconds, s for seconds, m for minutes, h for hours, d for days and w for weeks.
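For example, in a config file the following pairs would be equivalent (a sketch, assuming these particular settings accept suffixed values):

  1. rt_mem_limit = 524288k # the same as 512m
  2. agent_query_timeout = 10s # the same as 10000 (milliseconds)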

index_converter

index_converter is a tool for converting tables created with Sphinx/Manticore Search 2.x to the Manticore Search 3.x table format. The tool can be used in several different ways:

Convert one table at a time

  1. $ index_converter --config /home/myuser/manticore.conf --index tablename

Convert all tables

  1. $ index_converter --config /home/myuser/manticore.conf --all

Convert tables found in a folder

  1. $ index_converter --path /var/lib/manticoresearch/data --all

The new version of a table is written by default in the same folder. The previous version's files are saved with an .old extension in their names. An exception is the .spp (hitlists) file, which is the only table component that didn't have any changes in the new format.

You can save the new table version to a different folder using the --output-dir option:

  1. $ index_converter --config /home/myuser/manticore.conf --all --output-dir /new/path

Convert kill lists

A special case is tables containing kill-lists. As the behaviour of kill-lists has changed (see killlist_target), the delta table should know which tables are the targets for applying its kill-list. There are 3 ways to have a converted table ready for setting the target tables for applying kill-lists:

  • Use --killlist-target when converting a table

    1. $ index_converter --config /home/myuser/manticore.conf --index deltaindex --killlist-target mainindex:kl
  • Add killlist_target in the configuration before doing the conversion

  • use the ALTER ... KILLLIST_TARGET command after conversion

Complete list of index_converter options

Here’s the complete list of index_converter options:

  • --config <file> (-c <file> for short) tells index_converter to use the given file as its configuration. Normally, it will look for manticore.conf in the installation directory (e.g. /usr/local/manticore/etc/manticore.conf if installed into /usr/local/manticore), followed by the current directory you are in when calling index_converter from the shell.
  • --index specifies which table should be converted
  • --path - instead of using a config file, a path containing table(s) can be used
  • --strip-path - strips path from filenames referenced by table: stopwords, exceptions and wordforms
  • --large-docid - allows converting documents with ids larger than 2^63 and displays a warning; otherwise the tool just exits with an error on such a large id. This option was added because in Manticore 3.x doc ids are signed bigint, while previously they were unsigned
  • --output-dir <dir> - writes the new files in a chosen folder rather than the same location as the existing table files. When this option is set, the existing table files will remain untouched at their location.
  • --all - converts all tables from the config
  • --killlist-target <targets> - sets the target tables for which kill-lists will be applied. This option should be used only in conjunction with the --index option

Quick start guide

Install and start Manticore

You can install and start Manticore easily on Ubuntu, Centos, Debian, Windows and MacOS, or use Manticore as a Docker container.

Ubuntu and Debian:

  1. wget https://repo.manticoresearch.com/manticore-repo.noarch.deb
  2. sudo dpkg -i manticore-repo.noarch.deb
  3. sudo apt update
  4. sudo apt install manticore manticore-columnar-lib
  5. sudo systemctl start manticore

Centos:

  1. sudo yum install https://repo.manticoresearch.com/manticore-repo.noarch.rpm
  2. sudo yum install manticore manticore-columnar-lib
  3. sudo systemctl start manticore

Windows:

  • Download the Windows archive from https://manticoresearch.com/install/
  • Extract all files from the archive to C:\Manticore
    1. C:\Manticore\bin\searchd --install --config C:\Manticore\sphinx.conf.in --servicename Manticore
  • Start Manticore from the Services snap-in of the Microsoft Management Console

MacOS:

  1. brew install manticoresearch
  2. brew services start manticoresearch

Docker:

  1. docker pull manticoresearch/manticore
  2. docker run -e EXTRA=1 --name manticore -p9306:9306 -p9308:9308 -p9312:9312 -d manticoresearch/manticore

For persisting your data directory, read how to use Manticore docker in production.

Connect to Manticore

By default Manticore is waiting for your connections on:

  • port 9306 for MySQL clients
  • port 9308 for HTTP/HTTPS connections
  • port 9312 for connections from other Manticore nodes and clients based on Manticore binary API
SQL:

  1. mysql -h0 -P9306

HTTP:

HTTP is a stateless protocol, so it doesn't require any special connection phase:

  1. curl -s "http://localhost:9308/search"

PHP:

  1. // https://github.com/manticoresoftware/manticoresearch-php
  2. require_once __DIR__ . '/vendor/autoload.php';
  3. $config = ['host'=>'127.0.0.1','port'=>9308];
  4. $client = new \Manticoresearch\Client($config);

Python:

  1. # https://github.com/manticoresoftware/manticoresearch-python
  2. import manticoresearch
  3. config = manticoresearch.Configuration(
  4.     host = "http://127.0.0.1:9308"
  5. )
  6. client = manticoresearch.ApiClient(config)
  7. indexApi = manticoresearch.IndexApi(client)
  8. searchApi = manticoresearch.SearchApi(client)
  9. utilsApi = manticoresearch.UtilsApi(client)

Javascript:

  1. // https://github.com/manticoresoftware/manticoresearch-javascript
  2. var Manticoresearch = require('manticoresearch');
  3. var client = new Manticoresearch.ApiClient();
  4. client.basePath = "http://127.0.0.1:9308";
  5. indexApi = new Manticoresearch.IndexApi(client);
  6. searchApi = new Manticoresearch.SearchApi(client);
  7. utilsApi = new Manticoresearch.UtilsApi(client);

Java:

  1. // https://github.com/manticoresoftware/manticoresearch-java
  2. import com.manticoresearch.client.*;
  3. import com.manticoresearch.client.model.*;
  4. import com.manticoresearch.client.api.*;
  5. ...
  6. ApiClient client = Configuration.getDefaultApiClient();
  7. client.setBasePath("http://127.0.0.1:9308");
  8. ...
  9. IndexApi indexApi = new IndexApi(client);
  10. SearchApi searchApi = new SearchApi(client);
  11. UtilsApi utilsApi = new UtilsApi(client);

Create a table

Let’s now create a table called “products” with 2 fields:

  • title - full-text field which will contain our product’s title
  • price - of type “float”

Note that it is possible to omit creating a table with an explicit create statement. For more information, see Auto schema.

SQL:

  1. create table products(title text, price float) morphology='stem_en';

HTTP:

  1. POST /cli -d "create table products(title text, price float) morphology='stem_en'"

PHP:

  1. $index = new \Manticoresearch\Index($client);
  2. $index->setName('products');
  3. $index->create([
  4.     'title'=>['type'=>'text'],
  5.     'price'=>['type'=>'float'],
  6. ],['morphology' => 'stem_en']);

Python:

  1. utilsApi.sql('create table products(title text, price float) morphology=\'stem_en\'')

Javascript:

  1. res = await utilsApi.sql('create table products(title text, price float) morphology=\'stem_en\'');

Java:

  1. utilsApi.sql("create table products(title text, price float) morphology='stem_en'");

Response

SQL:

  1. Query OK, 0 rows affected (0.02 sec)

JSON:

  1. {
  2. "total":0,
  3. "error":"",
  4. "warning":""
  5. }

Add documents

Let's now add a few documents to the table:

SQL:

  1. insert into products(title,price) values ('Crossbody Bag with Tassel', 19.85), ('microfiber sheet set', 19.99), ('Pet Hair Remover Glove', 7.99);

JSON:

"id":0 or no id forces automatic ID generation.

  1. POST /insert
  2. {
  3. "index":"products",
  4. "doc":
  5. {
  6. "title" : "Crossbody Bag with Tassel",
  7. "price" : 19.85
  8. }
  9. }
  10. POST /insert
  11. {
  12. "index":"products",
  13. "doc":
  14. {
  15. "title" : "microfiber sheet set",
  16. "price" : 19.99
  17. }
  18. }
  19. POST /insert
  20. {
  21. "index":"products",
  22. "doc":
  23. {
  24. "title" : "Pet Hair Remover Glove",
  25. "price" : 7.99
  26. }
  27. }

PHP:

  1. $index->addDocuments([
  2. ['title' => 'Crossbody Bag with Tassel', 'price' => 19.85],
  3. ['title' => 'microfiber sheet set', 'price' => 19.99],
  4. ['title' => 'Pet Hair Remover Glove', 'price' => 7.99]
  5. ]);

Python:

  1. indexApi.insert({"index" : "products", "doc" : {"title" : "Crossbody Bag with Tassel", "price" : 19.85}})
  2. indexApi.insert({"index" : "products", "doc" : {"title" : "microfiber sheet set", "price" : 19.99}})
  3. indexApi.insert({"index" : "products", "doc" : {"title" : "Pet Hair Remover Glove", "price" : 7.99}})

Javascript:

  1. res = await indexApi.insert({"index" : "products", "doc" : {"title" : "Crossbody Bag with Tassel", "price" : 19.85}});
  2. res = await indexApi.insert({"index" : "products", "doc" : {"title" : "microfiber sheet set", "price" : 19.99}});
  3. res = await indexApi.insert({"index" : "products", "doc" : {"title" : "Pet Hair Remover Glove", "price" : 7.99}});

Java:

  1. InsertDocumentRequest newdoc = new InsertDocumentRequest();
  2. HashMap<String,Object> doc = new HashMap<String,Object>(){{
  3. put("title","Crossbody Bag with Tassel");
  4. put("price",19.85);
  5. }};
  6. newdoc.index("products").setDoc(doc);
  7. sqlresult = indexApi.insert(newdoc);
  8. newdoc = new InsertDocumentRequest();
  9. doc = new HashMap<String,Object>(){{
  10. put("title","microfiber sheet set");
  11. put("price",19.99);
  12. }};
  13. newdoc.index("products").setDoc(doc);
  14. sqlresult = indexApi.insert(newdoc);
  15. newdoc = new InsertDocumentRequest();
  16. doc = new HashMap<String,Object>(){{
  17. put("title","Pet Hair Remover Glove");
  18. put("price",7.99);
  19. }};
  20. newdoc.index("products").setDoc(doc);
  21. indexApi.insert(newdoc);

Response

SQL:

  1. Query OK, 3 rows affected (0.01 sec)

JSON:

  1. {
  2. "_index": "products",
  3. "_id": 0,
  4. "created": true,
  5. "result": "created",
  6. "status": 201
  7. }
  8. {
  9. "_index": "products",
  10. "_id": 0,
  11. "created": true,
  12. "result": "created",
  13. "status": 201
  14. }
  15. {
  16. "_index": "products",
  17. "_id": 0,
  18. "created": true,
  19. "result": "created",
  20. "status": 201
  21. }

Let's find one of the documents. The query we will use is 'remove hair'. As you can see, it finds the document with the title 'Pet Hair Remover Glove' and highlights 'Hair Remover' in it, even though the query has 'remove', not 'remover'. This is because when we created the table, we turned on English stemming (morphology "stem_en").

SQL:

  1. select id, highlight(), price from products where match('remove hair');

JSON:

  1. POST /search
  2. {
  3. "index": "products",
  4. "query": { "match": { "title": "remove hair" } },
  5. "highlight":
  6. {
  7. "fields": ["title"]
  8. }
  9. }

PHP:

  1. $result = $index->search('@title remove hair')->highlight(['title'])->get();
  2. foreach($result as $doc)
  3. {
  4. echo "Doc ID: ".$doc->getId()."\n";
  5. echo "Doc Score: ".$doc->getScore()."\n";
  6. echo "Document fields:\n";
  7. print_r($doc->getData());
  8. echo "Highlights: \n";
  9. print_r($doc->getHighlight());
  10. }

Python:

  1. searchApi.search({"index":"products","query":{"query_string":"@title remove hair"},"highlight":{"fields":["title"]}})

Javascript:

  1. res = await searchApi.search({"index":"products","query":{"query_string":"@title remove hair"},"highlight":{"fields":["title"]}});

Java:

  1. query = new HashMap<String,Object>();
  2. query.put("query_string","@title remove hair");
  3. searchRequest = new SearchRequest();
  4. searchRequest.setIndex("products");
  5. searchRequest.setQuery(query);
  6. HashMap<String,Object> highlight = new HashMap<String,Object>(){{
  7. put("fields",new String[] {"title"});
  8. }};
  9. searchRequest.setHighlight(highlight);
  10. searchResponse = searchApi.search(searchRequest);

Response

SQL:

  1. +---------------------+-----------------------------------------+----------+
  2. | id                  | highlight()                             | price    |
  3. +---------------------+-----------------------------------------+----------+
  4. | 1513686608316989452 | Pet <strong>Hair Remover</strong> Glove | 7.990000 |
  5. +---------------------+-----------------------------------------+----------+
  6. 1 row in set (0.00 sec)

JSON:

  1. {
  2. "took": 0,
  3. "timed_out": false,
  4. "hits": {
  5. "total": 1,
  6. "hits": [
  7. {
  8. "_id": "1513686608316989452",
  9. "_score": 1680,
  10. "_source": {
  11. "price": 7.99,
  12. "title": "Pet Hair Remover Glove"
  13. },
  14. "highlight": {
  15. "title": [
  16. "Pet <strong>Hair Remover</strong> Glove"
  17. ]
  18. }
  19. }
  20. ]
  21. }
  22. }

PHP:

  1. Doc ID: 1513686608316989452
  2. Doc Score: 1680
  3. Document fields:
  4. Array
  5. (
  6. [price] => 7.99
  7. [title] => Pet Hair Remover Glove
  8. )
  9. Highlights:
  10. Array
  11. (
  12. [title] => Array
  13. (
  14. [0] => Pet <strong>Hair Remover</strong> Glove
  15. )
  16. )


Python:

  1. {'hits': {'hits': [{u'_id': u'1513686608316989452',
  2. u'_score': 1680,
  3. u'_source': {u'title': u'Pet Hair Remover Glove', u'price':7.99},
  4. u'highlight':{u'title':[u'Pet <strong>Hair Remover</strong> Glove']}}}],
  5. 'total': 1},
  6. 'profile': None,
  7. 'timed_out': False,
  8. 'took': 0}

Javascript:

  1. {"hits": {"hits": [{"_id": "1513686608316989452",
  2. "_score": 1680,
  3. "_source": {"title": "Pet Hair Remover Glove", "price":7.99},
  4. "highlight":{"title":["Pet <strong>Hair Remover</strong> Glove"]}}],
  5. "total": 1},
  6. "profile": null,
  7. "timed_out": false,
  8. "took": 0}

Java:

  1. class SearchResponse {
  2. took: 84
  3. timedOut: false
  4. hits: class SearchResponseHits {
  5. total: 1
  6. maxScore: null
  7. hits: [{_id=1513686608316989452, _score=1, _source={price=7.99, title=Pet Hair Remover Glove}, highlight={title=[Pet <strong>Hair Remover</strong> Glove]}}]
  8. aggregations: null
  9. }
  10. profile: null
  11. }

Update

Let's assume we now want to update the document - change the price to 18.5. This can be done by filtering by any field, but normally you know the document id and update based on that.

SQL:

  1. update products set price=18.5 where id = 1513686608316989452;

JSON:

  1. POST /update
  2. {
  3. "index": "products",
  4. "id": 1513686608316989452,
  5. "doc":
  6. {
  7. "price": 18.5
  8. }
  9. }

PHP:

  1. $doc = [
  2. 'body' => [
  3. 'index' => 'products',
  4. 'id' => 1513686608316989452,
  5. 'doc' => [
  6. 'price' => 18.5
  7. ]
  8. ]
  9. ];
  10. $response = $client->update($doc);

Python:

  1. indexApi = manticoresearch.IndexApi(client)
  2. indexApi.update({"index" : "products", "id" : 1513686608316989452, "doc" : {"price":18.5}})

Javascript:

  1. res = await indexApi.update({"index" : "products", "id" : 1513686608316989452, "doc" : {"price":18.5}});

Java:

  1. UpdateDocumentRequest updateRequest = new UpdateDocumentRequest();
  2. doc = new HashMap<String,Object >(){{
  3. put("price",18.5);
  4. }};
  5. updateRequest.index("products").id(1513686608316989452L).setDoc(doc);
  6. indexApi.update(updateRequest);

Response

SQL:

  1. Query OK, 1 row affected (0.00 sec)

JSON:

  1. {
  2. "_index": "products",
  3. "_id": 1513686608316989452,
  4. "result": "updated"
  5. }

Delete

Let's now delete all documents with a price lower than 10.

SQL:

  1. delete from products where price < 10;

JSON:

  1. POST /delete
  2. {
  3. "index": "products",
  4. "query":
  5. {
  6. "range":
  7. {
  8. "price":
  9. {
  10. "lte": 10
  11. }
  12. }
  13. }
  14. }

PHP:

  1. $result = $index->deleteDocuments(new \Manticoresearch\Query\Range('price',['lte'=>10]));

Python:

  1. indexApi.delete({"index" : "products", "query": {"range":{"price":{"lte":10}}}})

Javascript:

  1. res = await indexApi.delete({"index" : "products", "query": {"range":{"price":{"lte":10}}}});

Java:

  1. DeleteDocumentRequest deleteRequest = new DeleteDocumentRequest();
  2. query = new HashMap<String,Object>();
  3. query.put("range",new HashMap<String,Object>(){{
  4.     put("price",new HashMap<String,Object>(){{
  5.         put("lte",10);
  6.     }});
  7. }});
  8. deleteRequest.index("products").setQuery(query);
  9. indexApi.delete(deleteRequest);

Response

SQL:

  1. Query OK, 1 row affected (0.00 sec)

JSON:

  1. {
  2. "_index": "products",
  3. "deleted": 1
  4. }

PHP:

  1. Array
  2. (
  3. [_index] => products
  4. [deleted] => 1
  5. )