Version 4.1.0

Released on 2020/01/15.

Note

If you are upgrading a cluster, you must be running CrateDB 4.0.2 or higher before you upgrade to 4.1.0.

We recommend that you upgrade to the latest 4.0 release before moving to 4.1.0.

A rolling upgrade to 4.1.0 from 4.0.2+ is supported.

Before upgrading, you should back up your data.

Breaking Changes

  • Changed the arithmetic operations *, +, and - on the integer and bigint types to throw an exception on overflow instead of silently wrapping around from positive to negative (or vice versa).
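
    For example (illustrative; the exact error message may vary):

        SELECT 2147483647::integer + 1::integer;
        -- 4.1.0: raises an arithmetic exception
        -- earlier versions: silently wrapped around to -2147483648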

  • Remapped the CrateDB object array data type from the PostgreSQL JSON type to the JSON array type. This may affect drivers that use the PostgreSQL wire protocol to insert data into tables with object array typed columns. For instance, when using the Npgsql driver, it is no longer possible to insert an array of objects into a column of the object array data type by using a SQL statement parameter that has the JSON data type and an array of CLR objects as its value. Instead, use a string array with JSON strings that represent the objects. See the Npgsql documentation for more details.

  • Changed how columns of type geo_point are being communicated to PostgreSQL clients.

    Before, clients were told that those columns are double arrays. Now, they are correctly mapped to the PostgreSQL point type. This means that applications using clients like JDBC will have to be adapted to use PgPoint. (See Geometric DataTypes in JDBC)

  • Changed the behavior of unnest to fully unnest multi-dimensional arrays down to their innermost type, for compatibility with PostgreSQL.
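
    For example (illustrative):

        SELECT * FROM unnest([[1, 2], [3, 4]]);
        -- now returns four rows (1, 2, 3, 4)
        -- previously returned two rows, each holding an integer array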

Deprecations

Changes

Resiliency improvements

  • Allow users to limit the number of threads that may concurrently run merges on a single shard via the merge.scheduler.max_thread_count table parameter.
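
    For example (a sketch; my_table is a hypothetical table):

        ALTER TABLE my_table SET ("merge.scheduler.max_thread_count" = 1);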

  • Some ALTER TABLE operations now internally invoke a single cluster state update instead of multiple cluster state updates. This change improves resiliency because there is no longer a window where the cluster state could be inconsistent.

  • Changed the default garbage collector from Concurrent Mark Sweep to G1GC. This should lead to shorter GC pauses.

  • Added a dynamic bulk sizing mechanism that should prevent INSERT INTO ... FROM query operations from running into out-of-memory errors when the individual records of a table are large.

  • Added the cluster.routing.allocation.total_shards_per_node setting.
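
    For example (a sketch; the value 1000 is arbitrary):

        SET GLOBAL PERSISTENT "cluster.routing.allocation.total_shards_per_node" = 1000;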

Performance improvements

  • Optimized SELECT DISTINCT ... LIMIT n queries. On high-cardinality columns, these queries now execute up to 200% faster and use less memory.

  • The optimizer now utilizes internal statistics to approximate the number of rows returned by various parts of a query plan. This should result in more efficient execution plans for joins.

  • Reduced recovery time by sending file-chunks concurrently. This change only applies when transport communication is secured or compressed. The number of chunks is controlled by the indices.recovery.max_concurrent_file_chunks setting.

  • Added an optimization that allows WHERE clauses on top of derived tables containing table functions to run more efficiently in some cases.

  • Allow users to control how table data is stored and accessed on disk via the store.type table parameter and the node.store.allow_mmap node setting.
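
    For example (a sketch; my_table and its column are hypothetical):

        CREATE TABLE my_table (id INTEGER) WITH ("store.type" = 'hybridfs');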

  • Changed the default table data store type from mmapfs to hybridfs.

SQL Standard and PostgreSQL compatibility improvements

Window function extensions

  • Added support for the lag and lead window functions as enterprise features.
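
    For example (a sketch; the quotes table and its columns are hypothetical):

        SELECT day, price, lag(price) OVER (ORDER BY day) AS prev_price
        FROM quotes;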

  • Added support for ROWS frame definitions in the window definitions of window functions.
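
    For example (a sketch; t and x are hypothetical):

        SELECT x, sum(x) OVER (ORDER BY x ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total
        FROM t;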

  • Added support for named window definitions. This change allows a user to define a list of window definitions in the WINDOW clause that can be referenced in OVER clauses.
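
    For example (a sketch; the quotes table and its columns are hypothetical):

        SELECT avg(price) OVER w, max(price) OVER w
        FROM quotes
        WINDOW w AS (PARTITION BY symbol ORDER BY day);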

  • Added support for offset PRECEDING and offset FOLLOWING in window frame definitions.
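
    For example (a sketch; t and x are hypothetical):

        SELECT x, avg(x) OVER (ORDER BY x ROWS BETWEEN 2 PRECEDING AND 1 FOLLOWING) AS moving_avg
        FROM t;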

Functions and operators

  • Added support for the ALL operator for array and subquery comparisons.
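
    For example (a sketch; t, x, other_table, and y are hypothetical):

        SELECT 1 = ALL([1, 1, 1]);                                    -- array comparison
        SELECT * FROM t WHERE x <> ALL (SELECT y FROM other_table);   -- subquery comparison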

  • Added a PG_GET_KEYWORDS table function.
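
    For example (illustrative):

        SELECT * FROM pg_catalog.pg_get_keywords() LIMIT 5;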

  • Extended CONCAT to do implicit casts, so that calls like SELECT 't' || 5 are supported.

  • Added support for casting values of type object to text. This casting will cause the object to be converted to a JSON string.
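
    For example (illustrative; the exact formatting of the resulting JSON string may vary):

        SELECT {a = 1, b = 'two'}::text;
        -- produces a JSON string such as '{"a":1,"b":"two"}'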

  • Added support for casting to geo_point, geo_shape and object array data types.

    For example:

        cast(['POINT(2 3)','POINT(1 3)'] AS array(geo_point))

  • Added the PG_TYPEOF system function.
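
    For example (illustrative):

        SELECT pg_typeof(1);
        -- returns the name of the argument's type as text (here, 'integer')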

  • Added the INTERVAL data type and extended pg_catalog.generate_series(start, stop, [step]) to work with timestamps and the new INTERVAL type.
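
    For example (a sketch):

        SELECT * FROM generate_series('2020-01-01'::timestamp, '2020-01-05'::timestamp, '1 day'::interval);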

  • Added LPAD and RPAD scalar functions.

  • Added the LTRIM and RTRIM scalar functions.

  • Added LEFT and RIGHT scalar functions.

  • Added TIMEZONE scalar function.

  • Added AT TIME ZONE syntax.
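
    For example (a sketch):

        SELECT '2020-01-15T10:00:00'::timestamp AT TIME ZONE 'Europe/Madrid';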

  • Added support for the ILIKE operator, the case-insensitive complement to LIKE.
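
    For example (a sketch; the users table is hypothetical):

        SELECT name FROM users WHERE name ILIKE 'crate%';
        -- matches 'CrateDB', 'crate', etc., regardless of case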

  • Added support for CIDR notation comparisons through the special-purpose operator << associated with the IP type.

    Statements like 192.168.0.0 << 192.168.0.1/24 evaluate as true, meaning SELECT ip FROM ips_table WHERE ip << 192.168.0.1/24 returns matching IP addresses.

New statements and clauses

Observability improvements

  • Added a failures column to the sys.snapshots table.

  • Improved the error messages that are returned when a relation or schema is not found.

    The error messages may now include suggestions for similarly named tables, which should make typos more apparent and help users figure out they are missing double quotes (e.g., when a table name contains upper case letters).

  • Added a seq_no_stats and a translog_stats column to the sys.shards table.

  • Added the new system table sys.segments, which contains information about the Lucene segments of a shard.

  • Added a node column to sys.jobs_log.

  • Statements containing limits, filters, window functions, or table functions will now be labelled accordingly in Jobs metrics.

Others

  • Changed the default for write.wait_for_active_shards from ALL to 1. This update improves the out-of-the-box experience by allowing a subset of nodes to become unavailable without blocking write operations. See the write.wait_for_active_shards documentation for more details about the implications.
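
    For example, to restore the previous behavior for a table (a sketch; my_table is hypothetical):

        ALTER TABLE my_table SET ("write.wait_for_active_shards" = 'ALL');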

  • Added a phonetic token filter with the following encoders: metaphone, double_metaphone, soundex, refined_soundex, caverphone1, caverphone2, cologne, nysiis, koelnerphonetik, haasephonetik, beider_morse, and daitch_mokotoff.
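
    For example (a sketch; the analyzer and token filter names are hypothetical):

        CREATE ANALYZER phonetic_analyzer (
            TOKENIZER standard,
            TOKEN_FILTERS (
                lowercase,
                my_phonetic WITH (
                    type = 'phonetic',
                    encoder = 'metaphone'
                )
            )
        );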

  • Removed a restriction for predicates in the WHERE clause involving partitioned columns, which could result in a failure response with the message: logical conjunction of the conditions in the WHERE clause which involve partitioned columns led to a query that can't be executed.

  • Support implicit object creation in update statements. For example, UPDATE t SET obj['x'] = 10 will now implicitly set obj to {x: 10} on rows where obj was null.

  • Added the codec parameter to CREATE TABLE to control the compression algorithm used to store data.
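
    For example (a sketch; my_table and its column are hypothetical):

        CREATE TABLE my_table (id INTEGER) WITH (codec = 'best_compression');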

  • The node argument of the REROUTE commands of ALTER TABLE can now either be the ID or the name of a node.
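
    For example (a sketch; the table, shard id, and node names are hypothetical):

        ALTER TABLE my_table REROUTE MOVE SHARD 0 FROM 'node-1' TO 'node-2';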

  • Added support for the PostgreSQL array string literal notation.
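
    For example (a sketch, assuming the string literal is cast to an array type):

        SELECT '{1, 2, 3}'::array(integer);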