ClickHouse Tutorial

What to Expect from This Tutorial?

By going through this tutorial, you'll learn how to set up a basic ClickHouse cluster. It'll be small, but fault-tolerant and scalable. We will use one of the example datasets to fill it with data and execute some demo queries.

Single Node Setup

To postpone the complexities of a distributed environment, we'll start by deploying ClickHouse on a single server or virtual machine. ClickHouse is usually installed from deb or rpm packages, but there are alternatives for the operating systems that do not support them.

For example, if you have chosen deb packages, execute:

  sudo apt-get install dirmngr
  sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4
  echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" | sudo tee /etc/apt/sources.list.d/clickhouse.list
  sudo apt-get update
  sudo apt-get install -y clickhouse-server clickhouse-client

What do we have in the packages that were just installed:

  • clickhouse-client package contains the clickhouse-client application, an interactive ClickHouse console client.
  • clickhouse-common package contains a ClickHouse executable file.
  • clickhouse-server package contains configuration files to run ClickHouse as a server.

Server config files are located in /etc/clickhouse-server/. Before going further, please notice the <path> element in config.xml. Path determines the location for data storage, so it should be on a volume with large disk capacity; the default value is /var/lib/clickhouse/. If you want to adjust the configuration, it's not really handy to edit config.xml directly, considering it might get rewritten on future package updates. The recommended way to override config elements is to create files in the config.d directory, which serve as "patches" to config.xml.
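For example, a minimal override that moves data storage to another volume could look like this (a sketch only; the file name is arbitrary, and newer releases accept <clickhouse> as the root element instead of <yandex>):

  <!-- /etc/clickhouse-server/config.d/data-path.xml (hypothetical file name) -->
  <yandex>
      <!-- Overrides the <path> element from config.xml -->
      <path>/mnt/bigdisk/clickhouse/</path>
  </yandex>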

As you might have noticed, clickhouse-server is not launched automatically after package installation. It won't be automatically restarted after updates, either. The way you start the server depends on your init system; usually it's:

  sudo service clickhouse-server start

or

  sudo /etc/init.d/clickhouse-server start

The default location for server logs is /var/log/clickhouse-server/. The server is ready to handle client connections once it logs the Ready for connections message.
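For example, assuming the default log file name, you can watch for that message like this:

  sudo tail -f /var/log/clickhouse-server/clickhouse-server.log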

Once clickhouse-server is up and running, we can use clickhouse-client to connect to the server and run some test queries like SELECT 'Hello, world!';.
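For example, in batch mode (note that string literals in ClickHouse use single quotes):

  clickhouse-client --query "SELECT 'Hello, world!'"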

Quick tips for clickhouse-client

Interactive mode:

  clickhouse-client
  clickhouse-client --host=... --port=... --user=... --password=...

Enable multiline queries:

  clickhouse-client -m
  clickhouse-client --multiline

Run queries in batch-mode:

  clickhouse-client --query='SELECT 1'
  echo 'SELECT 1' | clickhouse-client
  clickhouse-client <<< 'SELECT 1'

Insert data from a file in the specified format:

  clickhouse-client --query='INSERT INTO table VALUES' < data.txt
  clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv

Import Sample Dataset

Now it's time to fill our ClickHouse server with some sample data. In this tutorial, we'll use anonymized data from Yandex.Metrica, the first service that ran ClickHouse in production way before it became open source (more on that in the history section). There are multiple ways to import the Yandex.Metrica dataset, and for the sake of the tutorial, we'll go with the most realistic one.

Download and Extract Table Data

  curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
  curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv

The extracted files are about 10GB in size.

Create Tables

Tables are logically grouped into “databases”. There’s a default database, but we’ll create a new one named tutorial:

  clickhouse-client --query "CREATE DATABASE IF NOT EXISTS tutorial"

The syntax for creating tables is way more complicated compared to databases (see reference). In general, the CREATE TABLE statement has to specify three key things:

  1. Name of table to create.
  2. Table schema, i.e. list of columns and their data types.
  3. Table engine and its settings, which determine all the details of how queries to this table will be physically executed.

Yandex.Metrica is a web analytics service, and the sample dataset doesn't cover its full functionality, so there are only two tables to create:

  • hits is a table with each action done by all users on all websites covered by the service.
  • visits is a table that contains pre-built sessions instead of individual actions.

Let’s see and execute the real create table queries for these tables:

  CREATE TABLE tutorial.hits_v1
  (
      `WatchID` UInt64,
      `JavaEnable` UInt8,
      `Title` String,
      `GoodEvent` Int16,
      `EventTime` DateTime,
      `EventDate` Date,
      `CounterID` UInt32,
      `ClientIP` UInt32,
      `ClientIP6` FixedString(16),
      `RegionID` UInt32,
      `UserID` UInt64,
      `CounterClass` Int8,
      `OS` UInt8,
      `UserAgent` UInt8,
      `URL` String,
      `Referer` String,
      `URLDomain` String,
      `RefererDomain` String,
      `Refresh` UInt8,
      `IsRobot` UInt8,
      `RefererCategories` Array(UInt16),
      `URLCategories` Array(UInt16),
      `URLRegions` Array(UInt32),
      `RefererRegions` Array(UInt32),
      `ResolutionWidth` UInt16,
      `ResolutionHeight` UInt16,
      `ResolutionDepth` UInt8,
      `FlashMajor` UInt8,
      `FlashMinor` UInt8,
      `FlashMinor2` String,
      `NetMajor` UInt8,
      `NetMinor` UInt8,
      `UserAgentMajor` UInt16,
      `UserAgentMinor` FixedString(2),
      `CookieEnable` UInt8,
      `JavascriptEnable` UInt8,
      `IsMobile` UInt8,
      `MobilePhone` UInt8,
      `MobilePhoneModel` String,
      `Params` String,
      `IPNetworkID` UInt32,
      `TraficSourceID` Int8,
      `SearchEngineID` UInt16,
      `SearchPhrase` String,
      `AdvEngineID` UInt8,
      `IsArtifical` UInt8,
      `WindowClientWidth` UInt16,
      `WindowClientHeight` UInt16,
      `ClientTimeZone` Int16,
      `ClientEventTime` DateTime,
      `SilverlightVersion1` UInt8,
      `SilverlightVersion2` UInt8,
      `SilverlightVersion3` UInt32,
      `SilverlightVersion4` UInt16,
      `PageCharset` String,
      `CodeVersion` UInt32,
      `IsLink` UInt8,
      `IsDownload` UInt8,
      `IsNotBounce` UInt8,
      `FUniqID` UInt64,
      `HID` UInt32,
      `IsOldCounter` UInt8,
      `IsEvent` UInt8,
      `IsParameter` UInt8,
      `DontCountHits` UInt8,
      `WithHash` UInt8,
      `HitColor` FixedString(1),
      `UTCEventTime` DateTime,
      `Age` UInt8,
      `Sex` UInt8,
      `Income` UInt8,
      `Interests` UInt16,
      `Robotness` UInt8,
      `GeneralInterests` Array(UInt16),
      `RemoteIP` UInt32,
      `RemoteIP6` FixedString(16),
      `WindowName` Int32,
      `OpenerName` Int32,
      `HistoryLength` Int16,
      `BrowserLanguage` FixedString(2),
      `BrowserCountry` FixedString(2),
      `SocialNetwork` String,
      `SocialAction` String,
      `HTTPError` UInt16,
      `SendTiming` Int32,
      `DNSTiming` Int32,
      `ConnectTiming` Int32,
      `ResponseStartTiming` Int32,
      `ResponseEndTiming` Int32,
      `FetchTiming` Int32,
      `RedirectTiming` Int32,
      `DOMInteractiveTiming` Int32,
      `DOMContentLoadedTiming` Int32,
      `DOMCompleteTiming` Int32,
      `LoadEventStartTiming` Int32,
      `LoadEventEndTiming` Int32,
      `NSToDOMContentLoadedTiming` Int32,
      `FirstPaintTiming` Int32,
      `RedirectCount` Int8,
      `SocialSourceNetworkID` UInt8,
      `SocialSourcePage` String,
      `ParamPrice` Int64,
      `ParamOrderID` String,
      `ParamCurrency` FixedString(3),
      `ParamCurrencyID` UInt16,
      `GoalsReached` Array(UInt32),
      `OpenstatServiceName` String,
      `OpenstatCampaignID` String,
      `OpenstatAdID` String,
      `OpenstatSourceID` String,
      `UTMSource` String,
      `UTMMedium` String,
      `UTMCampaign` String,
      `UTMContent` String,
      `UTMTerm` String,
      `FromTag` String,
      `HasGCLID` UInt8,
      `RefererHash` UInt64,
      `URLHash` UInt64,
      `CLID` UInt32,
      `YCLID` UInt64,
      `ShareService` String,
      `ShareURL` String,
      `ShareTitle` String,
      `ParsedParams` Nested(
          Key1 String,
          Key2 String,
          Key3 String,
          Key4 String,
          Key5 String,
          ValueDouble Float64),
      `IslandID` FixedString(16),
      `RequestNum` UInt32,
      `RequestTry` UInt8
  )
  ENGINE = MergeTree()
  PARTITION BY toYYYYMM(EventDate)
  ORDER BY (CounterID, EventDate, intHash32(UserID))
  SAMPLE BY intHash32(UserID)
  SETTINGS index_granularity = 8192

  CREATE TABLE tutorial.visits_v1
  (
      `CounterID` UInt32,
      `StartDate` Date,
      `Sign` Int8,
      `IsNew` UInt8,
      `VisitID` UInt64,
      `UserID` UInt64,
      `StartTime` DateTime,
      `Duration` UInt32,
      `UTCStartTime` DateTime,
      `PageViews` Int32,
      `Hits` Int32,
      `IsBounce` UInt8,
      `Referer` String,
      `StartURL` String,
      `RefererDomain` String,
      `StartURLDomain` String,
      `EndURL` String,
      `LinkURL` String,
      `IsDownload` UInt8,
      `TraficSourceID` Int8,
      `SearchEngineID` UInt16,
      `SearchPhrase` String,
      `AdvEngineID` UInt8,
      `PlaceID` Int32,
      `RefererCategories` Array(UInt16),
      `URLCategories` Array(UInt16),
      `URLRegions` Array(UInt32),
      `RefererRegions` Array(UInt32),
      `IsYandex` UInt8,
      `GoalReachesDepth` Int32,
      `GoalReachesURL` Int32,
      `GoalReachesAny` Int32,
      `SocialSourceNetworkID` UInt8,
      `SocialSourcePage` String,
      `MobilePhoneModel` String,
      `ClientEventTime` DateTime,
      `RegionID` UInt32,
      `ClientIP` UInt32,
      `ClientIP6` FixedString(16),
      `RemoteIP` UInt32,
      `RemoteIP6` FixedString(16),
      `IPNetworkID` UInt32,
      `SilverlightVersion3` UInt32,
      `CodeVersion` UInt32,
      `ResolutionWidth` UInt16,
      `ResolutionHeight` UInt16,
      `UserAgentMajor` UInt16,
      `UserAgentMinor` UInt16,
      `WindowClientWidth` UInt16,
      `WindowClientHeight` UInt16,
      `SilverlightVersion2` UInt8,
      `SilverlightVersion4` UInt16,
      `FlashVersion3` UInt16,
      `FlashVersion4` UInt16,
      `ClientTimeZone` Int16,
      `OS` UInt8,
      `UserAgent` UInt8,
      `ResolutionDepth` UInt8,
      `FlashMajor` UInt8,
      `FlashMinor` UInt8,
      `NetMajor` UInt8,
      `NetMinor` UInt8,
      `MobilePhone` UInt8,
      `SilverlightVersion1` UInt8,
      `Age` UInt8,
      `Sex` UInt8,
      `Income` UInt8,
      `JavaEnable` UInt8,
      `CookieEnable` UInt8,
      `JavascriptEnable` UInt8,
      `IsMobile` UInt8,
      `BrowserLanguage` UInt16,
      `BrowserCountry` UInt16,
      `Interests` UInt16,
      `Robotness` UInt8,
      `GeneralInterests` Array(UInt16),
      `Params` Array(String),
      `Goals` Nested(
          ID UInt32,
          Serial UInt32,
          EventTime DateTime,
          Price Int64,
          OrderID String,
          CurrencyID UInt32),
      `WatchIDs` Array(UInt64),
      `ParamSumPrice` Int64,
      `ParamCurrency` FixedString(3),
      `ParamCurrencyID` UInt16,
      `ClickLogID` UInt64,
      `ClickEventID` Int32,
      `ClickGoodEvent` Int32,
      `ClickEventTime` DateTime,
      `ClickPriorityID` Int32,
      `ClickPhraseID` Int32,
      `ClickPageID` Int32,
      `ClickPlaceID` Int32,
      `ClickTypeID` Int32,
      `ClickResourceID` Int32,
      `ClickCost` UInt32,
      `ClickClientIP` UInt32,
      `ClickDomainID` UInt32,
      `ClickURL` String,
      `ClickAttempt` UInt8,
      `ClickOrderID` UInt32,
      `ClickBannerID` UInt32,
      `ClickMarketCategoryID` UInt32,
      `ClickMarketPP` UInt32,
      `ClickMarketCategoryName` String,
      `ClickMarketPPName` String,
      `ClickAWAPSCampaignName` String,
      `ClickPageName` String,
      `ClickTargetType` UInt16,
      `ClickTargetPhraseID` UInt64,
      `ClickContextType` UInt8,
      `ClickSelectType` Int8,
      `ClickOptions` String,
      `ClickGroupBannerID` Int32,
      `OpenstatServiceName` String,
      `OpenstatCampaignID` String,
      `OpenstatAdID` String,
      `OpenstatSourceID` String,
      `UTMSource` String,
      `UTMMedium` String,
      `UTMCampaign` String,
      `UTMContent` String,
      `UTMTerm` String,
      `FromTag` String,
      `HasGCLID` UInt8,
      `FirstVisit` DateTime,
      `PredLastVisit` Date,
      `LastVisit` Date,
      `TotalVisits` UInt32,
      `TraficSource` Nested(
          ID Int8,
          SearchEngineID UInt16,
          AdvEngineID UInt8,
          PlaceID UInt16,
          SocialSourceNetworkID UInt8,
          Domain String,
          SearchPhrase String,
          SocialSourcePage String),
      `Attendance` FixedString(16),
      `CLID` UInt32,
      `YCLID` UInt64,
      `NormalizedRefererHash` UInt64,
      `SearchPhraseHash` UInt64,
      `RefererDomainHash` UInt64,
      `NormalizedStartURLHash` UInt64,
      `StartURLDomainHash` UInt64,
      `NormalizedEndURLHash` UInt64,
      `TopLevelDomain` UInt64,
      `URLScheme` UInt64,
      `OpenstatServiceNameHash` UInt64,
      `OpenstatCampaignIDHash` UInt64,
      `OpenstatAdIDHash` UInt64,
      `OpenstatSourceIDHash` UInt64,
      `UTMSourceHash` UInt64,
      `UTMMediumHash` UInt64,
      `UTMCampaignHash` UInt64,
      `UTMContentHash` UInt64,
      `UTMTermHash` UInt64,
      `FromHash` UInt64,
      `WebVisorEnabled` UInt8,
      `WebVisorActivity` UInt32,
      `ParsedParams` Nested(
          Key1 String,
          Key2 String,
          Key3 String,
          Key4 String,
          Key5 String,
          ValueDouble Float64),
      `Market` Nested(
          Type UInt8,
          GoalID UInt32,
          OrderID String,
          OrderPrice Int64,
          PP UInt32,
          DirectPlaceID UInt32,
          DirectOrderID UInt32,
          DirectBannerID UInt32,
          GoodID String,
          GoodName String,
          GoodQuantity Int32,
          GoodPrice Int64),
      `IslandID` FixedString(16)
  )
  ENGINE = CollapsingMergeTree(Sign)
  PARTITION BY toYYYYMM(StartDate)
  ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID)
  SAMPLE BY intHash32(UserID)
  SETTINGS index_granularity = 8192

You can execute those queries using the interactive mode of clickhouse-client (just launch it in a terminal without specifying a query in advance) or try some alternative interface if you want.

As we can see, hits_v1 uses the basic MergeTree engine, while visits_v1 uses the CollapsingMergeTree variant.
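Because of the collapsing engine, aggregate queries over visits_v1 typically weight rows by the Sign column instead of counting them directly. A minimal sketch of that pattern (the same idea appears in the example queries later in this tutorial):

  SELECT StartDate, sum(Sign) AS visits
  FROM tutorial.visits_v1
  GROUP BY StartDate
  ORDER BY StartDate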

Import Data

Data import into ClickHouse is done via an INSERT INTO query, as in many other SQL databases. However, data is usually provided in one of the supported formats instead of a VALUES clause (which is also supported).

The files we downloaded earlier are in tab-separated format, so here's how to import them via the console client:

  clickhouse-client --query "INSERT INTO tutorial.hits_v1 FORMAT TSV" --max_insert_block_size=100000 < hits_v1.tsv
  clickhouse-client --query "INSERT INTO tutorial.visits_v1 FORMAT TSV" --max_insert_block_size=100000 < visits_v1.tsv

ClickHouse has a lot of settings to tune, and one way to specify them in the console client is via arguments, as we can see with --max_insert_block_size. The easiest way to figure out what settings are available, what they mean, and what the defaults are is to query the system.settings table:

  SELECT name, value, changed, description
  FROM system.settings
  WHERE name LIKE '%max_insert_b%'
  FORMAT TSV

  max_insert_block_size 1048576 0 "The maximum block size for insertion, if we control the creation of blocks for insertion."
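Settings don't have to be passed as command-line arguments: they can also be changed for a session with SET, or attached to an individual query via a SETTINGS clause. A small sketch (the values here are only illustrative):

  SET max_insert_block_size = 1048576;

  SELECT count() FROM tutorial.hits_v1 SETTINGS max_threads = 2;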

Optionally, you can OPTIMIZE the tables after import. Tables that are configured with a MergeTree-family engine always merge data parts in the background to optimize data storage (or at least check if it makes sense). These queries just force the table engine to do storage optimization right now instead of some time later:

  clickhouse-client --query "OPTIMIZE TABLE tutorial.hits_v1 FINAL"
  clickhouse-client --query "OPTIMIZE TABLE tutorial.visits_v1 FINAL"

This is an I/O- and CPU-intensive operation, so if the table constantly receives new data, it's better to leave it alone and let merges run in the background.

Now we can check that the tables are successfully imported:

  clickhouse-client --query "SELECT COUNT(*) FROM tutorial.hits_v1"
  clickhouse-client --query "SELECT COUNT(*) FROM tutorial.visits_v1"

Example Queries

  SELECT
      StartURL AS URL,
      AVG(Duration) AS AvgDuration
  FROM tutorial.visits_v1
  WHERE StartDate BETWEEN '2014-03-23' AND '2014-03-30'
  GROUP BY URL
  ORDER BY AvgDuration DESC
  LIMIT 10

  SELECT
      sum(Sign) AS visits,
      sumIf(Sign, has(Goals.ID, 1105530)) AS goal_visits,
      (100. * goal_visits) / visits AS goal_percent
  FROM tutorial.visits_v1
  WHERE (CounterID = 912887) AND (toYYYYMM(StartDate) = 201403) AND (domain(StartURL) = 'yandex.ru')

Cluster Deployment

A ClickHouse cluster is a homogeneous cluster. Steps to set up:

  1. Install ClickHouse server on all machines of the cluster
  2. Set up cluster configs in configuration files
  3. Create local tables on each instance
  4. Create a Distributed table

A Distributed table is actually a kind of "view" into the local tables of a ClickHouse cluster. A SELECT query from a distributed table executes using the resources of all the cluster's shards. You may specify configs for multiple clusters and create multiple distributed tables providing views into different clusters.

Example config for a cluster with three shards, one replica each:

  <remote_servers>
      <perftest_3shards_1replicas>
          <shard>
              <replica>
                  <host>example-perftest01j.yandex.ru</host>
                  <port>9000</port>
              </replica>
          </shard>
          <shard>
              <replica>
                  <host>example-perftest02j.yandex.ru</host>
                  <port>9000</port>
              </replica>
          </shard>
          <shard>
              <replica>
                  <host>example-perftest03j.yandex.ru</host>
                  <port>9000</port>
              </replica>
          </shard>
      </perftest_3shards_1replicas>
  </remote_servers>
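One way to apply this config on every server (a sketch, reusing the config.d override mechanism described earlier; the file name is arbitrary) is to save the block wrapped in the configuration root element:

  <!-- /etc/clickhouse-server/config.d/remote-servers.xml (hypothetical file name) -->
  <yandex>
      <remote_servers>
          ...
      </remote_servers>
  </yandex>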

For further demonstration, let's create a new local table with exactly the same CREATE TABLE query that we used for hits_v1, but with a different table name:

  CREATE TABLE tutorial.hits_local (...) ENGINE = MergeTree() ...

Creating a distributed table providing a view into local tables of the cluster:

  CREATE TABLE tutorial.hits_all AS tutorial.hits_local
  ENGINE = Distributed(perftest_3shards_1replicas, tutorial, hits_local, rand());

A common practice is to create similar Distributed tables on all machines of the cluster. This allows running distributed queries on any machine of the cluster. There's also an alternative option: create a temporary distributed table for a given SELECT query using the remote table function, as in the sketch below.
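A rough sketch of the remote table function, using the hostnames from the config above:

  SELECT count()
  FROM remote('example-perftest01j.yandex.ru,example-perftest02j.yandex.ru,example-perftest03j.yandex.ru', tutorial, hits_local)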

Let's run INSERT SELECT into the Distributed table to spread the table to multiple servers.

  INSERT INTO tutorial.hits_all SELECT * FROM tutorial.hits_v1;
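After the insert finishes, a quick sanity check is to compare row counts between the source table and the distributed one (a sketch):

  clickhouse-client --query "SELECT count() FROM tutorial.hits_v1"
  clickhouse-client --query "SELECT count() FROM tutorial.hits_all"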

Notice

This approach is not suitable for the sharding of large tables. There's a separate tool, clickhouse-copier, that can re-shard arbitrarily large tables.

As you would expect, computationally heavy queries run N times faster when launched on 3 servers instead of one.

In this case, we have used a cluster with 3 shards, each containing a single replica.

To provide resilience in a production environment, we recommend that each shard contain 2-3 replicas spread between multiple datacenters. Note that ClickHouse supports an unlimited number of replicas.

Example config for a cluster of one shard containing three replicas:

  <remote_servers>
      ...
      <perftest_1shards_3replicas>
          <shard>
              <replica>
                  <host>example-perftest01j.yandex.ru</host>
                  <port>9000</port>
              </replica>
              <replica>
                  <host>example-perftest02j.yandex.ru</host>
                  <port>9000</port>
              </replica>
              <replica>
                  <host>example-perftest03j.yandex.ru</host>
                  <port>9000</port>
              </replica>
          </shard>
      </perftest_1shards_3replicas>
  </remote_servers>

To enable native replication, ZooKeeper is required. ClickHouse takes care of data consistency on all replicas and runs a restore procedure after failure automatically. It's recommended to deploy the ZooKeeper cluster on separate servers.

ZooKeeper is not a strict requirement: in some simple cases, you can duplicate the data by writing it into all the replicas from your application code. This approach is not recommended; in this case, ClickHouse won't be able to guarantee data consistency on all replicas. That remains the responsibility of your application.

ZooKeeper locations are specified in the configuration file:

  <zookeeper>
      <node>
          <host>zoo01.yandex.ru</host>
          <port>2181</port>
      </node>
      <node>
          <host>zoo02.yandex.ru</host>
          <port>2181</port>
      </node>
      <node>
          <host>zoo03.yandex.ru</host>
          <port>2181</port>
      </node>
  </zookeeper>

Also, we need to set macros identifying each shard and replica; they are used on table creation:

  <macros>
      <shard>01</shard>
      <replica>01</replica>
  </macros>

If there are no replicas at the moment of replicated table creation, a new first replica is instantiated. If there are already live replicas, the new replica clones data from the existing ones. You have the option to create all replicated tables first and then insert data into them. Another option is to create some replicas and add the others after or during data insertion.

  CREATE TABLE tutorial.hits_replica (...)
  ENGINE = ReplicatedMergeTree(
      '/clickhouse_perftest/tables/{shard}/hits',
      '{replica}'
  )
  ...

Here we use the ReplicatedMergeTree table engine. In the parameters, we specify the ZooKeeper path containing the shard and replica identifiers.

  INSERT INTO tutorial.hits_replica SELECT * FROM tutorial.hits_local;

Replication operates in multi-master mode. Data can be loaded into any replica, and it is then synced with the other instances automatically. Replication is asynchronous, so at a given moment, not all replicas may contain recently inserted data. At least one replica should be up to allow data insertion. The others will sync up data and repair consistency once they become active again. Note that this approach allows for the low possibility of a loss of recently inserted data.
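To observe the replication state, you can, for example, query the system.replicas table on any replica (a sketch showing a few of its columns):

  SELECT database, table, is_leader, total_replicas, active_replicas
  FROM system.replicas
  WHERE table = 'hits_replica'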