Usage Recommendations

CPU Scaling Governor

Always use the performance scaling governor. The on-demand scaling governor performs much worse under constantly high demand.

    $ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
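
To check which governor is currently active on every core (a quick sanity check using the same standard cpufreq sysfs paths as above):

    $ cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c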

CPU Limitations

Processors can overheat. Use dmesg to see if the CPU’s clock rate was limited due to overheating.
The restriction can also be set externally at the datacenter level. You can use turbostat to monitor it under load.
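
For example, a rough way to look for throttling messages and to watch the effective clock rate under load (the exact dmesg wording varies between kernels, and turbostat usually ships with the linux-tools package):

    $ sudo dmesg | grep -i -e throttl -e 'temperature above threshold'
    $ sudo turbostat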

RAM

For small amounts of data (up to ~200 GB compressed), it is best to use as much memory as the volume of data.
For large amounts of data and when processing interactive (online) queries, you should use a reasonable amount of RAM (128 GB or more) so that the hot data subset fits in the page cache.
Even for data volumes of ~50 TB per server, using 128 GB of RAM significantly improves query performance compared to 64 GB.

Do not disable overcommit. The value of cat /proc/sys/vm/overcommit_memory should be 0 or 1. Run:

    $ echo 0 | sudo tee /proc/sys/vm/overcommit_memory

Use perf top to watch the time spent in the kernel for memory management.
Permanent huge pages also do not need to be allocated.
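
A minimal check of the overcommit setting and of time spent in the kernel on memory management (the kernel symbols shown by perf top differ between kernel versions):

    $ cat /proc/sys/vm/overcommit_memory
    $ sudo perf top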

Storage Subsystem

If your budget allows you to use SSDs, use SSDs.
If not, use HDDs. SATA HDDs at 7200 RPM will do.

Prefer a larger number of servers with local hard drives over a smaller number of servers with attached disk shelves.
However, disk shelves will work for storing archives that are rarely queried.

RAID

When using HDDs, you can combine them into RAID-10, RAID-5, RAID-6 or RAID-50.
For Linux, software RAID is better (with mdadm). We do not recommend using LVM.
When creating RAID-10, select the far layout.
If your budget allows, choose RAID-10.

If you have more than 4 disks, use RAID-6 (preferred) or RAID-50, instead of RAID-5.
When using RAID-5, RAID-6 or RAID-50, always increase stripe_cache_size, since the default value is usually not the best choice.

    $ echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size

Calculate the exact number from the number of devices and the block size, using the formula: 2 * num_devices * chunk_size_in_bytes / 4096.
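
For example, for a hypothetical array of 8 devices with a 1024 KB chunk size, the formula gives 2 * 8 * 1048576 / 4096 = 4096, which matches the value written above; the actual chunk size can be read with mdadm:

    $ sudo mdadm --detail /dev/md2 | grep 'Chunk Size'
    $ echo $((2 * 8 * 1024 * 1024 / 4096))
    4096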

A block size of 1024 KB is sufficient for all RAID configurations.
Never set the block size too small or too large.

You can use RAID-0 on SSD.
Regardless of RAID use, always use replication for data security.

Enable NCQ with a long queue. For HDD, choose the CFQ scheduler, and for SSD, choose noop. Don’t reduce the ‘readahead’ setting.
For HDD, enable the write cache.
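
A sketch of these settings, assuming a hypothetical HDD at /dev/sda and SSD at /dev/sdb (on newer kernels with the multi-queue block layer, the closest scheduler equivalents are mq-deadline and none):

    $ echo cfq | sudo tee /sys/block/sda/queue/scheduler      # HDD
    $ echo noop | sudo tee /sys/block/sdb/queue/scheduler     # SSD
    $ sudo blockdev --getra /dev/sda                          # check the current readahead
    $ sudo hdparm -W 1 /dev/sda                               # enable the HDD write cache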

File System

Ext4 is the most reliable option. Set the mount options noatime, nobarrier.
XFS is also suitable, but it hasn’t been as thoroughly tested with ClickHouse.
Most other file systems should also work fine. File systems with delayed allocation work better.
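
A sketch of a matching /etc/fstab entry, assuming the data lives on a hypothetical /dev/md2 array mounted at the default ClickHouse data path:

    /dev/md2  /var/lib/clickhouse  ext4  noatime,nobarrier  0  0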

Linux Kernel

Don’t use an outdated Linux kernel.

Network

If you are using IPv6, increase the size of the route cache.
The Linux kernel prior to 3.2 had a multitude of problems with the IPv6 implementation.
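
One way to increase the route cache is through sysctl; the exact value here is an assumption and should be tuned for your environment:

    $ sudo sysctl -w net.ipv6.route.max_size=16384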

Use at least a 10 Gbit network, if possible. 1 Gbit will also work, but it will be much worse for patching replicas with tens of terabytes of data, or for processing distributed queries with a large amount of intermediate data.

Huge Pages

If you are using an old Linux kernel, disable transparent huge pages. They interfere with memory allocators, which leads to significant performance degradation.
On newer Linux kernels, transparent huge pages are fine.

    $ echo 'madvise' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

Hypervisor configuration

If you are using OpenStack, set

    cpu_mode=host-passthrough

in nova.conf.

If you are using libvirt, set

    <cpu mode='host-passthrough'/>

in the XML configuration.

This is important for ClickHouse to be able to get correct information with the cpuid instruction.
Otherwise you may get Illegal instruction crashes when the hypervisor is run on old CPU models.
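
A quick check inside the guest that host CPU features are actually passed through (prebuilt ClickHouse binaries require SSE 4.2):

    $ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"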

ZooKeeper

You are probably already using ZooKeeper for other purposes. You can use the same installation of ZooKeeper, if it isn’t already overloaded.

It’s best to use a fresh version of ZooKeeper – 3.4.9 or later. The version in stable Linux distributions may be outdated.

You should never use manually written scripts to transfer data between different ZooKeeper clusters, because the result will be incorrect for sequential nodes. Never use the “zkcopy” utility for the same reason: https://github.com/ksprojects/zkcopy/issues/15

If you want to divide an existing ZooKeeper cluster into two, the correct way is to increase the number of its replicas and then reconfigure it as two independent clusters.

Do not run ZooKeeper on the same servers as ClickHouse, because ZooKeeper is very sensitive to latency and ClickHouse may utilize all available system resources.

With the default settings, ZooKeeper is a time bomb:

The ZooKeeper server won’t delete files from old snapshots and logs when using the default configuration (see autopurge), and this is the responsibility of the operator.

This bomb must be defused.
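
The autopurge settings in the configuration below are what defuses it; at a minimum, something like the following in zoo.cfg enables automatic cleanup of old snapshots and transaction logs:

    autopurge.snapRetainCount=10
    autopurge.purgeInterval=1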

The ZooKeeper (3.5.1) configuration below is used in the Yandex.Metrica production environment as of May 20, 2017:

zoo.cfg:

    # http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html

    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    # This value is not quite motivated
    initLimit=300
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=10

    maxClientCnxns=2000

    # It is the maximum value that client may request and the server will accept.
    # It is Ok to have high maxSessionTimeout on server to allow clients to work with high session timeout if they want.
    # But we request session timeout of 30 seconds by default (you can change it with session_timeout_ms in ClickHouse config).
    maxSessionTimeout=60000000
    # the directory where the snapshot is stored.
    dataDir=/opt/zookeeper/{{ cluster['name'] }}/data
    # Place the dataLogDir to a separate physical disc for better performance
    dataLogDir=/opt/zookeeper/{{ cluster['name'] }}/logs

    autopurge.snapRetainCount=10
    autopurge.purgeInterval=1

    # To avoid seeks ZooKeeper allocates space in the transaction log file in
    # blocks of preAllocSize kilobytes. The default block size is 64M. One reason
    # for changing the size of the blocks is to reduce the block size if snapshots
    # are taken more often. (Also, see snapCount).
    preAllocSize=131072

    # Clients can submit requests faster than ZooKeeper can process them,
    # especially if there are a lot of clients. To prevent ZooKeeper from running
    # out of memory due to queued requests, ZooKeeper will throttle clients so that
    # there is no more than globalOutstandingLimit outstanding requests in the
    # system. The default limit is 1,000. ZooKeeper logs transactions to a
    # transaction log. After snapCount transactions are written to a log file a
    # snapshot is started and a new transaction log file is started. The default
    # snapCount is 10,000.
    snapCount=3000000

    # If this option is defined, requests will be logged to a trace file named
    # traceFile.year.month.day.
    #traceFile=

    # Leader accepts client connections. Default value is "yes". The leader machine
    # coordinates updates. For higher update throughput at the slight expense of
    # read throughput the leader can be configured to not accept clients and focus
    # on coordination.
    leaderServes=yes

    standaloneEnabled=false
    dynamicConfigFile=/etc/zookeeper-{{ cluster['name'] }}/conf/zoo.cfg.dynamic

Java version:

    openjdk 11.0.5-shenandoah 2019-10-15
    OpenJDK Runtime Environment (build 11.0.5-shenandoah+10-adhoc.heretic.src)
    OpenJDK 64-Bit Server VM (build 11.0.5-shenandoah+10-adhoc.heretic.src, mixed mode)

JVM parameters:

    NAME=zookeeper-{{ cluster['name'] }}
    ZOOCFGDIR=/etc/$NAME/conf

    # TODO this is really ugly
    # How to find out which jars are needed?
    # It seems that log4j requires the log4j.properties file to be in the classpath
    CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper-3.6.2/lib/audience-annotations-0.5.0.jar:/usr/share/zookeeper-3.6.2/lib/commons-cli-1.2.jar:/usr/share/zookeeper-3.6.2/lib/commons-lang-2.6.jar:/usr/share/zookeeper-3.6.2/lib/jackson-annotations-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-core-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-databind-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/javax.servlet-api-3.1.0.jar:/usr/share/zookeeper-3.6.2/lib/jetty-http-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-io-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-security-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-server-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-servlet-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-util-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jline-2.14.6.jar:/usr/share/zookeeper-3.6.2/lib/json-simple-1.1.1.jar:/usr/share/zookeeper-3.6.2/lib/log4j-1.2.17.jar:/usr/share/zookeeper-3.6.2/lib/metrics-core-3.2.5.jar:/usr/share/zookeeper-3.6.2/lib/netty-buffer-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-codec-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-handler-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-resolver-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-epoll-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-unix-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_common-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_hotspot-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_servlet-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-api-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-log4j12-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/snappy-java-1.1.7.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-jute-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-prometheus-metrics-3.6.2.jar:/usr/share/zookeeper-3.6.2/etc"

    ZOOCFG="$ZOOCFGDIR/zoo.cfg"
    ZOO_LOG_DIR=/var/log/$NAME
    USER=zookeeper
    GROUP=zookeeper
    PIDDIR=/var/run/$NAME
    PIDFILE=$PIDDIR/$NAME.pid
    SCRIPTNAME=/etc/init.d/$NAME
    JAVA=/usr/local/jdk-11/bin/java
    ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
    JMXLOCALONLY=false
    JAVA_OPTS="-Xms{{ cluster.get('xms','128M') }} \
        -Xmx{{ cluster.get('xmx','1G') }} \
        -Xlog:safepoint,gc*=info,age*=debug:file=/var/log/$NAME/zookeeper-gc.log:time,level,tags:filecount=16,filesize=16M \
        -verbose:gc \
        -XX:+UseG1GC \
        -Djute.maxbuffer=8388608 \
        -XX:MaxGCPauseMillis=50"

Salt init:

    description "zookeeper-{{ cluster['name'] }} centralized coordination service"

    start on runlevel [2345]
    stop on runlevel [!2345]

    respawn

    limit nofile 8192 8192

    pre-start script
        [ -r "/etc/zookeeper-{{ cluster['name'] }}/conf/environment" ] || exit 0
        . /etc/zookeeper-{{ cluster['name'] }}/conf/environment
        [ -d $ZOO_LOG_DIR ] || mkdir -p $ZOO_LOG_DIR
        chown $USER:$GROUP $ZOO_LOG_DIR
    end script

    script
        . /etc/zookeeper-{{ cluster['name'] }}/conf/environment
        [ -r /etc/default/zookeeper ] && . /etc/default/zookeeper
        if [ -z "$JMXDISABLE" ]; then
            JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY"
        fi
        exec start-stop-daemon --start -c $USER --exec $JAVA --name zookeeper-{{ cluster['name'] }} \
            -- -cp $CLASSPATH $JAVA_OPTS -Dzookeeper.log.dir=${ZOO_LOG_DIR} \
            -Dzookeeper.root.logger=${ZOO_LOG4J_PROP} $ZOOMAIN $ZOOCFG
    end script