Load Balancer + ReplicatedMergeTree + Distributed + ZooKeeper
2 shards × 2 replicas
Hostname | IP | Shard | Replica |
---|---|---|---|
clickhouse1 | 192.168.0.13 | shard1 | replica1 |
clickhouse2 | 192.168.0.14 | shard1 | replica2 |
clickhouse3 | 192.168.0.15 | shard2 | replica1 |
clickhouse4 | 192.168.0.100 | shard2 | replica2 |
We plan for 4 nodes, 2 shards, and 2 replicas per shard. The replicas of shard 1 live on hosts clickhouse1 and clickhouse2; the replicas of shard 2 live on hosts clickhouse3 and clickhouse4.
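As a quick sanity sketch (host names and IPs taken from the planning table above), the layout can be expressed and validated in a few lines of Python:

```python
# Cluster layout from the planning table: hostname -> (shard, replica, ip).
TOPOLOGY = {
    "clickhouse1": ("shard1", "replica1", "192.168.0.13"),
    "clickhouse2": ("shard1", "replica2", "192.168.0.14"),
    "clickhouse3": ("shard2", "replica1", "192.168.0.15"),
    "clickhouse4": ("shard2", "replica2", "192.168.0.100"),
}

shards = {shard for shard, _, _ in TOPOLOGY.values()}
assert len(shards) == 2  # 2 shards in total
for shard in shards:
    replicas = [r for s, r, _ in TOPOLOGY.values() if s == shard]
    assert len(replicas) == 2  # 2 replicas per shard
print("topology ok:", sorted(shards))
```

Keeping this mapping written down in one place helps when editing the per-host configuration files below, since each host's config must agree with it.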
The official recommendation is to deploy the ZooKeeper cluster separately from the ClickHouse cluster, so that resource contention does not destabilize either service.
Here we deploy the ZooKeeper cluster on Kubernetes; the deployment tutorial is:
We deploy the ClickHouse cluster itself with Docker.
# Start a temporary container so we can copy the default configuration out of the image
docker run -d --name clickhouse-server --ulimit nofile=262144:262144 --volume=/data/clickhouse/:/var/lib/clickhouse yandex/clickhouse-server
mkdir -p /etc/clickhouse-server
# Copy the default config to the host; we will edit it there and mount it back in later
docker cp clickhouse-server:/etc/clickhouse-server/ /etc/
Find the following settings in the server config, uncomment and modify as needed (0.0.0.0 accepts connections from any address, which the other cluster nodes require):
<listen_host>::1</listen_host>
<listen_host>0.0.0.0</listen_host>
<listen_host>127.0.0.1</listen_host>
Locate where metrika.xml is referenced and point the include_from node at the actual file being included:
<include_from>/etc/clickhouse-server/metrika.xml</include_from>
Add the shard and replica information:
<remote_servers>
    <cluster_2s_2r>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>192.168.0.13</host>
                <port>9000</port>
                <user>default</user>
                <password></password>
            </replica>
            <replica>
                <host>192.168.0.14</host>
                <port>9000</port>
                <user>default</user>
                <password></password>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>192.168.0.15</host>
                <port>9000</port>
                <user>default</user>
                <password></password>
            </replica>
            <replica>
                <host>192.168.0.100</host>
                <port>9000</port>
                <user>default</user>
                <password></password>
            </replica>
        </shard>
    </cluster_2s_2r>
</remote_servers>
Sync these files to the other machines.
The metrika.xml file configures the number of shards and replicas and which machine fills each role. Every machine's copy differs slightly; the full file is as follows:
<yandex>
    <clickhouse_remote_servers>
        <cluster_2s_2r>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.0.13</host>
                    <port>9000</port>
                    <user>default</user>
                    <password></password>
                </replica>
                <replica>
                    <host>192.168.0.14</host>
                    <port>9000</port>
                    <user>default</user>
                    <password></password>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.0.15</host>
                    <port>9000</port>
                    <user>default</user>
                    <password></password>
                </replica>
                <replica>
                    <host>192.168.0.100</host>
                    <port>9000</port>
                    <user>default</user>
                    <password></password>
                </replica>
            </shard>
        </cluster_2s_2r>
    </clickhouse_remote_servers>
    <zookeeper>
        <node>
            <host>192.168.20.35</host>
            <port>2181</port>
        </node>
    </zookeeper>
    <macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>192.168.0.13</replica>
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
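Because the closing tags in this file are easy to get wrong when copying it between machines, a small sanity check (standard library only, shown here against a trimmed inline copy of the remote_servers section) can confirm the file is well-formed and that the shard/replica counts match the plan:

```python
import xml.etree.ElementTree as ET

# Trimmed inline copy of metrika.xml's remote_servers section, for illustration.
METRIKA = """
<yandex>
  <clickhouse_remote_servers>
    <cluster_2s_2r>
      <shard>
        <replica><host>192.168.0.13</host><port>9000</port></replica>
        <replica><host>192.168.0.14</host><port>9000</port></replica>
      </shard>
      <shard>
        <replica><host>192.168.0.15</host><port>9000</port></replica>
        <replica><host>192.168.0.100</host><port>9000</port></replica>
      </shard>
    </cluster_2s_2r>
  </clickhouse_remote_servers>
</yandex>
"""

root = ET.fromstring(METRIKA)  # raises ParseError if any tag is unbalanced
shards = root.findall(".//cluster_2s_2r/shard")
assert len(shards) == 2, "expected 2 shards"
for shard in shards:
    assert len(shard.findall("replica")) == 2, "expected 2 replicas per shard"
print("metrika.xml structure ok")
```

To check the real file on a host, replace METRIKA with the contents of /etc/clickhouse-server/metrika.xml.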
Only the macros section of each machine's metrika.xml needs to change:
clickhouse1, which stores shard 1's data:
<yandex>
...
...
    <macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>192.168.0.13</replica>
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
...
</yandex>
clickhouse2, which stores the replica of shard 1's data, identical to clickhouse1:
<yandex>
...
...
    <macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>192.168.0.14</replica>
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
...
</yandex>
clickhouse3, which stores shard 2's data:
<yandex>
...
...
    <macros>
        <layer>01</layer>
        <shard>02</shard>
        <replica>192.168.0.15</replica>
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
...
</yandex>
clickhouse4, which stores the replica of shard 2's data, identical to clickhouse3 (note that its shard macro must be 02, the same as clickhouse3):
<yandex>
...
...
    <macros>
        <layer>01</layer>
        <shard>02</shard>
        <replica>192.168.0.100</replica>
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
...
</yandex>
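These macros matter because ReplicatedMergeTree table definitions conventionally embed them in the ZooKeeper path, e.g. ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/table_name', '{replica}'). A sketch of the expansion (the path layout here is the common convention, not mandated; adapt it to your own DDL) shows why replicas of the same shard must share the shard macro but differ in replica:

```python
# Per-host macros, as configured in the four metrika.xml files above.
MACROS = {
    "clickhouse1": {"layer": "01", "shard": "01", "replica": "192.168.0.13"},
    "clickhouse2": {"layer": "01", "shard": "01", "replica": "192.168.0.14"},
    "clickhouse3": {"layer": "01", "shard": "02", "replica": "192.168.0.15"},
    "clickhouse4": {"layer": "01", "shard": "02", "replica": "192.168.0.100"},
}

def zk_path(host: str, table: str) -> tuple:
    """Expand the conventional ReplicatedMergeTree arguments for one host."""
    m = MACROS[host]
    return (f"/clickhouse/tables/{m['layer']}-{m['shard']}/{table}", m["replica"])

# Replicas of the same shard expand to one shared ZooKeeper path with distinct
# replica names -- this is exactly how they find each other and replicate.
assert zk_path("clickhouse1", "events")[0] == zk_path("clickhouse2", "events")[0]
assert zk_path("clickhouse1", "events")[1] != zk_path("clickhouse2", "events")[1]
# Different shards expand to different paths, so their data stays separate.
assert zk_path("clickhouse1", "events")[0] != zk_path("clickhouse3", "events")[0]
print(zk_path("clickhouse1", "events"))
```

If clickhouse4's shard macro were mistakenly left at 01, it would join shard 1's replication group instead of shard 2's, which is why the per-host macros deserve a careful check.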
Start the ClickHouse server container on each of the four hosts (the image below comes from an internal mirror; substitute yandex/clickhouse-server if you do not use one):
docker run -d \
--name clickhouse \
--ulimit nofile=262144:262144 \
--volume=/data/clickhouse:/var/lib/clickhouse \
--volume=/etc/clickhouse-server/:/etc/clickhouse-server/ \
--add-host clickhouse1:192.168.0.13 \
--add-host clickhouse2:192.168.0.14 \
--add-host clickhouse3:192.168.0.15 \
--add-host clickhouse4:192.168.0.100 \
--hostname $(hostname) \
-p 9000:9000 \
-p 8123:8123 \
-p 9009:9009 \
mirror.corp.wuyacapital.com/common/clickhouse-server
Use clickhouse-client to log in to the cluster and inspect it.
# Log in to the cluster
docker run -it --rm --add-host clickhouse1:192.168.0.13 --add-host clickhouse2:192.168.0.14 --add-host clickhouse3:192.168.0.15 --add-host clickhouse4:192.168.0.100 yandex/clickhouse-client --host clickhouse1 --port 9000
# Login succeeded
ClickHouse client version 22.1.3.7 (official build).
Connecting to clickhouse1:9000 as user default.
Connected to ClickHouse server version 22.1.3 revision 54455.
clickhouse1 :)
Check the cluster:
clickhouse1 :) select * from system.clusters
SELECT *
FROM system.clusters
Query id: 9f13df9c-862f-44f1-8abd-bf47cdb4eb9c
┌─cluster───────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ cluster_2s_2r │ 1 │ 1 │ 1 │ 192.168.0.13 │ 192.168.0.13 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster_2s_2r │ 1 │ 1 │ 2 │ 192.168.0.14 │ 192.168.0.14 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster_2s_2r │ 2 │ 1 │ 1 │ 192.168.0.15 │ 192.168.0.15 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster_2s_2r │ 2 │ 1 │ 2 │ 192.168.0.100 │ 192.168.0.100 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
└───────────────┴───────────┴──────────────┴─────────────┴─────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
4 rows in set. Elapsed: 0.005 sec.
As shown, the cluster was created successfully and the shards and replicas are configured as planned.