ClickHouse Sharded Cluster Deployment with Two Replicas

Environment

  • Every machine already has standalone ClickHouse installed. The official recommendation is to install via rpm; alternatively, install one machine and copy the files over to the rest.
  • CentOS Linux release 7.8.2003 (Core)
  • ClickHouse 20.11.6.6
  • Firewall and SELinux are disabled
  • Note: this example sets internal_replication to false. The official recommendation is true plus replicated tables, but that requires deploying ZooKeeper, which this example does not; we only test the plain multi-replica scenario.
Shard     Replica 01           Replica 02
Shard 01  192.168.66.101:9000  192.168.66.104:9000
Shard 02  192.168.66.102:9000  192.168.66.105:9000
Shard 03  192.168.66.103:9000  192.168.66.106:9000

Limitations

  • With a single replica and multiple shards, the data is split across the shards, but if any one shard machine goes down, the whole cluster becomes unavailable.
  • The data therefore needs an extra copy for high availability: even if one copy is unavailable, the other can still be read. The HA options are as follows.

Option 1 [used in this example]

Each shard server is backed by a dedicated replica machine for high availability.

  • Pros: performance can run at full capacity, and both machines are used (data is fetched from a randomly chosen replica)
  • Cons: comparatively machine-hungry



Option 2

Install two ClickHouse instances on each server on different ports, replicating pairwise across servers, so losing any single server does not affect the cluster (a config sketch follows the pros/cons below).

  • Pros: saves machines
  • Cons: resources cannot be used at full capacity; the two instances contend for resources
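
A minimal sketch of what the second instance's override config might look like; the ports and paths here are assumptions, chosen only to avoid clashing with the first instance:

<!-- Second instance, e.g. /etc/clickhouse-server2/config.xml (hypothetical ports/paths) -->
<yandex>
    <tcp_port>9001</tcp_port>                           <!-- instance 1 uses 9000 -->
    <http_port>8124</http_port>                         <!-- instance 1 uses 8123 -->
    <interserver_http_port>9010</interserver_http_port> <!-- instance 1 uses 9009 -->
    <path>/clickhouse2/data/</path>                     <!-- separate data directory -->
    <logger>
        <log>/clickhouse2/log/clickhouse-server.log</log>
    </logger>
</yandex>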



Configure /etc/hosts [on all machines]

echo "192.168.66.101 yqtest1" >> /etc/hosts
echo "192.168.66.102 yqtest2" >> /etc/hosts
echo "192.168.66.103 yqtest3" >> /etc/hosts
echo "192.168.66.104 yqtest4" >> /etc/hosts
echo "192.168.66.105 yqtest5" >> /etc/hosts
echo "192.168.66.106 yqtest6" >> /etc/hosts

Installation [on all machines]

Installation steps are omitted; see the previous article. The main parameters to change in config.xml are the following (log and data paths moved under /clickhouse):

<logger>
    <log>/clickhouse/log/clickhouse-server.log</log>
    <errorlog>/clickhouse/log/clickhouse-server.err.log</errorlog>
</logger>
<path>/clickhouse/data/</path>
<tmp_path>/clickhouse/data/tmp/</tmp_path>
<user_files_path>/clickhouse/data/user_files/</user_files_path>
<access_control_path>/clickhouse/data/access/</access_control_path>
<format_schema_path>/clickhouse/data/format_schemas/</format_schema_path>
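
Before starting the server with these paths, the directories must exist and belong to the service user. A minimal sketch, assuming ClickHouse runs as the clickhouse user (the default for rpm installs):

# Create the custom log/data directories and hand them over to the clickhouse user
mkdir -p /clickhouse/log /clickhouse/data/{tmp,user_files,access,format_schemas}
chown -R clickhouse:clickhouse /clickhouse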

Cluster configuration

The cluster configuration takes effect immediately once it is added to the configuration file.

Configure shard and replica information [required on all machines]

You can configure one machine and copy the file to the others, or configure each machine separately.

vi /etc/clickhouse-server/config.d/metrika.xml



<yandex>
    <!-- Cluster definition -->
    <clickhouse_remote_servers>
        <!-- Cluster name: 3 shards, 2 replicas -->
        <ckcluster_3shards_2replicas>
            <!-- Shard 01 -->
            <shard>
                <internal_replication>false</internal_replication>
                <!-- Replica 01 -->
                <replica>
                    <host>yqtest1</host>
                    <port>9000</port>
                </replica>
                <!-- Replica 02 -->
                <replica>
                    <host>yqtest4</host>
                    <port>9000</port>
                </replica>
            </shard>
            <!-- Shard 02 -->
            <shard>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>yqtest2</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>yqtest5</host>
                    <port>9000</port>
                </replica>
            </shard>
            <!-- Shard 03 -->
            <shard>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>yqtest3</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>yqtest6</host>
                    <port>9000</port>
                </replica>
            </shard>
        </ckcluster_3shards_2replicas>
    </clickhouse_remote_servers>
    <!-- Compression settings -->
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
Note: the internal_replication setting matters a great deal when shards have more than one replica.

  • Non-replicated tables (plain MergeTree)
    false: writing to the distributed table inserts the data into the local table on every replica, keeping the replicas in sync. Writing to a local table lands only on that server, so different servers return different query results.
    true: writing to the distributed table inserts each row into only one replica's local table with no synchronization, so the data diverges; officially not recommended.
  • Replicated tables (ReplicatedMergeTree)
    false: writing to the distributed table inserts the sharded data into every local table, producing duplicates; the replicated table removes the duplicates automatically, but this costs performance.
    true: writing to the distributed table (or a local table) writes one copy to a replica of the shard, which is then synced to the other replicas automatically. This is the official recommendation; a sketch of this setup follows.
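
For reference, a minimal sketch of the recommended path (internal_replication = true with a replicated table). It is not used in this article because it requires ZooKeeper; the table name test_replicated is made up for illustration, and the {shard} and {replica} macros are an assumption that would have to be defined per host in the server config:

-- Assumes a ZooKeeper ensemble is configured in config.xml (not deployed in this article)
-- and that {shard}/{replica} macros are defined on every host
CREATE TABLE yqtest.test_replicated
(
    CounterID UInt32,
    StartDate Date,
    Sign      Int8
)
ENGINE = ReplicatedCollapsingMergeTree('/clickhouse/tables/{shard}/test_replicated', '{replica}', Sign)
PARTITION BY toYYYYMM(StartDate)
ORDER BY (CounterID, StartDate);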

Include the configuration in the config file [on all servers]

vi /etc/clickhouse-server/config.xml
Search for "metrika" and add the include line below it:


<include_from>/etc/clickhouse-server/config.d/metrika.xml</include_from>
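
The include works through incl attributes: any element in config.xml whose incl value matches a top-level element inside metrika.xml is substituted from that file. The hookup looks roughly like the following sketch; verify the exact elements and attribute values in your own config.xml:

<remote_servers incl="clickhouse_remote_servers" />
<compression incl="clickhouse_compression" />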

View cluster information [from any node]

  • You can see that each shard has two replicas (running the query on every node verifies that they all see the same cluster)
clickhouse-client -h 192.168.66.101 --port 9000 --user default --query "select * from system.clusters where cluster = 'ckcluster_3shards_2replicas'";
clickhouse-client -h 192.168.66.102 --port 9000 --user default --query "select * from system.clusters where cluster = 'ckcluster_3shards_2replicas'";
clickhouse-client -h 192.168.66.103 --port 9000 --user default --query "select * from system.clusters where cluster = 'ckcluster_3shards_2replicas'";
clickhouse-client -h 192.168.66.104 --port 9000 --user default --query "select * from system.clusters where cluster = 'ckcluster_3shards_2replicas'";
clickhouse-client -h 192.168.66.105 --port 9000 --user default --query "select * from system.clusters where cluster = 'ckcluster_3shards_2replicas'";
clickhouse-client -h 192.168.66.106 --port 9000 --user default --query "select * from system.clusters where cluster = 'ckcluster_3shards_2replicas'";
  • The is_local column should match the node's own IP. Take it seriously if it does not; otherwise queries against the distributed table may return inconsistent data. See the query below.
  • The extra test_* clusters that also show up come from the defaults in clickhouse-server/config.xml; they can be removed by editing the remote_servers section there.
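
A narrower query over system.clusters makes the is_local check easy to eyeball (these column names exist in system.clusters):

SELECT cluster, shard_num, replica_num, host_name, host_address, port, is_local
FROM system.clusters
WHERE cluster = 'ckcluster_3shards_2replicas';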

Data import test

Import the test data [101]

Official test dataset download:
[405 MB] https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
It only needs to be uploaded to one server; this example puts it on 101. The database must be created on all nodes beforehand.

# Decompress; 2.5 GB after decompression
unxz visits_v1.tsv.xz

-- Create the database [all nodes]
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS yqtest"

-- Create the table [101]
clickhouse-client --query "CREATE TABLE yqtest.visits_v1 ( CounterID UInt32,  StartDate Date,  Sign Int8,  IsNew UInt8,  VisitID UInt64,  UserID UInt64,  StartTime DateTime,  Duration UInt32,  UTCStartTime DateTime,  PageViews Int32,  Hits Int32,  IsBounce UInt8,  Referer String,  StartURL String,  RefererDomain String,  StartURLDomain String,  EndURL String,  LinkURL String,  IsDownload UInt8,  TraficSourceID Int8,  SearchEngineID UInt16,  SearchPhrase String,  AdvEngineID UInt8,  PlaceID Int32,  RefererCategories Array(UInt16),  URLCategories Array(UInt16),  URLRegions Array(UInt32),  RefererRegions Array(UInt32),  IsYandex UInt8,  GoalReachesDepth Int32,  GoalReachesURL Int32,  GoalReachesAny Int32,  SocialSourceNetworkID UInt8,  SocialSourcePage String,  MobilePhoneModel String,  ClientEventTime DateTime,  RegionID UInt32,  ClientIP UInt32,  ClientIP6 FixedString(16),  RemoteIP UInt32,  RemoteIP6 FixedString(16),  IPNetworkID UInt32,  SilverlightVersion3 UInt32,  CodeVersion UInt32,  ResolutionWidth UInt16,  ResolutionHeight UInt16,  UserAgentMajor UInt16,  UserAgentMinor UInt16,  WindowClientWidth UInt16,  WindowClientHeight UInt16,  SilverlightVersion2 UInt8,  SilverlightVersion4 UInt16,  FlashVersion3 UInt16,  FlashVersion4 UInt16,  ClientTimeZone Int16,  OS UInt8,  UserAgent UInt8,  ResolutionDepth UInt8,  FlashMajor UInt8,  FlashMinor UInt8,  NetMajor UInt8,  NetMinor UInt8,  MobilePhone UInt8,  SilverlightVersion1 UInt8,  Age UInt8,  Sex UInt8,  Income UInt8,  JavaEnable UInt8,  CookieEnable UInt8,  JavascriptEnable UInt8,  IsMobile UInt8,  BrowserLanguage UInt16,  BrowserCountry UInt16,  Interests UInt16,  Robotness UInt8,  GeneralInterests Array(UInt16),  Params Array(String),  Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime,  Price Int64,  OrderID String, CurrencyID UInt32),  WatchIDs Array(UInt64),  ParamSumPrice Int64,  ParamCurrency FixedString(3),  ParamCurrencyID UInt16,  ClickLogID UInt64,  ClickEventID Int32,  ClickGoodEvent Int32,  ClickEventTime DateTime,  ClickPriorityID Int32,  ClickPhraseID Int32,  ClickPageID Int32,  ClickPlaceID Int32,  ClickTypeID Int32,  ClickResourceID Int32,  ClickCost UInt32,  ClickClientIP UInt32,  ClickDomainID UInt32,  ClickURL String,  ClickAttempt UInt8,  ClickOrderID UInt32,  ClickBannerID UInt32,  ClickMarketCategoryID UInt32,  ClickMarketPP UInt32,  ClickMarketCategoryName String,  ClickMarketPPName String,  ClickAWAPSCampaignName String,  ClickPageName String,  ClickTargetType UInt16,  ClickTargetPhraseID UInt64,  ClickContextType UInt8,  ClickSelectType Int8,  ClickOptions String,  ClickGroupBannerID Int32,  OpenstatServiceName String,  OpenstatCampaignID String,  OpenstatAdID String,  OpenstatSourceID String,  UTMSource String,  UTMMedium String,  UTMCampaign String,  UTMContent String,  UTMTerm String,  FromTag String,  HasGCLID UInt8,  FirstVisit DateTime,  PredLastVisit Date,  LastVisit Date,  TotalVisits UInt32,  TraficSource    Nested(ID Int8,  SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String),  Attendance FixedString(16),  CLID UInt32,  YCLID UInt64,  NormalizedRefererHash UInt64,  SearchPhraseHash UInt64,  RefererDomainHash UInt64,  NormalizedStartURLHash UInt64,  StartURLDomainHash UInt64,  NormalizedEndURLHash UInt64,  TopLevelDomain UInt64,  URLScheme UInt64,  OpenstatServiceNameHash UInt64,  OpenstatCampaignIDHash UInt64,  OpenstatAdIDHash UInt64,  OpenstatSourceIDHash UInt64,  UTMSourceHash UInt64,  UTMMediumHash 
UInt64,  UTMCampaignHash UInt64,  UTMContentHash UInt64,  UTMTermHash UInt64,  FromHash UInt64,  WebVisorEnabled UInt8,  WebVisorActivity UInt32,  ParsedParams    Nested(Key1 String,  Key2 String,  Key3 String,  Key4 String, Key5 String, ValueDouble    Float64),  Market Nested(Type UInt8, GoalID UInt32, OrderID String,  OrderPrice Int64,  PP UInt32,  DirectPlaceID UInt32,  DirectOrderID  UInt32,  DirectBannerID UInt32,  GoodID String, GoodName String, GoodQuantity Int32,  GoodPrice Int64),  IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"

-- Import the data [101]
cat visits_v1.tsv | clickhouse-client --query "INSERT INTO yqtest.visits_v1 FORMAT TSV" --max_insert_block_size=50000

-- Check the imported row count; since the import ran only on 101, the data exists only there [101]
[root@yqtest1 ~]# clickhouse-client --query "select count(*) from yqtest.visits_v1"
1681077

Create the local and distributed tables [all machines]

Create the local table

clickhouse-client --query "CREATE TABLE yqtest.test_local ( CounterID UInt32,  StartDate Date,  Sign Int8,  IsNew UInt8,  VisitID UInt64,  UserID UInt64,  StartTime DateTime) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"

Create the distributed table

clickhouse-client --query "create table yqtest.test_all as yqtest.test_local ENGINE = Distributed(ckcluster_3shards_2replicas,yqtest,test_local,rand())"

Insert data [101]

# Current tables (annotated with their row counts)
yqtest1 :) show tables;
┌─name───────┐
│ test_all   │ # 0
│ test_local │ # 0
│ visits_v1  │ # 1681077
└────────────┘

# Insert data through the distributed table
insert into yqtest.test_all select CounterID,StartDate,Sign,IsNew,VisitID,UserID,StartTime from yqtest.visits_v1 limit 30000;
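
Worth knowing: inserts through a Distributed table are asynchronous by default; each shard's portion is first queued under the Distributed table's data directory and shipped in the background. To make the insert return only after every shard has actually received its data, the insert_distributed_sync setting can be enabled:

-- Make distributed inserts synchronous for this session
SET insert_distributed_sync = 1;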

Check the data distribution

  • Querying the local tables shows the data scattered across the 3 machines, and each replica holds the same data as its paired machine. For a single query that checks every replica at once, see the sketch after the table below.
  • Querying the distributed table returns the same total count from any node.
-- Run on each node
SELECT count(*) FROM yqtest.test_local
SELECT count(*) FROM yqtest.test_all
Server  Local table [test_local]  Distributed table [test_all]
101     10008                     30000
104     10008                     30000
102     9924                      30000
105     9924                      30000
103     10068                     30000
106     10068                     30000
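
Instead of logging in to six machines, the per-replica counts can be collected in one query with the clusterAllReplicas table function (available in this ClickHouse generation; verify on your exact version):

-- One row per replica: which host holds how many local rows
SELECT hostName() AS host, count() AS rows
FROM clusterAllReplicas('ckcluster_3shards_2replicas', yqtest, test_local)
GROUP BY host
ORDER BY host;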

Failover test

Simulate a failure of [103] by running poweroff on it directly.

  • Because replicas exist, losing any single node does not make the cluster unavailable, and the distributed table can still be queried. Testing showed, however, that the first query against the distributed table after the outage is slow while data is pulled together; subsequent queries are fine.
  • The cluster remains usable.
  • On why the first query against the distributed table is slow after an outage:



The logs show that merge/aggregation operations were performed. Connection failover can also add latency on the first attempt; the relevant settings can be inspected as below.
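
How quickly a dead replica gets skipped is governed by the connection-failover settings (these setting names exist in system.settings):

-- Settings that influence replica selection and failover timeouts
SELECT name, value
FROM system.settings
WHERE name IN ('load_balancing', 'connect_timeout_with_failover_ms', 'skip_unavailable_shards');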


-- Run on each node
SELECT count(*) FROM yqtest.test_local
SELECT count(*) FROM yqtest.test_all
Server  Local table [test_local]  Distributed table [test_all]
101     10008                     30000
104     10008                     30000
102     9924                      30000
105     9924                      30000
103     down                      down
106     10068                     30000

Writing to the distributed table while a node is down

With [103] down, more data was written through the distributed table; below is the state while it is still down, and then [103] after it recovers.

insert into yqtest.test_all select CounterID,StartDate,Sign,IsNew,VisitID,UserID,StartTime from yqtest.visits_v1 limit 300;
Server  Local table [test_local]  Distributed table [test_all]
101     10100                     30300
104     10100                     30300
102     10088                     30300
105     10088                     30300
103     down                      down
106     10112                     30300

After [103] recovers

Server  Local table [test_local]  Distributed table [test_all]
101     10100                     30300
104     10100                     30300
102     10088                     30300
105     10088                     30300
103     10112                     30300   # data syncs automatically after recovery
106     10112                     30300
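
The automatic catch-up works because asynchronous distributed inserts destined for the dead node were queued on disk under the Distributed table's directory on the sending servers and are shipped once the node comes back. The flush can also be forced by hand:

-- Push any queued blocks of the Distributed table to their shards now
SYSTEM FLUSH DISTRIBUTED yqtest.test_all;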
