ClickHouse Cluster Deployment and Installation

1. Environment Preparation

(1) Two Alibaba Cloud servers; cluster node information:

192.168.5.13 ch01

192.168.5.14 ch02

(2) Modify /etc/cloud/cloud.cfg (all nodes)

[root@iZbp1fsk0p3opmtlo52u91Z ~]# vim /etc/cloud/cloud.cfg
Comment out the manage_etc_hosts setting so cloud-init stops rewriting /etc/hosts on boot:
# manage_etc_hosts: localhost

(3) Raise open-file and process limits (all nodes)

Append the following to the end of both /etc/security/limits.conf and /etc/security/limits.d/20-nproc.conf.
Note: if 20-nproc.conf does not exist, create it.

[root@ch01 conf]# vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
* soft nproc 131072
* hard nproc 131072

[root@ch01 conf]# vim /etc/security/limits.d/20-nproc.conf
* soft nofile 65535
* hard nofile 65535
* soft nproc 131072
* hard nproc 131072
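The new limits only apply to fresh login sessions. After logging in again, you can verify them with a quick check (a minimal sketch, assuming a POSIX-compatible shell):

```shell
# Verify the per-process limits in a fresh login session.
# "nofile" corresponds to ulimit -n, "nproc" to ulimit -u.
echo "open files: $(ulimit -n)"
echo "max user processes: $(ulimit -u)"
```

Both values should reflect the numbers written to the limits files above.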

(4) Disable SELinux (all nodes)

Set SELINUX=disabled in /etc/selinux/config, then reboot for it to take effect:

vim /etc/selinux/config
SELINUX=disabled

(5) Stop the firewall (all nodes)

systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service   # keep it off across the reboot in step (10)

(6) Install dependencies (all nodes)

Run as root:
yum install -y libtool
yum install -y *unixODBC*

(7) Verify SSE 4.2 instruction-set support (all nodes)

ClickHouse's vectorized execution relies on the SSE 4.2 instruction set, so verify that the CPU supports it:

[root@ch02 ~]# grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
SSE 4.2 supported


(8) Set the hostname (all nodes)

192.168.5.13
[root@iZbp1fsk0p3opmtlo52u91Z ~]# hostnamectl set-hostname ch01
[root@iZbp1fsk0p3opmtlo52u91Z ~]# exit
logout

Connection closed.
Connecting to host...
Connected.
Last login: Wed Jun 29 09:53:30 2022 from 117.64.249.192

Welcome to Alibaba Cloud Elastic Compute Service !

[root@ch01 ~]# 
192.168.5.14
[root@iZbp1fsk0p3opmtlo52u92Z ~]# hostnamectl set-hostname ch02
[root@iZbp1fsk0p3opmtlo52u92Z ~]# exit
logout

Connection closed.
Connecting to host...
Connected.
Last login: Wed Jun 29 09:53:28 2022 from 117.64.249.192

Welcome to Alibaba Cloud Elastic Compute Service !

[root@ch02 ~]# 

(9) Configure /etc/hosts (all nodes)

[root@ch01 ~]# vim /etc/hosts
# 192.168.5.13  iZbp1fsk0p3opmtlo52u91Z iZbp1fsk0p3opmtlo52u91Z
192.168.5.13  ch01
192.168.5.14  ch02

[root@ch02 ~]# vim /etc/hosts
# 192.168.5.14  iZbp1fsk0p3opmtlo52u92Z iZbp1fsk0p3opmtlo52u92Z
192.168.5.13  ch01
192.168.5.14  ch02
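Editing /etc/hosts by hand on every node gets error-prone as the cluster grows. One idempotent alternative is sketched below (the function name and the parameterized file path are illustrative; on a real node you would pass /etc/hosts):

```shell
# Append cluster host entries to a hosts file only if not already present,
# so the script can be re-run safely. Pass the target file as $1.
add_cluster_hosts() {
  hosts_file="$1"
  for entry in "192.168.5.13  ch01" "192.168.5.14  ch02"; do
    grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
  done
}
```

Usage: run `add_cluster_hosts /etc/hosts` as root on each node; repeated runs add no duplicate lines.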

(10) After completing the steps above, reboot the servers: reboot

2. JDK Installation

(1) Upload the JDK tarball to /opt on each server

[root@ch01 opt]# ls
Clickhouse21.3.4.25  jdk-8u241-linux-x64.tar.gz  zookeeper-3.4.13.tar.gz

(2) Extract the archive and configure environment variables

[root@ch01 opt]# tar -zxvf jdk-8u241-linux-x64.tar.gz
[root@ch01 opt]# vim /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_241
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin

(3) Source the profile and verify the installation

[root@ch01 opt]# source /etc/profile
[root@ch01 opt]# java -version
java version "1.8.0_241"
Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)

3. ZooKeeper Cluster Installation

We reuse the ZooKeeper ensemble already serving Kafka, so its installation is omitted here.

Add the ZooKeeper nodes to /etc/hosts on every ClickHouse node:

192.168.5.5 zk01
192.168.5.6 zk02
192.168.5.8 zk03
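Before wiring ClickHouse to ZooKeeper, it is worth confirming each ensemble member is reachable. A quick sketch using ZooKeeper's four-letter commands (it assumes nc/netcat is installed on the node; the ruok command is available by default on ZooKeeper 3.4.x):

```shell
# Each healthy ZooKeeper node answers "imok" to the "ruok" probe on port 2181.
for zk in zk01 zk02 zk03; do
  printf 'ruok' | nc "$zk" 2181
  echo " <- $zk"
done
```

If any node does not answer "imok", fix ZooKeeper connectivity before proceeding, since replicated tables depend on it.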

4. Installing and Deploying ClickHouse

(1) Upload the rpm packages to /opt/Clickhouse21.3.4.25 on the server

[root@ch01 /]# cd /opt/Clickhouse21.3.4.25/
[root@ch01 Clickhouse21.3.4.25]# ll
total 698960
-rw-r--r-- 1 root root     46224 Jun 28 20:24 clickhouse-client-21.3.4.25-2.noarch.rpm
-rw-r--r-- 1 root root 126245942 Jun 28 20:25 clickhouse-common-static-21.3.4.25-2.x86_64.rpm
-rw-r--r-- 1 root root 589359990 Jun 28 20:27 clickhouse-common-static-dbg-21.3.4.25-2.x86_64.rpm
-rw-r--r-- 1 root root     69097 Jun 28 20:24 clickhouse-server-21.3.4.25-2.noarch.rpm
[root@ch01 Clickhouse21.3.4.25]# 

(2) From that directory, install all four rpm packages with sudo rpm -ivh *.rpm:

[root@ch01 Clickhouse21.3.4.25]# sudo rpm -ivh *.rpm
warning: clickhouse-client-21.3.4.25-2.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e0c56bd4: NOKEY
Preparing...                          ################################# [100%]
Upgrading/installing...
   1:clickhouse-common-static-21.3.4.2################################# [ 25%]
   2:clickhouse-client-21.3.4.25-2    ################################# [ 50%]
   3:clickhouse-server-21.3.4.25-2    ################################# [ 75%]

Repeat the same steps on node 2.

(3) Start ClickHouse

[root@ch01 ~]# systemctl start clickhouse-server.service
Check the startup status:
[root@ch01 ~]# systemctl status clickhouse-server.service
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
   Loaded: loaded (/etc/systemd/system/clickhouse-server.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Wed 2022-06-29 17:12:19 CST; 5s ago
  Process: 28187 ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid (code=exited, status=70)
 Main PID: 28187 (code=exited, status=70)

Jun 29 17:12:19 ch01 systemd[1]: Unit clickhouse-server.service entered failed state.
Jun 29 17:12:19 ch01 systemd[1]: clickhouse-server.service failed.

Note: activating (auto-restart) with status=70 as above means the server process is failing to start; check /var/log/clickhouse-server/clickhouse-server.err.log for the cause before continuing.

Repeat on node 2.

(4) Connect with clickhouse-client

[root@ch01 ~]# clickhouse-client -h localhost --port 9000 -u default  # press Enter; there is no password initially
ClickHouse client version 21.3.4.25 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.3.4 revision 54447.

# verify with a query
ch01 :) show databases;

SHOW DATABASES

Query id: 994f4d84-908d-4092-bf48-227d70c3e212

┌─name────┐
│ default │
│ system  │
└─────────┘

2 rows in set. Elapsed: 0.001 sec. 

ch01 :)

Repeat on node 2.

This confirms that ClickHouse is fully installed and usable on each node.

5. Cluster Deployment

Once both nodes have a working single-node installation per the steps above, add the configuration needed for the cluster.

(1) Taking one node as the example, go to /etc/clickhouse-server and add the configuration below.

Back up config.xml and users.xml first, so that a bad edit cannot leave ClickHouse unable to start:

[root@ch01 clickhouse-server]# cp config.xml config_bak20220630.xml 
[root@ch01 clickhouse-server]# cp users.xml users_bak20220630.xml 

Edit config.xml, clear its contents, and paste in the following:

[root@ch01 clickhouse-server]# vim config.xml


<yandex>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>10</count>
    </logger>
    <timezone>Asia/Shanghai</timezone>
    <database_atomic_delay_before_drop_table_sec>0</database_atomic_delay_before_drop_table_sec>

    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <mysql_port>9004</mysql_port>
    <interserver_http_port>9009</interserver_http_port>
    <listen_host>0.0.0.0</listen_host>

    <max_connections>4096</max_connections>
    <keep_alive_timeout>3</keep_alive_timeout>

    <grpc>
        <enable_ssl>false</enable_ssl>
        <ssl_cert_file>/path/to/ssl_cert_file</ssl_cert_file>
        <ssl_key_file>/path/to/ssl_key_file</ssl_key_file>
        <ssl_require_client_auth>false</ssl_require_client_auth>
        <ssl_ca_cert_file>/path/to/ssl_ca_cert_file</ssl_ca_cert_file>
        <compression>deflate</compression>
        <compression_level>medium</compression_level>
        <max_send_message_size>-1</max_send_message_size>
        <max_receive_message_size>-1</max_receive_message_size>
        <verbose_logs>false</verbose_logs>
    </grpc>

    <openSSL>
        <server>
            <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
            <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
            <dhParamsFile>/etc/clickhouse-server/dhparam.pem</dhParamsFile>
            <verificationMode>none</verificationMode>
            <loadDefaultCAFile>true</loadDefaultCAFile>
            <cacheSessions>true</cacheSessions>
            <disableProtocols>sslv2,sslv3</disableProtocols>
            <preferServerCiphers>true</preferServerCiphers>
        </server>
        <client>
            <loadDefaultCAFile>true</loadDefaultCAFile>
            <cacheSessions>true</cacheSessions>
            <disableProtocols>sslv2,sslv3</disableProtocols>
            <preferServerCiphers>true</preferServerCiphers>
            <invalidCertificateHandler>
                <name>RejectCertificateHandler</name>
            </invalidCertificateHandler>
        </client>
    </openSSL>

    <max_concurrent_queries>100</max_concurrent_queries>
    <max_server_memory_usage>0</max_server_memory_usage>
    <max_thread_pool_size>10000</max_thread_pool_size>
    <max_server_memory_usage_to_ram_ratio>0.9</max_server_memory_usage_to_ram_ratio>
    <total_memory_profiler_step>4194304</total_memory_profiler_step>
    <total_memory_tracker_sample_probability>0</total_memory_tracker_sample_probability>

    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>

    <path>/var/lib/clickhouse/</path>
    <tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
    <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>

    <ldap_servers>
    </ldap_servers>

    <user_directories>
        <users_xml>
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <path>/var/lib/clickhouse/access/</path>
        </local_directory>
    </user_directories>

    <default_profile>default</default_profile>
    <custom_settings_prefixes></custom_settings_prefixes>
    <default_database>default</default_database>
    <mlock_executable>true</mlock_executable>
    <remap_executable>false</remap_executable>

    <remote_servers>
        <default_cluster>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>ch01</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>1qaz!@#$</password>
                </replica>
                <replica>
                    <host>ch02</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>1qaz!@#$</password>
                </replica>
            </shard>
        </default_cluster>
    </remote_servers>

    <remote_url_allow_hosts>
    </remote_url_allow_hosts>

    <zookeeper>
        <node>
            <host>zk01</host>
            <port>2181</port>
        </node>
        <node>
            <host>zk02</host>
            <port>2181</port>
        </node>
        <node>
            <host>zk03</host>
            <port>2181</port>
        </node>
    </zookeeper>

    <macros>
        <shard>1</shard>
        <replica>rep_1_1</replica>
    </macros>
    <default_replica_path>/clickhouse/tables/{shard}/{database}/{table}</default_replica_path>
    <default_replica_name>{replica}</default_replica_name>

    <builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>

    <max_session_timeout>3600</max_session_timeout>
    <default_session_timeout>60</default_session_timeout>

    <query_log>
        <database>system</database>
        <table>query_log</table>
        <partition_by>toYYYYMM(event_date)</partition_by>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_log>

    <trace_log>
        <database>system</database>
        <table>trace_log</table>
        <partition_by>toYYYYMM(event_date)</partition_by>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </trace_log>

    <query_thread_log>
        <database>system</database>
        <table>query_thread_log</table>
        <partition_by>toYYYYMM(event_date)</partition_by>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_thread_log>

    <metric_log>
        <database>system</database>
        <table>metric_log</table>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <collect_interval_milliseconds>1000</collect_interval_milliseconds>
    </metric_log>

    <asynchronous_metric_log>
        <database>system</database>
        <table>asynchronous_metric_log</table>
        <flush_interval_milliseconds>60000</flush_interval_milliseconds>
    </asynchronous_metric_log>

    <opentelemetry_span_log>
        <engine>
            engine MergeTree
            partition by toYYYYMM(finish_date)
            order by (finish_date, finish_time_us, trace_id)
        </engine>
        <database>system</database>
        <table>opentelemetry_span_log</table>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </opentelemetry_span_log>

    <crash_log>
        <database>system</database>
        <table>crash_log</table>
        <partition_by />
        <flush_interval_milliseconds>1000</flush_interval_milliseconds>
    </crash_log>

    <top_level_domains_lists>
    </top_level_domains_lists>

    <dictionaries_config>*_dictionary.xml</dictionaries_config>

    <distributed_ddl>
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>

    <graphite_rollup_example>
        <pattern>
            <regexp>click_cost</regexp>
            <function>any</function>
            <retention>
                <age>0</age>
                <precision>3600</precision>
            </retention>
            <retention>
                <age>86400</age>
                <precision>60</precision>
            </retention>
        </pattern>
        <default>
            <function>max</function>
            <retention>
                <age>0</age>
                <precision>60</precision>
            </retention>
            <retention>
                <age>3600</age>
                <precision>300</precision>
            </retention>
            <retention>
                <age>86400</age>
                <precision>3600</precision>
            </retention>
        </default>
    </graphite_rollup_example>

    <format_schema_path>/var/lib/clickhouse/format_schemas/</format_schema_path>

    <query_masking_rules>
        <rule>
            <name>hide encrypt/decrypt arguments</name>
            <regexp>((?:aes_)?(?:encrypt|decrypt)(?:_mysql)?)\s*\(\s*(?:'(?:\\'|.)+'|.*?)\s*\)</regexp>
            <replace>\1(???)</replace>
        </rule>
    </query_masking_rules>

    <send_crash_reports>
        <enabled>false</enabled>
        <anonymize>false</anonymize>
        <endpoint>https://[email protected]/5226277</endpoint>
    </send_crash_reports>
</yandex>

In the configuration, point the data directories (<path>, <tmp_path>) at a large mounted disk; here the default /var/lib/clickhouse/ is used.
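A malformed config.xml is the most common reason the server loops with exit code 70 on restart, so it is worth checking well-formedness before restarting. A minimal sketch using Python's standard library (the function name is illustrative; on the server, `xmllint --noout config.xml` works equally well):

```python
# Check that an edited ClickHouse config file is well-formed XML
# before restarting the server.
import xml.etree.ElementTree as ET

def is_well_formed(path):
    try:
        ET.parse(path)
        return True
    except ET.ParseError:
        return False
```

Usage: `is_well_formed("/etc/clickhouse-server/config.xml")` should return True before you restart the service.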

Edit users.xml, clear its contents, and paste in the following:


<yandex>
    <profiles>
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <load_balancing>random</load_balancing>
        </default>
        <readonly>
            <readonly>1</readonly>
        </readonly>
    </profiles>
    <users>
        <default>
            <password>1qaz!@#$</password>
            <networks>
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
    </users>

    <quotas>
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
</yandex>

Restart the ClickHouse service and check its status:

[root@ch01 clickhouse-server]# systemctl stop clickhouse-server.service
[root@ch01 clickhouse-server]# systemctl start clickhouse-server.service
[root@ch01 clickhouse-server]# systemctl status clickhouse-server.service
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
   Loaded: loaded (/etc/systemd/system/clickhouse-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-06-30 14:16:41 CST; 1s ago
 Main PID: 3189 (clckhouse-watch)
    Tasks: 47
   Memory: 83.0M
   CGroup: /system.slice/clickhouse-server.service
           ├─3189 clickhouse-watchdog --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
           └─3190 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

Jun 30 14:16:41 ch01 systemd[1]: Started ClickHouse Server (analytic DBMS for big data).
Jun 30 14:16:41 ch01 clickhouse-server[3189]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Jun 30 14:16:41 ch01 clickhouse-server[3189]: Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Jun 30 14:16:41 ch01 clickhouse-server[3189]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Jun 30 14:16:41 ch01 clickhouse-server[3189]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Jun 30 14:16:41 ch01 clickhouse-server[3189]: Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/config.xml'.
Jun 30 14:16:41 ch01 clickhouse-server[3189]: Processing configuration file '/etc/clickhouse-server/users.xml'.
Jun 30 14:16:41 ch01 clickhouse-server[3189]: Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/users.xml'.

Configure the other node the same way. Note: on the second node, the macros section of config.xml must differ:

    <macros>
        <shard>1</shard>
        <replica>rep_1_2</replica>
    </macros>

That is, change the value of replica under the macros tag from rep_1_1 to rep_1_2. Each replica of a shard needs a unique {replica} value, since it is substituted into the ReplicatedMergeTree ZooKeeper path.

Then restart ClickHouse.
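Once both nodes are up, the cluster wiring can be verified from either node. A sketch of the checks (it assumes clickhouse-client is on PATH and uses the default-user password from users.xml above):

```shell
# Confirm the cluster topology as ClickHouse sees it.
clickhouse-client -h localhost --port 9000 -u default --password '1qaz!@#$' --query \
  "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'default_cluster'"

# Each node should report its own macros (rep_1_1 on ch01, rep_1_2 on ch02).
clickhouse-client -h localhost --port 9000 -u default --password '1qaz!@#$' --query \
  "SELECT * FROM system.macros"
```

The first query should list both ch01 and ch02 under default_cluster before you proceed to create replicated tables.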

6. Creating the Database and Tables

1. Connect to ClickHouse on node 1 and create the cluster database log_fact_point:

[root@ch01 clickhouse-server]# clickhouse-client -m -h localhost --port 9000 -u default  --password
ClickHouse client version 21.3.4.25 (official build).
Password for user (default): 
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.3.4 revision 54447.

# create the database across the cluster

ch01 :) create database log_fact_point on cluster  default_cluster ;

CREATE DATABASE log_fact_point ON CLUSTER default_cluster

Query id: f6faf1b8-a073-49b7-9e36-e4fe0336227d

┌─host─┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ ch02 │ 9000 │      0 │       │                   1 │                0 │
│ ch01 │ 9000 │      0 │       │                   0 │                0 │
└──────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2 rows in set. Elapsed: 0.112 sec. 

# view databases

ch01 :) show databases;

SHOW DATABASES

Query id: df029da1-890d-4584-b0a3-89abcbe47c82

┌─name───────────┐
│ default        │
│ log_fact_point │
│ system         │
└────────────────┘

3 rows in set. Elapsed: 0.001 sec. 

# switch to the database

ch01 :) use log_fact_point;

USE log_fact_point

Query id: 8500d813-abe0-4f35-98cf-b2a47d04b1df

Ok.

0 rows in set. Elapsed: 0.001 sec. 


# create the local (replicated) table

ch01 :) 
:-] CREATE TABLE log_fact_point.log_school_screen_dynamic ON cluster default_cluster  
:-] (  
:-]     `id` String COMMENT 'id',
:-]     `type` Int32 COMMENT 'type (1 = homework assigned/submitted, 2 = teacher uploads a resource, 3 = evaluation activity, 4 = interaction activity)',
:-]     `operation_id` String COMMENT 'id of the corresponding business-table row',
:-]     `realname` String COMMENT 'real name of the operator',
:-]     `create_time` DateTime64(3) COMMENT 'operation time',
:-]     `operation` String COMMENT 'operation content',
:-]     `school_id` String COMMENT 'school id'
:-] )  
:-] ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/log_fact_point/log_school_screen_dynamic', '{replica}')  
:-] PARTITION BY toYYYYMM(create_time)  
:-] PRIMARY KEY id  
:-] ORDER BY (id, create_time)
:-] ;

CREATE TABLE log_fact_point.log_school_screen_dynamic ON CLUSTER default_cluster
(
    `id` String COMMENT 'id',
    `type` Int32 COMMENT 'type (1 = homework assigned/submitted, 2 = teacher uploads a resource, 3 = evaluation activity, 4 = interaction activity)',
    `operation_id` String COMMENT 'id of the corresponding business-table row',
    `realname` String COMMENT 'real name of the operator',
    `create_time` DateTime64(3) COMMENT 'operation time',
    `operation` String COMMENT 'operation content',
    `school_id` String COMMENT 'school id'
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/log_fact_point/log_school_screen_dynamic', '{replica}')
PARTITION BY toYYYYMM(create_time)
PRIMARY KEY id
ORDER BY (id, create_time);

Query id: db859174-14ae-4d7f-932f-785fb2f3b295

┌─host─┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ ch02 │ 9000 │      0 │       │                   1 │                0 │
│ ch01 │ 9000 │      0 │       │                   0 │                0 │
└──────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2 rows in set. Elapsed: 0.109 sec. 


# create the distributed table

ch01 :) CREATE TABLE log_fact_point.log_school_screen_dynamic_dist ON CLUSTER default_cluster AS log_fact_point.log_school_screen_dynamic ENGINE = Distributed(default_cluster, log_fact_point, log_school_screen_dynamic, rand());

CREATE TABLE log_fact_point.log_school_screen_dynamic_dist ON CLUSTER default_cluster AS log_fact_point.log_school_screen_dynamic
ENGINE = Distributed(default_cluster, log_fact_point, log_school_screen_dynamic, rand())

Query id: 93928c86-e5f4-40a4-97a3-d3b11963d31f

┌─host─┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ ch02 │ 9000 │      0 │       │                   1 │                0 │
│ ch01 │ 9000 │      0 │       │                   0 │                0 │
└──────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2 rows in set. Elapsed: 0.111 sec. 



# view tables

ch01 :) show tables;

SHOW TABLES

Query id: a77b90e5-51c6-43c0-b14a-81a29ec5ef16

┌─name───────────────────────────┐
│ log_school_screen_dynamic      │
│ log_school_screen_dynamic_dist │
└────────────────────────────────┘

2 rows in set. Elapsed: 0.002 sec. 

ch01 :) 
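To confirm replication end to end, you can write one row through the distributed table and read it back on both nodes. A sketch with made-up sample values (it assumes the servers are reachable by the hostnames configured above):

```shell
# Insert a sample row via the distributed table on ch01 ...
clickhouse-client -h ch01 --port 9000 -u default --password '1qaz!@#$' --query \
  "INSERT INTO log_fact_point.log_school_screen_dynamic_dist VALUES ('id-001', 1, 'op-001', 'Alice', now64(3), 'submitted homework', 'school-001')"

# ... then the row should be readable from the local replicated table on BOTH nodes.
for node in ch01 ch02; do
  clickhouse-client -h "$node" --port 9000 -u default --password '1qaz!@#$' --query \
    "SELECT count() FROM log_fact_point.log_school_screen_dynamic WHERE id = 'id-001'"
done
```

Both queries returning 1 shows the shard's replicas are synchronizing through ZooKeeper.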

2. Connect to ClickHouse on node 2 and check the database and tables created from node 1. (The CREATE DATABASE below is redundant: ON CLUSTER already created the database on every node.)

[root@ch02 ~]# clickhouse-client -m -h localhost --port 9000 -u default  --password
ClickHouse client version 21.3.4.25 (official build).
Password for user (default):   # enter the password ******
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.3.4 revision 54447.

ch02 :) create database log_fact_point;

CREATE DATABASE log_fact_point

Query id: 09374849-50fb-44ab-bb35-3a6e69466d48

Ok.

0 rows in set. Elapsed: 0.003 sec. 

# view databases

ch02 :) show databases;

SHOW DATABASES

Query id: 916b0d54-ab5e-404b-9d6e-3223520c4396

┌─name───────────┐
│ default        │
│ log_fact_point │
│ system         │
└────────────────┘

3 rows in set. Elapsed: 0.002 sec. 

# switch to the database

ch02 :) use log_fact_point;

USE log_fact_point

Query id: f255930e-3836-4075-9be1-24a481d547f0

Ok.

0 rows in set. Elapsed: 0.001 sec. 

# view tables

ch02 :) show tables;

SHOW TABLES

Query id: d93c0818-0636-4475-8b99-02b401aa1b38

┌─name───────────────────────────┐
│ log_school_screen_dynamic      │
│ log_school_screen_dynamic_dist │
└────────────────────────────────┘

2 rows in set. Elapsed: 0.002 sec. 

ch02 :)

Verification succeeded.

With that, the ClickHouse distributed cluster installation and deployment is complete.
