The complete guide to ClickHouse CRUD and import/export operations

All of the code below is based on production deployment practice.

 

--- Client connection tool
clickhouse-client -m -u root --password root

View the SQL queries currently executing
SHOW PROCESSLIST

Create a table

CREATE TABLE test.test (id Int32, create_date Date, c2 Nullable(String))
ENGINE = MergeTree(create_date, (id), 8192);
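The MergeTree(...) call above uses the legacy engine arguments. On recent ClickHouse versions the same table would normally be declared with the explicit PARTITION BY / ORDER BY form; a sketch of the equivalent DDL:

```sql
-- Equivalent table in the current MergeTree syntax:
-- monthly partitions on create_date, sorted by id
CREATE TABLE test.test
(
    id Int32,
    create_date Date,
    c2 Nullable(String)
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(create_date)
ORDER BY id
SETTINGS index_granularity = 8192;
```

Note that Nullable columns such as c2 cannot be used as key columns by default, so only id goes into ORDER BY here.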

Altering tables

ALTER queries are only supported for MergeTree tables. The query has several variants:
ALTER TABLE [db].name [ON CLUSTER cluster] ADD|DROP|MODIFY COLUMN
--Change a column's type

ALTER TABLE test.ontime_wpp_t MODIFY COLUMN TailNum Nullable(String);

ALTER TABLE test.ontime_wpp_t ADD COLUMN TailNum2 Nullable(String) AFTER Div5TailNum;

ALTER TABLE test.ontime_wpp_t DROP COLUMN TailNum2;

Monitor table mutations (asynchronous data changes) via the system table
select * from system.mutations where is_done = 0 order by create_time desc limit 1;

Drop a table
drop table cdm_dwd.dwd_ord_car_sharing_df on cluster crm_4shards_1replicas;

Delete a table's data

truncate table cdm_dwd.dwd_ord_car_sharing_df on cluster crm_4shards_1replicas;
 

Rename a table
RENAME TABLE test.ontime_wpp_t TO test.ontime_wpp_t2;

Cluster-wide operation
RENAME TABLE cdm_dwd.dwd_ord_carsh_base_df2 TO cdm_dwd.dwd_ord_carsh_base_df on cluster crm_4shards_1replicas;

1. Exporting data
Run on the relevant node (the HTTP interface returns TabSeparated by default, which is what the import step below expects):
echo 'select * from test.ads_user_portrait_vertical_df_cls' | curl localhost:8123?database=test -uroot:root -d @- > table_name.sql
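The same export can also be done with clickhouse-client and an explicit FORMAT clause. A sketch, reusing the table and credentials from the examples above:

```shell
# Export as CSV with a header row; query results go to stdout
clickhouse-client --user root --password root \
  --query="SELECT * FROM test.ads_user_portrait_vertical_df_cls FORMAT CSVWithNames" \
  > table_name.csv
```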


2. Importing data, with tab as the separator

Run on the machine hosting the target database:

cat table_name.sql | clickhouse-client --query="INSERT INTO database.table_name FORMAT TabSeparated"
     

cat /root/user_lable_local_mid_cluster.tgz | clickhouse-client --user hadoop --password hadoop --query="INSERT INTO lmmbase.user_lable_local_mid_cluster FORMAT CSV";
cat /home/hadoop/work_wpp/user_label_uid_cluster | clickhouse-client --user hadoop --password hadoop --query="INSERT INTO lmmbase.user_label_uid_cluster FORMAT CSV";

cat /tmp/test_user2| clickhouse-client --user hadoop --password hadoop --query="INSERT INTO lmmbase.test_user2 FORMAT CSV";


Insert statements

Non-strict insert: columns that are not listed are automatically filled with their default values.
INSERT INTO [db.]table [(c1, c2, c3)] VALUES (v11, v12, v13), (v21, v22)

Strict insert: a value must be supplied for every listed column.
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT Values (v11, v12, v13), (v21, v22, v23)
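For example, omitting a column from the column list fills it with that column's default. A sketch against the test.test table created earlier (c2 is Nullable, so its default is NULL):

```sql
-- Only id and create_date are supplied; c2 falls back to its default (NULL)
INSERT INTO test.test (id, create_date) VALUES (1, '2019-10-17');

SELECT id, create_date, c2 FROM test.test WHERE id = 1;
```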

cat /tmp/user_point_info | clickhouse-client --query="INSERT INTO test.user_point_info FORMAT CSV";

clickhouse-client -m --user hadoop --password hadoop --query="truncate table lmmbase.user_label_uid on cluster crm_4shards_1replicas";

ssh hadoop@dn1 "/bin/bash /home/hadoop/app/otherApp/truncate_user_label_uid_data.sh"

clickhouse-client --query=" alter table  test.ads_user_portrait_vertical_df delete where create_time ='2019-10-17' ";
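ALTER ... DELETE runs as an asynchronous mutation rather than an immediate delete, so the statement returns before the rows are actually gone. Its progress can be followed in system.mutations, e.g.:

```sql
-- Check whether the background DELETE mutation has finished (is_done = 1)
SELECT mutation_id, command, create_time, is_done
FROM system.mutations
WHERE database = 'test' AND table = 'ads_user_portrait_vertical_df'
ORDER BY create_time DESC
LIMIT 1;
```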

 

 

 

Benchmarking: run the following queries concurrently (as cron entries) and watch the machine load
*/2 * * * * clickhouse-client -m --query="select t_mac,t_type,count(*) cnt from carendpoint_porlog_cls group by t_mac,t_type order by cnt desc limit 100;"
*/2 * * * * clickhouse-client  -m --query="select t_mac,count(*) cnt from carendpoint_porlog_cls group by t_mac order by cnt desc limit 100;"
*/2 * * * * clickhouse-client  -m --query="select t_type,count(*) cnt from carendpoint_porlog_cls group by t_type order by cnt desc limit 100;"

*/1 * * * * clickhouse-client  -m --query="select t_ip,t_type,count(*) cnt from carendpoint_porlog_cls group by t_ip,t_type order by cnt desc limit 100;" >> /root/wpp1.log
*/1 * * * * clickhouse-client  -m --query="select t_ip,count(*) cnt from carendpoint_porlog_cls group by t_ip order by cnt desc limit 100;" >> /root/wpp2.log
*/1 * * * * clickhouse-client  -m --query="select event,count(*) cnt from carendpoint_porlog_cls group by event order by cnt desc limit 100;" >> /root/wpp2.log
 

 

If you're interested, you can add my WeChat: wangpengpeng0525

Writing all this up takes effort; if you have spare change, feel free to buy me a milk tea.

 
