Design a rigorous cluster architecture around your business requirements. In general, pay attention to the following points:
If, after a rough estimate based on the actual business, you still cannot size the cluster with any accuracy, it is recommended to deploy a minimal cluster first and scale it out step by step later.
Hostname | Specs |
---|---|
192.168.110.101(node-101) | 6 cores, 32 GB RAM, 10 GbE NIC, CPU with AVX2 support |
192.168.110.102(node-102) | 6 cores, 32 GB RAM, 10 GbE NIC, CPU with AVX2 support |
192.168.110.103(node-103) | 6 cores, 32 GB RAM, 10 GbE NIC, CPU with AVX2 support |
192.168.110.104(node-104) | 6 cores, 32 GB RAM, 10 GbE NIC, CPU with AVX2 support |
192.168.110.105(node-105) | 6 cores, 32 GB RAM, 10 GbE NIC, CPU with AVX2 support |
Node | Deployed Services |
---|---|
192.168.110.101(node-101) | FE(Leader)、MySQL-Client |
192.168.110.102(node-102) | FE(Observer) |
192.168.110.103(node-103) | BE、Broker |
192.168.110.104(node-104) | BE、Broker |
192.168.110.105(node-105) | BE、Broker |
Component | Directories |
---|---|
FE | Deployment dir: /opt/module/starrocks/fe; Log dir: /data/starrocks/log/fe; Metadata dir: /data/starrocks/data/meta |
BE | Deployment dir: /opt/module/starrocks/be; Log dir: /data/starrocks/log/be; Data storage dir: /data/starrocks/data/storage |
Broker | Deployment dir: /opt/module/starrocks/apache_hdfs_broker |
In production, some scenarios authenticate with a dedicated OS user rather than root. To stay close to real practice, the following deployment steps are performed as a newly created starrocks user (with the password also temporarily set to starrocks):
useradd starrocks
passwd starrocks
Changing password for user starrocks.
New password: (enter starrocks here)
BAD PASSWORD: The password contains the user name in some form
Retype new password: (enter starrocks again)
passwd: all authentication tokens updated successfully.
Then create the starrocks user on each of the other nodes in the same way (steps as above, omitted).
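Since ansible is already in use for this cluster, the user can also be created on all nodes in one shot. A minimal sketch, assuming the same ansible cluster host group used below and CentOS 7's passwd --stdin:
# Create the user and set its password non-interactively on every node
ansible cluster -m shell -a "useradd starrocks"
ansible cluster -m shell -a "echo starrocks | passwd --stdin starrocks"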
As the root user, create the directories on every node and change their ownership to the starrocks user:
ansible cluster -m shell -a "mkdir -p /opt/module/starrocks/"
ansible cluster -m shell -a "mkdir -p /data/starrocks/log/{fe,be}"
ansible cluster -m shell -a "mkdir -p /data/starrocks/data/{meta,storage}"
ansible cluster -m shell -a "chown -R starrocks:starrocks /opt/module/starrocks /data/starrocks"
Configure passwordless SSH between the cluster nodes for the starrocks user. There is more than one way to set this up; a straightforward one is:
su starrocks
ssh-keygen -t rsa
Distribute the public key to every node in the cluster (including the local node):
for host in 192.168.110.101 192.168.110.102 192.168.110.103 192.168.110.104 192.168.110.105; do
  ssh-copy-id $host
done
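A quick check that passwordless login now works, for example:
ssh 192.168.110.102 hostname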
Extract the StarRocks package on node-101, then distribute the three components to every node:
tar -zxf StarRocks-3.1.1.tar.gz -C /opt/module/
ansible cluster -m copy -a 'src=/opt/module/StarRocks-3.1.1/fe dest=/opt/module/starrocks/'
ansible cluster -m copy -a 'src=/opt/module/StarRocks-3.1.1/be dest=/opt/module/starrocks/'
ansible cluster -m copy -a 'src=/opt/module/StarRocks-3.1.1/apache_hdfs_broker dest=/opt/module/starrocks/'
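A quick sanity check that all three components landed on every node:
ansible cluster -m shell -a "ls /opt/module/starrocks/"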
a. Adjust the Java heap size; to avoid GC problems, 16 GB or more is recommended;
b. Set the metadata directory. The default is fe/meta; we create a dedicated directory and point the config at it (already created above);
c. Check the default ports to avoid conflicts; normally they do not need to be changed;
d. Bind the IP (in CIDR notation) so that the FE picks the correct address on multi-NIC machines. Again, if you are unsure about CIDR notation, just write the full IP, e.g. priority_networks = 192.168.110.101, which is equivalent to priority_networks = 192.168.110.101/32.
vi fe/conf/fe.conf
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#####################################################################
## The uppercase properties are read and exported by bin/start_fe.sh.
## To see all Frontend configurations,
## see fe/src/com/starrocks/common/Config.java
# the output dir of stderr/stdout/gc
# LOG_DIR = ${STARROCKS_HOME}/log
LOG_DIR = /data/starrocks/log/fe
JAVA_HOME=/usr/java/jdk-17
DATE = "$(date +%Y%m%d-%H%M%S)"
JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true -Xmx16384m -XX:+UseMembar -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xloggc:${LOG_DIR}/fe.gc.log.$DATE -XX:+PrintConcurrentLocks"
# For jdk 11+, this JAVA_OPTS will be used as default JVM options
JAVA_OPTS_FOR_JDK_11="-Dlog4j2.formatMsgNoLookups=true -Xmx16384m -XX:+UseG1GC -Xlog:gc*:${LOG_DIR}/fe.gc.log.$DATE:time"
##
## the lowercase properties are read by main program.
##
# DEBUG, INFO, WARN, ERROR, FATAL
sys_log_level = INFO
# store metadata, create it if it is not exist.
# Default value is ${STARROCKS_HOME}/meta
# meta_dir = ${STARROCKS_HOME}/meta
meta_dir = /data/starrocks/data/meta
http_port = 8030
rpc_port = 9020
query_port = 9030
edit_log_port = 9010
mysql_service_nio_enabled = true
# Enable jaeger tracing by setting jaeger_grpc_endpoint
# jaeger_grpc_endpoint = http://localhost:14250
# Choose one if there are more than one ip except loopback address.
# Note that there should at most one ip match this list.
# If no ip match this rule, will choose one randomly.
# use CIDR format, e.g. 10.10.10.0/24
# Default value is empty.
# priority_networks = 10.10.10.0/24;192.168.0.0/16
priority_networks = 192.168.110.101
# Advanced configurations
# log_roll_size_mb = 1024
sys_log_dir = /data/starrocks/log/fe
# sys_log_roll_num = 10
# sys_log_verbose_modules =
audit_log_dir = /data/starrocks/log/fe
# audit_log_modules = slow_query, query
# audit_log_roll_num = 10
# meta_delay_toleration_second = 10
# qe_max_connection = 1024
# max_conn_per_user = 100
# qe_query_timeout_second = 300
# qe_slow_log_ms = 5000
max_routine_load_batch_size = 524288000
routine_load_task_consume_second = 3
routine_load_task_timeout_second = 15
a. Check the default ports to avoid conflicts; normally they do not need to be changed;
b. Bind the IP so that the BE picks the correct address on multi-NIC machines;
c. Set the data storage directory. The default is be/storage; we recommend creating directories according to your disk layout and updating the config (already created above).
vi be/conf/be.conf
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# INFO, WARNING, ERROR, FATAL
sys_log_level = INFO
# ports for admin, web, heartbeat service
be_port = 9060
be_http_port = 8040
heartbeat_service_port = 9050
brpc_port = 8060
# Enable jaeger tracing by setting jaeger_endpoint
# jaeger_endpoint = localhost:6831
# Choose one if there are more than one ip except loopback address.
# Note that there should at most one ip match this list.
# If no ip match this rule, will choose one randomly.
# use CIDR format, e.g. 10.10.10.0/24
# Default value is empty.
# priority_networks = 10.10.10.0/24;192.168.0.0/16
priority_networks = 192.168.110.103
# data root path, separate by ';'
# you can specify the storage medium of each root path, HDD or SSD, seperate by ','
# eg:
# storage_root_path = /data1,medium:HDD;/data2,medium:SSD;/data3
# /data1, HDD;
# /data2, SSD;
# /data3, HDD(default);
#
# Default value is ${STARROCKS_HOME}/storage, you should create it by hand.
# storage_root_path = ${STARROCKS_HOME}/storage
storage_root_path = /data/starrocks/data/storage,medium:SSD
# Advanced configurations
sys_log_dir = /data/starrocks/log/be
# sys_log_roll_mode = SIZE-MB-1024
# sys_log_roll_num = 10
# sys_log_verbose_modules = *
# log_buffer_level = -1
# JVM options for be
JAVA_HOME=/usr/java/jdk-17
# eg:
# JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf"
# For jdk 9+, this JAVA_OPTS will be used as default JVM options
# JAVA_OPTS_FOR_JDK_9="-Djava.security.krb5.conf=/etc/krb5.conf"
base_compaction_check_interval_seconds = 10
cumulative_compaction_num_threads_per_disk = 4
base_compaction_num_threads_per_disk = 2
cumulative_compaction_check_interval_seconds = 2
tablet_max_versions = 15000
Distribute the modified configuration files to the other nodes:
ansible cluster -m copy -a 'src=/opt/module/starrocks/fe/conf/fe.conf dest=/opt/module/starrocks/fe/conf/'
ansible cluster -m copy -a 'src=/opt/module/starrocks/be/conf/be.conf dest=/opt/module/starrocks/be/conf/'
After distribution, each node's config must bind its own IP. For the FE on node-102:
priority_networks = 192.168.110.102
For the BEs on node-104 and node-105 (shown for node-104):
priority_networks = 192.168.110.104
storage_root_path = /data/starrocks/data/storage,medium:SSD
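If you would rather not edit each copy by hand, the node's own IP can be patched in locally. A minimal sketch, to be run on each node, assuming the first address reported by hostname -I belongs to the correct NIC:
# Point priority_networks at this node's own IP (adjust the NIC logic to your environment)
ip=$(hostname -I | awk '{print $1}')
sed -i "s#^priority_networks = .*#priority_networks = ${ip}#" /opt/module/starrocks/fe/conf/fe.conf
sed -i "s#^priority_networks = .*#priority_networks = ${ip}#" /opt/module/starrocks/be/conf/be.conf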
Install the MySQL client on node-101 (used to connect to the FE):
rpm -ivh mysql-community-client-plugins-8.0.34-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-8.0.34-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-8.0.34-1.el7.x86_64.rpm
mysql --version
Configure the Java environment variables (JDK 17 here; JDK 9+ no longer ships lib/tools.jar or lib/dt.jar, so no CLASSPATH entry is needed):
export JAVA_HOME=/usr/java/jdk-17
export PATH=$JAVA_HOME/bin:$PATH
The first FE started automatically becomes the Leader.
If the process misbehaves, check the FE log directory to trace the cause: the main FE log is fe.log, and the audit log of all queries is fe.audit.log.
cd /opt/module/starrocks/fe/bin/
./start_fe.sh --daemon
jps | grep StarRocksFe
mysql -h192.168.110.101 -P9030 -uroot
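If the FE fails to start or refuses connections, tail the main log for the cause (log paths as configured above):
tail -n 200 /data/starrocks/log/fe/fe.log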
It is recommended to first add an instance to the cluster and then start it, one instance at a time. Add node-102's FE to the cluster as an Observer first, using edit_log_port (default 9010).
alter system add observer '192.168.110.102:9010';
To add it as a Follower instead, the SQL is:
alter system add follower "fe_host:edit_log_port";
If the IP or port was entered incorrectly, or the instance needs to be removed from the cluster for some other reason, the SQL is as follows.
Drop a Follower:
alter system drop follower "fe_host:edit_log_port";
Drop an Observer:
alter system drop observer "fe_host:edit_log_port";
Special note: apart from the first FE started, every other FE node must specify an existing FE as its helper on first startup (subsequent startups do not need this).
Start the node-102 FE for the first time, specifying the node-101 FE instance as the helper:
cd /opt/module/starrocks/fe/bin/
./start_fe.sh --helper 192.168.110.101:9010 --daemon
jps | grep StarRocksFe
Check the FE status from the mysql-client on node-101:
show frontends\G
If Alive is true for every FE, the FEs are healthy. If it is false, locate the problem in the logs; since this is the initial startup, if troubleshooting drags on you can simply wipe the FE metadata directory and start over.
Using the mysql-client, first add the 3 BE instances to the cluster. The port to use here is heartbeat_service_port, default 9050:
alter system add backend '192.168.110.103:9050';
alter system add backend '192.168.110.104:9050';
alter system add backend '192.168.110.105:9050';
If the IP or port was entered incorrectly, or a BE instance has to be removed from the cluster for some other reason, there are two options. Drop removes the BE immediately, without migrating its data:
alter system dropp backend "be_host:be_heartbeat_service_port";
Decommission is the safe variant: it first migrates the data off the BE, then removes it:
alter system decommission backend "be_host:be_heartbeat_service_port";
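During a decommission the BE stays Alive while its TabletNum drains toward 0; progress can be watched with, for example:
SHOW PROC '/backends';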
If the process is abnormal, check the BE log directory to trace the cause: the main BE log is be.INFO; other output goes to be.out.
cd /opt/module/starrocks/be/bin/
./start_be.sh --daemon
# Check the process status
ps -ef | grep starrocks_be
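If the process is missing, tail the main BE log (path as configured above):
tail -n 200 /data/starrocks/log/be/be.INFO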
Check the BE status from the mysql-client:
show backends\G
If Alive is true for every BE, the BEs are healthy. If it is false, troubleshoot from the logs. Likewise, since the BEs have just been started for the first time, if a problem cannot be pinned down quickly you can wipe the storage data directory and the log directory and start the service again.
Broker instances do not need an IP binding, and in production the other settings in the Broker config file usually do not need to be changed either.
From the mysql-client on node-101, first add the 3 Broker instances to the cluster, using broker_ipc_port, default 8000:
alter system add broker hdfs_broker '192.168.110.103:8000';
alter system add broker hdfs_broker '192.168.110.104:8000';
alter system add broker hdfs_broker '192.168.110.105:8000';
To remove a Broker from the cluster, the SQL is:
ALTER SYSTEM DROP BROKER broker_name "broker_host:broker_ipc_port";
Deploy an independent group of Brokers, and place the hdfs-site.xml file of the corresponding HDFS cluster in the {deploy}/conf directory on each Broker node. Restart the Broker processes for the file to take effect.
# Distribute hdfs-site.xml
ansible cluster -m copy -a 'src=/opt/module/starrocks/apache_hdfs_broker/conf/hdfs-site.xml dest=/opt/module/starrocks/apache_hdfs_broker/conf/'
Note: just start the nodes one by one; if a process is abnormal, trace the cause in the logs.
cd /opt/module/starrocks/apache_hdfs_broker/bin/
./start_broker.sh --daemon
jps | grep BrokerBootstrap
Check the Broker status from the mysql-client:
mysql -h192.168.110.101 -P9030 -uroot
show broker\G
If Alive is true for every Broker, they are healthy. The Broker log is apache_hdfs_broker.log; if the status is false, locate the problem there.
Place the hdfs-site.xml file in the {deploy}/conf directory of every FE node and every BE node.
# Distribute hdfs-site.xml
ansible cluster -m copy -a 'src=/opt/module/starrocks/fe/conf/hdfs-site.xml dest=/opt/module/starrocks/fe/conf'
ansible cluster -m copy -a 'src=/opt/module/starrocks/be/conf/hdfs-site.xml dest=/opt/module/starrocks/be/conf'
Once the cluster is deployed, if a machine reboots or a service goes down, the services have to be started and stopped by hand or by a script.
cd /opt/module/starrocks/fe/bin/
# Start FE
./start_fe.sh --daemon
# Stop FE
./stop_fe.sh
cd /opt/module/starrocks/be/bin
# Start BE
./start_be.sh --daemon
# Stop BE
./stop_be.sh
cd /opt/module/starrocks/apache_hdfs_broker/bin/
# Start Broker
./start_broker.sh --daemon
# Stop Broker
./stop_broker.sh
On node-101, as the starrocks user, create a starrocks.sh file under /home/starrocks:
#!/bin/bash
# Usage: starrocks.sh start|stop|restart
case $1 in
"start"){
for i in node-101 node-102 node-103 node-104 node-105
do
echo "=================== start $i's service ================"
ssh $i "source /etc/profile.d/my_env.sh ;cd /opt/module/starrocks;./fe/bin/start_fe.sh --daemon"
ssh $i "/opt/module/starrocks/be/bin/start_be.sh --daemon"
ssh $i "source /etc/profile.d/my_env.sh ;cd /opt/module/starrocks;./apache_hdfs_broker/bin/start_broker.sh --daemon"
done
};;
"stop"){
for i in node-101 node-102 node-103 node-104 node-105
do
echo "=================== stop $i's service ================"
ssh $i "/opt/module/starrocks/apache_hdfs_broker/bin/stop_broker.sh"
ssh $i "/opt/module/starrocks/be/bin/stop_be.sh"
ssh $i "/opt/module/starrocks/fe/bin/stop_fe.sh"
done
};;
"restart")
starrocks.sh stop
sleep 2
starrocks.sh start
;;
*)
echo "Parameter ERROR!!!"
;;
esac
chmod a+x starrocks.sh
Switch to the root user and move the script to /bin so that it can be invoked from anywhere:
mv starrocks.sh /bin/
Switch back to the starrocks user and test starting the cluster with the script (first make sure none of the instances are running):
# Start the cluster
starrocks.sh start
# Stop the cluster services
starrocks.sh stop
# Restart the cluster services
starrocks.sh restart
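A companion status check can be scripted the same way; a sketch, assuming the same node list and the passwordless SSH configured earlier:
#!/bin/bash
# Print StarRocks-related processes on every node
for i in node-101 node-102 node-103 node-104 node-105
do
echo "=================== $i's processes ================"
ssh $i "jps | grep -E 'StarRocksFe|BrokerBootstrap'; ps -ef | grep [s]tarrocks_be"
done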
In the current version, StarRocks ships with a high-privilege user after deployment: root, whose default password is empty. Access the StarRocks cluster with the mysql-client on node-101:
mysql -h192.168.110.101 -P9030 -uroot
# Taking root as an example; set a complex password in production
set password=password('StarRocks*2308');
create database star;
CREATE USER 'starrocks'@'%' IDENTIFIED BY 'StarRocks*2308' DEFAULT ROLE user_admin;
grant all on star.* to 'starrocks'@'%';
grant create table on database star to user 'starrocks'@'%';
mysql -h192.168.110.101 -P9030 -ustarrocks -pStarRocks*2308
use star;
CREATE TABLE IF NOT EXISTS `customer` (
`c_custkey` int(11) NOT NULL COMMENT "",
`c_name` varchar(26) NOT NULL COMMENT "",
`c_address` varchar(41) NOT NULL COMMENT "",
`c_city` varchar(11) NOT NULL COMMENT "",
`c_nation` varchar(16) NOT NULL COMMENT "",
`c_region` varchar(13) NOT NULL COMMENT "",
`c_phone` varchar(16) NOT NULL COMMENT "",
`c_mktsegment` varchar(11) NOT NULL COMMENT ""
) ENGINE=OLAP
DUPLICATE KEY(`c_custkey`)
COMMENT "OLAP"
DISTRIBUTED BY HASH(`c_custkey`) BUCKETS 12
PROPERTIES (
"replication_num" = "1",
"in_memory" = "false",
"storage_format" = "DEFAULT"
);
insert into customer
select
1,'Customer#000000001','j5JsirBM9P','MOROCCO 0','MOROCCO','AFRICA','25-989-741-2988','BUILDING';
Data can also be loaded in bulk via Stream Load over HTTP, e.g. loading the '|'-separated /home/starrocks/customer.tbl:
curl --location-trusted -u starrocks:StarRocks*2308 -H "label:star_customer" -H "column_separator:|" -T /home/starrocks/customer.tbl http://192.168.110.101:8030/api/star/customer/_stream_load
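Once the Stream Load returns Success, the loaded rows can be verified from the mysql-client, for example:
select count(*) from customer;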
A more production-like example: a PRIMARY KEY table with an auto-increment id column:
CREATE TABLE dwd.dwd_enterprise_info(
id BIGINT NOT NULL AUTO_INCREMENT COMMENT 'auto-increment primary key',
`eid` string COMMENT 'enterprise eid',
`credit_no` string COMMENT 'unified social credit code',
`name` string COMMENT 'enterprise name',
`ipcs` string COMMENT 'patent IPC classification numbers',
`titles` string COMMENT 'patent titles',
`scope` string COMMENT 'business scope',
`introduction` string COMMENT 'company profile',
`main_business` string COMMENT 'main business',
`industry_code` string COMMENT 'industry code',
`industrial_field_new` string COMMENT 'industry field',
`html` string COMMENT 'website',
`hat_name` string COMMENT 'enterprise qualifications')
PRIMARY KEY (id,eid)
COMMENT 'Dedicated table for enterprise industry-chain matching, with company profile, main business and patent info'
DISTRIBUTED BY HASH(id,eid) BUCKETS 12
PROPERTIES (
"replication_num" = "1",
"replicated_storage" = "true"
);
Then load the corresponding ORC data from HDFS via Broker Load, using the hdfs_broker group deployed earlier:
LOAD LABEL dwd.dwd_alg_enterprise_business_patents_scope_info
(
DATA INFILE("hdfs://IP:8020/warehouse/tablespace/external/hive/dwd.db/dwd_enterprise_info/*")
INTO TABLE dwd_enterprise_info
COLUMNS TERMINATED BY "|^|"
FORMAT AS "orc"
(eid, credit_no, name,ipcs,titles,scope,introduction,main_business,industry_code,industrial_field_new,html,hat_name)
)
WITH BROKER 'hdfs_broker';
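Broker Load runs asynchronously; the job can be polled by its label, for example:
SHOW LOAD FROM dwd WHERE LABEL = "dwd_alg_enterprise_business_patents_scope_info"\G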