Log in to the control machine as a regular user (the tidb user is used as the example here); all subsequent TiUP installation and cluster management operations are performed as this user:
1. Install the TiUP tool with the following command:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
2. Set up the TiUP environment variables as follows:
Reload the global environment variables:
source .bash_profile
Confirm that the TiUP tool is installed:
which tiup
3. Install the TiUP cluster component:
tiup cluster
If it is already installed, update the TiUP cluster component to the latest version:
tiup update --self && tiup update cluster
The expected output includes the message "Update successfully!".
4. Verify the current TiUP cluster version. Run the following command to check the version of the TiUP cluster component:
[tidb@db01 ~]$ tiup --binary cluster
/home/tidb/.tiup/components/cluster/v1.3.1/tiup-cluster
1. Edit the configuration file (topology.yaml):
# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 192.168.137.129
  - host: 192.168.137.130
  - host: 192.168.137.131

tidb_servers:
  - host: 192.168.137.129
  - host: 192.168.137.130
  - host: 192.168.137.131

tikv_servers:
  - host: 192.168.137.129
  - host: 192.168.137.130
  - host: 192.168.137.131

monitoring_servers:
  - host: 192.168.137.129

grafana_servers:
  - host: 192.168.137.129

alertmanager_servers:
  - host: 192.168.137.129
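Before deploying, it can be worth checking how many component entries the topology places on each host, since tiup warns at deploy time about port/directory conflicts on the same host. A minimal sketch (not part of the original guide; the heredoc reproduces the host lists from the topology above):

```shell
# Reproduce the host lists from the topology above in a scratch file.
cat > /tmp/topology-hosts.yaml <<'EOF'
pd_servers:
  - host: 192.168.137.129
  - host: 192.168.137.130
  - host: 192.168.137.131
tidb_servers:
  - host: 192.168.137.129
  - host: 192.168.137.130
  - host: 192.168.137.131
tikv_servers:
  - host: 192.168.137.129
  - host: 192.168.137.130
  - host: 192.168.137.131
monitoring_servers:
  - host: 192.168.137.129
grafana_servers:
  - host: 192.168.137.129
alertmanager_servers:
  - host: 192.168.137.129
EOF
# Count component entries per host. Hosts carrying several components rely on
# each component using a distinct default port (2379, 4000, 20160, ...).
grep -E '^ *- host:' /tmp/topology-hosts.yaml | awk '{print $3}' | sort | uniq -c
```

Here 192.168.137.129 carries six entries (pd, tidb, tikv, and all three monitoring components), which is fine as long as none of them share a port or directory.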
2. Deploy TiDB
Cluster deployment through TiUP supports two ways of authenticating against the target machines: an SSH key or an interactive password. Use -i (or --identity_file) to specify the path to the private key, or -p to enter the password at an interactive prompt. In general, TiUP creates the user and group declared in topology.yaml on the target machines, except in the following cases:
- The user name set in topology.yaml already exists on the target machine.
- --skip-create-user is passed explicitly to skip the user-creation step.
[tidb@db01 ~]$ tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --user root -p
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.1/tiup-cluster deploy tidb-test v4.0.0 ./topology.yaml --user root -p
Please confirm your topology:
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v4.0.0
Type Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.137.129 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd 192.168.137.130 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd 192.168.137.131 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 192.168.137.129 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 192.168.137.130 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 192.168.137.131 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb 192.168.137.129 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tidb 192.168.137.130 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tidb 192.168.137.131 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
prometheus 192.168.137.129 9090 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 192.168.137.129 3000 linux/x86_64 /tidb-deploy/grafana-3000
alertmanager 192.168.137.129 9093/9094 linux/x86_64 /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v4.0.0 (linux/amd64) ... Done
- Download tikv:v4.0.0 (linux/amd64) ... Done
- Download tidb:v4.0.0 (linux/amd64) ... Done
- Download prometheus:v4.0.0 (linux/amd64) ... Done
- Download grafana:v4.0.0 (linux/amd64) ... Done
- Download alertmanager:v0.17.0 (linux/amd64) ... Done
- Download node_exporter:v0.17.0 (linux/amd64) ... Done
- Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 192.168.137.129:22 ... Done
- Prepare 192.168.137.130:22 ... Done
- Prepare 192.168.137.131:22 ... Done
+ Copy files
- Copy pd -> 192.168.137.129 ... Done
- Copy pd -> 192.168.137.130 ... Done
- Copy pd -> 192.168.137.131 ... Done
- Copy tikv -> 192.168.137.129 ... Done
- Copy tikv -> 192.168.137.130 ... Done
- Copy tikv -> 192.168.137.131 ... Done
- Copy tidb -> 192.168.137.129 ... Done
- Copy tidb -> 192.168.137.130 ... Done
- Copy tidb -> 192.168.137.131 ... Done
- Copy prometheus -> 192.168.137.129 ... Done
- Copy grafana -> 192.168.137.129 ... Done
- Copy alertmanager -> 192.168.137.129 ... Done
- Copy node_exporter -> 192.168.137.130 ... Done
- Copy node_exporter -> 192.168.137.131 ... Done
- Copy node_exporter -> 192.168.137.129 ... Done
- Copy blackbox_exporter -> 192.168.137.129 ... Done
- Copy blackbox_exporter -> 192.168.137.130 ... Done
- Copy blackbox_exporter -> 192.168.137.131 ... Done
+ Check status
Enabling component pd
Enabling instance pd 192.168.137.131:2379
Enabling instance pd 192.168.137.129:2379
Enabling instance pd 192.168.137.130:2379
Enable pd 192.168.137.131:2379 success
Enable pd 192.168.137.129:2379 success
Enable pd 192.168.137.130:2379 success
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component tikv
Enabling instance tikv 192.168.137.131:20160
Enabling instance tikv 192.168.137.129:20160
Enabling instance tikv 192.168.137.130:20160
Enable tikv 192.168.137.129:20160 success
Enable tikv 192.168.137.131:20160 success
Enable tikv 192.168.137.130:20160 success
Enabling component tidb
Enabling instance tidb 192.168.137.131:4000
Enabling instance tidb 192.168.137.129:4000
Enabling instance tidb 192.168.137.130:4000
Enable tidb 192.168.137.129:4000 success
Enable tidb 192.168.137.131:4000 success
Enable tidb 192.168.137.130:4000 success
Enabling component prometheus
Enabling instance prometheus 192.168.137.129:9090
Enable prometheus 192.168.137.129:9090 success
Enabling component grafana
Enabling instance grafana 192.168.137.129:3000
Enable grafana 192.168.137.129:3000 success
Enabling component alertmanager
Enabling instance alertmanager 192.168.137.129:9093
Enable alertmanager 192.168.137.129:9093 success
Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test`
TiUP can manage multiple TiDB clusters. The following command lists every cluster currently managed through TiUP cluster, including the cluster name, deployment user, version, and private-key information:
[tidb@db01 ~]$ tiup cluster list
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.1/tiup-cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
tidb-test tidb v4.0.0 /home/tidb/.tiup/storage/cluster/clusters/tidb-test /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
Run the following command to check the tidb-test cluster. The expected output includes each instance's ID, role, host, listening ports, status (Down/inactive, because the cluster has not been started yet), and directories:
[tidb@db01 ~]$ tiup cluster display tidb-test
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.1/tiup-cluster display tidb-test
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v4.0.0
SSH type: builtin
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.137.129:9093 alertmanager 192.168.137.129 9093/9094 linux/x86_64 inactive /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
192.168.137.129:3000 grafana 192.168.137.129 3000 linux/x86_64 inactive - /tidb-deploy/grafana-3000
192.168.137.129:2379 pd 192.168.137.129 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.137.130:2379 pd 192.168.137.130 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.137.131:2379 pd 192.168.137.131 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.137.129:9090 prometheus 192.168.137.129 9090 linux/x86_64 inactive /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.137.129:4000 tidb 192.168.137.129 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
192.168.137.130:4000 tidb 192.168.137.130 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
192.168.137.131:4000 tidb 192.168.137.131 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
192.168.137.129:20160 tikv 192.168.137.129 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.137.130:20160 tikv 192.168.137.130 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.137.131:20160 tikv 192.168.137.131 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
Start the cluster. The message Started cluster `tidb-test` successfully in the expected output indicates that the startup succeeded.
[tidb@db01 ~]$ tiup cluster start tidb-test
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.1/tiup-cluster start tidb-test
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.131
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.130
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.130
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.131
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.131
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.130
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [ Serial ] - StartCluster
Starting component pd
Starting instance pd 192.168.137.131:2379
Starting instance pd 192.168.137.129:2379
Starting instance pd 192.168.137.130:2379
Start pd 192.168.137.131:2379 success
Start pd 192.168.137.129:2379 success
Start pd 192.168.137.130:2379 success
Starting component node_exporter
Starting instance 192.168.137.129
Start 192.168.137.129 success
Starting component blackbox_exporter
Starting instance 192.168.137.129
Start 192.168.137.129 success
Starting component node_exporter
Starting instance 192.168.137.130
Start 192.168.137.130 success
Starting component blackbox_exporter
Starting instance 192.168.137.130
Start 192.168.137.130 success
Starting component node_exporter
Starting instance 192.168.137.131
Start 192.168.137.131 success
Starting component blackbox_exporter
Starting instance 192.168.137.131
Start 192.168.137.131 success
Starting component tikv
Starting instance tikv 192.168.137.131:20160
Starting instance tikv 192.168.137.129:20160
Starting instance tikv 192.168.137.130:20160
Start tikv 192.168.137.131:20160 success
Start tikv 192.168.137.129:20160 success
Start tikv 192.168.137.130:20160 success
Starting component tidb
Starting instance tidb 192.168.137.131:4000
Starting instance tidb 192.168.137.129:4000
Starting instance tidb 192.168.137.130:4000
Start tidb 192.168.137.129:4000 success
Start tidb 192.168.137.131:4000 success
Start tidb 192.168.137.130:4000 success
Starting component prometheus
Starting instance prometheus 192.168.137.129:9090
Start prometheus 192.168.137.129:9090 success
Starting component grafana
Starting instance grafana 192.168.137.129:3000
Start grafana 192.168.137.129:3000 success
Starting component alertmanager
Starting instance alertmanager 192.168.137.129:9093
Start alertmanager 192.168.137.129:9093 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
Check the cluster status through TiUP. In the expected output, note that a Status of Up means the cluster is healthy:
[tidb@db01 ~]$ tiup cluster display tidb-test
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.1/tiup-cluster display tidb-test
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v4.0.0
SSH type: builtin
Dashboard URL: http://192.168.137.131:2379/dashboard
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.137.129:9093 alertmanager 192.168.137.129 9093/9094 linux/x86_64 Up /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
192.168.137.129:3000 grafana 192.168.137.129 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.137.129:2379 pd 192.168.137.129 2379/2380 linux/x86_64 Up|L /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.137.130:2379 pd 192.168.137.130 2379/2380 linux/x86_64 Up /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.137.131:2379 pd 192.168.137.131 2379/2380 linux/x86_64 Up|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.137.129:9090 prometheus 192.168.137.129 9090 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.137.129:4000 tidb 192.168.137.129 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.137.130:4000 tidb 192.168.137.130 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.137.131:4000 tidb 192.168.137.131 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.137.129:20160 tikv 192.168.137.129 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.137.130:20160 tikv 192.168.137.130 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.137.131:20160 tikv 192.168.137.131 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
Total nodes: 12
Connect to TiDB
[tidb@db01 ~]$ mysql -u root -h 192.168.137.129 -P 4000
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.25-TiDB-v4.0.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
root@mysql 19:51: [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
5 rows in set (0.00 sec)
root@mysql 19:51: [(none)]> use test;
Database changed
root@mysql 19:51: [test]> show tables;
Empty set (0.00 sec)
root@mysql 19:51: [test]> select user,host from mysql.user;
+------+------+
| user | host |
+------+------+
| root | % |
+------+------+
1 row in set (0.00 sec)
root@mysql 19:51: [test]> ^DBye
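The interactive session above can also be scripted. A sketch, assuming the mysql client is installed on the control machine and the cluster is still Up (not runnable without a live cluster):

```shell
# -e runs a single statement and exits, which is convenient for smoke tests.
mysql -u root -h 192.168.137.129 -P 4000 -e 'SELECT tidb_version();'
mysql -u root -h 192.168.137.129 -P 4000 -e 'SHOW DATABASES;'
```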
Stop TiDB
[tidb@db01 ~]$ tiup cluster stop tidb-test
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.1/tiup-cluster stop tidb-test
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.131
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.130
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.131
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.130
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.129
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.131
+ [Parallel] - UserSSH: user=tidb, host=192.168.137.130
+ [ Serial ] - StopCluster
Stopping component alertmanager
Stopping instance 192.168.137.129
Stop alertmanager 192.168.137.129:9093 success
Stopping component grafana
Stopping instance 192.168.137.129
Stop grafana 192.168.137.129:3000 success
Stopping component prometheus
Stopping instance 192.168.137.129
Stop prometheus 192.168.137.129:9090 success
Stopping component tidb
Stopping instance 192.168.137.131
Stopping instance 192.168.137.129
Stopping instance 192.168.137.130
Stop tidb 192.168.137.129:4000 success
Stop tidb 192.168.137.131:4000 success
Stop tidb 192.168.137.130:4000 success
Stopping component tikv
Stopping instance 192.168.137.131
Stopping instance 192.168.137.129
Stopping instance 192.168.137.130
Stop tikv 192.168.137.129:20160 success
Stop tikv 192.168.137.130:20160 success
Stop tikv 192.168.137.131:20160 success
Stopping component pd
Stopping instance 192.168.137.131
Stopping instance 192.168.137.129
Stopping instance 192.168.137.130
Stop pd 192.168.137.129:2379 success
Stop pd 192.168.137.130:2379 success
Stop pd 192.168.137.131:2379 success
Stopping component node_exporter
Stopping component blackbox_exporter
Stopping component node_exporter
Stopping component blackbox_exporter
Stopping component node_exporter
Stopping component blackbox_exporter
Stopped cluster `tidb-test` successfully
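With the cluster stopped, TiUP provides further lifecycle commands. A brief sketch (these are standard tiup cluster subcommands; run tiup cluster --help for the full list):

```shell
# Start the stopped cluster again
tiup cluster start tidb-test
# Restart all components in place
tiup cluster restart tidb-test
# Permanently remove the cluster, including its data (irreversible)
tiup cluster destroy tidb-test
```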