Official TiDB documentation: https://pingcap.com/docs-cn/overview/#tidb-%e7%ae%80%e4%bb%8b
Seven Alibaba Cloud servers: three run TiKV, three run PD, and two run TiDB (the TiDB instances share machines with PD; the seventh machine, which has a public IP, serves as the download machine).
Note: all seven servers must be in the same region and availability zone (e.g. South China 1, Zone C).
Internal IP | Public IP | Role |
---|---|---|
172.18.56.156 | 120.90.188.11 | Download machine |
172.18.56.155 | None | PD, Prometheus, Grafana, Pushgateway, Node_exporter |
172.18.56.154 | None | PD, TiDB, Node_exporter |
172.18.56.153 | None | PD, TiDB, Node_exporter |
172.18.56.152 | None | TiKV, Node_exporter |
172.18.56.151 | None | TiKV, Node_exporter |
172.18.56.150 | None | TiKV, Node_exporter |
Note: the machines with only internal IPs can be reached by first logging in to the machine with the public IP.
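Hopping through the public-IP machine can be made convenient with an SSH jump-host entry on your workstation. This is a sketch: the `tidb-ctl` alias is made up, and `ProxyJump` requires OpenSSH 7.3 or newer.

```
# ~/.ssh/config on your workstation (hypothetical alias)
Host tidb-ctl
    HostName 172.18.56.155
    User tidb
    ProxyJump root@120.90.188.11   # hop through the download machine's public IP
```

With this in place, `ssh tidb-ctl` reaches the control machine directly.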
Log in as root to each of 172.18.56.150–155, create a tidb user, and grant it passwordless sudo:
# Create the tidb user
$ useradd tidb
$ passwd tidb # set the password to tidb as well
# Grant passwordless sudo
$ visudo # or vim /etc/sudoers
# Append a new line at the end of the file:
tidb ALL=(ALL) NOPASSWD: ALL
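The steps above can also be driven from a single machine instead of logging in six times. This is a hedged sketch: it assumes root SSH access to each node and CentOS's `passwd --stdin`; it only prints the commands (a dry run), so remove the `echo` to execute them.

```shell
# Dry run: print the per-node user-creation command for every cluster machine
NODES="172.18.56.150 172.18.56.151 172.18.56.152 172.18.56.153 172.18.56.154 172.18.56.155"
for ip in $NODES; do
  # passwd --stdin is CentOS/RHEL-specific; remove `echo` to actually run over ssh
  echo "ssh root@$ip \"useradd tidb && echo tidb | passwd --stdin tidb\""
done
```

The sudoers line still needs to be appended on each node (e.g. via `visudo` as shown above).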
On the control machine (172.18.56.155), generate an SSH key pair:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tidb/.ssh/id_rsa):
Created directory '/home/tidb/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/tidb/.ssh/id_rsa.
Your public key has been saved in /home/tidb/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:eIBykszR1KyECA/h0d7PRKz4fhAeli7IrVphhte7/So tidb@172.18.56.155
The key's randomart image is:
+---[RSA 2048]----+
|=+o+.o. |
|o=o+o.oo |
| .O.=.= |
| . B.B + |
|o B * B S |
| * + * + |
| o + . |
| o E+ . |
|o ..+o. |
+----[SHA256]-----+
Log in to the control machine as the tidb user and run the following commands to copy its public key to every machine that needs mutual trust:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.18.56.150
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.18.56.151
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.18.56.152
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.18.56.153
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.18.56.154
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.18.56.155
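The six invocations above can be collapsed into one bash loop; brace expansion generates the IP range. Shown as a dry run, so drop the `echo` to actually copy the key.

```shell
# One loop instead of six ssh-copy-id lines (requires bash for {150..155})
for ip in 172.18.56.{150..155}; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$ip"   # drop `echo` to actually copy the key
done
```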
Verify passwordless SSH login (mutual trust):
$ ssh 172.18.56.154
SSHing from the control machine to a trusted machine should not prompt for a password.
The external SSD disks on the three TiKV machines must be formatted as ext4:
# List the disks and find the device to format (assume it is /dev/vdb)
$ fdisk -l
# Partition the disk; here we create a single partition, following the prompts step by step
# Reference: https://blog.csdn.net/panfelix/article/details/39701011
$ fdisk /dev/vdb
# Format the new partition (typically /dev/vdb1) as ext4
$ mkfs.ext4 /dev/vdb1
# Create a directory to mount the freshly formatted partition on
$ mkdir /u01
# Mount it automatically at boot: edit /etc/fstab and add a line
$ vim /etc/fstab
/dev/vdb1 /u01 ext4 defaults,nodelalloc,noatime 0 2
# Verify that it takes effect
$ mount -a
$ mount -t ext4
/dev/vdb1 on /u01 type ext4 (rw,noatime,nodelalloc,data=ordered)
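TiKV requires the data disk to be mounted with `nodelalloc`, so it is worth checking the live mount options rather than eyeballing them. A small sketch: in real use read the line from /proc/mounts (`grep /u01 /proc/mounts`); a sample line stands in for it here.

```shell
# Sample mount line; on the TiKV machine, take it from /proc/mounts instead
mount_line='/dev/vdb1 /u01 ext4 rw,noatime,nodelalloc,data=ordered 0 0'
# Fail loudly if the required nodelalloc option is absent
case "$mount_line" in
  *nodelalloc*) echo "nodelalloc: OK" ;;
  *)            echo "nodelalloc: MISSING" ;;
esac
```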
See the official documentation: https://pingcap.com/docs-cn/op-guide/offline-ansible-deployment/
The control machine has no public IP and cannot reach the Internet, so use the download machine (172.18.56.156) to download the pip installation package and scp it to the control machine (172.18.56.155).
# On the download machine, download the pip package
$ wget https://download.pingcap.org/pip-rpms.el7.tar.gz
# scp it to the control machine
$ scp pip-rpms.el7.tar.gz root@172.18.56.155:/home/tidb
# On the control machine, install pip
$ tar -zxvf pip-rpms.el7.tar.gz
$ cd pip-rpms.el7
$ sh install_pip.sh
# Verify that pip installed successfully
$ pip -V
# The download machine also needs pip; install it the same way: extract, then run the install script
# On the download machine, download the Ansible 2.5 package
$ wget https://download.pingcap.org/ansible-2.5.0-pip.tar.gz
# scp it to the control machine
$ scp ansible-2.5.0-pip.tar.gz root@172.18.56.155:/home/tidb
# On the control machine, install Ansible 2.5
$ tar -xzvf ansible-2.5.0-pip.tar.gz
$ cd ansible-2.5.0-pip/
$ sh install_ansible.sh
# Check the ansible version
$ ansible --version
# The download machine also needs ansible; install it the same way: extract, then run the install script
Note: if installing ansible on the control machine reports missing dependencies, download the corresponding rpm packages on the download machine, scp them to the control machine, and install them with yum -y install xxx.rpm (CentOS rpm packages can be looked up on any rpm search site).
On the download machine, download tidb-ansible; here we use the 2.0 release:
$ git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git
On the download machine, run the local_prepare.yml playbook to download the TiDB binaries from the Internet:
$ cd tidb-ansible
$ ansible-playbook local_prepare.yml
scp the tidb-ansible directory from the download machine to the control machine; after this step the download machine is no longer needed:
$ scp -r tidb-ansible root@172.18.56.155:/home/tidb
On the control machine, configure the cluster topology and the install path:
$ cd tidb-ansible
$ vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.18.56.153
172.18.56.154
[tikv_servers]
172.18.56.150
172.18.56.151
172.18.56.152
[pd_servers]
172.18.56.153
172.18.56.154
172.18.56.155
[spark_master]
[spark_slaves]
## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.18.56.155
[grafana_servers]
172.18.56.155
# node_exporter and blackbox_exporter servers
[monitored_servers]
172.18.56.150
172.18.56.151
172.18.56.152
172.18.56.153
172.18.56.154
172.18.56.155
[alertmanager_servers]
## Binlog Part
[pump_servers:children]
tidb_servers
## Group variables
[pd_servers:vars]
# location_labels = ["zone","rack","host"]
## Global variables
[all:vars]
deploy_dir = /u01
## Connection
# ssh via normal user
ansible_user = tidb
cluster_name = test-cluster
tidb_version = latest
# process supervision, [systemd, supervise]
process_supervision = systemd
# timezone of deployment region
timezone = Asia/Shanghai
set_timezone = True
enable_firewalld = False
# check NTP service
enable_ntpd = True
set_hostname = False
## binlog trigger
enable_binlog = False
# zookeeper address of kafka cluster, example:
# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
zookeeper_addrs = ""
# store slow query log into a separate file
enable_slow_query_log = False
# enable TLS authentication in the TiDB cluster
enable_tls = False
# KV mode
deploy_without_tidb = False
# Optional: Set if you already have an alertmanager server.
# Format: alertmanager_host:alertmanager_port
alertmanager_target = ""
grafana_admin_user = "admin"
grafana_admin_password = "admin"
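Before running any playbook, it can help to confirm each inventory group lists the hosts you expect. A sketch, using a hypothetical `count_hosts` helper; it is demonstrated against a short sample file, but on the control machine you would point it at the real inventory.ini.

```shell
# Count the host entries (lines starting with a digit) under each [group] header
count_hosts() {
  awk '/^\[/ {group = $1} /^[0-9]/ {count[group]++} END {for (g in count) print g, count[g]}' "$1"
}

# Sample inventory matching part of the layout above
cat > /tmp/inventory.sample <<'EOF'
[tidb_servers]
172.18.56.153
172.18.56.154
[tikv_servers]
172.18.56.150
172.18.56.151
172.18.56.152
EOF

count_hosts /tmp/inventory.sample
```

Against the full inventory.ini above, tidb_servers should report 2 hosts, tikv_servers and pd_servers 3 each, and monitored_servers 6.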
After editing the configuration file, run the initialization playbook on the control machine:
$ ansible-playbook -i inventory.ini bootstrap.yml -k -K
Deploy the services from the control machine:
$ ansible-playbook -i inventory.ini deploy.yml -k -K
Start all services:
$ ansible-playbook -i inventory.ini start.yml -k
Stop all services:
$ ansible-playbook -i inventory.ini stop.yml
Grafana monitoring is available at http://172.18.56.155:3000 (log in with the grafana_admin_user/grafana_admin_password configured in inventory.ini, admin/admin above).
Note: since the control machine has no public IP, access it through the download machine.
Our TiDB servers run on 172.18.56.153 and 172.18.56.154, so connect through either IP; TiDB speaks the MySQL protocol and listens on port 4000:
$ mysql -u root -h 172.18.56.153 -P 4000