Offline TiDB-Ansible Deployment Problems: A Summary

A summary in draft form, covering the main points:

1. Bind hostnames for all nodes in /etc/hosts

######################

cat >/etc/hosts << EOF

127.0.0.1 tikv1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

 

###### TiDB #######

10.0.77.5 tikv1-500.com

10.0.77.6 tikv2-500.com

10.0.77.10 tikv3-500.com

10.0.77.11 tidb1-500.com

10.0.25.5 tidb2-500.com

10.0.25.6 tidb-cluster.monitor

192.168.41.22 pd1-500.com

192.168.41.27 pd2-500.com

192.168.41.13 pd3-500.com

 

###################

EOF
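Since every node needs the same /etc/hosts, the file can be pushed out from the control machine in one loop. This is a minimal sketch, assuming root SSH access to all nodes; the `push_hosts` helper and the `SCP` override variable are additions for illustration (the override only exists to allow a dry run):

```shell
#!/bin/sh
# Push the control machine's /etc/hosts to every node in the cluster.
# Set SCP="echo scp" to preview the commands instead of copying.
HOSTS="10.0.77.5 10.0.77.6 10.0.77.10 10.0.77.11 10.0.25.5 10.0.25.6 \
192.168.41.22 192.168.41.27 192.168.41.13"

push_hosts() {
    scp_cmd="${SCP:-scp}"
    for h in $HOSTS; do
        # one copy per node; requires root SSH access
        $scp_cmd /etc/hosts "root@$h:/etc/hosts"
    done
}
```

Call `push_hosts` once the hosts file above is final; run it with `SCP="echo scp"` first to preview what it will do.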

 

 

2. Parameter configuration

 

cat >> /etc/pam.d/login << EOF

session required /lib64/security/pam_limits.so

EOF

 

-- Time synchronization (write the server into ntp.conf before starting ntpd, otherwise the daemon needs a restart to pick it up)

yum install ntp ntpdate -y

cat >> /etc/ntp.conf << EOF

##

## ntp server ##

server 192.168.0.188 iburst

EOF

systemctl start ntpd.service && systemctl status ntpd.service

 

 

-- Create the tidb user

useradd -m -d /home/tidb tidb

passwd tidb

 

-- Passwordless sudo (the line below is what you would otherwise add via visudo)

cat >> /etc/sudoers << EOF

tidb ALL=(ALL) NOPASSWD: ALL

EOF

 

-- Create the deploy directory and grant ownership

mkdir -pv /data/tidb/deploy && chown -R tidb:tidb /data

 

 

-- SSH mutual trust, manual setup

####

On the control machine:

ssh-keygen -t rsa

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.25.6  # the control machine itself

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.77.5

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.77.6

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.77.10

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.77.11

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.25.5

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.41.22

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.41.27

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.41.13

 

Run the same steps on every other node:

#####################

 

Script for batch key distribution (requires expect):

-- yum install expect -y

 

[tidb@pd3-500 ~]$ cat ssh_auto.sh

#!/bin/bash

[ ! -f /home/tidb/.ssh/id_rsa.pub ] && ssh-keygen -t rsa -P '' -f /home/tidb/.ssh/id_rsa &>/dev/null # generate a key pair if none exists

while read line;do

ip=`echo $line | cut -d " " -f1` # IP address from the file

user_name=`echo $line | cut -d " " -f2` # user name from the file

pass_word=`echo $line | cut -d " " -f3` # password from the file

expect << EOF

spawn ssh-copy-id -i /home/tidb/.ssh/id_rsa.pub $user_name@$ip

expect {

"yes/no" { send "yes\n";exp_continue}

"password" { send "$pass_word\n"}

}

expect eof

EOF

done < /home/tidb/host_ip.txt # file listing the nodes

#---------------- The End ------------------------#
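The expected format of host_ip.txt is inferred from the cut calls above: one node per line, three fields separated by single spaces. The values below are hypothetical; this snippet just demonstrates what ssh_auto.sh extracts from each line:

```shell
#!/bin/sh
# host_ip.txt holds one "ip user password" triple per line.
# The same cut calls as in ssh_auto.sh:
line="10.0.77.5 tidb MyPassw0rd"
ip=$(echo "$line" | cut -d " " -f1)
user_name=$(echo "$line" | cut -d " " -f2)
pass_word=$(echo "$line" | cut -d " " -f3)
echo "$ip $user_name $pass_word"   # prints: 10.0.77.5 tidb MyPassw0rd
```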

 

scp host_ip.txt ssh_auto.sh 10.0.25.6:.

scp host_ip.txt ssh_auto.sh 10.0.77.5:.

scp host_ip.txt ssh_auto.sh 10.0.77.6:.

scp host_ip.txt ssh_auto.sh 10.0.77.10:.

scp host_ip.txt ssh_auto.sh 10.0.77.11:.

scp host_ip.txt ssh_auto.sh 10.0.25.5:.

scp host_ip.txt ssh_auto.sh 192.168.41.22:.

scp host_ip.txt ssh_auto.sh 192.168.41.27:.

scp host_ip.txt ssh_auto.sh 192.168.41.13:.

 

-- Verification

ssh xxxx date;date

sudo su root

-- Neither command should prompt for a password.
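The two manual checks above can be looped over all nodes. A sketch only; `check_trust` and the `SSH_CMD` override are hypothetical helpers, not part of tidb-ansible (the override exists so the function can be tested without a cluster):

```shell
#!/bin/sh
# For each node, passwordless SSH must return "tidb" and passwordless
# sudo must return "root"; anything else prints a FAIL line.
check_trust() {
    ssh_cmd="${SSH_CMD:-ssh -o BatchMode=yes}"
    rc=0
    for h in "$@"; do
        [ "$($ssh_cmd "tidb@$h" whoami 2>/dev/null)" = "tidb" ] ||
            { echo "FAIL ssh $h"; rc=1; }
        [ "$($ssh_cmd "tidb@$h" sudo -n whoami 2>/dev/null)" = "root" ] ||
            { echo "FAIL sudo $h"; rc=1; }
    done
    return $rc
}
```

Usage: `check_trust 10.0.77.5 10.0.77.6 10.0.77.10 ...` from the control machine; no output and exit status 0 means everything is in order.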

 

 

-- TiDB-Ansible can also set up the SSH trust and sudo rules automatically, at install time:

### See the official docs for configuring SSH mutual trust and sudo from the control machine:

https://pingcap.com/docs-cn/v3.0/how-to/deploy/orchestrated/ansible/#在中控机上配置部署机器-ssh-互信及-sudo-规则

 

###

[tidb@tidb-cluster tidb-ansible]$ ansible-playbook -i hosts.ini create_users.yml -uroot -k

SSH password:

-- Output:

PLAY [all]

**********************************************************************************************

 

TASK [create user]

**********************************************************************************************

ok: [10.0.77.5]

ok: [10.0.77.11]

ok: [10.0.77.6]

ok: [10.0.77.10]

ok: [10.0.25.5]

ok: [10.0.25.6]

ok: [192.168.41.22]

ok: [192.168.41.13]

ok: [192.168.41.27]

 

TASK [set authorized key]

*************************************************************************************************

ok: [10.0.77.5]

ok: [10.0.77.10]

ok: [10.0.77.6]

ok: [10.0.25.5]

ok: [10.0.77.11]

ok: [10.0.25.6]

ok: [192.168.41.27]

ok: [192.168.41.13]

ok: [192.168.41.22]

 

TASK [update sudoers file]

**********************************************************************************************

changed: [10.0.77.5]

changed: [10.0.77.10]

changed: [10.0.77.11]

changed: [10.0.25.5]

changed: [10.0.77.6]

changed: [10.0.25.6]

ok: [192.168.41.13]

ok: [192.168.41.27]

ok: [192.168.41.22]

 

PLAY RECAP

************************************************************************************************

10.0.25.5 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

10.0.25.6 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

10.0.77.10 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

10.0.77.11 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

10.0.77.5 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

10.0.77.6 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

192.168.41.13 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

192.168.41.22 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

192.168.41.27 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Congrats! All goes well. :-)

 

 

####

[tidb@tidb-cluster tidb-ansible]$ ls

ansible.cfg common_tasks downloads inventory.ini README.md scripts stop.yml

bootstrap.yml conf fact_files library requirements.txt start_drainer.yml templates

callback_plugins create_users.yml filter_plugins LICENSE resources start_spark.yml unsafe_cleanup_container.yml

clean_log_cron.yml deploy_drainer.yml graceful_stop.yml local_prepare.yml roles start.yml unsafe_cleanup_data.yml

cloud deploy_ntp.yml group_vars log rolling_update_monitor.yml stop_drainer.yml unsafe_cleanup.yml

collect_diagnosis.yml deploy.yml hosts.ini migrate_monitor.yml rolling_update.yml stop_spark.yml

[tidb@tidb-cluster tidb-ansible]$ cd downloads/ ## packages the control machine downloads automatically when online

[tidb@tidb-cluster downloads]$ ls

alertmanager-0.14.0.tar.gz grafana-4.6.3.tar.gz node_exporter-0.15.2.tar.gz spark-2.4.3-bin-hadoop2.7.tgz tispark-assembly-2.2.0.jar

blackbox_exporter-0.12.0.tar.gz grafana_collector-latest.tar.gz prometheus-2.2.1.tar.gz tidb-insight.tar.gz tispark-sample-data.tar.gz

fio-3.8.tar.gz kafka_exporter-1.1.0.tar.gz pushgateway-0.4.0.tar.gz tidb-v2.1.17.tar.gz

 

 

####

1. Verify SSH mutual trust:

 

If the following command returns tidb on every server, SSH mutual trust is configured correctly.

 

[tidb@tidb-cluster tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami'

10.0.25.5 | CHANGED | rc=0 >>

tidb

 

10.0.77.10 | CHANGED | rc=0 >>

tidb

 

10.0.77.6 | CHANGED | rc=0 >>

tidb

 

10.0.77.11 | CHANGED | rc=0 >>

tidb

 

10.0.77.5 | CHANGED | rc=0 >>

tidb

 

10.0.25.6 | CHANGED | rc=0 >>

tidb

 

192.168.41.13 | CHANGED | rc=0 >>

tidb

 

192.168.41.22 | CHANGED | rc=0 >>

tidb

 

192.168.41.27 | CHANGED | rc=0 >>

tidb

 

#######

 

If the following command returns root on every server, passwordless sudo for the tidb user is configured correctly.

 

ansible -i inventory.ini all -m shell -a 'whoami' -b

 

 

[tidb@tidb-cluster tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami' -b

10.0.25.5 | CHANGED | rc=0 >>

root

 

10.0.77.10 | CHANGED | rc=0 >>

root

 

10.0.77.6 | CHANGED | rc=0 >>

root

 

10.0.77.5 | CHANGED | rc=0 >>

root

 

10.0.77.11 | CHANGED | rc=0 >>

root

 

10.0.25.6 | CHANGED | rc=0 >>

root

 

192.168.41.27 | CHANGED | rc=0 >>

root

 

192.168.41.22 | CHANGED | rc=0 >>

root

 

192.168.41.13 | CHANGED | rc=0 >>

root

 

#####

 

2. Run the local_prepare.yml playbook to download the TiDB binaries to the control machine (requires internet access):

 

[tidb@tidb-cluster tidb-ansible]$ ansible-playbook local_prepare.yml

 

PLAY [do local preparation]

*******************************************************************************************

 

TASK [local : Stop if ansible version is too low, make sure that the Ansible version is Ansible 2.4.2 or later, otherwise a compatibility issue occurs.]

*******************

ok: [localhost] => {

"changed": false,

"msg": "All assertions passed"

}

 

TASK [local : create downloads and resources directories]

*******************************************************************************************

 

ok: [localhost] => (item=/data/tidb/tidb-ansible/downloads)

ok: [localhost] => (item=/data/tidb/tidb-ansible/resources)

ok: [localhost] => (item=/data/tidb/tidb-ansible/resources/bin)

 

TASK [local : create cert directory]

*******************************************************************************************

 

TASK [local : create packages.yml]

*********************************************************************************************

ok: [localhost]

 

TASK [local : create specific deployment method packages.yml]

*******************************************************************************************

ok: [localhost]

 

TASK [local : include_vars]

*************************************************************************************

ok: [localhost]

 

TASK [local : include_vars]

*****************************************************************************************

ok: [localhost]

 

TASK [local : detect outbound network]

***************************************************************************************

ok: [localhost]

 

TASK [local : set outbound network fact]

****************************************************************************

ok: [localhost]

 

TASK [local : fail]

*******************************************************************************************

 

TASK [local : detect GFW]

************************************************************************************************

ok: [localhost]

 

TASK [local : set GFW fact]

***************************************************************************************

ok: [localhost]

 

TASK [local : download tidb binary]

****************************************************************************************************

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-v2.1.17-linux-amd64.tar.gz', u'version': u'v2.1.17', u'name': u'tidb'})

 

TASK [local : download common binary]

***************************************************************************************

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/fio-3.8.tar.gz', u'checksum': u'sha256:15739abde7e74b59ac59df57f129b14fc5cd59e1e2eca2ce37b41f8c289c3d58', u'version': 3.8, u'name': u'fio'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/grafana_collector-latest-linux-amd64.tar.gz', u'version': u'latest', u'name': u'grafana_collector'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/kafka_exporter-1.1.0.linux-amd64.tar.gz', u'version': u'1.1.0', u'name': u'kafka_exporter'})

 

TASK [local : download diagnosis tools]

**************************************************************************************************

changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-insight-v0.2.5-1-g99b8fea.tar.gz', u'version': u'v0.2.5-1-g99b8fea', u'name': u'tidb-insight'})

 

TASK [local : download cfssl binary]

**************************************************************************************************

 

TASK [local : download cfssljson binary]

***********************************************************************************************

 

TASK [local : include_tasks]

***********************************************************************************************

included: /data/tidb/tidb-ansible/roles/local/tasks/binary_deployment.yml for localhost

 

TASK [local : download other binary]

********************************************************************************************

 

TASK [local : download other binary under gfw]

*******************************************************************************************

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/prometheus-2.2.1.linux-amd64.tar.gz', u'version': u'2.2.1', u'name': u'prometheus'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/alertmanager-0.14.0.linux-amd64.tar.gz', u'version': u'0.14.0', u'name': u'alertmanager'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/node_exporter-0.15.2.linux-amd64.tar.gz', u'version': u'0.15.2', u'name': u'node_exporter'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/pushgateway-0.4.0.linux-amd64.tar.gz', u'version': u'0.4.0', u'name': u'pushgateway'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/grafana-4.6.3.linux-x64.tar.gz', u'version': u'4.6.3', u'name': u'grafana'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/blackbox_exporter-0.12.0.linux-amd64.tar.gz', u'version': u'0.12.0', u'name': u'blackbox_exporter'})

 

TASK [local : download TiSpark packages]

**************************************************************************************

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/spark-2.4.3-bin-hadoop2.7.tgz', u'checksum': u'sha256:80a4c564ceff0d9aff82b7df610b1d34e777b45042e21e2d41f3e497bb1fa5d8', u'version': u'2.4.3', u'name': u'spark-2.4.3-bin-hadoop2.7.tgz'})

ok: [localhost] => (item={u'url': u'https://download.pingcap.org/tispark-assembly-2.2.0.jar', u'version': u'2.2.0', u'name': u'tispark-assembly-2.2.0.jar'})

ok: [localhost] => (item={u'url': u'http://download.pingcap.org/tispark-sample-data.tar.gz', u'version': u'latest', u'name': u'tispark-sample-data.tar.gz'})

 

TASK [local : unarchive third party binary]

****************************************************************************************************

changed: [localhost] => (item={u'url': u'https://github.com/prometheus/prometheus/releases/download/v2.2.1/prometheus-2.2.1.linux-amd64.tar.gz', u'version': u'2.2.1', u'name': u'prometheus'})

changed: [localhost] => (item={u'url': u'https://github.com/prometheus/alertmanager/releases/download/v0.14.0/alertmanager-0.14.0.linux-amd64.tar.gz', u'version': u'0.14.0', u'name': u'alertmanager'})

changed: [localhost] => (item={u'url': u'https://github.com/prometheus/node_exporter/releases/download/v0.15.2/node_exporter-0.15.2.linux-amd64.tar.gz', u'version': u'0.15.2', u'name': u'node_exporter'})

changed: [localhost] => (item={u'url': u'https://github.com/prometheus/blackbox_exporter/releases/download/v0.12.0/blackbox_exporter-0.12.0.linux-amd64.tar.gz', u'version': u'0.12.0', u'name': u'blackbox_exporter'})

changed: [localhost] => (item={u'url': u'https://github.com/prometheus/pushgateway/releases/download/v0.4.0/pushgateway-0.4.0.linux-amd64.tar.gz', u'version': u'0.4.0', u'name': u'pushgateway'})

changed: [localhost] => (item={u'url': u'https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.3.linux-x64.tar.gz', u'version': u'4.6.3', u'name': u'grafana'})

 

TASK [local : unarchive tispark-sample-data]

******************************************************************************************************

changed: [localhost]

 

TASK [local : cp monitoring binary]

**********************************************************************************************

changed: [localhost] => (item=alertmanager)

changed: [localhost] => (item=prometheus)

changed: [localhost] => (item=node_exporter)

changed: [localhost] => (item=pushgateway)

changed: [localhost] => (item=blackbox_exporter)

 

TASK [local : cp tispark]

**************************************************************************************************************

changed: [localhost]

 

TASK [local : cp tispark-sample-data]

***************************************************************************************************

changed: [localhost]

 

TASK [local : unarchive tidb binary]

******************************************************************************************************

changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-v2.1.17-linux-amd64.tar.gz', u'version': u'v2.1.17', u'name': u'tidb'})

 

TASK [local : unarchive common binary]

****************************************************************************************

changed: [localhost] => (item={u'url': u'http://download.pingcap.org/fio-3.8.tar.gz', u'checksum': u'sha256:15739abde7e74b59ac59df57f129b14fc5cd59e1e2eca2ce37b41f8c289c3d58', u'version': 3.8, u'name': u'fio'})

changed: [localhost] => (item={u'url': u'http://download.pingcap.org/grafana_collector-latest-linux-amd64.tar.gz', u'version': u'latest', u'name': u'grafana_collector'})

changed: [localhost] => (item={u'url': u'http://download.pingcap.org/kafka_exporter-1.1.0.linux-amd64.tar.gz', u'version': u'1.1.0', u'name': u'kafka_exporter'})

 

TASK [local : cp tidb binary]

****************************************************************************************

changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-v2.1.17-linux-amd64.tar.gz', u'version': u'v2.1.17', u'name': u'tidb'})

 

TASK [local : cp fio binary]

********************************************************************************

changed: [localhost] => (item=fio)

 

TASK [local : cp grafana_collector binary and fonts]

*******************************************************************************

changed: [localhost]

 

TASK [local : cp kafka_exporter binary]

*******************************************************************************

changed: [localhost] => (item=kafka_exporter)

 

TASK [local : cp daemontools binary]

**********************************************************************************************

 

TASK [local : cp tidb-insight tarball]

***********************************************************************************

changed: [localhost]

 

TASK [local : clean up download dir]

************************************************************************************************

changed: [localhost]

 

PLAY RECAP

******************************************************************************************************************

localhost : ok=29 changed=14 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0

 

Congrats! All goes well. :-)

 

##########

-- One more required step before finally installing the TiDB software: initialize the system environment and tune kernel parameters:

$ ansible-playbook bootstrap.yml

 

........

........

 

Error 1:

 

TASK [check_system_static : Preflight check - Check the currently active governor] *****************************************************************************************

changed: [10.0.25.5]

changed: [10.0.77.10]

changed: [10.0.77.6]

changed: [10.0.77.5]

changed: [10.0.77.11]

changed: [10.0.25.6]

changed: [192.168.41.22]

changed: [192.168.41.13]

changed: [192.168.41.27]

 

TASK [check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode]

*************************************************************

fatal: [10.0.77.5]: FAILED! => {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

fatal: [10.0.77.6]: FAILED! => {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

fatal: [10.0.77.10]: FAILED! => {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

fatal: [10.0.77.11]: FAILED! => {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

fatal: [10.0.25.5]: FAILED! => {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

fatal: [10.0.25.6]: FAILED! => {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

 

NO MORE HOSTS LEFT ****************************************************************************************************

 

PLAY RECAP *************************************************************************************************************

10.0.25.5 : ok=30 changed=10 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

10.0.25.6 : ok=30 changed=10 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

10.0.77.10 : ok=30 changed=7 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

10.0.77.11 : ok=30 changed=7 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

10.0.77.5 : ok=30 changed=7 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

10.0.77.6 : ok=30 changed=7 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

192.168.41.13 : ok=30 changed=7 unreachable=0 failed=0 skipped=16 rescued=0 ignored=0

192.168.41.22 : ok=30 changed=7 unreachable=0 failed=0 skipped=16 rescued=0 ignored=0

192.168.41.27 : ok=30 changed=7 unreachable=0 failed=0 skipped=16 rescued=0 ignored=0

localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0

 

 

ERROR MESSAGE SUMMARY **************************************************************************************************

[10.0.77.5]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode; message: {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

 

[10.0.77.6]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode; message: {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

 

[10.0.77.10]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode; message: {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

 

[10.0.77.11]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode; message: {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

 

[10.0.25.5]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode; message: {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

 

[10.0.25.6]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode; message: {"changed": false, "msg": "To achieve maximum performance, it is recommended to set The CPU frequency governor to performance mode, see https://github.com/pingcap/docs/blob/master/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine"}

 

Ask for help:

Contact us: [email protected]

It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)

 

Error 1, fix:

#### Fix ####

In this case the current governor was powersave; set it to performance with:

cpupower frequency-set --governor performance

You can also apply it to all deployment target machines in one batch:

 

ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -u tidb -b

 

[tidb@tidb-cluster tidb-ansible]$ ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -u tidb -b

......

......

 

Cause:

-- 192.168.41.27 / 192.168.41.22 / 192.168.41.13 do not support cpufreq governors:

[root@pd2-500 ~]# cpupower frequency-info --governors

analyzing CPU 0:

available cpufreq governors: Not Available

[root@pd2-500 ~]#
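On nodes like pd2-500 above, cpupower has no governor to set, so a batch run will error out on them. A small guard can apply the governor only where the kernel actually exposes one; `set_performance` is a hypothetical helper (the sysfs path is the standard cpufreq location):

```shell
#!/bin/sh
# Set the performance governor only when the cpufreq sysfs node exists;
# nodes with "available cpufreq governors: Not Available" are skipped.
set_performance() {
    gov="${1:-/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor}"
    if [ -f "$gov" ]; then
        cpupower frequency-set --governor performance
    else
        echo "cpufreq not available, skipping"
    fi
}
```

Run it per node, or push it to all hosts with the ansible shell module as in the batch command above.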

---------------------------------------- the end --------------------------------------------------

 

 

#####

Error 2:

####

 

TASK [machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement] *****************************************************************

fatal: [10.0.77.5]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 218 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

fatal: [10.0.77.6]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 199 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

fatal: [10.0.77.10]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 222 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

 

PLAY RECAP *************************************************************************************************************

10.0.25.5 : ok=32 changed=7 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0

10.0.25.6 : ok=32 changed=7 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0

10.0.77.10 : ok=41 changed=14 unreachable=0 failed=1 skipped=35 rescued=0 ignored=0

10.0.77.11 : ok=32 changed=7 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0

10.0.77.5 : ok=44 changed=14 unreachable=0 failed=1 skipped=33 rescued=0 ignored=0

10.0.77.6 : ok=43 changed=14 unreachable=0 failed=1 skipped=33 rescued=0 ignored=0

192.168.41.13 : ok=32 changed=7 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0

192.168.41.22 : ok=32 changed=7 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0

192.168.41.27 : ok=32 changed=7 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0

localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0

 

 

ERROR MESSAGE SUMMARY ************************************************************************************************************

[10.0.77.5]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 218 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

 

[10.0.77.6]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 199 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

 

[10.0.77.10]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 222 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

 

Ask for help:

Contact us: [email protected]

It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)

 

 

----------------

Error 2, fix:

This deployment does not use SSD disks, hence the error.

In the corresponding playbook files, comment out the checks or lower the detection thresholds:

In the numbered listings below, lines prefixed with "#" are the ones that were commented out.

 

(1) Comment out the disk IOPS check:

[tidb@tidb-cluster tidb-ansible]$ pwd

/data/tidb/tidb-ansible

[tidb@tidb-cluster tidb-ansible]$ vim roles/machine_benchmark/tasks/fio_randread.yml

 

1 ---

2

3 #- name: fio randread benchmark on tikv_data_dir disk

4 # shell: "cd {{ fio_deploy_dir }} && ./fio -ioengine=psync -bs=32k -fdatasync=1 -thread -rw=randread -size={{ benchmark_size }} -filename=fio_randread_test.txt -name='fio randread test' -iodepth=4 -runtime=60 -numjobs=4 -group_reporting --output-format=json --output=fio_randread_result.json"

5 # register: fio_randread

6

7 - name: clean fio randread benchmark temporary file

8 file:

9 path: "{{ fio_deploy_dir }}/fio_randread_test.txt"

10 state: absent

11

12 #- name: get fio randread iops

13 # shell: "python parse_fio_output.py --target='fio_randread_result.json' --read-iops"

14 # register: disk_randread_iops

15 # args:

16 # chdir: "{{ fio_deploy_dir }}/"

17

18 #- name: get fio randread summary

19 # shell: "python parse_fio_output.py --target='fio_randread_result.json' --summary"

20 # register: disk_randread_smmary

21 # args:

22 # chdir: "{{ fio_deploy_dir }}/"

23

24 - name: fio randread benchmark command

25 debug:

26 msg: "fio randread benchmark command: {{ fio_randread.cmd }}."

27 run_once: true

28

29 - name: fio randread benchmark summary

30 debug:

31 msg: "fio randread benchmark summary: {{ disk_randread_smmary.stdout }}."

32

33 #- name: Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement

34 # fail:

35 # msg: 'fio: randread iops of tikv_data_dir disk is too low: {{ disk_randread_iops.stdout }} < {{ min_ssd_randread_iops }}, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues.'

36 # when: disk_randread_iops.stdout|int < min_ssd_randread_iops|int

 

-------------------------------

(2) Comment out the disk latency check as well (again, the "#"-prefixed numbered lines):

 

[tidb@tidb-cluster tidb-ansible]$ vim roles/machine_benchmark/tasks/fio_randread_write_latency.yml

1 ---

2

3 - name: fio mixed randread and sequential write benchmark for latency on tikv_data_dir disk

4 shell: "cd {{ fio_deploy_dir }} && ./fio -ioengine=psync -bs=32k -fdatasync=1 -thread -rw=randrw -percentage_random=100,0 -size={{ benchmark_size }} -filename=fio_randread_write_latency_test.txt -name='fio mixed randread and sequential write test' -iodepth=1 -runtime=60 -numjobs=1 -group_reporting --output-format=json --output=fio_randread_write_latency_test.json"

5 register: fio_randread_write_latency

6

7 - name: clean fio mixed randread and sequential write benchmark for latency temporary file

8 file:

9 path: "{{ fio_deploy_dir }}/fio_randread_write_latency_test.txt"

10 state: absent

11

12 - name: get fio mixed test randread latency

13 shell: "python parse_fio_output.py --target='fio_randread_write_latency_test.json' --read-lat"

14 register: disk_mix_randread_lat

15 args:

16 chdir: "{{ fio_deploy_dir }}/"

17

18 - name: get fio mixed test write latency

19 shell: "python parse_fio_output.py --target='fio_randread_write_latency_test.json' --write-lat"

20 register: disk_mix_write_lat

21 args:

22 chdir: "{{ fio_deploy_dir }}/"

23

24 - name: get fio mixed randread and sequential write for latency summary

25 shell: "python parse_fio_output.py --target='fio_randread_write_latency_test.json' --summary"

26 register: disk_mix_randread_write_latency_smmary

27 args:

28 chdir: "{{ fio_deploy_dir }}/"

29

30 - name: fio mixed randread and sequential write benchmark for latency command

31 debug:

32 msg: "fio mixed randread and sequential write benchmark for latency command: {{ fio_randread_write_latency.cmd }}."

33 run_once: true

34

35 - name: fio mixed randread and sequential write benchmark for latency summary

36 debug:

37 msg: "fio mixed randread and sequential write benchmark summary: {{ disk_mix_randread_write_latency_smmary.stdout }}."

38

39 #- name: Preflight check - Does fio mixed randread and sequential write latency of tikv_data_dir disk meet requirement - randread

40 # fail:

41 # msg: 'fio mixed randread and sequential write test: randread latency of tikv_data_dir disk is too low: {{ disk_mix_randread_lat.stdout }} ns > {{ max_ssd_mix_randread_lat }} ns, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues.'

42 # when: disk_mix_randread_lat.stdout|int > max_ssd_mix_randread_lat|int

43

44 #- name: Preflight check - Does fio mixed randread and sequential write latency of tikv_data_dir disk meet requirement - sequential write

45 # fail:

46 # msg: 'fio mixed randread and sequential write test: sequential write latency of tikv_data_dir disk is too low: {{ disk_mix_write_lat.stdout }} ns > {{ max_ssd_mix_write_lat }} ns, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues.'

47 # when: disk_mix_write_lat.stdout|int > max_ssd_mix_write_lat|int

 

--------------------------

 

(3): The same disk-speed check, disabled in bootstrap.yml

[tidb@tidb-cluster tidb-ansible]$ vim bootstrap.yml

-- The role is on line 42; comment out that one line.

42 # - { role: machine_benchmark, when: not dev_mode|default(false) }
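
An alternative that avoids editing bootstrap.yml at all: as the line above shows, the machine_benchmark role is already guarded by `when: not dev_mode|default(false)`, so passing dev_mode on the command line skips it for that run (assuming your tidb-ansible version carries this guard).

```shell
# Skip the machine_benchmark role for one bootstrap run instead of
# commenting it out; run this on the control machine with ansible installed.
cmd='ansible-playbook bootstrap.yml --extra-vars "dev_mode=True"'
echo "$cmd"
```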

 

(4): Comment out the static config check

[tidb@tidb-cluster tidb-ansible]$ vim deploy.yml

23 # - check_config_static

[tidb@tidb-cluster tidb-ansible]$ vim bootstrap.yml

21 #- check_config_static

[tidb@tidb-cluster tidb-ansible]$ vim start.yml

23 #- check_config_static

 

[tidb@tidb-cluster tidb-ansible]$ vim stop.yml

23 # - check_config_static
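
To make sure no playbook is missed when disabling the check across the four files above, the remaining uncommented references can be listed first (hypothetical helper; run it from the tidb-ansible directory).

```shell
# Print file:line for every still-enabled "- check_config_static" role line
# in the given playbooks; already-commented lines are skipped.
find_check_config() {
  grep -Hn '^[[:space:]]*-[[:space:]]*check_config_static' "$@"
}

# Usage, from the tidb-ansible directory:
# find_check_config deploy.yml bootstrap.yml start.yml stop.yml
```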

 

#####################

 

(5): Warnings

TASK [bootstrap : group hosts by distribution] ******************************************************************************************************

[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

-- Safe to ignore.

-------------------------------

Cause of the second error: this deployment uses SAS disks instead of SSDs, so the disk benchmarks fall below the SSD thresholds.

-------------------- The End -------------------------------

 

#################

Run the check again, step 3: initialize the system environment and adjust kernel parameters

 

--------

Excerpt of the full check log:

[tidb@tidb-cluster tidb-ansible]$ ansible-playbook bootstrap.yml

 

.....

......

Congrats! All goes well. :-) # expected last line of normal output

 

########

 

4: Now the TiDB software can be installed. (At this point /data/tidb/deploy/ is empty on every node.)

 

[tidb@tidb-cluster tidb-ansible]$ ansible-playbook deploy.yml

----

Deployment log:

 

Afterwards /data/tidb/deploy contains the deployed files on all nodes.

 

######################################

 

5. Start the TiDB cluster

ansible-playbook start.yml

 

-------

[tidb@tidb-cluster tidb-ansible]$ ansible-playbook start

start_drainer.yml start_spark.yml start.yml

-------

 

Startup log:

###################

 

[tidb@tidb-cluster tidb-ansible]$ ansible-playbook start.yml

 

 

Directory contents:

[tidb@tidb-cluster tidb-ansible]$ pwd

/data/tidb/tidb-ansible

 

[tidb@tidb-cluster tidb-ansible]$ cat hosts.ini

[servers]

10.0.77.5

10.0.77.6

10.0.77.10

10.0.77.11

10.0.25.5

10.0.25.6

192.168.41.22

192.168.41.27

192.168.41.13

 

[all:vars]

username = tidb

ntp_server = pool.ntp.org

 

######

 

[tidb@tidb-cluster tidb-ansible]$ cat inventory.ini

## TiDB Cluster Part

[tidb_servers]

10.0.77.11

10.0.25.5

 

[tikv_servers]

10.0.77.5

10.0.77.6

10.0.77.10

 

[pd_servers]

192.168.41.22

192.168.41.27

192.168.41.13

 

[spark_master]

 

[spark_slaves]

 

[lightning_server]

 

[importer_server]

 

## Monitoring Part

# prometheus and pushgateway servers

[monitoring_servers]

10.0.25.6

 

[grafana_servers]

10.0.25.6

 

# node_exporter and blackbox_exporter servers

[monitored_servers]

10.0.77.5

10.0.77.6

10.0.77.10

10.0.77.11

10.0.25.5

10.0.25.6

192.168.41.22

192.168.41.27

192.168.41.13

 

 

[alertmanager_servers]

10.0.25.6

 

[kafka_exporter_servers]

 

## Binlog Part

[pump_servers]

 

[drainer_servers]

 

## Group variables

[pd_servers:vars]

# location_labels = ["zone","rack","host"]

 

## Global variables

[all:vars]

#deploy_dir = /home/tidb/deploy

deploy_dir = /data/tidb/deploy

 

## Connection

# ssh via normal user

ansible_user = tidb

 

cluster_name = test-cluster

 

tidb_version = v2.1.17

 

# process supervision, [systemd, supervise]

process_supervision = systemd

 

timezone = Asia/Shanghai

 

enable_firewalld = False

# check NTP service

enable_ntpd = True

set_hostname = False

 

## binlog trigger

enable_binlog = False

 

# kafka cluster address for monitoring, example:

# kafka_addrs = "192.168.0.11:9092,192.168.0.12:9092,192.168.0.13:9092"

kafka_addrs = ""

 

# zookeeper address of kafka cluster for monitoring, example:

# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"

zookeeper_addrs = ""

 

# enable TLS authentication in the TiDB cluster

enable_tls = False

 

# KV mode

deploy_without_tidb = False

 

# Optional: Set if you already have a alertmanager server.

# Format: alertmanager_host:alertmanager_port

alertmanager_target = ""

 

grafana_admin_user = "admin"

grafana_admin_password = "admin"

 

 

### Collect diagnosis

collect_log_recent_hours = 2

 

enable_bandwidth_limit = True

# default: 10Mb/s, unit: Kbit/s

collect_bandwidth_limit = 10000
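
Before deploying, the inventory topology can be spot-checked without ansible by counting hosts per group (a pure-shell sketch; for the inventory above, expect 2 tidb, 3 tikv, and 3 pd hosts).

```shell
# Print the hosts listed under [group] in an ini-style inventory file.
# Skips blank and "#" comment lines, and stops at the next [section]
# header (including [group:vars] sections).
hosts_in_group() {
  awk -v g="[$2]" '$0==g{f=1;next} /^\[/{f=0} f && NF && $0 !~ /^#/' "$1"
}

# Usage: hosts_in_group inventory.ini tikv_servers
```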

 

 

###################

Background processes on each node:

 

1): Control machine: 10.0.25.6 tidb-cluster.monitor

 

[tidb@tidb-cluster ~]$ ps -ef |grep tidb

tidb 8671 1 0 00:32 ? 00:00:02 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 8672 8671 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 8673 8672 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 9270 1 0 00:32 ? 00:00:02 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 9271 9270 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 9272 9271 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 9759 1 0 00:33 ? 00:00:00 bin/alertmanager --config.file=conf/alertmanager.yml --storage.path=/data/tidb/deploy/data.alertmanager --data.retention=120h --log.level=info --web.listen-address=:9093 --mesh.listen-address=:6783

tidb 9760 9759 0 00:33 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_alertmanager.sh

tidb 9761 9760 0 00:33 ? 00:00:00 tee -i -a /data/tidb/deploy/log/alertmanager.log

tidb 9996 1 2 00:33 ? 00:00:05 bin/pushgateway --log.level=info --web.listen-address=:9091

tidb 9997 9996 0 00:33 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_pushgateway.sh

tidb 9998 9997 0 00:33 ? 00:00:00 tee -i -a /data/tidb/deploy/log/pushgateway.log

tidb 10329 1 3 00:33 ? 00:00:09 bin/prometheus --config.file=/data/tidb/deploy/conf/prometheus.yml --web.listen-address=:9090 --web.external-url=http://10.0.25.6:9090/ --web.enable-admin-api --log.level=info --storage.tsdb.path=/data/tidb/deploy/prometheus2.0.0.data.metrics --storage.tsdb.retention=30d

tidb 10330 10329 0 00:33 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_prometheus.sh

tidb 10331 10330 0 00:33 ? 00:00:00 tee -i -a /data/tidb/deploy/log/prometheus.log

tidb 11069 1 1 00:34 ? 00:00:02 opt/grafana/bin/grafana-server --homepath=/data/tidb/deploy/opt/grafana --config=/data/tidb/deploy/opt/grafana/conf/grafana.ini

tidb 11430 1 0 00:34 ? 00:00:00 bin/grafana_collector --ip=10.0.25.6:3000 --port=:8686 --config=conf/grafana_collector.toml --font-dir=/data/tidb/deploy/conf/fonts/ --log-file=/data/tidb/deploy/log/grafana_collector.log --log-level=info

root 12048 13384 0 00:36 pts/2 00:00:00 su - tidb

tidb 12049 12048 0 00:36 pts/2 00:00:00 -bash

tidb 12082 12049 0 00:37 pts/2 00:00:00 ps -ef

tidb 12083 12049 0 00:37 pts/2 00:00:00 grep --color=auto tidb

root 28087 28066 0 Oct16 pts/4 00:00:00 su - tidb

tidb 28088 28087 0 Oct16 pts/4 00:00:00 -bash

 

 

2): TiDB servers:

 

[tidb@tidb2-500 ~]$ hostname

tidb2-500.com

10.0.25.5 tidb2-500.com

 

[tidb@tidb2-500 ~]$ ps -ef |grep tidb

tidb 3955 1 0 00:32 ? 00:00:02 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 3956 3955 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 3957 3956 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 4230 1 0 00:32 ? 00:00:02 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 4231 4230 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 4232 4231 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 4511 1 1 00:33 ? 00:00:05 bin/tidb-server -P 4000 --status=10080 --advertise-address=10.0.25.5 --path=192.168.41.22:2379,192.168.41.27:2379,192.168.41.13:2379 --config=conf/tidb.toml --log-slow-query=/data/tidb/deploy/log/tidb_slow_query.log --log-file=/data/tidb/deploy/log/tidb.log

 

####

[tidb@tidb1-500 ~]$ hostname

tidb1-500.com

[tidb@tidb1-500 ~]$ ps -ef |grep tidb

tidb 28512 1 0 12:32 ? 00:00:00 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 28513 28512 0 12:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 28514 28513 0 12:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 28772 1 0 12:32 ? 00:00:00 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 28773 28772 0 12:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 28774 28773 0 12:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 29054 1 1 12:33 ? 00:00:01 bin/tidb-server -P 4000 --status=10080 --advertise-address=10.0.77.11 --path=192.168.41.22:2379,192.168.41.27:2379,192.168.41.13:2379 --config=conf/tidb.toml --log-slow-query=/data/tidb/deploy/log/tidb_slow_query.log --log-file=/data/tidb/deploy/log/tidb.log

 

 

3): PD servers:

###################

[tidb@pd1-500 ~]$ hostname

pd1-500.com

192.168.41.22 pd1-500.com

 

## pd1 ##

[tidb@pd1-500 ~]$ ps -ef |grep tidb

tidb 31143 1 2 12:32 ? 00:00:03 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 31144 31143 0 12:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 31145 31144 0 12:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 31438 1 4 12:33 ? 00:00:06 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 31439 31438 0 12:33 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 31440 31439 0 12:33 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 31714 1 46 12:33 ? 00:00:53 bin/pd-server --name=pd_pd1-500 --client-urls=http://192.168.41.22:2379 --advertise-client-urls=http://192.168.41.22:2379 --peer-urls=http://192.168.41.22:2380 --advertise-peer-urls=http://192.168.41.22:2380 --data-dir=/data/tidb/deploy/data.pd --initial-cluster=pd_pd1-500=http://192.168.41.22:2380,pd_pd2-500=http://192.168.41.27:2380,pd_pd3-500=http://192.168.41.13:2380 --config=conf/pd.toml --log-file=/data/tidb/deploy/log/pd.log

 

#####

 

### pd2 ###

[tidb@pd2-500 deploy]$ ps -ef |grep tidb

tidb 30111 1 2 12:32 ? 00:00:03 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 30112 30111 0 12:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 30113 30112 0 12:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 30403 1 4 12:33 ? 00:00:05 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 30404 30403 0 12:33 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 30405 30404 0 12:33 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 30685 1 22 12:33 ? 00:00:22 bin/pd-server --name=pd_pd2-500 --client-urls=http://192.168.41.27:2379 --advertise-client-urls=http://192.168.41.27:2379 --peer-urls=http://192.168.41.27:2380 --advertise-peer-urls=http://192.168.41.27:2380 --data-dir=/data/tidb/deploy/data.pd --initial-cluster=pd_pd1-500=http://192.168.41.22:2380,pd_pd2-500=http://192.168.41.27:2380,pd_pd3-500=http://192.168.41.13:2380 --config=conf/pd.toml --log-file=/data/tidb/deploy/log/pd.log

 

### pd3 ###

 

[tidb@pd3-500 ~]$ ps -ef |grep tidb

tidb 31120 1 1 12:32 ? 00:00:03 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 31121 31120 0 12:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 31122 31121 0 12:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 31413 1 4 12:33 ? 00:00:06 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 31414 31413 0 12:33 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 31415 31414 0 12:33 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 31690 1 21 12:33 ? 00:00:27 bin/pd-server --name=pd_pd3-500 --client-urls=http://192.168.41.13:2379 --advertise-client-urls=http://192.168.41.13:2379 --peer-urls=http://192.168.41.13:2380 --advertise-peer-urls=http://192.168.41.13:2380 --data-dir=/data/tidb/deploy/data.pd --initial-cluster=pd_pd1-500=http://192.168.41.22:2380,pd_pd2-500=http://192.168.41.27:2380,pd_pd3-500=http://192.168.41.13:2380 --config=conf/pd.toml --log-file=/data/tidb/deploy/log/pd.log

 

4): TiKV servers:

[tidb@tikv2-500 ~]$ cat /etc/hosts

127.0.0.1 tikv1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

###### TiDB #######

10.0.77.5 tikv1-500.com

10.0.77.6 tikv2-500.com

10.0.77.10 tikv3-500.com

 

 

#### tikv2 : ###

[tidb@tikv2-500 ~]$ ps -ef |grep tidb

tidb 33026 1 0 00:32 ? 00:00:01 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 33027 33026 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 33028 33027 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 33286 1 0 00:32 ? 00:00:01 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 33287 33286 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 33288 33287 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 33563 1 1 00:33 ? 00:00:01 bin/tikv-server --addr 0.0.0.0:20160 --advertise-addr 10.0.77.6:20160 --pd 192.168.41.22:2379,192.168.41.27:2379,192.168.41.13:2379 --data-dir /data/tidb/deploy/data --config conf/tikv.toml --log-file /data/tidb/deploy/log/tikv.log

 

#### tikv1: ###

[tidb@tikv1-500 ~]$ ps -ef |grep tidb

tidb 30279 1 0 00:32 ? 00:00:00 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 30280 30279 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 30281 30280 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 30540 1 0 00:32 ? 00:00:01 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 30541 30540 0 00:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 30542 30541 0 00:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 30832 1 0 00:33 ? 00:00:01 bin/tikv-server --addr 0.0.0.0:20160 --advertise-addr 10.0.77.5:20160 --pd 192.168.41.22:2379,192.168.41.27:2379,192.168.41.13:2379 --data-dir /data/tidb/deploy/data --config conf/tikv.toml --log-file /data/tidb/deploy/log/tikv.log

 

 

#### tikv3: ###

[tidb@tikv3-500 ~]$ ps -ef |grep tidb

tidb 31070 1 0 12:32 ? 00:00:01 bin/node_exporter --web.listen-address=:9100 --log.level=info

tidb 31071 31070 0 12:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_node_exporter.sh

tidb 31072 31071 0 12:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/node_exporter.log

tidb 31334 1 0 12:32 ? 00:00:01 bin/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml

tidb 31335 31334 0 12:32 ? 00:00:00 /bin/bash /data/tidb/deploy/scripts/run_blackbox_exporter.sh

tidb 31336 31335 0 12:32 ? 00:00:00 tee -i -a /data/tidb/deploy/log/blackbox_exporter.log

tidb 31627 1 0 12:33 ? 00:00:01 bin/tikv-server --addr 0.0.0.0:20160 --advertise-addr 10.0.77.10:20160 --pd 192.168.41.22:2379,192.168.41.27:2379,192.168.41.13:2379 --data-dir /data/tidb/deploy/data --config conf/tikv.toml --log-file /data/tidb/deploy/log/tikv.log

 

 

#### Accounts and passwords:

10.0.25.6 tidb tidb1234

10.0.77.5 tidb tidb1234

10.0.77.6 tidb tidb1234

10.0.77.10 tidb tidb1234

10.0.77.11 tidb tidb1234

10.0.25.5 tidb tidb1234

192.168.41.22 tidb tidb1234

192.168.41.27 tidb tidb1234

192.168.41.13 tidb tidb1234

 

 

### Connecting through the MySQL client

 

Log in to tidb1:

[mysql@tidb09 ~]$ /usr/local/mysql-5.7.25/bin/mysql -uroot -h 10.0.77.11 -P4000

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 200

Server version: 5.7.25-TiDB-v2.1.17 MySQL Community Server (Apache License 2.0)

 

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

 

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

 

mysql> show processlist;

+------+------+---------------+------+---------+------+-------+------------------+------+

| Id | User | Host | db | Command | Time | State | Info | Mem |

+------+------+---------------+------+---------+------+-------+------------------+------+

| 200 | root | 192.168.41.24 | NULL | Query | 0 | 2 | show processlist | 0 |

+------+------+---------------+------+---------+------+-------+------------------+------+

1 row in set (0.00 sec)

 

mysql>

 

 

Log in to tidb2:

[mysql@tidb09 ~]$ /usr/local/mysql-5.7.25/bin/mysql -uroot -h 10.0.25.5 -P4000

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 208

Server version: 5.7.25-TiDB-v2.1.17 MySQL Community Server (Apache License 2.0)

 

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

 

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

 

mysql> show processlist;

+------+------+---------------+------+---------+------+-------+------------------+------+

| Id | User | Host | db | Command | Time | State | Info | Mem |

+------+------+---------------+------+---------+------+-------+------------------+------+

| 208 | root | 192.168.41.24 | NULL | Query | 0 | 2 | show processlist | 0 |

+------+------+---------------+------+---------+------+-------+------------------+------+

1 row in set (0.00 sec)
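
Both tidb-servers answer on port 4000 with the MySQL protocol, so the same check can be scripted across all frontends (a sketch; it assumes a mysql client in PATH and the password-less root account of a fresh deployment, and the helper name is my own).

```shell
# Build the smoke-test command for one tidb-server frontend.
tidb_check_cmd() {
  printf 'mysql -u root -h %s -P 4000 -e "select tidb_version()"' "$1"
}

# Usage against both frontends above:
# for h in 10.0.77.11 10.0.25.5; do eval "$(tidb_check_cmd "$h")"; done
```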

 

 

 

 
