Manual Installation of a TiDB Cluster

TiDB is an open-source distributed database developed by PingCAP, inspired by Google's Spanner and F1 papers, and it is currently the distributed database on the market closest to OceanBase. Colleagues and customers often ask how TiDB differs from OceanBase, so this post sets up a TiDB cluster, which will later be compared against an OceanBase cluster for performance.

Due to environment restrictions, SSH access between the machines could not be opened up, so the automated installation method recommended in the official documentation could not be used.

Disclaimer: owing to time constraints, this installation draws on a community article; see the original at TiDB集群手动安装: https://www.cnblogs.com/vansky/p/9328375.html?spm=a2c4e.11155515.0.0.28c76b21Niwtbe

I. Installation Preparation

  1. Machine planning

| IP | Role | Description |
| --- | --- | --- |
| xxx.xxx.242.23 | PD Server | Cluster management module |
| xxx.xxx.242.26 | PD Server | Cluster management module |
| xxx.xxx.242.27 | PD Server | Cluster management module |
| xxx.xxx.243.68 | TiKV Server | Data storage node |
| xxx.xxx.82.150 | TiKV Server | Data storage node |
| xxx.xxx.82.169 | TiKV Server | Data storage node |
| xxx.xxx.82.173 | TiDB Server | SQL request routing and processing; compute node |

  2. OS preparation
    The OS environment is RedHat 7.
    A few system settings need to be changed:

sudo vi /etc/sysctl.conf
fs.file-max = 1000000

sudo vi /etc/security/limits.conf
root soft nofile 1000000
root hard nofile 1000000
admin soft nofile 1000000
admin hard nofile 1000000
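
These settings raise the system-wide and per-user open-file limits. The sysctl change can be applied without a reboot; a minimal way to apply and verify it (the expected value comes from the settings above):

sudo sysctl -p                  # reload /etc/sysctl.conf
cat /proc/sys/fs/file-max       # should print 1000000
ulimit -n                       # run in a fresh login shell to confirm the nofile limit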

II. TiPD Installation

  1. Install the software

This installation runs as the admin user, which needs sudo privileges.

tar zxvf tidb-ansible.tar.gz
cd /home/admin/tidb-ansible/downloads
tar zxvf tidb-v2.0.7.tar.gz
sudo mv tidb-v2.0.7-linux-amd64 /usr/local/
sudo ln -s /usr/local/tidb-v2.0.7-linux-amd64 /usr/local/tidb
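
To confirm the binaries are in place, they can be asked for their versions (a quick sanity check, assuming the standard -V flag in this release):

/usr/local/tidb/bin/pd-server -V
/usr/local/tidb/bin/tikv-server -V
/usr/local/tidb/bin/tidb-server -V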

  2. Prepare the data directories

sudo mkdir -p /usr/local/tidb/conf/

sudo mkdir -p /data/1/tidb/pd/4205
sudo mkdir -p /data/log1/tidb/pd
sudo chown -R admin.admin /data/1/tidb
sudo chown -R admin.admin /data/log1/tidb

  3. Prepare the configuration file

vi /usr/local/tidb/conf/tipd_4205.conf

client-urls="http://xxx.xxx.242.23:4205"
name="pd1"
data-dir="/data/1/tidb/pd/4205/"
peer-urls="http://xxx.xxx.242.23:4206"
initial-cluster="pd1=http://xxx.xxx.242.23:4206,pd2=http://xxx.xxx.242.26:4206,pd3=http://xxx.xxx.242.27:4206"
log-file="/data/log1/tidb/pd/4205_run.log"
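
The same file is prepared on the other two PD machines, changing only name and the local URLs; initial-cluster stays identical on all three nodes. For example, on xxx.xxx.242.26 the file would look like this (a sketch following the same pattern):

client-urls="http://xxx.xxx.242.26:4205"
name="pd2"
data-dir="/data/1/tidb/pd/4205/"
peer-urls="http://xxx.xxx.242.26:4206"
initial-cluster="pd1=http://xxx.xxx.242.23:4206,pd2=http://xxx.xxx.242.26:4206,pd3=http://xxx.xxx.242.27:4206"
log-file="/data/log1/tidb/pd/4205_run.log"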

  4. Start TiPD

nohup /usr/local/tidb/bin/pd-server --config=/usr/local/tidb/conf/tipd_4205.conf &
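
Repeat this on all three PD machines. Once a majority of PD nodes is up, cluster membership can be checked through PD's HTTP API on any client-url (a quick check; the members endpoint is part of PD's standard API):

curl http://xxx.xxx.242.23:4205/pd/api/v1/members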

III. TiKV Installation

  1. Install the software

tar zxvf tidb-ansible.tar.gz
cd /home/admin/tidb-ansible/downloads
tar zxvf tidb-v2.0.7.tar.gz
sudo mv tidb-v2.0.7-linux-amd64 /usr/local/
sudo ln -s /usr/local/tidb-v2.0.7-linux-amd64 /usr/local/tidb

  2. Prepare the data directories

sudo mkdir -p /data/1/tidb/kv/4402/import
sudo chown -R admin.admin /data/1/tidb
sudo mkdir -p /usr/local/tidb/conf/

  3. Prepare the configuration file

vi /usr/local/tidb/conf/tikv_4402.conf

log-level = "info"
log-file = "/data/1/tidb/kv/4402/run.log"
[server]
addr = "xxx.xxx.82.169:4402"
[storage]
data-dir = "/data/1/tidb/kv/4402"
scheduler-concurrency = 1024000
scheduler-worker-pool-size = 100
#labels = {zone = "ZONE3", host = "10074"}
[pd]
# PD nodes; these are the client-urls of the TiPD instances
endpoints = ["xxx.xxx.242.23:4205","xxx.xxx.242.26:4205","xxx.xxx.242.27:4205"]
[metric]
interval = "15s"
address = ""
job = "tikv"
[raftstore]
sync-log = false
region-max-size = "384MB"
region-split-size = "256MB"
[rocksdb]
max-background-jobs = 28
max-open-files = 409600
max-manifest-file-size = "20MB"
compaction-readahead-size = "20MB"
[rocksdb.defaultcf]
block-size = "64KB"
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 10
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
[rocksdb.writecf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
[raftdb]
max-open-files = 409600
compaction-readahead-size = "20MB"
[raftdb.defaultcf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
block-cache-size = "10GB"
[import]
import-dir = "/data/1/tidb/kv/4402/import"
num-threads = 8
stream-channel-window = 128
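
The other two TiKV machines use the same file, changing only the server address to the local IP (a sketch; everything else, including the [pd] endpoints, stays identical):

[server]
addr = "xxx.xxx.82.150:4402"   # on xxx.xxx.82.150; use xxx.xxx.243.68:4402 on the third node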

  4. Start TiKV

nohup /usr/local/tidb/bin/tikv-server --config=/usr/local/tidb/conf/tikv_4402.conf &
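
After tikv-server has been started on all three storage machines, the stores should have registered with PD, which can be verified through the PD API (a quick check using PD's standard stores endpoint):

curl http://xxx.xxx.242.23:4205/pd/api/v1/stores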

IV. TiDB Installation

  1. Install the software
    Same as for TiPD above.

  2. Prepare the configuration file

vi /usr/local/tidb/conf/tidb_4001.conf

host = "0.0.0.0"
port = 4001
# storage type is tikv
store = "tikv"
# PD nodes; these are the client-urls of the TiPD instances
path = "xxx.xxx.242.23:4205,xxx.xxx.242.26:4205,xxx.xxx.242.27:4205"
socket = ""
run-ddl = true
lease = "45s"
split-table = true
token-limit = 1000
oom-action = "log"
enable-streaming = false
lower-case-table-names = 2
[log]
level = "info"
format = "text"
disable-timestamp = false
slow-query-file = ""
slow-threshold = 300
expensive-threshold = 10000
query-log-max-len = 2048
[log.file]
filename = "/data/1/tidb/db/4001/tidb.log"
max-size = 300
max-days = 0
max-backups = 0
log-rotate = true
[security]
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
cluster-ssl-ca = ""
cluster-ssl-cert = ""
cluster-ssl-key = ""
[status]
report-status = true
status-port = 10080 # port for reporting TiDB status
metrics-addr = ""
metrics-interval = 15
[performance]
max-procs = 0
stmt-count-limit = 5000
tcp-keep-alive = true
cross-join = true
stats-lease = "3s"
run-auto-analyze = true
feedback-probability = 0.05
query-feedback-limit = 1024
pseudo-estimate-ratio = 0.8
[proxy-protocol]
networks = ""
header-timeout = 5
[plan-cache]
enabled = false
capacity = 2560
shards = 256
[prepared-plan-cache]
enabled = false
capacity = 100
[opentracing]
enable = false
rpc-metrics = false
[opentracing.sampler]
type = "const"
param = 1.0
sampling-server-url = ""
max-operations = 0
sampling-refresh-interval = 0
[opentracing.reporter]
queue-size = 0
buffer-flush-interval = 0
log-spans = false
local-agent-host-port = ""
[tikv-client]
grpc-connection-count = 16
commit-timeout = "41s"
[txn-local-latches]
enabled = false
capacity = 1024000
[binlog]
binlog-socket = ""
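
The config points the log file at /data/1/tidb/db/4001/, which does not exist yet on the TiDB machine. Create it first (a small step omitted above, following the same pattern as the PD and TiKV nodes):

sudo mkdir -p /data/1/tidb/db/4001
sudo chown -R admin.admin /data/1/tidb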

  3. Start TiDB

nohup /usr/local/tidb/bin/tidb-server --config=/usr/local/tidb/conf/tidb_4001.conf &

Port 4001 is the client connection port.
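
The status port from the config can be used to confirm the server is up (a quick check; /status is TiDB's built-in status endpoint on status-port 10080):

curl http://xxx.xxx.82.173:10080/status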

  4. Test the TiDB connection

mysql -h xxx.xxx.82.173 -uroot -P 4001
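
Once connected, a quick smoke test confirms the cluster works end to end (a minimal sketch; the demo database and table names are arbitrary, and tidb_version() is assumed available in this release):

mysql -h xxx.xxx.82.173 -uroot -P 4001 -e "select tidb_version();"
mysql -h xxx.xxx.82.173 -uroot -P 4001 -e "create database if not exists demo; create table if not exists demo.t (id int primary key, v varchar(20)); insert into demo.t values (1, 'hello'); select * from demo.t;"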

V. Summary

In the TiDB cluster architecture, compute is separated from storage. Apart from preparing a configuration file for each component, the installation steps are all straightforward.

Once installed, a TiDB cluster can serve traffic immediately, and the experience of using it is much like MySQL. This is one difference from OceanBase: a TiDB cluster has no multi-tenant capability. Judging from the documentation, its internal architecture and feature set are comparatively simple.
