TiDB is an open-source distributed relational database designed and developed by PingCAP. It is a converged distributed database that supports both online transactional processing and online analytical processing (Hybrid Transactional and Analytical Processing, HTAP).
Thanks to TiDB's architecture, which separates compute from storage, compute and storage capacity can each be scaled out or in online, on demand; the scaling process is transparent to application operations staff.
Data is stored in multiple replicas, which synchronize transaction logs via the Multi-Raft protocol. A transaction can commit only after a majority of replicas acknowledge the write, which guarantees strong consistency and keeps the data available when a minority of replicas fail. Policies such as replica placement and replica count can be configured on demand to meet different disaster-recovery requirements.
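The majority-write rule above is simple arithmetic: with N replicas a write needs floor(N/2)+1 acknowledgements, and the cluster tolerates the loss of the remaining replicas. A minimal illustration (not TiDB source code):

```python
# Majority-quorum arithmetic behind Multi-Raft commits (illustrative sketch).
def quorum(replicas: int) -> int:
    """Acknowledgements required before a log entry can commit."""
    return replicas // 2 + 1

def tolerated_failures(replicas: int) -> int:
    """How many replicas can fail while data stays available."""
    return replicas - quorum(replicas)

for n in (3, 5):
    print(f"{n} replicas: commit needs {quorum(n)} acks, "
          f"tolerates {tolerated_failures(n)} failures")
```

This is why the default of three replicas survives one failed replica, and five replicas survive two.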
TiDB provides two storage engines: the row store TiKV and the column store TiFlash. TiFlash replicates data from TiKV in real time via the Multi-Raft Learner protocol, which keeps the row store and the column store strongly consistent with each other. TiKV and TiFlash can be deployed on separate machines as needed, solving the problem of HTAP resource isolation.
TiDB is a distributed database designed for the cloud: with TiDB Operator, deployment can be tooled and automated on public, private, and hybrid clouds.
TiDB is compatible with the MySQL 5.7 protocol, common MySQL features, and the MySQL ecosystem, so applications can migrate from MySQL to TiDB with little or no code change. A rich set of data-migration tools is provided to help applications complete the migration conveniently.
TiDB Server: the SQL layer. It exposes a MySQL-protocol connection endpoint, accepts client connections, parses and optimizes SQL, and finally generates a distributed execution plan. The TiDB layer itself is stateless; in practice you can start multiple TiDB instances and put a load-balancing component (such as LVS, HAProxy, or F5) in front of them to provide a single entry address, so client connections are spread evenly across the instances. TiDB Server stores no data itself: it only parses SQL and forwards the actual data-read requests to the underlying storage nodes, TiKV (or TiFlash).
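As a sketch of the load-balancing setup just described, a minimal HAProxy fragment that round-robins MySQL-protocol connections across three hypothetical TiDB instances (the listen port, server names, and addresses are assumptions for illustration, not part of this deployment):

```
# haproxy.cfg sketch: TCP round-robin in front of three assumed TiDB instances
listen tidb-cluster
    bind 0.0.0.0:3390
    mode tcp
    balance roundrobin
    server tidb-1 192.168.3.241:4000 check inter 2000 rise 2 fall 3
    server tidb-2 192.168.3.242:4000 check inter 2000 rise 2 fall 3
    server tidb-3 192.168.3.243:4000 check inter 2000 rise 2 fall 3
```

TCP mode is used because the MySQL protocol is not HTTP; health checks let HAProxy drop a TiDB instance that stops accepting connections.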
PD (Placement Driver) Server: the metadata-management module of the whole TiDB cluster. It stores the real-time data distribution of every TiKV node and the overall topology of the cluster, serves the TiDB Dashboard management UI, and allocates transaction IDs for distributed transactions. PD does not merely store metadata: based on the data-distribution state reported by TiKV nodes in real time, it also issues scheduling commands to specific TiKV nodes, so it can be called the "brain" of the cluster. PD itself runs as a group of at least three nodes for high availability; deploying an odd number of PD nodes is recommended.
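The transaction IDs PD hands out are timestamps (TSO). In TiDB's design a TSO packs a physical millisecond clock together with an 18-bit logical counter, so PD can serve many timestamps per millisecond while keeping them globally ordered. A small sketch of that composition (the constants mirror TiDB's documented layout; the example values are made up):

```python
# Sketch of PD's TSO layout: physical milliseconds << 18 | logical counter.
LOGICAL_BITS = 18

def compose_tso(physical_ms: int, logical: int) -> int:
    """Pack a physical clock reading and a logical counter into one TSO."""
    return (physical_ms << LOGICAL_BITS) + logical

def decompose_tso(tso: int) -> tuple[int, int]:
    """Split a TSO back into its (physical_ms, logical) parts."""
    return tso >> LOGICAL_BITS, tso & ((1 << LOGICAL_BITS) - 1)

ts = compose_tso(1_665_413_704_000, 42)  # hypothetical clock reading
print(decompose_tso(ts))
```

Because the physical part occupies the high bits, TSOs from later wall-clock times always compare greater, which is what distributed transactions rely on for ordering.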
TiKV Server: the storage node. Externally, TiKV is a distributed, transactional key-value storage engine. The basic unit of storage is the Region: each Region holds the data of one key range (a left-closed, right-open interval from StartKey to EndKey), and each TiKV node serves multiple Regions. TiKV's API provides native support for distributed transactions at the key-value level, with SI (Snapshot Isolation) as the default isolation level; this is the core of TiDB's distributed-transaction support at the SQL layer. After the SQL layer finishes parsing, it translates the execution plan into actual calls to the TiKV API, so all data is stored in TiKV. In addition, TiKV automatically maintains multiple replicas of the data (three by default), giving it native high availability and automatic failover.
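The left-closed, right-open key ranges mean that routing a key to its Region is a sorted-range lookup. A toy illustration of that lookup (the region boundaries here are invented for the example):

```python
import bisect

# Toy routing table: Regions sorted by StartKey; each covers [start, end).
regions = [
    (b"",  b"g", "region-1"),
    (b"g", b"p", "region-2"),
    (b"p", b"",  "region-3"),   # empty EndKey = unbounded on the right
]
starts = [r[0] for r in regions]

def locate(key: bytes) -> str:
    """Find the Region whose [StartKey, EndKey) interval contains key."""
    i = bisect.bisect_right(starts, key) - 1
    start, end, name = regions[i]
    assert key >= start and (end == b"" or key < end)
    return name

print(locate(b"apple"), locate(b"grape"), locate(b"zebra"))
```

PD keeps the real version of this table for the whole cluster, which is how a TiDB instance knows which TiKV node to ask for a given key.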
TiFlash: a special kind of storage node. Unlike ordinary TiKV nodes, TiFlash stores data in columnar form; its main purpose is to accelerate analytical workloads.
The components deployed by default for this TiDB cluster: 3 PD, 3 TiKV, and 1 TiDB, plus the monitoring components Prometheus, Pushgateway, and Grafana, as well as tidb-vision.
[root@server ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2022-10-10 22:55:04 CST; 5min ago
Docs: https://docs.docker.com
Main PID: 10218 (dockerd)
Tasks: 16
Memory: 115.0M
CGroup: /system.slice/docker.service
└─10218 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Oct 10 22:54:52 192.168.3.240 dockerd[10218]: time="2022-10-10T22:54:52.997828575+08:00" level=info msg="ccResolverWrapper: sending update to cc: ...dule=grpc
Oct 10 22:54:52 192.168.3.240 dockerd[10218]: time="2022-10-10T22:54:52.997860425+08:00" level=info msg="ClientConn switching balancer to \"pick_f...dule=grpc
Oct 10 22:54:54 192.168.3.240 dockerd[10218]: time="2022-10-10T22:54:54.645100072+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Oct 10 22:54:59 192.168.3.240 dockerd[10218]: time="2022-10-10T22:54:59.834619854+08:00" level=info msg="Loading containers: start."
Oct 10 22:55:02 192.168.3.240 dockerd[10218]: time="2022-10-10T22:55:02.643646144+08:00" level=info msg="Default bridge (docker0) is assigned with... address"
Oct 10 22:55:02 192.168.3.240 dockerd[10218]: time="2022-10-10T22:55:02.914993511+08:00" level=info msg="Loading containers: done."
Oct 10 22:55:03 192.168.3.240 dockerd[10218]: time="2022-10-10T22:55:03.832847548+08:00" level=info msg="Docker daemon" commit=e42327a graphdriver...=20.10.18
Oct 10 22:55:03 192.168.3.240 dockerd[10218]: time="2022-10-10T22:55:03.832929345+08:00" level=info msg="Daemon has completed initialization"
Oct 10 22:55:04 192.168.3.240 systemd[1]: Started Docker Application Container Engine.
Oct 10 22:55:04 192.168.3.240 dockerd[10218]: time="2022-10-10T22:55:04.289729613+08:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.
[root@server ~]# docker version
Client: Docker Engine - Community
Version: 20.10.18
API version: 1.41
Go version: go1.18.6
Git commit: b40c2f6
Built: Thu Sep 8 23:14:08 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.18
API version: 1.41 (minimum version 1.12)
Go version: go1.18.6
Git commit: e42327a
Built: Thu Sep 8 23:12:21 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.8
GitCommit: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
[root@server ~]# docker-compose version
docker-compose version 1.25.0, build 0a186604
docker-py version: 4.1.0
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
git clone https://github.com/pingcap/tidb-docker-compose.git
[root@server tidb-docker-compose-master]# pwd
/data/TiDB/tidb-docker-compose-master
[root@server tidb-docker-compose-master]# ll
total 64
drwxr-xr-x 3 root root 60 Jul 11 14:28 compose
drwxr-xr-x 4 root root 4096 Jul 11 14:28 config
drwxr-xr-x 3 root root 59 Jul 11 14:28 dashboard-installer
drwxr-xr-x 3 root root 19 Jul 11 14:28 docker
-rw-r--r-- 1 root root 11343 Jul 11 14:28 docker-compose-binlog.yml
-rw-r--r-- 1 root root 292 Jul 11 14:28 docker-compose-test.yml
-rw-r--r-- 1 root root 1566 Jul 11 14:28 docker-compose-tiflash-nightly.yml
-rw-r--r-- 1 root root 5445 Jul 11 14:28 docker-compose.yml
-rw-r--r-- 1 root root 4736 Jul 11 14:28 docker-swarm.yml
-rw-r--r-- 1 root root 11294 Jul 11 14:28 LICENSE
drwxr-xr-x 2 root root 24 Jul 11 14:28 pd
-rw-r--r-- 1 root root 11443 Jul 11 14:28 README.md
drwxr-xr-x 2 root root 24 Jul 11 14:28 tidb
drwxr-xr-x 2 root root 24 Jul 11 14:28 tidb-binlog
drwxr-xr-x 2 root root 24 Jul 11 14:28 tidb-vision
drwxr-xr-x 2 root root 24 Jul 11 14:28 tikv
drwxr-xr-x 6 root root 95 Jul 11 14:28 tispark
drwxr-xr-x 2 root root 29 Jul 11 14:28 tools
[root@server tidb-docker-compose-master]# cat docker-compose.yml
version: '2.1'
services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd0
      - --config=/pd.toml
      - --log-file=/logs/pd0.log
    restart: on-failure
  pd1:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd1
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd1:2379
      - --advertise-peer-urls=http://pd1:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd1
      - --config=/pd.toml
      - --log-file=/logs/pd1.log
    restart: on-failure
  pd2:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd2
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd2:2379
      - --advertise-peer-urls=http://pd2:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd2
      - --config=/pd.toml
      - --log-file=/logs/pd2.log
    restart: on-failure
  tikv0:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data/tikv0
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv0.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv1:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data/tikv1
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv1.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv2:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data/tikv2
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv2.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    volumes:
      - ./config/tidb.toml:/tidb.toml:ro
      - ./logs:/logs
    command:
      - --store=tikv
      - --path=pd0:2379,pd1:2379,pd2:2379
      - --config=/tidb.toml
      - --log-file=/logs/tidb.log
      - --advertise-address=tidb
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-master:
    image: pingcap/tispark:v2.1.1
    command:
      - /opt/spark/sbin/start-master.sh
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_MASTER_PORT: 7077
      SPARK_MASTER_WEBUI_PORT: 8080
    ports:
      - "7077:7077"
      - "8080:8080"
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-slave0:
    image: pingcap/tispark:v2.1.1
    command:
      - /opt/spark/sbin/start-slave.sh
      - spark://tispark-master:7077
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_WORKER_WEBUI_PORT: 38081
    ports:
      - "38081:38081"
    depends_on:
      - tispark-master
    restart: on-failure
  tidb-vision:
    image: pingcap/tidb-vision:latest
    environment:
      PD_ENDPOINT: pd0:2379
    ports:
      - "8010:8010"
    restart: on-failure
  # monitors
  pushgateway:
    image: prom/pushgateway:v0.3.1
    command:
      - --log.level=error
    restart: on-failure
  prometheus:
    user: root
    image: prom/prometheus:v2.2.1
    command:
      - --log.level=error
      - --storage.tsdb.path=/data/prometheus
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./config/pd.rules.yml:/etc/prometheus/pd.rules.yml:ro
      - ./config/tikv.rules.yml:/etc/prometheus/tikv.rules.yml:ro
      - ./config/tidb.rules.yml:/etc/prometheus/tidb.rules.yml:ro
      - ./data:/data
    restart: on-failure
  grafana:
    image: grafana/grafana:6.0.1
    user: "0"
    environment:
      GF_LOG_LEVEL: error
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
      GF_PATHS_CONFIG: /etc/grafana/grafana.ini
    volumes:
      - ./config/grafana:/etc/grafana
      - ./config/dashboards:/tmp/dashboards
      - ./data/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    restart: on-failure
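The pd0/pd1/pd2 (and tikv0/1/2) service blocks in the file differ only in the instance name, so such repetition can be generated instead of hand-maintained. A sketch that builds the three PD service definitions as Python dicts, mirroring the fields of the compose file above (this generator is not part of the upstream repository):

```python
# Generate the near-identical pdN service blocks from the compose file above.
initial_cluster = ",".join(f"pd{i}=http://pd{i}:2380" for i in range(3))

def pd_service(i: int) -> dict:
    """One pdN service entry, matching the hand-written compose file."""
    name = f"pd{i}"
    return {
        "image": "pingcap/pd:latest",
        "ports": ["2379"],
        "volumes": ["./config/pd.toml:/pd.toml:ro", "./data:/data", "./logs:/logs"],
        "command": [
            f"--name={name}",
            "--client-urls=http://0.0.0.0:2379",
            "--peer-urls=http://0.0.0.0:2380",
            f"--advertise-client-urls=http://{name}:2379",
            f"--advertise-peer-urls=http://{name}:2380",
            f"--initial-cluster={initial_cluster}",
            f"--data-dir=/data/{name}",
            "--config=/pd.toml",
            f"--log-file=/logs/{name}.log",
        ],
        "restart": "on-failure",
    }

services = {f"pd{i}": pd_service(i) for i in range(3)}
print(sorted(services))
```

Dumping this dict through a YAML library would reproduce the PD section of the file; the same pattern applies to the three TiKV services.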
[root@server tidb-docker-compose-master]# docker-compose pull
Pulling pd0 ... done
Pulling pd1 ... done
Pulling pd2 ... done
Pulling tikv0 ... done
Pulling tikv1 ... done
Pulling tikv2 ... done
Pulling tidb ... done
Pulling tispark-master ... done
Pulling tispark-slave0 ... done
Pulling tidb-vision ... done
Pulling pushgateway ... done
Pulling prometheus ... done
Pulling grafana ... done
[root@server tidb-docker-compose-master]# docker-compose up -d
Creating network "tidb-docker-compose-master_default" with the default driver
Creating tidb-docker-compose-master_grafana_1 ... done
Creating tidb-docker-compose-master_prometheus_1 ... done
Creating tidb-docker-compose-master_pd0_1 ... done
Creating tidb-docker-compose-master_tidb-vision_1 ... done
Creating tidb-docker-compose-master_pushgateway_1 ... done
Creating tidb-docker-compose-master_pd2_1 ... done
Creating tidb-docker-compose-master_pd1_1 ... done
Creating tidb-docker-compose-master_tikv2_1 ... done
Creating tidb-docker-compose-master_tikv1_1 ... done
Creating tidb-docker-compose-master_tikv0_1 ... done
Creating tidb-docker-compose-master_tidb_1 ... done
Creating tidb-docker-compose-master_tispark-master_1 ... done
Creating tidb-docker-compose-master_tispark-slave0_1 ... done
[root@server tidb-docker-compose-master]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3a109b3ac77 pingcap/tispark:v2.1.1 "/opt/spark/sbin/sta…" 31 seconds ago Up 30 seconds 0.0.0.0:38081->38081/tcp, :::38081->38081/tcp tidb-docker-compose-master_tispark-slave0_1
750f938b9669 pingcap/tispark:v2.1.1 "/opt/spark/sbin/sta…" 32 seconds ago Up 31 seconds 0.0.0.0:7077->7077/tcp, :::7077->7077/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp tidb-docker-compose-master_tispark-master_1
7fd728cc1019 pingcap/tidb:latest "/tidb-server --stor…" 32 seconds ago Up 31 seconds 0.0.0.0:4000->4000/tcp, :::4000->4000/tcp, 0.0.0.0:10080->10080/tcp, :::10080->10080/tcp tidb-docker-compose-master_tidb_1
0e18dbcd2efe pingcap/tikv:latest "/tikv-server --addr…" 33 seconds ago Up 32 seconds 20160/tcp tidb-docker-compose-master_tikv0_1
3789709f53b1 pingcap/tikv:latest "/tikv-server --addr…" 33 seconds ago Up 32 seconds 20160/tcp tidb-docker-compose-master_tikv2_1
304c90121b1c pingcap/tikv:latest "/tikv-server --addr…" 33 seconds ago Up 32 seconds 20160/tcp tidb-docker-compose-master_tikv1_1
d3f461f3e313 pingcap/pd:latest "/pd-server --name=p…" 35 seconds ago Up 33 seconds 2380/tcp, 0.0.0.0:49155->2379/tcp, :::49155->2379/tcp tidb-docker-compose-master_pd1_1
6b2a18f4b823 prom/pushgateway:v0.3.1 "/bin/pushgateway --…" 35 seconds ago Up 33 seconds 9091/tcp tidb-docker-compose-master_pushgateway_1
51deb002f753 grafana/grafana:6.0.1 "/run.sh" 35 seconds ago Up 33 seconds 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp tidb-docker-compose-master_grafana_1
82b528f74a0c pingcap/pd:latest "/pd-server --name=p…" 35 seconds ago Up 33 seconds 2380/tcp, 0.0.0.0:49154->2379/tcp, :::49154->2379/tcp tidb-docker-compose-master_pd2_1
0939bfabe52b prom/prometheus:v2.2.1 "/bin/prometheus --l…" 35 seconds ago Up 33 seconds 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp tidb-docker-compose-master_prometheus_1
b2d645e3b30a pingcap/tidb-vision:latest "/bin/sh -c 'sed -i …" 35 seconds ago Up 33 seconds 80/tcp, 443/tcp, 2015/tcp, 0.0.0.0:8010->8010/tcp, :::8010->8010/tcp tidb-docker-compose-master_tidb-vision_1
c3cb9f6acc84 pingcap/pd:latest "/pd-server --name=p…" 35 seconds ago Up 33 seconds 2380/tcp, 0.0.0.0:49153->2379/tcp, :::49153->2379/tcp
[root@server ~]# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.25-TiDB-v5.0.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]>
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
5 rows in set (0.00 sec)
MySQL [(none)]> create database ittest charset utf8mb4;
Query OK, 0 rows affected (0.04 sec)
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA |
| PERFORMANCE_SCHEMA |
| ittest |
| mysql |
| test |
+--------------------+
6 rows in set (0.00 sec)
MySQL [(none)]>