Preface:
Since last year, against the backdrop of the ever-escalating China–US trade war, domestic Chinese databases may be facing their most glorious moment yet, and TiDB is one of the standouts among them.
Why GKE?
Why not a local virtual machine? TiDB is demanding on hardware resources, and a local VM cannot meet the requirements even for basic testing.
Why not AWS? On AWS, the machine type I chose needs a high CPU core count, and during installation my free-tier account eventually failed with an error that the CPU core limit had been exceeded.
The GKE deployment uses n1-standard-1 machines, which I have personally verified installs successfully. If you follow the original official documentation to the letter, some steps will fail.
All of the material I could find online contains small errors of one kind or another. In the spirit of serving the community, the post below records the complete installation process.
First, start Google Cloud Shell.
Steps to start a 3-node Kubernetes cluster
1. Create a project with gcloud
$ gcloud projects create lytidbtest
Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/lytidbtest].
Waiting for [operations/cp.8771001241886955324] to finish...done.
Enabling service [cloudapis.googleapis.com] on project [lytidbtest]...
Operation "operations/acf.2cb3c9ca-4ba7-45f1-b2b0-64623fcf79a5" finished successfully.
2. Set the project and compute zone with gcloud
$ gcloud config set project lytidbtest && gcloud config set compute/zone us-west1-a
Updated property [core/project].
Updated property [compute/zone].
3. Create the Kubernetes cluster for TiDB
Start a Kubernetes cluster with 3 nodes of the n1-standard-1 machine type.
$ gcloud container clusters create tidb
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
WARNING: Starting with version 1.18, clusters will have shielded GKE nodes by default.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Kubernetes Engine API is not enabled for this project. Please ensure it is enabled in Google Cloud Console and try again: visit https://console.cloud.google.com/apis/api/container.googleapis.com/overview?project=lytidbtest to do so.
The error occurs because the Kubernetes Engine API must be enabled for the project; enabling it also involves setting up a billing account.
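Besides the Console link in the error message, the API can also be enabled directly from Cloud Shell (a minimal sketch, assuming a billing account is already linked to the project):
$ gcloud services enable container.googleapis.com --project lytidbtest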
$ gcloud container clusters create tidb
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
WARNING: Starting with version 1.18, clusters will have shielded GKE nodes by default.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster tidb in us-west1-a... Cluster is being health-checked (master is healthy)...done.
Created [https://container.googleapis.com/v1/projects/lytidbtest/zones/us-west1-a/clusters/tidb].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-west1-a/tidb?project=lytidbtest
kubeconfig entry generated for tidb.
NAME  LOCATION    MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
tidb  us-west1-a  1.14.10-gke.27  104.198.100.101  n1-standard-1  1.14.10-gke.27  3          RUNNING
4. Set the newly created cluster as the default cluster
$ gcloud config set container/cluster tidb
Updated property [container/cluster].
5. Verify the cluster nodes
$ kubectl get nodes
NAME                                  STATUS   ROLES    AGE   VERSION
gke-tidb-default-pool-d5e3c725-77fd   Ready    <none>   54s   v1.14.10-gke.27
gke-tidb-default-pool-d5e3c725-8szm   Ready    <none>   53s   v1.14.10-gke.27
gke-tidb-default-pool-d5e3c725-jbfb   Ready    <none>   56s   v1.14.10-gke.27
All nodes show a Ready status, which means we have successfully set up a Kubernetes cluster.
Install Helm
Helm is the package manager for Kubernetes; with Helm, all of TiDB's distributed components can be installed in one step.
1. Install helm
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7001  100  7001    0     0  14083      0 --:--:-- --:--:-- --:--:-- 14086
Helm v2.14.3 is available. Changing from version v2.14.1.
Downloading https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
2. A small tip
Copy helm into our own $HOME directory, so that even if the Google Cloud Shell session disconnects, helm is still available in the Cloud Shell terminal the next time we log in:
mkdir -p ~/bin && cp /usr/local/bin/helm ~/bin && echo 'PATH="$PATH:$HOME/bin"' >> ~/.bashrc
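To make the new PATH entry take effect in the current session and confirm the binary is picked up, something like the following can be run (a quick sanity check):
$ source ~/.bashrc && which helm && helm version --client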
3. Grant helm the required permissions
$ kubectl apply -f ./manifests/tiller-rbac.yaml &&
> helm init --service-account tiller --upgrade
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-clusterrolebinding created
$HELM_HOME has been configured at /home/yanglu1661/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
Run exactly as given in the official documentation, the step above reports an error (the local ./manifests path does not exist in Cloud Shell). Based on the document below, the command can be modified as follows:
https://pingcap.com/docs-cn/tidb-in-kubernetes/v1.0/tidb-toolkit/
$ kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/tiller-rbac.yaml && \
> helm init --service-account tiller --upgrade
serviceaccount/tiller unchanged
clusterrolebinding.rbac.authorization.k8s.io/tiller-clusterrolebinding configured
$HELM_HOME has been configured at /home/yanglu1661/.helm.
Tiller (the Helm server-side component) has been updated to gcr.io/kubernetes-helm/tiller:v2.16.6 .
Check the progress:
$ watch "kubectl get pods --namespace kube-system | grep tiller"
tiller-deploy-8756df4d9-sh8px   1/1   Running   0   6m39s
Once the Pod status is Running, press Ctrl+C to stop watching and continue with the next step.
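Before moving on, it is worth confirming that the helm client can talk to the newly deployed Tiller (a quick check; the exact versions will vary):
$ helm version   # should print both a Client and a Server (Tiller) version without errors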
Add the Helm repository
The PingCAP Helm repository hosts the charts released by PingCAP, such as tidb-operator, tidb-cluster, and tidb-backup.
Command to add the repository:
$ helm repo add pingcap https://charts.pingcap.org/ && helm repo list
"pingcap" has been added to your repositories
NAME     URL
stable   https://kubernetes-charts.storage.googleapis.com
local    http://127.0.0.1:8879/charts
pingcap  https://charts.pingcap.org/
Check the available chart versions:
$ helm repo update &&
> helm search tidb-cluster -l &&
> helm search tidb-operator -l
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "pingcap" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
NAME                   CHART VERSION  APP VERSION  DESCRIPTION
pingcap/tidb-cluster   latest                      A Helm chart for TiDB Cluster
pingcap/tidb-cluster   v1.0.6                      A Helm chart for TiDB Cluster
pingcap/tidb-cluster   v1.0.5                      A Helm chart for TiDB Cluster
pingcap/tidb-cluster   v1.0.4                      A Helm chart for TiDB Cluster
pingcap/tidb-cluster   v1.0.3                      A Helm chart for TiDB Cluster
pingcap/tidb-cluster   v1.0.2                      A Helm chart for TiDB Cluster
pingcap/tidb-cluster   v1.0.1                      A Helm chart for TiDB Cluster
pingcap/tidb-cluster   v1.0.0                      A Helm chart for TiDB Cluster
NAME                   CHART VERSION  APP VERSION    DESCRIPTION
pingcap/tidb-operator  latest                        tidb-operator Helm chart for Kubernetes
pingcap/tidb-operator  v1.0.6                        tidb-operator Helm chart for Kubernetes
pingcap/tidb-operator  v1.0.5                        tidb-operator Helm chart for Kubernetes
pingcap/tidb-operator  v1.0.4                        tidb-operator Helm chart for Kubernetes
pingcap/tidb-operator  v1.0.3                        tidb-operator Helm chart for Kubernetes
pingcap/tidb-operator  v1.0.2                        tidb-operator Helm chart for Kubernetes
pingcap/tidb-operator  v1.0.1                        tidb-operator Helm chart for Kubernetes
pingcap/tidb-operator  v1.0.0                        tidb-operator Helm chart for Kubernetes
Deploy the TiDB cluster
1. Install the TiDB component TiDB Operator
The version can be adjusted to suit your needs. (Again, the command in the original official documentation reports an error that the manifests path does not exist.)
$ kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml && \
> kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/gke/persistent-disk.yaml && \
> helm install pingcap/tidb-operator -n tidb-admin --namespace=tidb-admin --version=1.0.0
customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com created
storageclass.storage.k8s.io/pd-ssd created
NAME:   tidb-admin
LAST DEPLOYED: Tue Apr 28 11:22:32 2020
NAMESPACE: tidb-admin
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME                   DATA  AGE
tidb-scheduler-policy  1     1s
==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
tidb-controller-manager-6b95cbc986-stv9q  0/1    ContainerCreating  0         1s
tidb-controller-manager-6b95cbc986-stv9q  0/1    ContainerCreating  0         1s
==> v1/ServiceAccount
NAME                     SECRETS  AGE
tidb-controller-manager  1        1s
tidb-scheduler           1        1s
==> v1beta1/ClusterRole
NAME                                AGE
tidb-admin:tidb-controller-manager  1s
tidb-admin:tidb-scheduler           1s
==> v1beta1/ClusterRoleBinding
NAME                                AGE
tidb-admin:tidb-controller-manager  1s
tidb-admin:tidb-scheduler           1s
==> v1beta1/Deployment
NAME                     READY  UP-TO-DATE  AVAILABLE  AGE
tidb-controller-manager  0/1    1           0          1s
tidb-scheduler           0/1    1           0          1s
NOTES:
1. Make sure tidb-operator components are running
   kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-admin
2. Install CRD
   kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
   kubectl get customresourcedefinitions
3. Modify tidb-cluster/values.yaml and create a TiDB cluster by installing tidb-cluster charts
   helm install tidb-cluster
2. Watch the Operator start up
$ kubectl get pods --namespace tidb-admin -o wide --watch
NAME                                      READY  STATUS   RESTARTS  AGE   IP         NODE                                 NOMINATED NODE  READINESS GATES
tidb-controller-manager-6b95cbc986-stv9q  1/1    Running  0         2m6s  10.40.1.5  gke-tidb-default-pool-d5e3c725-77fd  <none>          <none>
tidb-scheduler-74db4d7f59-jwft6           2/2    Running  0         2m6s  10.40.1.6  gke-tidb-default-pool-d5e3c725-77fd  <none>          <none>
Once both tidb-scheduler and tidb-controller-manager are in the Running state, press Ctrl+C to stop watching and continue with the next step: deploying a TiDB cluster!
3. Deploy a TiDB cluster with a single command
$ helm install pingcap/tidb-cluster -n demo --namespace=tidb \
> --set pd.storageClassName=pd-ssd,tikv.storageClassName=pd-ssd --version=1.0.0
NAME:   demo
LAST DEPLOYED: Tue Apr 28 15:23:38 2020
NAMESPACE: tidb
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME                DATA  AGE
demo-monitor        3     1s
demo-pd-aa6df71f    2     1s
demo-tidb           2     1s
demo-tidb-a4c4bb14  2     1s
demo-tikv-210ef60f  2     1s
==> v1/Pod(related)
NAME                             READY  STATUS             RESTARTS  AGE
demo-discovery-68b8f875c4-jd89c  0/1    ContainerCreating  0         1s
demo-discovery-68b8f875c4-jd89c  0/1    ContainerCreating  0         1s
==> v1/Secret
NAME          TYPE    DATA  AGE
demo-monitor  Opaque  2     1s
==> v1/Service
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                         AGE
demo-discovery         ClusterIP  10.43.245.137  <none>       10261/TCP                       1s
demo-grafana           NodePort   10.43.244.130  <none>       3000:31417/TCP                  1s
demo-monitor-reloader  NodePort   10.43.253.163  <none>       9089:30211/TCP                  1s
demo-prometheus        NodePort   10.43.255.198  <none>       9090:30995/TCP                  1s
demo-tidb              NodePort   10.43.250.181  <none>       4000:32222/TCP,10080:30580/TCP  1s
==> v1/ServiceAccount
NAME            SECRETS  AGE
demo-discovery  1        1s
demo-monitor    1        1s
==> v1alpha1/TidbCluster
NAME  PD STORAGE  READY  DESIRE  TIKV STORAGE  READY  DESIRE  TIDB READY  DESIRE  AGE
demo  1Gi                3       10Gi                 3                   2       1s
==> v1beta1/Deployment
NAME            READY  UP-TO-DATE  AVAILABLE  AGE
demo-discovery  0/1    1           0          1s
demo-monitor    0/1    1           0          1s
==> v1beta1/Role
NAME            AGE
demo-discovery  1s
demo-monitor    1s
==> v1beta1/RoleBinding
NAME            AGE
demo-discovery  1s
demo-monitor    1s
NOTES:
Cluster Startup
1. Watch tidb-cluster up and running
   watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
2. List services in the tidb-cluster
   kubectl get services --namespace tidb -l app.kubernetes.io/instance=demo
Cluster access
* Access tidb-cluster using the MySQL client
  kubectl port-forward -n tidb svc/demo-tidb 4000:4000 &
  mysql -h 127.0.0.1 -P 4000 -u root -D test
  Set a password for your user
  SET PASSWORD FOR 'root'@'%' = '0BDJi6nGHv';
  FLUSH PRIVILEGES;
* View monitor dashboard for TiDB cluster
  kubectl port-forward -n tidb svc/demo-grafana 3000:3000
  Open browser at http://localhost:3000. The default username and password is admin/admin.
  If you are running this from a remote machine, you must specify the server's external IP address.
Access TiDB
Create a tunnel between TiDB and Google Cloud Shell.
$ kubectl -n tidb port-forward svc/demo-tidb 4000:4000 &>/tmp/port-forward.log &
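If the connection fails later, the port-forward log written above is the first place to look (a quick check; the exact wording depends on the kubectl version):
$ cat /tmp/port-forward.log   # expect a line like "Forwarding from 127.0.0.1:4000 -> 4000"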
Run the following on Google Cloud Shell:
$ sudo apt-get install -y mysql-client && mysql -h 127.0.0.1 -u root -P 4000
At the MySQL prompt, enter:
mysql> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v3.0.1
Git Commit Hash: 9e4e8da3c58c65123db5f26409759fe1847529f8
Git Branch: HEAD
UTC Build Time: 2019-07-16 01:03:40
GoVersion: go version go1.12 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false
1 row in set (0.12 sec)
We can see TiDB's version information.
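As a further smoke test through the same tunnel, a throwaway table can be created and queried (an illustrative example; the table name test.t1 is an arbitrary choice):
$ mysql -h 127.0.0.1 -P 4000 -u root -e "CREATE TABLE IF NOT EXISTS test.t1 (id INT PRIMARY KEY, v VARCHAR(20)); INSERT INTO test.t1 VALUES (1, 'hello tidb'); SELECT * FROM test.t1;"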
Access the Grafana dashboard
$ sudo netstat --tcp --udp --listening --program | grep 4000
tcp   0   0 localhost:4000   0.0.0.0:*   LISTEN   557/kubectl
$ sudo netstat --tcp --udp --listening --program | grep 3000
tcp   0   0 0.0.0.0:3000     0.0.0.0:*   LISTEN   -
Grafana's default port is 3000, but for an unknown reason (an unidentified kernel process is listening on it) that port is already taken, so here I use port 3300 instead.
$ kubectl -n tidb port-forward svc/demo-grafana 3300:3000 &>/dev/null &
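To confirm Grafana is reachable through the new forward before opening the browser, a simple probe can be used (a sketch; Grafana's login page is expected to answer with HTTP 200):
$ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:3300/login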
In Cloud Shell, click the Web Preview button and enter port 3300; a new browser tab will open showing the Grafana dashboard. Alternatively, open the following URL directly in a new browser tab or window: https://ssh.cloud.google.com/devshell/proxy?port=3300
You will be asked to authorize with your account, and after that you will be redirected automatically to an address of the form https://xxx.appspot.com/?orgId=1.
To see all dashboards, click the icon on the left -> Manage.
After clicking it, you can see all of the related monitoring dashboards.
The purpose of setting up this environment is testing. Next, we create a new virtual machine and run sysbench stress tests on it; only then will Grafana have chart data to collect and display.
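The VM can also be created from the command line rather than the Console (a sketch under assumptions: the instance name instance-1, machine type, and Debian 9 image below are placeholders you can adjust; the SSH key described in the next steps still has to be added to the instance):
$ gcloud compute instances create instance-1 \
>     --zone=us-west1-a \
>     --machine-type=n1-standard-1 \
>     --image-family=debian-9 --image-project=debian-cloud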
To make it convenient to connect and work from a Windows environment, we first need to generate a key pair. Here I use PuTTYgen.
Note below that the Key comment is changed to gcloud-user (you can pick a different name); it is used as the username for connecting to the VM.
After changing the Key comment, click "Save private key" to save the private key. Also save the contents of the "Public Key for pasting into OpenSSH authorized_keys file" text box above.
For Region, choose the same region as the GKE cluster.
When creating the VM, paste the contents of the "Public Key for pasting into OpenSSH authorized_keys file" box into the SSH Keys field.
Open PuTTY to connect; when connecting, set Auth to the private key saved above.
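If you are connecting from Linux, macOS, or Cloud Shell instead of Windows, an equivalent key pair can be generated with ssh-keygen (a sketch; the comment mirrors the PuTTYgen Key comment and doubles as the login username):
$ ssh-keygen -t rsa -b 2048 -C gcloud-user -f ~/.ssh/gcloud-user
$ cat ~/.ssh/gcloud-user.pub      # paste this into the VM's SSH Keys field
$ ssh -i ~/.ssh/gcloud-user gcloud-user@<VM_EXTERNAL_IP>   # replace <VM_EXTERNAL_IP> with the instance's external IP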
Connection successful:
login as: gcloud-user
Authenticating with public key "gcloud-user"
Linux instance-1 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
gcloud-user@instance-1:~$
Next, we install the MySQL client, sysbench, and other tools on the VM so it can serve as the test machine.
As shown below, the demo-tidb service has no EXTERNAL-IP, i.e. it cannot be accessed from outside the cluster. So here we again use port-forwarding to establish access from the VM to TiDB.
$ kubectl get services demo-tidb --namespace=tidb
NAME       TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)                         AGE
demo-tidb  NodePort  10.43.248.106  <none>       4000:32728/TCP,10080:30733/TCP  7m26s
Because kubectl is not installed on the VM by default, it needs to be installed first (following the official kubectl installation guide).
gcloud-user@instance-1:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.9M  100 41.9M    0     0  79.5M      0 --:--:-- --:--:-- --:--:-- 79.6M
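The download only fetches the binary; it still needs to be made executable and placed on the PATH (the standard follow-up steps from the kubectl installation guide):
gcloud-user@instance-1:~$ chmod +x ./kubectl
gcloud-user@instance-1:~$ sudo mv ./kubectl /usr/local/bin/kubectl
gcloud-user@instance-1:~$ kubectl version --client   # confirm the client is installed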
To access TiDB, repeat the same operations as above.
Create a tunnel between TiDB and the VM.
gcloud-user@instance-1:~$ kubectl -n tidb port-forward svc/demo-tidb 4000:4000 &> /tmp/port-forward.log &
[1]+  Exit 1    kubectl -n tidb port-forward svc/demo-tidb 4000:4000 &> /tmp/port-forward.log
gcloud-user@instance-1:~$ more /tmp/port-forward.log
error: Missing or incomplete configuration info.  Please point to an existing, complete config file:
  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config
To view or setup config directly use the 'config' command.
Following the hint above, I simply copied the same file from Cloud Shell and placed it at ~/.kube/config on the VM.
yanglu1661@cloudshell:~ (lytidbtest)$ cat .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
On the VM, paste the exact contents of the Cloud Shell file, then run the command again:
gcloud-user@instance-1:~$ mkdir .kube
gcloud-user@instance-1:~$ nano .kube/config
gcloud-user@instance-1:~$ kubectl -n tidb port-forward svc/demo-tidb 4000:4000 &> /tmp/port-forward.log &
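Instead of copying the kubeconfig by hand, the same entry can usually be generated on the VM with gcloud, which is preinstalled on GCE instances (a sketch; it assumes the VM's service account is allowed to read the cluster):
gcloud-user@instance-1:~$ gcloud container clusters get-credentials tidb --zone us-west1-a --project lytidbtest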
Install mysql-client and sysbench
gcloud-user@instance-1:~$ sudo apt-get install -y mysql-client
gcloud-user@instance-1:~$ sudo apt-get install -y sysbench
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  sysbench
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 62.6 kB of archives.
After this operation, 154 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 sysbench amd64 0.4.12-1.2 [62.6 kB]
Fetched 62.6 kB in 0s (1,332 kB/s)
Selecting previously unselected package sysbench.
(Reading database ... 40443 files and directories currently installed.)
Preparing to unpack .../sysbench_0.4.12-1.2_amd64.deb ...
Unpacking sysbench (0.4.12-1.2) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up sysbench (0.4.12-1.2) ...
Create a sysbench test account
MySQL [(none)]> create user sysbench@'%' identified by 'comverse';
Query OK, 1 row affected (0.06 sec)
MySQL [(none)]> grant all privileges on test.* to sysbench@'%';
Query OK, 0 rows affected (0.07 sec)
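Before running sysbench, the new account's privileges can be double-checked through the same tunnel (a quick sanity check):
gcloud-user@instance-1:~$ mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW GRANTS FOR sysbench@'%'"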
Run the sysbench test (see the sysbench documentation for complete usage).
You first need to prepare the data:
$ sysbench --test=oltp --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=sysbench --mysql-password=comverse --oltp-test-mode=nontrx --num-threads=10 --mysql-db=test --oltp-table-size=1000000 --max-requests=1000000 prepare
Then execute the run phase:
$ sysbench --test=oltp --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=sysbench --mysql-password=comverse --oltp-test-mode=nontrx --oltp-nontrx-mode=update_key --num-threads=10 --mysql-db=test --oltp-table-size=1000000 --max-requests=100000 --max-time=300 run
After the run finishes, you can wipe the data with cleanup:
$ sysbench --test=oltp --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=sysbench --mysql-password=comverse --oltp-test-mode=nontrx --oltp-nontrx-mode=update_key --num-threads=10 --mysql-db=test --oltp-table-size=1000000 --max-requests=100000 --max-time=300 cleanup
When sysbench finishes, it prints the results. While it is running, you can inspect the SQL statements currently being executed via the processlist.
MySQL [(none)]> show processlist;
+------+----------+-----------+------+---------+------+-------+------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+----------+-----------+------+---------+------+-------+------------------------------------+
| 18 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 21 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 20 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 16 | root | 127.0.0.1 | NULL | Query | 0 | 2 | show processlist |
| 24 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 27 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 19 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 25 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 26 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 22 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
| 23 | sysbench | 127.0.0.1 | test | Execute | 0 | 2 | UPDATE sbtest set k=k+1 where id=? |
+------+----------+-----------+------+---------+------+-------+------------------------------------+
11 rows in set (0.00 sec)
With the environment in place, much like having written a hello world when learning to program, you can now explore to your heart's content.
Destroy the TiDB cluster
Delete the running Pods:
$ helm delete demo --purge
release "demo" deleted
Clean up the data and the dynamically provisioned persistent disks:
$ kubectl delete pvc -n tidb -l app.kubernetes.io/instance=demo,app.kubernetes.io/managed-by=tidb-operator &&
> kubectl get pv -l app.kubernetes.io/namespace=tidb,app.kubernetes.io/managed-by=tidb-operator,app.kubernetes.io/instance=demo -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
persistentvolumeclaim "pd-demo-pd-0" deleted
persistentvolumeclaim "pd-demo-pd-1" deleted
persistentvolumeclaim "pd-demo-pd-2" deleted
persistentvolumeclaim "tikv-demo-tikv-0" deleted
persistentvolumeclaim "tikv-demo-tikv-1" deleted
persistentvolumeclaim "tikv-demo-tikv-2" deleted
persistentvolume/pvc-2e9661d9-8921-11ea-9fc8-42010a8a00d3 patched
persistentvolume/pvc-2eb131d0-8921-11ea-9fc8-42010a8a00d3 patched
persistentvolume/pvc-2ebed5da-8921-11ea-9fc8-42010a8a00d3 patched
persistentvolume/pvc-4adb0695-8921-11ea-9fc8-42010a8a00d3 patched
persistentvolume/pvc-4ae0bb20-8921-11ea-9fc8-42010a8a00d3 patched
persistentvolume/pvc-4aee11ce-8921-11ea-9fc8-42010a8a00d3 patched
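Once the reclaim policy is switched to Delete, the underlying disks should be released; it is worth confirming that nothing is left behind (a quick check):
$ kubectl get pv | grep demo     # should eventually return nothing
$ gcloud compute disks list      # the dynamically provisioned disks should disappear from this list as well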
Delete the entire Kubernetes cluster
$ gcloud container clusters delete tidb
The following clusters will be deleted.
 - [tidb] in [us-west1-a]
Do you want to continue (Y/n)?  y
Deleting cluster tidb...done.
Deleted [https://container.googleapis.com/v1/projects/luyangtidb/zones/us-west1-a/clusters/tidb].
Before deletion:
References:
https://asktug.com/t/topic/437
https://pingcap.com/docs/tidb-in-kubernetes/v1.0/deploy-tidb-from-kubernetes-gke/
https://pingcap.com/docs-cn/tidb-in-kubernetes/v1.0/tidb-toolkit/