Deploying TiDB

Reference: https://docs.pingcap.com/tidb-in-kubernetes/stable/get-started#deploy-tidb-operator

I. Dependencies
helm3
docker
kubectl

Install docker and kubectl

Install docker:
yum install docker

Install kubectl: https://www.cnblogs.com/xiluhua/p/14684920.html

1. Configure the Docker registry mirror address
cat > /etc/docker/daemon.json << EOF
{ "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"] }
EOF
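After writing daemon.json, Docker must be restarted for the mirror to take effect (systemctl daemon-reload && systemctl restart docker). The sketch below, written for a sandbox rather than the real host, writes the same config to a temp file and checks that it parses as valid JSON before it would be copied into /etc/docker/daemon.json:

```shell
# Sketch: validate the mirror config as JSON before installing it.
# On the real host the target path is /etc/docker/daemon.json, followed by:
#   systemctl daemon-reload && systemctl restart docker
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{ "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"] }
EOF
if python3 -m json.tool "$tmp" > /dev/null; then
  RESULT="daemon.json OK"
else
  RESULT="daemon.json invalid"
fi
echo "$RESULT"
rm -f "$tmp"
```

A malformed daemon.json prevents the Docker daemon from starting, so this check is cheap insurance.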

2. Add the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3. Install kubeadm, kubelet, and kubectl:

yum install -y kubeadm kubelet kubectl

systemctl enable kubelet

For helm3:
yum install helm3

Manually deploying chaos-mesh

docker pull pingcap/chaos-mesh:v1.2.1
docker pull pingcap/chaos-daemon:v1.2.1
docker pull pingcap/chaos-dashboard:v1.2.1
docker pull pingcap/coredns:v0.2.0

docker tag pingcap/chaos-dashboard:v1.2.1 hub.kce.ksyun.com/nosql/test/chaos-dashboard:v1.2.1
docker tag pingcap/chaos-mesh:v1.2.1 hub.kce.ksyun.com/nosql/test/chaos-mesh:v1.2.1
docker tag pingcap/chaos-daemon:v1.2.1 hub.kce.ksyun.com/nosql/test/chaos-daemon:v1.2.1
docker tag pingcap/coredns:v0.2.0 hub.kce.ksyun.com/nosql/test/coredns:v0.2.0

docker push hub.kce.ksyun.com/nosql/test/chaos-dashboard:v1.2.1
docker push hub.kce.ksyun.com/nosql/test/chaos-mesh:v1.2.1
docker push hub.kce.ksyun.com/nosql/test/chaos-daemon:v1.2.1
docker push hub.kce.ksyun.com/nosql/test/coredns:v0.2.0
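The pull/tag/push triples above follow one mechanical pattern, so the target names can be generated in a loop. The sketch below does the string mapping only (it does not touch docker); on a docker host each printed pair would be used as `docker tag <src> <dst> && docker push <dst>`:

```shell
# Sketch: derive the private-registry name for each upstream image.
# Pure string mapping; no docker daemon is needed to run this.
REG=hub.kce.ksyun.com/nosql/test
for img in pingcap/chaos-mesh:v1.2.1 pingcap/chaos-daemon:v1.2.1 \
           pingcap/chaos-dashboard:v1.2.1 pingcap/coredns:v0.2.0; do
  # ${img#pingcap/} strips the upstream org prefix, keeping name:tag
  echo "$img -> $REG/${img#pingcap/}"
done
```

Generating the names this way keeps the source and destination tags in lockstep, avoiding the version-mismatch mistakes that are easy to make when copying the commands by hand.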

helm install chaos-mesh helm/chaos-mesh --namespace=chaos-testing \
  --set dashboard.create=true \
  --set dnsServer.create=true \
  --set chaosDaemon.image=hub.kce.ksyun.com/nosql/test/chaos-daemon:v1.2.1 \
  --set controllerManager.image=hub.kce.ksyun.com/nosql/test/chaos-mesh:v1.2.1 \
  --set dashboard.image=hub.kce.ksyun.com/nosql/test/chaos-dashboard:v1.2.1 \
  --set dnsServer.image=hub.kce.ksyun.com/nosql/test/coredns:v0.2.0

Pulling private registry images via a Secret

Note: if helm install fails with the following, the release already exists; check its status and remove it first:

Error: a release named chaos-mesh already exists.
Run: helm ls --all chaos-mesh; to check the status of the release

kubectl create secret docker-registry nosqlimage --docker-server=hub.kce.ksyun.com --docker-username=2000003486 --docker-password=Ksyun@NoSql!2020 --docker-email="[email protected]" --namespace=chaos-testing

Then attach it as an imagePullSecret (the name must match the secret created above, i.e. nosqlimage):
{"imagePullSecrets":[{"name":"nosqlimage"}]}
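One common way to apply such a fragment is patching it onto the service account that runs the pods, so every pod in the namespace inherits the pull secret. A hypothetical sketch (the target service account "default" is an assumption; the patch here is only built and validated as JSON, not applied):

```shell
# Hypothetical sketch: build the imagePullSecrets patch and validate it as JSON.
# Applying it would look like (requires a cluster, not run here):
#   kubectl patch serviceaccount default -n chaos-testing -p "$PATCH"
PATCH='{"imagePullSecrets":[{"name":"nosqlimage"}]}'
echo "$PATCH" | python3 -m json.tool > /dev/null && PATCH_OK=yes
echo "patch valid: ${PATCH_OK:-no}"
```

The secret name inside the patch must match the `kubectl create secret docker-registry` name exactly, or image pulls will still fail with ErrImagePull.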

To delete the release: helm uninstall chaos-mesh --namespace chaos-testing (under helm2 this was helm del --purge chaos-mesh)

kubectl get pods --namespace chaos-testing -l app.kubernetes.io/instance=chaos-mesh

Output generated for RBAC:

kubectl apply -f rbac.yaml
serviceaccount/account-chaos-testing-manager-qsmrx created
role.rbac.authorization.k8s.io/role-chaos-testing-manager-qsmrx created
rolebinding.rbac.authorization.k8s.io/bind-chaos-testing-manager-qsmrx created
kubectl describe -n chaos-testing secrets account-chaos-testing-manager-qsmrx
Name: account-chaos-testing-manager-qsmrx-token-z2bk6
Namespace: chaos-testing
Labels:
Annotations: kubernetes.io/service-account.name: account-chaos-testing-manager-qsmrx
kubernetes.io/service-account.uid: a45f47a5-059d-4894-b3b7-8c1a916075dc

Type: kubernetes.io/service-account-token

Data

token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlJWcFJJZ1FxQ2ZmaURoRElYVWhZZzIzZUxJT0NwUk5WYlVidDAxU1dXeVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaGFvcy10ZXN0aW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImFjY291bnQtY2hhb3MtdGVzdGluZy1tYW5hZ2VyLXFzbXJ4LXRva2VuLXoyYms2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFjY291bnQtY2hhb3MtdGVzdGluZy1tYW5hZ2VyLXFzbXJ4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYTQ1ZjQ3YTUtMDU5ZC00ODk0LWIzYjctOGMxYTkxNjA3NWRjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNoYW9zLXRlc3Rpbmc6YWNjb3VudC1jaGFvcy10ZXN0aW5nLW1hbmFnZXItcXNtcngifQ.MQhyXVdEb07GUJvVz_l5ND0Bi3hdyZyBo45IIb_VXBNDq0SUzL-t-OstmOkwJBBNBsN4vWtIS8uYKR4urVHyVFSGJNcdz0OrDJQv0AqU2tdnGnBowmAQ04gsQyuVX9UjVOINWwjG45dAjmiYTlzo-dAYVa_qDy3nZTHUyTXCTbhO4--bWVnFH2QoNB8PVQy94PxnqbJrXVWF5CBixQI0ZOSBxrsrO4rH_mKRmdcnVT9AMhEWO9alGoTOqQFzGg-2Rg87mesNPSayVdkXIGuUICl7HADrSeAqC-rw3UqSskQ_qCgQTF9fPOqHQDyPAz0LbaCpsdmM9LyT22noX4lUkQ
ca.crt: 1029 bytes
namespace: 13 bytes

Online deployment

Running curl -sSL https://mirrors.chaos-mesh.org/v1.2.1/install.sh | bash -s -- --local kind installs all the required dependencies, including helm3, kubectl, and kind.

1. If kubectl cannot be found afterwards, locate it with find:

find / -name kubectl

2. Edit /etc/profile (vim /etc/profile) and add:
export PATH="/root/local/bin/:$PATH"

3. source /etc/profile
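The PATH change can be verified in the current shell before touching /etc/profile; the sketch below applies the same export and checks that the directory is actually in the lookup path:

```shell
# Sketch: apply the PATH change for this shell only and verify it took effect.
export PATH="/root/local/bin/:$PATH"
case ":$PATH:" in
  *":/root/local/bin/:"*) PATH_OK=yes ;;
  *)                      PATH_OK=no  ;;
esac
echo "PATH contains /root/local/bin/: $PATH_OK"
```

Once this works interactively, the same export line in /etc/profile makes it permanent for login shells.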

Deploying TiDB

Step 1: Deploy TiDB Operator in Kubernetes

Create the CRDs:

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml

or:
wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/manifests/crd.yaml
kubectl apply -f ./crd.yaml

kubectl get crds

1. helm repo add pingcap https://charts.pingcap.org/

2. kubectl create namespace tidb-admin

3. helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.12
NAME: tidb-operator
LAST DEPLOYED: Thu Jun 10 17:17:20 2021
NAMESPACE: tidb-admin
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Make sure tidb-operator components are running:

kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator

4. kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator

Step 2: Deploy a TiDB cluster

kubectl create namespace tidb-cluster &&
kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml
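Once the pods are Running, the cluster can be reached through a port-forward. This sketch assumes the example cluster in the upstream tidb-cluster.yaml is named "basic", so the SQL service is svc/basic-tidb on port 4000; the commands are printed rather than executed here, since they need a live cluster:

```shell
# Sketch: commands to reach the TiDB SQL port (assumes cluster name "basic").
# Printed only; run them on the deployment host once pods are Running.
OUT=$(cat << 'EOF'
kubectl port-forward -n tidb-cluster svc/basic-tidb 14000:4000 &
mysql -h 127.0.0.1 -P 14000 -u root
EOF
)
echo "$OUT"
```

Forwarding to a high local port (14000 here) avoids needing root for the listener; any MySQL-protocol client works, since TiDB speaks the MySQL wire protocol.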

Step 3: Deploy TiDB monitoring services

curl -LO https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml &&
kubectl -n tidb-cluster apply -f tidb-monitor.yaml
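The monitoring stack includes Grafana, which can also be reached through a port-forward. This sketch assumes the monitor in tidb-monitor.yaml is named "basic", so Grafana is exposed as svc/basic-grafana on port 3000 (the default login is typically admin/admin); the command is printed only, since it needs a live cluster:

```shell
# Sketch: command to reach Grafana (assumes monitor name "basic"). Printed only.
GRAFANA_CMD='kubectl port-forward -n tidb-cluster svc/basic-grafana 3000:3000'
echo "$GRAFANA_CMD"
echo "then open http://localhost:3000"
```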

Step 4: View the Pod status

watch kubectl get po -n tidb-cluster
