This article shows how to use ArgoCD to deploy a MongoDB primary/secondary (replica set) cluster across three OpenShift clusters. The next chapter deploys an application on the same three OpenShift clusters that accesses this MongoDB cluster.
Following 《OpenShift 4 之 GitOps(4)用ArgoCD向Multi-Cluster发布应用》, first rename the config contexts of the three OpenShift clusters with "oc config rename-context <OLD_NAME> <NEW_NAME>" (naming them cluster1, cluster2, and cluster3), then register the three clusters with ArgoCD using the "argocd cluster add <CONTEXT_NAME>" command.
To make MongoDB reachable over TLS, we first need to generate a certificate and private key.
$ cd ~/federation-dev/labs/lab-6-assets
$ cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
$ cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Austin",
      "O": "Kubernetes",
      "OU": "TX",
      "ST": "Texas"
    }
  ]
}
EOF
$ cat > mongodb-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Austin",
      "O": "Kubernetes",
      "OU": "TX",
      "ST": "Texas"
    }
  ]
}
EOF
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ NAMESPACE=mongo
$ SERVICE_NAME=mongo
$ ROUTE_CLUSTER1=mongo-cluster1.$(oc --context=cluster1 get ingresses.config.openshift.io cluster -o jsonpath='{ .spec.domain }')
$ ROUTE_CLUSTER2=mongo-cluster2.$(oc --context=cluster2 get ingresses.config.openshift.io cluster -o jsonpath='{ .spec.domain }')
$ ROUTE_CLUSTER3=mongo-cluster3.$(oc --context=cluster3 get ingresses.config.openshift.io cluster -o jsonpath='{ .spec.domain }')
$ SANS="localhost,localhost.localdomain,127.0.0.1,${ROUTE_CLUSTER1},${ROUTE_CLUSTER2},${ROUTE_CLUSTER3},${SERVICE_NAME},${SERVICE_NAME}.${NAMESPACE},${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local"
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=${SANS} -profile=kubernetes mongodb-csr.json | cfssljson -bare mongodb
$ cat mongodb-key.pem mongodb.pem > mongo.pem
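The combined mongo.pem should contain exactly one private-key block followed by one certificate block. A minimal sanity-check sketch, using dummy stand-in files so it is reproducible without cfssl (file names and contents here are illustrative, not the lab's real output):

```shell
# Dummy stand-ins for cfssl's mongodb-key.pem and mongodb.pem.
printf -- '-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----\n' > demo-key.pem
printf -- '-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----\n' > demo-cert.pem

# Same concatenation as the command above.
cat demo-key.pem demo-cert.pem > demo-combined.pem

# The combined file should contain exactly two BEGIN markers: one key, one cert.
BLOCKS=$(grep -c 'BEGIN' demo-combined.pem)
echo "$BLOCKS"
```

Running the same `grep -c 'BEGIN' mongo.pem` against the real file is a quick way to confirm the concatenation worked before embedding it in the Secret.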
In GitOps, every deployment object is a YAML file, so before ArgoCD applies the YAML we still need to write the environment-specific settings into those files.
$ sed -i "s/mongodb.pem: .*$/mongodb.pem: $(openssl base64 -A < mongo.pem)/" base/mongo-secret.yaml
$ sed -i "s/ca.pem: .*$/ca.pem: $(openssl base64 -A < ca.pem)/" base/mongo-secret.yaml
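The `openssl base64 -A` in the two commands above emits a single unbroken base64 line, which is the form a Kubernetes Secret's data field expects. A small round-trip sketch, using coreutils `base64 | tr -d '\n'` (assumed available) as an equivalent:

```shell
# A dummy stand-in for the PEM content being embedded in the Secret.
PLAIN='-----BEGIN CERTIFICATE----- (dummy content)'

# Encode to one unbroken line, as `openssl base64 -A` does.
ENCODED=$(printf '%s' "$PLAIN" | base64 | tr -d '\n')

# Decoding must give back the original bytes.
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
```

If the value is encoded with line breaks (plain `base64` without `-A`/`tr`), the multi-line string corrupts the single-line `data:` field that the sed substitution writes.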
$ sed -i "s/primarynodehere/${ROUTE_CLUSTER1}:443/" base/mongo-rs-deployment.yaml
$ sed -i "s/replicamembershere/${ROUTE_CLUSTER1}:443,${ROUTE_CLUSTER2}:443,${ROUTE_CLUSTER3}:443/" base/mongo-rs-deployment.yaml
$ sed -i "s/mongocluster1route/${ROUTE_CLUSTER1}/" overlays/cluster1/mongo-route.yaml
$ sed -i "s/mongocluster2route/${ROUTE_CLUSTER2}/" overlays/cluster2/mongo-route.yaml
$ sed -i "s/mongocluster3route/${ROUTE_CLUSTER3}/" overlays/cluster3/mongo-route.yaml
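These sed commands replace placeholder tokens committed in the Git repo with environment-specific values. A minimal sketch of the same substitution against a throwaway file (the route value and env-var name below are illustrative placeholders, not the lab's real values):

```shell
# Placeholder route; in the lab this comes from the cluster's ingress domain.
ROUTE_CLUSTER1=mongo-cluster1.apps.example1.com

# Throwaway file standing in for base/mongo-rs-deployment.yaml.
cat > demo-deployment.yaml <<EOF
        - name: PRIMARY_NODE
          value: primarynodehere
EOF

# Same in-place substitution the commands above perform.
sed -i "s/primarynodehere/${ROUTE_CLUSTER1}:443/" demo-deployment.yaml
RESULT=$(grep 'value:' demo-deployment.yaml)
echo "$RESULT"
```

Because the replacement contains no `/` characters (only a hostname and port), the default `/` delimiter is safe here; a value containing slashes would need a different sed delimiter such as `|`.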
$ MY_GITHUB=https://github.com/<MY-GITHUB>/federation-dev.git
$ argocd app create --project default --name cluster1-mongo \
--repo ${MY_GITHUB} \
--path labs/lab-6-assets/overlays/cluster1 \
--dest-server $(argocd cluster list | grep cluster1 | awk '{print $1}') \
--dest-namespace mongo --revision master --sync-policy automated
$ argocd app create --project default --name cluster2-mongo \
--repo ${MY_GITHUB} \
--path labs/lab-6-assets/overlays/cluster2 \
--dest-server $(argocd cluster list | grep cluster2 | awk '{print $1}') \
--dest-namespace mongo --revision master --sync-policy automated
$ argocd app create --project default --name cluster3-mongo \
--repo ${MY_GITHUB} \
--path labs/lab-6-assets/overlays/cluster3 \
--dest-server $(argocd cluster list | grep cluster3 | awk '{print $1}') \
--dest-namespace mongo --revision master --sync-policy automated
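The `--dest-server` subshell above extracts the API server URL from the first column of `argocd cluster list`. A sketch of that extraction against mocked output (the URLs below are placeholders):

```shell
# Mocked `argocd cluster list` output; the first column is the server URL.
MOCK='SERVER                                  NAME      STATUS
https://api.cluster1.example.com:6443   cluster1  Successful
https://api.cluster2.example.com:6443   cluster2  Successful
https://api.cluster3.example.com:6443   cluster3  Successful'

# Same grep/awk pipeline as the --dest-server subshell.
DEST_SERVER=$(printf '%s\n' "$MOCK" | grep cluster1 | awk '{print $1}')
echo "$DEST_SERVER"
```

Note that `grep cluster1` matches by substring; if one cluster name were a prefix of another (e.g. cluster1 vs. cluster10), an exact-field match such as `awk '$2 == "cluster1" {print $1}'` would be safer.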
If you have changed the deployment files in your own GitHub repository, run the following commands to sync them manually through ArgoCD.
$ argocd app sync cluster1-mongo
$ argocd app sync cluster2-mongo
$ argocd app sync cluster3-mongo
$ argocd app list
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
cluster1-mongo https://api.cluster-shanghai-fba4.shanghai-fba4.example.opentlc.com:6443 mongo default OutOfSync Healthy Auto <none> https://github.com/liuxiaoyu-git/federation-dev.git labs/lab-6-assets/overlays/cluster1 master
cluster2-mongo https://api.cluster-beijing-7536.beijing-7536.example.opentlc.com:6443 mongo default OutOfSync Healthy Auto <none> https://github.com/liuxiaoyu-git/federation-dev.git labs/lab-6-assets/overlays/cluster2 master
cluster3-mongo https://api.cluster-shanghai-e90b.shanghai-e90b.sandbox1824.opentlc.com:6443 mongo
$ for cluster in cluster1 cluster2 cluster3; do oc --context $cluster -n mongo get deployment mongo; done
$ MONGO_POD=$(oc --context=cluster1 -n mongo get pod --selector="name=mongo" --output=jsonpath='{.items..metadata.name}')
$ oc --context=cluster1 -n mongo label pod $MONGO_POD replicaset=primary
$ wait-for-mongo-replicaset cluster1 mongo 3
Checking if MongoDB Replicaset from namespace mongo on cluster cluster1 is configured
...
MongoDB ReplicaSet Status:
--------------------------
Primary Member:
"mongo-cluster1.apps.cluster-shanghai-fba4.shanghai-fba4.example.opentlc.com:443"
Secondary Members:
"mongo-cluster2.apps.cluster-beijing-7536.beijing-7536.example.opentlc.com:443"
"mongo-cluster3.apps.cluster-shanghai-e90b.shanghai-e90b.sandbox1824.opentlc.com:443"
**Note**: If during this step the cluster1 member acting as Primary becomes unhealthy (its MongoDB pod goes into CrashLoopBackOff), you can first run the following command to remove the "replicaset=primary" label from cluster1, then modify the command in step (1) of this section to make cluster2 or cluster3 the Primary member, and run the modified command. Once the replica set's Primary has been switched in this step, every subsequent command that targets MongoDB's Primary must be run against the corresponding cluster instead of cluster1.
$ oc --context=cluster1 -n mongo label pod $MONGO_POD replicaset-
$ MONGO_POD=$(oc --context=cluster1 -n mongo get pod --selector="name=mongo" --output=jsonpath='{.items..metadata.name}')
$ oc --context=cluster1 -n mongo exec $MONGO_POD \
-- bash -c 'mongo --norc --quiet --username=admin --password=$MONGODB_ADMIN_PASSWORD --host localhost admin --tls --tlsCAFile /opt/mongo-ssl/ca.pem --eval "rs.status()"'
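A healthy replica set shows one PRIMARY and two SECONDARY members in the `rs.status()` output. A sketch filtering a simplified member-state excerpt (the shape and hostnames below are assumed for illustration):

```shell
# Simplified member states as they would appear in rs.status() output.
RS_MEMBERS='mongo-cluster1.apps.example1.com:443 PRIMARY
mongo-cluster2.apps.example2.com:443 SECONDARY
mongo-cluster3.apps.example3.com:443 SECONDARY'

# Count members in each state; expect one PRIMARY and two SECONDARYs.
PRIMARIES=$(printf '%s\n' "$RS_MEMBERS" | grep -cw 'PRIMARY')
SECONDARIES=$(printf '%s\n' "$RS_MEMBERS" | grep -cw 'SECONDARY')
echo "$PRIMARIES primary, $SECONDARIES secondaries"
```

In the real output, look at each member's "stateStr" field: any member stuck in "STARTUP" or "(not reachable/healthy)" indicates the cross-cluster routes or certificates need checking.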
At this point the three MongoDB instances running on the three OpenShift clusters form a replica set with a primary/secondary relationship.