References:
Installation approach: the course 《Kubernetes全栈架构师:基于世界500强的k8s实战课程》
Mirror mode: 《RabbitMQ高可用-镜像模式部署使用》
项目地址:https://github.com/helm/helm
Installation:
# Download (choose a version)
wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz
# Extract
tar zxvf helm-v3.6.1-linux-amd64.tar.gz
# Install
mv linux-amd64/helm /usr/local/bin/
# Verify
helm version
# Add a repository
helm repo add
# Search charts
helm search repo
# Update repository indexes
helm repo update
# List installed charts
helm list -A
# Install
helm install
# Uninstall
helm uninstall
# Upgrade
helm upgrade
# Add the bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami
# Search its charts
helm search repo bitnami
# Create a working directory
mkdir -p ~/test/rabbitmq
cd ~/test/rabbitmq
# Pull the rabbitmq chart
helm pull bitnami/rabbitmq
# Extract it (the tarball name includes the chart version)
tar zxvf rabbitmq-*.tgz
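As a variation, `helm pull` can pin a chart version and extract in one step; the version number below is only illustrative:

```bash
# Pin a chart version and skip the manual tar step
# (8.16.1 is an example; list available versions with `helm search repo bitnami/rabbitmq --versions`)
helm pull bitnami/rabbitmq --version 8.16.1 --untar
```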
Official configuration reference: https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq
cd ~/test/rabbitmq/rabbitmq
vim values.yaml
auth:
  username: admin
  password: "admin@mq"
  existingPasswordSecret: ""
  erlangCookie: secretcookie
These values can also be supplied at install time instead of editing values.yaml:
--set auth.username=admin,auth.password=admin@mq,auth.erlangCookie=secretcookie
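If you prefer not to keep the password in values.yaml, `auth.existingPasswordSecret` can point at a pre-created Secret instead. A minimal sketch, assuming the chart looks up the key `rabbitmq-password` (verify against the values documentation for your chart version):

```bash
# Create the Secret the chart will read the password from
# (the "test" namespace must already exist; it is created later in this walkthrough)
kubectl create secret generic rabbitmq-auth -n test \
  --from-literal=rabbitmq-password='admin@mq'
# Then set in values.yaml:  auth.existingPasswordSecret: "rabbitmq-auth"
```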
clustering.forceBoot
clustering:
  enabled: true
  addressType: hostname
  rebalance: false
  forceBoot: true
Without clustering.forceBoot, deleting all of the RabbitMQ cluster's pods to simulate an outage leaves the first node not-ready for a long time when the cluster comes back up. With clustering.forceBoot set to true and the release upgraded (helm upgrade rabbitmq -n test .), the cluster restarts normally:
# Simulate the outage, then watch the restart
kubectl delete pod -n test rabbitmq-0 rabbitmq-1 rabbitmq-2
kubectl get pod -n test -w
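To confirm which overrides a release is actually running with (forceBoot included), `helm get values` prints the user-supplied values:

```bash
# Show the values supplied at install/upgrade time for this release
helm get values rabbitmq -n test
```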
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
replicaCount: 3
If persistence is not needed, set enabled to false.
Persistence requires block storage. This article creates a StorageClass with AWS's ebs-csi driver; a self-managed block-storage StorageClass works as well.
Note: the StorageClass should ideally support volume expansion.
persistence:
  enabled: true
  storageClass: "ebs-sc"
  selector: {}
  accessMode: ReadWriteOnce
  existingClaim: ""
  size: 8Gi
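For reference, a minimal sketch of what the assumed `ebs-sc` StorageClass could look like with the AWS EBS CSI driver; the gp3 volume type is a choice, not a requirement:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com           # AWS EBS CSI driver
parameters:
  type: gp3                            # example volume type
allowVolumeExpansion: true             # lets PVCs grow later
volumeBindingMode: WaitForFirstConsumer
EOF
```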
cd ~/test/rabbitmq/rabbitmq
kubectl create ns test
# Install with the values.yaml edits made above
helm install rabbitmq -n test .
# Or pass the credentials on the command line instead
helm install rabbitmq -n test . \
--set auth.username=admin,auth.password=admin@mq,auth.erlangCookie=secretcookie
These parameters must also be supplied on later `helm upgrade` runs.
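For example, a later upgrade would repeat the overrides (or keep them with `--reuse-values`, if that fits your workflow):

```bash
# Repeat the credential overrides so the upgrade doesn't regenerate them
helm upgrade rabbitmq -n test . \
  --set auth.username=admin,auth.password=admin@mq,auth.erlangCookie=secretcookie
```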
kubectl get pod -n test -w
kubectl get svc -n test
RabbitMQ is currently exposed as a ClusterIP Service for in-cluster access only; external access methods are covered in the next section.
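Before setting up any external exposure, a quick way to reach the management UI from a workstation is `kubectl port-forward` (the local port 15672 here is an arbitrary choice):

```bash
# Forward the management port to localhost, then browse http://127.0.0.1:15672
kubectl port-forward -n test svc/rabbitmq 15672:15672
```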
# Enter a pod
kubectl exec -it -n test rabbitmq-0 -- bash
# Check cluster status
rabbitmqctl cluster_status
# List policies (mirror mode not configured yet)
rabbitmqctl list_policies
# Set the cluster name
rabbitmqctl set_cluster_name [cluster_name]
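For instance, with a hypothetical name:

```bash
# "rabbit@test-cluster" is an illustrative name, not a required format
rabbitmqctl set_cluster_name rabbit@test-cluster
```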
Rather than specifying a NodePort in the default installation, it is better to create separate Services:
- 5672: expose via a Service backed by an internal (private-network) load balancer, for other in-VPC applications
- 15672: expose via an Ingress or a Service backed by an internet-facing load balancer, for outside access
| Port | Exposure (see method 3 below) | Access |
| --- | --- | --- |
| 5672 | Service-LoadBalancer (internal load balancer) | In-cluster: rabbitmq.test:5672; private network: internal LB IP:5672 |
| 15672 | Ingress-ALB (internet-facing load balancer) | Public load balancer URL |
Note: this article uses Amazon's managed Kubernetes (EKS), with the aws-load-balancer-controller already installed.
cd ~/test/rabbitmq
# Export the current Service as a template, then copy it for the NodePort variant
kubectl get svc -n test rabbitmq -o yaml > service-clusterip.yaml
cp service-clusterip.yaml service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-nodeport
  namespace: test
spec:
  ports:
    - name: amqp
      port: 5672
      protocol: TCP
      targetPort: amqp
      nodePort: 32672
    - name: http-stats
      port: 15672
      protocol: TCP
      targetPort: stats
      nodePort: 32673
  selector:
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/name: rabbitmq
  type: NodePort
kubectl apply -f service-nodeport.yaml
kubectl get svc -n test
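A quick way to probe the NodePort, assuming the node IPs are reachable from where you run this:

```bash
# Grab one node's internal IP and probe the management UI NodePort
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl -I "http://${NODE_IP}:32673"
```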
Create service-loadbalancer.yaml:
vim service-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-loadbalance
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - name: amqp
      port: 5672
      protocol: TCP
      targetPort: amqp
    - name: http-stats
      port: 15672
      protocol: TCP
      targetPort: stats
  selector:
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/name: rabbitmq
  type: LoadBalancer
kubectl apply -f service-loadbalancer.yaml
kubectl get svc -n test
Log in to the management console from a browser: http://k8s-test-rabbitmq-fbff138068-d79b31bdcb2f6f2b.elb.cn-northwest-1.amazonaws.com.cn:15672
vim service-lb-internal.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-lb-internal
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing  # commented out, so the load balancer is internal (private)
spec:
  ports:
    - name: amqp
      port: 5672
      protocol: TCP
      targetPort: amqp
  selector:
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/name: rabbitmq
  type: LoadBalancer
kubectl apply -f service-lb-internal.yaml
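Once provisioned, the internal load balancer's address can be read off the Service:

```bash
# Print the internal load balancer's DNS name (empty until AWS finishes provisioning)
kubectl get svc -n test rabbitmq-lb-internal \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```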
vim ingress-alb.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app: rabbitmq
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rabbitmq
                port:
                  number: 15672
kubectl apply -f ingress-alb.yaml
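The ALB's address appears on the Ingress once the controller has reconciled it:

```bash
# Show the ALB DNS name assigned to the Ingress
kubectl get ingress -n test rabbitmq
```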
Log in to the management console from a browser: k8s-test-rabbitmq-4623cb772f-1674334573.cn-northwest-1.elb.amazonaws.com.cn
Mirror mode: the queues to be consumed become mirrored queues that exist on multiple nodes, which is how RabbitMQ achieves high availability. Message bodies are actively synchronized between the mirror nodes, instead of being read on demand when a consumer fetches them as in normal mode. The drawback is that this intra-cluster synchronization consumes considerable network bandwidth.
# Enter a pod
kubectl exec -it -n test rabbitmq-0 -- bash
# List policies (mirror mode not configured yet)
rabbitmqctl list_policies
# Enable mirror mode
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
# List policies again
rabbitmqctl list_policies
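To confirm that queues have picked up the policy, list them together with their policy column; queues matched by ha-all should show it:

```bash
# Show each queue and the policy applied to it
rabbitmqctl list_queues name policy
```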
Check the result in the management console.
# Uninstall the release
helm uninstall rabbitmq -n test
# Remove the persistent volume claims left behind
kubectl delete pvc -n test data-rabbitmq-0 data-rabbitmq-1 data-rabbitmq-2
# Remove the extra exposure resources
kubectl delete -f service-nodeport.yaml
kubectl delete -f service-loadbalancer.yaml
kubectl delete -f ingress-alb.yaml