Implementing MySQL master-slave replication in k8s

Table of Contents

  • 1. Implementing MySQL master-slave replication in k8s
    • 1.1 Environment
    • 1.2 Deploying nfs-provisioner
      • 1.2.1 Installing NFS
      • 1.2.2 Deploying nfs-provisioner
    • 1.3 Installing MySQL
    • 1.4 Verifying replication on the slave

1. Implementing MySQL master-slave replication in k8s

1.1 Environment

Machine   OS          IP             MySQL version   k8s version   StorageClass
master1   CentOS7.8   192.168.0.20   mysql5.7.42     1.27.1        nfs
node1     CentOS7.8   192.168.0.21   mysql5.7.42     1.27.1        nfs

1.2 Deploying nfs-provisioner

Note:
  MySQL is deployed as a two-replica StatefulSet, so a StorageClass is required for dynamic volume provisioning; nfs-provisioner is used here.

1.2.1 Installing NFS

Here NFS is installed on the node1 node:
mkdir /mnt/nfs && sh nfs_install.sh /mnt/nfs 192.168.0.0/24

nfs_install.sh

#!/bin/bash

### How to install it? ###
### Installs an nfs-server; requires two arguments: 1. export directory (mount point)  2. subnet allowed to access the nfs-server ###

### How to use it? ###
### On client nodes, run `yum -y install nfs-utils rpcbind`, then mount the nfs-server directory locally ###
### e.g.: echo "192.168.0.20:/mnt/data01  /mnt/data01  nfs  defaults  0 0" >> /etc/fstab && mount -a ###

mount_point=$1
subnet=$2

function nfs_server() {
  systemctl stop firewalld
  systemctl disable firewalld
  setenforce 0
  sed -i 's/^SELINUX.*/SELINUX\=disabled/' /etc/selinux/config
  yum -y install nfs-utils rpcbind
  mkdir -p $mount_point
  echo "$mount_point ${subnet}(rw,sync,no_root_squash)" >> /etc/exports
  systemctl start rpcbind && systemctl enable rpcbind
  systemctl restart nfs-server && systemctl enable nfs-server
  chown -R nfsnobody:nfsnobody $mount_point
}

function usage() {
echo "Require 2 argument: [mount_point] [subnet]
eg: sh $0 /mnt/data01 192.168.10.0/24"
}

declare -i arg_nums
arg_nums=$#
if [ $arg_nums -eq 2 ];then
  nfs_server
else
  usage
  exit 1
fi
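
If the installation succeeds, the export should be reachable from any host in the allowed subnet. A quick verification, assuming nfs-utils is installed on the client:

# On the NFS server (node1): confirm the export is active
exportfs -v

# From a client (e.g. master1): list the exports published by node1
showmount -e 192.168.0.21

# Optionally test-mount the export and write a file
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.0.21:/mnt/nfs /mnt/nfs-test
touch /mnt/nfs-test/write-test && ls -l /mnt/nfs-test
umount /mnt/nfs-test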

1.2.2 Deploying nfs-provisioner

On the master1 node, run: kubectl create namespace devops && kubectl apply -f nfs-provisioner.yaml

nfs-provisioner.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: devops
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfs-provisioner-runner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: devops
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-provisioner
  namespace: devops
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-nfs-provisioner
  namespace: devops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-locking-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: devops
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  namespace: devops
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: docker.io/gmoney23/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 192.168.0.21
            - name: NFS_PATH
              value: /mnt/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.21
            path: /mnt/nfs
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: example.com/nfs
#reclaimPolicy: Retain
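
Once applied, check that the provisioner pod is running and the StorageClass exists. The PVC below is only a hypothetical smoke test (the name test-pvc is not part of this setup) to confirm dynamic provisioning works:

kubectl -n devops get pods -l app=nfs-provisioner
kubectl get storageclass nfs

# Throwaway PVC to verify the provisioner creates and binds a PV
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: devops
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 100Mi
EOF
kubectl -n devops get pvc test-pvc    # should report Bound after a few seconds
kubectl -n devops delete pvc test-pvc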

1.3 Installing MySQL

kubectl apply -f deploy.yaml

deploy.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: mysql
  labels:
    app: mysql

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    [client]
    default-character-set=utf8mb4
    [mysql]
    default-character-set=utf8mb4
    [mysqld]
    max_connections=2000
    default-time_zone='+8:00'
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci
    innodb_buffer_pool_size=536870912
    datadir=/var/lib/mysql
    pid-file=/var/run/mysqld/mysqld.pid
    log-error=/var/lib/mysql/error.log
    log-bin=mysqllog
    skip-name-resolve
    lower-case-table-names=1
    log_bin_trust_function_creators=1
  slave.cnf: |
    [client]
    default-character-set=utf8mb4
    [mysql]
    default-character-set=utf8mb4
    [mysqld]
    max_connections=2000
    default-time_zone='+8:00'
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci
    innodb_buffer_pool_size=536870912
    datadir=/var/lib/mysql
    pid-file=/var/run/mysqld/mysqld.pid
    log-error=/var/lib/mysql/error.log
    super-read-only
    skip-name-resolve
    log-bin=mysql-bin
    lower-case-table-names=1
    log_bin_trust_function_creators=1

---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: mysql
  labels:
    app: mysql
type: Opaque
data:
  password: TnNiZzExMTEqQCE= # Nsbg1111*@!
  replicationUser: Y29weQ== #copy
  replicationPassword: TnNiZzExMTEqQCE= #Nsbg1111*@!

---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  clusterIP: None
  ports:
  - name: mysql
    port: 3306

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 2
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: docker.io/library/mysql:5.7.42
        command: 
        - bash
        - "-c"
        - |
          set -ex
          # Extract the ordinal from the pod hostname (e.g. mysql-0 -> 0); exit if it cannot be determined
          ordinal=`cat /etc/hostname | awk -F"-" '{print $2}'` || exit 1
          # Write the server-id into its own config file; the directory is arbitrary (it just has to match the mount below), but keep the file name server-id.cnf
          echo [mysqld] > /etc/mysql/conf.d/server-id.cnf
          # server-id must not be 0, so add 100 to the ordinal to avoid it
          server_id=$((100 + $ordinal))
          echo "server-id=$server_id" >> /etc/mysql/conf.d/server-id.cnf
          if [[ ${ordinal} -eq 0 ]]; then
            # Ordinal 0 means this pod is the master: copy the master config from the ConfigMap into /etc/mysql/conf.d
            cp /mnt/config-map/master.cnf /etc/mysql/conf.d
          else
            # Otherwise copy the slave config from the ConfigMap
            cp /mnt/config-map/slave.cnf /etc/mysql/conf.d
          fi
          echo "ending..."
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        - name: MYSQL_REPLICATION_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationUser
        - name: MYSQL_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationPassword
        volumeMounts:
        - name: conf
          mountPath: /etc/mysql/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      containers:
      - name: mysql
        image: docker.io/library/mysql:5.7.42
        lifecycle:
         postStart:
          exec:
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
              # mysqlInitOk is a marker file created by this script to avoid re-initializing replication on restart
              if [ ! -f mysqlInitOk ]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                # Ideally poll mysql until it accepts connections; a fixed sleep is used here instead
                #until mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "use mysql;SELECT 1;"; do sleep 1; done
                sleep 5s
                echo "Initialize ready"
                # Decide from the pod ordinal whether this is the master or a slave
                pod_seq=`cat /etc/hostname | awk -F"-" '{print $2}'`
                if [ $pod_seq -eq 0 ];then
                  # Create the replication account
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "create user '${MYSQL_REPLICATION_USER}'@'%' identified by '${MYSQL_REPLICATION_PASSWORD}';"
                  # Grant replication privileges
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "grant replication slave on *.* to '${MYSQL_REPLICATION_USER}'@'%' with grant option;"
                  # Reload the privilege tables
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "flush privileges;"
                  # Reset the master binlog so the slave starts from a known position
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "reset master;"
                else
                  # Point the slave at the master
                  # mysql-0.mysql.mysql comes from {pod-name}.{service-name}.{namespace}
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e \
                  "change master to master_host='mysql-0.mysql.mysql',master_port=3306, \
                  master_user='${MYSQL_REPLICATION_USER}',master_password='${MYSQL_REPLICATION_PASSWORD}', \
                  master_log_file='mysqllog.000001',master_log_pos=154;"
                  # Reset the slave state
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "reset slave;"
                  # Start replication
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "start slave;"
                  # Put the slave into read-only mode
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "set global read_only=1;"
                fi
                # Create the marker file so the cluster is not re-initialized
                touch mysqlInitOk
              fi
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        - name: MYSQL_REPLICATION_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationUser
        - name: MYSQL_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationPassword
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        - name: run-mysql
          mountPath: /var/run/mysql
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
        # Liveness probe (exec probes do not expand ${VAR}, so run through bash)
        livenessProbe:
          exec:
            command: ["bash", "-c", 'mysqladmin ping -uroot -p"${MYSQL_ROOT_PASSWORD}"']
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        # Readiness probe
        readinessProbe:
          exec:
            command: ["bash", "-c", 'mysqladmin ping -uroot -p"${MYSQL_ROOT_PASSWORD}"']
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 1
      volumes:
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: nfs
      resources:
        requests:
          storage: 5Gi
  - metadata: 
      name: conf
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: nfs
      resources:
        requests:
          storage: 100Mi
  - metadata: 
      name: run-mysql
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: nfs
      resources:
        requests:
          storage: 100Mi
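
After applying the manifest, confirm that both pods start and all PVCs are bound before checking replication. A minimal sketch of the checks (the plaintext password is the value stored in mysql-secret above):

kubectl -n mysql get statefulset,pods -o wide
kubectl -n mysql get pvc    # the data/conf/run-mysql claims for mysql-0 and mysql-1 should be Bound

# The headless service gives each pod a stable DNS name:
#   mysql-0.mysql.mysql:3306  -> master (writes)
#   mysql-1.mysql.mysql:3306  -> slave  (reads)
kubectl -n mysql exec mysql-0 -- mysql -uroot -p'Nsbg1111*@!' -e "show master status\G"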

1.4 Verifying replication on the slave

(Screenshot: replication status checked on the slave)
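
The same check can be done from the command line. A quick sketch (the database name repl_test is just an example):

# On the slave (mysql-1), Slave_IO_Running and Slave_SQL_Running should both be Yes
kubectl -n mysql exec mysql-1 -- mysql -uroot -p'Nsbg1111*@!' -e "show slave status\G" | grep -E "Running|Master_Host|Behind"

# Optional end-to-end test: write on the master, then read it back on the slave
kubectl -n mysql exec mysql-0 -- mysql -uroot -p'Nsbg1111*@!' -e "create database if not exists repl_test;"
kubectl -n mysql exec mysql-1 -- mysql -uroot -p'Nsbg1111*@!' -e "show databases like 'repl_test';"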
