Deploying single-node MySQL and Nacos on k8s

    • Custom StorageClass
    • Deploying single-node MySQL
    • Deploying single-node Nacos

Custom StorageClass

Since MySQL and Nacos are both stateful services, we need to specify how their storage is provided.

Here we use NFS. You need to set up an NFS server ahead of time and mount the export on the client machines; see the post: NFS server setup.

Manually creating a PV and PVC for every stateful service is tedious, so we use a StorageClass to provision PVs for our PVCs automatically.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system 
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: jmgao1983/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01  # provisioner name, referenced by the StorageClass below
            - name: NFS_SERVER
              value: 192.168.31.200   # NFS server address (replace with your own)
            - name: NFS_PATH
              value: /public   # NFS export directory (replace with your own)
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.200   # NFS server address (replace with your own)
            path: /public   # NFS export directory (replace with your own)

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-ocean # pick any name you like
provisioner: nfs-provisioner-01
# Supported policies: Delete, Retain; default is Delete
reclaimPolicy: Retain

We create a ServiceAccount in the kube-system namespace, plus a ClusterRole that can manage PVs, update PVCs, and read StorageClasses, and then bind the account to the role with a ClusterRoleBinding.

We also deploy an NFS client provisioner in the kube-system namespace. When creating the StorageClass we must specify a provisioner, which names the plugin that supplies PVs, NFS in our case; note that it matches the PROVISIONER_NAME environment variable (nfs-provisioner-01).

Finally we create the StorageClass resource itself. The reclaim policy defaults to Delete; we choose Retain, i.e. manual reclamation.
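
Save all of the manifests above into one file and apply it; the filename here is just an assumption:

kubectl apply -f nfs-storageclass.yaml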

After applying, let's look at the resources we deployed:

[root@MiWiFi-R4CM-srv k8s]# kubectl get sc -n kube-system 
NAME                 PROVISIONER                AGE
nfs-ocean            nfs-provisioner-01         4d1h

[root@MiWiFi-R4CM-srv k8s]# kubectl get ClusterRole 
NAME                                                                   AGE
admin                                                                  4d23h
cluster-admin                                                          4d23h
edit                                                                   4d23h
kubernetes-dashboard                                                   4d23h
nfs-client-provisioner-runner (the one we created)                     4d1h
...
[root@MiWiFi-R4CM-srv k8s]# kubectl get ClusterRoleBinding 
NAME                                                   AGE
cluster-admin                                          4d23h
kubeadm:kubelet-bootstrap                              4d23h
kubeadm:node-autoapprove-bootstrap                     4d23h
kubeadm:node-autoapprove-certificate-rotation          4d23h
kubeadm:node-proxier                                   4d23h
kubernetes-dashboard                                   4d23h
minikube-rbac                                          4d23h
run-nfs-client-provisioner (the one we created)        4d1h
...
[root@MiWiFi-R4CM-srv k8s]# kubectl get deployment -n kube-system
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
coredns              2/2     2            2           4d23h
nfs-provisioner-01   1/1     1            1           4d1h
[root@MiWiFi-R4CM-srv k8s]# kubectl get sc
NAME                 PROVISIONER                AGE
nfs-ocean            nfs-provisioner-01         4d1h
standard (default)   k8s.io/minikube-hostpath   4d23h

Notice that a StorageClass is cluster-scoped, not namespaced: nfs-ocean shows up both with and without -n kube-system.
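
You can confirm this from the API resource list, where storageclasses appears among the non-namespaced resources:

kubectl api-resources --namespaced=false | grep storageclasses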

Deploying single-node MySQL

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql-svc"
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: "123456" # initial MySQL root password; env values must be strings, so quote it
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-volumeclaim
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-volumeclaim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-ocean" # must match the StorageClass we created above
      resources:
        requests:
          storage: 5Gi

You only need to change the MySQL password and the name of the storage class.
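
As the comment in the manifest hints, in real usage the root password belongs in a Secret rather than in plaintext. A minimal sketch, assuming a Secret named mysql-secret with a key root-password (both names are my own choice):

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
stringData:
  root-password: "123456"   # stringData spares you the manual base64 encoding

The MYSQL_ROOT_PASSWORD entry in the StatefulSet would then become:

        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: root-password

As for serviceName, we create the corresponding Service next: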

apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
   - port: 3306
     targetPort: 3306

Because this is for my own project development, I just expose it externally with a NodePort. We'd also better expose a headless Service for other workloads inside the cluster to use (such as the Nacos we deploy next):

apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None

Apply all three YAML files; the filenames below are an assumption, so use whatever you saved them as:
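
kubectl apply -f mysql-statefulset.yaml -f mysql-svc.yaml -f mysql-headless.yaml

We then find that the PVC and PV were created automatically: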

[root@MiWiFi-R4CM-srv mysql]# kubectl get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-volumeclaim-mysql-0   Bound    pvc-e83a928f-2b0b-4a52-a5d7-21622631bc70   5Gi        RWO            nfs-ocean      4d

[root@MiWiFi-R4CM-srv mysql]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
pvc-e83a928f-2b0b-4a52-a5d7-21622631bc70   5Gi        RWO            Retain           Bound    default/mysql-volumeclaim-mysql-0   nfs-ocean               4d

The PVC name actually follows a fixed pattern, <volumeClaimTemplate name>-<pod name>: in mysql-volumeclaim-mysql-0, mysql-volumeclaim is the claim template's name and mysql-0 is the name of my single MySQL pod.

A new directory also appears on the NFS server:

[root@MiWiFi-R4CM-srv public]# pwd
/public
[root@MiWiFi-R4CM-srv public]# ll
total 8
drwxrwxrwx. 7 polkitd root 4096 Apr 22 12:22 default-mysql-volumeclaim-mysql-0-pvc-e83a928f-2b0b-4a52-a5d7-21622631bc70

Naturally, the same directory is visible on the NFS clients as well.
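
At this point you can sanity-check MySQL through the NodePort. The node IP and port below are placeholders for your own values:

kubectl get svc mysql-svc   # note the port mapped to 3306, e.g. 3306:3xxxx/TCP
mysql -h <node-ip> -P <node-port> -uroot -p123456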

Deploying single-node Nacos

Straight to the deployment YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  serviceName: nacos-svc
  replicas: 1
  selector:
    matchLabels:
      app: nacos
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
        - name: nacos
          imagePullPolicy: IfNotPresent 
          image: nacos/nacos-server:1.4.0
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
          env:
            - name: NACOS_REPLICAS
              value: "1"
            - name: SPRING_DATASOURCE_PLATFORM
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: spring.platform
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.host
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: MODE
              value: "standalone"
            - name: NACOS_SERVERS
              value: "nacos-headless.default.svc.cluster.local:8848"
            - name: nacos.naming.data.warmup
              value: "false"
          volumeMounts:
            - name: nacos-datadir
              mountPath: /home/nacos/data
            - name: nacos-logdir
              mountPath: /home/nacos/logs
  volumeClaimTemplates:
    - metadata:
        name: nacos-datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs-ocean"
        resources:
          requests:
            storage: 2Gi
    - metadata:
        name: nacos-logdir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs-ocean"
        resources:
          requests:
            storage: 2Gi

Because this is a single node, replicas is 1 and MODE is standalone.

In a moment we will also deploy a ConfigMap named nacos-cm that tells Nacos where our MySQL is. The SPRING_DATASOURCE_PLATFORM entry must be configured, i.e. set to mysql; otherwise Nacos keeps using its default embedded Derby database to store configuration data and Nacos user/role information.

As for serviceName, we create that Service shortly too. From the NACOS_SERVERS value nacos-headless.default.svc.cluster.local:8848 you can see that we will also need a headless Service named nacos-headless.

The other environment variable, nacos.naming.data.warmup=false, is also required; it keeps single-node service discovery from failing.

For the mounts, I chose to persist data and logs. As for application.properties: its main job is to point Nacos at MySQL, and since we already do that via environment variables, I don't mount it. If you need further customization, you can also put the config file in a ConfigMap and mount it, as sketched below.
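
A minimal sketch of that option, assuming a ConfigMap named nacos-conf (my own name) and the nacos-server image's default config directory /home/nacos/conf:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-conf
data:
  application.properties: |
    # your customized settings here

Then, in the StatefulSet's pod spec:

          volumeMounts:
            - name: nacos-conf
              mountPath: /home/nacos/conf/application.properties
              subPath: application.properties   # mount just this one file
      volumes:
        - name: nacos-conf
          configMap:
            name: nacos-conf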

Anything related to cluster peer discovery is deliberately left out.

(Friendly reminder: never use the Tab key when writing YAML!!!)

Below is nacos-cm:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  spring.platform: "mysql"
  mysql.db.name: "nacos_config"
  mysql.db.host: "mysql-headless.default.svc.cluster.local"
  mysql.port: "3306"
  mysql.user: "root"
  mysql.password: "123456"

Create the database yourself (mine is named nacos_config), then create the tables Nacos needs in it from the official schema: nacos-mysql
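
One way to do this straight from the MySQL pod, assuming you saved the official schema locally as nacos-mysql.sql:

kubectl exec -it mysql-0 -- mysql -uroot -p123456 -e "CREATE DATABASE nacos_config DEFAULT CHARACTER SET utf8mb4;"
kubectl exec -i mysql-0 -- mysql -uroot -p123456 nacos_config < nacos-mysql.sql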

The MySQL host is just the headless Service we exposed earlier, addressed by the standard in-cluster DNS pattern <service-name>.<namespace>.svc.cluster.local; it, too, is predictable.
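
If you want to double-check that the name resolves, a throwaway pod will do; busybox:1.28 is simply a convenient image that ships nslookup:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mysql-headless.default.svc.cluster.local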

Exposing it inside the cluster:

apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/publishNotReadyAddresses: "true"
spec:
  ports:
    - port: 8848
      name: server
      targetPort: 8848
  clusterIP: None
  selector:
    app: nacos

On whether to use the service.alpha.kubernetes.io/publishNotReadyAddresses: "true" annotation, see this issue.

For external access you could use the headless Service plus an Ingress; since this is just for my own local use, I expose it with a NodePort:

apiVersion: v1
kind: Service
metadata:
  name: nacos-svc
spec:
  type: NodePort
  selector:
    app: nacos
  ports:
   - port: 8848
     targetPort: 8848

Apply all of the YAML above and you can reach the console at ip:port/nacos (the default login is nacos/nacos).
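
The port is the NodePort assigned to nacos-svc; the mapping below is only an example:

kubectl get svc nacos-svc   # e.g. 8848:31234/TCP  ->  http://<node-ip>:31234/nacos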
