[K8S Series 8] K8S in Practice: Deploying a spring-cloud + nacos + MySQL Stack

What you learn on paper always feels shallow; to truly understand, you have to do it yourself. The previous posts in this series covered building a K8S cluster, its components and basic concepts, networking, and storage. This post is hands-on: deploying a spring-cloud + nacos + MySQL stack on K8S.

1. Creating a Namespace

For simplicity, earlier posts used the default namespace (default). Everything in this post is deployed into the dev namespace, so create it first; namespace.yaml is in Appendix 1.

# 01 Create the namespace
# kubectl apply -f namespace.yaml
namespace/dev created

# 02 List existing namespaces
# kubectl get ns
NAME              STATUS   AGE
default           Active   53d
dev               Active   5s
ingress-nginx     Active   17d
kube-node-lease   Active   53d
kube-public       Active   53d
kube-system       Active   53d

2. Deploying nfs-client-provisioner

Create an nfs-client-provisioner ServiceAccount in the dev namespace first; otherwise later steps fail with error looking up service account dev/nfs-client-provisioner: serviceaccount "nfs-client-provisioner" not found. With the ServiceAccount in place, create the nfs-client-provisioner Deployment and the StorageClass in dev. See Appendices 2, 3, and 4 for rbac.yaml, deployment.yaml, and class.yaml.

# 01 Replace the default namespace with dev and create the ServiceAccount in dev
# sed -i'' "s/namespace:.*/namespace: dev/g" rbac.yaml
# kubectl apply -f rbac.yaml
# kubectl get ServiceAccount -n dev
NAME                     SECRETS   AGE
default                  1         3d19h
nfs-client-provisioner   1         47h
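The sed call above rewrites the value of every `namespace:` key in rbac.yaml. A minimal sketch of what that substitution does, run on an inline string instead of the real file:

```shell
# Illustration only: rewrite any "namespace: <x>" to "namespace: dev",
# exactly as the sed command does to rbac.yaml
printf 'metadata:\n  namespace: default\n' | sed 's/namespace:.*/namespace: dev/g'
# output:
# metadata:
#   namespace: dev
```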

# 02 Create the nfs-client-provisioner
# kubectl apply -f deployment.yaml
# kubectl get deployment -n dev
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           2d2h

# 03 Create the StorageClass
# kubectl apply -f class.yaml
# kubectl get StorageClass -n dev
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  3d4h

3. Deploying MySQL

3.1 Creating the Secret

A Secret is an object for storing sensitive data such as passwords or keys. Using a Secret means the confidential data does not have to be baked into application code.
Here it stores the MySQL password, consumed later as an environment variable; secret.yaml is in Appendix 5.

# echo -n "123456" | base64
MTIzNDU2
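Before putting the encoded value into secret.yaml, the encoding can be round-tripped as a sanity check (assuming a base64 that accepts the `-d` decode flag, as GNU coreutils and recent BSDs do):

```shell
# Encode the password, then decode it back to verify the round trip
encoded=$(echo -n "123456" | base64)
echo "$encoded"              # MTIzNDU2
echo "$encoded" | base64 -d  # 123456
```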

# kubectl apply -f secret.yaml
secret/secret created
# kubectl get secret -n dev
NAME                  TYPE                                  DATA   AGE
default-token-xxk4j   kubernetes.io/service-account-token   3      5h9m
secret                Opaque
3.2 Deploying MySQL

mysql.yaml is in Appendix 6. After deploying, exec into the mysql pod to run a quick check that the MySQL service is up.

# 01 Deploy MySQL
# kubectl apply -f mysql.yaml
service/mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/mysql created

# 02 Check the deployed PV and PVC, the mounted directory, the Deployment, and the Service
# kubectl get pv,pvc -n dev
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3   2Gi        RWO            Delete           Bound    dev/mysql-pv-claim   nfs-client              8s

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mysql-pv-claim   Bound    pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3   2Gi        RWO            nfs-client     8s

# ls /nfs/data/dev-mysql-pv-claim/
auto.cnf  ib_buffer_pool  ibdata1  ib_logfile0  ib_logfile1  ibtmp1  mysql  performance_schema  sys
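The directory name dev-mysql-pv-claim comes from the StorageClass's pathPattern, `${.PVC.namespace}-${.PVC.name}` (Appendix 4). A rough sketch of the expansion the provisioner performs for this PVC:

```shell
# Illustration only: how pathPattern "${.PVC.namespace}-${.PVC.name}"
# expands for the mysql PVC in the dev namespace
ns="dev"; pvc="mysql-pv-claim"
echo "${ns}-${pvc}"   # dev-mysql-pv-claim
```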

# kubectl get deployment -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
mysql   1/1     1            1           47s

# kubectl get svc -n dev
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
mysql   ClusterIP   None         <none>        3306/TCP   55s

4. Deploying nacos

nacos is deployed in cluster mode with MySQL as the store for configuration data. nacos does support an embedded database, but MySQL makes it easier to inspect the data. The nacos image is v2.0.3; MySQL is 5.7.
The official deployment doc, Kubernetes Nacos, is very terse. Following it to the letter may go smoothly, but the moment you change anything, things fall apart. I had to dig through docs and source code and stepped on plenty of mines along the way. Only near the end did I notice that the project now recommends nacos-operator, but by then I was too worn out to try it.

Main differences from the official doc

  1. The namespace is dev instead of the default default, which requires a number of changes;
  2. The peer-finder scaling plugin is not used:
  • peer-finder itself is an experimental project and upstream is looking for a replacement; see peer-finder
  • if clusterIP is not set to None, the plugin picks up the cluster IP instead of the IPs of the individual pods; if you do not want None, the only current option is to drop the peer-finder plugin;
  • without peer-finder, NACOS_SERVERS must be set by hand, and since the server list is hard-coded, scaling out or in is problematic;
  3. The MySQL image is the official mysql:5.7 instead of nacos/mysql.
4.1 Configuring MySQL

Create the nacos database and user, grant privileges, and initialize the nacos schema; the schema is at https://github.com/alibaba/nacos/blob/develop/distribution/conf/nacos-mysql.sql

# 01 Enter the mysql pod
# kubectl get pods -n dev
NAME                                     READY   STATUS    RESTARTS       AGE
mysql-d45f868dd-qb5m2                    1/1     Running   11 (45h ago)   3d

# kubectl exec -it mysql-d45f868dd-qb5m2 -n dev -- /bin/bash

# 02 Log in as root, create the nacos database and user, and grant privileges
# mysql -u root -p
mysql> create database nacos;
Query OK, 1 row affected (0.01 sec)

mysql> create user 'nacos'@'%' identified by 'nacos';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on nacos.* TO 'nacos'@'%';

# 03 Log in as nacos and initialize the schema (see https://github.com/alibaba/nacos/blob/develop/distribution/conf/nacos-mysql.sql)
# mysql -u nacos -p
mysql> use nacos;
4.2 Deploying nacos

nacos-pvc-nfs.yaml is in Appendix 7; diff it against the official nacos-pvc-nfs.yaml to see the changes.
StatefulSet
Note that nacos is deployed as a StatefulSet, which suits stateful applications:
1. Each pod gets its own storage: volumeClaimTemplates creates a dedicated volume per pod to hold that pod's state.
2. Pod names are stable.
3. The Service has no ClusterIP (a headless Service), so there is no load balancing; DNS returns pod names, which is why they must be stable. On top of the headless Service, the StatefulSet gives every pod replica its own DNS name: pod-name.headless-service-name.namespace.svc.cluster.local
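For this deployment the pattern yields one stable DNS name per replica; a quick sketch of how they are composed:

```shell
# Compose the per-pod DNS names for the two nacos replicas:
# <pod-name>.<headless-service>.<namespace>.svc.cluster.local
svc="nacos-headless"; ns="dev"
for i in 0 1; do
  echo "nacos-${i}.${svc}.${ns}.svc.cluster.local"
done
# -> nacos-0.nacos-headless.dev.svc.cluster.local
# -> nacos-1.nacos-headless.dev.svc.cluster.local
```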
ConfigMap
A ConfigMap stores configuration, separating config files from the image so the containerized application stays portable.

# kubectl apply -f nacos-pvc-nfs.yaml 

# kubectl get pv,pvc -n dev
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3   2Gi        RWO            Delete           Terminating   dev/mysql-pv-claim   nfs-client              4d6h
persistentvolume/pvc-baffdbd7-2fcd-42eb-9f35-c9c9d6e46ae7   10Gi       RWX            Delete           Bound         dev/data-nacos-1     nfs-client              3d4h
persistentvolume/pvc-e632f052-7637-401b-b464-4bb80217413e   10Gi       RWX            Delete           Bound         dev/data-nacos-0     nfs-client              3d4h

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-nacos-0     Bound    pvc-e632f052-7637-401b-b464-4bb80217413e   10Gi       RWX            nfs-client     3d4h
persistentvolumeclaim/data-nacos-1     Bound    pvc-baffdbd7-2fcd-42eb-9f35-c9c9d6e46ae7   10Gi       RWX            nfs-client     3d4h
persistentvolumeclaim/mysql-pv-claim   Bound    pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3   2Gi        RWO            nfs-client     4d6h

# kubectl get pods -n dev -o wide
NAME                                     READY   STATUS    RESTARTS        AGE    IP               NODE   NOMINATED NODE   READINESS GATES
mysql-d45f868dd-qb5m2                    1/1     Running   11 (3d3h ago)   4d6h   10.244.190.110   w1     <none>           <none>
nacos-0                                  1/1     Running   0               32m    10.244.80.243    w2     <none>           <none>
nacos-1                                  1/1     Running   0               32m    10.244.190.112   w1     <none>           <none>
nfs-client-provisioner-dd7474448-r4ckf   1/1     Running   2 (3d3h ago)    3d5h   10.244.190.113   w1     <none>           <none>

# kubectl get StatefulSet -n dev
NAME    READY   AGE
nacos   2/2     30m

# kubectl get svc -n dev
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                               AGE
mysql            ClusterIP   None         <none>        3306/TCP                              4d6h
nacos-headless   ClusterIP   None         <none>        8848/TCP,9848/TCP,9849/TCP,7848/TCP   30m
4.3 Configuring nginx-ingress for external access

For nginx-ingress details see [K8S Series 5] Ingress and Ingress Controller; ingress.yaml is in Appendix 8. To reach the cluster via the domain foo.mydomain.com, add the following to /etc/hosts outside the cluster:

192.168.0.61  foo.mydomain.com

Open http://foo.mydomain.com:30434/nacos to view the console (ingress-nginx is deployed as a NodePort exposing port 30434). The cluster shows two nodes:
nacos-0.nacos-headless.dev.svc.cluster.local
nacos-1.nacos-headless.dev.svc.cluster.local

(screenshot: the nacos admin console)
4.4 Possible problems
  1. no such host
    2022/05/21 11:06:36 lookup nacos-headless on 10.96.0.10:53: no such host
    Caused by clusterIP not being set to None.

  2. UnknownHostException: jmenv.tbsite.net
    Caused by: com.alibaba.nacos.api.exception.NacosException: java.net.UnknownHostException: jmenv.tbsite.net
    This occurs when the peer-finder plugin has been removed but NACOS_SERVERS is not set.
    /home/nacos/bin/docker-startup.sh contains the snippet below: if PLUGINS_DIR exists (i.e. the peer-finder plugin is in use), it runs plugin.sh; otherwise it writes NACOS_SERVERS into cluster.conf.

  if [[ ! -d "${PLUGINS_DIR}" ]]; then
    echo "" >"$CLUSTER_CONF"
    for server in ${NACOS_SERVERS}; do
      echo "$server" >>"$CLUSTER_CONF"
    done
  else
    bash $PLUGINS_DIR/plugin.sh
    sleep 30
  fi

After deployment, /home/nacos/conf/cluster.conf contains exactly the value of NACOS_SERVERS:

# kubectl exec -it nacos-0 -n dev -- cat /home/nacos/conf/cluster.conf
#2022-05-21T15:35:45.067
nacos-0.nacos-headless.dev.svc.cluster.local:8848
nacos-1.nacos-headless.dev.svc.cluster.local:8848

5. Deploying the Java applications

The Java side consists of two modules, spring-cloud-provider-example and spring-cloud-consumer-example:

  1. spring-cloud-provider-example accesses MySQL through MyBatis-Plus, registers with nacos, and exposes an HTTP service;
  2. spring-cloud-consumer-example calls spring-cloud-provider-example over RPC and exposes its own HTTP service
  • spring-cloud + nacos reference example: nacos-spring-cloud-discovery-example
  • MyBatis-Plus reference example: Quick Start
5.1 Core code

application.yaml and NacosProviderApplication.java of spring-cloud-provider-example are shown below; for spring-cloud-consumer-example see Appendices 9 and 10.
application.yaml mainly holds the nacos and MySQL settings:

  • the nacos address is nacos-0.nacos-headless.dev.svc.cluster.local:8848,nacos-1.nacos-headless.dev.svc.cluster.local:8848;
  • the MySQL address is mysql.dev.svc.cluster.local:3306;
  • the database password is injected via the ${MYSQL_PASSWORD} environment variable.
# application.yaml
server:
  port: 8070
spring:
  application:
    name: spring-cloud-provider
  cloud:
    nacos:
      discovery:
        server-addr: nacos-0.nacos-headless.dev.svc.cluster.local:8848,nacos-1.nacos-headless.dev.svc.cluster.local:8848

  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://mysql.dev.svc.cluster.local:3306/demo?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&allowPublicKeyRetrieval=true
    username: root
    password: ${MYSQL_PASSWORD}
  main:
    allow-bean-definition-overriding: true
// NacosProviderApplication.java
@SpringBootApplication
@EnableDiscoveryClient
@MapperScan("com.liuil.k8s.example.spring.cloud")
public class NacosProviderApplication {

    public static void main(String[] args) {
        SpringApplication.run(NacosProviderApplication.class, args);
    }

    private static ObjectMapper objectMapper = new ObjectMapper();

    @Autowired
    private UserMapper userMapper;

    @RestController
    public class UserController {

        @RequestMapping(value = "/user/{id}", method = RequestMethod.GET)
        public String find(@PathVariable int id) {
            return findById(id);
        }

        private String findById(int id) {
            User user = userMapper.selectById(id);
            return serialize(user);
        }

        private <T> String serialize(T t) {
            try {
                return objectMapper.writeValueAsString(t);
            } catch (JsonProcessingException e) {
                return null;
            }
        }
    }
}
5.2 Initializing MySQL
# 01 Create the demo database
create database demo;
use demo;
# 02 Create the table
CREATE TABLE user
(
    id BIGINT(20) NOT NULL COMMENT 'primary key',
    name VARCHAR(30) NULL DEFAULT NULL COMMENT 'name',
    age INT(11) NULL DEFAULT NULL COMMENT 'age',
    email VARCHAR(50) NULL DEFAULT NULL COMMENT 'email',
    PRIMARY KEY (id)
);

# 03 Insert sample data
INSERT INTO user (id, name, age, email) VALUES
(1, 'Jone', 18, '[email protected]'),
(2, 'Jack', 20, '[email protected]'),
(3, 'Tom', 28, '[email protected]'),
(4, 'Sandy', 21, '[email protected]'),
(5, 'Billie', 24, '[email protected]');

5.3 Building the images

The images could be pushed to a public registry such as Alibaba Cloud's, or to a self-hosted private one, but for this demo it is enough to build them directly on the work01 and work02 machines.
Copy spring-cloud-provider-example-1.0.0.jar and spring-cloud-consumer-example-1.0.0.jar to the 192.168.0.61 and 192.168.0.62 servers, place them in the same directory as the Dockerfiles, and build. ProviderDockerfile and ConsumerDockerfile are in Appendices 11 and 12.

# 01 Copy spring-cloud-consumer-example-1.0.0.jar to w1; the other copies are analogous with small changes
scp -P 22231 spring-cloud-consumer-example/target/spring-cloud-consumer-example-1.0.0.jar     [email protected]:/root/spring-mysql-nacos/spring-cloud-consumer-example-1.0.0.jar

# 02 Build the images
 docker build -f ProviderDockerfile -t spring-cloud-provider:v0.0.1 .
 docker build -f ConsumerDockerfile -t spring-cloud-consumer:v0.0.1 .

# 03 List the images
# docker images |grep spring-cloud
spring-cloud-provider                                            v0.0.1              96e357196894        About an hour ago   127MB
spring-cloud-consumer                                            v0.0.1              1cae051e2efd        About an hour ago   121MB
5.4 Deploying

spring-cloud-provider.yaml and spring-cloud-consumer.yaml are in Appendices 13 and 14.

# 01 Deploy spring-cloud-provider and spring-cloud-consumer
# kubectl apply -f spring-cloud-provider.yaml
# kubectl apply -f spring-cloud-consumer.yaml

# 02 Check the pods and services
# kubectl get pods -n dev -o wide
NAME                                     READY   STATUS    RESTARTS        AGE    IP               NODE   NOMINATED NODE   READINESS GATES
mysql-d45f868dd-qb5m2                    1/1     Running   11 (3d4h ago)   4d7h   10.244.190.110   w1     <none>           <none>
nacos-0                                  1/1     Running   0               82m    10.244.80.243    w2     <none>           <none>
nacos-1                                  1/1     Running   0               82m    10.244.190.112   w1     <none>           <none>
nfs-client-provisioner-dd7474448-r4ckf   1/1     Running   2 (3d4h ago)    3d5h   10.244.190.113   w1     <none>           <none>
spring-cloud-consumer-78ddf98844-j42g9   1/1     Running   0               40m    10.244.80.246    w2     <none>           <none>
spring-cloud-provider-9576d8464-hkfpp    1/1     Running   0               29m    10.244.80.247    w2     <none>           <none>
# kubectl get svc -n dev
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
mysql                   ClusterIP   None             <none>        3306/TCP                              4d6h
nacos-headless          ClusterIP   None             <none>        8848/TCP,9848/TCP,9849/TCP,7848/TCP   52m
spring-cloud-consumer   ClusterIP   10.107.212.144   <none>        8080/TCP                              10m
spring-cloud-provider   ClusterIP   10.108.18.158    <none>        8070/TCP                              7s

# 03 Test that spring-cloud-provider and spring-cloud-consumer work
# 10.108.18.158 is the spring-cloud-provider service IP; 10.107.212.144 is the spring-cloud-consumer service IP.
# curl 10.108.18.158:8070/user/1
{"id":1,"name":"Jone","age":18,"email":"[email protected]"}

# curl 10.107.212.144:8080/consumer/user/1
{"id":1,"name":"Jone","age":18,"email":"[email protected]"}

Log in to the nacos console and check the service list: both spring-cloud-provider and spring-cloud-consumer have registered successfully.

5.5 Configuring Ingress

Configure the Ingress so the consumer is reachable from outside the cluster by adding the following to its rules; the full file is in Appendix 8:

      - path: /consumer
        pathType: Prefix
        backend:
          service:
            name: spring-cloud-consumer
            port:
              number: 8080

Finally, access foo.mydomain.com:30434/consumer/user/1, which returns the same user record.

With that, the spring-cloud + nacos + MySQL stack is fully deployed on K8S.

References

1. Example: Deploying WordPress and MySQL with Persistent Volumes
2.Kubernetes Nacos

Appendices

1. namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
   name: dev
   labels:
     name: dev
2. rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: dev
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: dev
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: dev
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3. deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  # set the namespace to dev
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            # replace with the actual NFS server address
            - name: NFS_SERVER
              value: 192.168.0.51
            # replace with the actual NFS path
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.51
            path: /nfs/data


4. class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  namespace: dev
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's PROVISIONER_NAME env
parameters:
  # path template for the provisioned directories
  pathPattern: ${.PVC.namespace}-${.PVC.name}
  archiveOnDelete: "false"
5. secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret
  namespace: dev
data:
  password: MTIzNDU2
6. mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: dev
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: dev
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: dev
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
        tier: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secret
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
7. nacos-pvc-nfs.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: dev
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port kept for 1.4.x compatibility
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: dev
data:
  mysql.service.host: "mysql.dev.svc.cluster.local"
  mysql.db.name: "nacos"
  mysql.port: "3306"
  mysql.user: "nacos"
  mysql.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: dev
spec:
  serviceName: nacos-headless
  replicas: 2
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      #initContainers:
      #  - name: peer-finder-plugin-install
      #    image: nacos/nacos-peer-finder-plugin:1.1
      #    imagePullPolicy: Always
      #    volumeMounts:
      #      - mountPath: /home/nacos/plugins/peer-finder
      #        name: data
      #        subPath: peer-finder
      containers:
        - name: nacos
          imagePullPolicy: Always
          image: nacos/nacos-server:v2.0.3
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "2"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: DOMAIN_NAME
              value: "cluster.local"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.service.host
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            #- name: SPRING_DATASOURCE_PLATFORM
            #  value: "mysql"
            #- name: MYSQL_SERVICE_DB_PARAM
            #  value: "characterEncoding=utf8&connectTimeout=10000&socketTimeout=30000&autoReconnect=true&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true"
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: NACOS_SERVERS
              value: "nacos-0.nacos-headless.dev.svc.cluster.local:8848 nacos-1.nacos-headless.dev.svc.cluster.local:8848"
          volumeMounts:
            #- name: data
            #  mountPath: /home/nacos/plugins/peer-finder
            #  subPath: peer-finder
            - name: data
              mountPath: /home/nacos/data
              subPath: data
            - name: data
              mountPath: /home/nacos/logs
              subPath: logs
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: "nfs-client"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 10Gi
  selector:
    matchLabels:
      app: nacos

8. ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nacos-headless
            port:
              number: 8848
      - path: /consumer
        pathType: Prefix
        backend:
          service:
            name: spring-cloud-consumer
            port:
              number: 8080
9. spring-cloud-consumer-example application.yaml
server:
  port: 8080
spring:
  application:
    name: spring-cloud-consumer
  cloud:
    nacos:
      discovery:
        server-addr: nacos-headless.dev.svc.cluster.local:8848

10.spring-cloud-consumer-example NacosConsumerApplication.java
@SpringBootApplication
@EnableDiscoveryClient
public class NacosConsumerApplication {

    @LoadBalanced
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(NacosConsumerApplication.class, args);
    }

    @RestController
    public class TestController {

        private final RestTemplate restTemplate;

        @Autowired
        public TestController(RestTemplate restTemplate) {this.restTemplate = restTemplate;}

        @RequestMapping(value = "consumer/user/{id}", method = RequestMethod.GET)
        public String echo(@PathVariable int id) {
            return restTemplate.getForObject("http://spring-cloud-provider/user/" + id, String.class);
        }
    }
}
11. ProviderDockerfile
FROM openjdk:8-jre-alpine
COPY spring-cloud-provider-example-1.0.0.jar /spring-cloud-provider.jar
ENTRYPOINT ["java","-jar","/spring-cloud-provider.jar"]
12. ConsumerDockerfile
FROM openjdk:8-jre-alpine
COPY spring-cloud-consumer-example-1.0.0.jar /spring-cloud-consumer.jar
ENTRYPOINT ["java","-jar","/spring-cloud-consumer.jar"]
13. spring-cloud-provider.yaml
# Deploy the pod via a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-cloud-provider
  namespace: dev
spec:
  selector:
    matchLabels:
      app: spring-cloud-provider
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-cloud-provider
    spec:
      containers:
      - name: spring-cloud-provider
        image: spring-cloud-provider:v0.0.1
        ports:
        - containerPort: 8070
        env:
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secret
              key: password
---
# Service for the pod
apiVersion: v1
kind: Service
metadata:
  name: spring-cloud-provider
  namespace: dev
spec:
  ports:
  - port: 8070
    protocol: TCP
    targetPort: 8070
  selector:
    app: spring-cloud-provider
14. spring-cloud-consumer.yaml
# Deploy the pod via a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-cloud-consumer
  namespace: dev
spec:
  selector:
    matchLabels:
      app: spring-cloud-consumer
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-cloud-consumer
    spec:
      containers:
      - name: spring-cloud-consumer
        image: spring-cloud-consumer:v0.0.1
        ports:
        - containerPort: 8080
---
# Service for the pod
apiVersion: v1
kind: Service
metadata:
  name: spring-cloud-consumer
  namespace: dev
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: spring-cloud-consumer
