【K8S in Action】Super-Detailed Tutorial (Part 3)

1. Storage

1.1 NFS as the Default Storage

Here I only demonstrate NFS as the default storage for K8S; for the other options, see 【Storage Classes】.

1.1.1 Install the NFS Service

Install the NFS utilities on every machine.

On all machines:
[root@master ~]# yum install -y nfs-utils

All of the following is done on the master node
[root@master ~]# echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
[root@master ~]# mkdir -p /nfs/data
Enable at boot and start immediately
[root@master ~]# systemctl enable rpcbind --now
[root@master ~]# systemctl enable nfs-server --now
Make the export configuration take effect
[root@master ~]# exportfs -r
Check that the configuration took effect
[root@master ~]# exportfs

Configure the NFS client (optional)

View the directories shared by the NFS server
[root@node1 ~]# showmount -e 172.168.200.130

Create the mount point and mount it
[root@node1 ~]# mkdir -p /nfs/data
[root@node1 ~]# mount -t nfs 172.168.200.130:/nfs/data /nfs/data
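
As a quick sanity check (optional, and assuming the mount above succeeded), write a file through the client mount and confirm it shows up on the server side:

On node1, write a test file through the NFS mount
[root@node1 ~]# echo "hello nfs" > /nfs/data/test.txt
On master, the same file should appear in the exported directory
[root@master ~]# cat /nfs/data/test.txt
hello nfs
[root@master ~]# rm -f /nfs/data/test.txt
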
1.1.2 Configure the Default Storage Class

Run the following command on the master node:

[root@master ~]# kubectl create -f default-storage.yaml

The default-storage.yaml file contains the following:

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive (back up) the PV's contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: master ## your NFS server address (hostname or IP)
            - name: NFS_PATH
              value: /nfs/data  ## directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: master  ## your NFS server address (hostname or IP)
            path: /nfs/data  ## directory exported by the NFS server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

The image k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 may not be pullable from your network. If so, use this 【offline image】 (extraction code: u2hi) and run the following on every machine:

[root@node1 ~]# docker load -i nfs-subdir-external-provisioner-v4.0.2.tar
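
Once the provisioner Pod is Running, you can verify the default StorageClass and give dynamic provisioning a quick test. This is a minimal sketch; the PVC name test-claim is just an example:

Check that nfs-storage is marked "(default)"
[root@master ~]# kubectl get storageclass
Create a test PVC that omits storageClassName, so the default class is used
[root@master ~]# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
The PVC should reach Bound, with a PV directory provisioned automatically under /nfs/data
[root@master ~]# kubectl get pvc test-claim
Clean up; with archiveOnDelete: "true" the backing directory is archived rather than deleted
[root@master ~]# kubectl delete pvc test-claim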

1.2 PV and PVC

PV volumes are resources in the cluster. PVC claims are requests for those resources and also act as claim checks against them.

1.2.1 PV

A PersistentVolume (PV) is a piece of storage in the cluster, provisioned in advance by an administrator or dynamically through a StorageClass.
The pv.yaml manifest contains the following:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01 #directory shared over NFS
    server: 172.168.200.130 #the NFS server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.168.200.130 #the NFS server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.168.200.130 #the NFS server

1.2.1.1 Static Provisioning

Create the following directories on the NFS server
[root@master pv-pvc]# mkdir -p /nfs/data/01 /nfs/data/02 /nfs/data/03
[root@master pv-pvc]# ll /nfs/data/
total 0
drwxr-xr-x. 2 root root 6 Jul  1 14:05 01
drwxr-xr-x. 2 root root 6 Jul  1 14:05 02
drwxr-xr-x. 2 root root 6 Jul  1 14:05 03

Create the PVs
[root@master pv-pvc]# kubectl create -f pv.yaml 
persistentvolume/pv01-10m created
persistentvolume/pv02-1gi created
persistentvolume/pv03-3gi created
View the PVs we just created
[root@master pv-pvc]# kubectl get pv 
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available           nfs                     11s
pv02-1gi   1Gi        RWX            Retain           Available           nfs                     11s
pv03-3gi   3Gi        RWX            Retain           Available           nfs                     10s

1.2.2 PVC

A PersistentVolumeClaim (PVC) expresses a user's request for storage; conceptually it is similar to a Pod.
The pvc.yaml manifest contains the following:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: temp-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs

1.2.2.1 Binding a PVC to a PV

Applying pvc.yaml claims a PV, and the 200Mi request ends up bound to the 1Gi volume. The binding rule avoids waste: the claim binds to the smallest available PV whose capacity is >= what the PVC requests (10M is too small, 3Gi would be wasteful, so 1Gi wins).

Create the PVC and let it bind to a PV
[root@master pv-pvc]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/temp-pvc created

Checking the resources confirms the prediction: temp-pvc picked the 1Gi PV
[root@master pv-pvc]# kubectl get pv,pvc
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv01-10m   10M        RWX            Retain           Available                      nfs                     4m40s
persistentvolume/pv02-1gi   1Gi        RWX            Retain           Bound       default/temp-pvc   nfs                     4m40s
persistentvolume/pv03-3gi   3Gi        RWX            Retain           Available                      nfs                     4m39s

NAME                             STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/temp-pvc   Bound    pv02-1gi   1Gi        RWX            nfs            33s
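
A side note on the Retain reclaim policy shown above: deleting a PVC does not make its PV reusable. A quick sketch of what would happen (skip it if you are following along, since the next step still needs temp-pvc):

[root@master pv-pvc]# kubectl delete pvc temp-pvc
With Retain, the PV then shows Released rather than Available; an administrator must
clean up the data and recreate the PV object before it can be bound again
[root@master pv-pvc]# kubectl get pv pv02-1gi
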
1.2.2.2 Using a PVC from a Pod

The nginx-pvc.yaml manifest contains the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-test-pvc
  name: nginx-test-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test-pvc
  template:
    metadata:
      labels:
        app: nginx-test-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: temp-pvc #the PVC we created in the previous step

Now for the demonstration:

Initial state of the environment
[root@master ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nfs-client-provisioner-846cc9c4-4687x      1/1     Running   0          26h
kube-system   calico-kube-controllers-6d9cdcd744-tcjnc   1/1     Running   0          28h
kube-system   calico-node-pbmzt                          1/1     Running   0          28h
kube-system   calico-node-w7dd4                          1/1     Running   0          28h
kube-system   coredns-5897cd56c4-7fgpb                   1/1     Running   0          28h
kube-system   coredns-5897cd56c4-gzbk2                   1/1     Running   0          28h
kube-system   etcd-master                                1/1     Running   0          28h
kube-system   kube-apiserver-master                      1/1     Running   0          28h
kube-system   kube-controller-manager-master             1/1     Running   0          28h
kube-system   kube-proxy-9p7vg                           1/1     Running   0          28h
kube-system   kube-proxy-c5cdp                           1/1     Running   0          28h
kube-system   kube-scheduler-master                      1/1     Running   0          28h

Create the Pods, which claim the PVC
[root@master pv-pvc]# kubectl apply -f nginx-pvc.yaml 
deployment.apps/nginx-test-pvc created
Wait a moment, then check that everything is Running
[root@master pv-pvc]# kubectl get pod -A -owide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE    IP                NODE     NOMINATED NODE   READINESS GATES
default       nfs-client-provisioner-846cc9c4-4687x      1/1     Running   0          26h    192.168.166.129   node1    <none>           <none>
default       nginx-test-pvc-54f98f8c6-4l9g7             1/1     Running   0          2m8s   192.168.166.163   node1    <none>           <none>
default       nginx-test-pvc-54f98f8c6-bt88x             1/1     Running   0          2m8s   192.168.166.162   node1    <none>           <none>
......
kube-system   kube-scheduler-master                      1/1     Running   0          28h    172.168.200.130   master   <none>           <none>


Accessing nginx returns a 403. This is because the mounted /nfs/data/02 contains no index.html yet.
[root@master pv-pvc]# curl 192.168.166.163
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
[root@master pv-pvc]# 

Create index.html and the request now succeeds
[root@master pv-pvc]# echo "欢迎来到德莱联盟" >> /nfs/data/02/index.html
[root@master pv-pvc]# curl 192.168.166.163
欢迎来到德莱联盟
[root@master pv-pvc]# curl 192.168.166.162
欢迎来到德莱联盟

Note: all replicas of a stateless service use the same PVC, so they serve the same content.
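
To confirm that the replicas really share one NFS-backed volume, write a file from inside one Pod and read it from the other. A sketch using the Pod names from the listing above (yours will differ):

[root@master pv-pvc]# kubectl exec nginx-test-pvc-54f98f8c6-4l9g7 -- sh -c 'echo shared > /usr/share/nginx/html/shared.txt'
[root@master pv-pvc]# kubectl exec nginx-test-pvc-54f98f8c6-bt88x -- cat /usr/share/nginx/html/shared.txt
shared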

1.3 ConfigMap

A ConfigMap mainly holds configuration that Pods need and that changes occasionally, such as environment variables and command-line arguments.
Create a ConfigMap for Redis:

View the help for the configmap creation command
[root@master confMap]# kubectl create configmap --help
[root@master confMap]# kubectl create configmap my-redis-conf --from-file=redis.conf
error: error reading redis.conf: no such file or directory
[root@master confMap]# vi redis.conf
[root@master confMap]# kubectl create configmap my-redis-conf --from-file=redis.conf
configmap/my-redis-conf created
[root@master confMap]# 

To create the ConfigMap from a manifest instead, the YAML is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-redis-conf
  namespace: default
data:    #data holds the actual payload; the key defaults to the file name, the value is the file contents
  redis.conf: |
    appendonly yes
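
Either way, you can inspect the result; the data section should show the redis.conf key with its contents:

[root@master confMap]# kubectl get configmap my-redis-conf -o yaml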

Bind Redis to the my-redis-conf ConfigMap; redis.yaml is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
      - redis-server #the command; the entries after it are its arguments
      - "/redis-master/redis.conf"  #a path inside the redis container
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data #mount name, matching an entry under volumes
    - mountPath: /redis-master #path inside the container
      name: config #mount name, matching an entry under volumes
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: my-redis-conf #the name of the ConfigMap we created
        items:
        - key: redis.conf #the config file name, i.e. the --from-file=redis.conf used when creating the ConfigMap
          path: redis.conf
[root@master confMap]# kubectl apply -f redis.yaml 
pod/redis created

[root@master confMap]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nfs-client-provisioner-846cc9c4-4687x      1/1     Running   0          23h
default       redis                                      1/1     Running   0          32s
......
kube-system   kube-scheduler-master                      1/1     Running   0          25h

Check Redis's default appendonly configuration
[root@master confMap]# kubectl exec -it redis -- redis-cli config get appendonly
1) "appendonly"
2) "no"

Edit the ConfigMap
[root@master confMap]# kubectl edit cm my-redis-conf
configmap/my-redis-conf edited

data:
  redis.conf: ""
Change it to the following, then save
data:
  redis.conf: |
    appendonly yes
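
Note that the mounted file itself does get updated: the kubelet periodically syncs ConfigMap-backed volumes, typically within about a minute. A sketch of what you would observe (timing varies by cluster):

After a short delay, the file inside the container reflects the edit
[root@master confMap]# kubectl exec -it redis -- cat /redis-master/redis.conf
appendonly yes
But the running redis-server read its configuration at startup and does not notice the change
[root@master confMap]# kubectl exec -it redis -- redis-cli config get appendonly
1) "appendonly"
2) "no"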

Restart Redis, since the running process cannot detect that the configuration changed.
[root@master confMap]# kubectl delete -f redis.yaml
[root@master confMap]# kubectl apply -f redis.yaml

Check the updated configuration
[root@master confMap]# kubectl exec -it redis -- redis-cli config get appendonly
1) "appendonly"
2) "yes"

1.4 Secret

A Secret is an object that holds a small amount of sensitive data such as a password, a token, or a key. Without Secrets, such information might end up in a Pod spec or a container image. Using a Secret means you do not need to bake confidential data into application code. In short: it protects sensitive data.

1.4.1 Creating via the Command Line
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
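
The regcred secret is meant to be referenced from a Pod spec via imagePullSecrets, so the kubelet can authenticate when pulling from your private registry. A minimal sketch (the Pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod
spec:
  containers:
  - name: app
    image: <your-registry-server>/app:latest # an image in your private registry
  imagePullSecrets:
  - name: regcred
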
1.4.2 Creating via YAML

There are many Secret types; here we use Opaque.

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  USER_NAME: YWRtaW4= #Base64 string
  PASSWORD: MWYyZDFlMmU2N2Rm #Base64 string
[root@master confMap]# kubectl apply -f secret.yaml 
secret/mysecret created
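
The values under data must be base64-encoded. You can produce and verify them on any Linux machine (note the -n, which keeps a trailing newline out of the encoded value):

[root@master confMap]# echo -n 'admin' | base64
YWRtaW4=
[root@master confMap]# echo -n 'YWRtaW4=' | base64 --decode
admin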

1.4.3 Using a Secret

The redis-secret.yaml file contains the following:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      optional: false # the default; it means "mysecret" must already exist
Stop the previous Pod first, to make things easier to observe
[root@master confMap]# kubectl delete -f redis.yaml 
pod "redis" deleted
[root@master confMap]# kubectl get secret 
NAME                                 TYPE                                  DATA   AGE
default-token-zgjgg                  kubernetes.io/service-account-token   3      25h
mysecret                             Opaque                                2      18s
nfs-client-provisioner-token-426rh   kubernetes.io/service-account-token   3      24h

[root@master confMap]# kubectl apply -f redis-secret.yaml 
pod/mypod created
[root@master confMap]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       mypod                                      1/1     Running   0          5m21s
......
kube-system   kube-scheduler-master                      1/1     Running   0          27h
[root@master confMap]# 

Inspect the mounted files: the values have been base64-decoded back to plain text
[root@master confMap]#  kubectl exec -it mypod --  ls /etc/foo
PASSWORD  USER_NAME
[root@master confMap]#  kubectl exec -it mypod --  cat /etc/foo/USER_NAME
admin
[root@master confMap]#  kubectl exec -it mypod --  cat /etc/foo/PASSWORD
1f2d1e2e67df
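
As an alternative to mounting the Secret as files, individual keys can be injected as environment variables. A minimal sketch of the relevant container fields, reusing the same mysecret:

    env:
    - name: USER_NAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: USER_NAME
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: PASSWORD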

Environment setup: building a k8s cluster (super-detailed, beginner-friendly tutorial)
Previous: 【K8S in Action】Super-Detailed Tutorial (Part 2)
Next: 【K8S in Action】Super-Detailed Tutorial (Part 4) (in progress)

References:

  • Kubernetes official site
