K8S NFS PV/PVC 使用实践

NFS, used as a network storage backend for Kubernetes, satisfies persistent-storage needs and supports reads and writes from multiple nodes (ReadWriteMany).
Below we use a Kubernetes PV and PVC pair to consume an NFS export.
Create a PV, then create a PVC bound to it, as shown:

-[appuser@chenqiang-dev pvtest]$ kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                       STORAGECLASS   REASON    AGE
nfs-pv          100Mi      RWX            Retain           Bound     chenqiang-pv-test/nfs-pvc                            12m
nfs-server-pv   100Gi      RWX            Retain           Bound     default/nfs-server-pvc                               2h
-[appuser@chenqiang-dev pvtest]$ kubectl get pvc
NAME             STATUS    VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-server-pvc   Bound     nfs-server-pv   100Gi      RWX                           2h
-[appuser@chenqiang-dev pvtest]$ kubectl -n chenqiang-pv-test get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound     nfs-pv    100Mi      RWX                           12m
-[appuser@chenqiang-dev pvtest]$ kubectl -n chenqiang-pv-test get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                       STORAGECLASS   REASON    AGE
nfs-pv          100Mi      RWX            Retain           Bound     chenqiang-pv-test/nfs-pvc                            12m
nfs-server-pv   100Gi      RWX            Retain           Bound     default/nfs-server-pvc                               2h

Note: a PV is cluster-scoped and has no namespace (and thus no notion of tenant), while a PVC is namespaced. To use a PVC in a namespace, you must reference it with that namespace. The listings above demonstrate this: `kubectl get pv` returns the same result with or without `-n`, while `kubectl get pvc` only shows claims in the selected namespace.

Create a PV named nfs-pv. Assume the YAML file is named nfs-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.130.44.20
    path: "/test/mysql-nfs01"

Create a PVC named nfs-pvc. Assume the YAML file is named nfs-pvc.yaml. Note that storageClassName: "" pins the claim to statically provisioned PVs and prevents the default StorageClass from dynamically provisioning a volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: chenqiang-pv-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 90Mi
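A PVC binds only to a PV whose capacity covers the request and whose access modes include those claimed, so the 90Mi request above fits the 100Mi nfs-pv. A minimal sketch of the capacity check (numbers taken from the two manifests; illustrative only, not how the controller is implemented):

```shell
# Binding sanity check: the claim's request must not exceed the PV's capacity
# (access modes and storageClassName must also match).
pv_capacity_mi=100    # capacity of nfs-pv
pvc_request_mi=90     # request of nfs-pvc
if [ "$pv_capacity_mi" -ge "$pvc_request_mi" ]; then
  bind_result="bindable"
else
  bind_result="not bindable"
fi
echo "nfs-pvc -> nfs-pv: $bind_result"
```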

Create a Deployment whose pods use the PVC. Only the containers and volumes sections were shown originally; the surrounding Deployment skeleton below is a minimal reconstruction (the name and labels are assumptions chosen to match the pods listed later):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello-deployment
  namespace: chenqiang-pv-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-hello
  template:
    metadata:
      labels:
        app: nginx-hello
    spec:
      containers:
      - name: nginx-hello
        image: docker-registry.saicstack.com/chenqiang/nginx-hello:v2.0
        ports:
        - containerPort: 80
        securityContext:
          privileged: true
        volumeMounts:
        # name must match the volume name below
        - name: nfs
          mountPath: "/chenqiang/pv-test"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-pvc

Check whether the pod is up:

-[appuser@chenqiang-dev pvtest]$ kubectl -n chenqiang-pv-test get po
NAME                                      READY     STATUS              RESTARTS   AGE
nginx-hello-deployment-6f9f4d7bcc-86kb5   1/1       Running             0          1h

If the PVC is not created before the pod, an event log like the following appears:

Events:
  Type     Reason                 Age               From                  Message
  ----     ------                 ----              ----                  -------
  Warning  FailedMount            58m (x4 over 1h)  kubelet, 10.130.33.8  Unable to mount volumes for pod "nginx-hello-deployment-6f9f4d7bcc-86kb5_chenqiang-pv-test(7b18c376-7382-11e8-9a03-74eacb756039)": timeout expired waiting for volumes to attach/mount for pod "chenqiang-pv-test"/"nginx-hello-deployment-6f9f4d7bcc-86kb5". list of unattached/unmounted volumes=[nfs]

Once the PVC is created, the log continues as follows:

Events:
  Type     Reason                 Age               From                  Message
  ----     ------                 ----              ----                  -------
  Warning  FailedMount            58m (x4 over 1h)  kubelet, 10.130.33.8  Unable to mount volumes for pod "nginx-hello-deployment-6f9f4d7bcc-86kb5_chenqiang-pv-test(7b18c376-7382-11e8-9a03-74eacb756039)": timeout expired waiting for volumes to attach/mount for pod "chenqiang-pv-test"/"nginx-hello-deployment-6f9f4d7bcc-86kb5". list of unattached/unmounted volumes=[nfs]
  Normal   SuccessfulMountVolume  58m               kubelet, 10.130.33.8  MountVolume.SetUp succeeded for volume "nfs-pv"
  Normal   Pulled                 58m               kubelet, 10.130.33.8  Container image "docker-registry.saicstack.com/chenqiang/nginx-hello:v2.0" already present on machine
  Normal   Created                58m               kubelet, 10.130.33.8  Created container
  Normal   Started                58m               kubelet, 10.130.33.8  Started container

When scaling to two pods, one of them fails to start:

-[appuser@chenqiang-dev pvtest]$ kubectl -n chenqiang-pv-test get po -o wide
NAME                                      READY     STATUS              RESTARTS   AGE       IP               NODE
nginx-hello-deployment-6f9f4d7bcc-86kb5   1/1       Running             0          1h        172.12.180.144   10.130.33.8
nginx-hello-deployment-6f9f4d7bcc-lbt2n   0/1       ContainerCreating   0          1h                   10.130.33.13

Check that pod's event log:

Events:
  Type     Reason       Age                From                   Message
  ----     ------       ----               ----                   -------
  Warning  FailedMount  6m (x25 over 1h)   kubelet, 10.130.33.13  Unable to mount volumes for pod "nginx-hello-deployment-6f9f4d7bcc-lbt2n_chenqiang-pv-test(7b0fcbd3-7382-11e8-80b5-74eacb7559f1)": timeout expired waiting for volumes to attach/mount for pod "chenqiang-pv-test"/"nginx-hello-deployment-6f9f4d7bcc-lbt2n". list of unattached/unmounted volumes=[nfs]
  Warning  FailedMount  12s (x39 over 1h)  kubelet, 10.130.33.13  (combined from similar events): MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7b0fcbd3-7382-11e8-80b5-74eacb7559f1/volumes/kubernetes.io~nfs/nfs-pv --scope -- mount -t nfs 10.130.44.20:/test/mysql-nfs01 /var/lib/kubelet/pods/7b0fcbd3-7382-11e8-80b5-74eacb7559f1/volumes/kubernetes.io~nfs/nfs-pv
Output: Running scope as unit run-11918.scope.
mount: wrong fs type, bad option, bad superblock on 10.130.44.20:/test/mysql-nfs01,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Following the hint, check for /sbin/mount.<type> helper files: /sbin/mount.nfs is indeed missing. Installing nfs-utils fixes this.

Fix:

apt-get install nfs-common   # Debian/Ubuntu

or

yum install nfs-utils        # RHEL/CentOS
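To confirm whether a node already has the helper that `mount -t nfs` needs, a quick check (a sketch; `command -v` finds the helper if /sbin is on PATH, as it is on typical nodes):

```shell
# mount -t nfs delegates to the mount.nfs helper shipped by
# nfs-utils (RHEL/CentOS) or nfs-common (Debian/Ubuntu); without it,
# the kubelet's mount attempt fails with "exit status 32".
if command -v mount.nfs >/dev/null 2>&1 || [ -x /sbin/mount.nfs ]; then
  helper_status="found"
else
  helper_status="missing"
fi
echo "mount.nfs: $helper_status"
```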

After installation, /sbin/ contains several new mount helpers, such as mount.nfs and mount.nfs4:

[appuser@chenqiang-dev]# ls /sbin/mo*
modinfo     modprobe    mount.ceph  mount.nfs   mount.nfs4  mountstats 

Run kubectl get po again; the second pod is now up:

-[appuser@chenqiang-dev pvtest]$ kubectl -n chenqiang-pv-test get po -o wide
NAME                                      READY     STATUS    RESTARTS   AGE       IP               NODE
nginx-hello-deployment-6f9f4d7bcc-86kb5   1/1       Running   0          1h        172.12.180.144   10.130.33.8
nginx-hello-deployment-6f9f4d7bcc-lbt2n   1/1       Running   0          1h        172.12.232.103   10.130.33.13

Next, test concurrent writes:
Enter each of the two pods and run the following.

In pod nginx-hello-deployment-6f9f4d7bcc-86kb5, run:

sh-4.1# for i in `seq 1 1000`; do echo $i >> a.test; done

In pod nginx-hello-deployment-6f9f4d7bcc-lbt2n, run:

sh-4.1# for i in `seq 1000 10000`; do echo $i >> a.test; done

After some time writing concurrently, an excerpt of the result:

995
996
997
998
999
1000
1001
1002
1003
1004
1005

We can see that when the two pods write at the same time, there is an ordering: the first writes 1 through 999, and only after it finishes does the second write 1000 through 9999; this is the effect of file locking.
Without any delay, the writes happen so fast that some data may be lost along the way, so add a small sleep between writes.
Run another round to verify:

In pod 1, run:

for i in `seq 1 1000`; do echo $i >> a.test; sleep 1; done

In pod 2, run:

for i in `seq 20001 21000`; do echo $i >> a.test; sleep 1; done

Start the command in pod 2 first, then immediately start the one in pod 1. The two writers now interleave. Partial result:

20001
20002
20003
1
20004
2
20005
3
20006
4
20007
5
20008
6
20009
7
20010
8
20011
9
20012
10
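
The append behavior in these experiments can be sketched locally: on a local filesystem, small O_APPEND writes like these are atomic, so two concurrent writers lose no lines; NFS does not guarantee atomic appends, which is one reason lines can be lost over NFS without a delay. A minimal local sketch:

```shell
# Two concurrent writers appending to one file, mirroring the two-pod test.
# On a local filesystem, expect all 200 lines to survive.
f=$(mktemp)
( for i in $(seq 1 100);   do echo "$i" >> "$f"; done ) &
( for i in $(seq 101 200); do echo "$i" >> "$f"; done ) &
wait
lines=$(wc -l < "$f")
echo "total lines: $lines"
rm -f "$f"
```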
