Mounting an NFS Volume in a Bare-Metal Kubernetes Cluster

If you are not sure how to set up an NFS server, see: https://blog.csdn.net/herhun_chen/article/details/90245639
Let's start with the pod's YAML file, taken from "Kubernetes in Action":

apiVersion: v1
kind: Pod
metadata:
  name: mongodb-nfs
spec:
  volumes:
  - name: mongodb-data
    nfs:
      server: 192.168.50.3
      path: /home/ops/data/nfs/mongodb-data
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017

One pitfall worth mentioning: to mount an NFS directory in a pod this way, the NFS client must first be installed on the Kubernetes nodes, otherwise pod creation will fail.
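On Debian/Ubuntu nodes the client ships in the nfs-common package (nfs-utils on RHEL/CentOS); a minimal sketch, assuming Debian-family nodes:

```shell
# Run on every node that may schedule NFS-mounting pods
sudo apt-get update
sudo apt-get install -y nfs-common
```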

$ kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mongodb-nfs   0/1     CrashLoopBackOff   2          22h
$ kubectl logs mongodb-nfs
chown: changing ownership of '/data/db': Operation not permitted

This error is explained here: https://stackoverflow.com/questions/51200115/chown-changing-ownership-of-data-db-operation-not-permitted
Following that explanation, modify the YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb-nfs
spec:
  volumes:
  - name: mongodb-data
    nfs:
      server: 192.168.50.3
      path: /home/ops/data/nfs/mongodb-data
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP

  initContainers:
  - name: chmod-er
    image: busybox
    command:
    - /bin/chown
    - "-R"
    - "1000"
    - /data/db
    volumeMounts:
    - name: mongodb-data
      mountPath: "/data/db"

Note: the 1000 must be quoted, otherwise the apply fails, because YAML parses an unquoted 1000 as an integer while container command arguments must be strings. (I saw this mentioned somewhere, but forget where.)
Delete the original pod and apply the YAML again.
But this time the pod still does not come up correctly; the error is:

$ kubectl logs mongodb-nfs chmod-er
chown: /data/db: Operation not permitted
chown: /data/db: Operation not permitted

After some googling, a reply in https://github.com/docker-library/mongo/issues/127 says the NFS server configuration needs the following export option:

all_squash

Roughly, this option means: whichever user accesses the NFS server's shared directory, including root, is squashed to the anonymous user, and its uid and gid become those of the nobody (or nfsnobody) account.
Fine, we were wrong; let's fix it.

# On the NFS server, edit the exports file as follows:
$ sudo vim /etc/exports
/home/ops/data/nfs/mongodb-data	192.168.0.0/16(rw,sync,no_subtree_check,all_squash)
# restart the NFS server as well, just to be safe
$ sudo systemctl restart nfs-kernel-server

Delete the original pod and apply the YAML file again.

$ kubectl delete po mongodb-nfs
pod "mongodb-nfs" deleted
$ kubectl apply -f mongodb-pod-nfs.yaml 
pod/mongodb-nfs created

But it still fails; check the reason:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  63s                default-scheduler  Successfully assigned default/mongodb-nfs to wnode5
  Warning  Failed     45s                kubelet, wnode5    Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/library/busybox/manifests/latest: Get https://auth.docker.io/token?scope=repository%3Alibrary%2Fbusybox%3Apull&service=registry.docker.io: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     45s                kubelet, wnode5    Error: ErrImagePull
  Normal   BackOff    45s                kubelet, wnode5    Back-off pulling image "busybox"
  Warning  Failed     45s                kubelet, wnode5    Error: ImagePullBackOff
  Normal   Pulling    34s (x2 over 61s)  kubelet, wnode5    Pulling image "busybox"

The busybox image could not be pulled (thank you, almighty GFW). Time to find another way to pull the image …
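One workaround is to pull the image on a machine that can reach Docker Hub, then ship it to the node. A sketch, assuming Docker is the container runtime; the ops user and the wnode5 host name are taken from this setup:

```shell
# On a machine with working registry access:
docker pull busybox
docker save busybox -o busybox.tar
# Copy the tarball to the affected worker node and load it there:
scp busybox.tar ops@wnode5:~
ssh ops@wnode5 'docker load -i busybox.tar'
```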
After some fiddling the image was finally pulled, but the pod still failed to start.

$ kubectl logs mongodb-nfs
Error from server (BadRequest): container "mongodb" in pod "mongodb-nfs" is waiting to start: PodInitializing
$ kubectl logs mongodb-nfs chmod-er
chown: /data/db: Operation not permitted
chown: /data/db: Operation not permitted

All the effort seemed wasted. It was not; persistence paid off in the end. You will hit all kinds of permission problems while mounting NFS; modify the NFS server configuration file as follows:

/Users/my-name/share-folder *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)

The option 'insecure' should be the key: by default the NFS server only accepts requests from privileged source ports (below 1024), and mounts coming from the cluster may use higher ports, which 'insecure' permits.
We add a shared directory like this:

/home/ops/data/nfs/elasticsearch *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,all_squash)

and re-export the shares:

sudo exportfs -a
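It is worth verifying the active exports before going back to Kubernetes (a quick check, run on the server or any client node):

```shell
# Show currently exported directories with their effective options
sudo exportfs -v
# Or query the export list from a client node:
showmount -e 192.168.50.3
```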

Then create four directories:

$ tree elasticsearch
elasticsearch
├── el-a
├── el-b
├── el-c
└── el-d
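The directory tree above can be created in one command (bash brace expansion):

```shell
# Create the four per-PV data directories under the export root
mkdir -p elasticsearch/el-{a,b,c,d}
```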

Create the following PV resource file, es-pvs.yaml:

kind: List
apiVersion: v1
items:
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: es-a
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        server: 192.168.50.3
        path: /home/ops/data/nfs/elasticsearch/el-a
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: es-b
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        server: 192.168.50.3
        path: /home/ops/data/nfs/elasticsearch/el-b
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: es-c
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        server: 192.168.50.3
        path: /home/ops/data/nfs/elasticsearch/el-c
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: es-d
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        server: 192.168.50.3
        path: /home/ops/data/nfs/elasticsearch/el-d
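Since the four PV entries differ only in their suffix, the file above can also be generated with a short shell loop; a sketch using the same names and server address:

```shell
# Emit es-pvs.yaml: one 20Gi ReadWriteOnce NFS PV per data directory
{
  echo 'kind: List'
  echo 'apiVersion: v1'
  echo 'items:'
  for s in a b c d; do
    cat <<EOF
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: es-$s
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        server: 192.168.50.3
        path: /home/ops/data/nfs/elasticsearch/el-$s
EOF
  done
} > es-pvs.yaml
```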

Create the resources:

$ kubectl apply -f es-pvs.yaml
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                  STORAGECLASS   REASON   AGE
es-a   20Gi       RWO            Recycle          Available                                                  2d6h
es-b   20Gi       RWO            Recycle          Available                                                  2d6h
es-c   20Gi       RWO            Recycle          Available                                                  2d6h
es-d   20Gi       RWO            Recycle          Available                                                  2d6h

We finally have the NFS volumes we wanted.
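A workload then consumes one of these PVs through a PersistentVolumeClaim; a minimal sketch (the claim name es-data is hypothetical, and assuming no default StorageClass, any Available 20Gi RWO PV may bind it):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: es-data          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```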

Summary

The whole process actually stretched over several evenings. The pod YAML file went through several revisions, and I no longer remember which version was the one that finally ran.
