Deploying k8s with LINSTOR for Mirrored Persistent Volumes

  • Guangzhou Lab test
  • Deploy the k8s environment
    • All nodes
      • Install Docker
      • Install kubeadm, kubelet and kubectl
        • Install apt-transport-https
        • Download the GPG key
        • Add the k8s package mirror
        • Update the apt sources
        • Install kubelet kubeadm kubectl
  • Prepare the LINSTOR-related images in advance (all nodes)
    • Create the script file
    • Execution
    • Check the images
    • master node
      • Initialization
      • Configure the kubectl tool
      • Deploy the flannel network
    • node1 node
      • Join the cluster
    • Allow the master node to also create pods and run containers
    • Get the node status to Ready
    • Get the service status to Healthy
    • Install the helm tool on the master node
      • helm basics
        • Basic terminology
      • Install helm
  • Install the DRBD9 kernel module (all nodes)
    • Install
    • Load the module
  • Deploy LINSTOR with helm (master node)
    • Add the LINSTOR repository
    • Create a kubernetes secret containing your my.linbit.com credentials
    • Create the PV
    • Deploy the LINSTOR pods with helm
      • Edit a yaml file
      • Deploy with the helm install command
  • Testing
    • Check status
      • Check LINSTOR node status and details
      • Check storage pool status
      • Check the physical storage on the nodes
    • Configure the storage pool
    • Create a PV in k8s
      • Create a storage class
        • Run the creation
      • Create a PVC
        • Check LINSTOR resource and DRBD resource status
    • PV usage test
      • Run the creation
        • Check LINSTOR resource and DRBD resource status
        • Check system device information
      • I/O test
        • Check the mount information for /mnt/data
        • Create a file
      • Mirror test
        • Delete the pod on k8smaster
        • Modify the busybox-pod.yaml file
        • Run the creation
        • Check LINSTOR resource and DRBD resource status
        • Check the data under /mnt/data
    • Snapshot test
      • List LINSTOR snapshots
      • Create a volume snapshot class
        • Run the creation and check status
      • Create a snapshot
        • Run the creation and check status
      • Enter the busybox container interactively and modify the data
      • Restore the snapshot just created into a new PVC
        • Run the creation and check the PVC status
        • Check LINSTOR resource and DRBD resource status
      • Mount the busybox-restore PVC into busybox and check
        • Delete the busybox pod
        • Edit the yaml file and change the PVC to busybox-restore
        • Re-create the pod and check the data

Guangzhou Lab test

Deploy the k8s environment

All nodes

Install Docker

Command

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Result

root@k8smaster:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
# Executing docker install script, commit: 26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c
+ sh -c 'apt-get update -qq >/dev/null'
+ sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
+ sh -c 'curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null'
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c 'echo "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic stable" > /etc/apt/sources.list.d/docker.list'
+ sh -c 'apt-get update -qq >/dev/null'
+ '[' -n '' ']'
+ sh -c 'apt-get install -y -qq --no-install-recommends docker-ce >/dev/null'
+ sh -c 'docker version'
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:02:36 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:01:06 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
root@k8smaster:~#

Install kubeadm, kubelet and kubectl

kubeadm: the command used to initialize the cluster
kubelet: runs on every node in the cluster and starts pods and containers
kubectl: the command-line tool used to talk to the cluster
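Optionally (an extra step, not part of the original procedure), the three packages can be pinned so that a routine apt upgrade does not move the cluster to an unintended version, and the installed versions can be verified:

apt-mark hold kubelet kubeadm kubectl   # prevent unattended upgrades
kubeadm version -o short                # e.g. v1.19.3
kubectl version --client --short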

Install apt-transport-https

Command

apt-get update && apt-get install -y apt-transport-https curl

Result

root@k8smaster:~# apt-get update && sudo apt-get install -y apt-transport-https curl
Hit:1 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]                                            
Hit:3 http://cn.archive.ubuntu.com/ubuntu bionic InRelease                                    
Get:4 http://cn.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:5 http://cn.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Fetched 252 kB in 5s (50.2 kB/s)   
Reading package lists... Done
Reading package lists... Done
Building dependency tree       
Reading state information... Done
curl is already the newest version (7.58.0-2ubuntu3.10).
apt-transport-https is already the newest version (1.6.12ubuntu0.1).
0 upgraded, 0 newly installed, 0 to remove and 154 not upgraded.
root@k8smaster:~# 

Download the GPG key

Command

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

Result

root@k8smaster:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   653  100   653    0     0   5533      0 --:--:-- --:--:-- --:--:--  5487
OK
root@k8smaster:~# 

Add the k8s package mirror

Command

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Result

root@k8snode1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
root@k8snode1:~# 

Update the apt sources

Command

apt update

Result

root@k8smaster:~# apt update
Hit:1 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease
Hit:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease                                                                                                             
Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]                                                                                                             
Hit:4 http://cn.archive.ubuntu.com/ubuntu bionic InRelease
Get:5 http://cn.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]       
Get:6 http://cn.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Fetched 252 kB in 3s (84.4 kB/s)                               
Reading package lists... Done
Building dependency tree       
Reading state information... Done
154 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@k8smaster:~# 

Install kubelet kubeadm kubectl

Command

apt-get install -y kubelet kubeadm kubectl

Result

root@k8smaster:~# apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 7 newly installed, 0 to remove and 154 not upgraded.
Need to get 68.5 MB of archives.
After this operation, 292 MB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-01 [8,775 kB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.8.7-00 [25.0 MB]
Get:3 http://cn.archive.ubuntu.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
Get:4 http://cn.archive.ubuntu.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.19.3-00 [18.2 MB]
Get:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.19.3-00 [8,350 kB]
Get:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.19.3-00 [7,758 kB]
Fetched 68.5 MB in 3s (24.2 MB/s)  
Selecting previously unselected package conntrack.
(Reading database ... 67262 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.13.0-01_amd64.deb ...
Unpacking cri-tools (1.13.0-01) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../2-kubernetes-cni_0.8.7-00_amd64.deb ...
Unpacking kubernetes-cni (0.8.7-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../3-socat_1.7.3.2-2ubuntu2_amd64.deb ...
Unpacking socat (1.7.3.2-2ubuntu2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../4-kubelet_1.19.3-00_amd64.deb ...
Unpacking kubelet (1.19.3-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../5-kubectl_1.19.3-00_amd64.deb ...
Unpacking kubectl (1.19.3-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../6-kubeadm_1.19.3-00_amd64.deb ...
Unpacking kubeadm (1.19.3-00) ...
Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Setting up kubernetes-cni (0.8.7-00) ...
Setting up cri-tools (1.13.0-01) ...
Setting up socat (1.7.3.2-2ubuntu2) ...
Setting up kubelet (1.19.3-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubectl (1.19.3-00) ...
Setting up kubeadm (1.19.3-00) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
root@k8smaster:~# 

Prepare the LINSTOR-related images in advance (all nodes)

The LINSTOR-related container images need to be pulled from the public internet, which fails here, so they are pulled from a self-built private registry and then re-tagged.

Create the script file

Create a shell script that pulls the images from the private registry, re-tags them, and deletes the original images.
Script content

#!/bin/bash
# pull each image from the private registry
for value in linstor-controller:v1.10.0 linstor-csi:v0.10.1 linstor-operator:v1.2.0 linstor-satellite:v1.10.0 csi-attacher:v3.0.2 csi-snapshotter:v3.0.2 csi-provisioner:v2.0.4 snapshot-controller:v3.0.2 csi-resizer:v1.0.1 drbd9-bionic:v9.0.25 csi-node-driver-registrar:v2.0.1 livenessprobe:v2.1.0 etcd:v3.4.9
do
#       echo $value
docker pull teym88/$value
done

docker tag d800df9e3aac drbd.io/linstor-csi:v0.10.1
docker tag 4908948efb1a drbd.io/linstor-operator:v1.2.0
docker tag 6b24920f973a drbd.io/linstor-satellite:v1.10.0
docker tag 2d8acbdb713b drbd.io/linstor-controller:v1.10.0
docker tag 423c30f569a6 drbd.io/drbd9-bionic:v9.0.25
docker tag 390688b9e1ba k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
docker tag 5088af4d7cbd k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2
docker tag 3b5362d4b81a k8s.gcr.io/sig-storage/snapshot-controller:v3.0.2
docker tag 82a3b8324d58 k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
docker tag e2e5185f7b3c k8s.gcr.io/sig-storage/csi-resizer:v1.0.1
docker tag 84b0f3f7f6f0 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
docker tag de977053da40 k8s.gcr.io/sig-storage/livenessprobe:v2.1.0
docker tag 8f0046e4e01b gcr.io/etcd-development/etcd:v3.4.9
# remove the original teym88/* images now that they have been re-tagged
for value in linstor-controller:v1.10.0 linstor-csi:v0.10.1 linstor-operator:v1.2.0 linstor-satellite:v1.10.0 csi-attacher:v3.0.2 csi-snapshotter:v3.0.2 csi-provisioner:v2.0.4 snapshot-controller:v3.0.2 csi-resizer:v1.0.1 drbd9-bionic:v9.0.25 csi-node-driver-registrar:v2.0.1 livenessprobe:v2.1.0 etcd:v3.4.9
do
#       echo $value
        docker rmi teym88/$value
done

Execution

Check the images

The script created 13 images in total: 5 whose names start with drbd.io, 7 starting with k8s.gcr.io and 1 starting with gcr.io (a quick count check follows the listing below).

root@k8smaster:~# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
ubuntu                                                            latest              f643c72bc252        4 days ago          72.9MB
quay.io/coreos/flannel                                            v0.13.1-rc1         f03a23d55e57        9 days ago          64.6MB
teym88/linstor-csi                                                latest              d800df9e3aac        10 days ago         235MB
drbd.io/linstor-csi                                               v0.10.1             d800df9e3aac        10 days ago         235MB
drbd.io/linstor-operator                                          v1.2.0              4908948efb1a        11 days ago         186MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.19.4             635b36f4d89f        2 weeks ago         118MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.19.4             b15c6247777d        2 weeks ago         119MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.19.4             4830ab618586        2 weeks ago         111MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.19.4             14cd22f7abe7        2 weeks ago         45.7MB
drbd.io/linstor-satellite                                         v1.10.0             6b24920f973a        2 weeks ago         466MB
drbd.io/linstor-controller                                        v1.10.0             2d8acbdb713b        2 weeks ago         478MB
k8s.gcr.io/sig-storage/csi-attacher                               v3.0.2              390688b9e1ba        3 weeks ago         47.7MB
k8s.gcr.io/sig-storage/csi-snapshotter                            v3.0.2              5088af4d7cbd        4 weeks ago         47.8MB
k8s.gcr.io/sig-storage/snapshot-controller                        v3.0.2              3b5362d4b81a        4 weeks ago         43MB
k8s.gcr.io/sig-storage/csi-provisioner                            v2.0.4              82a3b8324d58        4 weeks ago         49.9MB
k8s.gcr.io/sig-storage/csi-resizer                                v1.0.1              e2e5185f7b3c        6 weeks ago         47.7MB
drbd.io/drbd9-bionic                                              v9.0.25             423c30f569a6        2 months ago        262MB
k8s.gcr.io/sig-storage/csi-node-driver-registrar                  v2.0.1              84b0f3f7f6f0        2 months ago        18MB
k8s.gcr.io/sig-storage/livenessprobe                              v2.1.0              de977053da40        3 months ago        17.3MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0            0369cf4303ff        3 months ago        253MB
centos                                                            8                   0d120b6ccaa8        3 months ago        215MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        5 months ago        45.2MB
gcr.io/etcd-development/etcd                                      v3.4.9              8f0046e4e01b        6 months ago        83.8MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        9 months ago        683kB
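As a quick sanity check (an addition, not from the original write-up), the re-tagged images can be counted; the result should be 13:

docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -E '^(drbd\.io|k8s\.gcr\.io/sig-storage|gcr\.io/etcd-development)/' \
  | wc -l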

master node

Initialization

Command

kubeadm init --apiserver-advertise-address=10.203.1.93 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16

Result

root@k8smaster:~# kubeadm init --apiserver-advertise-address=10.203.1.93 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
W1103 10:51:16.581878   26080 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.203.1.93]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [10.203.1.93 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [10.203.1.93 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 203.376735 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ew2jyv.thkumbh520cmlho0
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.203.1.93:6443 --token ew2jyv.thkumbh520cmlho0 \
    --discovery-token-ca-cert-hash sha256:67f2b0ced10cacfe0c91640422c0b8fa15d9d7b2b483daa8fe3446134564b5cf 
root@k8smaster:~#

What the common parameters mean:

--apiserver-advertise-address: the address that apiserver, the main k8s service, is advertised on; use the management node's own IP
--image-repository: the registry the Docker images are pulled from; kubeadm pulls many k8s components during initialization, so a domestic mirror has to be specified, otherwise the images cannot be pulled
--pod-network-cidr: the pod network range; since flannel will be used as the k8s network, 10.244.0.0/16 is the value to use here

If an error is hit during initialization, run kubeadm reset to reset the node and then initialize again.

The command at the end of the output, "kubeadm join 10.203.1.93:6443 --token ew2jyv.thkumbh520cmlho0 \
    --discovery-token-ca-cert-hash sha256:67f2b0ced10cacfe0c91640422c0b8fa15d9d7b2b483daa8fe3446134564b5cf", should be saved; the other nodes join the cluster by running it later.
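If the token expires (tokens are valid for 24 hours by default) or the join command was not saved, a new one can be printed on the master at any time:

kubeadm token create --print-join-command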

Configure the kubectl tool

Command

mkdir -p /root/.kube && \
cp /etc/kubernetes/admin.conf /root/.kube/config

Result

root@k8smaster:~# mkdir -p /root/.kube && \
> cp /etc/kubernetes/admin.conf /root/.kube/config
root@k8smaster:~# 

Deploy the flannel network

flannel is a network fabric built specifically for k8s; it gives the Docker containers created on different node hosts in the cluster virtual IP addresses that are unique across the whole cluster. To deploy flannel, just run the command below.
Command

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Result

root@k8smaster:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds created
root@k8smaster:~#
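To confirm that the flannel DaemonSet pods actually come up on every node, a hedged extra check (the namespace depends on the manifest version; it is kube-system in older manifests and kube-flannel in newer ones):

kubectl get pods -A -o wide | grep flannel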

node1 node

Join the cluster

Command

kubeadm join 10.203.1.93:6443 --token u8fncm.r7wubf13v8rf7flm \
    --discovery-token-ca-cert-hash sha256:6a140e41df23bd692dc2d10eeafe1eb26efe06a18427e3a869d7fe647c8dd8a1

Result

root@k8snode1:~# kubeadm join 10.203.1.93:6443 --token u8fncm.r7wubf13v8rf7flm \
>     --discovery-token-ca-cert-hash sha256:6a140e41df23bd692dc2d10eeafe1eb26efe06a18427e3a869d7fe647c8dd8a1 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8snode1:~#

Allow the master node to also create pods and run containers

For security reasons, Kubernetes does not schedule pods onto the master node in the default configuration.
Reference: https://www.hangge.com/blog/cache/detail_2431.html

root@k8smaster:~# kubectl taint node k8smaster node-role.kubernetes.io/master-            
node/k8smaster untainted
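If the default behaviour needs to be restored later, the master taint can be added back (a sketch, assuming the default taint key used by kubeadm):

kubectl taint nodes k8smaster node-role.kubernetes.io/master=:NoSchedule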

Get the node status to Ready

The nodes show as NotReady, so the kubelet's node address has to be set. Edit the kubelet drop-in configuration and add Environment="KUBELET_EXTRA_ARGS=--node-ip=xxx" before the final ExecStart line, as shown below.

root@k8smaster:~# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.203.1.93"
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
~                                                                                                                                                                                                                  
~                                                                                                                                                                                                                  
"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf" 13L, 954C written                                                           

Then restart the service

root@k8smaster:~# systemctl stop kubelet.service && \
> systemctl daemon-reload && \
> systemctl start kubelet.service
Warning: The unit file, source configuration file or drop-ins of kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
root@k8smaster:~# systemctl daemon-reload
root@k8smaster:~# kubectl get nodes
Error from server: etcdserver: request timed out
root@k8smaster:~# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   36m   v1.19.3
k8snode1    Ready    <none>   26m   v1.19.3
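The warning in the output above appears because the unit file was changed on disk before any daemon-reload had been run; a cleaner sequence (an assumption about ordering, not from the original) is:

systemctl daemon-reload && systemctl restart kubelet.service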

Get the service status to Healthy

Comment out the default port in the controller-manager and scheduler yaml files

root@k8smaster:~# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
 #  - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.3
    imagePullPolicy: IfNotPresent
      timeoutSeconds: 15
    name: kube-controller-manager
"/etc/kubernetes/manifests/kube-controller-manager.yaml" 111L, 3381C written                                                                                                           
root@k8smaster:~# vi /etc/kubernetes/manifests/kube-scheduler.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
#   - --port=0
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.3
    imagePullPolicy: IfNotPresent
      timeoutSeconds: 15
    name: kube-scheduler
        cpu: 100m
      timeoutSeconds: 15
    volumeMounts:
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
"/etc/kubernetes/manifests/kube-scheduler.yaml" 57L, 1420C written                                                                                                                     
root@k8smaster:~#

After commenting out --port=0 in both files and restarting the kubelet service, the status is healthy

root@k8smaster:~# systemctl restart kubelet.service
root@k8smaster:~# kubectl get cs  
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

Install the helm tool on the master node

helm basics

helm is the package management tool for the k8s platform, playing the same role that apt or yum plays on a Linux system

Basic terminology

chart: similar to a package handled by dpkg/apt
release: an instance of a chart running on a Kubernetes cluster. On the same cluster a chart can be installed many times, and each install creates a new release. For example, with a MySQL chart, if you want to run two databases on the cluster you can install the chart twice; each install produces its own release with its own release name
repository: the repository where charts are published
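To make the terms concrete, here is how they map to everyday helm commands (illustrative only):

helm repo list                          # repositories that have been added
helm search repo linstor                # charts available in those repositories
helm install linstor linstor/linstor    # creates a release named "linstor" from a chart
helm list                               # releases installed in the current namespace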

Install helm

Command

snap install helm --classic

Result

root@k8smaster:~# snap install helm --classic
2020-11-23T15:07:12+08:00 INFO Waiting for restart...
helm 3.4.1 from Snapcrafters installed

Install the DRBD9 kernel module (all nodes)

Install

Command

add-apt-repository ppa:linbit/linbit-drbd9-stack
apt update
apt install -y drbd-dkms drbd-utils lvm2

Load the module

Command

modprobe drbd

Result

root@k8snode1:~# modprobe drbd
root@k8snode1:~# 
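A quick way to confirm the module version and make it load automatically after a reboot (an extra, hedged step; not in the original procedure):

lsmod | grep drbd                            # module is loaded
modinfo drbd | grep -w version               # should report a 9.0.x version
echo drbd > /etc/modules-load.d/drbd.conf    # load on boot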

Deploy LINSTOR with helm (master node)

Add the LINSTOR repository

Command

helm repo add linstor https://charts.linstor.io

Result

root@k8smaster:~# helm repo add linstor https://charts.linstor.io
"linstor" has been added to your repositories

Check the repository

root@k8smaster:~# helm search repo  
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                          
linstor/linstor         1.1.0           1.1.0           A Helm chart for Linstor Operator    
linstor/pv-hostpath     0.2.2           0.2.2           Hostpath volumes for etcd persistence

Create a kubernetes secret containing your my.linbit.com credentials

Command

kubectl create secret docker-registry drbdiocred --docker-server=drbd.io --docker-username=tonywong --docker-password=iero4eiD

Result

root@k8smaster:~# kubectl create secret docker-registry drbdiocred --docker-server=drbd.io --docker-username=tonywong --docker-password=iero4eiD
secret/drbdiocred created
root@k8smaster:~#

Create the PV

Command

helm install linstor-etcd-pv linstor/pv-hostpath --set "nodes={k8smaster,k8snode1}"

Result

root@k8smaster:~# helm install linstor-etcd-pv linstor/pv-hostpath --set "nodes={k8snode1}"
NAME: linstor-etcd-pv
LAST DEPLOYED: Wed Nov 18 14:44:19 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check status

root@k8smaster:~# kubectl get pv
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
linstor-etcd-pv-k8smaster   1Gi        RWO            Retain           Available                                                            17m
linstor-etcd-pv-k8snode1    1Gi        RWO            Retain           Bound       default/datadir-linstor-etcd-0                           17m

Deploy the LINSTOR pods with helm

Edit a yaml file

The available DRBD image versions can be looked up at http://drbd.io/. Since the test VMs run Ubuntu 18.04, drbd9-bionic is specified here; if no version is specified, the latest version will be pulled.

root@k8smaster:~# vi helm-linstor.yaml 
#set the pull secret
drbdRepoCred: drbdiocred
#stork is a plugin that optimizes placement of pods “near” their volumes
#for a small cluster, this is not needed
stork:
  enabled: false
#drbdimage
operator:
  satelliteSet:
          kernelModuleInjectionImage: drbd.io/drbd9-bionic:v9.0.25   
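Before writing the overrides file it can be useful to look at everything the chart exposes; only the values being changed need to go into the file (a hedged aside, using a standard helm 3 command):

helm show values linstor/linstor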

Deploy with the helm install command

Command

helm install linstor linstor/linstor --values helm-linstor.yaml

Result

root@k8smaster:~# helm install linstor linstor/linstor --values helm-linstor.yaml 
W1118 15:53:07.247200   27229 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1118 15:53:07.552532   27229 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1118 15:53:07.846426   27229 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1118 15:53:08.107195   27229 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1118 15:53:08.248053   27229 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1118 15:53:08.495601   27229 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
NAME: linstor
LAST DEPLOYED: Wed Nov 18 15:53:37 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Linstor Installed!!!
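On a slow network the LINSTOR images can take a while to pull, so the pods may not all be Running immediately; they can be watched until they settle (optional):

kubectl get pods -w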

Check status

root@k8smaster:~# kubectl get all      
NAME                                          READY   STATUS    RESTARTS   AGE
pod/linstor-cs-controller-5d58f78c99-n9n5z    1/1     Running   0          17m
pod/linstor-csi-controller-6d548f599c-txx44   6/6     Running   0          17m
pod/linstor-csi-node-jrwgc                    3/3     Running   3          17m
pod/linstor-csi-node-lc8nt                    3/3     Running   2          17m
pod/linstor-etcd-0                            1/1     Running   0          17m
pod/linstor-ns-node-bfbsl                     1/1     Running   0          17m
pod/linstor-ns-node-z7z47                     1/1     Running   0          17m
pod/linstor-operator-749cfc8ccc-lwc5w         1/1     Running   0          17m
pod/snapshot-controller-68d8b64678-wqzkr      1/1     Running   0          17m

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP             23m
service/linstor-cs     ClusterIP   10.103.244.66   <none>        3370/TCP            17m
service/linstor-etcd   ClusterIP   None            <none>        2380/TCP,2379/TCP   17m

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/linstor-csi-node   2         2         2       2            2           <none>          17m
daemonset.apps/linstor-ns-node    2         2         2       2            2           <none>          17m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/linstor-cs-controller    1/1     1            1           17m
deployment.apps/linstor-csi-controller   1/1     1            1           17m
deployment.apps/linstor-operator         1/1     1            1           17m
deployment.apps/snapshot-controller      1/1     1            1           17m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/linstor-cs-controller-5d58f78c99    1         1         1       17m
replicaset.apps/linstor-csi-controller-6d548f599c   1         1         1       17m
replicaset.apps/linstor-operator-749cfc8ccc         1         1         1       17m
replicaset.apps/snapshot-controller-68d8b64678      1         1         1       17m

NAME                            READY   AGE
statefulset.apps/linstor-etcd   1/1     17m

Testing

The tests are done by interacting with the linstor-cs-controller container

Check status

Check LINSTOR node status and details

kubectl exec is used to run the linstor client inside the controller container. Three nodes are listed: the two Satellite nodes are the virtual machines, while the Controller node is a running container

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor n l
+-----------------------------------------------------------------------------------------+
| Node                                   | NodeType   | Addresses                | State  |
|=========================================================================================|
| k8smaster                              | SATELLITE  | 10.203.1.90:3366 (PLAIN) | Online |
| k8snode1                               | SATELLITE  | 10.203.1.91:3366 (PLAIN) | Online |
| linstor-cs-controller-5d58f78c99-n9n5z | CONTROLLER | 10.244.1.6:3366 (PLAIN)  | Online |
+-----------------------------------------------------------------------------------------+

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor n info
+-------------------------------------------------------------+
| Node      | Diskless | LVM | LVMThin | ZFS/Thin | File/Thin |
|=============================================================|
| k8smaster | +        | +   | +       | -        | +         |
| k8snode1  | +        | +   | +       | -        | +         |
+-------------------------------------------------------------+
Unsupported storage providers:
 k8smaster: 
  ZFS: 'cat /sys/module/zfs/version' returned with exit code 1
  SPDK: IO exception occured when running 'rpc.py get_spdk_version': Cannot run program "rpc.py": error=2, No such file or directory
  ZFS_THIN: 'cat /sys/module/zfs/version' returned with exit code 1
 k8snode1: 
  ZFS: 'cat /sys/module/zfs/version' returned with exit code 1
  SPDK: IO exception occured when running 'rpc.py get_spdk_version': Cannot run program "rpc.py": error=2, No such file or directory
  ZFS_THIN: 'cat /sys/module/zfs/version' returned with exit code 1

+------------------------------------------+
| Node      | DRBD | LUKS | NVMe | Storage |
|==========================================|
| k8smaster | +    | +    | +    | +       |
| k8snode1  | +    | +    | +    | +       |
+------------------------------------------+
Unsupported resource layers:
 k8smaster: 
  WRITECACHE: 'modprobe dm-writecache' returned with exit code 1
 k8snode1: 
  WRITECACHE: 'modprobe dm-writecache' returned with exit code 1

Check storage pool status

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor sp l
+--------------------------------------------------------------------------------------------------------------+
| StoragePool          | Node      | Driver   | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State |
|==============================================================================================================|
| DfltDisklessStorPool | k8smaster | DISKLESS |          |              |               | False        | Ok    |
| DfltDisklessStorPool | k8snode1  | DISKLESS |          |              |               | False        | Ok    |
+--------------------------------------------------------------------------------------------------------------+

Check the physical storage on the nodes

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor physical-storage list
+------------------------------------------------+
| Size        | Rotational | Nodes               |
|================================================|
| 32212254720 | True       | k8smaster[/dev/sdb] |
|             |            | k8snode1[/dev/sdb]  |
+------------------------------------------------+

linstor physical-storage list appears to list the disks on each node that are not yet in use

root@k8smaster:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 55.4M  1 loop /snap/core18/1932
loop1    7:1    0 10.3M  1 loop /snap/helm/308
loop2    7:2    0   31M  1 loop /snap/snapd/9721
sda      8:0    0  100G  0 disk 
└─sda1   8:1    0  100G  0 part /
sdb      8:16   0   30G  0 disk 
sr0     11:0    1 1024M  0 rom  
root@k8smaster:~# vgs
root@k8smaster:~# 

Configure the storage pool

Run the following command

kubectl edit linstorsatellitesets.linstor.linbit.com linstor-ns

In the vi editor that opens, change the value of automaticStorageType to LVMTHIN

spec:
  affinity: {}
  automaticStorageType: LVMTHIN
  controllerEndpoint: http://linstor-cs.default.svc:3370
  drbdRepoCred: drbdiocred
  imagePullPolicy: IfNotPresent
  kernelModuleInjectionImage: drbd.io/drbd9-bionic:v9.0.25
  kernelModuleInjectionMode: ShippedModules
  kernelModuleInjectionResources: {}
  linstorHttpsClientSecret: ""
  priorityClassName: ""
  resources: {}
  satelliteImage: drbd.io/linstor-satellite:v1.10.0
  serviceAccountName: ""
  sslSecret: null
  storagePools:
    lvmPools: []
    lvmThinPools: []
    zfsPools: []
  tolerations: []

Save and exit, then check the storage pool status

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor sp list              
+---------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool          | Node      | Driver   | PoolName                          | FreeCapacity | TotalCapacity | CanSnapshots | State |
|=======================================================================================================================================|
| DfltDisklessStorPool | k8smaster | DISKLESS |                                   |              |               | False        | Ok    |
| DfltDisklessStorPool | k8snode1  | DISKLESS |                                   |              |               | False        | Ok    |
| autopool-sdb         | k8smaster | LVM_THIN | linstor_autopool-sdb/autopool-sdb |    29.93 GiB |     29.93 GiB | True         | Ok    |
| autopool-sdb         | k8snode1  | LVM_THIN | linstor_autopool-sdb/autopool-sdb |    29.93 GiB |     29.93 GiB | True         | Ok    |
+---------------------------------------------------------------------------------------------------------------------------------------+
root@k8smaster:~# 
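As an alternative to automaticStorageType, the storage pools can also be declared explicitly under storagePools in the same LinstorSatelliteSet resource. The sketch below is an assumption based on the lvmThinPools field visible in the spec above (field names should be checked against the operator version in use), and it expects the volume group and thin LV to exist already:

  storagePools:
    lvmThinPools:
    - name: autopool-sdb                    # assumed pool name
      volumeGroup: linstor_autopool-sdb     # existing VG
      thinVolume: autopool-sdb              # existing thin LV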

Create a PV in k8s

Create a storage class

Create a new storageclass.yaml file with the following content

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-2-replicas
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
parameters:
  autoPlace: "2"
  storagePool: autopool-sdb
  resourceGroup: linstor-2-replicas

Run the creation

root@k8smaster:~# kubectl apply -f storageclass.yaml 
storageclass.storage.k8s.io/linstor-2-replicas created

Check status

root@k8smaster:~# kubectl get storageclass
NAME                 PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
linstor-2-replicas   linstor.csi.linbit.com   Delete          Immediate           true                   41s

Create a PVC

Create a new demo-pvc.yaml file with the following content

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  storageClassName: linstor-2-replicas
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Run the creation

root@k8smaster:~# kubectl apply -f demo-pvc.yaml 
persistentvolumeclaim/demo-claim created

Check status

root@k8smaster:~# kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
datadir-linstor-etcd-0   Bound    linstor-etcd-pv-k8snode1                   1Gi        RWO                                 115m
demo-claim               Bound    pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214   100Mi      RWO            linstor-2-replicas   7s
root@k8smaster:~# 

Check the PV

root@k8smaster:~# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS         REASON   AGE
linstor-etcd-pv-k8smaster                  1Gi        RWO            Retain           Available                                                                  118m
linstor-etcd-pv-k8snode1                   1Gi        RWO            Retain           Bound       default/datadir-linstor-etcd-0                                 118m
pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214   100Mi      RWO            Delete           Bound       default/demo-claim               linstor-2-replicas            2m12s
root@k8smaster:~# 

Check LINSTOR resource and DRBD resource status

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor resource list
+---------------------------------------------------------------------------------------------------------------+
| ResourceName                             | Node      | Port | Usage  | Conns |    State | CreatedOn           |
|===============================================================================================================|
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8smaster | 7000 | Unused | Ok    | UpToDate | 2020-11-30 07:41:37 |
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8snode1  | 7000 | Unused | Ok    | UpToDate | 2020-11-30 07:41:37 |
+---------------------------------------------------------------------------------------------------------------+

root@k8smaster:~# drbdadm status
pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 role:Secondary
  disk:UpToDate
  k8snode1 role:Secondary
    peer-disk:UpToDate
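The backing volumes (device path, allocated size and so on) can also be inspected from the controller; linstor volume list is part of the standard client:

kubectl exec deployment.apps/linstor-cs-controller -- linstor volume list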

PV usage test

Mount the PV into a container in a pod and use it. First create a busybox-pod.yaml file for the pod; the container it runs is busybox, a small Linux command-line toolkit. The yaml file content is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
spec:
  nodeName: k8smaster
  containers:
  - image: busybox
    name: busybox1
    command: ["init","tail","-f","/dev/null"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-claim

What it declares: the container runs on node k8smaster, the image used is busybox, the volume mount path is /mnt/data, and the volume used is the demo-claim created earlier

Run the creation

root@k8smaster:~# kubectl apply -f busybox-pod.yaml 
pod/busybox-pod created

Check the pod status

root@k8smaster:~# kubectl get pod busybox-pod
NAME          READY   STATUS    RESTARTS   AGE
busybox-pod   1/1     Running   0          2m16s
root@k8smaster:~# 

Check LINSTOR resource and DRBD resource status

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor resource list
+---------------------------------------------------------------------------------------------------------------+
| ResourceName                             | Node      | Port | Usage  | Conns |    State | CreatedOn           |
|===============================================================================================================|
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8smaster | 7000 | InUse  | Ok    | UpToDate | 2020-11-30 07:41:37 |
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8snode1  | 7000 | Unused | Ok    | UpToDate | 2020-11-30 07:41:37 |
+---------------------------------------------------------------------------------------------------------------+

root@k8smaster:~# drbdadm status
pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 role:Primary
  disk:UpToDate
  k8snode1 role:Secondary
    peer-disk:UpToDate

The resource on k8smaster is now in the InUse state

Check system device information

root@k8smaster:~# lsblk
NAME                                                  MAJ:MIN  RM  SIZE RO TYPE MOUNTPOINT
loop0                                                   7:0     0 55.4M  1 loop /snap/core18/1932
loop1                                                   7:1     0 10.3M  1 loop /snap/helm/308
loop2                                                   7:2     0   31M  1 loop /snap/snapd/9721
sda                                                     8:0     0  100G  0 disk 
└─sda1                                                  8:1     0  100G  0 part /
sdb                                                     8:16    0   30G  0 disk 
├─linstor_autopool--sdb-autopool--sdb_tmeta           253:0     0   32M  0 lvm  
│ └─linstor_autopool--sdb-autopool--sdb-tpool         253:2     0   30G  0 lvm  
│   ├─linstor_autopool--sdb-autopool--sdb             253:3     0   30G  1 lvm  
│   └─linstor_autopool--sdb-pvc--d4fdd350--0640--42ee--97e2--bd13e89d1214_00000
│                                                     253:4     0  104M  0 lvm  
│     └─drbd1000                                      147:1000  0  104M  0 disk /var/lib/kubelet/pods/193848eb-30ee-4287-a5bd-0f28f5693689/volumes/kubernetes.io~csi/pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214/mount
└─linstor_autopool--sdb-autopool--sdb_tdata           253:1     0   30G  0 lvm  
  └─linstor_autopool--sdb-autopool--sdb-tpool         253:2     0   30G  0 lvm  
    ├─linstor_autopool--sdb-autopool--sdb             253:3     0   30G  1 lvm  
    └─linstor_autopool--sdb-pvc--d4fdd350--0640--42ee--97e2--bd13e89d1214_00000
                                                      253:4     0  104M  0 lvm  
      └─drbd1000                                      147:1000  0  104M  0 disk /var/lib/kubelet/pods/193848eb-30ee-4287-a5bd-0f28f5693689/volumes/kubernetes.io~csi/pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214/mount
sr0                                                    11:0     1 1024M  0 rom  
root@k8smaster:~# 

Here you can see the VG, LV and DRBD resources that k8s created on the system through the LINSTOR containers
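The LVM side can also be inspected directly with the standard tools (an optional extra check):

vgs linstor_autopool-sdb
lvs linstor_autopool-sdb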

I/O test

Performed from inside the busybox interactive shell
Run the command to enter interactive mode

root@k8smaster:~# kubectl exec -it busybox-pod -- sh
/ # 

Check the mount information for /mnt/data

The mounted device is /dev/drbd1000

/ # cd /mnt/data
/mnt/data # ls
lost+found
/mnt/data # df -h .
Filesystem                Size      Used Available Use% Mounted on
/dev/drbd1000            96.7M      1.5M     87.9M   2% /mnt/data
/mnt/data # 

Create a file

/mnt/data # echo "hello from busybox on k8smaster" > ./hi.txt
/mnt/data # ls
hi.txt      lost+found
/mnt/data # cat hi.txt 
hello from busybox on k8smaster
/mnt/data # exit
root@k8smaster:~# 

Mirror test

Since the PV was created on both nodes, it can be mounted on k8snode1 to check whether the data has been replicated

Delete the pod on k8smaster

root@k8smaster:~# kubectl delete -f busybox-pod.yaml 
pod "busybox-pod" deleted

Modify the busybox-pod.yaml file

Change the node it runs on to k8snode1

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
spec:
  nodeName: k8snode1
  containers:
  - image: busybox
    name: busybox1
    command: ["init","tail","-f","/dev/null"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-claim

Run the creation

root@k8smaster:~# kubectl apply -f busybox-pod.yaml       
pod/busybox-pod created

root@k8smaster:~# kubectl get pod busybox-pod
NAME          READY   STATUS    RESTARTS   AGE
busybox-pod   1/1     Running   0          2m34s

Check LINSTOR resource and DRBD resource status

The node where the resource is InUse has changed to k8snode1

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor resource list
+---------------------------------------------------------------------------------------------------------------+
| ResourceName                             | Node      | Port | Usage  | Conns |    State | CreatedOn           |
|===============================================================================================================|
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8smaster | 7000 | Unused | Ok    | UpToDate | 2020-11-30 07:41:37 |
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8snode1  | 7000 | InUse  | Ok    | UpToDate | 2020-11-30 07:41:37 |
+---------------------------------------------------------------------------------------------------------------+
root@k8smaster:~# drbdadm status
pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 role:Secondary
  disk:UpToDate
  k8snode1 role:Primary
    peer-disk:UpToDate

root@k8smaster:~# 

Check the data under /mnt/data

The data has been replicated to the k8snode1 node

root@k8smaster:~# kubectl exec -it busybox-pod -- sh 
/ # cd /mnt/data/
/mnt/data # ls
hi.txt      lost+found
/mnt/data # cat hi.txt 
hello from busybox on k8smaster
/mnt/data # exi
sh: exi: not found
/mnt/data # exit
command terminated with exit code 127
root@k8smaster:~# 

Snapshot test

List LINSTOR snapshots

None yet

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor snapshot list
+-----------------------------------------------------------------------+
| ResourceName | SnapshotName | NodeNames | Volumes | CreatedOn | State |
|=======================================================================|
+-----------------------------------------------------------------------+

Create a volume snapshot class

Create a snapshot-class.yaml file with the following content

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: linstor-snapshot
driver: linstor.csi.linbit.com
deletionPolicy: Delete

Run the creation and check status

root@k8smaster:~# kubectl apply -f snapshot-class.yaml 
volumesnapshotclass.snapshot.storage.k8s.io/linstor-snapshot created
root@k8smaster:~# kubectl get volumesnapshotclass
NAME               DRIVER                   DELETIONPOLICY   AGE
linstor-snapshot   linstor.csi.linbit.com   Delete           14s
root@k8smaster:~# 

Create a snapshot

Create a file named snapshot-demo.yaml that declares a snapshot named test-snapshot of demo-claim using the linstor-snapshot VolumeSnapshotClass created above. The content is as follows

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  volumeSnapshotClassName: linstor-snapshot
  source:
    persistentVolumeClaimName: demo-claim

Run the creation and check status

The snapshot has been created

root@k8smaster:~# kubectl apply -f snapshot-demo.yaml 
volumesnapshot.snapshot.storage.k8s.io/test-snapshot created
root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor snapshot list
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ResourceName                             | SnapshotName                                  | NodeNames           | Volumes    | CreatedOn           | State      |
|================================================================================================================================================================|
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | snapshot-25d4163e-700d-487b-87be-122f24089b7b | k8smaster, k8snode1 | 0: 100 MiB | 2020-11-30 08:37:01 | Successful |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
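On the Kubernetes side the snapshot object can be checked as well; in this CRD version the READYTOUSE column should show true once LINSTOR reports the snapshot as Successful:

kubectl get volumesnapshot test-snapshot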

Enter the busybox container interactively and modify the data

Delete the hi.txt file created earlier

root@k8smaster:~# kubectl exec -it busybox-pod -- sh                                         
/ # cd /mnt/data/
/mnt/data # ls
hi.txt      lost+found
/mnt/data # rm hi.txt 
/mnt/data # ls
lost+found
/mnt/data # exit
root@k8smaster:~# 

Restore the snapshot just created into a new PVC

Create a snapshot-restore.yaml file that declares a new PVC named busybox-restore restored from test-snapshot, still using the linstor-2-replicas storage class. The content is as follows

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-restore
spec:
  dataSource:
    name: test-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: linstor-2-replicas

Run the creation and check the PVC status

A new PVC named busybox-restore has been created

root@k8smaster:~# vi snapshot-restore.yaml
root@k8smaster:~# kubectl apply -f snap
snap/                  snapshot-class.yaml    snapshot-demo.yaml     snapshot-restore.yaml  
root@k8smaster:~# kubectl apply -f snapshot-restore.yaml 
persistentvolumeclaim/busybox-restore created
root@k8smaster:~# kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
busybox-restore          Bound    pvc-ecb27630-1cfd-4f03-a632-82749b899cd6   100Mi      RWO            linstor-2-replicas   12s
datadir-linstor-etcd-0   Bound    linstor-etcd-pv-k8snode1                   1Gi        RWO                                 3h
demo-claim               Bound    pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214   100Mi      RWO            linstor-2-replicas   65m
root@k8smaster:~# 

Check LINSTOR resource and DRBD resource status

After the restore, a new DRBD resource has been created

root@k8smaster:~# kubectl exec deployment.apps/linstor-cs-controller -- linstor r list       
+---------------------------------------------------------------------------------------------------------------+
| ResourceName                             | Node      | Port | Usage  | Conns |    State | CreatedOn           |
|===============================================================================================================|
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8smaster | 7000 | Unused | Ok    | UpToDate | 2020-11-30 07:41:37 |
| pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 | k8snode1  | 7000 | InUse  | Ok    | UpToDate | 2020-11-30 07:41:37 |
| pvc-ecb27630-1cfd-4f03-a632-82749b899cd6 | k8smaster | 7001 | Unused | Ok    | UpToDate | 2020-11-30 08:46:27 |
| pvc-ecb27630-1cfd-4f03-a632-82749b899cd6 | k8snode1  | 7001 | Unused | Ok    | UpToDate | 2020-11-30 08:46:27 |
+---------------------------------------------------------------------------------------------------------------+
root@k8smaster:~# drbdadm status
pvc-d4fdd350-0640-42ee-97e2-bd13e89d1214 role:Secondary
  disk:UpToDate
  k8snode1 role:Primary
    peer-disk:UpToDate

pvc-ecb27630-1cfd-4f03-a632-82749b899cd6 role:Secondary
  disk:UpToDate
  k8snode1 role:Secondary
    peer-disk:UpToDate

Mount the busybox-restore PVC into busybox and check

Delete the busybox pod

Changing the PVC of the running pod in place was attempted and failed, so the pod is deleted first

root@k8smaster:~# kubectl delete -f busybox-pod.yaml 
pod "busybox-pod" deleted

Edit the yaml file and change the PVC to busybox-restore

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
spec:
  nodeName: k8snode1
  containers:
  - image: busybox
    name: busybox1
    command: ["init","tail","-f","/dev/null"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: busybox-restore

Re-create the pod and check the data

The data has been restored

root@k8smaster:~# kubectl apply -f busybox-pod.yaml       
pod/busybox-pod created
root@k8smaster:~# kubectl exec -it busybox-pod -- sh     
/ # cd /mnt/data
/mnt/data # ls
hi.txt      lost+found
/mnt/data # cat hi.txt 
hello from busybox on k8smaster
/mnt/data # exit
root@k8smaster:~# 
