Kubernetes StorageClass: Dynamically Provisioning PVCs with Ceph Nautilus (14.2.22) RBD

Part I. Deploying Ceph Nautilus

Prepare three CentOS 7 virtual machines:

Host            Hostname    Ceph components      Spec
192.168.87.200  ceph-mgr1   mgr, mon, osd, mds   2 CPU cores, 2 GB RAM, 3 x 20 GB disks
192.168.87.201  ceph-mon1   mon, osd, mds        2 CPU cores, 2 GB RAM, 3 x 20 GB disks
192.168.87.202  ceph-osd1   mon, osd, mds        2 CPU cores, 2 GB RAM, 3 x 20 GB disks

1. Configure a static IP on each host

vim /etc/sysconfig/network-scripts/ifcfg-ens33

service network restart
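
A minimal static-IP configuration for ifcfg-ens33 might look like the sketch below; the address is the one from the host table above, while the gateway and DNS values are assumptions to adapt to your own network:

# /etc/sysconfig/network-scripts/ifcfg-ens33 (example for ceph-mgr1)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.87.200
NETMASK=255.255.255.0
GATEWAY=192.168.87.2
DNS1=114.114.114.114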

2. Set the hostname on each host

# The hostname must match the names used in /etc/hosts and in the ceph-deploy commands below.

hostnamectl set-hostname ceph-mgr1 && bash

hostnamectl set-hostname ceph-mon1 && bash

hostnamectl set-hostname ceph-osd1 && bash

3. Configure the hosts file on each host

vim /etc/hosts

192.168.87.200 ceph-mgr1

192.168.87.201 ceph-mon1

192.168.87.202 ceph-osd1

4. Set up passwordless SSH trust between the hosts (run on every host)

ssh-keygen -t rsa 

ssh-copy-id ceph-mgr1

ssh-copy-id ceph-mon1

ssh-copy-id ceph-osd1

5. Disable the firewall on each host

systemctl stop firewalld ; systemctl disable firewalld

6. Disable SELinux on each host

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Note: the change becomes permanent only after a reboot.
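
To switch SELinux off for the current boot as well, without waiting for the reboot, you can additionally run:

setenforce 0      # permissive for the running system
getenforce        # verify: Permissive now, Disabled after the reboot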

7. Configure the yum repositories on each host

(1) Configure the Alibaba Cloud CentOS base repo (see the CentOS mirror page on the Alibaba open-source mirror site).

(2) Configure the Alibaba Cloud EPEL repo (see the EPEL mirror page on the Alibaba open-source mirror site).
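
As a sketch of this step, both repo files can usually be fetched with curl; the URLs below are the commonly published Alibaba Cloud mirror paths and are worth double-checking against the mirror site:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo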

(3) Configure ceph.repo

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

After configuring the repos:
yum clean all    # clear the yum cache
yum makecache    # rebuild the metadata cache
yum -y update    # update packages

8. Install iptables-services on each host (and keep it disabled)

yum install iptables-services -y  

service iptables stop && systemctl disable iptables

9. Configure time synchronization on each host

service ntpd stop
ntpdate cn.pool.ntp.org
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org   # sync once per hour
service crond restart
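
Since monitor quorum is sensitive to clock skew (see the mon clock drift settings in ceph.conf later on), it is worth spot-checking that all three nodes agree on the time; a quick sketch:

for h in ceph-mgr1 ceph-mon1 ceph-osd1; do ssh $h date; done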

10. Install base packages on each host

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet deltarpm

11. Install other dependencies on each host

# Import the Ceph release key

rpm --import 'https://download.ceph.com/keys/release.asc'

sudo yum install --nogpgcheck -y epel-release

sudo yum install -y yum-utils

sudo yum install -y yum-plugin-priorities

sudo yum install -y snappy leveldb gdisk gperftools-libs

12. Install ceph-deploy

# On the ceph-mgr1 node:

yum install python-setuptools ceph-deploy -y

yum install ceph ceph-radosgw -y

# On the ceph-mon1 node:

yum install ceph ceph-radosgw -y

# On the ceph-osd1 node:

yum install ceph ceph-radosgw -y

# Check the Ceph version:

ceph --version

13. Create and initialize the monitor nodes

Run on the ceph-mgr1 node:

cd /etc/ceph

ceph-deploy install --no-adjust-repos ceph-mgr1 ceph-mon1 ceph-osd1

ceph-deploy new ceph-mgr1 ceph-mon1 ceph-osd1

Push the admin keyring and ceph.conf so these hosts can run Ceph commands with admin privileges (note: the admin keyring is generated by the `ceph-deploy mon create-initial` step below; if this command complains that ceph.client.admin.keyring does not exist yet, rerun it after that step):

ceph-deploy admin ceph-mgr1 ceph-mon1 ceph-osd1

The following files are generated under /etc/ceph (the directory we ran ceph-deploy from):

ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

vim /etc/ceph/ceph.conf

Edit the configuration as follows:

[global]
fsid = 6bdce030-db46-4af3-9107-d899651d94fb
mon_initial_members = ceph-mgr1, ceph-mon1, ceph-osd1
mon_host = 192.168.87.200,192.168.87.201,192.168.87.202
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_max_pg_per_osd = 8000
osd_pool_default_size = 2
mon clock drift allowed = 0.500
mon clock drift warn backoff = 10

Deploy the initial monitors and gather all the keys:

ceph-deploy mon create-initial
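
After this step the gathered keyrings land in the working directory (/etc/ceph here) and the monitors should form quorum; a quick sanity check, as a sketch:

ls /etc/ceph        # expect ceph.client.admin.keyring and ceph.bootstrap-*.keyring next to ceph.conf
ceph mon stat       # all three monitors should be listed and in quorum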

14. Deploy the OSDs

Run on the ceph-mgr1 node:

ceph-deploy osd create ceph-mgr1 --data /dev/sdb
ceph-deploy osd create ceph-mon1 --data /dev/sdb
ceph-deploy osd create ceph-osd1 --data /dev/sdb 
ceph-deploy osd list ceph-mgr1 ceph-mon1 ceph-osd1
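
To confirm that all three OSDs came up, a quick check:

ceph osd tree       # all three OSDs should be up and in
ceph -s             # overall health and OSD/PG counts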

15. Create a CephFS file system

Create the MDS daemons:

ceph-deploy mds create ceph-mgr1 ceph-mon1 ceph-osd1

List the current Ceph file systems:

ceph fs ls

Create the storage pools:

ceph osd pool create cephfs_data 128

ceph osd pool create cephfs_metadata 128

About creating the pools:
Choosing a pg_num value is mandatory, because it cannot be calculated automatically. A few commonly used values:
* fewer than 5 OSDs: set pg_num to 128
* 5 to 10 OSDs: set pg_num to 512
* 10 to 50 OSDs: set pg_num to 4096
* more than 50 OSDs: you need to understand the trade-offs and calculate pg_num yourself
* the pgcalc tool can help when calculating pg_num yourself
As the number of OSDs grows, choosing the right pg_num matters more, because it significantly affects cluster behavior and data durability when things go wrong (i.e., the probability that a catastrophic event causes data loss).
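As a rough worked example of where these numbers come from: pgcalc-style sizing targets on the order of 100 PGs per OSD divided by the replica count, so with the 3 OSDs and osd_pool_default_size = 2 used here that is (3 × 100) / 2 = 150 PGs in total across all pools; the 128 used above simply follows the "fewer than 5 OSDs" row of the table.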
Create the file system:
ceph fs new myitsite cephfs_metadata cephfs_data
Check the cluster status:
ceph -s

16. Deploy the MGR daemon to expose cluster information

ceph-deploy mgr create ceph-mgr1

17. Problems encountered

Problem 1: mons are allowing insecure global_id reclaim

Fix:

sudo ceph config set mon auth_allow_insecure_global_id_reclaim false

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Problem 2: Module 'restful' has failed dependency: No module named 'pecan'

Fix:

pip3 install pecan werkzeug

18. Configure the mgr dashboard module

sudo ceph mgr module enable dashboard

This may fail with the following error:

Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement

In that case install the dashboard package on the ceph-mgr1 node: sudo yum -y install ceph-mgr-dashboard

Generate and install a self-signed certificate:

sudo ceph dashboard create-self-signed-cert

Create a user with the administrator role:

sudo ceph dashboard set-login-credentials admin admin

Previously "admin admin" could be passed directly, but that no longer works; the password now has to be read from a file with -i, otherwise the command fails with usage text along the lines of
"dashboard set-login-credentials <username> : Set the login credentials. Password read from -i <file>".

echo admin > userpass
sudo ceph dashboard set-login-credentials admin -i userpass
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated

Check the ceph-mgr services: sudo ceph mgr services
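
The output is a small JSON map of enabled mgr modules to their endpoints; with the dashboard enabled it should look roughly like the following (hostname and port depend on your setup):

{
    "dashboard": "https://ceph-mgr1:8443/"
}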

Test access from a browser: https://192.168.87.200:8443


If you install a newer Ceph release (15 or later) on CentOS 7, installing the dashboard fails with a missing python-router3 dependency. Either use CentOS 8 or stick with the version used here; I have not found another workaround. You can refer to this article:

Ceph监控 - 何以.解忧 - 博客园

Part II. Using a StorageClass in Kubernetes to dynamically provision PVCs

Prerequisites:

(1) A working Kubernetes cluster.

(2) ceph-common installed on every Kubernetes node: copy the ceph.repo file from the Ceph nodes into /etc/yum.repos.d/ on each Kubernetes node, then run yum install ceph-common -y on every node.

(3) The /etc/ceph directory from the ceph-mgr1 node copied to /etc/ceph on every node of the Kubernetes cluster (see the sketch after this list).
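
A sketch of steps (2) and (3), run from ceph-mgr1 and assuming a Kubernetes node reachable as k8s-node1 (the node name is hypothetical; repeat for every node):

scp /etc/yum.repos.d/ceph.repo k8s-node1:/etc/yum.repos.d/
ssh k8s-node1 "yum install -y ceph-common"
scp -r /etc/ceph/* k8s-node1:/etc/ceph/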

1. Deploy the storage provisioner

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner2
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner2
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner2
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner2
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner2
    namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner2
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner2
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner2
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner2
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kube-system
  name: rbd-provisioner2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner2
    spec:
      containers:
        - name: rbd-provisioner2
          image: quay.io/external_storage/rbd-provisioner:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd2
          volumeMounts:
            - name: ceph-conf
              mountPath: /etc/ceph
      serviceAccount: rbd-provisioner2
      volumes:
        - name: ceph-conf
          hostPath:
            path: /etc/ceph
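
Save the manifest above (for example as rbd-provisioner.yaml; the file name is arbitrary), apply it, and check that the provisioner pod comes up:

kubectl apply -f rbd-provisioner.yaml
kubectl -n kube-system get pods -l app=rbd-provisioner2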

2. Create the secrets

First, create a storage pool on the ceph-mgr1 node.

Creating a dedicated client user (client.kube) also lets you restrict what it is allowed to do in Ceph:

ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'

How to obtain the key values used in the secrets below:

adminSecret: ceph auth get-key client.admin | base64

userSecret: ceph auth get-key client.kube | base64

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin2
  namespace: kube-system
type: ceph.com/rbd2
data:
  key: QVFERTB6NWlCV3lvTmhBQUx5R1FHQXB3NHhNMjBseG1FczNqVnc9PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-kube2
  namespace: kube-system
type: ceph.com/rbd2
data:
  key: QVFBN0VFQmlWR2ZRTlJBQWlRQno1Q1hUQm1SSndJY3RnaWlyVUE9PQ==
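
Apply both Secret manifests with kubectl apply -f. As an alternative sketch, run on a machine that has both kubectl access and the Ceph admin keyring, the same secrets can be created directly from the keys (kubectl base64-encodes --from-literal values itself, so the raw key is passed here):

kubectl -n kube-system create secret generic ceph-secret-admin2 \
  --type=ceph.com/rbd2 --from-literal=key="$(ceph auth get-key client.admin)"
kubectl -n kube-system create secret generic ceph-secret-kube2 \
  --type=ceph.com/rbd2 --from-literal=key="$(ceph auth get-key client.kube)"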

3. Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: ceph.com/rbd2
allowVolumeExpansion: true
parameters:
  monitors: 192.168.87.200:6789,192.168.87.201:6789,192.168.87.202:6789
  adminId: admin
  adminSecretName: ceph-secret-admin2
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube2
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
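
Apply the StorageClass and confirm it is registered (and marked as default):

kubectl apply -f ceph-storage-class.yaml    # the file name is arbitrary
kubectl get storageclass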

4. Create a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rbd-storageclass-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-storage-class
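
Apply the claim and watch it bind; the provisioner should create an RBD image in the kube pool and a matching PV:

kubectl apply -f ceph-rbd-pvc.yaml            # the file name is arbitrary
kubectl get pvc ceph-rbd-storageclass-pvc     # STATUS should become Bound
rbd ls -p kube                                # on a Ceph node: the dynamically created image should be listed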

5. Create a Pod to test the mount

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: ceph-rbd-storageclass-pod
  name: ceph-rbd-storageclass-pod
spec:
  containers:
    - name: ceph-rbd-nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: ceph-rbd
          mountPath: /mnt
          readOnly: false
  volumes:
    - name: ceph-rbd
      persistentVolumeClaim:
        claimName: ceph-rbd-storageclass-pvc
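
Once the Pod is Running, a quick way to verify that the RBD volume is mounted and writable (sketch):

kubectl apply -f ceph-rbd-pod.yaml                            # the file name is arbitrary
kubectl exec -it ceph-rbd-storageclass-pod -- df -h /mnt      # should show a /dev/rbd* device mounted on /mnt
kubectl exec -it ceph-rbd-storageclass-pod -- sh -c 'echo hello > /mnt/test.txt && cat /mnt/test.txt'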

Disclaimer: I am new to Kubernetes, so my understanding and explanation of some of its terminology and concepts may not be precise. The Ceph setup follows other people's blog posts and a course instructor's walkthrough, and is only meant to get an experiment running; do not use it in production. This post is purely a record of my own learning. If you hit problems while setting this up, please discuss in the comments.
