Ceph ansible deployment steps:

    1.  git clone -b stable-3.0 https://github.com/ceph/ceph-ansible.git ; cd ceph-ansible

    2.  Generate the inventory:
        cat > inventory <<EOF
        ...
        EOF

    3.  Generate group_vars/all.yml:
        cat > all.yml <<EOF
        ...
        EOF
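    The heredoc contents were not preserved in these notes. As a rough sketch for a
    three-node cluster (host names and group layout are assumptions, following the
    stable-3.0 group conventions), the inventory and the playbook run would look
    roughly like:

        # inventory (illustrative)
        [mons]
        node1
        node2
        node3

        [mgrs]
        node1

        [osds]
        node1
        node2
        node3

        # group_vars/all.yml then holds cluster-wide settings (e.g. ceph_origin,
        # ceph_stable_release, monitor_interface, public_network), after which:
        cp site.yml.sample site.yml
        ansible-playbook -i inventory site.yml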

Steps for using Ceph with k8s:

    doc:
        https://docs.openshift.org/3.6/install_config/storage_examples/ceph_rbd_dynamic_example.html

    1.  On the Ceph cluster, create the pool that k8s will use to store data:
        ceph osd pool create kube 128

    Note:
        # 128 is the pg count; the general rules for choosing pg_num are:

        Fewer than 5 OSDs: set pg_num to 128.
        5–10 OSDs: set pg_num to 512.
        10–50 OSDs: set pg_num to 1024.
        More than 50 OSDs: use the calculation below (see the placement-groups doc).

        http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/

        pg count = (number of OSDs * 100) / pool replica count / number of pools

        # Check the pool replica count, i.e. osd_pool_default_size set in ceph.conf
           ceph osd dump | grep size | grep rbd

        # When the number of OSDs, the pool replica count, or the number of pools
        # changes, recalculate and update the pg count.
        # When changing pg_num, change pgp_num along with it, otherwise the cluster
        # will raise a warning.
           ceph osd pool set rbd pg_num 256
           ceph osd pool set rbd pgp_num 256
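        As a worked example of the formula above (the numbers are illustrative, not
        from this cluster): with 9 OSDs, a replica count of 3 and 3 pools,
        9 * 100 / 3 / 3 = 100, which is then rounded up to the next power of two,
        giving pg_num = 128 per pool.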

    2.  Create the secret for authentication:
        1.  Create a client.kube user:
            ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

        2.  Convert the key of the client.kube user to base64:
            ceph auth get-key client.kube | base64
            (output: QVFCRE1aRmFhdWdFQWhBQUI5a21HbUVXRTgwQ2xJSWFJTVphTUE9PQ==)

        3.  Create ceph-secret.yaml:
            cat > ceph-secret.yaml <<EOF
            ...
            EOF

    3.  Create ceph-storage.yaml (the StorageClass):
        cat > ceph-storage.yaml <<EOF
        ...
        EOF
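    The contents of the two files were not preserved here. A minimal sketch of what
    they typically look like for the in-tree kubernetes.io/rbd provisioner, following
    the OpenShift doc linked above (the monitor address, namespace and admin secret
    name are assumptions; the key is the base64 value from step 2, and the class name
    matches the "ceph-rbd" storageClassName used further below):

        # ceph-secret.yaml -- user secret for the kube pool
        apiVersion: v1
        kind: Secret
        metadata:
          name: ceph-secret
        type: kubernetes.io/rbd
        data:
          key: QVFCRE1aRmFhdWdFQWhBQUI5a21HbUVXRTgwQ2xJSWFJTVphTUE9PQ==

        # ceph-storage.yaml -- StorageClass for dynamic RBD provisioning
        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: ceph-rbd
        provisioner: kubernetes.io/rbd
        parameters:
          monitors: 172.17.32.2:6789         # assumed monitor address
          adminId: admin
          adminSecretName: ceph-secret-admin # assumed secret holding the client.admin key
          adminSecretNamespace: kube-system
          pool: kube
          userId: kube
          userSecretName: ceph-secret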

Ceph cluster installation steps, method 2 (ceph-deploy):

    1.  mkdir myceph; cd myceph

    2. ceph-deploy new node1 node2 …..

    3.  cat >> ceph.conf <<EOF
        ...
        EOF
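    The heredoc content is not preserved; as an assumed sketch, a typical minimal set
    of additions would be something like (values are placeholders, adjust to the
    actual environment):

        public_network = 192.168.0.0/24
        osd_pool_default_size = 3
        osd_pool_default_min_size = 2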

- [ ] StatefulSet spec:
  volumeClaimTemplates:
  - metadata:
      name: orderer-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "ceph-rbd"
      resources:
        requests:
          storage: 2Gi
  - metadata:
      name: orderer-block
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "cephfs"
      resources:
        requests:
          storage: 100Mi

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: {{ .Values.global.fabricMspPvcName }}
    namespace: {{ .Release.Namespace }}
    annotations:
      "helm.sh/created": {{.Release.Time.Seconds | quote }}
      "helm.sh/hook": pre-install
      "helm.sh/resource-policy": keep
spec:
    accessModes:
      - ReadOnlyMany
    resources:
      requests:
        storage: 1Gi
{{- if eq .Values.storage.type "gluster" }}
    volumeName: {{ .Values.persistence.fabricMspPvName }}
{{- else if eq .Values.storage.type "ceph" }}
    storageClassName: {{ .Values.storage.className }}
{{- end }}

{{- if eq .Values.storage.type "gluster" }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.persistence.fabricMspPvName}}
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1Gi
  glusterfs:
    endpoints: gluster-endpoints
    path: {{ .Values.persistence.fabricGlusterVolumeName }}
  persistentVolumeReclaimPolicy: Retain
{{- end }}
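For reference, a hedged sketch of the values these two templates assume (the key
paths are taken from the templates themselves; the concrete values are placeholders):

    # values.yaml (illustrative values only)
    global:
      fabricMspPvcName: fabric-msp-pvc
    storage:
      type: ceph            # or "gluster"
      className: ceph-rbd   # StorageClass used when storage.type is "ceph"
    persistence:
      fabricMspPvName: fabric-msp-pv
      fabricGlusterVolumeName: fabric-msp-volume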

Ceph upgrade to mimic:

A. Upgrade the packages:
    ceph-deploy install --release=mimic  --nogpgcheck --no-adjust-repos  pk8s1 pk8s2 pk8s3 
B.  Restart the daemons and upgrade the client:
    1.  Restart the monitor service:
        systemctl restart ceph-mon.target
    2.  Restart the osd service:
        systemctl restart ceph-osd.target
    3.  Restart the mds service:
        systemctl restart ceph-mds.target
    4.  Upgrade the client:
        yum -y install ceph-common
    5.  Restart the mgr service:
        systemctl restart ceph-mgr.target
C. Check that everything is healthy:
    1.  ceph mon stat
    2. ceph osd stat
    3. ceph mds stat
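    In addition to the per-daemon stat commands above, overall cluster health and the
    release each daemon is running can be checked with:

        ceph -s
        ceph versions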

D. Enable the new dashboard:
    1. ceph mgr module enable dashboard
    2. ceph dashboard create-self-signed-cert
    3. ceph config set mgr mgr/dashboard/server_addr $IP
    4. Or set the address per mgr instance:
        ceph config set mgr mgr/dashboard/pk8s1/server_addr $IP
        ceph config set mgr mgr/dashboard/pk8s2/server_addr $IP
        ceph config set mgr mgr/dashboard/pk8s3/server_addr $IP
    5. ceph dashboard set-login-credentials <username> <password>
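    The URL the dashboard is actually serving on can then be confirmed with:

        ceph mgr services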

Mounting cephfs inside a container:

    mount.ceph 172.17.32.2:/ /mnt -o name=admin,secret=AQA2wjBbMljPKBAAID24oKDVT9NGuUxHzpo+1w==
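    An alternative (not from the original notes) that avoids putting the key on the
    command line is to reference a secret file instead:

        # /etc/ceph/admin.secret contains only the key string
        mount.ceph 172.17.32.2:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret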

Ceph backup:

    Two cases:
        1.  A pvc.yaml you create yourself:
              annotations:
                    "helm.sh/hook": pre-install
                    "helm.sh/hook-delete-policy": "before-hook-creation"
            (With this delete policy, neither helm upgrade nor helm install will
            complain that the resource already exists.)

Scenarios for mounting a ceph-rbd image:
    1.  The image is already mounted in a pod; it can then be mounted a second time
        outside the k8s cluster (i.e. locally).

    2.  If the image is already mounted locally, it can no longer be mounted when the
        pod restarts, and an error like the following is reported:


        Once the locally mounted image is unmounted, the pod recovers automatically.

    3.  Since rbd in k8s only supports the ReadWriteOnce and ReadOnlyMany access
        modes, and writes to the rbd image are required, only ReadWriteOnce can be
        used. If the same pvc is referenced from two different pods, an error like
        the following is reported:

    4.  Use a python script to mount the original rbd image used by a pod:
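        (The python script itself is not included in these notes; as an assumed
        sketch, the underlying CLI steps such a script would wrap are roughly the
        following. The kube pool, the client.kube user and its keyring file come from
        the sections above; the PV and image names are placeholders.)

            # find the rbd image backing the pod's PV (stored in the PV spec)
            kubectl get pv <pv-name> -o jsonpath='{.spec.rbd.image}'

            # map and mount it locally with the kernel rbd client
            # (rbd map prints the device name, typically /dev/rbd0)
            rbd map kube/<image-name> --id kube --keyring ./ceph.client.kube.keyring
            mount /dev/rbd0 /mnt

            # clean up when done, otherwise the pod cannot remount the image (case 2)
            umount /mnt
            rbd unmap /dev/rbd0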