Deploying a Ceph Cluster with ceph-deploy

Quick Ceph Deploy

The cluster consists of two nodes (tom-1, tom-2), both running CentOS 7.1. The whole cluster is installed from tom-1 using ceph-deploy.

PREFLIGHT CHECKLIST

1. Add ceph repositories

The official mirrors are slow, so we use the yum repository provided by Aliyun:

[root@tom-1 yum.repos.d]# cat ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
priority=1

In addition, the EPEL repository is required:
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

The steps above must be performed on both tom-1 and tom-2.
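A quick way to apply the same configuration on both nodes is to push it from tom-1 (a sketch, assuming ssh access to tom-2 is already possible, by password or key):

```shell
# Push the Aliyun ceph.repo from tom-1 to tom-2, then install EPEL on both nodes
scp /etc/yum.repos.d/ceph.repo root@tom-2:/etc/yum.repos.d/ceph.repo
for host in tom-1 tom-2; do
  ssh root@$host 'yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm'
done
```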

2. Install NTP

To keep server clocks in sync, install an NTP service.
See:
[https://docs.openstack.org/ocata/install-guide-rdo/environment-ntp-controller.html]
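On CentOS 7 this boils down to a few commands on each node (a sketch; chrony, the CentOS 7 default, works equally well):

```shell
# Install and enable ntpd on each node
yum install -y ntp ntpdate
systemctl enable ntpd
systemctl start ntpd
ntpq -p    # verify that peers are being polled
```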

3. Enable password-less ssh

When installing the cluster with ceph-deploy, install commands are executed on the other nodes and configuration files are pushed to them, so password-less login is required.
On tom-1, run:

ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

Copy the key to tom-2:
ssh-copy-id tom-2
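You can verify that password-less login works before continuing:

```shell
# Should print "tom-2" without prompting for a password
ssh tom-2 hostname
```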

4. Open required ports

Either stop the firewall or open the ports Ceph uses:
allow 6789 (MON) and 6800:7300 (OSD).

iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT

iptables -A INPUT -i {iface} -m multiport -p tcp -s {ip-address}/{netmask} --dports 6800:7300 -j ACCEPT
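Substituting this document's public network (172.16.6.0/24) and a hypothetical interface name eth0, the rules would look like:

```shell
# Allow the monitor port from the cluster subnet (eth0 is an assumed interface name)
iptables -A INPUT -i eth0 -p tcp -s 172.16.6.0/24 --dport 6789 -j ACCEPT
# Allow the OSD port range
iptables -A INPUT -i eth0 -m multiport -p tcp -s 172.16.6.0/24 --dports 6800:7300 -j ACCEPT
# Persist the rules across reboots (requires the iptables-services package)
service iptables save
```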

5. Disable SELinux

setenforce 0
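setenforce 0 only lasts until the next reboot; to make the change permanent, also update /etc/selinux/config (a sketch):

```shell
# Switch SELinux to permissive mode persistently
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
```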

CREATE A CLUSTER

On tom-1:

  1. Create a directory to hold the configuration files and logs generated while deploying the cluster

    [root@tom-1 ~]# mkdir ceph-cluster
    [root@tom-1 ~]# cd ceph-cluster
  2. Initialize the cluster configuration

    [root@tom-1 ceph-cluster]# ceph-deploy new tom-1

    This makes tom-1 the monitor node; a cluster configuration file named ceph.conf is generated in the current directory.

    [root@tom-1 ceph-cluster]# echo "osd pool default size = 2" >> ceph.conf
    [root@tom-1 ceph-cluster]# echo "public network = 172.16.6.0/24" >> ceph.conf

    These lines set the default pool replica count to 2 (osd pool default size controls the number of replicas, not the number of OSDs) and the public network Ceph serves clients on (needed when a server has multiple NICs on different subnets).


    [root@tom-1 ceph-cluster]# cat ceph.conf 
    [global]
    fsid = c02c3880-2879-4ee8-93dc-af0e9dba3727
    mon_initial_members = tom-1
    mon_host = 172.16.6.249
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    
    osd pool default size = 2
    public network = 172.16.6.0/24
  3. Install ceph

    [root@tom-1 ceph-cluster]# ceph-deploy --username root install tom-{1,2}

    If the network is slow, this command may fail; in that case the required packages can be installed manually on each node:

    yum -y install yum-plugin-priorities
    yum -y install ceph ceph-radosgw
  4. Initialize the monitor node

    [root@tom-1 ceph-cluster]# ceph-deploy --overwrite-conf  mon create-initial
    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy --overwrite-conf mon create-initial
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  username                      : None
    [ceph_deploy.cli][INFO  ]  verbose                       : False
    [ceph_deploy.cli][INFO  ]  overwrite_conf                : True
    [ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
    [ceph_deploy.cli][INFO  ]  quiet                         : False
    [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2515c68>
    [ceph_deploy.cli][INFO  ]  cluster                       : ceph
    [ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x24c89b0>
    [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
    [ceph_deploy.cli][INFO  ]  default_release               : False
    [ceph_deploy.cli][INFO  ]  keyrings                      : None
    [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts tom-1
    [ceph_deploy.mon][DEBUG ] detecting platform for host tom-1 ...
    [tom-1][DEBUG ] connected to host: tom-1 
    [tom-1][DEBUG ] detect platform information from remote host
    [tom-1][DEBUG ] detect machine type
    [tom-1][DEBUG ] find the location of an executable
    [ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.1.1503 Core
    [tom-1][DEBUG ] determining if provided host has same hostname in remote
    [tom-1][DEBUG ] get remote short hostname
    [tom-1][DEBUG ] deploying mon to tom-1
    [tom-1][DEBUG ] get remote short hostname
    [tom-1][DEBUG ] remote hostname: tom-1
    [tom-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [tom-1][DEBUG ] create the mon path if it does not exist
    [tom-1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-tom-1/done
    [tom-1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-tom-1/done
    [tom-1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-tom-1.mon.keyring
    [tom-1][DEBUG ] create the monitor keyring file
    [tom-1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i tom-1 --keyring /var/lib/ceph/tmp/ceph-tom-1.mon.keyring --setuser 167 --setgroup 167
    [tom-1][DEBUG ] ceph-mon: renaming mon.noname-a 172.16.6.249:6789/0 to mon.tom-1
    [tom-1][DEBUG ] ceph-mon: set fsid to c02c3880-2879-4ee8-93dc-af0e9dba3727
    [tom-1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-tom-1 for mon.tom-1
    [tom-1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-tom-1.mon.keyring
    [tom-1][DEBUG ] create a done file to avoid re-doing the mon deployment
    [tom-1][DEBUG ] create the init path if it does not exist
    [tom-1][INFO  ] Running command: systemctl enable ceph.target
    [tom-1][INFO  ] Running command: systemctl enable ceph-mon@tom-1
    [tom-1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
    [tom-1][INFO  ] Running command: systemctl start ceph-mon@tom-1
    [tom-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tom-1.asok mon_status
    [tom-1][DEBUG ] ********************************************************************************
    [tom-1][DEBUG ] status for monitor: mon.tom-1
    [tom-1][DEBUG ] {
    [tom-1][DEBUG ]   "election_epoch": 3, 
    [tom-1][DEBUG ]   "extra_probe_peers": [], 
    [tom-1][DEBUG ]   "monmap": {
    [tom-1][DEBUG ]     "created": "2017-06-16 11:08:55.887144", 
    [tom-1][DEBUG ]     "epoch": 1, 
    [tom-1][DEBUG ]     "fsid": "c02c3880-2879-4ee8-93dc-af0e9dba3727", 
    [tom-1][DEBUG ]     "modified": "2017-06-16 11:08:55.887144", 
    [tom-1][DEBUG ]     "mons": [
    [tom-1][DEBUG ]       {
    [tom-1][DEBUG ]         "addr": "172.16.6.249:6789/0", 
    [tom-1][DEBUG ]         "name": "tom-1", 
    [tom-1][DEBUG ]         "rank": 0
    [tom-1][DEBUG ]       }
    [tom-1][DEBUG ]     ]
    [tom-1][DEBUG ]   }, 
    [tom-1][DEBUG ]   "name": "tom-1", 
    [tom-1][DEBUG ]   "outside_quorum": [], 
    [tom-1][DEBUG ]   "quorum": [
    [tom-1][DEBUG ]     0
    [tom-1][DEBUG ]   ], 
    [tom-1][DEBUG ]   "rank": 0, 
    [tom-1][DEBUG ]   "state": "leader", 
    [tom-1][DEBUG ]   "sync_provider": []
    [tom-1][DEBUG ] }
    [tom-1][DEBUG ] ********************************************************************************
    [tom-1][INFO  ] monitor: mon.tom-1 is running
    [tom-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tom-1.asok mon_status
    [ceph_deploy.mon][INFO  ] processing monitor mon.tom-1
    [tom-1][DEBUG ] connected to host: tom-1 
    [tom-1][DEBUG ] detect platform information from remote host
    [tom-1][DEBUG ] detect machine type
    [tom-1][DEBUG ] find the location of an executable
    [tom-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tom-1.asok mon_status
    [ceph_deploy.mon][INFO  ] mon.tom-1 monitor has reached quorum!
    [ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
    [ceph_deploy.mon][INFO  ] Running gatherkeys...
    [ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpzD6TIM
    [tom-1][DEBUG ] connected to host: tom-1 
    [tom-1][DEBUG ] detect platform information from remote host
    [tom-1][DEBUG ] detect machine type
    [tom-1][DEBUG ] get remote short hostname
    [tom-1][DEBUG ] fetch remote file
    [tom-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.tom-1.asok mon_status
    [tom-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.admin
    [tom-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.bootstrap-mds
    [tom-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.bootstrap-osd
    [tom-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.bootstrap-rgw
    [ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
    [ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
    [ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
    [ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
    [ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
    [ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpzD6TIM

    Note: The bootstrap-rgw keyring is only created during installation of clusters running Hammer or newer

    Note: If this process fails with a message similar to “Unable to find /etc/ceph/ceph.client.admin.keyring”, please ensure that the IP listed for the monitor node in ceph.conf is the Public IP, not the Private IP.

  5. Add OSDs

    An OSD consists of a data store and a journal. Here a directory on the OS filesystem is used (a block device can also be specified).
    Run the following on both tom-1 and tom-2:

    mkdir -p /ceph/osd/0 && chown -R ceph:ceph /ceph
    

    This creates the OSD data directory; the journal is created as a file inside it.

    Then on tom-1, run:

    [root@tom-1 ceph-cluster]# ceph-deploy osd prepare tom-{1,2}:/ceph/osd/0  
    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy osd prepare tom-1:/ceph/osd/0 tom-2:/ceph/osd/0
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  username                      : None
    [ceph_deploy.cli][INFO  ]  disk                          : [('tom-1', '/ceph/osd/0', None), ('tom-2', '/ceph/osd/0', None)]
    [ceph_deploy.cli][INFO  ]  dmcrypt                       : False
    [ceph_deploy.cli][INFO  ]  verbose                       : False
    [ceph_deploy.cli][INFO  ]  bluestore                     : None
    [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
    [ceph_deploy.cli][INFO  ]  subcommand                    : prepare
    [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
    [ceph_deploy.cli][INFO  ]  quiet                         : False
    [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1abb2d8>
    [ceph_deploy.cli][INFO  ]  cluster                       : ceph
    [ceph_deploy.cli][INFO  ]  fs_type                       : xfs
    [ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x1a6b2a8>
    [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
    [ceph_deploy.cli][INFO  ]  default_release               : False
    [ceph_deploy.cli][INFO  ]  zap_disk                      : False
    [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks tom-1:/ceph/osd/0: tom-2:/ceph/osd/0:
    [tom-1][DEBUG ] connected to host: tom-1 
    [tom-1][DEBUG ] detect platform information from remote host
    [tom-1][DEBUG ] detect machine type
    [tom-1][DEBUG ] find the location of an executable
    [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
    [ceph_deploy.osd][DEBUG ] Deploying osd to tom-1
    [tom-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [ceph_deploy.osd][DEBUG ] Preparing host tom-1 disk /ceph/osd/0 journal None activate False
    [tom-1][DEBUG ] find the location of an executable
    [tom-1][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /ceph/osd/0
    [tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
    [tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
    [tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
    [tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
    [tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
    [tom-1][WARNIN] populate_data_path: Preparing osd data dir /ceph/osd/0
    [tom-1][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/ceph_fsid.26575.tmp
    [tom-1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/ceph_fsid.26575.tmp
    [tom-1][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/fsid.26575.tmp
    [tom-1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/fsid.26575.tmp
    [tom-1][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/magic.26575.tmp
    [tom-1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/magic.26575.tmp
    [tom-1][INFO  ] checking OSD status...
    [tom-1][DEBUG ] find the location of an executable
    [tom-1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
    [ceph_deploy.osd][DEBUG ] Host tom-1 is now ready for osd use.
    [tom-2][DEBUG ] connected to host: tom-2 
    [tom-2][DEBUG ] detect platform information from remote host
    [tom-2][DEBUG ] detect machine type
    [tom-2][DEBUG ] find the location of an executable
    [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
    [ceph_deploy.osd][DEBUG ] Deploying osd to tom-2
    [tom-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [tom-2][WARNIN] osd keyring does not exist yet, creating one
    [tom-2][DEBUG ] create a keyring file
    [ceph_deploy.osd][DEBUG ] Preparing host tom-2 disk /ceph/osd/0 journal None activate False
    [tom-2][DEBUG ] find the location of an executable
    [tom-2][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /ceph/osd/0
    [tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
    [tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
    [tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
    [tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
    [tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
    [tom-2][WARNIN] populate_data_path: Preparing osd data dir /ceph/osd/0
    [tom-2][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/ceph_fsid.24644.tmp
    [tom-2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/ceph_fsid.24644.tmp
    [tom-2][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/fsid.24644.tmp
    [tom-2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/fsid.24644.tmp
    [tom-2][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/magic.24644.tmp
    [tom-2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/magic.24644.tmp
    [tom-2][INFO  ] checking OSD status...
    [tom-2][DEBUG ] find the location of an executable
    [tom-2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
    [ceph_deploy.osd][DEBUG ] Host tom-2 is now ready for osd use.
    

    Activate the OSDs:

    [root@tom-1 ceph-cluster]# ceph-deploy osd activate tom-{1,2}:/ceph/osd/0
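    After activation, you can confirm that both OSDs joined the cluster:

```shell
ceph osd stat    # expect '2 osds: 2 up, 2 in' once the daemons are running (see step 7 if they are not)
ceph osd tree    # shows osd.0 and osd.1 placed under their hosts
```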
    
  6. Distribute the configuration file and keys

    Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

    [root@tom-1 ceph-cluster]# ceph-deploy  --username root admin tom-{1,2}

    On both tom-1 and tom-2, run:

    [root@tom-1 ceph]# pwd
    /etc/ceph
    [root@tom-1 ceph]# ls -l
    total 12
    -rw------- 1 root root 129 Jun 16 15:58 ceph.client.admin.keyring
    -rw-r--r-- 1 root root 252 Jun 16 15:58 ceph.conf
    -rwxr-xr-x 1 root root  92 Sep 21  2016 rbdmap
    -rw------- 1 root root   0 Jun 16 11:07 tmp9g7ZGm
    -rw------- 1 root root   0 Jun 16 11:08 tmpB8roOG
    [root@tom-1 ceph]# chmod +r ceph.client.admin.keyring

    Make sure ceph.client.admin.keyring is readable.

  7. Check cluster status

    Run ceph health; if the cluster is healthy it returns HEALTH_OK.

    Note:
    If the OSDs sit on an ext4 filesystem, you will see something like this:

    [root@tom-1 ceph-cluster]# ceph health
    HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs stuck inactive
    [root@tom-1 ceph-cluster]# ceph -s
        cluster c02c3880-2879-4ee8-93dc-af0e9dba3727
         health HEALTH_ERR
                64 pgs are stuck inactive for more than 300 seconds
                64 pgs stuck inactive
         monmap e1: 1 mons at {tom-1=172.16.6.249:6789/0}
                election epoch 3, quorum 0 tom-1
         osdmap e5: 2 osds: 0 up, 0 in
                flags sortbitwise
          pgmap v6: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating

    The cluster is unhealthy, and /var/log/messages shows errors like:

    Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347940 7ff9d80dd800 -1 osd.0 0 backend (filestore) is unable to support max object name[space] len
    Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347949 7ff9d80dd800 -1 osd.0 0    osd max object name len = 2048
    Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347951 7ff9d80dd800 -1 osd.0 0    osd max object namespace len = 256
    Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347952 7ff9d80dd800 -1 osd.0 0 (36) File name too long
    Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.354561 7ff9d80dd800 -1  ** ERROR: osd init failed: (36) File name too long
    Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.354561 7ff9d80dd800 -1  ** ERROR: osd init failed: (36) File name too long
    Jun 16 15:53:32 tom-1 systemd: ceph-osd@0.service: main process exited, code=exited, status=1/FAILURE

    The official documentation explains the filesystem requirements:

    [http://docs.ceph.com/docs/master/rados/configuration/filesystem-recommendations/]

    We recommend against using ext4 due to limitations in the size of xattrs it can store, and the problems this causes with the way Ceph handles long RADOS object names. Although these issues will generally not surface with Ceph clusters using only short object names (e.g., an RBD workload that does not include long RBD image names), other users like RGW make extensive use of long object names and can break.

    Starting with the Jewel release, the ceph-osd daemon will refuse to start if the configured max object name cannot be safely stored on ext4. If the cluster is only being used with short object names (e.g., RBD only), you can continue using ext4 by setting the following configuration option:

    osd max object name len = 256
    osd max object namespace len = 64

    Note This may result in difficult-to-diagnose errors if you try to use RGW or other librados clients that do not properly handle or politely surface any resulting ENAMETOOLONG errors.

    Modify /etc/ceph/ceph.conf on both tom-1 and tom-2:

    [root@tom-1 0]# cat /etc/ceph/ceph.conf 
    [global]
    fsid = c02c3880-2879-4ee8-93dc-af0e9dba3727
    mon_initial_members = tom-1
    mon_host = 172.16.6.249
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    
    osd pool default size = 2
    public network = 172.16.6.0/24
    
    
    # for ext4
    
    osd max object name len = 256
    osd max object namespace len = 64

    Restart the OSD service:

    [root@tom-1 ceph-cluster]# systemctl restart ceph-osd@0.service 
    [root@tom-1 ceph-cluster]# systemctl status ceph-osd@0.service 
    ● ceph-osd@0.service - Ceph object storage daemon
       Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
       Active: active (running) since Fri 2017-06-16 17:22:35 CST; 22s ago
      Process: 15644 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
     Main PID: 15695 (ceph-osd)
       CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
               └─15695 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
    
    Jun 16 17:22:35 tom-1 systemd[1]: Starting Ceph object storage daemon...
    Jun 16 17:22:35 tom-1 ceph-osd-prestart.sh[15644]: create-or-move updated item name 'osd.0' weight 0.0279 at location {host=tom...sh map
    Jun 16 17:22:35 tom-1 systemd[1]: Started Ceph object storage daemon.
    Jun 16 17:22:35 tom-1 ceph-osd[15695]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
    Jun 16 17:22:35 tom-1 ceph-osd[15695]: 2017-06-16 17:22:35.933500 7f3193a75800 -1 journal FileJournal::_open: disabling aio fo... anyway
    Jun 16 17:22:35 tom-1 ceph-osd[15695]: 2017-06-16 17:22:35.985202 7f3193a75800 -1 osd.0 0 log_to_monitors {default=true}
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@tom-1 ceph-cluster]# ceph -s
        cluster c02c3880-2879-4ee8-93dc-af0e9dba3727
         health HEALTH_OK
         monmap e1: 1 mons at {tom-1=172.16.6.249:6789/0}
                election epoch 3, quorum 0 tom-1
         osdmap e10: 2 osds: 2 up, 2 in
                flags sortbitwise
          pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
                33896 MB used, 22017 MB / 58515 MB avail
                      64 active+clean
  8. Tear down the cluster

    To redeploy the ceph cluster from scratch, the following commands (run on tom-1) remove the ceph packages and wipe all generated data and configuration files:

    [root@tom-1 ceph-cluster]# ceph-deploy purge {ceph-node} [{ceph-node}]
    [root@tom-1 ceph-cluster]# ceph-deploy purgedata {ceph-node} [{ceph-node}]
    [root@tom-1 ceph-cluster]# ceph-deploy forgetkeys
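    For the two-node cluster in this document, that would be (destructive: all data on the OSDs is lost):

```shell
ceph-deploy purge tom-1 tom-2
ceph-deploy purgedata tom-1 tom-2
ceph-deploy forgetkeys
rm -f ceph.*    # optionally also clear the local deploy directory
```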

CEPH FILESYSTEM

Create MDS

On tom-1, run:

    ceph-deploy  mds create tom-1

Check that the MDS started:

    systemctl status ceph-mds@tom-1

Create pools

Tip : The ceph fs new command was introduced in Ceph 0.84. Prior to this release, no manual steps are required to create a filesystem, and pools named data and metadata exist by default.

The Ceph command line now includes commands for creating and removing filesystems, but at present only one filesystem may exist at a time.

A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata. When creating them, consider the following:

  • Any data loss in the metadata pool can render the whole filesystem unusable, so give this pool a higher replica count (pool size).
  • Back the metadata pool with fast devices such as SSDs; its latency directly affects the latency of client and filesystem operations.

Run the following commands to create the two pools:

ceph osd pool create walker_data 128
ceph osd pool create walker_metadata 128
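The value 128 follows the usual rule of thumb: total PGs ≈ (OSD count × 100) / replica size, rounded up to the next power of two. For this 2-OSD, size-2 cluster:

```shell
# Rule-of-thumb PG count: (osds * 100) / size, rounded up to a power of two
osds=2
size=2
pgs=$(( osds * 100 / size ))   # 100
pg_num=1
while [ "$pg_num" -lt "$pgs" ]; do pg_num=$(( pg_num * 2 )); done
echo "$pg_num"                 # prints 128
```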

Once the pools exist, enable the filesystem with fs new:

ceph fs new <fs_name> <metadata_pool> <data_pool>
ceph fs new walkerfs walker_metadata walker_data

Check the Ceph filesystem status with:

[root@tom-1 ceph-cluster]# ceph fs ls
name: walkerfs, metadata pool: walker_metadata, data pools: [walker_data ]
[root@tom-1 ceph-cluster]# ceph mds stat
e5: 1/1/1 up {0=tom-1=up:active}

Once the filesystem is up, you can use the Ceph filesystem in either of two ways:

  • Mount CephFS
  • Mount CephFS as FUSE
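For reference, the two approaches look roughly like this (a sketch: the monitor address is this cluster's, and /etc/ceph/admin.secret is a hypothetical file containing only the base64 key from ceph.client.admin.keyring):

```shell
mkdir -p /mnt/walkerfs

# 1) Kernel client
mount -t ceph 172.16.6.249:6789:/ /mnt/walkerfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

# 2) FUSE client (reads /etc/ceph/ceph.conf and the admin keyring automatically)
yum install -y ceph-fuse
ceph-fuse -m 172.16.6.249:6789 /mnt/walkerfs
```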

REFERENCE

http://docs.ceph.com/docs/master/start/quick-ceph-deploy/

http://docs.ceph.com/docs/master/start/quick-start-preflight/
