Resolving a Metadata Server Deployment Problem with ceph-deploy in Ceph

After installing the mon and osd daemons, ceph -s reports a healthy cluster:

[ceph@mdsnode ceph]$ ceph status
    cluster 8587ec10-fe1a-41f5-9795-9d38ef20b493
     health HEALTH_OK
     monmap e1: 1 mons at {mdsnode=58.220.31.61:6789/0}
            election epoch 1, quorum 0 mdsnode
     osdmap e10: 3 osds: 2 up, 2 in
      pgmap v1109: 64 pgs, 1 pools, 0 bytes data, 0 objects
            47720 MB used, 426 GB / 498 GB avail
                  64 active+clean
[ceph@mdsnode ceph]$
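The MDS daemon itself had been deployed through ceph-deploy, presumably along the lines of the sketch below (the exact command line used originally is not recorded in this post):

[ceph@mdsnode ceph]$ ceph-deploy mds create mdsnode
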
However, after the mds daemon was started, ceph status still printed exactly the same output as above, with no mdsmap line at all.

Running ceph mds stat showed:

[ceph@mdsnode ceph]$ ceph mds stat
e1: 0/0/0 up

The monitor log kept repeating the following warning:

2015-09-01 10:05:54.764523 7f5acc99b700 1 mon.mdsnode@0(leader).mds e1 warning, MDS mds.? 58.220.31.60:6800/26175 up but filesystem disabled
2015-09-01 10:05:58.764697 7f5acc99b700 1 mon.mdsnode@0(leader).mds e1 warning, MDS mds.? 58.220.31.60:6800/26175 up but filesystem disabled
2015-09-01 10:06:02.764837 7f5acc99b700 1 mon.mdsnode@0(leader).mds e1 warning, MDS mds.? 58.220.31.60:6800/26175 up but filesystem disabled
2015-09-01 10:06:06.764912 7f5acc99b700 1 mon.mdsnode@0(leader).mds e1 warning, MDS mds.? 58.220.31.60:6800/26175 up but filesystem disabled
2015-09-01 10:06:10.765094 7f5acc99b700 1 mon.mdsnode@0(leader).mds e1 warning, MDS mds.? 58.220.31.60:6800/26175 up but filesystem disabled
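
The warning "up but filesystem disabled" means the MDS daemon has registered with the monitor, but there is no CephFS filesystem for it to serve: in these Ceph releases the data and metadata pools and the filesystem itself are no longer created automatically and must be created explicitly with ceph fs new. This can be confirmed with a quick check (not part of the original log; the exact wording of the output may vary by release):

[ceph@mdsnode ceph]$ ceph fs ls
No filesystems enabled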


With ceph-deploy offering no way around this, the remaining MDS setup had to be done by hand, creating the pools and the filesystem manually:
[ceph@mdsnode ceph]$ ceph osd pool create metadata 64 64
pool 'metadata' created
[ceph@mdsnode ceph]$ ceph osd pool create data 64 64
pool 'data' created
[ceph@mdsnode ceph]$ ceph fs new cephfs metadata data
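
Before touching the daemon, the new filesystem can be verified directly; the expected output below is sketched from memory and its exact formatting may differ between releases:

[ceph@mdsnode ceph]$ ceph fs ls
name: cephfs, metadata pool: metadata, data pools: [data ]

The pg_num/pgp_num value of 64 simply mirrors the existing pool; a larger cluster would normally use a higher value.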


After the filesystem was created, ceph -s confirmed that the corresponding entries were now present.


Then the MDS daemon was started via the service command:

[ceph@mdsnode ceph]$ sudo service ceph  start mds.mdsnode
=== mds.mdsnode === 
Starting Ceph mds.mdsnode on mdsnode...
starting mds.mdsnode at :/0
[ceph@mdsnode ceph]$ ceph -s
    cluster 8587ec10-fe1a-41f5-9795-9d38ef20b493
     health HEALTH_OK
     monmap e1: 1 mons at {mdsnode=58.220.31.61:6789/0}
            election epoch 1, quorum 0 mdsnode
     mdsmap e6: 1/1/1 up {0=mdsnode=up:active}
     osdmap e15: 3 osds: 2 up, 2 in
      pgmap v1449: 192 pgs, 3 pools, 1962 bytes data, 20 objects
            47737 MB used, 426 GB / 498 GB avail
                 192 active+clean
  client io 1544 B/s wr, 5 op/s



The mdsmap status now reads 1/1/1 up {0=mdsnode=up:active}, which confirms that the MDS server is running and active.
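
As a final check, the filesystem can be mounted from a client using the kernel CephFS driver. A minimal sketch, assuming a client host and mount point of your choosing and that the admin key has been saved to /etc/ceph/admin.secret on that client (none of this appears in the original post):

[ceph@client ~]$ sudo mkdir -p /mnt/cephfs
[ceph@client ~]$ sudo mount -t ceph 58.220.31.61:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
[ceph@client ~]$ df -h /mnt/cephfs

If the mount succeeds and df reports the cluster's capacity, the metadata server is serving the filesystem correctly.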
