IP address    | Role
58.220.31.60  | DeployNode, Client
58.220.31.61  | MdsNode, MonNode
58.220.31.63  | osdNode2
58.220.31.64  | osdNode3
d. SSH setup: make sure deployNode can log in to every other node without a password.
My other blog post covers the passwordless-SSH setup in detail.
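A minimal sketch of that setup, run as the ceph user on deploynode (the hostnames are this cluster's; adjust as needed):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa      # generate a key pair if one does not exist yet
for host in mdsnode osdnode2 osdnode3; do     # push the public key to every other node
    ssh-copy-id ceph@$host
done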
Create a ceph.repo file under /etc/yum.repos.d/ with the following content:
[ceph]
name=Ceph packages for $basearch
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-hammer/el6/$basearch
priority=1
gpgcheck=1
type=rpm-md

[ceph-source]
name=Ceph source packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-hammer/el6/SRPMS
priority=1
gpgcheck=1
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-hammer/el6/noarch
priority=1
gpgcheck=1
type=rpm-md
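With the repository in place, the Ceph packages can then be installed on each node; a minimal sketch using standard yum commands (not taken from the original post):
sudo yum makecache
sudo yum install -y ceph            # on every node
sudo yum install -y ceph-deploy     # on the deploy node, for the ceph-deploy steps used later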
1. Configure /etc/ceph/ceph.conf
[global]
fsid = 8587ec10-fe1a-41f5-9795-9d38ef20b493
mon_initial_members = mdsnode
mon_host = 58.220.31.61
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_journal_size = 10000
osd_pool_default_pg_num = 366
osd_pool_default_pgp_num = 366
public_network = 58.220.31.0/24
2. Generate the monitor keyring and a monitor secret key
sudo ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
3. Generate the administrator keyring, creating the client.admin user and adding it to the keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
4. Import the client.admin key into the monitor keyring
sudo ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
5. Generate a monitor map from the planned hostname, its IP address, and the FSID, and save it as /etc/ceph/monmap
monmaptool --create --generate -c /etc/ceph/ceph.conf /etc/ceph/monmap
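An equivalent, more explicit form (as in the upstream manual-deployment guide) passes the hostname, IP address, and fsid directly instead of reading them from ceph.conf; the values below come from the configuration above:
monmaptool --create --add mdsnode 58.220.31.61 --fsid 8587ec10-fe1a-41f5-9795-9d38ef20b493 /etc/ceph/monmap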
6. Assemble the initial data the monitor daemon needs, using the monitor map and the keyring
sudo ceph-mon --mkfs -i mdsnode --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
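If ceph-mon reports that its data directory is missing, create the default directory first; a hedged note assuming the default cluster name ceph and the /var/lib/ceph layout referenced by the cleanup script at the end of this post:
sudo mkdir -p /var/lib/ceph/mon/ceph-mdsnode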
7. Set read/write permissions on the keyring files under /etc/ceph/; otherwise starting the monitor fails with the error below:
[ceph@mdsnode ceph]$ sudo /etc/init.d/ceph start mon.mdsnode
=== mon.mdsnode ===
Starting Ceph mon.mdsnode on mdsnode...already running
[ceph@mdsnode ceph]$ ceph -s
2015-08-31 11:32:17.378858 7f543b014700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2015-08-31 11:32:17.378864 7f543b014700  0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound
After running sudo chmod 777 /etc/ceph/*, the error goes away.
sudo /etc/init.d/ceph start mon.mdsnode
1. Prepare the OSD data directories
On node osdnode2, create the /var/local/osd2 directory: sudo mkdir /var/local/osd2
On node osdnode3, create the /var/local/osd3 directory: sudo mkdir /var/local/osd3
2. Prepare the keyring files
Copy the configuration and keyring files from /etc/ceph on the monitor node to the corresponding directory on each OSD node.
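For example, from the monitor node (a sketch assuming the ceph user can SSH to the OSD nodes and write to /etc/ceph there):
scp /etc/ceph/ceph.conf /etc/ceph/*.keyring ceph@osdnode2:/etc/ceph/
scp /etc/ceph/ceph.conf /etc/ceph/*.keyring ceph@osdnode3:/etc/ceph/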
3. Prepare and activate the OSD daemons
ceph-deploy osd prepare osdnode2:/var/local/osd2 osdnode3:/var/local/osd3
ceph-deploy osd activate osdnode2:/var/local/osd2 osdnode3:/var/local/osd3
Note that no disks were partitioned and used here; the OSDs sit directly on file-system directories instead.
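Had whole disks been used instead, the same ceph-deploy commands would take a device rather than a directory; a sketch with hypothetical device names that are not part of this setup:
ceph-deploy osd prepare osdnode2:/dev/sdb osdnode3:/dev/sdb
ceph-deploy osd activate osdnode2:/dev/sdb1 osdnode3:/dev/sdb1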
The first attempt to mount CephFS with ceph-fuse failed with a connection timeout:
[ceph@deploynode mnt]$ sudo ceph-fuse -m 58.220.31.61:6789 /mnt/mycephfs/
ceph-fuse[30250]: starting ceph client
2015-09-01 14:33:50.695812 7f59778ce760 -1 init, newargv = 0x37fe9e0 newargc=11
ceph-fuse[30250]: ceph mount failed with (110) Connection timed out
ceph-fuse[30248]: mount failed: (110) Connection timed out
A later attempt succeeded, and df -h shows the CephFS mount:
[ceph@deploynode mycluster]$ sudo ceph-fuse -m 58.220.31.61:6789 /mnt/mycephfs
ceph-fuse[30854]: starting ceph client
2015-09-01 15:29:03.046430 7fc53c465760 -1 init, newargv = 0x33ada00 newargc=11
ceph-fuse[30854]: starting fuse
[ceph@deploynode mycluster]$ df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda3       250G   15G   223G    7%  /
tmpfs            32G   12K    32G    1%  /dev/shm
/dev/sda1       9.8G   62M   9.2G    1%  /boot
ceph-fuse       499G   72G   427G   15%  /mnt/mycephfs
While data is being written to the mount, the cluster log shows the pgmap updating:
2015-09-01 17:19:02.268177 mon.0 [INF] pgmap v3744: 192 pgs: 192 active+clean; 34189 MB data, 113 GB used, 359 GB / 498 GB avail; 15046 kB/s wr, 117 op/s
2015-09-01 17:19:03.313847 mon.0 [INF] pgmap v3745: 192 pgs: 192 active+clean; 34213 MB data, 114 GB used, 359 GB / 498 GB avail; 12841 kB/s wr, 152 op/s
2015-09-01 17:19:07.269050 mon.0 [INF] pgmap v3746: 192 pgs: 192 active+clean; 34249 MB data, 114 GB used, 359 GB / 498 GB avail; 12375 kB/s wr, 91 op/s
1. Ceph documentation in Chinese: http://mirrors.myccdn.info/ceph/doc/docs_zh/output/html/architecture/
2. Script to remove all Ceph-related data and configuration in one go:
service ceph -a stop
dirs=(/var/lib/ceph/bootstrap-mds/* /var/lib/ceph/bootstrap-osd/* /var/lib/ceph/mds/* \
      /var/lib/ceph/mon/* /var/lib/ceph/tmp/* /var/lib/ceph/osd/* /var/run/ceph/* /var/log/ceph/* /var/lib/ceph/*)
for d in ${dirs[@]}; do
    sudo rm -rf $d
    echo $d
done
3. During testing, when there are too many small files, rm -rf * sometimes cannot delete them; the following command works instead:
Here /tmp/empty/ is an empty directory created beforehand.
sudo rsync --delete-before -d /tmp/empty/ /mnt/mycephfs/