I. Preparation:
First, prepare a ceph-deploy admin node and three Ceph nodes (or virtual machines); together they make up the Ceph storage cluster.
In the steps below, admin-node is the admin node, node1 is the monitor node, and node2 and node3 are the OSD nodes.
1. Install the package sources (all nodes):
# sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
# vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
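The repo file above can also be written in one shot from the shell; a minimal sketch that writes to a local ./ceph.repo path so it can be dry-run without root (on a real node the target is /etc/yum.repos.d/ceph.repo):

```shell
# Write the Ceph noarch repo definition; REPO defaults to a local file so
# this can be tried without root (real target: /etc/yum.repos.d/ceph.repo).
REPO="${REPO:-./ceph.repo}"
cat > "$REPO" <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF
```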
# sudo yum update && sudo yum install ceph-deploy
2. Install NTP (all nodes):
# yum install ntp ntpdate ntp-doc
# vim /etc/ntp.conf
Add:
server 202.120.2.101 iburst
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
# service ntpd restart
3. Install SSH (all nodes):
# yum install openssh-server
# service sshd restart
4. Create a user for deploying Ceph (all nodes):
# useradd -d /home/cephd -m cephd
Note: the username must not be ceph (newer Ceph releases reserve that account for the daemons themselves).
# passwd cephd
# echo "cephd ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephd
# sudo chmod 0440 /etc/sudoers.d/cephd
5. Enable passwordless SSH login (admin node):
# ssh-keygen
Output:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephd/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephd/.ssh/id_rsa.
Your public key has been saved in /home/cephd/.ssh/id_rsa.pub.
# ssh-copy-id cephd@node1
# ssh-copy-id cephd@node2
# ssh-copy-id cephd@node3
# vim ~/.ssh/config
Add:
Host node1
    Hostname node1
    User cephd
Host node2
    Hostname node2
    User cephd
Host node3
    Hostname node3
    User cephd
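The three stanzas above follow one pattern, so they can also be generated with a short loop; a sketch that writes to a local demo file so it is safe to dry-run (the real target is ~/.ssh/config):

```shell
# Append one Host block per node; CFG defaults to a local demo file so the
# sketch is safe to run as-is (real target: ~/.ssh/config).
CFG="${CFG:-./ssh_config_demo}"
for n in node1 node2 node3; do
  printf 'Host %s\n    Hostname %s\n    User cephd\n' "$n" "$n" >> "$CFG"
done
chmod 600 "$CFG"   # keep the file private, as ssh expects
```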
6. Disable the firewall (all nodes):
Alternatively, open port 6789 (monitor) and the 6800-7300 port range used by the OSDs (all nodes).
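If you keep the firewall rather than disabling it, the CentOS 7 default is firewalld; a hedged sketch of opening the ports on every node (assumes the public zone):

```shell
# Permanently open the monitor port and the OSD port range, then reload
# the firewall so the rules take effect.
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload
```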
7. If you hit errors (all nodes):
Running ceph-deploy on CentOS and RHEL may fail if your Ceph nodes have requiretty set by default. Run sudo visudo, find the Defaults requiretty line, and change it to Defaults:cephd !requiretty (or comment it out) so that ceph-deploy can connect as the deploy user created above.
# sudo visudo
Change Defaults requiretty to Defaults:cephd !requiretty
8. Install the priorities plugin (all nodes):
# yum install yum-plugin-priorities
II. Build the storage cluster:
1. Create a cluster directory (admin node):
# mkdir my-cluster
# cd my-cluster
2. Create the cluster (admin node):
Note: if you run into trouble at any point and want to start over, clear the configuration with:
# ceph-deploy purgedata {ceph-node} [{ceph-node}]
# ceph-deploy forgetkeys
To purge the Ceph packages as well:
# ceph-deploy purge {ceph-node} [{ceph-node}]
If you ran purge, you must reinstall Ceph.
# ceph-deploy new {initial-monitor-node(s)}
For example:
# ceph-deploy new node1
Check ceph-deploy's output in the current directory with ls and cat; you should see a Ceph configuration file, a monitor keyring, and a log file. See ceph-deploy new -h for details.
# vim ceph.conf
Add:
osd pool default size = 2    # lower the default replica count in the Ceph config from 3 to 2
public network = {ip-address}/{netmask}    # with multiple NICs, put public network under the [global] section
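The same two settings can be appended non-interactively; a sketch run from the cluster directory, where ceph-deploy keeps ceph.conf (the 192.168.1.0/24 network below is a made-up example value, not taken from this setup):

```shell
# Append the [global] overrides to ceph.conf in the current directory;
# the network below is a hypothetical placeholder value.
touch ceph.conf
cat >> ceph.conf <<'EOF'
osd pool default size = 2
public network = 192.168.1.0/24
EOF
```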
3. Install Ceph (admin node):
# cd my-cluster
# ceph-deploy install admin-node node1 node2 node3    # install Ceph on every node
Error: [ERROR ] RuntimeError: remote connection got closed, ensure ``requiretty`` is disabled for network
Fix: # sudo visudo and change Defaults requiretty to Defaults:cephd !requiretty
Error: [ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
Fix: # yum remove ceph-release
# ceph-deploy mon create-initial    # deploy the initial monitor and gather all keys
4. Create OSD data directories (OSD nodes):
# ssh node2
# sudo mkdir /var/local/osd0
# exit
# ssh node3
# sudo mkdir /var/local/osd1
# exit
5. Prepare and activate the OSDs (admin node):
# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1    # activate the OSDs
# ceph-deploy admin admin-node node1 node2 node3    # push the config and admin keyring to every node
# chmod +r /etc/ceph/ceph.client.admin.keyring
# ceph health    # check cluster health
HEALTH_OK
III. Expand the cluster:
Add an OSD on the monitor node:
# ssh node1
# sudo mkdir /var/local/osd2
# ssh admin-node
# ceph-deploy osd prepare node1:/var/local/osd2    # prepare the OSD
# ceph-deploy osd activate node1:/var/local/osd2    # activate the OSD
# ceph -w    # watch the cluster status
IV. Add a metadata server and other daemons:
1. Create a metadata server (admin node):
# ceph-deploy mds create node1
2. Add an RGW instance (admin node):
# ceph-deploy rgw create node1
# vim ceph.conf
Add:
[client]
rgw frontends = civetweb port=80
3. Add monitors (admin node):
# ceph-deploy mon add node2 node3    # add two more monitors
Error: [ceph_deploy.admin][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy][ERROR ] GenericError: Failed to configure 1 admin hosts
Fix: # ceph-deploy --overwrite-conf mon add node2 node3
# ceph quorum_status --format json-pretty    # check quorum status
V. Store and retrieve object data (admin node):
# echo {Test-data} > testfile.txt
# ceph osd pool create data 64 64    # create a pool named data (pg_num=64, pgp_num=64)
pool 'data' created
# rados put test-object-1 testfile.txt --pool=data    # store the object in the pool
# rados -p data ls    # list the objects in the pool
test-object-1
# ceph osd map data test-object-1    # locate the object
osdmap e19 pool 'data' (6) object 'test-object-1' -> pg 6.74dc35e2 (6.22) -> up ([1], p1) acting ([1], p1)
# rados rm test-object-1 --pool=data    # remove the object from the pool
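To clean up the test pool afterwards, ceph makes you repeat the pool name and pass a long safety flag; a sketch (on recent releases the monitors must additionally allow pool deletion, e.g. via mon_allow_pool_delete):

```shell
# Delete the test pool; the repeated name and the long flag are deliberate
# safety friction in the ceph CLI.
ceph osd pool delete data data --yes-i-really-really-mean-it
```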