Note!!! (This document only applies to a quick installation using the ceph-deploy tool.)
Prepare three hosts and set their IP addresses and hostnames.
Note!!! The IP addresses and hostnames are set when the virtual machines are created and the OS is installed.
192.168.XXX.XXX node1
192.168.XXX.XXX node2
192.168.XXX.XXX node3
Hostname (roles) | Disk 1 | Disk 2 |
---|---|---|
node1 (mon1, osd0, osd1) | sdb | sdc |
node2 (mon2, osd2, osd3) | sdb | sdc |
node3 (mon3, osd4, osd5) | sdb | sdc |
vi /etc/hosts
192.168.XXX.XXX node1
192.168.XXX.XXX node2
192.168.XXX.XXX node3
vi /etc/sysconfig/network-scripts/ifcfg-XXX
BOOTPROTO="static" ----更改项
IPADDR=192.168.XXX.XXX ----增加项
GATEWAY=192.168.XXX.2 ----增加项
NETMASK=255.255.255.0 ----增加项
DNS1=192.168.XXX.2 ----增加项
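For reference, the complete file might look like the sketch below; the interface name ens33 and the TYPE/NAME/DEVICE/ONBOOT lines are illustrative assumptions, and the 192.168.XXX values are placeholders to replace with your own:
TYPE="Ethernet"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR=192.168.XXX.XXX
GATEWAY=192.168.XXX.2
NETMASK=255.255.255.0
DNS1=192.168.XXX.2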
ping the hostnames of the other VMs
ping baidu.com
If both succeed, the network setup is working.
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
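The change to /etc/selinux/config takes effect after a reboot; to also stop SELinux enforcement for the current session (an optional extra step, not part of the original procedure), you can run:
sudo setenforce 0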
sudo yum install ntp ntpdate ntp-doc
ntpdate 0.cn.pool.ntp.org
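ntpdate performs only a one-off synchronization; to keep the clocks in sync after reboots you will usually also want to enable the ntpd service (assuming ntpd from the ntp package installed above is the time daemon you intend to use):
sudo systemctl enable ntpd
sudo systemctl start ntpd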
vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-(!!Ceph release!!)/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
For example, for the nautilus release (the following uses the Tsinghua TUNA mirror):
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/$basearch
enabled=1
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/noarch
enabled=1
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/SRPMS
enabled=1
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
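Optionally, verify that yum can see the new repositories before updating (standard yum commands, nothing Ceph-specific):
sudo yum repolist | grep -i ceph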
sudo yum update
sudo yum -y install ceph-deploy
sudo yum -y install openssh-server
Create a new user on each Ceph node.
ssh {node-name}
sudo useradd -d /home/{username} -m {username}
sudo passwd {username} # set a password for the new user
Example:
ssh node2
sudo useradd -d /home/oucephd -m oucephd
sudo passwd oucephd # set a password for the new user
Make sure the new user added to each Ceph node has passwordless sudo privileges:
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}
Example:
echo "oucephd ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/oucephd
sudo chmod 0440 /etc/sudoers.d/oucephd
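A quick way to confirm that passwordless sudo works for the new user (using the example username oucephd):
su - oucephd
sudo whoami    # should print "root" without asking for a password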
On the mon1 node, switch to the new user.
(You can perform the following steps on all three nodes if you think it is necessary.)
ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
# just keep pressing Enter to accept the defaults
Copy the generated public key to each node:
ssh-copy-id {username}@node1
ssh-copy-id {username}@node2
ssh-copy-id {username}@node3
Example:
ssh-copy-id oucephd@node1
ssh-copy-id oucephd@node2
ssh-copy-id oucephd@node3
Edit the ~/.ssh/config file on node1:
vi ~/.ssh/config
Host node1
Hostname node1
User {username}
Host node2
Hostname node2
User {username}
Host node3
Hostname node3
User {username}
Example:
Host node1
Hostname node1
User oucephd
Host node2
Hostname node2
User oucephd
Host node3
Hostname node3
User oucephd
sudo chmod 600 ~/.ssh/config
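From node1 you should now be able to reach the other nodes without typing a username or password; a quick check (assuming the example username oucephd):
ssh node2 whoami    # should print oucephd without prompting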
1. On the mon node, switch to the new user and create a working directory:
mkdir my-cluster
cd my-cluster
2. Create the cluster.
This also creates the monitor (mon) nodes; you can specify more than one mon node.
ceph-deploy new node1 node2 node3
Three files will be generated in the my-cluster directory.
vi my-cluster/ceph.conf
On the last line, add the default pool size (the default number of object replicas):
osd pool default size = 6
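For reference, after ceph-deploy new and the edit above, ceph.conf will look roughly like the sketch below; the fsid and the mon_host addresses are illustrative placeholders, not values to copy:
[global]
fsid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
mon_initial_members = node1, node2, node3
mon_host = 192.168.XXX.XXX,192.168.XXX.XXX,192.168.XXX.XXX
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 6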
3. Install the Ceph packages
ceph-deploy install node1 node2 node3
# this step takes a long time
!! If ceph --version reports an error, run yum -y remove ceph-release on the failing node.
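A quick way to check that the packages installed correctly on every node (a simple ssh loop over the example hostnames):
for h in node1 node2 node3; do ssh $h ceph --version; done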
4. Deploy the monitors (mon) and gather the keys
ceph-deploy mon create-initial
# after this completes, several *.keyring files will appear in the directory
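On recent releases the generated files typically include ceph.client.admin.keyring plus several bootstrap keyrings (the exact set varies by version):
ls *.keyring
# e.g. ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring
#      ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring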
If the Ceph release is Luminous (L) or newer, create the OSDs directly:
ceph-deploy osd create --data {device} {ceph-node}
Example:
ceph-deploy osd create --data /dev/sdb node1
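Following the disk layout in the table at the top (sdb and sdc on every node), and assuming those disks are empty and unpartitioned, the full set of commands would look like:
ceph-deploy osd create --data /dev/sdb node1
ceph-deploy osd create --data /dev/sdc node1
ceph-deploy osd create --data /dev/sdb node2
ceph-deploy osd create --data /dev/sdc node2
ceph-deploy osd create --data /dev/sdb node3
ceph-deploy osd create --data /dev/sdc node3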
Otherwise, prepare the OSDs first (directories are used in this example rather than new disks):
ssh node2
sudo mkdir /var/local/osd2
sudo chown -R ceph:ceph /var/local/osd2
exit
Do the same on node3.
Prepare the OSDs.
Using directories:
Example:
ceph-deploy osd prepare node2:/var/local/osd2 node2:/var/local/osd3 node3:/var/local/osd4 node3:/var/local/osd5
Using disks:
Example:
ceph-deploy osd prepare node2:/dev/sdc node2:/dev/sdb node3:/dev/sdc node3:/dev/sdb
Activate the OSDs.
Using directories:
Example:
ceph-deploy osd activate node2:/var/local/osd2 node2:/var/local/osd3 node3:/var/local/osd4 node3:/var/local/osd5
Using disks:
Example:
ceph-deploy osd activate node2:/dev/sdc node2:/dev/sdb node3:/dev/sdc node3:/dev/sdb
Copy the admin keyring and configuration to each node:
ceph-deploy admin node1 node2 node3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
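The chmod is only needed on the nodes where you intend to run the ceph CLI; to apply it on all three example nodes in one go (an optional convenience loop):
for h in node1 node2 node3; do ssh $h sudo chmod +r /etc/ceph/ceph.client.admin.keyring; done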
Installation complete.
Check the cluster status:
ceph -s
If the health status shows HEALTH_OK, the installation succeeded.
...
health HEALTH_OK
...