192.168.1.220 node1 (mon, ceph-deploy)
192.168.1.221 node2 (osd)
192.168.1.222 node3 (osd)
Edit /etc/selinux/config, set SELINUX=disabled, then reboot.
The user created here is cent, with password cent.
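The same change can also be made non-interactively; a minimal sketch, assuming the stock /etc/selinux/config layout:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
sudo setenforce 0   # switch to permissive mode for the current boot, so the reboot can be deferred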
sudo useradd -d /home/cent -m cent
sudo passwd cent
echo "cent ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cent
sudo chmod 0440 /etc/sudoers.d/cent
su cent (switch to the cent user; do not run ceph-deploy as root or via sudo. Important: if you are logged in as a different user, do not run ceph-deploy with sudo or as root, because it cannot issue sudo commands on the remote hosts)
sudo visudo (change Defaults requiretty to Defaults:cent !requiretty)
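An equivalent non-interactive edit, as a sketch (visudo, with its syntax checking, remains the safer route; the drop-in file name is illustrative):
echo 'Defaults:cent !requiretty' | sudo tee /etc/sudoers.d/cent-notty
sudo chmod 0440 /etc/sudoers.d/cent-notty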
sudo hostname node1 (the other two nodes become node2 and node3)
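Note that hostname only names the running session; on CentOS 7 a persistent rename would look like this (a sketch):
sudo hostnamectl set-hostname node1   # node2 / node3 on the other machines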
sudo yum install ntp ntpdate ntp-doc
sudo yum install openssh-server
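It may also help to enable time synchronization now, so the monitor stays within its clock-skew limits; a sketch:
sudo systemctl enable ntpd
sudo systemctl start ntpd
ntpq -p   # confirm that at least one peer is reachable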
Edit the hosts file on node1 and add the node entries:
vim /etc/hosts
192.168.1.220 node1
192.168.1.221 node2
192.168.1.222 node3
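A quick sanity check that the names resolve from node1 (a sketch):
for h in node1 node2 node3; do ping -c 1 "$h"; done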
Create a file named ceph.repo under /etc/yum.repos.d/ with the following content:
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
The above uses the 163 mirror; the Aliyun mirror can be used instead:
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
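Whichever repo file is used, refreshing the yum metadata confirms it is picked up; a sketch:
sudo yum clean all
sudo yum makecache
yum repolist | grep -i ceph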
sudo yum install yum-plugin-priorities
sudo yum install ceph-deploy
ssh-keygen (press Enter at every prompt to accept the defaults)
ssh-copy-id cent@node1
ssh-copy-id cent@node2
ssh-copy-id cent@node3
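A quick check that passwordless login works before handing the nodes to ceph-deploy (a sketch):
ssh cent@node2 hostname
ssh cent@node3 hostname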
vim ~/.ssh/config (create the config file and add the following)
Host node1
Hostname node1
User cent
Host node2
Hostname node2
User cent
Host node3
Hostname node3
User cent
chmod 600 ~/.ssh/config (restrict the config file's permissions)
Note: the cluster built here (1 mon node, 2 osd nodes) has no dedicated admin node; the mon node is used as the admin node directly.
mkdir my-cluster
cd my-cluster
ceph-deploy new node1 (on success, a ceph.conf is generated)
vim ceph.conf (append to the end of the [global] section)
osd pool default size = 2
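For reference, the resulting [global] section looks roughly like this; the fsid, monitor name and address come from your own ceph-deploy new output (a sketch, not a file to copy verbatim):
[global]
fsid = f2891898-aa3b-4bce-8bf1-668b8cf5b45a
mon_initial_members = node1
mon_host = 192.168.1.220
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2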
On node1, run: ceph-deploy install node1 node2 node3
After the installation finishes, run on node1: ceph-deploy mon create-initial
ssh node2
sudo mkdir /var/local/osd0
sudo chmod -R 777 /var/local/osd0/
exit
ssh node3
sudo mkdir /var/local/osd1
sudo chmod -R 777 /var/local/osd1/
exit
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy admin node1 node2 node3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[cent@node1 my-cluster]$ ceph -s
cluster f2891898-aa3b-4bce-8bf1-668b8cf5b45a
health HEALTH_OK
monmap e1: 1 mons at {node1=192.168.1.220:6789/0}
election epoch 3, quorum 0 node1
osdmap e10: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v225: 64 pgs, 1 pools, 0 bytes data, 0 objects
16205 MB used, 40039 MB / 56244 MB avail
64 active+clean
Problem encountered: updating ceph-deploy through yum fails because ceph-deploy-1.5.38 requires python-distribute, which conflicts with the already installed python2-setuptools:
Loaded plugins: langpacks, priorities, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
53 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:1.5.37-0 will be updated
---> Package ceph-deploy.noarch 0:1.5.38-0 will be an update
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-4.el7 will be installed
Removing python-setuptools.noarch 0:0.9.8-4.el7 - u due to obsoletes from installed python2-setuptools-22.0.5-1.el7.noarch
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-4.el7 will be installed
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-3.el7 will be installed
Removing python-setuptools.noarch 0:0.9.8-3.el7 - u due to obsoletes from installed python2-setuptools-22.0.5-1.el7.noarch
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-3.el7 will be installed
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.38-0.noarch (ceph-noarch)
Requires: python-distribute
You could try using --skip-broken to work around the problem
Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
ceph-deploy-1.5.37-0.noarch has missing requires of python-distribute
Resolution: install a newer ceph-deploy package directly from the upstream repository:
wget http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm
sudo rpm -Uvh ceph-deploy-1.5.39-0.noarch.rpm --nodeps
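A quick check that the newer ceph-deploy is now in place (a sketch):
ceph-deploy --version   # should report 1.5.39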
ceph osd pool create cephpool_01 16 16   # create a pool named cephpool_01 with 16 PGs and 16 PGPs; if a pool already exists from the Ceph setup (e.g. data, metadata, rbd), it can be used instead of creating a new one
pool 'cephpool_01' created
ceph osd pool set cephpool_01 size 2   # replica count; with only two OSDs, set it to 2
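Verifying the pool and its replica count (a sketch):
ceph osd lspools
ceph osd pool get cephpool_01 size   # should report: size: 2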
Note: to create an object named object2 in the pool mypool on its own, without copying in a file, the command is:
rados create object2 -p mypool
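The upload step itself (referred to later as "the file just uploaded") is not captured above; a sketch of it, using an illustrative test file whose 42 bytes match the object size seen in the OSD listings below:
echo 'I am a student and from chd university!!!' > /tmp/infile
rados put object_01 /tmp/infile -p cephpool_01
rados ls -p cephpool_01   # should list object_01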
[cent@node2 1.5_head]$ pwd
/var/local/osd0/current/1.5_head
[cent@node2 1.5_head]$ ll
total 8
-rw-r--r-- 1 ceph ceph 0 Nov 27 20:26 __head_00000005__1
-rw-r--r-- 1 ceph ceph 42 Nov 27 21:32 object\u01__head_376EEA75__1
[cent@node2 1.5_head]$
[cent@node3 1.5_head]$ pwd
/var/local/osd1/current/1.5_head
[cent@node3 1.5_head]$ ll
total 8
-rw-r--r-- 1 ceph ceph 0 Nov 27 20:26 __head_00000005__1
-rw-r--r-- 1 ceph ceph 42 Nov 27 21:32 object\u01__head_376EEA75__1
[cent@node3 1.5_head]$
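The object is stored once per replica; comparing checksums on node2 and node3 shows the two copies are identical (a sketch, run on each node with the file names from the listings above):
md5sum '/var/local/osd0/current/1.5_head/object\u01__head_376EEA75__1'   # on node2
md5sum '/var/local/osd1/current/1.5_head/object\u01__head_376EEA75__1'   # on node3; the sums should match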
$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05359 root default
-2 0.02679 host node2
0 0.02679 osd.0 up 1.00000 1.00000
-3 0.02679 host node3
1 0.02679 osd.1 up 1.00000 1.00000
[cent@node1 local]$
To download, use the get command in place of put.
For example, copy the file just uploaded back to the local machine and name it file:
get <obj-name> [outfile]   fetch object
$ rados get object1 /home/liangwl/getfile -p cephpool
[cent@node1 local]$ rados get object_01 /tmp/file -p cephpool_01
[cent@node1 local]$ cd /tmp/
[cent@node1 tmp]$ ls
file
systemd-private-27b6c2f48ffc423fa461609e5e62a630-ceph-mon@node1.service-nssea4
[cent@node1 tmp]$ ll
total 4
-rw-r--r-- 1 cent cent 42 Nov 27 22:22 file
drwx------ 3 root root 16 Nov 27 19:27 systemd-private-27b6c2f48ffc423fa461609e5e62a630-ceph-mon@node1.service-nssea4
[cent@node1 tmp]$ vim file
I am a student and from chd university!!!
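To confirm the downloaded copy against the stored object without opening an editor (a sketch):
rados -p cephpool_01 stat object_01   # prints the object's size and mtime
cat /tmp/file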