Setting up a Ceph environment on CentOS 7 (three nodes)

My environment is CentOS 7. All three machines already have their yum repositories set up; make sure they work before starting.

192.168.1.220 node1 (mon, ceph-deploy)
192.168.1.221 node2 (osd)
192.168.1.222 node3 (osd)

Disable SELinux on all three nodes

Edit /etc/selinux/config, set SELINUX to disabled, then reboot.
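A rough sketch of doing the same thing non-interactively (assuming the stock /etc/selinux/config layout); setenforce 0 avoids having to wait for the reboot to take effect:
sudo setenforce 0    # switch to permissive for the current session
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persist the change
getenforce    # verify: prints Permissive now, Disabled after reboot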

Create a user on each of the three nodes and grant it root privileges

The user created here is cent, with password cent:
sudo useradd -d /home/cent -m cent
sudo passwd cent
echo "cent ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cent
sudo chmod 0440 /etc/sudoers.d/cent
su cent (switch to the cent user; do not run ceph-deploy as root or via sudo. Important: if you are logged in as a different user, do not run ceph-deploy with sudo or root privileges, because it cannot issue sudo commands on the remote hosts)
sudo visudo (change "Defaults requiretty" to "Defaults:cent !requiretty")
sudo hostname node1 (the other two nodes become node2 and node3)
sudo yum install ntp ntpdate ntp-doc
sudo yum install openssh-server
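Keeping the clocks in sync matters for the monitors. A minimal sketch after installing ntp (pool.ntp.org is only an example server, use whatever your site provides):
sudo ntpdate -u pool.ntp.org    # one-off time sync
sudo systemctl enable ntpd
sudo systemctl start ntpd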

Edit the hosts file on the master node node1

Edit node1's hosts file and add the node entries:
vim /etc/hosts
192.168.1.220 node1
192.168.1.221 node2
192.168.1.222 node3
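A quick sanity check (assuming the hosts entries above) that the names resolve from node1:
ping -c 1 node2
ping -c 1 node3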

Configure the yum repository on node1

Create a ceph.repo file under /etc/yum.repos.d/ with the following content:
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

The above uses the 163 mirror; the Aliyun mirror can be used instead:
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
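After writing either repo file, refreshing the yum cache makes sure the new repository is actually picked up (a routine yum step, not Ceph-specific):
sudo yum clean all
sudo yum makecache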

Install ceph-deploy on node1

sudo yum install yum-plugin-priorities
sudo yum install ceph-deploy

Configure SSH on node1

ssh-keygen (press Enter at every prompt to accept the defaults)

ssh-copy-id cent@node1
ssh-copy-id cent@node2
ssh-copy-id cent@node3

vim ~/.ssh/config (create the config file and add the following content)

Host node1
Hostname node1
User cent
Host node2
Hostname node2
User cent
Host node3
Hostname node3
User cent

chmod 600 ~/.ssh/config (set the required permissions on the config file)
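Before running ceph-deploy it is worth confirming that passwordless SSH really works; each command below should print the remote hostname without prompting for a password:
ssh node2 hostname
ssh node3 hostname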

Create the cluster on node1

Note: the cluster built here (1 mon node, 2 osd nodes) has no dedicated admin node; the mon node is used directly as the admin node.
mkdir my-cluster
cd my-cluster
ceph-deploy new node1 (on success a ceph.conf file is generated)

vim ceph.conf (append the following at the end of the [global] section)
osd pool default size = 2
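For reference, after this edit the [global] section should look roughly like the sketch below; the fsid is taken from the ceph -s output later in this post, and the auth lines are the usual ceph-deploy defaults (your generated values may differ):
[global]
fsid = f2891898-aa3b-4bce-8bf1-668b8cf5b45a
mon_initial_members = node1
mon_host = 192.168.1.220
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2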

Install Ceph

On node1, run: ceph-deploy install node1 node2 node3
After the installation completes, run on node1: ceph-deploy mon create-initial
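If create-initial succeeds, the gathered keyrings land in the my-cluster directory; a quick way to confirm (the file names in the comment are what Jewel's ceph-deploy typically produces, not verified against this exact run):
ls my-cluster    # expect ceph.client.admin.keyring and ceph.bootstrap-{osd,mds,rgw}.keyring next to ceph.conf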

Add and activate the OSDs

ssh node2
sudo mkdir /var/local/osd0
sudo chmod -R 777 /var/local/osd0/
exit

ssh node3
sudo mkdir /var/local/osd1
sudo chmod -R 777 /var/local/osd1/
exit

ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

ceph-deploy admin node1 node2 node3

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Check the Ceph cluster status

[cent@node1 my-cluster]$ ceph -s
cluster f2891898-aa3b-4bce-8bf1-668b8cf5b45a
health HEALTH_OK
monmap e1: 1 mons at {node1=192.168.1.220:6789/0}
election epoch 3, quorum 0 node1
osdmap e10: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v225: 64 pgs, 1 pools, 0 bytes data, 0 objects
16205 MB used, 40039 MB / 56244 MB avail
64 active+clean

Problem:

Loaded plugins: langpacks, priorities, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
53 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:1.5.37-0 will be updated
---> Package ceph-deploy.noarch 0:1.5.38-0 will be an update
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-4.el7 will be installed
Removing python-setuptools.noarch 0:0.9.8-4.el7 - u due to obsoletes from installed python2-setuptools-22.0.5-1.el7.noarch
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-4.el7 will be installed
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-3.el7 will be installed
Removing python-setuptools.noarch 0:0.9.8-3.el7 - u due to obsoletes from installed python2-setuptools-22.0.5-1.el7.noarch
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-3.el7 will be installed
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.38-0.noarch (ceph-noarch)
Requires: python-distribute
You could try using --skip-broken to work around the problem
Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
ceph-deploy-1.5.37-0.noarch has missing requires of python-distribute
Solution:
wget http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm
rpm -Uvh ceph-deploy-1.5.39-0.noarch.rpm --nodeps
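A quick check that the forced upgrade took effect:
ceph-deploy --version    # should now report 1.5.39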

Basic operations:

ceph osd pool create cephpool_01 16 16    # create a pool cephpool_01 with pg_num 16 and pgp_num 16; if a pool already exists from the Ceph setup (for example data, metadata or rbd), you can use that instead of creating a new one
pool 'cephpool_01' created
ceph osd pool set cephpool_01 size 2    # this is the replica count; since we only have two OSDs, set it to 2
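To double-check that the pool exists and picked up the replica count, something like:
ceph osd lspools
ceph osd pool get cephpool_01 size    # should report size: 2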

  1. Write a file into the created pool
    put <obj-name> [infile]    write object
    vim ceph_test.txt    # file content: I am a student and from chd university!!!
    rados put object_01 /home/cent/ceph_test.txt -p cephpool_01    # create an object named object_01 in cephpool_01 and copy the local file into it

Note: to create an object object2 in the pool mypool without copying any file into it, the command is:
rados create object2 -p mypool
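Listing the pool contents is a quick way to confirm the objects are really there; rados ls and rados df are standard rados subcommands:
rados ls -p cephpool_01    # should list object_01
rados df    # per-pool object count and space usage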

  2. Check the PG map of object_01
    [cent@node1 ~]$ ceph osd map cephpool_01 object_01
    osdmap e19 pool 'cephpool_01' (1) object 'object_01' -> pg 1.376eea75 (1.5) -> up ([1,0], p1) acting ([1,0], p1)
    Where:
    osdmap e19 - the OSD map epoch
    pool 'cephpool_01' (1) - the pool name and ID
    object 'object_01' - the object name
    pg 1.376eea75 (1.5) - the PG number, i.e. 1.5
    up ([1,0], p1) - since we set 2 replicas, each PG is stored on 2 OSDs
    The stored object can be seen on node2 and node3, i.e. on both OSDs:
[cent@node2 1.5_head]$ pwd

/var/local/osd0/current/1.5_head

[cent@node2 1.5_head]$ ll
total 8
-rw-r--r-- 1 ceph ceph  0 Nov 27 20:26 __head_00000005__1
-rw-r--r-- 1 ceph ceph 42 Nov 27 21:32 object\u01__head_376EEA75__1
[cent@node2 1.5_head]$ 



[cent@node3 1.5_head]$ pwd 
/var/local/osd1/current/1.5_head
[cent@node3 1.5_head]$ ll
total 8
-rw-r--r-- 1 ceph ceph  0 Nov 27 20:26 __head_00000005__1
-rw-r--r-- 1 ceph ceph 42 Nov 27 21:32 object\u01__head_376EEA75__1
[cent@node3 1.5_head]$ 
$ ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.05359 root default                                     
-2 0.02679     host node2                                   
 0 0.02679         osd.0       up  1.00000          1.00000 
-3 0.02679     host node3                                   
 1 0.02679         osd.1       up  1.00000          1.00000 
[cent@node1 local]$ 

Use the get command instead of the put command
For example, copy the file we just uploaded back to the local machine and rename it file:
get <obj-name> [outfile]         fetch object
$ rados get object1 /home/liangwl/getfile -p cephpool


[cent@node1 local]$ rados get object_01 /tmp/file -p cephpool_01
[cent@node1 local]$ cd /tmp/
[cent@node1 tmp]$ ls
file
systemd-private-27b6c2f48ffc423fa461609e5e62a630-ceph-mon@node1.service-nssea4
[cent@node1 tmp]$ ll
total 4
-rw-r--r-- 1 cent cent 42 Nov 27 22:22 file
drwx------ 3 root root 16 Nov 27 19:27 systemd-private-27b6c2f48ffc423fa461609e5e62a630-ceph-mon@node1.service-nssea4
[cent@node1 tmp]$ vim file

I am a student and from chd university!!!
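Finally, to clean up the test data, the usual commands should be (pool deletion requires repeating the pool name plus the confirmation flag):
rados rm object_01 -p cephpool_01
ceph osd pool delete cephpool_01 cephpool_01 --yes-i-really-really-mean-it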
