Ceph Nautilus Cluster Deployment

1. Virtual Machine Preparation

(1) Virtual Network Configuration

In VMware, configure VMnet1 as host-only mode with its subnet IP set to the 192.168.200.0 network, and configure VMnet8 as NAT mode with its subnet IP set to the 192.168.100.0 network, as shown in Figure 7-3.



  Figure 7-3 Virtual machine network configuration

(2) Virtual Machine Settings

Create three identical virtual machines and configure their devices as shown in Figure 7-4.



  Figure 7-4 Virtual machine device configuration

(3) System Settings

Install the CentOS-7-x86_64-DVD-1908 operating system on the first 20 GB disk of each VM. Set the hostnames of the three virtual machines to ceph-1, ceph-2, and ceph-3, and assign them the IP addresses 192.168.100.101, 192.168.100.102, and 192.168.100.103 respectively, with subnet mask 255.255.255.0, default gateway 192.168.100.2, and DNS server 192.168.100.2, so that all three VMs can access the Internet.
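As a minimal sketch, the hostname and static IP can be set with hostnamectl and nmcli; this assumes the NAT-attached NIC and its connection profile are both named ens33 (the actual name on your system may differ). ceph-1 is used as the example:

[root@localhost ~]# hostnamectl set-hostname ceph-1

[root@localhost ~]# nmcli connection modify ens33 ipv4.method manual ipv4.addresses 192.168.100.101/24 ipv4.gateway 192.168.100.2 ipv4.dns 192.168.100.2

[root@localhost ~]# nmcli connection up ens33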

2. Basic Environment Configuration

(1) Hosts File Configuration

Configure the hosts file on each of the three virtual machines; ceph-1 is used as the example here.

[root@ceph-1 ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.101 ceph-1

192.168.100.102 ceph-2

192.168.100.103 ceph-3
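To confirm that the entries work, each node should be able to reach the others by name, for example:

[root@ceph-1 ~]# ping -c 2 ceph-2

[root@ceph-1 ~]# ping -c 2 ceph-3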

(2) Create an RSA Key Pair

Choose one node as the control node (ceph-1 is used here) and set up passwordless login from the control node to the other nodes by uploading its public key to ceph-2 and ceph-3. Ceph can be installed on the control node as well.

Generate the key pair with ssh-keygen:

[root@ceph-1 ~]# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa): (press Enter)

Created directory '/root/.ssh'.

Enter passphrase (empty for no passphrase): (press Enter)

Enter same passphrase again: (press Enter)

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:b3AY2P2Atl2XecuhfR3GrGVfjD7B/yt5FQyNlJYomvQ root@ceph-1

The key's randomart image is:

+---[RSA 2048]----+

|            o.= |

|       + + . Bo* |

|      o O + o XO+|

|       + E + oO*B|

|        S o .oo+*|

|         +    .+|

|          o  . o|

|         .  o ..|

|              o. |

+----[SHA256]-----+

[root@ceph-1 ~]# ssh-copy-id root@ceph-2

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"

The authenticity of host 'ceph-2 (192.168.100.102)' can't be established.

ECDSA key fingerprint is SHA256:gmUzmidHWka66lieEFZZA50Ty0bX3mgcT0AtJUec0jE.

ECDSA key fingerprint is MD5:f6:d3:6d:5e:6e:8a:c8:53:4b:30:da:e9:2d:b2:62:6f.

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@ceph-2's password: (enter the root password of ceph-2)


Number of key(s) added: 1


Now try logging into the machine, with:   "ssh 'root@ceph-2'"

and check to make sure that only the key(s) you wanted were added.


[root@ceph-1 ~]# ssh-copy-id root@ceph-3

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"

The authenticity of host 'ceph-3 (192.168.100.103)' can't be established.

ECDSA key fingerprint is SHA256:kiRGSRYgxBjtduDcZ6kBOSSoO3X/5Ji25jrMjpFEc5M.

ECDSA key fingerprint is MD5:0d:75:f4:22:54:0d:ba:f0:a1:ec:6f:be:c7:23:0b:c4.

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@ceph-3's password: (enter the root password of ceph-3)


Number of key(s) added: 1


Now try logging into the machine, with:   "ssh 'root@ceph-3'"

and check to make sure that only the key(s) you wanted were added.
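Passwordless login can then be verified from the control node; each command below should print the remote hostname without prompting for a password:

[root@ceph-1 ~]# ssh root@ceph-2 hostname

[root@ceph-1 ~]# ssh root@ceph-3 hostname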

(3) Disable the Firewall

Stop and disable the firewall on all three virtual machines; ceph-1 is used as the example:

[root@ceph-1 ~]# systemctl stop firewalld

[root@ceph-1 ~]# systemctl disable firewalld

Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
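To confirm the firewall is really off, firewall-cmd should report "not running" and the unit should be disabled:

[root@ceph-1 ~]# firewall-cmd --state

[root@ceph-1 ~]# systemctl is-enabled firewalld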

(4) Configure SELinux

On each of the three virtual machines, set SELinux to permissive mode for the current session and disable it permanently; ceph-1 is used as the example.

Temporary change (effective until reboot):

[root@ceph-1 ~]# setenforce 0

To make the change persist across reboots, edit /etc/selinux/config:

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of three values:

#     targeted - Targeted processes are protected,

#     minimum - Modification of targeted policy. Only selected processes are protected.

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted
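getenforce can be used to check the result: it reports Permissive for the current session after setenforce 0, and Disabled after the next reboot:

[root@ceph-1 ~]# getenforce

Permissive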

(5) Configure YUM Repository Files

On each of the three virtual machines, move the existing repository configuration files out of the way; ceph-1 is used as the example:

[root@ceph-1 ~]# mkdir /opt/bak

[root@ceph-1 ~]# cd /etc/yum.repos.d/

[root@ceph-1 yum.repos.d]# mv * /opt/bak/

Copy CentOS7-Base-163.repo into /etc/yum.repos.d via SFTP:

[root@ceph-1 yum.repos.d]# ls

CentOS7-Base-163.repo

[root@ceph-1 yum.repos.d]# yum clean all

[root@ceph-1 yum.repos.d]# yum makecache
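yum repolist can then be used to confirm that the mirror repositories are active and the metadata cache was built:

[root@ceph-1 yum.repos.d]# yum repolist enabled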

(6) Install the NTP Service

On ceph-1, install the NTP server (chrony), edit the configuration file to allow access from 192.168.100.0/24, then enable and start the service.

[root@ceph-1 yum.repos.d]# yum -y install chrony

[root@ceph-1 yum.repos.d]# vi /etc/chrony.conf

Add the following line:

allow 192.168.100.0/24

[root@ceph-1 yum.repos.d]# systemctl enable chronyd.service

[root@ceph-1 yum.repos.d]# systemctl restart chronyd.service
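chronyc tracking shows whether ceph-1 itself is synchronized to an upstream source; the clients configured later can only sync from it once it has a valid reference:

[root@ceph-1 yum.repos.d]# chronyc tracking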

(7) Synchronize Time

Check the time synchronization sources:

[root@ceph-1 yum.repos.d]# chronyc sources -v

210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 185.216.231.25                2   6    63    56  -8607us[-8607us] +/-   95ms
^- 203.107.6.88                  2   6    17    61    -17ms[  -17ms] +/-   35ms
^- ntp1.flashdance.cx            2   6    17    61    -24ms[  -24ms] +/-  180ms
^* 119.28.206.193                2   6    17    62  -2422us[ +124ms] +/-   36ms

The source marked with '*' in the S column is the NTP server the service is currently synchronized to.

(8) Configure the NTP Clients

This step handles time synchronization between the Ceph nodes. Install the NTP service on all Ceph nodes to avoid failures caused by clock drift, make sure the service is started on every node, and make sure they all use the same NTP server. ceph-2 is used as the example here.

On ceph-2 and ceph-3, install chrony, edit the configuration file to use 192.168.100.101 (ceph-1) as the NTP server, then enable and start the service.

[root@ceph-2 yum.repos.d]# yum -y install chrony

[root@ceph-2 yum.repos.d]# vi /etc/chrony.conf

Modify the configuration, commenting out the default servers and pointing to ceph-1:

#server 0.centos.pool.ntp.org iburst

#server 1.centos.pool.ntp.org iburst

#server 2.centos.pool.ntp.org iburst

#server 3.centos.pool.ntp.org iburst

server ceph-1 iburst

[root@ceph-2 yum.repos.d]# systemctl enable chronyd.service

[root@ceph-2 yum.repos.d]# systemctl restart chronyd.service
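Optionally (not part of the original procedure), if a client clock is far off, the first correction can be forced immediately instead of waiting for gradual slewing:

[root@ceph-2 yum.repos.d]# chronyc makestep

200 OK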

(9) Check Time on the Other Nodes

Check the time synchronization source on the ceph-2 and ceph-3 nodes; ceph-2 is used as the example.

[root@ceph-2 yum.repos.d]# chronyc sources -v

210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* ceph-1                        3   6    77    62   -238us[-1562us] +/-   39ms

The node is now synchronized with ceph-1.

(10) Add the Ceph YUM Repository

On each of the three virtual machines, add the Ceph repository configuration file; the ceph-1 node is used as the example.

[root@ceph-1 yum.repos.d]# vi ceph.repo

[Ceph]

name=Ceph packages for $basearch

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/$basearch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1


[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1


[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/SRPMS

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1
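After saving the file, it is worth rebuilding the metadata cache and checking that the Ceph packages are visible; the version shown will depend on the mirror's current contents:

[root@ceph-1 yum.repos.d]# yum makecache

[root@ceph-1 yum.repos.d]# yum list ceph ceph-deploy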

3. Deploy the Ceph Cluster

(1) Install ceph-deploy

Install the ceph-deploy deployment tool on the ceph-1 node.

[root@ceph-1 ~]# yum -y install ceph-deploy

(2) Install python-setuptools

Install the dependency package required by the service on the ceph-1 node.

[root@ceph-1 ~]# yum -y install python-setuptools

(3) Create a New Cluster

Create the cluster and its monitor definition. This generates several files in the working directory (/opt/osd here), such as ceph.conf and ceph.mon.keyring.

[root@ceph-1 ~]# mkdir /opt/osd

[root@ceph-1 ~]# cd /opt/osd

[root@ceph-1 osd]# ceph-deploy new ceph-1

The new subcommand of ceph-deploy creates a new cluster definition with ceph-1 as the initial monitor node, generating the cluster configuration file and the monitor keyring. Listing the current working directory shows the ceph.conf and ceph.mon.keyring files.

[root@ceph-1 osd]# ll

total 12

-rw-r--r-- 1 root root  229 Sep 20 16:20 ceph.conf

-rw-r--r-- 1 root root 2960 Sep 20 16:20 ceph.log

-rw------- 1 root root   73 Sep 20 16:20 ceph.mon.keyring
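For reference, the generated ceph.conf looks roughly like the sketch below; the fsid is a randomly generated UUID, so yours will differ. The public_network line is an optional, commonly added setting (not generated by default) that pins Ceph to the NAT network on hosts with multiple NICs, as here:

[global]

fsid = <generated-uuid>

mon_initial_members = ceph-1

mon_host = 192.168.100.101

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

public_network = 192.168.100.0/24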

(4) Install deltarpm on All Three Nodes

[root@ceph-1 osd]# yum install -y deltarpm

[root@ceph-2 ~]# yum install -y deltarpm

[root@ceph-3 ~]# yum install -y deltarpm

(5) Install the Ceph Packages

Run the following command on ceph-1 to install the Nautilus binary packages on all nodes with the ceph-deploy tool.

[root@ceph-1 osd]# ceph-deploy install --release=nautilus ceph-1 ceph-2 ceph-3

(6) Deployment Initialization

Create the first Ceph monitor on ceph-1.

[root@ceph-1 osd]# ceph-deploy mon create-initial

After the monitors are created successfully, check the cluster status. Running ceph -s shows the current state: 3 mons, no OSDs yet, and one pool whose PG count is 64; at this point the Ceph cluster is not healthy.

[root@ceph-1 osd]# ceph -s

   cluster 4d7e1b04-2a4c-45aa-b6fe-a98241db0c2f

    health HEALTH_ERR

            no osds

    monmap e1: 3 mons at {ceph-1=192.168.100.101:6789/0,ceph-2=192.168.100.102:6789/0,ceph-3=192.168.100.103:6789/0}

            election epoch 4, quorum 0,1,2 ceph-1,ceph-2,ceph-3

    osdmap e1: 0 osds: 0 up, 0 in

            flags sortbitwise

      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects

            0 kB used, 0 kB / 0 kB avail

                  64 creating

(7) Configure the Admin Key

Copy the configuration file and admin keyring to the management node and all Ceph nodes, so that every node has the admin key.

[root@ceph-1 osd]# ceph-deploy admin ceph-1 ceph-2 ceph-3
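Each node now has /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring, so cluster commands work from any of them; a quick check from ceph-2:

[root@ceph-2 ~]# ceph health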

(8) Create the mgr

Create a manager daemon.

[root@ceph-1 osd]# ceph-deploy mgr create ceph-1

(9) Add OSDs

Log in to the ceph-1 node and create an OSD on each node.

[root@ceph-1 osd]# ceph-deploy osd create --data /dev/sdb ceph-1

[root@ceph-1 osd]# ceph-deploy osd create --data /dev/sdb ceph-2

[root@ceph-1 osd]# ceph-deploy osd create --data /dev/sdb ceph-3
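ceph osd tree can confirm that all three OSDs are up and placed under their respective hosts in the CRUSH map:

[root@ceph-1 osd]# ceph osd tree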

(10) Check the Ceph Cluster Status

The cluster status should now show HEALTH_OK.

[root@ceph-1 osd]# ceph -s

 cluster:

   id:     68ecba50-862d-482e-afe2-f95961ec3323

   health: HEALTH_OK


 services:

   mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 4m)

   mgr: ceph-1(active, since 3m)

   osd: 3 osds: 3 up (since 19s), 3 in (since 19s)


 data:

   pools:   0 pools, 0 pgs

   objects: 0 objects, 0 B

   usage:   3.0 GiB used, 294 GiB / 297 GiB avail

   pgs:
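As an optional smoke test (not part of the original procedure), create a small pool and store and read back one object with rados; note that deleting a pool afterwards additionally requires the mon_allow_pool_delete option to be enabled:

[root@ceph-1 osd]# ceph osd pool create test 64

[root@ceph-1 osd]# echo hello > /tmp/obj.txt

[root@ceph-1 osd]# rados -p test put obj-1 /tmp/obj.txt

[root@ceph-1 osd]# rados -p test get obj-1 /tmp/out.txt

[root@ceph-1 osd]# diff /tmp/obj.txt /tmp/out.txt && echo OK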
