Ceph Installation -- Jewel Version

Software Environment

OS: CentOS 7.2

Ceph version: Jewel 10.2.1

Host    IP              Hostname
ceph1   192.168.13.212  bdc212
ceph2   192.168.13.213  bdc213
ceph3   192.168.13.214  bdc214

 

System Environment Configuration

1.1 Set the hostnames

Run the corresponding command on each node:

# hostnamectl set-hostname bdc212

# hostnamectl set-hostname bdc213

# hostnamectl set-hostname bdc214

1.2 Edit /etc/hosts

# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.13.212  bdc212

192.168.13.213  bdc213

192.168.13.214  bdc214

 

Note: this file must be edited on all nodes.

1.3 Configure passwordless SSH between nodes

Create the key pair:

# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

8f:8b:c3:a1:6b:59:e0:aa:7f:63:21:97:87:ab:24:a5 root@bdc212

The key's randomart image is:

+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|    .            |
|  .. +  S        |
| o. * +  o       |
|E .+ O .. .      |
| o. O o. .       |
|.oo=.o...        |
+-----------------+

 

First, enable passwordless login from the local machine to localhost:

# ssh-copy-id localhost

The authenticity of host 'localhost (::1)' can't be established.

ECDSA key fingerprint is a7:52:1c:33:45:12:fc:b9:fa:bd:61:35:7c:b8:51:3b.

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@localhost's password:

Number of key(s) added: 1

 

Now try logging into the machine, with:   "ssh 'localhost'"

and check to make sure that only the key(s) you wanted were added.

 

Copy the key directory to the other nodes (this shares one key pair across all nodes, so every node can reach every other without a password):

# scp -r ~/.ssh/ bdc213:~/

The authenticity of host 'bdc213 (192.168.13.213)' can't be established.

ECDSA key fingerprint is a6:4c:c8:13:07:59:c2:0f:5d:b2:c8:40:95:d2:d3:4b.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'bdc213,192.168.13.213' (ECDSA) to the list of known hosts.

root@bdc213's password:

known_hosts              100%  530     0.5KB/s   00:00   

id_rsa                    100% 1675     1.6KB/s   00:00   

id_rsa.pub                100%  393     0.4KB/s   00:00   

authorized_keys           100%  393     0.4KB/s   00:00   

# scp -r ~/.ssh/ bdc214:~/

1.4 Firewall configuration

By default, Ceph monitors communicate on port 6789, and OSDs communicate on ports in the 6800-7300 range.

If the firewall stays enabled, these ports must be opened:

# firewall-cmd --zone=public --add-port=6789/tcp --permanent

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   

success

Note: although the command reports success, the rule did not appear to take effect. To avoid firewall-related errors later, the firewall is simply disabled here.
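
A likely reason the rule appears inactive is that --permanent only writes the saved configuration; the running firewall picks it up only after a reload. A hedged check, if you prefer to keep the firewall on:

# firewall-cmd --reload
# firewall-cmd --zone=public --list-ports
6789/tcp 6800-7300/tcp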

Disable the firewall:

# systemctl stop firewalld

# systemctl disable firewalld

 

1.5 Configure time synchronization

If ntp is not yet installed, install it first:

# yum install ntp

 

Use bdc212 as the time server; all other nodes synchronize their time to it.

Edit /etc/ntp.conf on bdc212:

# vi /etc/ntp.conf

Add the following and comment out the other server entries:
restrict 192.168.13.0 mask 255.255.255.0 nomodify notrap

 

server  127.127.1.0

fudge 127.127.1.0 stratum 8

 

On the other nodes, edit /etc/ntp.conf:

# vi /etc/ntp.conf

Add the following and comment out the other server entries:
server 192.168.13.212

 

Restart ntpd on all machines and enable it at boot:

# for i in 2 3 4; do ssh bdc21${i} systemctl restart ntpd;done

# for i in 2 3 4; do ssh bdc21${i} systemctl enable ntpd;done

 

Check whether time synchronization succeeded:

# for i in 2 3 4; do ssh bdc21${i} ntpq -p;done 
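
In the ntpq -p output, a leading * marks the peer the node is actually synchronized to. On the client nodes the result should look roughly like this (values illustrative; the server was fudged to stratum 8 above, so it shows st 8):

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*bdc212          LOCAL(0)         8 u   36   64  377    0.142    0.025   0.011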

1.6 Disable SELinux

Default SELinux state:

# getenforce

Enforcing

Disable temporarily:

# setenforce 0

# getenforce

Permissive

Disable permanently:

Edit the config file and set SELINUX=disabled:

# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# cat /etc/selinux/config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of three two values:

#     targeted - Targeted processes are protected,

#     minimum - Modification of targeted policy. Only selected processes are protected.

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted

Check after the change (the nodes still report Permissive because the disabled setting only takes full effect after a reboot):

# for i in 2 3 4 ;do ssh bdc21${i} getenforce;done 

Permissive

Permissive

Permissive

1.7 Raise the maximum thread count

Hosts with many OSDs (e.g., more than 20) spawn large numbers of threads, especially during recovery and rebalancing. Many Linux kernels default to a relatively small maximum thread count (e.g., 32k). If you run into this limit, raise kernel.pid_max; the theoretical maximum is 4194303.

# cat /proc/sys/kernel/pid_max

65536

# vi /etc/sysctl.conf

Add:

kernel.pid_max = 4194303

Apply the setting:

# sysctl -p

kernel.pid_max = 4194303
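
kernel.pid_max is a per-host setting, so the /etc/sysctl.conf change must be made on every node. A quick check across the cluster, in the same loop style used above:

# for i in 2 3 4; do ssh bdc21${i} cat /proc/sys/kernel/pid_max; done
4194303
4194303
4194303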

 

1.8 Create a deployment user

Starting with the Infernalis release, the username "ceph" is reserved for the Ceph daemons; if a "ceph" user already exists on a node, it must be removed before upgrading. So create a dedicated deployment user instead; here we use testceph.

To create it:

As root, add the user:

useradd testceph

Set the password:

passwd testceph

Or as a single command:

echo ceph_password | passwd --stdin testceph

 

Here:

# useradd testceph

# echo hadoop | passwd --stdin testceph

 

Note: the testceph user also needs passwordless SSH between the nodes, configured the same way as above; a minimal sketch follows.
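
A minimal sketch, run as testceph on bdc212 (assuming the testceph user already exists on all nodes):

$ ssh-keygen                          # accept defaults, empty passphrase
$ ssh-copy-id testceph@localhost
$ scp -r ~/.ssh/ testceph@bdc213:~/
$ scp -r ~/.ssh/ testceph@bdc214:~/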

 

1.9 Grant the deployment user sudo privileges

The testceph user must also be given sudo privileges:

# echo "testceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/testceph

testceph ALL = (root) NOPASSWD:ALL

# sudo chmod 0440 /etc/sudoers.d/testceph

Then, as the testceph user, run:

$ sudo visudo

Change Defaults    requiretty to Defaults:testceph !requiretty (the exemption must name the deployment user created here, testceph).
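
A quick check that sudo now works over SSH without a tty, which is what ceph-deploy needs (run as testceph from any node):

$ ssh bdc212 sudo whoami
root

If requiretty were still in effect, this would fail with "sudo: sorry, you must have a tty to run sudo".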

1.10 Kernel upgrade

This step is optional.

Original kernel:

# uname -r

3.10.0-327.el7.x86_64

Upgrade the kernel by installing two packages.

Since the servers have no Internet access, the two packages were downloaded in advance on a separate VM:

# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

# yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y

Keeping the downloaded packages (yum keepcache) yields:

kernel-lt-4.4.7-1.el7.elrepo.x86_64.rpm  kernel-lt-devel-4.4.7-1.el7.elrepo.x86_64.rpm

Install the packages on the servers:

# rpm -ivh kernel-lt-devel-4.4.7-1.el7.elrepo.x86_64.rpm

# rpm -ivh kernel-lt-4.4.7-1.el7.elrepo.x86_64.rpm

Note: on a minimal install, this may complain that perl is required.

Set the default boot entry. The original kernel would be entry 1; the upgraded kernel is inserted in front of it as entry 0:

# grub2-set-default 0
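
To double-check which entry will boot (output illustrative; entry titles depend on the installed kernels):

# grub2-editenv list
saved_entry=0
# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (4.4.7-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-327.el7.x86_64) 7 (Core)
...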

Reboot:

# reboot

After the reboot, check the kernel version:

# uname -r

4.4.7-1.el7.elrepo.x86_64

1.11 Configure a local yum repository

Besides the local Ceph repository, a repository for the OS installation media is also needed to install some dependencies.

Create the repo file:

# cat /etc/yum.repos.d/ceph-local.repo   

[local_ceph]

name=local_ceph

baseurl=file:///opt/ceph-jewel/rpm-jewel

enabled=1

gpgcheck=0

Copy the packages:

Copy the Jewel Ceph packages, including all dependencies, into /opt/ceph-jewel/rpm-jewel.

Package location:

Link: http://pan.baidu.com/s/1eRXLpkA  Password: a3ys

Build the repository metadata:

# createrepo /opt/ceph-jewel/rpm-jewel

Disable the default system repositories:

In CentOS-Base.repo, add enabled=0 to every repository section so that none of them are used.
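
A sanity check that the local repository is picked up and the Jewel packages are visible:

# yum clean all
# yum repolist
# yum --disablerepo='*' --enablerepo=local_ceph list available | grep ceph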

Ceph Installation

Offline installation

2.1 Install ceph-deploy

# yum install ceph-deploy

2.2 Software installation

2.2.1 Create the cluster

# su - testceph

$ mkdir my-cluster

$ cd my-cluster/

$ ceph-deploy new bdc212

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/testceph/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.33): /bin/ceph-deploy new bdc212

[ceph_deploy.cli][INFO  ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None

[ceph_deploy.cli][INFO  ]  func                          :

[ceph_deploy.cli][INFO  ]  verbose                       : False

[ceph_deploy.cli][INFO  ]  overwrite_conf                : False

[ceph_deploy.cli][INFO  ]  quiet                         : False

[ceph_deploy.cli][INFO  ]  cd_conf                       :

[ceph_deploy.cli][INFO  ]  cluster                       : ceph

[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True

[ceph_deploy.cli][INFO  ]  mon                           : ['bdc212']

[ceph_deploy.cli][INFO  ]  public_network                : None

[ceph_deploy.cli][INFO  ]  ceph_conf                     : None

[ceph_deploy.cli][INFO  ]  cluster_network               : None

[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.cli][INFO  ]  fsid                          : None

[ceph_deploy.new][DEBUG ] Creating new cluster named ceph

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[bdc212][DEBUG ] connection detected need for sudo

[bdc212][DEBUG ] connected to host: bdc212

[bdc212][DEBUG ] detect platform information from remote host

[bdc212][DEBUG ] detect machine type

[bdc212][DEBUG ] find the location of an executable

[bdc212][INFO  ] Running command: sudo /usr/sbin/ip link show

[bdc212][INFO  ] Running command: sudo /usr/sbin/ip addr show

[bdc212][DEBUG ] IP addresses found: ['192.168.13.212', '192.168.8.212']

[ceph_deploy.new][DEBUG ] Resolving host bdc212

[ceph_deploy.new][DEBUG ] Monitor bdc212 at 192.168.13.212

[ceph_deploy.new][DEBUG ] Monitor initial members are ['bdc212']

[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.13.212']

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

$ ls

ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

2.2.2 Change the default replica count

Change the default number of replicas in the Ceph configuration file from 3 to 2, so the cluster can reach an active+clean state with only two OSDs. Add the following line to the [global] section:

$ vi ceph.conf

osd pool default size = 2
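
After the edit, ceph.conf should look roughly like this (the fsid and addresses are the ones generated for this deployment; yours will differ):

[global]
fsid = befd161e-ce9e-427d-9d9b-b683b744645c
mon_initial_members = bdc212
mon_host = 192.168.13.212
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2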

2.2.3 Install Ceph

Install from the local repository, on each machine in turn:

$ sudo yum -y install ceph ceph-radosgw

 

2.2.4 Initialize the monitor and gather the keys

$ ceph-deploy mon create-initial
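
If this succeeds, the working directory should now contain the gathered keyrings, roughly:

$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph.conf
ceph.bootstrap-osd.keyring  ceph.client.admin.keyring   ceph-deploy-ceph.log
ceph.mon.keyring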

2.3 Add OSDs

$ ceph-deploy osd prepare bdc212:/dev/sdd

$ ceph-deploy osd prepare bdc212:/dev/sde

$ ceph-deploy osd prepare bdc213:/dev/sdd

$ ceph-deploy osd prepare bdc213:/dev/sde

$ ceph-deploy osd prepare bdc214:/dev/sdd

$ ceph-deploy osd prepare bdc214:/dev/sde

Since each target is a whole disk, prepare partitions it into a data partition plus a journal partition, and the OSD is activated automatically (note the "active" state below). The resulting disk layout can be inspected on each machine:

# ceph-disk list

/dev/dm-0 other, xfs, mounted on /

/dev/dm-1 other, xfs, mounted on /home

/dev/dm-2 other, xfs, mounted on /var

/dev/dm-3 other, xfs, mounted on /data

/dev/sda :

 /dev/sda1 other, LVM2_member

/dev/sdb other, unknown

/dev/sdc other, unknown

/dev/sdd :

 /dev/sdd2 ceph journal, for /dev/sdd1

 /dev/sdd1 ceph data, active, cluster ceph, osd.0, journal /dev/sdd2

/dev/sde :

 /dev/sde2 ceph journal, for /dev/sde1

 /dev/sde1 ceph data, active, cluster ceph, osd.1, journal /dev/sde2

/dev/sdf :

 /dev/sdf2 other, LVM2_member

 /dev/sdf1 other, xfs, mounted on /boot
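
Once all six OSDs are in, a quick check from the admin node that they registered (tree output illustrative, abbreviated):

$ sudo ceph osd tree
ID WEIGHT   TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.79559 root default
-2  7.26520     host bdc212
 0  3.63260         osd.0        up  1.00000          1.00000
 1  3.63260         osd.1        up  1.00000          1.00000
...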

2.4 Add MONs

$ ceph-deploy mon add bdc213

$ ceph-deploy mon add bdc214

 

2.5 Check the cluster status

$ sudo ceph -s

    cluster befd161e-ce9e-427d-9d9b-b683b744645c

     health HEALTH_OK

     monmap e3: 3 mons at {bdc212=192.168.13.212:6789/0,bdc213=192.168.13.213:6789/0,bdc214=192.168.13.214:6789/0}

            election epoch 6, quorum 0,1,2 bdc212,bdc213,bdc214

     osdmap e68: 6 osds: 6 up, 6 in

            flags sortbitwise

      pgmap v204: 128 pgs, 1 pools, 0 bytes data, 0 objects

            208 MB used, 22310 GB / 22310 GB avail

                 128 active+clean

References:

http://www.centoscn.com/CentOS/config/2016/0119/6678.html

http://docs.ceph.org.cn/

Still learning...

Reposted from: https://my.oschina.net/xiaozhublog/blog/693007
