I. Server environment configuration
Environment
| Hostname | Planned IP addresses | OS version |
| --- | --- | --- |
| s01-sa-cq | 172.16.30.10, 172.16.40.10 | CentOS Linux release 7.3.1611 |
| s02-sa-cq | 172.16.30.11, 172.16.40.11 | CentOS Linux release 7.3.1611 |
| s03-sa-cq | 172.16.30.12, 172.16.40.12 | CentOS Linux release 7.3.1611 |
Network configuration (all hosts)
Ceph is fairly demanding on network bandwidth, so the em1, em2 and em3 NICs on each server are aggregated into a team; em4 carries the CloudStack bridge.
Create the network team from em1, em2 and em3 with nmcli:
nmcli connection add type team ifname team0
nmcli con add type team-slave con-name em1 ifname em1 master team0
nmcli con add type team-slave con-name em2 ifname em2 master team0
nmcli con add type team-slave con-name em3 ifname em3 master team0
nmcli connection up em1
nmcli connection up em2
nmcli connection up em3
Append to /etc/sysconfig/network-scripts/ifcfg-team-team0:
BOOTPROTO=static
IPADDR=172.16.30.X
NETMASK=255.255.255.0
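nmcli creates the team with the default round-robin runner. If the switch ports are configured for 802.3ad, the LACP runner can be selected in the same ifcfg file; a sketch, assuming the switch side is actually set up for LACP (the runner and tx_hash values must match your network):

```
TEAM_CONFIG='{"runner": {"name": "lacp", "tx_hash": ["eth", "ipv4"]}}'
```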
Append to /etc/sysconfig/network-scripts/ifcfg-em4:
DEVICE=em4
ONBOOT=yes
IPADDR=172.16.40.X
GATEWAY=172.16.40.254
NETMASK=255.255.255.0
DNS1=114.114.114.114
Restart the network
/etc/init.d/network restart
Disable SELinux
setenforce 0
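Note that `setenforce 0` only lasts until the next reboot; to make it permanent, `SELINUX=` in /etc/selinux/config must be edited as well. A minimal sketch (demonstrated on a temporary copy so it runs anywhere; on a real host point sed at /etc/selinux/config itself):

```shell
# Persist SELinux permissive mode: setenforce 0 is runtime-only.
# Demo edits a temp copy; on a real host edit /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"   # → SELINUX=permissive
```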
II. Ceph deployment
Package distribution across hosts
| Hostname | Packages |
| --- | --- |
| s01-sa-cq | ceph-deploy, ceph-mon, ceph-osd |
| s02-sa-cq | ceph-mon, ceph-osd |
| s03-sa-cq | ceph-mon, ceph-osd |
Add hosts entries (all hosts)
172.16.30.10 s01-sa-cq
172.16.30.11 s02-sa-cq
172.16.30.12 s03-sa-cq
Create the ceph user (all hosts)
useradd -d /home/ceph -m ceph
passwd ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
Set up passwordless SSH logins (s01-sa-cq.juhefu.com)
Switch to the ceph user
ssh-keygen
ssh-copy-id ceph@s01-sa-cq
ssh-copy-id ceph@s02-sa-cq
ssh-copy-id ceph@s03-sa-cq
Add the ~/.ssh/config file (s01-sa-cq.juhefu.com)
Host s01-sa-cq
    Hostname s01-sa-cq
    User ceph
Host s02-sa-cq
    Hostname s02-sa-cq
    User ceph
Host s03-sa-cq
    Hostname s03-sa-cq
    User ceph
Add the ceph-deploy repository
/etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
Install EPEL
yum install epel-release -y
Install ceph-deploy
yum install ceph-deploy -y
Install ceph-mon (s01-sa-cq.juhefu.com)
Switch to the ceph user
mkdir my-cluster
cd my-cluster
ceph-deploy new s01-sa-cq s02-sa-cq s03-sa-cq
ceph-deploy install s01-sa-cq s02-sa-cq s03-sa-cq    # takes a long time; packages come from an overseas mirror
ceph-deploy mon create-initial
When it finishes, these keyrings should appear in the current directory:
-rw------- 1 ceph ceph 113 Aug 15 10:39 ceph.bootstrap-mds.keyring
-rw------- 1 ceph ceph 113 Aug 15 10:39 ceph.bootstrap-osd.keyring
-rw------- 1 ceph ceph 113 Aug 15 10:39 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph ceph 129 Aug 15 10:39 ceph.client.admin.keyring
The servers will now be listening on port 6789.
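A quick way to confirm the step succeeded is to check that all four keyrings exist. A sketch of such a check (`check_keyrings` is a hypothetical helper; the demo runs against a throwaway directory, on a real node point it at ~/my-cluster):

```shell
# Sketch: sanity-check that ceph-deploy produced all expected keyrings.
check_keyrings() {
  dir="$1"
  for k in ceph.bootstrap-mds ceph.bootstrap-osd ceph.bootstrap-rgw ceph.client.admin; do
    [ -f "$dir/$k.keyring" ] || { echo "missing: $k.keyring"; return 1; }
  done
  echo "all keyrings present"
}

# demo against a throwaway directory with empty placeholder files
demo=$(mktemp -d)
touch "$demo"/ceph.bootstrap-mds.keyring "$demo"/ceph.bootstrap-osd.keyring \
      "$demo"/ceph.bootstrap-rgw.keyring "$demo"/ceph.client.admin.keyring
check_keyrings "$demo"   # → all keyrings present
```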
Add OSDs (s01-sa-cq.juhefu.com)
Run on all hosts:
mkdir /data/ceph-osd/
chown ceph:ceph /data/ceph-osd/
Switch to the ceph user and cd into the my-cluster directory
ceph-deploy osd prepare s01-sa-cq:/data/ceph-osd s02-sa-cq:/data/ceph-osd s03-sa-cq:/data/ceph-osd
ceph-deploy osd activate s01-sa-cq:/data/ceph-osd s02-sa-cq:/data/ceph-osd s03-sa-cq:/data/ceph-osd
ceph-deploy admin s01-sa-cq s02-sa-cq s03-sa-cq
Check the cluster health
ceph health
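`ceph health` prints a single status word, which makes it easy to wire into cron or monitoring. A minimal sketch (`check_health` is a hypothetical helper; the demo uses canned strings, on a live cluster pass it `"$(ceph health)"`):

```shell
# Sketch: turn `ceph health` output into an exit code for monitoring/cron.
check_health() {
  case "$1" in
    HEALTH_OK*)   echo "cluster healthy"; return 0 ;;
    HEALTH_WARN*) echo "warning: $1";     return 1 ;;
    *)            echo "error: $1";       return 2 ;;
  esac
}

# demo with canned strings; on a live cluster: check_health "$(ceph health)"
check_health "HEALTH_OK"   # → cluster healthy
check_health "HEALTH_WARN clock skew detected on mon.s02-sa-cq" || true
```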
Issue 1:
If you hit HEALTH_WARN: Monitor clock skew detected, the clock offset between the servers is too large.
Fix
Install the ntpd service on s01-sa-cq
/etc/ntp.conf is configured as follows
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
restrict 172.16.30.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.cn.pool.ntp.org
server 1.cn.pool.ntp.org
server 2.cn.pool.ntp.org
server 3.cn.pool.ntp.org

#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor
Start the ntpd service
service ntpd restart
On s02-sa-cq and s03-sa-cq, sync once against s01-sa-cq:
/usr/sbin/ntpdate 172.16.30.10
Add it to cron:
*/30 * * * * /usr/sbin/ntpdate 172.16.30.10
Issue 2:
Running
ceph-deploy install s01-sa-cq s02-sa-cq s03-sa-cq
is painfully slow and even times out.
Fix:
Go to the repository at http://download.ceph.com/rpm-jewel/el7/x86_64/
and download the files by hand
-rw-r--r--. 1 root root     3032 Jul 14 03:56 ceph-10.2.9-0.el7.x86_64.rpm
-rw-r--r--. 1 root root  4386004 Jul 14 03:57 ceph-base-10.2.9-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 17334804 Jul 14 03:57 ceph-common-10.2.9-0.el7.x86_64.rpm
-rw-r--r--. 1 root root  2926812 Jul 14 03:56 ceph-mds-10.2.9-0.el7.x86_64.rpm
-rw-r--r--. 1 root root  2925320 Jul 14 03:56 ceph-mon-10.2.9-0.el7.x86_64.rpm
-rw-r--r--. 1 root root  9497684 Jul 14 03:57 ceph-osd-10.2.9-0.el7.x86_64.rpm
-rw-r--r--. 1 root root   271680 Jul 14 03:57 ceph-radosgw-10.2.9-0.el7.x86_64.rpm
-rw-r--r--. 1 root root    20476 Jul 14 03:56 ceph-selinux-10.2.9-0.el7.x86_64.rpm
A download manager such as Xunlei (Thunder) is reasonably fast.
Then copy the files to the servers by hand
and run the install:
yum install ./ceph-* -y
After that, rerun ceph-deploy install; once it detects the packages are already installed it finishes quickly.
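The file list above maps to predictable URLs, so the downloads can be scripted instead of clicked one by one. A sketch that just prints the URLs (pipe its output into `wget -i -` on a machine with decent bandwidth):

```shell
# Print the download URLs for the Ceph 10.2.9 (jewel) el7 packages listed above.
base=http://download.ceph.com/rpm-jewel/el7/x86_64
for p in ceph ceph-base ceph-common ceph-mds ceph-mon ceph-osd ceph-radosgw ceph-selinux; do
  echo "$base/${p}-10.2.9-0.el7.x86_64.rpm"
done
```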
III. CloudStack deployment
1. Package distribution
| Server | Packages |
| --- | --- |
| s01-sa-cq | cloudstack-management, cloudstack-agent |
| s02-sa-cq | cloudstack-agent |
| s03-sa-cq | cloudstack-agent |
2. Add the CloudStack repository
/etc/yum.repos.d/cloudstack.repo
[cloudstack]
name=cloudstack
baseurl=
enabled=1
gpgcheck=0
3. Management server installation
1) Install the MySQL database
(steps omitted)
2) Install cloudstack-management
yum install cloudstack-management -y
3) Initialize the database
cloudstack-setup-databases cloud:cloudpassword@localhost --deploy-as=root:rootpassword
Note: cloud / cloudpassword are the user and password to be created (no need to create them in advance); root / rootpassword is the database root account.
4) When initialization completes it prints
CloudStack has successfully initialized the database.
5) Initialize the CloudStack management node
cloudstack-setup-management
Note: if the Tomcat version is 7.x, pass the --tomcat7 flag when initializing the management node, e.g.:
cloudstack-setup-management --tomcat7
6) Start the management node
service cloudstack-management start
7) Import the KVM system VM template
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /data/cloudstack/secondary -u http://cloudstack.apt-get.eu/systemvm/4.10/systemvm64template-4.10-4.10.0.0-kvm.qcow2.bz2 -h kvm -F
Note: the download is slow, so fetch the template ahead of time; the -m argument is the template directory.
8) Configure NFS as secondary storage and the template directory
cat /etc/exports
/data/cloudstack/secondary *(rw,async,no_root_squash,no_subtree_check)
service nfs start    # start NFS
4. Agent installation
1) Install the packages
yum install cloudstack-agent kvm python-virtinst libvirt tunctl bridge-utils virt-manager qemu-kvm-tools virt-viewer virt-v2v libguestfs-tools -y
2) Edit the configuration files
/etc/libvirt/qemu.conf
vnc_listen = "0.0.0.0"
/etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0
/etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
/etc/cloudstack/agent/agent.properties
host=s01-sa-cq
3) Restart libvirtd
service libvirtd restart
4) Initialize cloudstack-agent
cloudstack-setup-agent
5. Add Ceph as CloudStack primary storage
1) Create the storage pool
ceph osd pool create cloudstack 64    # pg_num is required on jewel; 64 is a placeholder, size it to your OSD count
2) Create the Ceph client user
ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cloudstack'
ceph auth list
client.cloudstack
key: AQCkuwta3kWNKhAAyGiQJ+CMYjxIZOquOJiLcg==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cloudstack
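The value CloudStack needs is just the key: line from the output above. A small sketch that pulls it out of captured output (`extract_key` is a hypothetical helper; the demo parses canned text, and on a live cluster `ceph auth get-key client.cloudstack` prints the key directly):

```shell
# Sketch: extract the "key:" value from `ceph auth` output for pasting
# into the CloudStack "RADOS Secret" field.
extract_key() {
  awk '/^[[:space:]]*key:/ { print $2; exit }'
}

# demo on canned output; live: ceph auth get client.cloudstack | extract_key
printf 'client.cloudstack\n\tkey: AQCkuwta3kWNKhAAyGiQJ+CMYjxIZOquOJiLcg==\n' | extract_key
# → AQCkuwta3kWNKhAAyGiQJ+CMYjxIZOquOJiLcg==
```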
3) Add the primary storage in the CloudStack UI
Protocol: RBD
RADOS Monitor : 172.16.30.10
RADOS Pool : cloudstack
RADOS User: cloudstack
RADOS Secret: AQCkuwta3kWNKhAAyGiQJ+CMYjxIZOquOJiLcg==