CentOS 7: Deploying an OpenStack Rocky High-Availability Cluster (Part 3/3)

Table of Contents

  • CentOS 7: Deploying an OpenStack Rocky High-Availability Cluster (Part 3/3)
    • 11. Horizon cluster
        • 11.1 Install the dashboard (on all controller nodes: cont01 cont02 cont03)
        • 11.2 Configure the local_settings file (on all controller nodes: cont01 cont02 cont03)
        • 11.3 Restart the httpd and memcached services (on all controller nodes: cont01 cont02 cont03)
        • 11.4 Supplementary notes: verifying instance networking (another instance used as the example)
        • 11.5 Supplementary notes: verifying instance ports (another instance used as the example)
    • 12. Cinder controller-node cluster
        • 12.1 Create the cinder database (on any controller node; the data replicates automatically)
        • 12.2 Create the cinder API resources (on any controller node)
        • 12.3 Install the cinder services (on all controller nodes)
        • 12.4 Configure cinder.conf
        • 12.5 Configure nova.conf (on all controller nodes and the storage node)
        • 12.6 Sync the cinder database (on any controller node)
        • 12.7 Restart nova and start the cinder services (all controller nodes; if compute nodes changed, restart nova there too)
        • 12.8 Verify
        • 12.9 Deploy cinder on the storage node (Install and configure a storage node)
        • **Instance HA (high availability)**
    • 13. Deploy the Ceph cluster
        • 13.1 Configure the Ceph yum repository (on all MON and OSD nodes; below m$c$ stands for mon01 comp01 comp02 comp03)
        • 13.2 Install ceph-deploy (on the admin server, here mon01)
        • 13.3 Install the Ceph packages (driven from the admin server, here mon01)
        • 13.4 Create the Ceph cluster
          • 13.4.1 Create mon & mgr
          • 13.4.2 Commands to tear down a failed cluster creation (not for normal use; they may cause other errors)
          • 13.4.3 Edit the cluster configuration file (optional)
          • 13.4.4 Deploy the initial monitor
          • 13.4.5 Create the Ceph keyrings
          • 13.4.6 Distribute the Ceph keyrings
          • 13.4.7 Create the Ceph mgr
        • 13.5 Add OSDs
          • 13.5.1 Add osd.0 (whole disk as block, no block.db, no block.wal)
          • 13.5.2 Add osd.1 (whole disk as block, no block.db, no block.wal)
        • 13.6 ceph-deploy command reference
        • 13.7 Troubleshooting

11. Horizon cluster

11.1 Install the dashboard (on all controller nodes: cont01 cont02 cont03)

yum install openstack-dashboard -y
cp -p /etc/openstack-dashboard/local_settings{,.bak}

11.2 Configure the local_settings file (on all controller nodes: cont01 cont02 cont03)

[root@cont02:/root]# vim /etc/openstack-dashboard/local_settings

 38 ALLOWED_HOSTS = ['horizon.example.com', 'localhost', '*']
 64 OPENSTACK_API_VERSIONS = {
 65 #    "data-processing": 1.1,
 66     "identity": 3,
 67     "image": 2,
 68     "volume": 2,
 69     "compute": 2,
 70 }

 75 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
 
 97 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

154 CACHES = {
155     'default': {
156         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
157         'LOCATION': 'VirtualIP:11211',
158     },
159 }
160 # comment out the original locmem CACHES block below:
161 #CACHES = {
162 #    'default': {
163 #        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
164 #    },
165 #}

184 OPENSTACK_HOST = "VirtualIP"
185 OPENSTACK_KEYSTONE_URL = "http://%s:5001/v3" % OPENSTACK_HOST
186 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

324 OPENSTACK_NEUTRON_NETWORK = {
325     'enable_router': True,
326     'enable_quotas': True,
327     'enable_ipv6': True,
328     'enable_distributed_router': False,
329     'enable_ha_router': False,
330     'enable_lb': True,
331     'enable_firewall': True,
332     'enable_vpn': True,
333     'enable_fip_topology_check': True,

467 TIME_ZONE = "Asia/Shanghai"

[root@cont02:/root]# scp /etc/openstack-dashboard/local_settings cont01:/etc/openstack-dashboard/

[root@cont02:/root]# scp /etc/openstack-dashboard/local_settings cont03:/etc/openstack-dashboard/

[root@cont02:/root]# vim  /etc/httpd/conf.d/openstack-dashboard.conf
## grant WSGI application group access: add WSGIApplicationGroup %{GLOBAL} after line 3
WSGIApplicationGroup %{GLOBAL}
[root@cont01:/root]# vim  /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
[root@cont03:/root]# vim  /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}


11.3 Restart the httpd and memcached services (on all controller nodes: cont01 cont02 cont03)

systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
Open the dashboard in a browser: http://192.168.10.20/dashboard
// Note: default account: admin, password: admin, domain: Default
// Note: before creating projects later on, the default role "user" must be created first
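
A quick way to confirm that every controller serves the dashboard behind the VIP is a curl probe from any node; this is only a sketch and assumes the VIP 192.168.10.20 and the /dashboard path used above, with httpd listening on port 80 on each controller.

# probe Horizon through the VIP and on each controller directly (expect HTTP 200 or a 30x redirect to the login page)
for host in 192.168.10.20 cont01 cont02 cont03; do
  curl -s -o /dev/null -w "$host -> HTTP %{http_code}\n" http://$host/dashboard/
done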


11.4 Supplementary notes: verifying instance networking (another instance used as the example)

[root@comp01:/root]# brctl show
bridge name     bridge id               STP enabled     interfaces
qbra3ca3e59-9b          8000.c62ff75153f3       no              qvba3ca3e59-9b
                                                     tapa3ca3e59-9b
[root@comp01:/root]# virsh list --all

Id    Name                           State
----------------------------------------------------

7     instance-00000023              running

[root@comp01:/root]# virsh edit instance-00000023

(The XML domain definition printed by virsh edit lost its tags when the article was exported; only text fragments remain. The recoverable details are: domain instance-00000023 with UUID 4189e9cb-89b6-455e-8317-31c553504aa2; Nova metadata showing instance name J.Fla, creation time 2020-02-15 14:36:50, flavor 1024 MB memory / 5 GB disk / 1 vCPU, owner admin/admin; sysinfo RDO OpenStack Compute 18.2.3-1.el7, Virtual Machine; machine type hvm; emulator /usr/libexec/qemu-kvm; lifecycle actions on_poweroff=destroy, on_reboot=restart, on_crash=destroy; plus a device UUID f05aa18d-c737-4ce6-bebc-e0170210c45e.)
[root@comp01:/root]# ovs-vsctl show
6d74e3ec-1418-48f4-8df2-cafe4c2196b0
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a84619"
            Interface "vxlan-c0a84619"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="192.168.70.27", out_key=flow, remote_ip="192.168.70.25"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a84618"
            Interface "vxlan-c0a84618"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="192.168.70.27", out_key=flow, remote_ip="192.168.70.24"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvoa3ca3e59-9b"
            tag: 6
            Interface "qvoa3ca3e59-9b"
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.11.0"
[root@cont03:/root]# ovs-vsctl show
303929a4-3e6e-4c8e-a0e2-b2456e76c2d4
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Port "ens38"
            Interface "ens38"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-fbf8520b-c8"
            Interface "qg-fbf8520b-c8"
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
        fail_mode: secure
        Port "vxlan-c0a8461b"
            Interface "vxlan-c0a8461b"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="192.168.70.26", out_key=flow, remote_ip="192.168.70.27"}
        Port "vxlan-c0a84618"
            Interface "vxlan-c0a84618"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="192.168.70.26", out_key=flow, remote_ip="192.168.70.24"}
        Port "vxlan-c0a84619"
            Interface "vxlan-c0a84619"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="192.168.70.26", out_key=flow, remote_ip="192.168.70.25"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-bd6d42a8-70"
            tag: 4095
            Interface "qr-bd6d42a8-70"
                type: internal
    ovs_version: "2.11.0"
[root@cont03:/root]# ip netns show
qrouter-277c18b1-2bf0-4844-9e6b-9c6cf95e2704 (id: 0)
[root@cont03:/root]# ip netns list
qrouter-277c18b1-2bf0-4844-9e6b-9c6cf95e2704 (id: 0)
[root@cont03:/root]# ip netns exec qrouter-277c18b1-2bf0-4844-9e6b-9c6cf95e2704 ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
29: qr-bd6d42a8-70: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:f0:22:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global qr-bd6d42a8-70
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef0:22a9/64 scope link
       valid_lft forever preferred_lft forever
30: qg-fbf8520b-c8: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:72:86:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.159.15/24 brd 192.168.159.255 scope global qg-fbf8520b-c8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe72:86e5/64 scope link
       valid_lft forever preferred_lft forever


11.5 Supplementary notes: verifying instance ports (another instance used as the example)

[root@cont03:/root]# neutron port-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------+----------------------------------+-------------------+-----------------------------------------------------------------------------------------+
| id                                   | name | tenant_id                        | mac_address       | fixed_ips                                                                               |
+--------------------------------------+------+----------------------------------+-------------------+-----------------------------------------------------------------------------------------+
| 1b23a444-5aa0-453a-aea3-1a607ebfd42b |      | 1e30ea22951a411688b1b9c9d648b8ac | fa:16:3e:17:1a:ac | {"subnet_id": "d9ccfe6a-d60c-47c1-8d1f-1fd83493350f", "ip_address": "192.168.70.4"}    |
| 1d661f10-31e8-454a-a373-e9149f108b31 |      | 1e30ea22951a411688b1b9c9d648b8ac | fa:16:3e:4c:41:e9 | {"subnet_id": "42f5da9f-d553-4300-adc9-193703657cd8", "ip_address": "192.168.159.128"} |
| 5a9eec70-2d5c-443d-981c-89c62428b280 |      |                                  | fa:16:3e:b5:4e:8b | {"subnet_id": "42f5da9f-d553-4300-adc9-193703657cd8", "ip_address": "192.168.159.132"} |
| 65d62104-aa3a-4e9c-94b5-91e49fb68509 |      | 1e30ea22951a411688b1b9c9d648b8ac | fa:16:3e:2f:ee:85 | {"subnet_id": "d9ccfe6a-d60c-47c1-8d1f-1fd83493350f", "ip_address": "192.168.70.5"}    |
| 99d60ed0-9cc2-4db8-b9e7-b915a17dbb5d |      | 1e30ea22951a411688b1b9c9d648b8ac | fa:16:3e:9a:73:6e | {"subnet_id": "d9ccfe6a-d60c-47c1-8d1f-1fd83493350f", "ip_address": "192.168.70.1"}    |
| feb7ba69-6fd5-45c9-bd15-64e6acf03f45 |      | 1e30ea22951a411688b1b9c9d648b8ac | fa:16:3e:3b:2e:57 | {"subnet_id": "d9ccfe6a-d60c-47c1-8d1f-1fd83493350f", "ip_address": "192.168.70.2"}    |
+--------------------------------------+------+----------------------------------+-------------------+-----------------------------------------------------------------------------------------+

[root@cont03:/root]# neutron port-show 99d60ed0-9cc2-4db8-b9e7-b915a17dbb5d
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+-----------------------+-------------------------------------------------------------------------------------+
| Field                 | Value                                                                               |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                |
| allowed_address_pairs |                                                                                     |
| binding:host_id       | cont03                                                                              |
| binding:profile       | {}                                                                                  |
| binding:vif_details   | {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}           |
| binding:vif_type      | ovs                                                                                 |
| binding:vnic_type     | normal                                                                              |
| created_at            | 2020-02-16T10:39:41Z                                                                |
| description           |                                                                                     |
| device_id             | b3c1252d-7b2d-477a-8efd-939143525d50                                                |
| device_owner          | network:router_interface                                                            |
| extra_dhcp_opts       |                                                                                     |
| fixed_ips             | {"subnet_id": "d9ccfe6a-d60c-47c1-8d1f-1fd83493350f", "ip_address": "192.168.70.1"} |
| id                    | 99d60ed0-9cc2-4db8-b9e7-b915a17dbb5d                                                |
| mac_address           | fa:16:3e:9a:73:6e                                                                   |
| name                  |                                                                                     |
| network_id            | 83a8e27a-bcb2-4e41-98b1-8c1f7988ff3b                                                |
| port_security_enabled | False                                                                               |
| project_id            | 1e30ea22951a411688b1b9c9d648b8ac                                                    |
| revision_number       | 13                                                                                  |
| security_groups       |                                                                                     |
| status                | ACTIVE                                                                              |
| tags                  |                                                                                     |
| tenant_id             | 1e30ea22951a411688b1b9c9d648b8ac                                                    |
| updated_at            | 2020-02-16T12:25:15Z                                                                |
+-----------------------+-------------------------------------------------------------------------------------+

[root@cont03:/root]# neutron port-list --fixed-ips ip_address=192.168.159.13
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------+-----------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | tenant_id | mac_address       | fixed_ips                                                                             |
+--------------------------------------+------+-----------+-------------------+----------------------------------------------------------------------------------------+
| bbbb080d-3e93-4c6e-bc63-36430278f743 |      |           | fa:16:3e:f2:68:85 | {"subnet_id": "77755aae-7576-4f4e-8dd6-e7422e9ffaec", "ip_address": "192.168.159.13"} |
+--------------------------------------+------+-----------+-------------------+----------------------------------------------------------------------------------------+
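
The neutron CLI itself warns that it is deprecated, so the same checks can be run with the openstack client; a rough equivalent of the three queries above (the port ID and IP are the ones from this environment):

openstack port list
openstack port show 99d60ed0-9cc2-4db8-b9e7-b915a17dbb5d
openstack port list --fixed-ip ip-address=192.168.159.13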

12. Cinder controller-node cluster

cinder-api: receives and responds to external block-storage API requests

cinder-volume: provides the storage space

cinder-scheduler: the scheduler; decides which cinder-volume backend will provide the requested space

cinder-backup: backs up volumes to a backup store

12.1 Create the cinder database (on any controller node; the data replicates automatically across the cluster)

[root@cont02:/root]# mysql -uroot -p"typora#2019"
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.011 sec)
MariaDB [(none)]>  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_typora';
Query OK, 0 rows affected (0.009 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'  IDENTIFIED BY 'CINDER_typora';
Query OK, 0 rows affected (0.010 sec)
MariaDB [(none)]> exit
Bye
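
Optionally, the grants can be verified from any controller by logging in as the cinder user through the load-balanced MariaDB endpoint; the VirtualIP:3307 address is an assumption carried over from the connection string used later in cinder.conf.

# should list the cinder database without an access-denied error
mysql -ucinder -p'CINDER_typora' -h VirtualIP -P 3307 -e "SHOW DATABASES;"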



12.2 Create the cinder API resources (on any controller node)

[root@cont02:/root]# . admin-openrc
[root@cont02:/root]# openstack user create --domain default --password=cinder_typora cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b551ae6e4af54eab8f9db6aa3d354a60 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@cont02:/root]# openstack role add --project service --user cinder admin
[root@cont02:/root]# openstack service create --name cinderv2  --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 1b577e852a184ec29dcc27f7a3eb535b |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@cont02:/root]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 2ec713d0ac31479db4f062b5bedec2d5 |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
[root@cont02:/root]# openstack endpoint create --region RegionOne volumev2 public http://VirtualIP:9776/v2/%\(project_id\)s
[root@cont02:/root]# openstack endpoint create --region RegionOne volumev2 internal http://VirtualIP:9776/v2/%\(project_id\)s
[root@cont02:/root]# openstack endpoint create --region RegionOne volumev2 admin http://VirtualIP:9776/v2/%\(project_id\)s
[root@cont02:/root]# openstack endpoint create --region RegionOne volumev3 public http://VirtualIP:9776/v3/%\(project_id\)s
[root@cont02:/root]# openstack endpoint create --region RegionOne volumev3 internal http://VirtualIP:9776/v3/%\(project_id\)s
[root@cont02:/root]# openstack endpoint create --region RegionOne volumev3 admin http://VirtualIP:9776/v3/%\(project_id\)s
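
A quick sanity check that the volumev2/volumev3 services and endpoints were registered as intended (a verification step added here, not part of the original run):

openstack service list | grep -E 'volumev2|volumev3'
openstack endpoint list --service volumev2
openstack endpoint list --service volumev3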



12.3 Install the cinder services (on all controller nodes)

[root@cont01:/root]#  yum install openstack-cinder -y
[root@cont02:/root]#  yum install openstack-cinder -y
[root@cont03:/root]#  yum install openstack-cinder -y

12.4 Configure cinder.conf

[root@cont01:/root]# cp -p /etc/cinder/cinder.conf{,.bak}
[root@cont02:/root]# cp -p /etc/cinder/cinder.conf{,.bak}
[root@cont03:/root]# cp -p /etc/cinder/cinder.conf{,.bak}

[root@cont02:/root]# vim /etc/cinder/cinder.conf

   1 [DEFAULT]
   2 
   3 #
   4 # From cinder
   5 #
   6 my_ip = 192.168.10.22
   7 state_path = /var/lib/cinder
   8 auth_strategy = keystone
   9 glance_api_servers = http://VirtualIP:9293
  10 enabled_backends = lvm
  11 #osapi_volume_listen = $my_ip
  12 #osapi_volume_listen_port = 8776
  13 log_dir = /var/log/cinder
  14 transport_url = rabbit://openstack:adminopenstack@cont01:5672,openstack:adminopenstack@cont02:5672,openstack:adminopenstack@cont03:5672

3725 [database]
3726 connection = mysql+pymysql://cinder:CINDER_typora@VirtualIP:3307/cinder

3986 [keystone_authtoken]
3987 www_authenticate_uri = http://VirtualIP:5001
3988 auth_url = http://VirtualIP:5001
3989 memcached_servers = cont01:11211,cont02:11211,cont03:11211
3990 auth_type = password
3991 project_domain_name = default
3992 user_domain_name = default
3993 project_name = service
3994 username = cinder
3995 password = cinder_typora

4286 [oslo_concurrency]
4287 lock_path = $state_path/tmp
4288 #lock_path = /var/lib/cinder/tmp

5255 [lvm]
5256 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
5257 volume_group = cinder-volumes
5258 volumes_dir = $state_path/volumes
5259 iscsi_protocol = iscsi
5260 iscsi_helper = lioadm
5261 iscsi_ip_address = 192.168.10.22
// the LVM backend is served from 192.168.10.22 here

[root@cont03:/root]# vim /etc/cinder/cinder.conf

   1 [DEFAULT]
   2 
   3 #
   4 # From cinder
   5 #
   6 my_ip = 192.168.10.23
   7 state_path = /var/lib/cinder
   8 auth_strategy = keystone
   9 glance_api_servers = http://VirtualIP:9293
  10 enabled_backends = lvm
  11 #osapi_volume_listen = $my_ip
  12 #osapi_volume_listen_port = 8776
  13 log_dir = /var/log/cinder
  14 transport_url = rabbit://openstack:adminopenstack@cont01:5672,openstack:adminopenstack@cont02:5672,openstack:adminopenstack@cont03:5672

3725 [database]
3726 connection = mysql+pymysql://cinder:CINDER_typora@VirtualIP:3307/cinder

3986 [keystone_authtoken]
3987 www_authenticate_uri = http://VirtualIP:5001
3988 auth_url = http://VirtualIP:5001
3989 memcached_servers = cont01:11211,cont02:11211,cont03:11211
3990 auth_type = password
3991 project_domain_name = default
3992 user_domain_name = default
3993 project_name = service
3994 username = cinder
3995 password = cinder_typora

4286 [oslo_concurrency]
4287 lock_path = $state_path/tmp
4288 #lock_path = /var/lib/cinder/tmp

#5255 [lvm]
#5256 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#5257 volume_group = cinder-volumes
#5258 volumes_dir = $state_path/volumes
#5259 iscsi_protocol = iscsi
#5260 iscsi_helper = lioadm
#5261 iscsi_ip_address = 192.168.124.22
// (the LVM backend would be served from 192.168.124.22 here)

[root@cont01:/root]# vim /etc/cinder/cinder.conf

   1 [DEFAULT]
   2 
   3 #
   4 # From cinder
   5 #
   6 my_ip = 192.168.10.21
   7 state_path = /var/lib/cinder
   8 auth_strategy = keystone
   9 glance_api_servers = http://VirtualIP:9293
  10 enabled_backends = lvm
  11 osapi_volume_listen = $my_ip
  12 osapi_volume_listen_port = 8776
  13 log_dir = /var/log/cinder
  14 transport_url = rabbit://openstack:adminopenstack@cont01:5672,openstack:adminopenstack@cont02:5672,openstack:adminopenstack@cont03:5672

3725 [database]
3726 connection = mysql+pymysql://cinder:CINDER_typora@VirtualIP:3307/cinder

3986 [keystone_authtoken]
3987 www_authenticate_uri = http://VirtualIP:5001
3988 auth_url = http://VirtualIP:5001
3989 memcached_servers = cont01:11211,cont02:11211,cont03:11211
3990 auth_type = password
3991 project_domain_name = default
3992 user_domain_name = default
3993 project_name = service
3994 username = cinder
3995 password = cinder_typora

4286 [oslo_concurrency]
4287 lock_path = $state_path/tmp
4288 #lock_path = /var/lib/cinder/tmp

#5255 [lvm]
#5256 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#5257 volume_group = cinder-volumes
#5258 volumes_dir = $state_path/volumes
#5259 iscsi_protocol = iscsi
#5260 iscsi_helper = lioadm
#5261 iscsi_ip_address = 192.168.10.22
// (the LVM backend would be served from 192.168.10.22 here)
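
Instead of hand-editing cinder.conf on every controller, the shared options can also be applied with openstack-config (from the openstack-utils package, a crudini wrapper); this is only a sketch under the assumption that openstack-utils is installed, and my_ip still has to be set per node.

# sketch: apply the common cinder.conf settings non-interactively (repeat on each controller)
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://VirtualIP:9293
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_typora@VirtualIP:3307/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://VirtualIP:5001
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://VirtualIP:5001
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path '$state_path/tmp'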


12.5 Configure nova.conf (on all controller nodes and the storage node)

[root@cont03:/root]# vim /etc/nova/nova.conf
 4184 [cinder]
 4185 os_region_name=RegionOne
 
 [root@cont02:/root]# vim /etc/nova/nova.conf
 4184 [cinder]
 4185 os_region_name=RegionOne
 
 [root@cont01:/root]# vim /etc/nova/nova.conf
 4184 [cinder]
 4185 os_region_name=RegionOne

12.6 Sync the cinder database (on any controller node)

[root@cont02:/root]# su -s /bin/sh -c "cinder-manage db sync" cinder

12.7 Restart nova and start the cinder services (on all controller nodes; if the compute nodes were changed, restart nova there as well)

systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
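
To run the restart on all three controllers in one pass, an ssh loop such as the following can be used (assuming passwordless ssh between the controllers):

for node in cont01 cont02 cont03; do
  ssh $node "systemctl restart openstack-nova-api.service; \
             systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service; \
             systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service"
done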


12.8 Verify

[root@cont03:/root]# cinder service-list
+------------------+--------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host   | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+--------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | cont01 | nova | enabled | up    | 2020-01-18T11:01:06.000000 | -               |
| cinder-scheduler | cont02 | nova | enabled | up    | 2020-01-18T11:00:54.000000 | -               |
| cinder-scheduler | cont03 | nova | enabled | up    | 2020-01-18T11:01:18.000000 | -               |
+------------------+--------+------+---------+-------+----------------------------+-----------------+
// list snapshots
[root@cont01:/root]# cinder snapshot-list
+----+-----------+--------+------+------+
| ID | Volume ID | Status | Name | Size |
+----+-----------+--------+------+------+
+----+-----------+--------+------+------+

12.9 Deploy cinder on the storage node (Install and configure a storage node)

Note: there is no dedicated storage node in this lab; the plan is to use a Ceph cluster as the primary storage later, so cont02 is used as the storage node here.

[root@cont02:/root]# yum install lvm2 device-mapper-persistent-data -y
[root@cont02:/root]# systemctl enable lvm2-lvmetad.service
[root@cont02:/root]# systemctl start lvm2-lvmetad.service
[root@cont02:/root]# systemctl status lvm2-lvmetad.service
[root@cont02:/root]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 931.5G  0 disk 
└─sda1            8:1    0 931.5G  0 part 
sdb               8:16   0 223.6G  0 disk 
├─sdb1            8:17   0     2M  0 part 
├─sdb2            8:18   0   8.6G  0 part /boot
└─sdb3            8:19   0   215G  0 part 
  ├─centos-root 253:0    0   180G  0 lvm  /
  └─centos-swap 253:1    0    35G  0 lvm  [SWAP]
[root@cont02:/root]# pvcreate /dev/sda1
WARNING: xfs signature detected on /dev/sda1 at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/sda1.
  Physical volume "/dev/sda1" successfully created.
[root@cont02:/root]# vgcreate cinder-volumes /dev/sda1
  Volume group "cinder-volumes" successfully created
[root@cont02:/root]# vgdisplay
  --- Volume group ---
  VG Name               centos
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               215.00 GiB
  PE Size               4.00 MiB
  Total PE              55041
  Alloc PE / Size       55040 / 215.00 GiB
  Free  PE / Size       1 / 4.00 MiB
  VG UUID               SqckVt-33fN-P9p6-7h87-02Jw-lKp2-rrqWTz
   
  --- Volume group ---
  VG Name               cinder-volumes
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <931.51 GiB
  PE Size               4.00 MiB
  Total PE              238466
  Alloc PE / Size       0 / 0   
  Free  PE / Size       238466 / <931.51 GiB
  VG UUID               RLDRg5-2QsP-YScs-LMRu-WUwR-wVBF-b4pHfs
[root@cont02:/root]# yum install openstack-cinder targetcli python-keystone -y
[root@cont02:/root]# vim /etc/lvm/lvm.conf
 329         filter = [ "a/sda1/", "r/.*/"]
[root@cont02:/root]# vim /etc/cinder/cinder.conf
   1 [DEFAULT]
   2 
   3 #
   4 # From cinder
   5 #
   6 my_ip = 192.168.10.22
   7 state_path = /var/lib/cinder
   8 auth_strategy = keystone
   9 glance_api_servers = http://VirtualIP:9293
  10 enabled_backends = lvm
  11 osapi_volume_listen = $my_ip
  12 osapi_volume_listen_port = 8776
  13 log_dir = /var/log/cinder
  14 transport_url = rabbit://openstack:adminopenstack@cont01:5672,openstack:adminopenstack@cont02:5672,openstack:adminopenstack@cont03:5672

3725 [database]
3726 connection = mysql+pymysql://cinder:CINDER_typora@VirtualIP:3307/cinder

3986 [keystone_authtoken]
3987 www_authenticate_uri = http://VirtualIP:5001
3988 auth_url = http://VirtualIP:5001
3989 memcached_servers = cont01:11211,cont02:11211,cont03:11211
3990 auth_type = password
3991 project_domain_name = default
3992 user_domain_name = default
3993 project_name = service
3994 username = cinder
3995 password = cinder_typora

4286 [oslo_concurrency]
4287 lock_path = $state_path/tmp

5255 [lvm]
5256 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
5257 volume_group = cinder-volumes
5258 volumes_dir = $state_path/volumes
5259 iscsi_protocol = iscsi
5260 iscsi_helper = lioadm
5261 iscsi_ip_address = 192.168.124.22

[root@cont02:/root]# systemctl enable openstack-cinder-volume.service target.service
[root@cont02:/root]# systemctl start openstack-cinder-volume.service target.service
[root@cont02:/root]# systemctl status openstack-cinder-volume.service target.service
[root@cont02:/root]# source openrc 
[root@cont02:/root]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | cont01     | nova | enabled | up    | 2020-01-19T01:50:30.000000 | -               |
| cinder-scheduler | cont02     | nova | enabled | up    | 2020-01-19T01:50:33.000000 | -               |
| cinder-scheduler | cont03     | nova | enabled | up    | 2020-01-19T01:50:30.000000 | -               |
| cinder-volume    | cont02@lvm | nova | enabled | up    | 2020-01-19T01:50:15.000000 | -               |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
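
With cinder-volume up on cont02@lvm, an end-to-end test is to create a small volume on the LVM backend; a minimal sketch (the volume name test-vol is arbitrary):

openstack volume create --size 1 test-vol
openstack volume list                                # test-vol should reach the "available" status
cinder show test-vol | grep os-vol-host-attr:host    # should point at cont02@lvm
openstack volume delete test-vol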

Common health checks
source openrc 
openstack compute service list
openstack network agent list
openstack volume service list
systemctl status rabbitmq-server
rabbitmqctl cluster_status
systemctl status mariadb
systemctl status httpd
systemctl status memcached
nova list
cinder list
neutron l3-agent-list-hosting-router router-id


Instance HA (high availability)

Compute nodes are where the virtual machines run, so the question is whether a VM can keep running after the compute node hosting it goes down; this is the instance high-availability problem. Ideally, when a compute node fails, the instances on it are automatically migrated to other compute nodes and keep running. That requires some preconditions, for example that instance storage lives on shared storage rather than on local disks. Two approaches can be considered (not deployed here):

(1) Have the controller nodes check each compute node's management, storage and tenant networks, apply a policy that decides which network failures mark the node as down, and then use the nova evacuate function (see the sketch after these two options) to bring that node's instances up on other available compute nodes.

(2) Run a gossip pool on each of the management, storage and tenant networks; compute nodes check connectivity to each other through all three pools, and any node flagged as problematic in a pool is reported up to the controller nodes.
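
For reference, the manual form of the evacuation that approach (1) would automate looks roughly like this; the server name demo-vm and the target host comp02 are placeholders, and the instances must be on shared storage for their disks to survive:

openstack compute service set --disable comp01 nova-compute   # fence the failed node first
nova host-evacuate --target_host comp02 comp01                # rebuild all instances from comp01 on comp02
nova evacuate demo-vm comp02                                  # or evacuate a single instance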

13. Deploy the Ceph cluster

13.1 Configure the Ceph yum repository (on all MON and OSD nodes; below m$c$ stands for mon01 comp01 comp02 comp03)


[root@m$c$:/root]# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[root@m$c$:/root]# yum clean all && yum repolist
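
To avoid editing the repo file four times, it can be written once on mon01 and copied to the other nodes; a sketch assuming passwordless ssh/scp between the nodes:

for node in comp01 comp02 comp03; do
  scp /etc/yum.repos.d/ceph.repo $node:/etc/yum.repos.d/
  ssh $node "yum clean all && yum repolist"
done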


13.2 Install ceph-deploy (on the admin server, here mon01)

# install the ceph-deploy tool on the node(s) planned as the deploy/admin node
[root@mon01:/root]# yum -y install ceph-deploy
Note: if this errors out, install python-setuptools first: yum install python-setuptools
[root@mon01:/root]# ceph-deploy --version
2.0.1



13.3 Install the Ceph packages (driven from the admin server, here mon01)

Note: install deltarpm on all servers (yum install -y deltarpm)
[root@m$c$:/root]#  yum install -y deltarpm
[root@mon01:/root]# ceph-deploy install --release=luminous mon01 comp01 comp02 comp03
[root@mon01:/root]# ceph -v
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
[root@mon01:/root]# rpm -qa | grep ceph
ceph-selinux-12.2.12-0.el7.x86_64
ceph-deploy-2.0.1-0.noarch
ceph-12.2.12-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-common-12.2.12-0.el7.x86_64
ceph-osd-12.2.12-0.el7.x86_64
ceph-radosgw-12.2.12-0.el7.x86_64
centos-release-ceph-luminous-1.1-2.el7.centos.noarch
libcephfs2-12.2.12-0.el7.x86_64
ceph-base-12.2.12-0.el7.x86_64
ceph-mon-12.2.12-0.el7.x86_64
ceph-mds-12.2.12-0.el7.x86_64
python-cephfs-12.2.12-0.el7.x86_64
ceph-mgr-12.2.12-0.el7.x86_64


13.4 Create the Ceph cluster

13.4.1 Create mon & mgr
## create the cluster with mon01 as the initial monitor
[root@mon01:/root]# mkdir -p /etc/ceph && cd /etc/ceph
[root@mon01:/etc/ceph]#  ceph-deploy new mon01
[root@mon01:/etc/ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring


13.4.2 Commands to tear down a failed cluster creation (not for normal use; running them may cause other errors)
// purge: uninstall the Ceph packages from the remote hosts
[root@mon01:/etc/cephcluster]# ceph-deploy purge mon01 comp01 comp02 comp03
// purgedata: deletes the data under /var/lib/ceph and also removes the contents of /etc/ceph
[root@mon01:/etc/cephcluster]# ceph-deploy purgedata mon01 comp01 comp02 comp03
// forgetkeys: deletes all authentication keyrings in the local directory, including client.admin, monitor and the bootstrap series
[root@mon01:/etc/cephcluster]# ceph-deploy forgetkeys
Note: if the deployment fails and you need to start from scratch, run the cleanup above and remove the generated files before redeploying: rm cephcluster.*


13.4.3 Edit the cluster configuration file (optional)
[root@mon01:/etc/cephcluster]# vim /etc/ceph/ceph.conf
[global]
fsid = 513c642c-3561-46f7-9a6e-e8a4fc82ee06
mon_initial_members = mon01
mon_host = 192.168.10.24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# the default replica count is 3; set it to 2 here
osd_pool_default_size = 2
# public network: the front-side mon/client network; make sure it is in the same subnet as mon_host, otherwise initialization may fail
## a Ceph cluster uses two networks: the public network and the cluster network. The former serves clients; the latter carries internal traffic such as data migration between OSDs. Heartbeats run on both networks.
## when configuring hostname resolution, resolve the hostnames to the public-network addresses: ceph-deploy operates on the cluster as a client, and the monitors run on the public network. Every Ceph client needs to reach the monitors, which would be impossible if they ran on the cluster network.
# cluster network: the back-side network for OSD heartbeats, replication and recovery traffic
public network = 192.168.10.0/24
cluster network = 192.168.7.0/24
# by default pools cannot be deleted; enable this if needed
#mon_allow_pool_delete = true

[root@mon01:/etc/ceph]# vim /etc/ceph/ceph.mon.keyring
##ceph.mon.keyring is the monitor keyring: it defines the monitor's key and its capabilities
[mon.]
key = AQAJpypeAAAAABAAHW+AwsGqydCrxpF+LV/OMA==
caps mon = allow *

13.4.4 Deploy the initial monitor
[root@mon01:/etc/ceph]# ceph-deploy mon create mon01
//verify that the monitor is running
[root@mon01:/etc/ceph]# ps -ef | grep ceph
ceph       21253       1  0 16:36 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id cont01 --setuser ceph --setgroup ceph
root       21446    2492  0 16:36 pts/0    00:00:00 grep --color=auto ceph
//verify that the monitor listens on the public network
[root@mon01:/etc/ceph]# netstat -anpl | grep 6789 | grep LISTEN
tcp        0      0 192.168.10.24:6789       0.0.0.0:*               LISTEN      21253/ceph-mon


13.4.5 Create the Ceph keyrings
[root@mon01:/etc/ceph]# ceph-deploy gatherkeys mon01
[root@mon01:/etc/ceph]# ll
total 44
-rw-------. 1 root root    71 Jan 24 16:42 ceph.bootstrap-mds.keyring
-rw-------. 1 root root    71 Jan 24 16:42 ceph.bootstrap-mgr.keyring
-rw-------. 1 root root    71 Jan 24 16:42 ceph.bootstrap-osd.keyring
-rw-------. 1 root root    71 Jan 24 16:42 ceph.bootstrap-rgw.keyring
-rw-------. 1 root root    63 Jan 24 16:42 ceph.client.admin.keyring
-rw-r--r--. 1 root root   287 Jan 24 16:36 ceph.conf
-rw-r--r--. 1 root root 15569 Jan 24 16:42 ceph-deploy-ceph.log
-rw-------. 1 root root    73 Jan 24 16:33 ceph.mon.keyring
[root@mon01:/etc/ceph]# cat ceph.client.admin.keyring
[client.admin]
	key = AQDbrSpebQYfMRAA8Uufu2uqJc+ABr13ITjbGw==
[root@mon01:/etc/ceph]# cat ceph.bootstrap-osd.keyring 
[client.bootstrap-osd]
	key = AQDdrSperPuQERAAMumzf6/239VhCZJbLIp8nA==
//the admin key is stored in the ceph.client.admin.keyring file and is supplied with --keyring
[root@mon01:/etc/ceph]# ceph --keyring ceph.client.admin.keyring -c ceph.conf -s
  cluster:
    id:     e8898db6-3ee9-44ec-9fdb-0c43f192d6cb
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum mon01
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     

[root@mon01:/etc/ceph]# ceph --keyring ceph.client.admin.keyring -c ceph.conf auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQDbrSpebQYfMRAA8Uufu2uqJc+ABr13ITjbGw==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

13.4.6 Distribute the Ceph keyrings

To run admin commands you have to supply the admin key (--keyring ceph.client.admin.keyring) and the configuration file (-c ceph.conf). In day-to-day operation we often need to run admin commands on some server, and passing these arguments every time is tedious. Ceph looks for the keyring and ceph.conf in /etc/ceph/ by default, so we can place ceph.client.admin.keyring and ceph.conf in /etc/ceph/ on every server; ceph-deploy can do that for us.

[root@mon01:/etc/ceph]# ceph-deploy admin mon01 comp01 comp02 comp03
//check each server: /etc/ceph/ now contains ceph.client.admin.keyring and ceph.conf, so those arguments are no longer needed:
[root@comp02:/etc/ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpiCmZL1
[root@comp02:/etc/ceph]# ceph -s
  cluster:
    id:     e8898db6-3ee9-44ec-9fdb-0c43f192d6cb
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum mon01
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     
 
[root@comp02:/etc/ceph]#  ceph auth get client.admin 
exported keyring for client.admin
[client.admin]
	key = AQDbrSpebQYfMRAA8Uufu2uqJc+ABr13ITjbGw==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

13.4.7 Create the Ceph mgr

Starting with Ceph 12 (Luminous), a mgr daemon should be created for each monitor.

[root@mon01:/etc/ceph]# ceph-deploy mgr create mon01
[root@mon01:/etc/ceph]# ceph -s
  cluster:
    id:     e8898db6-3ee9-44ec-9fdb-0c43f192d6cb
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum mon01
    mgr: mon01(active)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     

13.5 Add OSDs

ceph-deploy osd create calls ceph-volume to create an OSD. With bluestore (the default), up to three devices can be specified:

- block (--data): the main storage, required; can be a whole disk, a partition or an LV
- block.db (--block-db): optional; if not given, its content is stored on block; can be a partition or an LV
- block.wal (--block-wal): optional; if not given, its content is stored on block; can be a partition or an LV

Notes:

  1. A whole disk cannot be used as block.db or block.wal, otherwise it fails with: blkid could not detect a PARTUUID for device;
  2. If a whole disk or a partition is used as block, ceph-volume creates an LV on it; if a partition is used as block.db or block.wal, the partition is used directly and no LV is created.
13.5.1 Add osd.0 (whole disk as block, no block.db, no block.wal)
[root@mon01:/etc/ceph]# ceph-deploy osd create comp01 --data /dev/sdb
[root@mon01:/etc/ceph]# ceph -s
  cluster:
    id:     e8898db6-3ee9-44ec-9fdb-0c43f192d6cb
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum mon01
    mgr: mon01(active)
    osd: 1 osds: 1 up, 1 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   1.00GiB used, 14.0GiB / 15.0GiB avail
    pgs:     
[root@comp01:/etc/ceph]# ceph -s
  cluster:
    id:     e8898db6-3ee9-44ec-9fdb-0c43f192d6cb
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum mon01
    mgr: mon01(active)
    osd: 1 osds: 1 up, 1 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   1.00GiB used, 14.0GiB / 15.0GiB avail
    pgs:     
[root@comp01:/etc/ceph]# mount | grep ceph
tmpfs on /var/lib/ceph/osd/ceph-0 type tmpfs (rw,relatime,seclabel)
[root@comp01:/etc/ceph]# ll /var/lib/ceph/osd/ceph-0 
total 48
-rw-r--r--. 1 ceph ceph 186 Jan 24 17:08 activate.monmap
lrwxrwxrwx. 1 ceph ceph  93 Jan 24 17:08 block -> /dev/ceph-dedc21dd-a432-4b55-a94b-f4cd6f5a2ffc/osd-block-60dfbeb8-cca5-438d-a20c-144c025220b1
-rw-r--r--. 1 ceph ceph   2 Jan 24 17:08 bluefs
-rw-r--r--. 1 ceph ceph  37 Jan 24 17:08 ceph_fsid
-rw-r--r--. 1 ceph ceph  37 Jan 24 17:08 fsid
-rw-------. 1 ceph ceph  55 Jan 24 17:08 keyring
-rw-r--r--. 1 ceph ceph   8 Jan 24 17:08 kv_backend
-rw-r--r--. 1 ceph ceph  21 Jan 24 17:08 magic
-rw-r--r--. 1 ceph ceph   4 Jan 24 17:08 mkfs_done
-rw-r--r--. 1 ceph ceph  41 Jan 24 17:08 osd_key
-rw-r--r--. 1 ceph ceph   6 Jan 24 17:08 ready
-rw-r--r--. 1 ceph ceph  10 Jan 24 17:08 type
-rw-r--r--. 1 ceph ceph   2 Jan 24 17:08 whoami

Note, as shown above:
1. an LV is created on disk sdb and used as block;
2. the OSD directory is mounted on tmpfs (bluefs, ceph_fsid, fsid, keyring, etc. are kept in the cluster itself);
13.5.2 Add osd.1 (whole disk as block, no block.db, no block.wal)
[root@mon01:/etc/ceph]# ceph-deploy osd create comp02 --data /dev/sdb
[root@mon01:/etc/ceph]# ceph -s
  cluster:
    id:     e8898db6-3ee9-44ec-9fdb-0c43f192d6cb
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum mon01
    mgr: mon01(active)
    osd: 2 osds: 2 up, 2 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   2.00GiB used, 28.0GiB / 30.0GiB avail
    pgs:     
[root@comp02:/root]# ceph -s
  cluster:
    id:     e8898db6-3ee9-44ec-9fdb-0c43f192d6cb
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum mon01
    mgr: mon01(active)
    osd: 2 osds: 2 up, 2 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   2.00GiB used, 28.0GiB / 30.0GiB avail
    pgs:     
[root@comp02:/root]# ll /var/lib/ceph/osd/ceph-1
total 48
-rw-r--r--. 1 ceph ceph 186 Jan 24 17:47 activate.monmap
lrwxrwxrwx. 1 ceph ceph  93 Jan 24 17:47 block -> /dev/ceph-364713bc-fb89-4ef9-9ff1-986f93463c4b/osd-block-ece03651-d728-4478-8d46-43d31686a58d
-rw-r--r--. 1 ceph ceph   2 Jan 24 17:47 bluefs
-rw-r--r--. 1 ceph ceph  37 Jan 24 17:47 ceph_fsid
-rw-r--r--. 1 ceph ceph  37 Jan 24 17:47 fsid
-rw-------. 1 ceph ceph  55 Jan 24 17:47 keyring
-rw-r--r--. 1 ceph ceph   8 Jan 24 17:47 kv_backend
-rw-r--r--. 1 ceph ceph  21 Jan 24 17:47 magic
-rw-r--r--. 1 ceph ceph   4 Jan 24 17:47 mkfs_done
-rw-r--r--. 1 ceph ceph  41 Jan 24 17:47 osd_key
-rw-r--r--. 1 ceph ceph   6 Jan 24 17:47 ready
-rw-r--r--. 1 ceph ceph  10 Jan 24 17:47 type
-rw-r--r--. 1 ceph ceph   2 Jan 24 17:47 whoami
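
comp03 was included in the ceph-deploy install run as well, so if it also has a spare data disk the third OSD can be added the same way (assuming its data disk is likewise /dev/sdb):

ceph-deploy osd create comp03 --data /dev/sdb
ceph osd tree        # should now list osd.0, osd.1 and osd.2 as up/in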

13.6 ceph-deploy command reference

ceph-deploy new [initial-monitor-node(s)]
Starts a new cluster deployment: generates the configuration file, a keyring and a log file.

ceph-deploy install [HOST] [HOST…]
Installs the Ceph packages on the remote hosts; --release selects the version (the default is firefly).

ceph-deploy mon create-initial
Deploys the initial monitor members, i.e. the monitors listed in mon initial members in the configuration file, waits until they form a quorum, then gathers the keys, reporting monitor status along the way.

ceph-deploy mon create [HOST] [HOST…]
Explicitly deploys monitors; with no arguments it defaults to the hosts in mon initial members.

ceph-deploy mon add [HOST]
Adds a monitor to the cluster.

ceph-deploy mon destroy [HOST]
Completely removes the monitor from the host: stops the ceph-mon service, verifies that it has stopped, and creates an archive directory mon-remove under /var/lib/ceph.

ceph-deploy gatherkeys [HOST] [HOST…]
Gathers the authentication keys used to provision new nodes; they are needed when new MON/OSD/MDS nodes join.

ceph-deploy disk list [HOST]
Lists the disks on a remote host (implemented by calling ceph-disk).

ceph-deploy disk prepare [HOST:[DISK]]
Prepares a directory or disk for an OSD: creates a GPT partition, labels the partition with Ceph's UUID, creates a filesystem and marks it as usable by Ceph.

ceph-deploy disk activate [HOST:[DISK]]
Activates a prepared OSD partition: mounts the partition at a temporary location, obtains the OSD ID, remounts it at the proper location /var/lib/ceph/osd/ceph-{osd id}, and starts ceph-osd.

ceph-deploy disk zap [HOST:[DISK]]
Wipes the partition table and contents of the given disk; it calls sgdisk --zap-all to destroy the GPT and MBR so the disk can be repartitioned.

ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
Prepares a directory or disk for an OSD: checks the MAX PID limit, reads the bootstrap-osd key (or writes one if it is not found), then uses ceph-disk prepare to prepare the disk and journal and deploy the OSD onto the given host.

ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
Activates the OSD from the previous step (it calls ceph-disk activate); the OSD then becomes up and in.

ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
Combines the two commands above.

ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]…]
Lists the OSDs and their disks/partitions.

ceph-deploy admin [HOST] [HOST…]
Pushes the client.admin key to the remote hosts: copies the client.admin keyring from the ceph-admin node into /etc/ceph/ on each host.

ceph-deploy config push [HOST] [HOST…]
Pushes the ceph.conf file from the ceph-admin node to /etc/ceph/ on the target hosts; ceph-deploy config pull [HOST] is the reverse.

ceph-deploy uninstall [HOST] [HOST…]
Uninstalls the Ceph packages from the remote hosts; some packages, such as librbd1 and librados2, are not removed.

ceph-deploy purge [HOST] [HOST…]
Like the previous command, but also deletes the data.

ceph-deploy purgedata [HOST] [HOST…]
Deletes the data under /var/lib/ceph and also removes the contents of /etc/ceph.

ceph-deploy forgetkeys
Deletes all authentication keyrings in the local directory, including client.admin, monitor and the bootstrap series.

ceph-deploy pkg --install/--remove [PKGs] [HOST] [HOST…]
Installs or removes packages on remote hosts; [PKGs] is a comma-separated list of package names.

13.7 Troubleshooting

If the following error appears:

[cont03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cont03][DEBUG ] find the location of an executable
[cont03][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[cont03][WARNIN] usage: ceph-volume lvm create [-h] --data DATA [--filestore]
[cont03][WARNIN]                               [--journal JOURNAL] [--bluestore]
[cont03][WARNIN]                               [--block.db BLOCK_DB] [--block.wal BLOCK_WAL]
[cont03][WARNIN]                               [--osd-id OSD_ID] [--osd-fsid OSD_FSID]
[cont03][WARNIN]                               [--cluster-fsid CLUSTER_FSID]
[cont03][WARNIN]                               [--crush-device-class CRUSH_DEVICE_CLASS]
[cont03][WARNIN]                               [--dmcrypt] [--no-systemd]
[cont03][WARNIN] ceph-volume lvm create: error: GPT headers found, they must be removed on: /dev/sdb
[cont03][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

Run the following:

//[root@cont03:/etc/ceph]# ceph-disk activate-all
[root@cont03:/etc/ceph]# parted /dev/sdb mklabel gpt -s
[root@cont03:/etc/ceph]# ceph-volume lvm zap /dev/sdb
--> Zapping: /dev/sdb
--> --destroy was not specified, but zapping a whole device will remove the partition table
Running command: wipefs --all /dev/sdb
 stdout: /dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x3bffffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioclt to re-read partition table: Success
Running command: dd if=/dev/zero of=/dev/sdb bs=1M count=10
--> Zapping successful for: /dev/sdb
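
Once the zap succeeds, simply re-run the failed OSD creation from the deploy node for the host that hit the error (cont03 in the log above):

ceph-deploy osd create cont03 --data /dev/sdb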




To be continued — the rest of the Ceph cluster setup is covered in the follow-up article:
《Openstack集群-Ceph集群作为存储的部署》 (OpenStack cluster: deploying the Ceph cluster as the storage backend)
