OpenStack Cloud Computing and Virtualization

1. Basics

1. What is cloud computing?

Cloud computing is implemented through virtualization; it is a pay-as-you-go (metered billing) model.

2. Why use cloud computing?

  • Renting space in someone else's data center is expensive
  • Scaling out is cumbersome

3. Cloud service models

  • IaaS: Infrastructure as a Service
  • PaaS: Platform as a Service (each language runtime is a platform)
  • SaaS: Software as a Service

4. What does IaaS provide?

  • A virtual machine management platform: the state of every VM is tracked in a database
  • OpenStack implements the IaaS layer; it is an open-source cloud platform released under the Apache 2.0 license.

5. OpenStack architecture

  • OpenStack uses an SOA (service-oriented architecture):

That is, the business logic is decomposed: every function becomes an independent web service, and each service can be deployed as a cluster of at least one node.
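For intuition, each of the services installed later in this document is just an independent HTTP endpoint. A minimal probe sketch (the hostname osacn and the ports are the ones configured in the sections that follow; this is illustrative, not part of the install):

    # Probe each core service; every OpenStack function is a separate web service.
    for svc in keystone:5000 glance:9292 nova:8774 neutron:9696; do
        name=${svc%%:*}; port=${svc##*:}
        printf '%-10s ' "$name"
        curl -s -o /dev/null -w "%{http_code}\n" "http://osacn:$port/"
    done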

2. Installing and Deploying OpenStack

Official documentation: https://docs.openstack.org/zh_CN/

Course reference: OpenStack从入门到放弃 (老男孩 IT 教育's tech blog on 51CTO)

1. Environment setup

Node name   IP address       Role
OSACN       192.168.80.181   control node (manages the VMs)
OSACM       192.168.80.182   compute node (hosts the VMs)
  • OSACM (compute): the compute node must have hardware virtualization enabled
  • OSACN (control): the control node needs more than 3 GB of RAM

2. Configure the yum repositories (on both nodes)

  1. Mount the installation media on /mnt and make the mount persist across reboots

    $ mount /dev/cdrom /mnt
    #ERROR#
    	# Message: mount: no medium found on /dev/sr0
    	# Fix:
    		# In the VM settings, under Hardware > CD/DVD > Device status, tick both the "Connected" and "Connect at power on" checkboxes.
    $ echo "mount /dev/cdrom /mnt" >> /etc/rc.local
    $ chmod +x /etc/rc.d/rc.local
    
  2. Upload openstack_rpm.tar.gz to /opt and unpack it

  3. Create local.repo pointing at the two local package trees

    $ vim /etc/yum.repos.d/local.repo
    -----------------------------------------
    [local]
    name=local
    baseurl=file:///mnt
    gpgcheck=0
    
    [openstack]
    name=openstack
    baseurl=file:///opt/repo
    gpgcheck=0
    -----------------------------------------
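    To confirm both repositories resolve before installing anything, rebuild the cache and list them (standard yum commands; the repo ids local and openstack come from the file above):

    $ yum clean all
    $ yum makecache
    $ yum repolist  # 'local' and 'openstack' should appear with non-zero package counts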
    

3. Install the base services

  1. On all nodes, install the OpenStack client and openstack-selinux

    yum install -y python-openstackclient openstack-selinux
    
  2. On the control node, install the database

    yum install -y mariadb mariadb-server python2-PyMySQL
    $ cat /etc/my.cnf.d/openstack.cnf
    -----------------------------------------
    [mysqld]
    bind-address = 192.168.80.181
    default-storage-engine=innodb
    innodb_file_per_table
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8
    -----------------------------------------
    $ systemctl start mariadb
    $ systemctl enable mariadb
    $ mysql_secure_installation 
    # press Enter for the current (empty) root password,
    # then set a new root password (this document uses 1 elsewhere)
    # and answer y to the remaining prompts
    
  3. On the control node, install the message queue

    $ yum install -y rabbitmq-server
    $ systemctl start rabbitmq-server.service 
    $ systemctl enable rabbitmq-server.service 
    $ rabbitmqctl add_user openstack RABBIT_PASS  # create the user and set its password
    Creating user "openstack" ...
    $ rabbitmqctl set_permissions openstack ".*" ".*" ".*"  # grant the user configure/write/read permissions
    Setting permissions for user "openstack" in vhost "/" ...
    $ rabbitmq-plugins enable rabbitmq_management # enable the web management plugin
    ## Check that ports 5672, 15672, and 25672 are listening
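    A quick way to verify the broker (netstat comes from net-tools; both tools below ship with RabbitMQ/CentOS):

    $ netstat -lntp | egrep '5672|15672|25672'  # AMQP, web management, and inter-node ports
    $ rabbitmqctl list_users                    # the openstack user should be listed
    ## The management UI is then reachable at http://192.168.80.181:15672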
    
  4. memcached (caches keystone tokens)

    $ yum install -y memcached python-memcached
    $ sed -i "s#127.0.0.1#192.168.80.181#g" /etc/sysconfig/memcached
    $ systemctl start memcached.service 
    $ systemctl enable memcached.service
    ## Check that port 11211 is listening
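    To confirm memcached answers on its new address (this sketch assumes a netcat binary, e.g. from nmap-ncat, is available):

    $ netstat -lntp | grep 11211
    $ printf 'stats\r\nquit\r\n' | nc 192.168.80.181 11211 | head -3  # should print STAT lines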
    

4. Identity service (keystone, control node only)

  1. What it does

    1. Authentication, authorization, and the service catalog (a "phone book" of service endpoints)
  2. Setting up the identity service

    1. Create the database and grant privileges

      $ mysql -uroot -p1
      $ create database keystone;
      $ grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONE_DBPASS';
      $ grant all privileges on keystone.* to 'keystone'@'%' identified by 'KEYSTONE_DBPASS';
      
    2. Install the keystone packages

      $ yum install -y openstack-keystone httpd mod_wsgi
      
    3. Edit the configuration file

      $ cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
      $ grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
      $ vim /etc/keystone/keystone.conf
      -------------------------------------------
      [DEFAULT]
      
      admin_token = ADMIN_TOKEN
      
      [database]
      connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@osacn/keystone
      ## protocol://db_user:password@host/database_name
      [token]
      provider = fernet
      -------------------------------------------
      ## Alternatively, with openstack-utils:
      $ yum install -y openstack-utils
      $ openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN
      
    4. Initialize the identity service database and the Fernet keys

      $ su -s /bin/sh -c "keystone-manage db_sync" keystone  # 切换到keystone 用户下执行使用 /bin/sh 执行了引号内的命令
      $ mysql -uroot -p1 keystone -e "show tables;"  # 有表上步成功
      $ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      $ ls /etc/keystone  # a fernet-keys directory means the previous step succeeded
      
    5. Configure httpd

      echo "ServerName ocman" >> /etc/httpd/conf/httpd.conf  # 使 apache 启动更快
      $ vim /etc/httpd/conf.d/wsgi-keystone.conf
      -------------------------------------------
      Listen 5000
      Listen 35357

      <VirtualHost *:5000>
          WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-public
          WSGIScriptAlias / /usr/bin/keystone-wsgi-public
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          ErrorLogFormat "%{cu}t %M"
          ErrorLog /var/log/httpd/keystone-error.log
          CustomLog /var/log/httpd/keystone-access.log combined

          <Directory /usr/bin>
              Require all granted
          </Directory>
      </VirtualHost>

      <VirtualHost *:35357>
          WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-admin
          WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          ErrorLogFormat "%{cu}t %M"
          ErrorLog /var/log/httpd/keystone-error.log
          CustomLog /var/log/httpd/keystone-access.log combined

          <Directory /usr/bin>
              Require all granted
          </Directory>
      </VirtualHost>
      -------------------------------------------
      $ systemctl enable httpd && systemctl start httpd
      ## Check that ports 5000 and 35357 are listening
      
    6. Create the service entity and register the API endpoints

      ## Set the bootstrap environment variables
      $ export OS_TOKEN=ADMIN_TOKEN  # the authentication token
      $ export OS_URL=http://osacn:35357/v3  # the endpoint URL
      $ export OS_IDENTITY_API_VERSION=3  # the identity API version
      ## Create the identity service entity
      $ openstack service create \
        --name keystone --description "OpenStack Identity" identity
      ## Create the three API endpoints (public, internal, admin)
      $ openstack endpoint create --region RegionOne \
        identity public http://osacn:5000/v3
      $ openstack endpoint create --region RegionOne \
        identity internal http://osacn:5000/v3
      $ openstack endpoint create --region RegionOne \
        identity admin http://osacn:35357/v3
      ## Each of the four commands above should print a table on success
      
    7. Create a domain (region), project (team), user, and role

      $ openstack domain create --description "Default Domain" default  # create the domain
      $ openstack project create --domain default --description "Admin Project" admin  # create the project
      $ openstack user create --domain default --password ADMIN_PASS admin  # create the user
      $ openstack role create admin  # create the role
      $ openstack role add --project admin --user admin admin  # bind project, user, and role together
      $ openstack project create --domain default --description "Service Project" service  # create the service project
      
    8. Verify

      $ vim admin-openrc
      --------------------------------------------
      export OS_PROJECT_DOMAIN_NAME=default
      export OS_USER_DOMAIN_NAME=default
      export OS_PROJECT_NAME=admin
      export OS_USERNAME=admin
      export OS_PASSWORD=ADMIN_PASS
      export OS_AUTH_URL=http://osacn:35357/v3
      export OS_IDENTITY_API_VERSION=3
      --------------------------------------------
      $ vim /root/.bashrc
      --------------------------------------------
      source admin-openrc
      --------------------------------------------
      $ env | grep OS_  # list the OS_* environment variables
      $ unset OS_TOKEN OS_URL  # drop the bootstrap variables
      $ openstack token issue  # a token table here means keystone works
      

5. Image service (glance, control node only)

  1. Create the database and grant privileges

    $ mysql -uroot -p1
    $ CREATE DATABASE glance;
    $ GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
    $ GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
    
  2. In keystone, create the glance user and grant it the admin role

    $ openstack user create --domain default --password GLANCE_PASS glance
    $ openstack role add --project service --user glance admin 
    
  3. In keystone, create the service entity and register the API endpoints

    $ openstack service create --name glance \
      --description "OpenStack Image" image
    $ openstack endpoint create --region RegionOne \
      image public http://osacn:9292
    $ openstack endpoint create --region RegionOne \
      image internal http://osacn:9292
    $ openstack endpoint create --region RegionOne \
      image admin http://osacn:9292
    
  4. Install glance

    $ yum install -y openstack-glance
    
  5. Configure the components and start the services

    $ cp  /etc/glance/glance-api.conf  /etc/glance/glance-api.conf.bak 
    $ grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
    $ vim /etc/glance/glance-api.conf
    --------------------------------------------
    [DEFAULT]
    [cors]
    [cors.subdomain]
    [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@osacn/glance
    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/
    [image_format]
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = glance
    password = GLANCE_PASS
    [matchmaker_redis]
    [oslo_concurrency]
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_policy]
    [paste_deploy]
    flavor = keystone
    --------------------------------------------
    $ cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
    $ grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
    $ vim /etc/glance/glance-registry.conf
    --------------------------------------------
    [DEFAULT]
    [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@osacn/glance
    [glance_store]
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = glance
    password = GLANCE_PASS
    [matchmaker_redis]
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_policy]
    [paste_deploy]
    flavor = keystone
    [profiler]
    --------------------------------------------
    $ su -s /bin/sh -c "glance-manage db_sync" glance
    $ mysql -uroot -p1 glance -e "show tables"  # check that tables exist
    $ systemctl enable openstack-glance-api.service openstack-glance-registry.service
    $ systemctl start openstack-glance-api.service openstack-glance-registry.service
    
  6. Verify

    ## Check that ports 9191 and 9292 are listening
    ## Upload an image
    $ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
    $ openstack image create "cirros" \
      --file cirros-0.3.4-x86_64-disk.img \
      --disk-format qcow2 --container-format bare \
      --public
    $ ls /var/lib/glance/images  # the uploaded image file should be here
    $ openstack image list
    +---------------+--------+--------+
    | ID            | Name   | Status |
    +---------------+--------+--------+
    | e447460b-89a2 | cirros | active |
    +---------------+--------+--------+
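    Before uploading, it can be worth confirming the image's real on-disk format matches --disk-format (qemu-img ships with the qemu packages; this cirros build is qcow2):

    $ qemu-img info cirros-0.3.4-x86_64-disk.img
    # the reported 'file format' must agree with --disk-format in the upload command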
    

6. Compute service (nova)

(1) Component overview

  1. nova-api: receives and responds to all compute API requests; manages the VM lifecycle
  2. nova-compute: does the actual VM management (by calling libvirt)
  3. nova-scheduler: the nova scheduler (picks the most suitable nova-compute node for a new VM)
  4. nova-conductor: proxies nova-compute's updates to VM state in the database
  5. nova-network: the legacy VM networking service
  6. nova-consoleauth and nova-novncproxy: web-based VNC access to instances
  7. nova-api-metadata: serves metadata requests coming from instances

(2) Control node

  1. Create the databases and grant privileges

    $ mysql -u root -p1
    $ CREATE DATABASE nova_api;
    $ CREATE DATABASE nova;
    $ GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
      IDENTIFIED BY 'NOVA_DBPASS';
    $ GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
      IDENTIFIED BY 'NOVA_DBPASS';
    $ GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
      IDENTIFIED BY 'NOVA_DBPASS';
    $ GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
      IDENTIFIED BY 'NOVA_DBPASS';
    
  2. In keystone, create the nova user and grant it the admin role

    $ openstack user create --domain default \
      --password NOVA_PASS nova
    $ openstack role add --project service --user nova admin
    
  3. Create the service entity and API endpoints

    $ openstack service create --name nova \
      --description "OpenStack Compute" compute
    $ openstack endpoint create --region RegionOne \
      compute public http://osacn:8774/v2.1/%\(tenant_id\)s
    $ openstack endpoint create --region RegionOne \
      compute internal http://osacn:8774/v2.1/%\(tenant_id\)s
    $ openstack endpoint create --region RegionOne \
      compute admin http://osacn:8774/v2.1/%\(tenant_id\)s
    
  4. Install the packages

    $ yum install openstack-nova-api openstack-nova-conductor \
      openstack-nova-console openstack-nova-novncproxy \
      openstack-nova-scheduler -y
    
    
  5. Edit the configuration file

    $ cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
    $ grep -Ev "^$|#" /etc/nova/nova.conf.bak > /etc/nova/nova.conf
    $ vim /etc/nova/nova.conf
    --------------------------------------------
    [DEFAULT]
    enabled_apis = osapi_compute,metadata
    rpc_backend = rabbit
    auth_strategy = keystone
    my_ip = 192.168.80.181
    use_neutron = True
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@osacn/nova_api
    [barbican]
    [cache]
    [cells]
    [cinder]
    [conductor]
    [cors]
    [cors.subdomain]
    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@osacn/nova
    [ephemeral_storage_encryption]
    [glance]
    api_servers = http://osacn:9292
    [guestfs]
    [hyperv]
    [image_file_url]
    [ironic]
    [keymgr]
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = NOVA_PASS
    [libvirt]
    [matchmaker_redis]
    [metrics]
    [neutron]
    [osapi_v21]
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    rabbit_host = osacn
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS
    [oslo_middleware]
    [oslo_policy]
    [rdp]
    [serial_console]
    [spice]
    [ssl]
    [trusted_computing]
    [upgrade_levels]
    [vmware]
    [vnc]
    vncserver_listen = $my_ip
    vncserver_proxyclient_address = $my_ip
    [workarounds]
    [xenserver]
    --------------------------------------------
    
  6. Sync the databases and start the services

    $ su -s /bin/sh -c "nova-manage api_db sync" nova
    $ su -s /bin/sh -c "nova-manage db sync" nova
    $ systemctl enable openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
    $ systemctl start openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
    
  7. Verify

    $ nova service-list  # three services should be up
    +----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
    | Id | Binary           | Host  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
    +----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
    | 1  | nova-consoleauth | osacn | internal | enabled | up    | 2021-08-27T01:15:56.000000 | -               |
    | 2  | nova-scheduler   | osacn | internal | enabled | up    | 2021-08-27T01:15:57.000000 | -               |
    | 3  | nova-conductor   | osacn | internal | enabled | up    | 2021-08-27T01:15:59.000000 | -               |
    +----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
    ## Port 6080 should be listening
    

(3) Compute node

  1. Install the packages

    $ yum install openstack-nova-compute -y
    
  2. Edit the configuration file

    $ cat /etc/nova/nova.conf
    --------------------------------------------
    [DEFAULT]
    rpc_backend = rabbit
    auth_strategy = keystone
    my_ip = 192.168.80.182
    use_neutron = True
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    [api_database]
    [barbican]
    [cache]
    [cells]
    [cinder]
    [conductor]
    [cors]
    [cors.subdomain]
    [database]
    [ephemeral_storage_encryption]
    [glance]
    api_servers = http://osacn:9292
    [guestfs]
    [hyperv]
    [image_file_url]
    [ironic]
    [keymgr]
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = NOVA_PASS
    [libvirt]
    virt_type = qemu   # note: include this line only when egrep -c '(vmx|svm)' /proc/cpuinfo returns 0!
    [matchmaker_redis]
    [metrics]
    [neutron]
    [osapi_v21]
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    rabbit_host = osacn
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS
    [oslo_middleware]
    [oslo_policy]
    [rdp]
    [serial_console]
    [spice]
    [ssl]
    [trusted_computing]
    [upgrade_levels]
    [vmware]
    [vnc]
    enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = $my_ip
    novncproxy_base_url = http://192.168.80.181:6080/vnc_auto.html
    [workarounds]
    [xenserver]
    --------------------------------------------
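    A small sketch for choosing virt_type on a compute node; the vmx/svm CPU flags indicate hardware virtualization support:

    # No vmx/svm flags means no hardware acceleration: use full emulation (qemu).
    if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
        echo "virt_type = qemu"
    else
        echo "virt_type = kvm"
    fi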
    
  3. Start the services

    $ yum install -y qemu*  # install the full qemu package set to work around a known bug
    $ systemctl enable libvirtd.service openstack-nova-compute.service
    $ systemctl start libvirtd.service openstack-nova-compute.service
    
  4. Verify

    ## Run on the control node
    $ nova service-list  # four services should be up
    

7. Network service (neutron)

(1) Basics

  1. neutron-server (port 9696): receives and responds to external network management requests
  2. neutron-linuxbridge-agent: creates the bridge devices
  3. neutron-dhcp-agent: hands out IP addresses
  4. neutron-metadata-agent: works with nova-api-metadata to customize instances
  5. L3-agent: provides layer-3 (network layer) features such as vxlan

(2) Control node

  1. Create the database and grant privileges

    $ mysql -u root -p1
    $ CREATE DATABASE neutron;
    $ GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
      IDENTIFIED BY 'NEUTRON_DBPASS';
    $ GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
      IDENTIFIED BY 'NEUTRON_DBPASS';
    
  2. Create the service credentials

    $ openstack user create --domain default --password NEUTRON_PASS neutron
    $ openstack role add --project service --user neutron admin
    
  3. Create the service entity

    $ openstack service create --name neutron \
      --description "OpenStack Networking" network
      
    
  4. Register the network service API endpoints

    $ openstack endpoint create --region RegionOne \
      network public http://osacn:9696
    $ openstack endpoint create --region RegionOne \
      network internal http://osacn:9696
    $ openstack endpoint create --region RegionOne \
      network admin http://osacn:9696
    
  5. Install and configure the components

    $ yum install openstack-neutron openstack-neutron-ml2 \
      openstack-neutron-linuxbridge ebtables
    $ cat /etc/neutron/neutron.conf
    --------------------------------------------
    [DEFAULT]
    core_plugin = ml2
    service_plugins =
    rpc_backend = rabbit
    auth_strategy = keystone
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    [agent]
    [cors]
    [cors.subdomain]
    [database]
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@osacn/neutron
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    [matchmaker_redis]
    [nova]
    auth_url = http://osacn:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = nova
    password = NOVA_PASS
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    rabbit_host = osacn
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS
    [oslo_policy]
    [qos]
    [quotas]
    [ssl]
    --------------------------------------------
    $ cat /etc/neutron/plugins/ml2/ml2_conf.ini
    --------------------------------------------
    [DEFAULT]
    [ml2]
    type_drivers = flat,vlan
    tenant_network_types =
    mechanism_drivers = linuxbridge
    extension_drivers = port_security
    [ml2_type_flat]
    flat_networks = provider
    [ml2_type_geneve]
    [ml2_type_gre]
    [ml2_type_vlan]
    [ml2_type_vxlan]
    [securitygroup]
    enable_ipset = True
    --------------------------------------------
    $ cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    --------------------------------------------
    [DEFAULT]
    [agent]
    [linux_bridge]
    physical_interface_mappings = provider:eno33554984
    [securitygroup]
    enable_security_group = True
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    [vxlan]
    enable_vxlan = False
    --------------------------------------------
    $ cat /etc/neutron/dhcp_agent.ini
    --------------------------------------------
    [DEFAULT]
    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    enable_isolated_metadata = True
    [AGENT]
    --------------------------------------------
    $ cat /etc/neutron/metadata_agent.ini
    --------------------------------------------
    [DEFAULT]
    nova_metadata_ip = osacn
    metadata_proxy_shared_secret = METADATA_SECRET
    [AGENT]
    --------------------------------------------
    $ cat /etc/nova/nova.conf
    --------------------------------------------
    ...
    [neutron]
    url = http://osacn:9696
    auth_url = http://osacn:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    
    service_metadata_proxy = True
    metadata_proxy_shared_secret = METADATA_SECRET
    ...
    --------------------------------------------
    $ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    $ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
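    As with keystone and glance, listing the tables confirms the sync ran:

    $ mysql -uroot -p1 neutron -e "show tables;"  # tables present means 'upgrade head' succeeded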
    
  6. Start the services and verify

    $ systemctl enable neutron-server.service \
      neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service
    $ systemctl start neutron-server.service \
      neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service
    $ neutron agent-list  # three running agents means success
    

(3) Compute node

  1. Install the packages

    $ yum install -y openstack-neutron-linuxbridge ebtables ipset
    
  2. Configure the common components

    $ cat /etc/neutron/neutron.conf
    --------------------------------------------
    [DEFAULT]
    rpc_backend = rabbit
    auth_strategy = keystone
    [agent]
    [cors]
    [cors.subdomain]
    [database]
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    [matchmaker_redis]
    [nova]
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    rabbit_host = osacn
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS
    [oslo_policy]
    [qos]
    [quotas]
    [ssl]
    --------------------------------------------
    $ cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    --------------------------------------------
    [DEFAULT]
    [agent]
    [linux_bridge]
    physical_interface_mappings = provider:eno33554984
    [securitygroup]
    enable_security_group = True
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    [vxlan]
    enable_vxlan = False
    --------------------------------------------
    $ cat /etc/nova/nova.conf
    --------------------------------------------
    ...
    [neutron]
    url = http://osacn:9696
    auth_url = http://osacn:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    --------------------------------------------
    
  3. Start the services and verify

    $ systemctl restart openstack-nova-compute.service
    $ systemctl enable neutron-linuxbridge-agent.service
    $ systemctl start neutron-linuxbridge-agent.service
    ## Run the following on the control node
    $ neutron agent-list  # the compute node's hostname should appear in the host column
    

8. Dashboard (horizon, on the compute node)

  1. Install the package

    $ yum install openstack-dashboard -y
    
  2. Configure

    Drop a prepared local_settings into /etc/openstack-dashboard/ and correct the hostname entries so they point at the control node's hostname.

  3. Configure Apache and restart the services

    $ cat /etc/httpd/conf.d/openstack-dashboard.conf
    --------------------------------------------
    WSGIDaemonProcess dashboard
    WSGIProcessGroup dashboard
    WSGISocketPrefix run/wsgi
    WSGIApplicationGroup %{GLOBAL}  # append this last line
    --------------------------------------------
    $ systemctl restart httpd.service memcached.service
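    A quick reachability check (assuming horizon's default /dashboard URL prefix on CentOS; osacm is the node it was installed on):

    $ curl -s -o /dev/null -w "%{http_code}\n" http://osacm/dashboard/  # 200, or a 30x redirect to the login page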
    

9. Launch an instance

  1. Create a network (network name plus subnet)

    $ neutron net-create --shared --provider:physical_network provider --provider:network_type flat codefun
    ## Parameters
    	--shared allows all projects to use this virtual network
    	# the provider:* options must match the configuration files; codefun is the network name
    $ neutron subnet-create --name provider \
      --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS \
      --dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY \
      provider PROVIDER_NETWORK_CIDR
     ## Parameters
     	1. The allocation pool must not contain IP addresses that are already in use
     	2. The DNS server can be taken from /etc/resolv.conf
     	3. The gateway is the host network's gateway
    # For example:
    $ neutron subnet-create --name codebad \
      --allocation-pool start=192.168.80.200,end=192.168.80.250 \
      --dns-nameserver 114.114.114.114 --gateway 192.168.80.2 \
      codefun 192.168.80.0/24
    ## (this second subnet belongs to the codefunny network created in the flat-segment section below)
    $ neutron subnet-create --name codebadny \
      --allocation-pool start=172.16.0.200,end=172.16.0.250 \
      --dns-nameserver 114.114.114.114 --gateway 172.16.0.2 \
      codefunny 172.16.0.0/24
    
  2. Create a flavor (the instance hardware profile)

    $ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
    
  3. Create a key pair

    $ ssh-keygen -q -N "" -f ~/.ssh/id_rsa  # generate a key pair
    $ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey  # upload the public key to OpenStack
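    To confirm the key was registered:

    $ openstack keypair list  # mykey should appear with its fingerprint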
    
  4. Create security group rules

    $ openstack security group rule create --proto icmp default
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | id                    | 8e4db7e7-e29d-4705-86de-7770d60eefb4 |
    | ip_protocol           | icmp                                 |
    | ip_range              | 0.0.0.0/0                            |
    | parent_group_id       | fbd20348-ffdd-46b6-bbc9-259d28b76609 |
    | port_range            |                                      |
    | remote_security_group |                                      |
    +-----------------------+--------------------------------------+
    $ openstack security group rule create --proto tcp --dst-port 22 default
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | id                    | a6313f44-e85c-4dc5-a1d5-01d955151ea7 |
    | ip_protocol           | tcp                                  |
    | ip_range              | 0.0.0.0/0                            |
    | parent_group_id       | fbd20348-ffdd-46b6-bbc9-259d28b76609 |
    | port_range            | 22:22                                |
    | remote_security_group |                                      |
    +-----------------------+--------------------------------------+
    
  5. Boot an instance

    $ neutron net-list  # get the net-id
    +--------------------------------------+---------+------------------------------------------------------+
    | id                                   | name    | subnets                                              |
    +--------------------------------------+---------+------------------------------------------------------+
    | e694a207-3385-4024-a57a-dfc5ad5ec28d | codefun | 48c64edf-b12d-445c-b4e4-0e08faa5edfd 192.168.80.0/24 |
    +--------------------------------------+---------+------------------------------------------------------+
    $ openstack server create --flavor m1.nano --image cirros --nic net-id=e694a207-3385-4024-a57a-dfc5ad5ec28d --security-group default --key-name mykey codefun001
    
  6. Check that the instance was created

    $ nova list  # the instance should reach ACTIVE status
    

10. Adding another compute node

  1. Configure the yum repositories
  2. Write the hosts file
  3. yum install -y python-openstackclient openstack-selinux
  4. Install the compute service
  5. Install the network service
  6. Use a host aggregate in the web UI to test that the new node works

11. Migrating the image service (glance)

  1. On the control node, stop both glance services (api and registry) and disable them at boot

  2. On the control node, dump the glance database and push it to the target server

    $ mysqldump -uroot -p1 -B glance > glance.sql
    $ scp -rp glance.sql osacm:/root/
    
  3. On the target server: install, start, and enable the database, initialize it, import the dump, and install the two glance services

    $ yum install mariadb mariadb-server python2-PyMySQL -y
    $ systemctl start mariadb && systemctl enable mariadb
    $ mysql_secure_installation 
    $ mysql -uroot -p1 < glance.sql
    $ yum install -y openstack-glance
    
  4. Copy glance's configuration files from the control node to the target server, fix the database host in them on the target, then start and enable both glance services

    $ scp /etc/glance/glance-registry.conf osacm:/etc/glance/glance-registry.conf
    $ scp /etc/glance/glance-api.conf osacm:/etc/glance/glance-api.conf
    $ sed -i 's#GLANCE_DBPASS@osacn#GLANCE_DBPASS@osacm#g' /etc/glance/glance-api.conf
    $ sed -i 's#GLANCE_DBPASS@osacn#GLANCE_DBPASS@osacm#g' /etc/glance/glance-registry.conf
    $ systemctl start openstack-glance-api.service openstack-glance-registry.service
    $ systemctl enable openstack-glance-api.service openstack-glance-registry.service
    $ netstat -lntp  # check for ports 9191 and 9292
    
  5. Copy the images to the target server and fix their owner and group there

    $ scp /var/lib/glance/images/* osacm:/var/lib/glance/images/
    $ chown -R glance:glance /var/lib/glance/images/*
    
  6. Re-register glance's API endpoints in keystone, update every nova.conf that references the old glance host, and restart the affected nova services

    ## First remove the old osacn image endpoints (openstack endpoint list, then openstack endpoint delete <id>), then register the new ones:
    $ openstack endpoint create --region RegionOne   image public http://osacm:9292
    $ openstack endpoint create --region RegionOne   image internal http://osacm:9292
    $ openstack endpoint create --region RegionOne   image admin http://osacm:9292
    $ sed -i "s#http://osacn:9292#http://osacm:9292#g" /etc/nova/nova.conf
    $ systemctl restart openstack-nova-api.service
    $ systemctl restart openstack-nova-compute.service
    
  7. Verify

    ## If you can upload an image and create an instance, the migration worked.
    

12. Cinder block storage

(1) Main components and their roles

  1. cinder-api: receives external API requests
  2. cinder-volume: provides the storage space
  3. cinder-scheduler: the scheduler; decides which cinder-volume serves an allocation
  4. cinder-backup: backs up volumes

(2) Installing cinder: control node

  1. Create the database and grant privileges

    $ mysql -uroot -p1
    $ CREATE DATABASE cinder;
    $ GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
      IDENTIFIED BY 'CINDER_DBPASS';
    $ GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
      IDENTIFIED BY 'CINDER_DBPASS';
    
  2. Create the service credentials and endpoints

    $ openstack user create --domain default --password CINDER_PASS cinder
    $ openstack role add --project service --user cinder admin
    $ openstack service create --name cinder \
      --description "OpenStack Block Storage" volume
    $ openstack service create --name cinderv2 \
      --description "OpenStack Block Storage" volumev2
    $ openstack endpoint create --region RegionOne \
      volume public http://osacn:8776/v1/%\(tenant_id\)s
    $ openstack endpoint create --region RegionOne \
      volume internal http://osacn:8776/v1/%\(tenant_id\)s
    $ openstack endpoint create --region RegionOne \
      volume admin http://osacn:8776/v1/%\(tenant_id\)s
    $ openstack endpoint create --region RegionOne \
      volumev2 public http://osacn:8776/v2/%\(tenant_id\)s
    $ openstack endpoint create --region RegionOne \
      volumev2 internal http://osacn:8776/v2/%\(tenant_id\)s
    $ openstack endpoint create --region RegionOne \
      volumev2 admin http://osacn:8776/v2/%\(tenant_id\)s
    
  3. Install and configure

    $ yum install openstack-cinder
    $ cp /etc/cinder/cinder.conf{,.bak}
    $ grep -Ev "^$|#" /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
    $ vim /etc/cinder/cinder.conf
    --------------------------------------------------
    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@osacn/cinder
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = cinder
    password = CINDER_PASS
    [matchmaker_redis]
    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    rabbit_host = osacn
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS
    [oslo_middleware]
    [oslo_policy]
    [oslo_reports]
    [oslo_versionedobjects]
    [ssl]
    --------------------------------------------------
    $ openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
    $ su -s /bin/sh -c "cinder-manage db sync" cinder
    
  4. Start the services, install LVM, and check

    $ systemctl restart openstack-nova-api.service
    $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
    $ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
    $ yum install -y lvm2
    $ systemctl start lvm2-lvmpolld.service 
    $ systemctl enable lvm2-lvmpolld.service 
    $ cinder service-list  # the services should show state up
    

(3) Installing cinder: storage node

  1. First add two disks

    ## Steps for adding the disks omitted
    ## To make the VM detect the new disks without a reboot:
    echo "- - -" > /sys/class/scsi_host/host0/scan # if that does not work, also try:
    echo "- - -" > /sys/class/scsi_host/host1/scan
    echo "- - -" > /sys/class/scsi_host/host2/scan
    ...
    
  2. Install lvm2, configure it, and create the volume groups

    $ yum install lvm2
    $ systemctl enable lvm2-lvmetad.service
    $ systemctl start lvm2-lvmetad.service
    $ pvcreate /dev/sdb /dev/sdc
    $ vgcreate cinder-ssd /dev/sdb
    $ vgcreate cinder-sata /dev/sdc
    $ vim /etc/lvm/lvm.conf 
    --------------------------------------------------
    filter = [ "a/sdb/", "a/sdc/", "r/.*/" ]   # around line 130
    --------------------------------------------------
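    To confirm LVM sees the disks and groups:

    $ pvs  # /dev/sdb and /dev/sdc should be listed as physical volumes
    $ vgs  # cinder-ssd and cinder-sata should be listed with their sizes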
    
  3. Install cinder and its dependencies, configure, and start

    $ yum install openstack-cinder targetcli python-keystone -y
    $ cp /etc/cinder/cinder.conf{,.bak}
    $ grep -Ev "^$|#" /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
    $ vim /etc/cinder/cinder.conf
    --------------------------------------------------
    [DEFAULT]
    rpc_backend = rabbit
    auth_strategy = keystone
    my_ip = 192.168.80.182
    enabled_backends = ssd,sata  # must match the [ssd] and [sata] sections below
    glance_api_servers = http://osacm:9292
    [BACKEND]
    [BRCD_FABRIC_EXAMPLE]
    [CISCO_FABRIC_EXAMPLE]
    [COORDINATION]
    [FC-ZONE-MANAGER]
    [KEYMGR]
    [cors]
    [cors.subdomain]
    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@osacn/cinder
    [keystone_authtoken]
    auth_uri = http://osacn:5000
    auth_url = http://osacn:35357
    memcached_servers = osacn:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = cinder
    password = CINDER_PASS
    [matchmaker_redis]
    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    rabbit_host = osacn
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS
    [oslo_middleware]
    [oslo_policy]
    [oslo_reports]
    [oslo_versionedobjects]
    [ssl]
    [ssd]  # matches enabled_backends
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-ssd  # the volume group created earlier
    iscsi_protocol = iscsi
    iscsi_helper = lioadm
    volume_backend_name = ssd  # backend label, so volumes can later target a specific backend
    [sata]  # matches enabled_backends
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-sata  # the volume group created earlier
    iscsi_protocol = iscsi
    iscsi_helper = lioadm
    volume_backend_name = sata
    --------------------------------------------------
    $ systemctl enable openstack-cinder-volume.service target.service
    $ systemctl start openstack-cinder-volume.service target.service
    
  4. Verify

    ## On the control node
    $ cinder service-list  # three services should be up
    ## In the web UI you should be able to create volumes; on the storage node, lvs shows the logical volumes backing them.
    

13. Adding a flat network segment

  1. Add a NIC (it must not be in bridged mode!!)

  2. Edit the configuration files

    ## Control node:
    $ vim /etc/neutron/plugins/ml2/ml2_conf.ini
    --------------------------------------------------
    ...
    [ml2_type_flat]
    flat_networks = provider,net172_16
    ...
    --------------------------------------------------
    $ vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    --------------------------------------------------
    ...
    [linux_bridge]
    physical_interface_mappings = provider:eno33554984,net172_16:brqe694a207-33
    ...
    --------------------------------------------------
    $ systemctl restart neutron-server.service neutron-linuxbridge-agent.service
    ## Compute node
    $ vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    --------------------------------------------------
    ...
    [linux_bridge]
    physical_interface_mappings = provider:eno33554984,net172_16:brqe694a207-33
    ...
    --------------------------------------------------
    $ systemctl restart neutron-linuxbridge-agent.service 
    
  3. Create the network

    $ neutron net-create --shared --provider:physical_network net172_16 --provider:network_type flat codefunny
    $ neutron subnet-create --name codebadny   --allocation-pool start=172.16.0.200,end=172.16.0.250   --dns-nameserver 114.114.114.114 --gateway 172.16.0.2   codefunny 172.16.0.0/24
    

14. Using NFS as a cinder backend

  1. On the control node, install nfs-utils, edit /etc/exports, create the directory, and restart the NFS service (see the sketch after the config below)

    ## /etc/exports:
    /data 192.168.80.0/24(rw,async,no_root_squash,no_all_squash)
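    A sketch of the full server-side sequence described above (assuming the CentOS 7 service names rpcbind and nfs-server):

    $ yum install -y nfs-utils
    $ mkdir -p /data
    $ vim /etc/exports   # add the /data line shown above
    $ systemctl restart rpcbind nfs-server
    $ systemctl enable rpcbind nfs-server
    $ showmount -e 192.168.80.181  # '/data 192.168.80.0/24' should be listed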
    
  2. On the storage node, edit cinder.conf, create the shares file, and restart the cinder volume service

    $ vim /etc/cinder/cinder.conf
    --------------------------------------------------
    [DEFAULT]
    ...
    enabled_backends = ssd,sata,nfs
    ...
    [nfs]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/nfs_shares
    volume_backend_name = nfs
    --------------------------------------------------
    $ vim /etc/cinder/nfs_shares
    --------------------------------------------------
    192.168.80.181:/data
    --------------------------------------------------
    $ systemctl restart openstack-cinder-volume.service 
    
  3. Verify

    $ cinder service-list  # a cinder-volume entry for the nfs backend should be up
    

15. Making the control node double as a compute node

  1. Install the packages and edit the config

    $ yum install -y openstack-nova-compute
    $ vim /etc/nova/nova.conf
    --------------------------------------------------
    ...
    [libvirt]
    cpu_mode = none
    virt_type = qemu
    ...
    [vnc]
    enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = $my_ip
    novncproxy_base_url = http://osacn:6080/vnc_auto.html
    ...
    --------------------------------------------------
    $ systemctl start libvirtd.service openstack-nova-compute.service 
    $ systemctl enable libvirtd.service openstack-nova-compute.service
    $ nova service-list
    

16. Cold migration of instances

  1. Set up passwordless SSH for the nova user between the nodes taking part in migration (a connectivity check follows the listing below)

    ## On one host
    $ usermod -s /bin/bash nova
    $ su - nova
    -bash-4.2$ cp /etc/skel/.bash* .
    -bash-4.2$ logout
    $ su - nova
    [nova@osacm ~]$ ssh-keygen -t rsa -q -N ''
    [nova@osacm ~]$ cd .ssh/
    [nova@osacm .ssh]$ cp -fa id_rsa.pub authorized_keys
    [nova@osacm ~]$ scp -rp .ssh [email protected]:`pwd`
    ## On the other host
    $ usermod -s /bin/bash nova
    [root@osacn ~]# su - nova
    -bash-4.2$ logout
    $ chown -R nova:nova /var/lib/nova/.ssh/
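    To confirm the nova user can now reach the peer without a password (osacn/osacm as set up earlier):

    [nova@osacm ~]$ ssh [email protected] hostname  # should print the peer's hostname with no password prompt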
    
  2. Edit the configuration files and restart the services

    ## Compute node
    $ vim /etc/nova/nova.conf
    --------------------------------------------------
    ...
    [DEFAULT]
    allow_resize_to_same_host = True
    --------------------------------------------------
    $ systemctl restart openstack-nova-compute.service 
    ## Control node
    $ vim /etc/nova/nova.conf
    --------------------------------------------------
    ...
    [DEFAULT]
    scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
    ...
    --------------------------------------------------
    $ systemctl restart openstack-nova-scheduler.service
    
  3. Test from the web UI
