Deploying OpenStack Queens with kolla-ansible (repost)

1. Lab environment:

  • Three hosts running CentOS 7 minimal, each with 64 GB of RAM, 800 GB + 1 TB × 3 disks (the 1 TB disks are set aside for the later Ceph deployment), and four gigabit NICs:

    Purpose                 NIC        IP range
    control network         enp2s0f0   192.168.118.0/24
    openstack external      enp2s0f1   no IP
    neutron vxlan tunnel    enp2s0f2   10.0.1.0/24
    ceph cluster backend    enp2s0f3   10.0.0.0/24
  • Host network plan:

    host                  IP address           remark
    controller203         192.168.118.203      1
    compute204            192.168.118.204      2
    compute205            192.168.118.205      3
    kolla                 192.168.118.212
    virtual IP            192.168.118.209
    virtual address pool  192.168.118.216-220


2. Controller and compute node initialization:

  • Initialize every machine with the following script (the kolla host is node 0) by running `sh initnode.sh n`, where n is the index of the host; a sketch of the nodes file the script reads follows this list:

    #!/usr/bin/bash
    
    # abort if the nodes description file is missing
    if ! test -f /openstack/nodes
    then
    	exit 1
    fi
    
    systemctl stop firewalld && systemctl disable firewalld
    yum update -y
    yum install -y wget vim net-tools
    wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum install -y docker-ce
    mkdir -pv /etc/docker
    systemctl restart docker && systemctl status docker
    
    # set hostname
    kolla=`sed '/^kolla=/!d;s/.*=//' /openstack/nodes`
    if [ $1 -eq 0 ] ; then
    	echo "kolla" > /etc/hostname
    	hostname kolla
    	echo -e "\n$kolla\tkolla" >> /etc/hosts
    elif [ $1 -lt 3 ] ; then
    	hostname controller0${1}
    	echo "controller0${1}" > /etc/hostname
    else
    	name=`printf "%03d" $1`
    	hostname compute${name}
    	echo "compute${name}" > /etc/hostname
    fi
    
    # set hosts
    nodes=`sed '/^nodes=/!d;s/.*=//' /openstack/nodes`
    array=(${nodes//,/ })
    i=1
    for var in ${array[@]}
    do
    	if [ $i -lt 4 ]; then
    		echo -e "\n$var\tcontroller0$i"  >>  /etc/hosts
    	else
    		name=`printf "%03d" $i`
    		echo -e "\n$var\tcompute$name"  >>  /etc/hosts
    	fi
    	i=$((i+1))
    done
    
    reboot
    
  • What the per-node initialization covers:

    • Configure the NIC information
    • Disable the firewall
    • Install Docker
    • Set the hostname and add the hosts entries
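
The script above reads its addresses from /openstack/nodes, but the original post never shows that file. A minimal layout consistent with the sed filters in the script, using the addresses from the plan above, might look like this (treat it as an assumption):

```bash
# /openstack/nodes -- assumed format, derived from the sed patterns in initnode.sh
kolla=192.168.118.212
nodes=192.168.118.203,192.168.118.204,192.168.118.205
```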

3. kolla host configuration

Set up passwordless SSH between all the node hosts

Generate and store the key

ssh-keygen
pub_key=`cat ~/.ssh/id_rsa.pub`
echo "$pub_key root@kolla" >> ~/.ssh/authorized_keys
echo "$pub_key root@controller01" >> ~/.ssh/authorized_keys
echo "$pub_key root@controller02" >> ~/.ssh/authorized_keys
echo "$pub_key root@controller03" >> ~/.ssh/authorized_keys
#echo "$pub_key root@compute001" >> ~/.ssh/authorized_keys
#echo "$pub_key root@compute002" >> ~/.ssh/authorized_keys

Distribute the authorized_keys file to each host's ~/.ssh/ directory

scp  ~/.ssh/authorized_keys  root@controller01:~/.ssh/
scp  ~/.ssh/authorized_keys  root@controller02:~/.ssh/
scp  ~/.ssh/authorized_keys  root@controller03:~/.ssh/
#scp  ~/.ssh/authorized_keys  root@compute001:~/.ssh/
#scp  ~/.ssh/authorized_keys  root@compute002:~/.ssh/
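
An equivalent way to get the same passwordless login, sketched here as an alternative rather than what the original author ran, is ssh-copy-id:

```bash
# Push the local public key to each node (extend the list with compute001/compute002 if used)
for host in controller01 controller02 controller03; do
    ssh-copy-id root@$host
done
```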

Configure the Docker registry:

Configure China-based registry mirrors:

[root@kolla ~]# mkdir -p /etc/docker
[root@kolla ~]# vim /etc/docker/daemon.json
{
	"registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com",
    "https://cr.console.aliyun.com/",
    "http://f2d6cb40.m.daocloud.io"
   	 ]
}

Start Docker

[root@kolla ~]# systemctl daemon-reload && systemctl enable	docker && systemctl restart docker

Check that the mirror configuration works

[root@kolla ~]# docker pull hello-world
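
Besides pulling a test image, the mirror list the daemon actually loaded can be checked directly; this is a supplementary check, not part of the original post:

```bash
# "Registry Mirrors" in the output should list the entries from /etc/docker/daemon.json
docker info | grep -A 5 'Registry Mirrors'
```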

Install dependencies

Install pip and upgrade it

[root@kolla ~]# yum install epel-release -y
[root@kolla ~]# yum install python-pip -y
[root@kolla ~]# pip install -U pip

Change the pip index source

[root@kolla ~]# mkdir ~/.pip
[root@kolla ~]# vim ~/.pip/pip.conf
[global]
trusted-host = pypi.douban.com
index-url = http://pypi.douban.com/simple

Install the other required packages

[root@kolla ~]# yum install python-devel libffi-devel gcc openssl-devel libselinux-python -y

Install and configure Ansible:

Install with pip first and then with yum, to avoid some Python packages being left at versions that are too old

[root@kolla ~]# pip install ansible
[root@kolla ~]# yum install ansible -y

Add the following to the /etc/ansible/ansible.cfg configuration file:

[defaults]
host_key_checking=False
pipelining=True
forks=100

Install and configure kolla-ansible:

Install kolla-ansible with pip:

pip install kolla-ansible
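
Note that pip installs the newest kolla-ansible by default, which may target a release newer than Queens. If the playbooks should match openstack_release: "queens", one option (an assumption, not something the original post does) is to pin to the 6.x series, which is the Queens branch of kolla-ansible:

```bash
# Pin kolla-ansible to the Queens (6.x) series instead of the latest release
pip install "kolla-ansible>=6.0.0,<7.0.0"
```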

Copy the global.yml and passwords.yml files to the /etc/kolla directory:

cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/

Copy the all-in-one and multinode inventory files to the current working directory:

cp /usr/share/kolla-ansible/ansible/inventory/* .

Modify the global.yml file: [global.yml](http://paste.ubuntu.org.cn/4360073)
---
# You can use this file to override _any_ variable throughout Kolla.
# Additional options can be found in the
# 'kolla-ansible/ansible/group_vars/all.yml' file. Default value of all the
# commented parameters are shown here, To override the default value uncomment
# the parameter and change its value.
 
###############
# Kolla options
###############
# Valid options are [ COPY_ONCE, COPY_ALWAYS ]
#config_strategy: "COPY_ALWAYS"
 
# Valid options are ['centos', 'debian', 'oraclelinux', 'rhel', 'ubuntu']
kolla_base_distro: "centos"
 
# Valid options are [ binary, source ]
kolla_install_type: "source"
 
# Valid option is Docker repository tag
openstack_release: "queens"
 
# Location of configuration overrides
#node_custom_config: "/etc/kolla/config"
 
# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. If you want to run an
# All-In-One without haproxy and keepalived, you can set enable_haproxy to no
# in "OpenStack options" section, and set this value to the IP of your
# 'network_interface' as set in the Networking section below.
kolla_internal_vip_address: "192.168.118.209"
 
# This is the DNS name that maps to the kolla_internal_vip_address VIP. By
# default it is the same as kolla_internal_vip_address.
#kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
 
# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. It defaults to the
# kolla_internal_vip_address, allowing internal and external communication to
# share the same address.  Specify a kolla_external_vip_address to separate
# internal and external requests between two VIPs.
#kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
 
# The Public address used to communicate with OpenStack as set in the public_url
# for the endpoints that will be created. This DNS name should map to
# kolla_external_vip_address.
#kolla_external_fqdn: "{{ kolla_external_vip_address }}"
 
################
# Docker options
################
# Below is an example of a private repository with authentication. Note the
# Docker registry password can also be set in the passwords.yml file.
 
docker_registry: "192.168.118.212:4000"
#docker_namespace: "companyname"
#docker_registry_username: "sam"
#docker_registry_password: "correcthorsebatterystaple"
 
###################
# Messaging options
###################
# Below is an example of an separate backend that provides brokerless
# messaging for oslo.messaging RPC communications
 
#om_rpc_transport: "amqp"
#om_rpc_user: "{{ qdrouterd_user }}"
#om_rpc_password: "{{ qdrouterd_password }}"
#om_rpc_port: "{{ qdrouterd_port }}"
#om_rpc_group: "qdrouterd"
 
 
##############################
# Neutron - Networking Options
##############################
# This interface is what all your api services will be bound to by default.
# Additionally, all vxlan/tunnel and storage network traffic will go over this
# interface by default. This interface must contain an IPv4 address.
# It is possible for hosts to have non-matching names of interfaces - these can
# be set in an inventory file per host or per group or stored separately, see
#     http://docs.ansible.com/ansible/intro_inventory.html
# Yet another way to workaround the naming problem is to create a bond for the
# interface on all hosts and give the bond name here. Similar strategy can be
# followed for other types of interfaces.
network_interface: "enp0s31f6"
 
# These can be adjusted for even more customization. The default is the same as
# the 'network_interface'. These interfaces must contain an IPv4 address.
#kolla_external_vip_interface: "{{ network_interface }}"
#api_interface: "{{ network_interface }}"
#storage_interface: "{{ network_interface }}"
#cluster_interface: "{{ network_interface }}"
#tunnel_interface: "{{ network_interface }}"
#dns_interface: "{{ network_interface }}"
 
# This is the raw interface given to neutron as its external network port. Even
# though an IP address can exist on this interface, it will be unusable in most
# configurations. It is recommended this interface not be configured with any IP
# addresses for that reason.
#neutron_external_interface: "eth1"
 
# Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, opendaylight ]
#neutron_plugin_agent: "openvswitch"
 
 
####################
# keepalived options
####################
# Arbitrary unique number from 0..255
#keepalived_virtual_router_id: "51"
 
 
#############
# TLS options
#############
# To provide encryption and authentication on the kolla_external_vip_interface,
# TLS can be enabled.  When TLS is enabled, certificates must be provided to
# allow clients to perform authentication.
#kolla_enable_tls_external: "no"
#kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/haproxy.pem"
 
 
##############
# OpenDaylight
##############
#enable_opendaylight_qos: "no"
#enable_opendaylight_l3: "yes"
 
###################
# OpenStack options
###################
# Use these options to set the various log levels across all OpenStack projects
# Valid options are [ True, False ]
#openstack_logging_debug: "False"
 
# Valid options are [ none, novnc, spice, rdp ]
#nova_console: "novnc"
 
# OpenStack services can be enabled or disabled with these options
enable_aodh: "yes"
enable_barbican: "yes"
enable_blazar: "yes"
enable_ceilometer: "yes"
enable_central_logging: "yes"
enable_ceph: "yes"
enable_ceph_mds: "no"
enable_ceph_rgw: "no"
enable_ceph_nfs: "no"
enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_cinder_backend_hnas_iscsi: "no"
enable_cinder_backend_hnas_nfs: "no"
enable_cinder_backend_iscsi: "no"
enable_cinder_backend_lvm: "no"
enable_cinder_backend_nfs: "no"
enable_cloudkitty: "yes"
enable_collectd: "yes"
enable_congress: "yes"
enable_designate: "yes"
enable_destroy_images: "yes"
enable_etcd: "yes"
enable_fluentd: "yes"
enable_freezer: "yes"
enable_gnocchi: "yes"
enable_grafana: "yes"
enable_haproxy: "yes"
enable_heat: "yes"
enable_horizon: "yes"
enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
enable_horizon_designate: "{{ enable_designate | bool }}"
enable_horizon_freezer: "{{ enable_freezer | bool }}"
enable_horizon_ironic: "{{ enable_ironic | bool }}"
enable_horizon_karbor: "{{ enable_karbor | bool }}"
enable_horizon_magnum: "{{ enable_magnum | bool }}"
enable_horizon_manila: "{{ enable_manila | bool }}"
enable_horizon_mistral: "{{ enable_mistral | bool }}"
enable_horizon_murano: "{{ enable_murano | bool }}"
enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
enable_horizon_sahara: "{{ enable_sahara | bool }}"
enable_horizon_searchlight: "{{ enable_searchlight | bool }}"
enable_horizon_senlin: "{{ enable_senlin | bool }}"
enable_horizon_solum: "{{ enable_solum | bool }}"
enable_horizon_tacker: "{{ enable_tacker | bool }}"
enable_horizon_trove: "{{ enable_trove | bool }}"
enable_horizon_watcher: "{{ enable_watcher | bool }}"
enable_horizon_zun: "{{ enable_zun | bool }}"
enable_hyperv: "yes"
enable_influxdb: "yes"
enable_ironic: "yes"
enable_ironic_pxe_uefi: "yes"
enable_karbor: "yes"
enable_kuryr: "yes"
enable_magnum: "yes"
enable_manila: "yes"
enable_manila_backend_generic: "yes"
enable_manila_backend_hnas: "yes"
enable_manila_backend_cephfs_native: "yes"
enable_manila_backend_cephfs_nfs: "yes"
enable_mistral: "yes"
enable_mongodb: "yes"
enable_murano: "yes"
enable_multipathd: "yes"
enable_neutron_bgp_dragent: "yes"
enable_neutron_dvr: "yes"
enable_neutron_lbaas: "yes"
enable_neutron_fwaas: "yes"
enable_neutron_qos: "yes"
enable_neutron_agent_ha: "yes"
enable_neutron_aas: "yes"
enable_neutron_sriov: "yes"
enable_neutron_sfc: "yes"
enable_nova_fake: "yes"
enable_nova_serialconsole_proxy: "yes"
enable_octavia: "yes"
enable_opendaylight: "yes"
enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}"
enable_ovs_dpdk: "no"
enable_osprofiler: "yes"
enable_panko: "yes"
enable_qdrouterd: "yes"
enable_rally: "yes"
enable_redis: "yes"
enable_sahara: "yes"
enable_searchlight: "yes"
enable_senlin: "yes"
enable_skydive: "yes"
enable_solum: "yes"
enable_swift: "no"
enable_telegraf: "yes"
enable_tacker: "yes"
enable_tempest: "yes"
enable_trove: "yes"
enable_vitrage: "yes"
enable_vmtp: "yes"
enable_watcher: "yes"
enable_zun: "no"
 
##############
# Ceph options
##############
# Ceph can be setup with a caching to improve performance. To use the cache you
# must provide separate disks than those for the OSDs
#ceph_enable_cache: "no"
 
# Set to no if using external Ceph without cephx.
#external_ceph_cephx_enabled: "yes"
 
# Ceph is not able to determine the size of a cache pool automatically,
# so the configuration on the absolute size is required here, otherwise the flush/evict will not work.
#ceph_target_max_bytes: ""
#ceph_target_max_objects: ""
 
# Valid options are [ forward, none, writeback ]
#ceph_cache_mode: "writeback"
 
# A requirement for using the erasure-coded pools is you must setup a cache tier
# Valid options are [ erasure, replicated ]
#ceph_pool_type: "replicated"
 
# Integrate ceph rados object gateway with openstack keystone
#enable_ceph_rgw_keystone: "no"
 
# Set the pgs and pgps for pool
#ceph_pool_pg_num: 128
#ceph_pool_pgp_num: 128
 
#############################
# Keystone - Identity Options
#############################
 
# Valid options are [ uuid, fernet ]
#keystone_token_provider: 'uuid'
 
# Interval to rotate fernet keys by (in seconds). Must be an interval of
# 60(1 min), 120(2 min), 180(3 min), 240(4 min), 300(5 min), 360(6 min),
# 600(10 min), 720(12 min), 900(15 min), 1200(20 min), 1800(30 min),
# 3600(1 hour), 7200(2 hour), 10800(3 hour), 14400(4 hour), 21600(6 hour),
# 28800(8 hour), 43200(12 hour), 86400(1 day), 604800(1 week).
#fernet_token_expiry: 86400
 
 
########################
# Glance - Image Options
########################
# Configure image backend.
#glance_backend_file: "yes"
#glance_backend_ceph: "no"
#glance_backend_vmware: "no"
#glance_backend_swift: "no"
 
 
##################
# Barbican options
##################
# Valid options are [ simple_crypto, p11_crypto ]
#barbican_crypto_plugin: "simple_crypto"
#barbican_library_path: "/usr/lib/libCryptoki2_64.so"
 
################
## Panko options
################
# Valid options are [ mongodb, mysql ]
#panko_database_type: "mysql"
 
#################
# Gnocchi options
#################
# Valid options are [ file, ceph ]
#gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
 
 
################################
# Cinder - Block Storage Options
################################
# Enable / disable Cinder backends
#cinder_backend_ceph: "{{ enable_ceph }}"
#cinder_backend_vmwarevc_vmdk: "no"
#cinder_volume_group: "cinder-volumes"
 
# Valid options are [ nfs, swift, ceph ]
#cinder_backup_driver: "ceph"
#cinder_backup_share: ""
#cinder_backup_mount_options_nfs: ""
 
 
###################
# Designate options
###################
# Valid options are [ bind9 ]
#designate_backend: "bind9"
#designate_ns_record: "sample.openstack.org"
 
########################
# Nova - Compute Options
########################
#nova_backend_ceph: "{{ enable_ceph }}"
 
# Valid options are [ qemu, kvm, vmware, xenapi ]
#nova_compute_virt_type: "kvm"
 
# The number of fake driver per compute node
#num_nova_fake_per_node: 5
 
#################
# Hyper-V options
#################
# Hyper-V can be used as hypervisor
#hyperv_username: "user"
#hyperv_password: "password"
#vswitch_name: "vswitch"
# URL from which Nova Hyper-V MSI is downloaded
#nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
 
#############################
# Horizon - Dashboard Options
#############################
#horizon_backend_database: "{{ enable_murano | bool }}"
 
#############################
# Ironic options
#############################
#ironic_dnsmasq_dhcp_range:
 
######################################
# Manila - Shared File Systems Options
######################################
# HNAS backend configuration
#hnas_ip:
#hnas_user:
#hnas_password:
#hnas_evs_id:
#hnas_evs_ip:
#hnas_file_system_name:
 
################################
# Swift - Object Storage Options
################################
# Swift expects block devices to be available for storage. Two types of storage
# are supported: 1 - storage device with a special partition name and filesystem
# label, 2 - unpartitioned disk  with a filesystem. The label of this filesystem
# is used to detect the disk which Swift will be using.
 
# Swift support two matching modes, valid options are [ prefix, strict ]
#swift_devices_match_mode: "strict"
 
# This parameter defines matching pattern: if "strict" mode was selected,
# for swift_devices_match_mode then swift_device_name should specify the name of
# the special swift partition for example: "KOLLA_SWIFT_DATA", if "prefix" mode was
# selected then swift_devices_name should specify a pattern which would match to
# filesystems' labels prepared for swift.
#swift_devices_name: "KOLLA_SWIFT_DATA"
 
 
################################################
# Tempest - The OpenStack Integration Test Suite
################################################
# following value must be set when enable tempest
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:
 
# tempest_image_alt_id: "{{ tempest_image_id }}"
# tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}"
 
###################################
# VMware - OpenStack VMware support
###################################
#vmware_vcenter_host_ip:
#vmware_vcenter_host_username:
#vmware_vcenter_host_password:
#vmware_datastore_name:
#vmware_vcenter_name:
#vmware_vcenter_cluster_name:
 
#######################################
# XenAPI - Support XenAPI for XenServer
#######################################
# XenAPI driver use HIMN(Host Internal Management Network)
# to communicate with XenServer host.
#xenserver_himn_ip:
#xenserver_username:
#xenserver_connect_protocol:

Pull the images

kolla-ansible pull -vvv
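
A quick way to see what the pull actually fetched (the exact count depends on the enabled services):

```bash
# Count and sample the Queens-tagged images pulled by kolla-ansible
docker images | grep queens | wc -l
docker images | grep queens | head
```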

Modify the global.yml file again (the image set pulled with the previous configuration was missing nova-compute and other images)

global.yml

# Location of configuration overrides
node_custom_config: "/etc/kolla/config"
 
# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. If you want to run an
# All-In-One without haproxy and keepalived, you can set enable_haproxy to no
# in "OpenStack options" section, and set this value to the IP of your
# 'network_interface' as set in the Networking section below.
kolla_internal_vip_address: "192.168.216.160"
 
################
# Docker options
################
# Below is an example of a private repository with authentication. Note the
# Docker registry password can also be set in the passwords.yml file.
 
#docker_registry: "kolla:4000"
#docker_namespace: "kolla"
#docker_registry_username: "sam"
#docker_registry_password: "correcthorsebatterystaple"
 
##############################
# Neutron - Networking Options
##############################
 
# This is the raw interface given to neutron as its external network port. Even
# though an IP address can exist on this interface, it will be unusable in most
# configurations. It is recommended this interface not be configured with any IP
# addresses for that reason.
#neutron_external_interface: "ens35"
 
# OpenStack services can be enabled or disabled with these options
#enable_aodh: "no"
#enable_barbican: "no"
#enable_blazar: "no"
enable_ceilometer: "yes"
#enable_central_logging: "no"
#enable_ceph: "no"
#enable_ceph_mds: "no"
#enable_ceph_rgw: "no"
#enable_ceph_nfs: "no"
enable_chrony: "yes"
enable_cinder: "yes"
#enable_cinder_backup: "yes"
#enable_cinder_backend_hnas_iscsi: "no"
#enable_cinder_backend_hnas_nfs: "no"
#enable_cinder_backend_iscsi: "no"
enable_cinder_backend_lvm: "yes"
#enable_cinder_backend_nfs: "no"
#enable_cloudkitty: "no"
#enable_collectd: "no"
#enable_congress: "no"
#enable_designate: "no"
#enable_destroy_images: "no"
#enable_etcd: "no"
#enable_fluentd: "yes"
#enable_freezer: "no"
enable_gnocchi: "yes"
#enable_grafana: "no"
#enable_haproxy: "yes"
#enable_heat: "yes"
#enable_horizon: "yes"
#enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
#enable_horizon_designate: "{{ enable_designate | bool }}"
#enable_horizon_freezer: "{{ enable_freezer | bool }}"
#enable_horizon_ironic: "{{ enable_ironic | bool }}"
#enable_horizon_karbor: "{{ enable_karbor | bool }}"
#enable_horizon_magnum: "{{ enable_magnum | bool }}"
#enable_horizon_manila: "{{ enable_manila | bool }}"
#enable_horizon_mistral: "{{ enable_mistral | bool }}"
#enable_horizon_murano: "{{ enable_murano | bool }}"
#enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
#enable_horizon_sahara: "{{ enable_sahara | bool }}"
#enable_horizon_searchlight: "{{ enable_searchlight | bool }}"
#enable_horizon_senlin: "{{ enable_senlin | bool }}"
#enable_horizon_solum: "{{ enable_solum | bool }}"
#enable_horizon_tacker: "{{ enable_tacker | bool }}"
#enable_horizon_trove: "{{ enable_trove | bool }}"
#enable_horizon_watcher: "{{ enable_watcher | bool }}"
#enable_horizon_zun: "{{ enable_zun | bool }}"
#enable_hyperv: "no"
#enable_influxdb: "no"
#enable_ironic: "no"
#enable_ironic_pxe_uefi: "no"
#enable_karbor: "no"
#enable_kuryr: "no"
#enable_magnum: "no"
#enable_manila: "no"
#enable_manila_backend_generic: "no"
#enable_manila_backend_hnas: "no"
#enable_manila_backend_cephfs_native: "no"
#enable_manila_backend_cephfs_nfs: "no"
#enable_mistral: "no"
#enable_mongodb: "no"
#enable_murano: "no"
#enable_multipathd: "no"
#enable_neutron_bgp_dragent: "no"
#enable_neutron_dvr: "no"
#enable_neutron_lbaas: "no"
#enable_neutron_fwaas: "no"
#enable_neutron_qos: "no"
#enable_neutron_agent_ha: "no"
#enable_neutron_aas: "no"
#enable_neutron_sriov: "no"
#enable_neutron_sfc: "no"
#enable_nova_fake: "no"
#enable_nova_serialconsole_proxy: "no"
#enable_octavia: "no"
#enable_opendaylight: "no"
#enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}"
#enable_ovs_dpdk: "no"
#enable_osprofiler: "no"
#enable_panko: "no"
#enable_qdrouterd: "no"
#enable_rally: "no"
#enable_redis: "no"
#enable_sahara: "no"
#enable_searchlight: "no"
#enable_senlin: "no"
#enable_skydive: "no"
#enable_solum: "no"
#enable_swift: "no"
#enable_telegraf: "no"
#enable_tacker: "no"
#enable_tempest: "no"
#enable_trove: "no"
#enable_vitrage: "no"
#enable_vmtp: "no"
#enable_watcher: "no"
#enable_zun: "no"
 
########################
# Nova - Compute Options
########################
#nova_backend_ceph: "{{ enable_ceph }}"
 
# Valid options are [ qemu, kvm, vmware, xenapi ]
nova_compute_virt_type: "kvm"

Pull the images

 kolla-ansible pull -vvv

Push the images to the local registry:

Configure Docker shared mounts:

[root@kolla ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@kolla ~]# vim /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
[root@kolla ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker

Start the registry container and map its port to 4000

[root@kolla /]# docker run -d --name registry --restart=always -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2.6.2

Modify the Docker service configuration to trust the local registry

[root@kolla /]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry kolla:4000
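
Editing the unit file works, but it is overwritten whenever the docker package is upgraded. An alternative, shown here as an assumption rather than what the original post does, is to declare the insecure registry in /etc/docker/daemon.json and leave the unit file alone:

```bash
# Merge "insecure-registries" into the existing daemon.json, then restart docker as below
python -c "
import json
path = '/etc/docker/daemon.json'
with open(path) as f:
    cfg = json.load(f)
cfg['insecure-registries'] = ['kolla:4000']
with open(path, 'w') as f:
    json.dump(cfg, f, indent=2)
"
```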

Restart the Docker service

systemctl daemon-reload && systemctl restart docker

Check that the registry service is working:

[root@kolla ~]# curl -X GET http://kolla:4000/v2/_catalog
{"repositories":[]}

Re-tag the images:

# Tag every kolla image so it points at the local registry on kolla:4000
# (skip the header line and the registry image itself)
for i in `docker images | grep -v registry | grep -v REPOSITORY | awk '{print $1}'`;
do 
	docker image tag $i:queens kolla:4000/$i:queens;
done

Push them to the local registry

# Push every re-tagged kolla:4000/... image to the local registry
for i in `docker images | grep kolla:4000 | awk '{print $1}'`;
do
	docker push $i:queens;
done

Check that the images were uploaded successfully:

curl -XGET http://kolla:4000/v2/_catalog
{
	"repositories": [
	"kolla/centos-source-aodh-api",
	"kolla/centos-source-aodh-evaluator",
	"kolla/centos-source-aodh-listener",
	"kolla/centos-source-aodh-notifier",
	"kolla/centos-source-barbican-api",
	"kolla/centos-source-barbican-keystone-listener",
	"kolla/centos-source-barbican-worker",
	"kolla/centos-source-blazar-api",
	"kolla/centos-source-blazar-manager",
	"kolla/centos-source-ceilometer-central",
	"kolla/centos-source-ceilometer-compute",
	"kolla/centos-source-ceilometer-notification",
	"kolla/centos-source-ceph-mds",
	"kolla/centos-source-ceph-mgr",
	"kolla/centos-source-ceph-mon",
	"kolla/centos-source-ceph-nfs",
	"kolla/centos-source-ceph-osd",
	"kolla/centos-source-ceph-rgw",
	"kolla/centos-source-chrony",
	"kolla/centos-source-cinder-api",
	"kolla/centos-source-cinder-backup",
	"kolla/centos-source-cinder-scheduler",
	"kolla/centos-source-cinder-volume",
	"kolla/centos-source-cloudkitty-api",
	"kolla/centos-source-cloudkitty-processor",
	"kolla/centos-source-collectd",
	"kolla/centos-source-congress-api",
	"kolla/centos-source-congress-datasource",
	"kolla/centos-source-congress-policy-engine",
	"kolla/centos-source-cron",
	"kolla/centos-source-designate-api",
	"kolla/centos-source-designate-backend-bind9",
	"kolla/centos-source-designate-central",
	"kolla/centos-source-designate-mdns",
	"kolla/centos-source-designate-producer",
	"kolla/centos-source-designate-sink",
	"kolla/centos-source-designate-worker",
	"kolla/centos-source-dnsmasq",
	"kolla/centos-source-elasticsearch",
	"kolla/centos-source-etcd",
	"kolla/centos-source-fluentd",
	"kolla/centos-source-freezer-api",
	"kolla/centos-source-glance-api",
	"kolla/centos-source-gnocchi-api",
	"kolla/centos-source-gnocchi-metricd",
	"kolla/centos-source-gnocchi-statsd",
	"kolla/centos-source-grafana",
	"kolla/centos-source-haproxy",
	"kolla/centos-source-heat-api",
	"kolla/centos-source-heat-api-cfn",
	"kolla/centos-source-heat-engine",
	"kolla/centos-source-horizon",
	"kolla/centos-source-influxdb",
	"kolla/centos-source-ironic-api",
	"kolla/centos-source-ironic-conductor",
	"kolla/centos-source-ironic-inspector",
	"kolla/centos-source-ironic-pxe",
	"kolla/centos-source-iscsid",
	"kolla/centos-source-karbor-api",
	"kolla/centos-source-karbor-operationengine",
	"kolla/centos-source-karbor-protection",
	"kolla/centos-source-keepalived",
	"kolla/centos-source-keystone",
	"kolla/centos-source-kibana",
	"kolla/centos-source-kolla-toolbox",
	"kolla/centos-source-kuryr-libnetwork",
	"kolla/centos-source-magnum-api",
	"kolla/centos-source-magnum-conductor",
	"kolla/centos-source-manila-api",
	"kolla/centos-source-manila-data",
	"kolla/centos-source-manila-scheduler",
	"kolla/centos-source-manila-share",
	"kolla/centos-source-mariadb",
	"kolla/centos-source-memcached",
	"kolla/centos-source-mistral-api",
	"kolla/centos-source-mistral-engine",
	"kolla/centos-source-mistral-executor",
	"kolla/centos-source-mongodb",
	"kolla/centos-source-multipathd",
	"kolla/centos-source-murano-api",
	"kolla/centos-source-murano-engine",
	"kolla/centos-source-neutron-bgp-dragent",
	"kolla/centos-source-neutron-dhcp-agent",
	"kolla/centos-source-neutron-l3-agent",
	"kolla/centos-source-neutron-lbaas-agent",
	"kolla/centos-source-neutron-metadata-agent",
	"kolla/centos-source-neutron-openvswitch-agent",
	"kolla/centos-source-neutron-server",
	"kolla/centos-source-neutron-server-opendaylight",
	"kolla/centos-source-neutron-sriov-agent",
	"kolla/centos-source-neutron-aas-agent",
	"kolla/centos-source-nova-api",
	"kolla/centos-source-nova-compute",
	"kolla/centos-source-nova-compute-ironic",
	"kolla/centos-source-nova-conductor",
	"kolla/centos-source-nova-consoleauth",
	"kolla/centos-source-nova-libvirt",
	"kolla/centos-source-nova-novncproxy",
	"kolla/centos-source-nova-placement-api",
	"kolla/centos-source-nova-scheduler"]
}

Modify the deployment configuration files

Modify the multinode file in the current directory: multinode

# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostname must be resolvable from your deployment host
controller01
controller02
controller03
 
# The above can also be specified as follows:
#control[01:03]     ansible_user=kolla
 
# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
controller01
controller02
controller03
 
# inner-compute is the groups of compute nodes which do not have
# external reachability
[inner-compute]
 
# external-compute is the groups of compute nodes which can reach
# outside
[external-compute]
compute01
compute02
 
[compute:children]
inner-compute
external-compute
 
[monitoring]
controller01
 
# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1
 
[storage]
compute01
compute02
 
[deployment]
localhost       ansible_connection=local
 
[baremetal:children]
control
network
compute
storage
monitoring
 
# You can explicitly specify which hosts run each project by updating the
# groups in the sections below. Common services are grouped together.
[chrony-server:children]
haproxy
 
[chrony:children]
control
network
compute
storage
monitoring
 
[collectd:children]
compute
 
[grafana:children]
monitoring
 
[etcd:children]
control
compute
 
[influxdb:children]
monitoring
 
[karbor:children]
control
 
[kibana:children]
control
 
[telegraf:children]
compute
control
monitoring
network
storage
 
[elasticsearch:children]
control
 
[haproxy:children]
network
 
[hyperv]
#hyperv_host
 
[hyperv:vars]
#ansible_user=user
#ansible_password=password
#ansible_port=5986
#ansible_connection=winrm
#ansible_winrm_server_cert_validation=ignore
 
[mariadb:children]
control
 
[rabbitmq:children]
control
 
[outward-rabbitmq:children]
control
 
[qdrouterd:children]
control
 
[mongodb:children]
control
 
[keystone:children]
control
 
[glance:children]
control
 
[nova:children]
control
 
[neutron:children]
network
 
[openvswitch:children]
network
compute
manila-share
 
[opendaylight:children]
network
 
[cinder:children]
control
 
[cloudkitty:children]
control
 
[freezer:children]
control
 
[memcached:children]
control
 
[horizon:children]
control
 
[swift:children]
control
 
[barbican:children]
control
 
[heat:children]
control
 
[murano:children]
control
 
[solum:children]
control
 
[ironic:children]
control
 
[ceph:children]
control
 
[magnum:children]
control
 
[sahara:children]
control
 
[mistral:children]
control
 
[manila:children]
control
 
[ceilometer:children]
control
 
[aodh:children]
control
 
[congress:children]
control
 
[panko:children]
control
 
[gnocchi:children]
control
 
[tacker:children]
control
 
[trove:children]
control
 
# Tempest
[tempest:children]
control
 
[senlin:children]
control
 
[vmtp:children]
control
 
[vitrage:children]
control
 
[watcher:children]
control
 
[rally:children]
control
 
[searchlight:children]
control
 
[octavia:children]
control
 
[designate:children]
control
 
[placement:children]
control
 
[bifrost:children]
deployment
 
[zun:children]
control
 
[skydive:children]
monitoring
 
[redis:children]
control
 
[blazar:children]
control
 
# Additional control implemented here. These groups allow you to control which
# services run on which hosts at a per-service level.
#
# Word of caution: Some services are required to run on the same host to
# function appropriately. For example, neutron-metadata-agent must run on the
# same host as the l3-agent and (depending on configuration) the dhcp-agent.
 
# Glance
[glance-api:children]
glance
 
[glance-registry:children]
glance
 
# Nova
[nova-api:children]
nova
 
[nova-conductor:children]
nova
 
[nova-consoleauth:children]
nova
 
[nova-novncproxy:children]
nova
 
[nova-scheduler:children]
nova
 
[nova-spicehtml5proxy:children]
nova
 
[nova-compute-ironic:children]
nova
 
[nova-serialproxy:children]
nova
 
# Neutron
[neutron-server:children]
control
 
[neutron-dhcp-agent:children]
neutron
 
[neutron-l3-agent:children]
neutron
 
[neutron-lbaas-agent:children]
neutron
 
[neutron-metadata-agent:children]
neutron
 
[neutron-aas-agent:children]
neutron
 
[neutron-bgp-dragent:children]
neutron
 
# Ceph
[ceph-mds:children]
ceph
 
[ceph-mgr:children]
ceph
 
[ceph-nfs:children]
ceph
 
[ceph-mon:children]
ceph
 
[ceph-rgw:children]
ceph
 
[ceph-osd:children]
storage
 
# Cinder
[cinder-api:children]
cinder
 
[cinder-backup:children]
storage
 
[cinder-scheduler:children]
cinder
 
[cinder-volume:children]
storage
 
# Cloudkitty
[cloudkitty-api:children]
cloudkitty
 
[cloudkitty-processor:children]
cloudkitty
 
# Freezer
[freezer-api:children]
freezer
 
# iSCSI
[iscsid:children]
compute
storage
ironic
 
[tgtd:children]
storage
 
# Karbor
[karbor-api:children]
karbor
 
[karbor-protection:children]
karbor
 
[karbor-operationengine:children]
karbor
 
# Manila
[manila-api:children]
manila
 
[manila-scheduler:children]
manila
 
[manila-share:children]
network
 
[manila-data:children]
manila
 
# Swift
[swift-proxy-server:children]
swift
 
[swift-account-server:children]
storage
 
[swift-container-server:children]
storage
 
[swift-object-server:children]
storage
 
# Barbican
[barbican-api:children]
barbican
 
[barbican-keystone-listener:children]
barbican
 
[barbican-worker:children]
barbican
 
# Heat
[heat-api:children]
heat
 
[heat-api-cfn:children]
heat
 
[heat-engine:children]
heat
 
# Murano
[murano-api:children]
murano
 
[murano-engine:children]
murano
 
# Ironic
[ironic-api:children]
ironic
 
[ironic-conductor:children]
ironic
 
[ironic-inspector:children]
ironic
 
[ironic-pxe:children]
ironic
 
# Magnum
[magnum-api:children]
magnum
 
[magnum-conductor:children]
magnum
 
# Sahara
[sahara-api:children]
sahara
 
[sahara-engine:children]
sahara
 
# Solum
[solum-api:children]
solum
 
[solum-worker:children]
solum
 
[solum-deployer:children]
solum
 
[solum-conductor:children]
solum
 
# Mistral
[mistral-api:children]
mistral
 
[mistral-executor:children]
mistral
 
[mistral-engine:children]
mistral
 
# Ceilometer
[ceilometer-central:children]
ceilometer
 
[ceilometer-notification:children]
ceilometer
 
[ceilometer-compute:children]
compute
 
# Aodh
[aodh-api:children]
aodh
 
[aodh-evaluator:children]
aodh
 
[aodh-listener:children]
aodh
 
[aodh-notifier:children]
aodh
 
# Congress
[congress-api:children]
congress
 
[congress-datasource:children]
congress
 
[congress-policy-engine:children]
congress
 
# Panko
[panko-api:children]
panko
 
# Gnocchi
[gnocchi-api:children]
gnocchi
 
[gnocchi-statsd:children]
gnocchi
 
[gnocchi-metricd:children]
gnocchi
 
# Trove
[trove-api:children]
trove
 
[trove-conductor:children]
trove
 
[trove-taskmanager:children]
trove
 
# Multipathd
[multipathd:children]
compute
 
# Watcher
[watcher-api:children]
watcher
 
[watcher-engine:children]
watcher
 
[watcher-applier:children]
watcher
 
# Senlin
[senlin-api:children]
senlin
 
[senlin-engine:children]
senlin
 
# Searchlight
[searchlight-api:children]
searchlight
 
[searchlight-listener:children]
searchlight
 
# Octavia
[octavia-api:children]
octavia
 
[octavia-health-manager:children]
octavia
 
[octavia-housekeeping:children]
octavia
 
[octavia-worker:children]
octavia
 
# Designate
[designate-api:children]
designate
 
[designate-central:children]
designate
 
[designate-producer:children]
designate
 
[designate-mdns:children]
network
 
[designate-worker:children]
designate
 
[designate-sink:children]
designate
 
[designate-backend-bind9:children]
designate
 
# Placement
[placement-api:children]
placement
 
# Zun
[zun-api:children]
zun
 
[zun-compute:children]
compute
 
# Skydive
[skydive-analyzer:children]
skydive
 
[skydive-agent:children]
compute
network
 
# Tacker
[tacker-server:children]
tacker
 
[tacker-conductor:children]
tacker
 
# Vitrage
[vitrage-api:children]
vitrage
 
[vitrage-notifier:children]
vitrage
 
[vitrage-graph:children]
vitrage
 
[vitrage-collector:children]
vitrage
 
[vitrage-ml:children]
vitrage
 
# Blazar
[blazar-api:children]
blazar
 
[blazar-manager:children]
blazar
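
Before handing this inventory to kolla-ansible, it is worth confirming that Ansible can reach every host it lists; the usual check is:

```bash
ansible -i ./multinode all -m ping
```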
 
  • Modify the /etc/kolla/global.yml file: global.yml
# Location of configuration overrides
node_custom_config: "/etc/kolla/config"
 
# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. If you want to run an
# All-In-One without haproxy and keepalived, you can set enable_haproxy to no
# in "OpenStack options" section, and set this value to the IP of your
# 'network_interface' as set in the Networking section below.
 
################
# Docker options
################
# Below is an example of a private repository with authentication. Note the
# Docker registry password can also be set in the passwords.yml file.
 
docker_registry: "kolla:4000"
docker_namespace: "kolla"
#docker_registry_username: "sam"
#docker_registry_password: "correcthorsebatterystaple"
 
##############################
# Neutron - Networking Options
##############################
# This interface is what all your api services will be bound to by default.
# Additionally, all vxlan/tunnel and storage network traffic will go over this
# interface by default. This interface must contain an IPv4 address.
# It is possible for hosts to have non-matching names of interfaces - these can
# be set in an inventory file per host or per group or stored separately, see
#     http://docs.ansible.com/ansible/intro_inventory.html
# Yet another way to workaround the naming problem is to create a bond for the
# interface on all hosts and give the bond name here. Similar strategy can be
# followed for other types of interfaces.
network_interface: "enp0s31f6"
 
# This is the raw interface given to neutron as its external network port. Even
# though an IP address can exist on this interface, it will be unusable in most
# configurations. It is recommended this interface not be configured with any IP
# addresses for that reason.
 
# Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, opendaylight ]
neutron_plugin_agent: "openvswitch"
 
 
####################
# keepalived options
####################
# Arbitrary unique number from 0..255
#keepalived_virtual_router_id: "51"
 
 
# Valid options are [ none, novnc, spice, rdp ]
#nova_console: "novnc"
 
# OpenStack services can be enabled or disabled with these options
#enable_aodh: "no"
#enable_barbican: "no"
#enable_blazar: "no"
enable_ceilometer: "yes"
enable_central_logging: "yes"
#enable_ceph: "no"
#enable_ceph_mds: "no"
#enable_ceph_rgw: "no"
#enable_ceph_nfs: "no"
enable_chrony: "yes"
enable_cinder: "yes"
#enable_cinder_backup: "yes"
#enable_cinder_backend_hnas_iscsi: "no"
#enable_cinder_backend_hnas_nfs: "no"
#enable_cinder_backend_iscsi: "no"
enable_cinder_backend_lvm: "yes"
#enable_cinder_backend_nfs: "no"
#enable_cloudkitty: "no"
#enable_collectd: "no"
#enable_congress: "no"
#enable_designate: "no"
#enable_destroy_images: "no"
#enable_etcd: "no"
#enable_fluentd: "yes"
#enable_freezer: "no"
enable_gnocchi: "yes"
#enable_grafana: "no"
#enable_haproxy: "yes"
#enable_heat: "yes"
#enable_horizon: "yes"
#enable_hyperv: "no"
#enable_influxdb: "no"
#enable_ironic: "no"
#enable_ironic_pxe_uefi: "no"
#enable_karbor: "no"
#enable_kuryr: "no"
#enable_magnum: "no"
#enable_manila: "no"
#enable_manila_backend_generic: "no"
#enable_manila_backend_hnas: "no"
#enable_manila_backend_cephfs_native: "no"
#enable_manila_backend_cephfs_nfs: "no"
#enable_mistral: "no"
#enable_mongodb: "no"
#enable_murano: "no"
#enable_multipathd: "no"
#enable_neutron_bgp_dragent: "no"
#enable_neutron_dvr: "no"
#enable_neutron_lbaas: "no"
#enable_neutron_fwaas: "no"
#enable_neutron_qos: "no"
#enable_neutron_agent_ha: "no"
#enable_neutron_aas: "no"
#enable_neutron_sriov: "no"
#enable_neutron_sfc: "no"
#enable_nova_fake: "no"
#enable_nova_serialconsole_proxy: "no"
#enable_octavia: "no"
#enable_opendaylight: "no"
#enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}"
#enable_ovs_dpdk: "no"
#enable_osprofiler: "no"
#enable_panko: "no"
#enable_qdrouterd: "no"
#enable_rally: "no"
#enable_redis: "no"
#enable_sahara: "no"
#enable_searchlight: "no"
#enable_senlin: "no"
#enable_skydive: "no"
#enable_solum: "no"
#enable_swift: "no"
#enable_telegraf: "no"
#enable_tacker: "no"
#enable_tempest: "no"
#enable_trove: "no"
#enable_vitrage: "no"
#enable_vmtp: "no"
#enable_watcher: "no"
#enable_zun: "no"
 
########################
# Glance - Image Options
########################
# Configure image backend.
#glance_backend_file: "no"
glance_backend_ceph: "yes"
#glance_backend_vmware: "no"
#glance_backend_swift: "no"
 
 
# Nova - Compute Options
########################
#nova_backend_ceph: "{{ enable_ceph }}"
 
# Valid options are [ qemu, kvm, vmware, xenapi ]
nova_compute_virt_type: "kvm"
 

Deployment:

Generate the random passwords file:

kolla-genpwd
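
kolla-genpwd fills /etc/kolla/passwords.yml with random values; the generated admin password (changed in the next step) can be looked up with:

```bash
grep keystone_admin_password /etc/kolla/passwords.yml
```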

Change the admin password used to log in to Horizon:

[root@kolla ~]# vim /etc/kolla/passwords.yml
keepalived_password: mFbTVxF6XyrrT8NqaN5UpFB098GEXuZ9oQyfQI14
keystone_admin_password: 123  # change this
keystone_database_password: C4EzIx0zhoFjsG9dA9TBRaZfbFIdT3f9sCe7jGyg

Bootstrap the dependency software on each node:

kolla-ansible -i ./multinode bootstrap-servers
PLAY RECAP *************************************************************************************************************************************************************
compute01                  : ok=38   changed=7    unreachable=0    failed=0   
compute02                  : ok=38   changed=7    unreachable=0    failed=0   
controller01               : ok=38   changed=7    unreachable=0    failed=0   
controller02               : ok=39   changed=17   unreachable=0    failed=0   
controller03               : ok=38   changed=7    unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0 

Run the pre-deployment checks:

kolla-ansible -i ./multinode prechecks
PLAY RECAP ************************************************************************************************************************************************************
compute01                  : ok=26   changed=1    unreachable=0    failed=0   
compute02                  : ok=26   changed=1    unreachable=0    failed=0   
controller01               : ok=91   changed=1    unreachable=0    failed=0   
controller02               : ok=87   changed=1    unreachable=0    failed=0   
controller03               : ok=87   changed=1    unreachable=0    failed=0   
localhost                  : ok=6    changed=1    unreachable=0    failed=0 

Cinder reports an error during the prechecks

TASK [cinder : Checking LVM volume group exists for Cinder] ***********************************************************************************************************
skipping: [controller01]
skipping: [controller02]
skipping: [controller03]
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|failed` use `result is failed`. This feature will be removed in version 2.9. 
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
fatal: [compute01]: FAILED! => {"changed": false, "cmd": ["vgs", "cinder-volumes"], "delta": "0:00:00.009794", "end": "2018-10-13 18:33:13.868282", "failed_when_result": true, "msg": "non-zero return code", "rc": 5, "start": "2018-10-13 18:33:13.858488", "stderr": "  Volume group \"cinder-volumes\" not found\n  Cannot process volume group cinder-volumes", "stderr_lines": ["  Volume group \"cinder-volumes\" not found", "  Cannot process volume group cinder-volumes"], "stdout": "", "stdout_lines": []}
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|failed` use `result is failed`. This feature will be removed in version 2.9. 
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
fatal: [compute02]: FAILED! => {"changed": false, "cmd": ["vgs", "cinder-volumes"], "delta": "0:00:00.010114", "end": "2018-10-13 18:33:13.860281", "failed_when_result": true, "msg": "non-zero return code", "rc": 5, "start": "2018-10-13 18:33:13.850167", "stderr": "  Volume group \"cinder-volumes\" not found\n  Cannot process volume group cinder-volumes", "stderr_lines": ["  Volume group \"cinder-volumes\" not found", "  Cannot process volume group cinder-volumes"], "stdout": "", "stdout_lines": []}

Solution:

[root@compute02 .ssh]# vgdisplay
  --- Volume group ---
  VG Name               centos
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476806
  Alloc PE / Size       476806 / <1.82 TiB
  Free  PE / Size       0 / 0   
  VG UUID               FEgDXH-SBlh-x29N-qU0f-Wajd-2sJ6-rbUre5
   
[root@compute02 .ssh]# dd if=/dev/zero of=./disk.img count=200 bs=512MB
200+0 records in
200+0 records out
102400000000 bytes (102 GB) copied, 509.072 s, 201 MB/s
[root@compute02 .ssh]# losetup -f
/dev/loop0
[root@compute02 .ssh]# losetup /dev/loop0 disk.img
[root@compute02 .ssh]# pvcreate /dev/loop0
  Physical volume "/dev/loop0" successfully created.
[root@compute02 .ssh]# vgcreate cinder-volumes /dev/loop0
  Volume group "cinder-volumes" successfully created

Run the actual deployment:

```bash
kolla-ansible -i ./multinode deploy
```
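
The next section sources /etc/kolla/admin-openrc.sh; that file is produced by kolla-ansible's post-deploy step, so run it once the deployment has finished:

```bash
# Generates /etc/kolla/admin-openrc.sh with the admin credentials
kolla-ansible -i ./multinode post-deploy
```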

4. Initialize OpenStack

Remove the ipaddress Python package and reinstall it

If the version is too old, the client installation in the next step will fail. The ipaddress package, originally pulled in as a dependency of other packages, cannot be removed or upgraded through pip, so it has to be deleted manually and then the latest version installed:

[root@kolla ~]# cd /usr/lib/python2.7/site-packages/
[root@kolla site-packages]# rm -rf ipaddress*
[root@kolla site-packages]# pip install ipaddress

Install the OpenStack CLI clients:

[root@kolla site-packages]# pip install python-openstackclient python-glanceclient python-neutronclient

Set the environment variables:

[root@kolla site-packages]# . /etc/kolla/admin-openrc.sh 
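
A quick sanity check that the credentials and clients work (not part of the original write-up):

```bash
openstack service list
openstack compute service list
```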

Edit the network settings in the initialization script:

[root@kolla ~]# vim /usr/share/kolla-ansible/init-runonce
EXT_NET_CIDR='10.132.226.0/24'
EXT_NET_RANGE='start=10.132.226.130,end=10.132.226.169'
EXT_NET_GATEWAY='10.132.226.254'

Run the initialization script:

[root@kolla ~]# . /usr/share/kolla-ansible/init-runonce
Checking for locally available cirros image.
None found, downloading cirros image.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
100 12.1M  100 12.1M    0     0  2040k      0  0:00:06  0:00:06 --:--:-- 2716k
Creating glance image.
······
Done.

To deploy a demo instance, run:

openstack server create \
    --image cirros \
    --flavor m1.tiny \
    --key-name mykey \
    --nic net-id=89a1f674-e89f-4e6d-b96d-2875446adc1e \
    demo1
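
After launching the demo instance, its state and boot log can be inspected with the standard CLI commands:

```bash
openstack server list
openstack console log show demo1
```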
