Containerized installation of a single-node OpenStack environment

Preparation

The environment is built on a virtual machine running on VMware ESXi.

  1. Virtual machine details

    cpu 2*2, disk 40 GB, ram 8 GB, nic 2
    CentOS 7
    [root@kolla ~]# uname -a
    Linux kolla 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017   x86_64 x86_64 x86_64 GNU/Linux
    [root@kolla ~]# cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)
    [root@kolla ~]#
    
  2. Repository configuration

    The default CentOS repositories are used as-is; they were not swapped for a different mirror. Switch to a faster mirror first if you prefer.

    yum makecache
    yum -y update
    yum -y install vim
    yum -y install net-tools
    
  3. Network interfaces

    ens160 carries the OpenStack management, tunnel, and storage network IPs. It also holds the host's default route and can reach the Internet, which is used for remote Horizon logins and for downloading dependencies.

    ens192 is the external network interface and must be able to reach the Internet; br-ex is bound to this NIC later, and instances access the external network through it.

    vi /etc/sysconfig/network-scripts/ifcfg-ens160

    TYPE=Ethernet
    BOOTPROTO=none
    DEFROUTE=yes
    PEERDNS=yes
    PEERROUTES=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_PEERDNS=yes
    IPV6_PEERROUTES=yes
    IPV6_FAILURE_FATAL=no
    NAME=ens160
    UUID=93124eda-37a5-4cc0-8ff7-820606256ce3
    DEVICE=ens160
    ONBOOT=yes
    IPADDR=192.168.26.6
    PREFIX=24
    GATEWAY=192.168.26.254
    DNS1=10.19.8.10
    DNS2=10.19.8.12
    

    vi /etc/sysconfig/network-scripts/ifcfg-ens192

    TYPE=Ethernet
    BOOTPROTO=static
    DEFROUTE=no
    PEERDNS=yes
    PEERROUTES=no
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_PEERDNS=yes
    IPV6_PEERROUTES=yes
    IPV6_FAILURE_FATAL=no
    NAME=ens192
    UUID=b16e217e-f9fc-40cd-a9ab-3eed7e3ea6a4
    DEVICE=ens192
    ONBOOT=yes
    IPADDR=192.168.30.31
    PREFIX=24
    
  4. Set the hostname

    vi /etc/hostname
    kolla
    
    vi /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    
    192.168.26.6 kolla kolla
    
  5. Install NTP

    # CentOS 7
    yum install ntp
    systemctl enable ntpd.service
    systemctl start ntpd.service
    

    Sync the time manually:

    ntpdate 0.centos.pool.ntp.org
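    Beyond a one-off ntpdate, you can check whether ntpd has actually picked a synchronization peer (a sketch; it assumes the ntp package installed above, whose ntpq marks the selected peer with '*'):

```shell
# A line starting with '*' is the peer ntpd currently syncs against
ntpq -p 2>/dev/null | grep '^\*' || echo "no peer selected yet"
```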
    
  6. Disable SELinux

    vi /etc/selinux/config
    
    SELINUX=disabled
    

    Reboot the system, then check that SELinux is disabled:

    [root@kolla ~]# sestatus
    SELinux status:                 disabled
    [root@kolla ~]#
    
  7. Disable firewalld

    Since this is a dev/test environment that needs many service ports open, the firewall is disabled to simplify configuration. For production, or for any server with a public IP, it is strongly recommended not to disable the firewall.

    systemctl status firewalld
    systemctl stop firewalld
    systemctl disable firewalld
    systemctl status firewalld
    firewall-cmd --state
    
  8. Check that hardware virtualization is enabled in the VM

    # egrep "vmx|svm" /proc/cpuinfo
    

    If virtualization is not enabled, enable it in the hypervisor's VM settings
    and reboot the system for the change to take effect.

    note: this step only confirms that the VM supports nested virtualization. It feels optional: a previous devstack installation worked without it, because setting the virtualization type to qemu also allows creating instances inside a VM.
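    The check above can be folded into a small script that picks the virt_type value used later in the nova-compute.conf override (a sketch: kvm when the flags are present, qemu otherwise):

```shell
# vmx = Intel VT-x, svm = AMD-V; without either flag, nested KVM will not work
cpuinfo="${CPUINFO:-/proc/cpuinfo}"
if grep -Eq 'vmx|svm' "$cpuinfo"; then
    virt_type=kvm
else
    virt_type=qemu
fi
echo "virt_type=$virt_type"
```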

Install base packages

  1. Enable the EPEL repository first (required)

    # yum install epel-release -y
    
  2. Install the base software packages

    # yum install vim net-tools tmux python-devel libffi-devel gcc openssl-devel git python-pip -y
    

Install Docker

  1. Configure the repository

    # tee /etc/yum.repos.d/docker.repo << 'EOF'
    [dockerrepo]
    name=Docker Repository
    baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
    enabled=1
    gpgcheck=1
    gpgkey=https://yum.dockerproject.org/gpg
    EOF
    
  2. Install Docker 1.12.5

    The latest Docker at the time of writing is 1.13.1, but Kolla currently supports only 1.12.x, so pin the version when installing. Be sure to use Docker's official repository rather than Red Hat's; the Docker build in Red Hat's repository has a bug.

    # yum install docker-engine-1.12.5 docker-engine-selinux-1.12.5 -y
    
  3. Configure Docker

    # mkdir /etc/systemd/system/docker.service.d
    # tee /etc/systemd/system/docker.service.d/kolla.conf << 'EOF'
    [Service]
    MountFlags=shared
    EOF
    
  4. Edit /usr/lib/systemd/system/docker.service

    # original line, commented out:
    # ExecStart=/usr/bin/dockerd
    ExecStart=/usr/bin/dockerd --insecure-registry 192.168.26.6:4000
    

    Restart the service:

    # systemctl daemon-reload
    # systemctl enable docker
    # systemctl restart docker
    # systemctl status docker
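    Before moving on, you can sanity-check that the drop-in is in place (check_kolla_dropin is a hypothetical helper name; the path is the file created in step 3):

```shell
# Report whether a systemd drop-in sets shared mount propagation for dockerd
check_kolla_dropin() {
    grep -q '^MountFlags=shared' "$1" 2>/dev/null && echo ok || echo missing
}

check_kolla_dropin /etc/systemd/system/docker.service.d/kolla.conf
```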
    
  5. Run a local registry

    Docker's registry listens on port 5000 by default, which conflicts with an OpenStack port (Keystone's public endpoint), so the registry is mapped to port 4000 instead.

    # docker run -d -v /opt/registry:/var/lib/registry -p 4000:5000 \
    --restart=always --name registry registry:2
    
    [root@kolla ~]# docker ps -a
    de67c9de6dbb        registry:2                                                                "/entrypoint.sh /etc/"   28 hours ago        Up 2 hours          0.0.0.0:4000->5000/tcp   registry
    [root@kolla ~]# netstat -antl |grep 4000
    tcp6       0      0 :::4000                 :::*                    LISTEN
    [root@kolla ~]#
    

Install Ansible and the OpenStack image tarball

Kolla's Mitaka release requires an Ansible version below 2.0; Newton and later support only 2.x and above.

# yum -y install ansible

Download the ocata images officially provided by Kolla:

# wget http://tarballs.openstack.org/kolla/images/centos-source-registry-ocata.tar.gz
# du -sh centos-source-registry-ocata.tar.gz 
3.0G    centos-source-registry-ocata.tar.gz
# tar zxf centos-source-registry-ocata.tar.gz -C /opt/registry/
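Once the tarball is extracted into the registry's volume, the registry serves the images immediately. Its HTTP API lists what it holds; a small helper (the name repo_count is illustrative) counts the repositories in the /v2/_catalog reply:

```shell
# Count the repositories named in a /v2/_catalog JSON reply
repo_count() {
    grep -o '"[^"]*"' <<<"$1" | grep -vc '^"repositories"$'
}

# Live query against the local registry started earlier:
# curl -s http://192.168.26.6:4000/v2/_catalog
repo_count '{"repositories":["lokolla/centos-source-glance-api","lokolla/centos-source-nova-api"]}'   # prints 2
```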

kolla-ansible

  1. Download the kolla-ansible source

    # cd
    # git clone http://git.trystack.cn/openstack/kolla-ansible -b stable/ocata
    [root@kolla ~]# ls
    anaconda-ks.cfg  centos-source-registry-ocata.tar.gz  kolla-ansible  test
    
  2. Install kolla-ansible

    # cd kolla-ansible/
    # pip install . -i https://pypi.tuna.tsinghua.edu.cn/simple
    

    If pip is slow, you can append -i https://pypi.tuna.tsinghua.edu.cn/simple, as above, to use a pip mirror inside China.

  3. Copy the configuration files

    # cp -r etc/kolla /etc/kolla/
    # cd
    

    If you are installing Kolla inside a virtual machine and want to launch instances inside it, set virt_type=qemu (the default is kvm):

    mkdir -p /etc/kolla/config/nova
    cat << EOF > /etc/kolla/config/nova/nova-compute.conf
    [libvirt]
    virt_type=qemu
    cpu_mode = none
    EOF
    

Configure Kolla

  1. Generate the passwords file

    # cd
    # kolla-genpwd
    
  2. Edit /etc/kolla/passwords.yml

    # vim /etc/kolla/passwords.yml
    keystone_admin_password: pass
    

    This is the password the admin user uses to log in to the Dashboard; change it as needed. The other passwords can be left unchanged for now.

  3. Edit /etc/kolla/globals.yml

    # vim /etc/kolla/globals.yml
    kolla_base_distro: "centos"
    kolla_install_type: "source"
    kolla_internal_vip_address: "192.168.26.6"
    enable_haproxy: "no"
    docker_registry: "192.168.26.6:4000"
    docker_namespace: "lokolla"
    network_interface: "ens160"
    neutron_external_interface: "ens192"
    
    # The settings below are optional; they are enabled here to provide an environment for studying container networking
    enable_barbican: "yes"
    enable_etcd: "yes"
    enable_kuryr: "yes"
    enable_magnum: "yes"
    enable_neutron_fwaas: "yes"
    enable_neutron_qos: "yes"
    enable_neutron_vpnaas: "yes"
    enable_neutron_lbaas: "yes" 
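    A quick way to review the result is to print every uncommented top-level key before deploying (show_overrides is a hypothetical helper):

```shell
# List active (uncommented) settings in a kolla globals.yml
show_overrides() {
    grep -E '^[a-z][a-z0-9_]*:' "$1" 2>/dev/null || true
}

show_overrides /etc/kolla/globals.yml
```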
    

Deployment

  1. Pre-deployment checks

    # cd 
    # kolla-ansible prechecks -i kolla-ansible/ansible/inventory/all-in-one
    
  2. Pull the images

    # kolla-ansible pull -i kolla-ansible/ansible/inventory/all-in-one
    

    docker images now shows the downloaded images; they are all pulled from the local registry.

    [root@kolla ~]# docker images
    REPOSITORY                                                           TAG                 IMAGE ID            CREATED             SIZE
    registry                                                             2                   2ba7189700c8        7 days ago          33.25 MB
    192.168.26.6:4000/lokolla/centos-source-neutron-vpnaas-agent         4.0.3               d274179cefdb        5 weeks ago         958.5 MB
    192.168.26.6:4000/lokolla/centos-source-neutron-lbaas-agent          4.0.3               618824432fcf        5 weeks ago         956.3 MB
    192.168.26.6:4000/lokolla/centos-source-neutron-server               4.0.3               1e1e09a9ee06        5 weeks ago         934.4 MB
    192.168.26.6:4000/lokolla/centos-source-neutron-metadata-agent       4.0.3               00802df4d1e0        5 weeks ago         925.9 MB
    192.168.26.6:4000/lokolla/centos-source-neutron-dhcp-agent           4.0.3               3ff08e6c7056        5 weeks ago         925.9 MB
    192.168.26.6:4000/lokolla/centos-source-neutron-l3-agent             4.0.3               3ff08e6c7056        5 weeks ago         925.9 MB
    192.168.26.6:4000/lokolla/centos-source-neutron-openvswitch-agent    4.0.3               3ff08e6c7056        5 weeks ago         925.9 MB
    
  3. Start the deployment

    # kolla-ansible deploy -i kolla-ansible/ansible/inventory/all-in-one
    

    docker ps -a shows the service containers started on the host node.

    [root@kolla ~]# docker ps -a
    CONTAINER ID        IMAGE                                                                      COMMAND                  CREATED             STATUS                         PORTS                    NAMES
    9fa7fadf4ada        192.168.26.6:4000/lokolla/centos-source-barbican-worker:4.0.3              "kolla_start"            20 minutes ago      Up 20 minutes                                           barbican_worker
    305e0fde1932        192.168.26.6:4000/lokolla/centos-source-barbican-keystone-listener:4.0.3   "kolla_start"            20 minutes ago      Up 20 minutes                                           barbican_keystone_listener
    53ab1e8b349c        192.168.26.6:4000/lokolla/centos-source-barbican-api:4.0.3                 "kolla_start"            20 minutes ago      Up 20 minutes                                           barbican_api
    eebc6700f49c        192.168.26.6:4000/lokolla/centos-source-magnum-conductor:4.0.3             "kolla_start"            21 minutes ago      Up 21 minutes                                           magnum_conductor
    c6fe9d3db22b        192.168.26.6:4000/lokolla/centos-source-magnum-api:4.0.3                   "kolla_start"            21 minutes ago      Up 21 minutes                                           magnum_api
    c1615d5ca993        192.168.26.6:4000/lokolla/centos-source-horizon:4.0.3
    

Deployment completes in roughly 15 minutes.

Verify the deployment

# kolla-ansible post-deploy

This creates the /etc/kolla/admin-openrc.sh file.

Install the OpenStack clients

# pip install python-openstackclient 
# pip install python-neutronclient

Installation succeeded.

Account: admin
Password: pass

Create the initial networks

Edit /usr/share/kolla-ansible/init-runonce, mainly to adjust the external network settings, and download the cirros image file to the root directory in advance:

EXT_NET_CIDR='192.168.18.0/24'
EXT_NET_RANGE='start=192.168.18.10,end=192.168.18.20'
EXT_NET_GATEWAY='192.168.18.2'
Run the script:
source /etc/kolla/admin-openrc.sh
bash /usr/share/kolla-ansible/init-runonce

Check the services

[root@kolla ~]# source /etc/kolla/admin-openrc.sh
[root@kolla ~]# openstack endpoint list
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------+
| ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                                         |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------+
| 087a98b580bd4245a6a87c1e80e6a9ad | RegionOne | nova         | compute         | True    | internal  | http://192.168.26.6:8774/v2.1/%(tenant_id)s |
| 19d0bfd37c25417aa7ffbe35e716703e | RegionOne | heat         | orchestration   | True    | internal  | http://192.168.26.6:8004/v1/%(tenant_id)s   |
| 1cff21f0e2a44f98bc8fad932acd4ea9 | RegionOne | heat-cfn     | cloudformation  | True    | internal  | http://192.168.26.6:8000/v1                 |
| 28c235a2e2be412d9d47fcf2c404ad3b | RegionOne | glance       | image           | True    | internal  | http://192.168.26.6:9292                    |
| 291a66db6c194803a21cfb2b48a1cd2b | RegionOne | nova         | compute         | True    | public    | http://192.168.26.6:8774/v2.1/%(tenant_id)s |
| 349911a982054b38b9e90c9446bb27f6 | RegionOne | keystone     | identity        | True    | internal  | http://192.168.26.6:5000/v3                 |
| 3e6730810d2f40d99f44d54975ffd901 | RegionOne | nova_legacy  | compute_legacy  | True    | admin     | http://192.168.26.6:8774/v2/%(tenant_id)s   |
| 3f711ee8871049d4a9a32694e2e34323 | RegionOne | glance       | image           | True    | admin     | http://192.168.26.6:9292                    |
| 3f9185f3e9074372b17832d5ccd2880f | RegionOne | placement    | placement       | True    | admin     | http://192.168.26.6:8780                    |
| 44c5216c6419429b815022d6bd933f8d | RegionOne | neutron      | network         | True    | admin     | http://192.168.26.6:9696                    |
| 45f257be0c1842da8eb5dcc710066da9 | RegionOne | magnum       | container-infra | True    | public    | http://192.168.26.6:9511/v1                 |
| 56f49a99f74247e495962a859f4471c9 | RegionOne | magnum       | container-infra | True    | admin     | http://192.168.26.6:9511/v1                 |
| 59e8e7da86234870a7f81dfab4baa1e9 | RegionOne | glance       | image           | True    | public    | http://192.168.26.6:9292                    |
| 666c5fe273f8452398093f3b5f0970ae | RegionOne | heat-cfn     | cloudformation  | True    | public    | http://192.168.26.6:8000/v1                 |
| 676861441dad42438e2108b85c436980 | RegionOne | keystone     | identity        | True    | admin     | http://192.168.26.6:35357/v3                |
| 71b9eedfdf38416fb266a4f7d299c7e2 | RegionOne | barbican     | key-manager     | True    | internal  | http://192.168.26.6:9311                    |
| 76638deee6934b388eb15b99c4fa721e | RegionOne | magnum       | container-infra | True    | internal  | http://192.168.26.6:9511/v1                 |
| 794148cebd954861962ce08749b66220 | RegionOne | placement    | placement       | True    | internal  | http://192.168.26.6:8780                    |
| 856a3e105dd142fa89f909179ca9f8ab | RegionOne | nova_legacy  | compute_legacy  | True    | internal  | http://192.168.26.6:8774/v2/%(tenant_id)s   |
| 88b31538377640dc8abede2f64b9373e | RegionOne | heat         | orchestration   | True    | admin     | http://192.168.26.6:8004/v1/%(tenant_id)s   |
| 936b3a82d90f4b478a19d50700cca521 | RegionOne | nova_legacy  | compute_legacy  | True    | public    | http://192.168.26.6:8774/v2/%(tenant_id)s   |
| 979750e12eb64a0ab5387f2608d7c4e1 | RegionOne | barbican     | key-manager     | True    | admin     | http://192.168.26.6:9311                    |
| ab64a5b21b2946a28e13394410252cf0 | RegionOne | nova         | compute         | True    | admin     | http://192.168.26.6:8774/v2.1/%(tenant_id)s |
| b8b4ce49af234c4ab94dcc302aab9025 | RegionOne | heat-cfn     | cloudformation  | True    | admin     | http://192.168.26.6:8000/v1                 |
| c699b14daa904a87ae9e742a0751d752 | RegionOne | neutron      | network         | True    | internal  | http://192.168.26.6:9696                    |
| cd0b966e5af1494e9a83f4edd9ea5907 | RegionOne | neutron      | network         | True    | public    | http://192.168.26.6:9696                    |
| d25c1895b81c481788b64aa9c582bc9d | RegionOne | placement    | placement       | True    | public    | http://192.168.26.6:8780                    |
| da55a95bc2f44d569de35fde29aeb3d6 | RegionOne | barbican     | key-manager     | True    | public    | http://192.168.26.6:9311                    |
| e140680f69214a919ed115076af2396d | RegionOne | keystone     | identity        | True    | public    | http://192.168.26.6:5000/v3                 |
| fe93e6bee5984a7b98f917ea4581db69 | RegionOne | heat         | orchestration   | True    | public    | http://192.168.26.6:8004/v1/%(tenant_id)s   |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------+
[root@kolla ~]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+----------------------+-------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type           | host  | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+----------------------+-------+-------------------+-------+----------------+---------------------------+
| 3da872fa-644d-48f1-afb1-8fe92c919517 | Loadbalancerv2 agent | kolla |                   | :-)   | True           | neutron-lbaasv2-agent     |
| 67acdc8b-199e-42b4-a546-d845ed04f338 | Metadata agent       | kolla |                   | :-)   | True           | neutron-metadata-agent    |
| 87938525-0141-40f2-9caf-bc4cee707998 | DHCP agent           | kolla | nova              | :-)   | True           | neutron-dhcp-agent        |
| 9e7ad6d1-5c7c-4719-9bb2-27a72fc9f77f | Open vSwitch agent   | kolla |                   | :-)   | True           | neutron-openvswitch-agent |
| a45f0cbf-131d-4373-ac3a-8a38ee5345c6 | L3 agent             | kolla | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+----------------------+-------+-------------------+-------+----------------+---------------------------+
[root@kolla ~]#
[root@kolla ~]# nova list
+--------------------------------------+------+--------+------------+-------------+----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks             |
+--------------------------------------+------+--------+------------+-------------+----------------------+
| 260614aa-46a5-4228-894a-ad742014a608 | vm1  | ACTIVE | -          | Running     | net-1=192.168.100.12 |
+--------------------------------------+------+--------+------------+-------------+----------------------+
[root@kolla ~]# ip netns
qrouter-596a29cd-d388-4230-ada1-6fe55527b3a0
qdhcp-4ad9e339-37a1-4f56-981e-b5931ce163e0
[root@kolla ~]# ip netns exec qdhcp-4ad9e339-37a1-4f56-981e-b5931ce163e0 ssh [email protected]
[email protected]'s password:
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:49:AD:03
          inet addr:192.168.100.12  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe49:ad03/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:9645 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9364 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:919030 (897.4 KiB)  TX bytes:911432 (890.0 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=37 time=39.659 ms
64 bytes from 8.8.8.8: seq=1 ttl=37 time=39.660 ms
64 bytes from 8.8.8.8: seq=2 ttl=37 time=39.218 ms

Takeaway: the networks and instances created earlier still exist; they are not deleted when other components are redeployed, and installation is fast. This really is more convenient than a devstack install.

Analysis of the container environment

  1. Container networks

    As shown below, only two container network types are used on a single node: bridge and host.

    [root@kolla ~]# docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    eb1df69b26bb        bridge              bridge              local
    b0d2f66a46b4        host                host                local
    572b9f4905f9        none                null                local
    [root@kolla ~]#
    

    Inspecting the bridge network shows that only the local registry container uses it.

    [root@kolla ~]# docker network inspect eb1df69b26bb
    [
        {
            "Name": "bridge",
            "Id": "eb1df69b26bb67297bd1680e26e9795a987d7c3ff2baa4974074965449018df7",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.17.0.0/16",
                        "Gateway": "172.17.0.1"
                    }
                ]
            },
            "Internal": false,
            "Containers": {
                "de67c9de6dbbea5db5c1b2f4b35af25afd82cf29213c5541434f62326c5b797f": {
                    "Name": "registry",
                    "EndpointID": "7d896b5ca3cafca283dd8680e6499faea28199edea8e713de80ec6e9817099c0",
                    "MacAddress": "02:42:ac:11:00:02",
                    "IPv4Address": "172.17.0.2/16",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "com.docker.network.bridge.default_bridge": "true",
                "com.docker.network.bridge.enable_icc": "true",
                "com.docker.network.bridge.enable_ip_masquerade": "true",
                "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
                "com.docker.network.bridge.name": "docker0",
                "com.docker.network.driver.mtu": "1500"
            },
            "Labels": {}
        }
    ]
    

    The other OpenStack service processes are split across containers that all attach to the host network. These containers share the Docker host's network stack, so their network configuration is exactly the same as the host's. This improves network throughput, but port conflicts have to be considered.

    [root@kolla ~]# docker network inspect b0d2f66a46b4
    [
        {
            "Name": "host",
            "Id": "b0d2f66a46b47d62157d46629b850ec114dd46d47f45b674acfc100452b1b2f2",
            "Scope": "local",
            "Driver": "host",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": []
            },
            "Internal": false,
            "Containers": {
                "0adaebeb4b26f619a0693073ecafc6166bd3bda05f81f540e4d1771179412a77": {
                    "Name": "nova_consoleauth",
                    "EndpointID": "e878b2762ea2934e4b89028e15a97a8e0b409dc9aaa576a606880ef069f37ff2",
                    "MacAddress": "",
                    "IPv4Address": "",
                    "IPv6Address": ""
                },
                "153380898dd637192d1abff8d394f34909ae0513e1c8257456891dd518c69f3b": {
                    "Name": "nova_novncproxy",
                    "EndpointID": "ae6197850617e601ade1adefacef3d4d05c9b37b814100c0c955c5901e1973d2",
                    "MacAddress": "",
                    "IPv4Address": "",
                    "IPv6Address": ""
                },
    ......
    
    "Options": {},
    "Labels": {}
        }
    ]
    [root@kolla ~]#
    
  2. Where the virtual networks live

    The ovsdb-server and openvswitch-vswitchd processes of OVS are packaged and isolated in two separate containers.

    [root@kolla ~]# docker exec -it 2215252eed10 bash
    (openvswitch-vswitchd)[root@kolla /]# ovs-vsctl show
    5540056a-5846-4872-acb8-3999d01a57cc
        Manager "ptcp:6640:192.168.26.6"
        Bridge br-tun
            Controller "tcp:127.0.0.1:6633"
                is_connected: true
            fail_mode: secure
            Port br-tun
                Interface br-tun
                    type: internal
            Port patch-int
                Interface patch-int
                    type: patch
                    options: {peer=patch-tun}
        Bridge br-ex
            Controller "tcp:127.0.0.1:6633"
                is_connected: true
            fail_mode: secure
            Port "ens192"
                Interface "ens192"
            Port phy-br-ex
                Interface phy-br-ex
                    type: patch
                    options: {peer=int-br-ex}
            Port br-ex
                Interface br-ex
                    type: internal
        Bridge br-int
            Controller "tcp:127.0.0.1:6633"
                is_connected: true
            fail_mode: secure
            Port patch-tun
                Interface patch-tun
                    type: patch
                    options: {peer=patch-int}
            Port "qg-4f5e66b1-29"
                tag: 2
                Interface "qg-4f5e66b1-29"
                    type: internal
            Port br-int
                Interface br-int
                    type: internal
            Port "tap4da1ee5b-90"
                tag: 1
                Interface "tap4da1ee5b-90"
                    type: internal
            Port "qr-0005589e-db"
                tag: 1
                Interface "qr-0005589e-db"
                    type: internal
            Port "qvob242a35d-32"
                tag: 1
                Interface "qvob242a35d-32"
            Port int-br-ex
                Interface int-br-ex
                    type: patch
                    options: {peer=phy-br-ex}
    (openvswitch-vswitchd)[root@kolla /]#
    

    The instance's tap device is attached to a Linux bridge (qbr) on the host. A veth pair (qvb–qvo) created with ip link then connects the qbr bridge into the network namespace where openvswitch-vswitchd runs, and there the qvo end is added to br-int.

    [root@kolla ~]# brctl show
    bridge name bridge id       STP enabled interfaces
    docker0     8000.02421711eb61   no      veth702ab42
    qbrb242a35d-32      8000.6a6c089dc458   no      qvbb242a35d-32
                                tapb242a35d-32
    [root@kolla ~]#
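    The qvb end of the pair can be traced to its qvo peer with ip link: a veth interface is displayed as name@ifNN, where NN is the peer's interface index. A sketch (the helper name peer_index and the sample line are illustrative; the interface name comes from the brctl output above):

```shell
# Extract the peer ifindex from one line of `ip link show <veth>` output
peer_index() {
    sed -n 's/^[0-9]*: [^@]*@if\([0-9]*\):.*/\1/p' <<<"$1"
}

# Live query: ip link show qvbb242a35d-32
peer_index "21: qvbb242a35d-32@if20: <BROADCAST,MULTICAST,UP> mtu 1450 qdisc noqueue"   # prints 20
```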
    

    The iptables rules also live on the host.

References

https://www.lijiawang.org/posts/kolla-openstack-ocata.html

http://www.chenshake.com/kolla-installation/

https://blog.newtouch.com/ocata-kolla-all-in-one/
