http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-private.html
Installation notes, plus the handling of a few extra problems.
I wrote a generic database-inspection script, mysql_openstack.sh,
to track changes in the databases.
For example, to see what data the keystone database holds, run
./mysql_openstack.sh keystone
The script is as follows:
#!/bin/sh
#for i in `awk '{if(NR>4 && NR<40)print $2};' a.log`
#sed -i '/^#/d' cinder.conf
#sed -i '/^$/d' cinder.conf
mysql_user=root
mysql_password=haoning
mysql_host=ocontrol
if [ "$1" = "" ]
then
    echo "please use ./mysql_openstack.sh [dbname], for example: ./mysql_openstack.sh keystone";
    echo "this will exit."
    exit 0;
fi
echo "use db " $1
for i in `mysql -u$mysql_user -h$mysql_host -p$mysql_password $1 -e "show tables" | awk '{if(NR>1)print $1};' | grep -v ml2_vxlan_allocations`
do
    echo "\"select * from \`$i\`\"";
    mysql -u$mysql_user -h$mysql_host -p$mysql_password $1 -e "select * from \`$i\`";
done
On CentOS 7.2, upgrading iproute to 3.10.0-54.el7 makes `ip netns` append "(id: 0)" to each namespace name. The Liberty neutron cannot parse this when creating a router and keeps failing with "Cannot create namespace file". Fix:
https://bugzilla.redhat.com/show_bug.cgi?id=1292587
https://review.openstack.org/#/c/258493/1/neutron/agent/linux/ip_lib.py
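To see why neutron trips over the new output: the namespace name is still the first whitespace-delimited field, and the upstream fix simply splits it off. A minimal illustration (the namespace name here is made up):

```shell
# Simulated `ip netns list` output from the newer iproute,
# where each namespace gains an "(id: N)" suffix.
output="qrouter-65b58347-09fa-43dd-914d-31b4885d84ef (id: 0)"

# Keep only the first field: the namespace name itself.
ns=$(echo "$output" | awk '{print $1}')
echo "$ns"
```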
Basics:
yum install centos-release-openstack-liberty
yum upgrade -y
yum install python-openstackclient openstack-selinux -y
rm -f /etc/localtime
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
#ntpdate ntp.ubuntu.com
Configure /etc/hostname and /etc/hosts:
192.168.139.193 controller
192.168.139.192 compute
192.168.139.191 net
In fact 191 was never used;
the network node was also installed on the controller node.
Install MySQL (MariaDB):
systemctl stop firewalld.service
systemctl disable firewalld.service
yum install mariadb mariadb-server MySQL-python -y
systemctl start mariadb.service
systemctl enable mariadb.service
vim /etc/my.cnf.d/mariadb_openstack.cnf
[mysqld]
bind-address = 192.168.139.193
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Then grant privileges in mysql:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
Message queue:
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
#To reset rabbitmq:
#rabbitmqctl stop_app
#rabbitmqctl reset
rabbitmqctl add_user openstack haoning
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Install MongoDB (as it turned out, not actually used):
yum install mongodb-server mongodb -y
vim /etc/mongod.conf
bind_ip = 192.168.139.193
systemctl enable mongod.service
systemctl start mongod.service
All the passwords below are set to haoning.
----------------------------------------------------------------------
■■■■■■■■■■■■■■■■■■keystone begin■■■■■■■■■■■■■■■■■■
Install keystone, on the controller node:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
$ openssl rand -hex 10
06a0afd32e5265a9eba8
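The command above returns 10 random bytes rendered as 20 hexadecimal characters, the same shape as the admin_token values used below. A quick check (assumes openssl is installed):

```shell
# 10 random bytes -> 20 hexadecimal characters.
token=$(openssl rand -hex 10)
echo "${#token}"
```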
memcached also needs to be installed, as keystone's token cache.
Here keystone is served through Apache (httpd).
yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
systemctl enable memcached.service
systemctl start memcached.service
To flush memcached:
#echo "flush_all" | nc 127.0.0.1 11211
After editing the config file, strip comments and blank lines to get:
sed -i '/^#/d' /etc/keystone/keystone.conf
sed -i '/^$/d' /etc/keystone/keystone.conf
---------------------------
/etc/keystone/keystone.conf
[DEFAULT]
admin_token=26d3c805d5033f6052b9
verbose=True
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection=mysql://keystone:haoning@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers=localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver=sql
[role]
[saml]
[signing]
[ssl]
[token]
provider=uuid
driver=memcache
[tokenless_auth]
[trust]
Sync the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
-------------------------
Set up the HTTP server.
In /etc/httpd/conf/httpd.conf:
ServerName controller

/etc/httpd/conf.d/wsgi-keystone.conf:
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
systemctl enable httpd.service
systemctl start httpd.service
###Create the service:
On first use, since keystone is not fully up yet, authenticate with the hand-written admin token (OS_TOKEN must match the admin_token set in keystone.conf).
Once keystone is installed, switch to the other method: source admin-openrc.sh.
export OS_TOKEN=e9fc0e473e1b3072fc66
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Add the keystone endpoints to the database.
You can watch the database change with the script above: ./mysql_openstack.sh keystone
openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region wuhan identity public http://controller:5000/v2.0
openstack endpoint create --region wuhan identity internal http://controller:5000/v2.0
openstack endpoint create --region wuhan identity admin http://controller:35357/v2.0
###Create projects, users, and roles
openstack project create --domain default --description "Admin Project" admin
#openstack user create --domain default --password-prompt admin
openstack user create --domain default --password haoning admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
#openstack user create --domain default --password-prompt demo
openstack user create --domain default --password haoning demo
openstack role create user
openstack role add --project demo --user demo user
###Verify operation
Edit the /usr/share/keystone/keystone-dist-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections
#unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password token issue
###Create OpenStack client environment scripts
unset OS_TOKEN OS_URL
[root@controller ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=haoning
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_admin_v3)]\$ '
[root@controller ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=haoning
export OS_AUTH_URL=http://controller:5000/v3
export OS_IMAGE_API_VERSION=2
export OS_IDENTITY_API_VERSION=3
export PS1='[\u@\h \W(keystone_demo_v3)]\$ '

source admin-openrc.sh
unset OS_TOKEN OS_URL
openstack token issue
After this succeeds you can use commands such as openstack user list.
■■■■■■■■■■■■■■■■■■keystone end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■glance begin■■■■■■■■■■■■■■■■■■
#In another terminal, install glance
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
source admin-openrc.sh
openstack user create --domain default --password haoning glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image service" image
openstack endpoint create --region wuhan image public http://controller:9292
openstack endpoint create --region wuhan image internal http://controller:9292
openstack endpoint create --region wuhan image admin http://controller:9292
yum install openstack-glance python-glance python-glanceclient -y
---------------------------
Configure /etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:haoning@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password haoning
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
------------------------------------
Configure /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:haoning@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password haoning
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
#★★★★★Note: some leftover default options must be removed by hand.
#Comment out or remove any other options in the [keystone_authtoken] section
Sync the database, then start the services and verify:
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
#Verify operation
echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
glance image-list
■■■■■■■■■■■■■■■■■■glance end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■nova begin■■■■■■■■■■■■■■■■■■
#★★★★On the controller node: install nova's API/control services; the part that actually runs VMs goes on the compute node
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'haoning';
flush privileges;

source admin-openrc.sh
openstack user create --domain default --password haoning nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region wuhan compute public http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region wuhan compute internal http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region wuhan compute admin http://controller:8774/v2/%\(tenant_id\)s
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
-------------
Configure /etc/nova/nova.conf
-----------------------------------
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.139.193
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:haoning@controller/nova
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password haoning
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
Note: '$my_ip' is quoted so the literal string $my_ip lands in nova.conf instead of being expanded by the shell.
-----------------------------------------------
Sync the database and start the services on the controller node:
su -s /bin/sh -c "nova-manage db sync" nova
#check the logs under /var/log/nova for errors
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service
systemctl restart openstack-nova-cert.service
systemctl restart openstack-nova-consoleauth.service
systemctl restart openstack-nova-scheduler.service
systemctl restart openstack-nova-conductor.service
systemctl restart openstack-nova-novncproxy.service
#★★★On the compute node--------------install the nova-compute service
Install:
yum install openstack-nova-compute sysfsutils openstack-utils -y
#openstack-utils is what provides openstack-config; verify with rpm -qa openstack*
Configure /etc/nova/nova.conf
----------------------------------------
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.139.192
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password haoning
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
Note: '$my_ip' is quoted so the literal string $my_ip lands in nova.conf instead of being expanded by the shell.
-------------------
/etc/nova/nova.conf
If KVM hardware virtualization is not available to this host (no nested-virtualization / KVM passthrough), the check below returns 0, and virt_type must be set to qemu.
How to enable KVM passthrough: search for "KVM nested virtualization". It is configured on the host running libvirt; see other references for details.
#egrep -c '(vmx|svm)' /proc/cpuinfo
#virt_type qemu
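The check and the resulting setting can be combined into one small sketch. The [libvirt] section name follows the stock Liberty nova.conf layout; treat the final openstack-config line as an illustration rather than a command from the original notes:

```shell
# Count CPU flags that indicate hardware virtualization (Intel VT-x / AMD-V).
# `|| true` keeps the command from aborting the script when there are no matches
# (egrep exits non-zero then, but still prints 0).
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo || true)

# Without hardware acceleration, nova must fall back to plain qemu emulation.
if [ "$count" -eq 0 ]; then
    virt_type=qemu
else
    virt_type=kvm
fi
echo "$virt_type"
# then, e.g.: openstack-config --set /etc/nova/nova.conf libvirt virt_type $virt_type
```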
Start libvirtd and nova:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
#Verify operation
source admin-openrc.sh
nova service-list
nova endpoints
nova image-list
■■■■■■■■■■■■■■■■■■nova end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■neutron begin■■■■■■■■■■■■■■■■■■
Install neutron on the controller node.
Database:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
After each step you can run ./mysql_openstack.sh neutron to watch the database change.
openstack user create --domain default --password haoning neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region wuhan network public http://controller:9696
openstack endpoint create --region wuhan network internal http://controller:9696
openstack endpoint create --region wuhan network admin http://controller:9696
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
Following the official guide's linuxbridge + vxlan option.
Create a provider (public) network and a self-service (private) network.
VMs on the public network use flat mode and behave just like hosts on the local LAN.
The private network is NAT-like;
you can attach a floating IP to a VM on the private network, so one VM has both a private and a public address.
Install:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
---------------------
Config file
/etc/neutron/neutron.conf
neutron's base network configuration:
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:haoning@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password haoning
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_plugin password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova region_name wuhan
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password haoning
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
---------------
Configure
/etc/neutron/plugins/ml2/ml2_conf.ini
The layer-2 (ML2) plugin, where vxlan is configured:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks public
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
--------------------
Configure
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
linuxbridge agent settings
Inspect the result with brctl show and ip netns:
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.139.193
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
--------------------
Configure
/etc/neutron/l3_agent.ini
The layer-3 agent:
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge ''
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT verbose True
#The external_network_bridge option intentionally lacks a value to enable multiple external networks on a single agent
#☆☆☆☆★★★★★
#Comment out or remove any other options in the [keystone_authtoken] section.
------------------------------------
Configure the DHCP agent, which hands out IPs:
/etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
-----------------------------------
The DHCP service is provided by dnsmasq processes.
/etc/neutron/dnsmasq-neutron.conf:
echo "dhcp-option-force=26,1450" > /etc/neutron/dnsmasq-neutron.conf
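The 1450 in dhcp-option-force=26,1450 (option 26 is the interface MTU) is the usual 1500-byte Ethernet MTU minus 50 bytes of VXLAN-over-IPv4 encapsulation overhead. A quick sanity check of that arithmetic:

```shell
# VXLAN overhead: outer IPv4 (20) + UDP (8) + VXLAN header (8) + inner Ethernet (14).
physical_mtu=1500
overhead=$((20 + 8 + 8 + 14))
tenant_mtu=$((physical_mtu - overhead))
echo "$tenant_mtu"   # matches the 1450 pushed to instances via DHCP option 26
```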
###★★★★★Networking Option 2: Self-service networks------end★★★★★★★★
-------------------------------
Configure the VM metadata agent:
/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_uri http://controller:5000
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:35357
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region wuhan
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_plugin password
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT user_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT username neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT password haoning
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
#★★★★★★★★★★Note: remove the leftover defaults from metadata_agent.ini:
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
------------------------------------------------------------------------------------------------------------
Add the neutron settings to nova's config file.
This is needed on all nodes.
/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name wuhan
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password haoning
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
------------------------------------------------------------------------------------------------------------
Sync the database and start the services:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service
##########For networking option 2, also enable and start the layer-3 service:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
Check for errors:
cd /var/log/neutron
grep ERROR *
★★★★★★★★★★★★★★★compute node★☆★★★★★★★★★★★★★
Add the neutron agent service on the compute node;
it attaches VMs to the network when they boot.
Install:
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
-------------------------------------
Configure
/etc/neutron/neutron.conf
-------------------
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password haoning
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
-----------------------
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
------------------------------
Configure
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.139.192
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
###★★★★★Networking Option 2: Self-service networks------end★★★★★★★★
----------------------Both the controller and compute nodes need this★★★★---------
As noted above, the compute node's nova config also needs the neutron settings.
Note: restart the services after changing the config.
/etc/nova/nova.conf
------------------
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name wuhan
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password haoning
-----------------
The compute node runs only the nova-compute service and the neutron linuxbridge agent.
Start the services:
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
###Verify operation:
[root@controller neutron(keystone_admin_v3)]# neutron ext-list
+-----------------------+--------------------------+
| alias                 | name                     |
+-----------------------+--------------------------+
| flavors               | Neutron Service Flavors  |
| security-group        | security-group           |
| dns-integration       | DNS Integration          |
| net-mtu               | Network MTU              |
| port-security         | Port Security            |
| binding               | Port Binding             |
| provider              | Provider Network         |
| agent                 | agent                    |
| quotas                | Quota management support |
| subnet_allocation     | Subnet Allocation        |
| dhcp_agent_scheduler  | DHCP Agent Scheduler     |
| rbac-policies         | RBAC Policies            |
| external-net          | Neutron external network |
| multi-provider        | Multi Provider Network   |
| allowed-address-pairs | Allowed Address Pairs    |
| extra_dhcp_opt        | Neutron Extra DHCP opts  |
+-----------------------+--------------------------+
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
Now configure the neutron networks.
If /var/log/keystone, /var/log/nova, and /var/log/neutron showed no errors
during the installation, you can proceed:
source admin-openrc.sh
neutron agent-list
#public network — creating the public (provider) network is relatively simple:
neutron net-create public --shared --provider:physical_network public --provider:network_type flat
#neutron subnet-create public PUBLIC_NETWORK_CIDR --name public --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS --dns-nameserver DNS_RESOLVER --gateway PUBLIC_NETWORK_GATEWAY
neutron subnet-create public 192.168.139.0/20 --name public --allocation-pool start=192.168.139.201,end=192.168.139.210 --dns-nameserver 8.8.4.4 --gateway 192.168.128.1
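The gateway 192.168.128.1 looks odd next to 192.168.139.0/20, but a /20 mask zeroes the low bits of the third octet, so 192.168.139.0/20 normalizes to the network 192.168.128.0/20, which contains both the gateway and the allocation pool. A quick pure-shell sanity check (addresses hard-coded for illustration):

```shell
#!/bin/sh
# Convert a dotted quad to a 32-bit integer.
ip2int() {
    IFS=. read a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Network base of ADDR with PREFIX bits.
network_of() {
    mask=$(( 0xffffffff << (32 - $2) & 0xffffffff ))
    echo $(( $(ip2int "$1") & mask ))
}

# 192.168.139.0/20 and the gateway 192.168.128.1 share the same /20 network.
[ "$(network_of 192.168.139.0 20)" = "$(network_of 192.168.128.1 20)" ] \
    && echo "gateway is inside the subnet"
```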
#private network — create a private (project) network
neutron net-create private
neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 8.8.4.4 --gateway 172.16.1.1
#Create a router
neutron net-update public --router:external
neutron router-create router
neutron router-interface-add router private
#Added interface 65b58347-09fa-43dd-914d-31b4885d84ef to router router.
neutron router-gateway-set router public
#Verify operation
ip netns
neutron router-port-list router
ping -c 4 192.168.139.202
brctl show
ip netns
#Create a VM; first create a key pair
source admin...... (why doesn't the demo user work here?)
ssh-keygen -q -N ""
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova keypair-list
#Security group rules
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
#Launch an instance on the public network--------begin----------
source admin-openrc.sh
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
#nova boot --flavor m1.tiny --image cirros --nic net-id=PUBLIC_NET_ID --security-group default --key-name mykey public-instance
#Replace PUBLIC_NET_ID with the ID of the public provider network.
nova boot --flavor m1.tiny --image cirros --nic net-id=89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7 --security-group default --key-name mykey public-instance
nova list
nova get-vnc-console public-instance novnc
#Launch an instance on the public network--------end----------
#Deleting the networks
If you run into problems and need to tear down the networks you created, do it one by one.
Order matters here: delete the router pieces first, then the subnets, then the networks.
neutron router-list
neutron router-gateway-clear f16bd408-181d-40d9-8998-5d556fec7e0f
neutron router-interface-delete f16bd408-181d-40d9-8998-5d556fec7e0f private
neutron router-delete f16bd408-181d-40d9-8998-5d556fec7e0f
neutron subnet-list
neutron subnet-delete 7b03ef7d-144f-479e-bf3c-4a880a48ac3d
neutron subnet-delete f3eb1841-6666-4821-8fea-0d8d98352c73
neutron net-list
neutron net-delete 89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7
neutron net-delete 6ac13027-8e87-4696-b01f-5198a3ffa509
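The ordering above can be captured in a teardown helper. A dry-run sketch (it only echoes the commands in the required order; the IDs are the ones from this run and would be replaced with yours — drop the `echo` prefixes to actually execute):

```shell
#!/bin/sh
# Dry-run teardown: print delete commands in the required order
# (router gateway/interface first, then subnets, then networks).
ROUTER=f16bd408-181d-40d9-8998-5d556fec7e0f
SUBNETS="7b03ef7d-144f-479e-bf3c-4a880a48ac3d f3eb1841-6666-4821-8fea-0d8d98352c73"
NETS="89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7 6ac13027-8e87-4696-b01f-5198a3ffa509"

echo neutron router-gateway-clear "$ROUTER"
echo neutron router-interface-delete "$ROUTER" private
echo neutron router-delete "$ROUTER"
for s in $SUBNETS; do echo neutron subnet-delete "$s"; done
for n in $NETS;    do echo neutron net-delete "$n"; done
```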
################public network begin#############
The complete procedure for creating the public network:
★★★★★★★★★★★★★★★★★★★★★★★
neutron net-create public --shared --provider:physical_network public --provider:network_type flat
neutron net-list
neutron subnet-create public 192.168.139.0/20 --name public --allocation-pool start=192.168.139.221,end=192.168.139.230 --dns-nameserver 8.8.4.4 --gateway 192.168.128.1
neutron subnet-list
ssh-keygen -q -N ""
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova keypair-list
nova secgroup-list
nova secgroup-list-rules 1f676a35-7a31-4265-aa2b-cc4317de8633
nova help|grep secgroup
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
#nova boot --flavor m1.tiny --image cirros --nic net-id=PUBLIC_NET_ID --security-group default --key-name mykey public-instance
#Replace PUBLIC_NET_ID with the ID of the public provider network.
nova boot --flavor m1.tiny --image cirros --nic net-id=79fa9460-1f32-4141-9a16-08dea9355e2a --security-group default --key-name mykey public-instance
nova list  #shows the instance's IP
nova get-vnc-console public-instance novnc
#default credentials: user cirros, password cubswin:)
ping 192.168.139.222
ssh [email protected]
ip netns
ifconfig
#Create a VM on the private network
This is the most important part, and the easiest to get wrong.
neutron net-create private
#neutron subnet-create private PRIVATE_NETWORK_CIDR --name private --dns-nameserver DNS_RESOLVER --gateway PRIVATE_NETWORK_GATEWAY
neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 8.8.4.4 --gateway 172.16.1.1
neutron net-list
neutron subnet-list
#Private project networks connect to public provider networks using a virtual router. Each router contains an interface to at least one private project network and a gateway on a public provider network.
#The public provider network must include the router:external option to enable project routers to use it for connectivity to external networks such as the Internet. The admin or other privileged user must include this option during network creation or add it later. In this case, we can add it to the existing public provider network.
#Add the router:external option to the public provider network:
neutron net-update public --router:external
[root@controller ~(keystone_admin_v3)]# neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | fc43e7ee-44d1-483b-a5b2-6622637bb106 |
| name                  | router                               |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | a847d63d35e54622b641ea6b74c3c126     |
+-----------------------+--------------------------------------+
[root@controller ~(keystone_admin_v3)]# neutron router-list
+--------------------------------------+--------+-----------------------+-------------+-------+
| id                                   | name   | external_gateway_info | distributed | ha    |
+--------------------------------------+--------+-----------------------+-------------+-------+
| fc43e7ee-44d1-483b-a5b2-6622637bb106 | router | null                  | False       | False |
+--------------------------------------+--------+-----------------------+-------------+-------+
[root@controller ~(keystone_admin_v3)]# neutron router-interface-add router private
Added interface 9d9f73e7-a5e4-4d89-93c7-df135ac39a26 to router router.
[root@controller ~(keystone_admin_v3)]# neutron router-gateway-set router public
Set gateway for router router
[root@controller ~(keystone_admin_v3)]# neutron router-list
+--------------------------------------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name   | external_gateway_info                                                                                                                                                                       | distributed | ha    |
+--------------------------------------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| fc43e7ee-44d1-483b-a5b2-6622637bb106 | router | {"network_id": "79fa9460-1f32-4141-9a16-08dea9355e2a", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "1403be6d-fb25-4789-80ce-d570f291c6e4", "ip_address": "192.168.139.223"}]} | False       | False |
+--------------------------------------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
[root@controller ~(keystone_admin_v3)]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 79fa9460-1f32-4141-9a16-08dea9355e2a | public  | 1403be6d-fb25-4789-80ce-d570f291c6e4 192.168.128.0/20 |
| 1fd72b95-0264-4fca-8173-f321239a55fa | private | 97d8a9a1-d1b3-4091-9ee0-51af01c84b4b 172.16.1.0/24    |
+--------------------------------------+---------+-------------------------------------------------------+
[root@controller ~(keystone_admin_v3)]# neutron subnet-list
+--------------------------------------+---------+------------------+--------------------------------------------------------+
| id                                   | name    | cidr             | allocation_pools                                       |
+--------------------------------------+---------+------------------+--------------------------------------------------------+
| 1403be6d-fb25-4789-80ce-d570f291c6e4 | public  | 192.168.128.0/20 | {"start": "192.168.139.221", "end": "192.168.139.230"} |
| 97d8a9a1-d1b3-4091-9ee0-51af01c84b4b | private | 172.16.1.0/24    | {"start": "172.16.1.2", "end": "172.16.1.254"}         |
+--------------------------------------+---------+------------------+--------------------------------------------------------+
[root@controller ~(keystone_admin_v3)]# ip netns
qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106 (id: 2)
qdhcp-1fd72b95-0264-4fca-8173-f321239a55fa (id: 1)
qdhcp-79fa9460-1f32-4141-9a16-08dea9355e2a (id: 0)
[root@controller ~(keystone_admin_v3)]# neutron router-port-list router
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 8b18370f-345e-42bc-b4eb-30391866e757 |      | fa:16:3e:78:33:40 | {"subnet_id": "1403be6d-fb25-4789-80ce-d570f291c6e4", "ip_address": "192.168.139.223"} |
| 9d9f73e7-a5e4-4d89-93c7-df135ac39a26 |      | fa:16:3e:6b:08:b5 | {"subnet_id": "97d8a9a1-d1b3-4091-9ee0-51af01c84b4b", "ip_address": "172.16.1.1"}      |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
[root@controller ~(keystone_admin_v3)]# brctl show
bridge name     bridge id           STP enabled  interfaces
brq1fd72b95-02  8000.063507f2cee3   no           tap65ad6fb9-ea
                                                 tap9d9f73e7-a5
                                                 vxlan-18
brq79fa9460-1f  8000.505112aa8214   no           eth1
                                                 tap4b63544c-9b
virbr0          8000.5254009c2b11   yes          virbr0-nic
[root@controller ~(keystone_admin_v3)]# ip link
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 50:52:18:aa:81:11 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc pfifo_fast master brq79fa9460-1f state UP mode DEFAULT qlen 1000
    link/ether 50:51:12:aa:82:14 brd ff:ff:ff:ff:ff:ff
4: virbr0: mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 52:54:00:9c:2b:11 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500
    link/ether 52:54:00:9c:2b:11 brd ff:ff:ff:ff:ff:ff
13: tap4b63544c-9b@if2: mtu 1500 qdisc pfifo_fast master brq79fa9460-1f state UP mode DEFAULT qlen 1000
    link/ether b6:67:ca:38:ff:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: brq79fa9460-1f: mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 50:51:12:aa:82:14 brd ff:ff:ff:ff:ff:ff
15: tap65ad6fb9-ea@if2: mtu 1450 qdisc pfifo_fast master brq1fd72b95-02 state UP mode DEFAULT qlen 1000
    link/ether e6:b7:1f:bc:bb:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
16: vxlan-18: mtu 1450 qdisc noqueue master brq1fd72b95-02 state UNKNOWN mode DEFAULT
    link/ether 06:35:07:f2:ce:e3 brd ff:ff:ff:ff:ff:ff
17: brq1fd72b95-02: mtu 1450 qdisc noqueue state UP mode DEFAULT
    link/ether 06:35:07:f2:ce:e3 brd ff:ff:ff:ff:ff:ff
18: tap9d9f73e7-a5@if2: mtu 1450 qdisc pfifo_fast master brq1fd72b95-02 state UP mode DEFAULT qlen 1000
    link/ether 5e:84:a7:84:27:5b brd ff:ff:ff:ff:ff:ff link-netnsid 2
[root@controller ~(keystone_admin_v3)]#
Problems encountered
#If you upgraded iproute
#l3-agent.log may show this error:
# 2016-03-12 21:29:26.103 1170 ERROR neutron.agent.l3.agent Stderr: Cannot create namespace file "/var/run/netns/qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106": File exists
#The bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1292587
#Apply the fix from https://review.openstack.org/#/c/258493/1/neutron/agent/linux/ip_lib.py
#to /usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py
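The root cause is that the upgraded iproute appends an "(id: N)" suffix to each line of `ip netns` output (visible in the listing above), while neutron's ip_lib took each line verbatim as a namespace name. The upstream fix keeps only the first field; the same idea sketched in shell against captured output (namespace names below are from this run):

```shell
#!/bin/sh
# Newer iproute prints "name (id: N)"; taking only the first field
# recovers the bare namespace name, which is what neutron needs.
sample='qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106 (id: 2)
qdhcp-1fd72b95-0264-4fca-8173-f321239a55fa (id: 1)'
printf '%s\n' "$sample" | awk '{print $1}'
```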
#Replace PRIVATE_NET_ID with the ID of the private project network
#nova boot --flavor m1.tiny --image cirros --nic net-id=PRIVATE_NET_ID --security-group default --key-name mykey private-instance
nova boot --flavor m1.tiny --image cirros --nic net-id=1fd72b95-0264-4fca-8173-f321239a55fa --security-group default --key-name mykey private-instance
[root@controller linux(keystone_admin_v3)]# nova list
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks               |
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
| edd82475-1585-469f-aa43-95fa80f1f812 | private-instance | ACTIVE | -          | Running     | private=172.16.1.3     |
| bad46269-19d1-423a-88be-652a06c08723 | public-instance  | ACTIVE | -          | Running     | public=192.168.139.222 |
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
nova get-vnc-console private-instance novnc
neutron floatingip-create public
[root@controller linux(keystone_admin_v3)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 2cf23d2c-748f-4242-89eb-1d53721560a1 |                  | 192.168.139.225     |         |
+--------------------------------------+------------------+---------------------+---------+
nova floating-ip-associate private-instance 192.168.139.225
neutron floatingip-create public
nova floating-ip-associate private-instance 203.0.113.104
[root@controller linux(keystone_admin_v3)]# nova list
+--------------------------------------+------------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                            |
+--------------------------------------+------------------+--------+------------+-------------+-------------------------------------+
| edd82475-1585-469f-aa43-95fa80f1f812 | private-instance | ACTIVE | -          | Running     | private=172.16.1.3, 192.168.139.225 |
| bad46269-19d1-423a-88be-652a06c08723 | public-instance  | ACTIVE | -          | Running     | public=192.168.139.222              |
+--------------------------------------+------------------+--------+------------+-------------+-------------------------------------+
ssh [email protected]
[root@controller linux(keystone_admin_v3)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 2cf23d2c-748f-4242-89eb-1d53721560a1 | 172.16.1.3       | 192.168.139.225     | af79694b-2f59-4bf0-a0d0-6c619de49941 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@controller linux(keystone_admin_v3)]#
################public network end#############
###★★★★★Networking Option 2: Self-service networks-------end★★★★★★★★
■■■■■■■■■■■■■■■■■■neutron end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon begin■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon end■■■■■■■■■■■■■■■■■■