OpenStack Installation Manual

#Base packages (controller node: linux-node1.oldboyedu.com)
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum install -y centos-release-openstack-liberty
yum install -y python-openstackclient
yum install -y mariadb mariadb-server MySQL-python
yum install -y rabbitmq-server
yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached
yum install -y openstack-glance python-glance python-glanceclient
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
yum install -y openstack-dashboard
yum -y install vim  tree unzip lrzsz
===================== Packages for linux-node2.oldboyedu.com (compute node)
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum install -y centos-release-openstack-liberty
yum install -y python-openstackclient
yum install -y openstack-nova-compute sysfsutils 
yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset
yum -y install vim  tree unzip lrzsz




[root@linux-node1 ~]# vi /etc/hosts
192.168.56.11 linux-node1 linux-node1.oldboyedu.com
192.168.56.12 linux-node2 linux-node2.oldboyedu.com


Disable SELinux
setenforce 0  
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config 


Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service


Time synchronization
[root@linux-node1 ~]# yum install chrony
[root@linux-node1 ~]# vi /etc/chrony.conf
Uncomment the following line:
allow 192.168/16


[root@linux-node1 ~]# systemctl enable chronyd.service
[root@linux-node1 ~]# systemctl start chronyd.service
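To confirm chrony is serving time, check its sources; the client-side directive shown in the comments is a sketch of the matching configuration on linux-node2 (an assumption, since the guide only shows the server side):
[root@linux-node1 ~]# chronyc sources
# On linux-node2, /etc/chrony.conf would point at the controller instead of the public pool:
#   server 192.168.56.11 iburst
# then: systemctl enable chronyd.service ; systemctl start chronyd.service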


Set the time zone
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai 
[root@linux-node1 ~]# date
Sun Dec 27 14:12:12 CST 2015
##################################################################################################
Install and modify the database configuration file
[root@linux-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
cp: overwrite ‘/etc/my.cnf’? y
[root@linux-node1 ~]# vim /etc/my.cnf
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Enable MariaDB to start at boot
[root@linux-node1 ~]# systemctl enable mariadb.service
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
[root@linux-node1 ~]# systemctl start mariadb.service
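Before securing the installation, the new settings can be checked (root still has an empty password at this point, so -p is not needed):
[root@linux-node1 ~]# mysql -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'default_storage_engine';"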
Set the MariaDB root password
[root@linux-node1 ~]# mysql_secure_installation 
/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found


NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!


In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.


Enter current password for root (enter for none):   <-- press Enter
OK, successfully used password, moving on...


Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.


Set root password? [Y/n] y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!




By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.


Remove anonymous users? [Y/n] y
 ... Success!


Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.


Disallow root login remotely? [Y/n] y
 ... Success!


By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.


Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!


Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.


Reload privilege tables now? [Y/n] y
 ... Success!


Cleaning up...


All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.


Thanks for using MariaDB!


Create the databases
#Keystone database
mysql -u root -p -e "CREATE DATABASE keystone;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';"
#Glance database
mysql -u root -p -e "CREATE DATABASE glance;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';"
#Nova database
mysql -u root -p -e "CREATE DATABASE nova;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"
#Neutron database
mysql -u root -p -e "CREATE DATABASE neutron;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';"
#Cinder database
mysql -u root -p -e "CREATE DATABASE cinder;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"
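All five databases follow the same pattern, so they can also be created in one loop. This is a sketch that uses the service name as both the database user and password, matching the statements above; MYSQL_ROOT_PASS is a placeholder for the root password set by mysql_secure_installation:
for svc in keystone glance nova neutron cinder; do
  mysql -u root -p"${MYSQL_ROOT_PASS}" -e "
    CREATE DATABASE IF NOT EXISTS ${svc};
    GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'localhost' IDENTIFIED BY '${svc}';
    GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'%' IDENTIFIED BY '${svc}';"
done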


[root@linux-node1 ~]# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 25
Server version: 5.5.44-MariaDB-log MariaDB Server


Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
+--------------------+
8 rows in set (0.00 sec)
################################################################################################
RabbitMQ message broker
Enable RabbitMQ to start at boot
[root@linux-node1 ~]# systemctl enable rabbitmq-server.service
ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
[root@linux-node1 ~]# systemctl start rabbitmq-server.service
Check the listening ports: RabbitMQ listens on 5672
[root@linux-node1 ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      4184/beam.smp       
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      4041/mysqld         
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      4199/epmd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1111/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2218/master         
tcp6       0      0 :::5672                 :::*                    LISTEN      4184/beam.smp       
tcp6       0      0 :::22                   :::*                    LISTEN      1111/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      2218/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           868/chronyd         
udp6       0      0 ::1:323                 :::*                                868/chronyd 
Create an openstack user and password
[root@linux-node1 ~]#  rabbitmqctl add_user openstack openstack
Creating user "openstack" ...
...done.
Grant the user permissions
[root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
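The new user and its permissions can be verified with:
[root@linux-node1 ~]# rabbitmqctl list_users
[root@linux-node1 ~]# rabbitmqctl list_permissions -p /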
List the RabbitMQ plugins
[root@linux-node1 ~]# rabbitmq-plugins list
[ ] amqp_client                       3.3.5
[ ] cowboy                            0.5.0-rmq3.3.5-git4b93c2d
[ ] eldap                             3.3.5-gite309de4
[ ] mochiweb                          2.7.0-rmq3.3.5-git680dba8
[ ] rabbitmq_amqp1_0                  3.3.5
[ ] rabbitmq_auth_backend_ldap        3.3.5
[ ] rabbitmq_auth_mechanism_ssl       3.3.5
[ ] rabbitmq_consistent_hash_exchange 3.3.5
[ ] rabbitmq_federation               3.3.5
[ ] rabbitmq_federation_management    3.3.5
[ ] rabbitmq_management               3.3.5
[ ] rabbitmq_management_agent         3.3.5
[ ] rabbitmq_management_visualiser    3.3.5
[ ] rabbitmq_mqtt                     3.3.5
[ ] rabbitmq_shovel                   3.3.5
[ ] rabbitmq_shovel_management        3.3.5
[ ] rabbitmq_stomp                    3.3.5
[ ] rabbitmq_test                     3.3.5
[ ] rabbitmq_tracing                  3.3.5
[ ] rabbitmq_web_dispatch             3.3.5
[ ] rabbitmq_web_stomp                3.3.5
[ ] rabbitmq_web_stomp_examples       3.3.5
[ ] sockjs                            0.3.4-rmq3.3.5-git3132eb9
[ ] webmachine                        1.10.3-rmq3.3.5-gite9359c7
Enable the RabbitMQ management plugin
[root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management 
The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_web_dispatch
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management
Plugin configuration has changed. Restart RabbitMQ for changes to take effect.
Restart RabbitMQ
[root@linux-node1 ~]# systemctl restart rabbitmq-server.service
Check the listening ports again: the web management port is 15672
[root@linux-node1 ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      4581/beam.smp       
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      4041/mysqld         
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      4597/epmd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1111/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      4581/beam.smp       
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2218/master         
tcp6       0      0 :::5672                 :::*                    LISTEN      4581/beam.smp       
tcp6       0      0 :::22                   :::*                    LISTEN      1111/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      2218/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           868/chronyd         
udp6       0      0 ::1:323                 :::*                                868/chronyd  
Open http://192.168.56.11:15672/ and log in with username guest, password guest.
After logging in:
Admin -------> copy the administrator tag -------> click the openstack user ------> Update this user -------->
Tags: paste administrator ---------> set the password to openstack --------> Logout
Then log in again with username openstack, password openstack.
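The same change can also be made from the command line instead of the web UI (the password was already set to openstack when the user was created above):
[root@linux-node1 ~]# rabbitmqctl set_user_tags openstack administrator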
##############################################################################################################
Keystone identity service
[root@linux-node1 ~]# yum -y install lrzsz unzip


[root@linux-node1 ~]# openssl rand -hex 10
bd56bcaa58d488a45188
[root@linux-node1 ~]# grep -n '^[a-z]'  /etc/keystone/keystone.conf
12:admin_token = bd56bcaa58d488a45188
107:verbose = true
495:connection = mysql://keystone:keystone@192.168.56.11/keystone
1305:servers = 192.168.56.11:11211
1341:driver = sql
1903:provider = uuid
1908:driver = memcache
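These options can be edited by hand with vim, or set non-interactively. The sketch below assumes the crudini tool is available (it is packaged in EPEL, which was enabled above, but is not installed by the earlier steps) and uses the standard Liberty section names; verify them against your keystone.conf:
[root@linux-node1 ~]# yum install -y crudini
[root@linux-node1 ~]# crudini --set /etc/keystone/keystone.conf DEFAULT admin_token bd56bcaa58d488a45188
[root@linux-node1 ~]# crudini --set /etc/keystone/keystone.conf DEFAULT verbose true
[root@linux-node1 ~]# crudini --set /etc/keystone/keystone.conf database connection mysql://keystone:keystone@192.168.56.11/keystone
[root@linux-node1 ~]# crudini --set /etc/keystone/keystone.conf memcache servers 192.168.56.11:11211
[root@linux-node1 ~]# crudini --set /etc/keystone/keystone.conf revoke driver sql
[root@linux-node1 ~]# crudini --set /etc/keystone/keystone.conf token provider uuid
[root@linux-node1 ~]# crudini --set /etc/keystone/keystone.conf token driver memcache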
Synchronize the database. Note the file permissions: use su -s to run the command as the keystone user:
[root@linux-node1 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
No handlers could be found for logger "oslo_config.cfg"
Verify that the tables were created
[root@linux-node1 ~]# mysql -ukeystone -pkeystone
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 27
Server version: 5.5.44-MariaDB-log MariaDB Server


Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> use keystone
Database changed
MariaDB [keystone]> show tables
    -> ;
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| config_register        |
| consumer               |
| credential             |
| domain                 |
| endpoint               |
| endpoint_group         |
| federation_protocol    |
| group                  |
| id_mapping             |
| identity_provider      |
| idp_remote_ids         |
| mapping                |
| migrate_version        |
| policy                 |
| policy_association     |
| project                |
| project_endpoint       |
| project_endpoint_group |
| region                 |
| request_token          |
| revocation_event       |
| role                   |
| sensitive_config       |
| service                |
| service_provider       |
| token                  |
| trust                  |
| trust_role             |
| user                   |
| user_group_membership  |
| whitelisted_config     |
+------------------------+
33 rows in set (0.00 sec)
Start the memcached service
[root@linux-node1 ~]# systemctl start memcached.service
Create a new Keystone vhost configuration and serve Keystone through Apache: port 5000 serves the normal API, port 35357 the admin API
[root@linux-node1 ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357



<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>

    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>

    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
ServerName must be configured in httpd.conf, otherwise the keystone service will not start
[root@linux-node1 ~]# grep -n '^ServerName' /etc/httpd/conf/httpd.conf      
95:ServerName 192.168.56.11:80
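If the line is not present, it can be added by replacing the commented default (a minimal sketch; it assumes the stock CentOS httpd.conf, which ships a commented "#ServerName www.example.com:80" line):
[root@linux-node1 ~]# sed -i 's/^#ServerName .*/ServerName 192.168.56.11:80/' /etc/httpd/conf/httpd.conf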
Enable memcached and httpd to start at boot, then start httpd
[root@linux-node1 ~]# systemctl enable memcached
ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
[root@linux-node1 ~]#  systemctl enable httpd
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
[root@linux-node1 ~]# systemctl start httpd
Check the listening ports
[root@linux-node1 ~]# netstat -lntup|grep httpd
tcp6       0      0 :::5000                 :::*                    LISTEN      5361/httpd          
tcp6       0      0 :::80                   :::*                    LISTEN      5361/httpd          
tcp6       0      0 :::35357                :::*                    LISTEN      5361/httpd   
Create the identity users; first export the admin token, endpoint URL, and API version
[root@linux-node1 ~]# grep -n '^admin_token' /etc/keystone/keystone.conf
12:admin_token = bd56bcaa58d488a45188
[root@linux-node1 ~]# export OS_TOKEN=bd56bcaa58d488a45188
[root@linux-node1 ~]# export OS_URL=http://192.168.56.11:35357/v3
[root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# env
XDG_SESSION_ID=5
HOSTNAME=linux-node1
TERM=xterm
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=192.168.56.1 53607 22
SSH_TTY=/dev/pts/0
USER=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
MAIL=/var/spool/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
OS_IDENTITY_API_VERSION=3
PWD=/root
LANG=en_US.UTF-8
HISTCONTROL=ignoredups
OS_TOKEN=bd56bcaa58d488a45188
SHLVL=1
HOME=/root
LOGNAME=root
SSH_CONNECTION=192.168.56.1 53607 192.168.56.11 22
LESSOPEN=||/usr/bin/lesspipe.sh %s
OS_URL=http://192.168.56.11:35357/v3
XDG_RUNTIME_DIR=/run/user/0
_=/usr/bin/env
Create the admin project (tenant)
[root@linux-node1 ~]# openstack project create --domain default   --description "Admin Project" admin
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Admin Project                    |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 53ae41ed2d2e4240980ba4e7dfadf4de |
| is_domain   | False                            |
| name        | admin                            |
| parent_id   | None                             |
+-------------+----------------------------------+
Create the admin user
[root@linux-node1 ~]# openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 8a77d890d93b4656a3b7893cb90eb733 |
| name      | admin                            |
+-----------+----------------------------------+
Create the admin role
[root@linux-node1 ~]# openstack role create admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 6e626607a20c47bda440c5c94dc3bcde |
| name  | admin                            |
+-------+----------------------------------+
Add the admin user to the admin project and grant it the admin role
[root@linux-node1 ~]# openstack role add --project admin --user admin admin
Create the demo project, user, password, and role
[root@linux-node1 ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 7d196866c81342f49ef1b083eb45e828 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | None                             |
+-------------+----------------------------------+


[root@linux-node1 ~]# openstack user create --domain default --password=demo demo
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 7c241c29598b4b30a6e41e9168308b93 |
| name      | demo                             |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role create user
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | bafdaf22db03409bbde3b9d360b5d1e4 |
| name  | user                             |
+-------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project demo --user demo user
Create a service project
[root@linux-node1 ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 7489313a1a164b879cf3618a0fdaaa0e |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | None                             |
+-------------+----------------------------------+
List the users, roles, and projects just created
[root@linux-node1 ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 7c241c29598b4b30a6e41e9168308b93 | demo  |
| 8a77d890d93b4656a3b7893cb90eb733 | admin |
+----------------------------------+-------+
[root@linux-node1 ~]# openstack role list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 6e626607a20c47bda440c5c94dc3bcde | admin |
| bafdaf22db03409bbde3b9d360b5d1e4 | user  |
+----------------------------------+-------+
[root@linux-node1 ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 53ae41ed2d2e4240980ba4e7dfadf4de | admin   |
| 7489313a1a164b879cf3618a0fdaaa0e | service |
| 7d196866c81342f49ef1b083eb45e828 | demo    |
+----------------------------------+---------+
Keystone itself also needs to be registered as a service
[root@linux-node1 ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | 88eeee6ec1434b6dbee4b011abcc1957 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
Public API endpoint
[root@linux-node1 ~]# openstack endpoint create --region RegionOne identity public http://192.168.56.11:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2c211a7252624a31a9102750487b7a7b |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 88eeee6ec1434b6dbee4b011abcc1957 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.56.11:5000/v2.0   |
+--------------+----------------------------------+
Internal API endpoint
[root@linux-node1 ~]# openstack endpoint create --region RegionOne identity internal http://192.168.56.11:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4d3a182881d44459a927f36df286e2bb |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 88eeee6ec1434b6dbee4b011abcc1957 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.56.11:5000/v2.0   |
+--------------+----------------------------------+
Admin API endpoint
[root@linux-node1 ~]# openstack endpoint create --region RegionOne identity admin http://192.168.56.11:35357/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4fd0af813acc48738bc6003ebe9da49e |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 88eeee6ec1434b6dbee4b011abcc1957 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.56.11:35357/v2.0  |
+--------------+----------------------------------+
List the API endpoints
[root@linux-node1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                             |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 2c211a7252624a31a9102750487b7a7b | RegionOne | keystone     | identity     | True    | public    | http://192.168.56.11:5000/v2.0  |
| 4d3a182881d44459a927f36df286e2bb | RegionOne | keystone     | identity     | True    | internal  | http://192.168.56.11:5000/v2.0  |
| 4fd0af813acc48738bc6003ebe9da49e | RegionOne | keystone     | identity     | True    | admin     | http://192.168.56.11:35357/v2.0 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
Log in with username and password: the token environment variables must be unset first
[root@linux-node1 ~]# unset OS_TOKEN
[root@linux-node1 ~]# unset OS_URL
[root@linux-node1 ~]# openstack --os-auth-url http://192.168.56.11:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
Password: admin
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2015-12-27T08:51:56.977380Z      |
| id         | 4d095f0f361247ecb1fd3719f70a4cea |
| project_id | 53ae41ed2d2e4240980ba4e7dfadf4de |
| user_id    | 8a77d890d93b4656a3b7893cb90eb733 |
+------------+----------------------------------+
This shows Keystone is working.


To make Keystone easier to use, create two environment-variable scripts:
[root@linux-node1 ~]# vim admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.56.11:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# vim demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.56.11:5000/v3
export OS_IDENTITY_API_VERSION=3
Add execute permission and load the admin credentials (the token request below only works after the file has been sourced)
[root@linux-node1 ~]# chmod +x admin-openrc.sh demo-openrc.sh
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2015-12-27T09:04:20.860096Z      |
| id         | 80e87eb4b69642be8137ffec8018858d |
| project_id | 53ae41ed2d2e4240980ba4e7dfadf4de |
| user_id    | 8a77d890d93b4656a3b7893cb90eb733 |
+------------+----------------------------------+
##############################################################################################################
Glance deployment
Modify glance-api.conf and glance-registry.conf to add the database connection
[root@linux-node1 ~]# vi +538 /etc/glance/glance-api.conf
[root@linux-node1 ~]# grep -n '^connection' /etc/glance/glance-api.conf
538:connection=mysql://glance:glance@192.168.56.11/glance
[root@linux-node1 ~]# vi +363 /etc/glance/glance-registry.conf 
[root@linux-node1 ~]# grep -n '^connection' /etc/glance/glance-registry.conf 
363:connection=mysql://glance:glance@192.168.56.11/glance
Synchronize the database
[root@linux-node1 ~]# su -s /bin/sh -c "glance-manage db_sync" glance
Verify that the database sync succeeded
[root@linux-node1 ~]# mysql -uglance -pglance -h 192.168.56.11
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 34
Server version: 5.5.44-MariaDB-log MariaDB Server


Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> use glance
Database changed
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| artifact_blob_locations          |
| artifact_blobs                   |
| artifact_dependencies            |
| artifact_properties              |
| artifact_tags                    |
| artifacts                        |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
20 rows in set (0.00 sec)
Create the glance user
[root@linux-node1 ~]# source admin-openrc.sh 
[root@linux-node1 ~]# openstack user create --domain default --password=glance glance
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | c6a15226c2be4dd1957fb441fb9e8464 |
| name      | glance                           |
+-----------+----------------------------------+
Add this user to the service project and grant it the admin role
[root@linux-node1 ~]# openstack role add --project service --user glance admin
Configure the Keystone connection settings in glance-api.conf
[root@linux-node1 glance]# grep -n '^[a-z]' /etc/glance/glance-api.conf 
363:verbose=True
491:notification_driver = noop
538:connection=mysql://glance:glance@192.168.56.11/glance
642:default_store=file
701:filesystem_store_datadir=/var/lib/glance/images/
974:auth_uri = http://192.168.56.11:5000
975:auth_url = http://192.168.56.11:35357
976:auth_plugin = password
977:project_domain_id = default
978:user_domain_id = default
979:project_name = service
980:username = glance
981:password = glance
1484:flavor=keystone
Configure the Keystone connection settings in glance-registry.conf
[root@linux-node1 glance]# grep -n '^[a-z]' /etc/glance/glance-registry.conf 
363:connection=mysql://glance:glance@192.168.56.11/glance
763:auth_uri = http://192.168.56.11:5000
764:auth_url = http://192.168.56.11:35357
765:auth_plugin = password
766:project_domain_id = default
767:user_domain_id = default
768:project_name = service
769:username = glance
770:password = glance
1256:flavor=keystone
Enable the glance services at boot and start them
[root@linux-node1 glance]# systemctl enable openstack-glance-api
ln -s '/usr/lib/systemd/system/openstack-glance-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-api.service'
[root@linux-node1 glance]# systemctl enable openstack-glance-registry
ln -s '/usr/lib/systemd/system/openstack-glance-registry.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service'
[root@linux-node1 glance]# systemctl start openstack-glance-api
[root@linux-node1 glance]# systemctl start openstack-glance-registry
Listening ports: registry 9191, api 9292
[root@linux-node1 glance]# netstat -antup
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN      6570/python2        
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      4581/beam.smp       
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      4041/mysqld         
tcp        0      0 0.0.0.0:11211           0.0.0.0:*               LISTEN      5245/memcached      
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      6555/python2        
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      4597/epmd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1111/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      4581/beam.smp       
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2218/master         
tcp        0      0 127.0.0.1:42434         127.0.0.1:4369          ESTABLISHED 4581/beam.smp       
tcp        0      0 192.168.56.11:60237     192.168.56.11:3306      ESTABLISHED 5369/(wsgi:keystone 
tcp        0      0 192.168.56.11:60247     192.168.56.11:3306      ESTABLISHED 5373/(wsgi:keystone 
tcp        0      0 192.168.56.11:3306      192.168.56.11:60247     ESTABLISHED 4041/mysqld         
tcp        0      0 192.168.56.11:44854     192.168.56.11:11211     ESTABLISHED 5373/(wsgi:keystone 
tcp        0      0 192.168.56.11:3306      192.168.56.11:60237     ESTABLISHED 4041/mysqld         
tcp        0      0 192.168.56.11:11211     192.168.56.11:44854     ESTABLISHED 5245/memcached      
tcp        0      0 192.168.56.11:3306      192.168.56.11:60244     ESTABLISHED 4041/mysqld         
tcp        0      0 127.0.0.1:4369          127.0.0.1:42434         ESTABLISHED 4597/epmd           
tcp        0      0 192.168.56.11:15672     192.168.56.1:62402      ESTABLISHED 4581/beam.smp       
tcp        0      0 192.168.56.11:44859     192.168.56.11:11211     ESTABLISHED 5370/(wsgi:keystone 
tcp        0      0 192.168.56.11:3306      192.168.56.11:60239     ESTABLISHED 4041/mysqld         
tcp        0      0 192.168.56.11:3306      192.168.56.11:60241     ESTABLISHED 4041/mysqld         
tcp        0     52 192.168.56.11:22        192.168.56.1:53607      ESTABLISHED 2441/sshd: root@pts 
tcp        0      0 192.168.56.11:60239     192.168.56.11:3306      ESTABLISHED 5370/(wsgi:keystone 
tcp        0      0 192.168.56.11:60241     192.168.56.11:3306      ESTABLISHED 5371/(wsgi:keystone 
tcp        0      0 192.168.56.11:11211     192.168.56.11:44859     ESTABLISHED 5245/memcached      
tcp        0      0 192.168.56.11:60244     192.168.56.11:3306      ESTABLISHED 5372/(wsgi:keystone 
tcp6       0      0 :::5000                 :::*                    LISTEN      5361/httpd          
tcp6       0      0 :::5672                 :::*                    LISTEN      4581/beam.smp       
tcp6       0      0 :::11211                :::*                    LISTEN      5245/memcached      
tcp6       0      0 :::80                   :::*                    LISTEN      5361/httpd          
tcp6       0      0 :::22                   :::*                    LISTEN      1111/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      2218/master         
tcp6       0      0 :::35357                :::*                    LISTEN      5361/httpd          
udp        0      0 127.0.0.1:323           0.0.0.0:*                           868/chronyd         
udp        0      0 0.0.0.0:11211           0.0.0.0:*                           5245/memcached      
udp6       0      0 ::1:323                 :::*                                868/chronyd         
udp6       0      0 :::11211                :::*                                5245/memcached     


Register the glance service and its endpoints
[root@linux-node1 ~]# source admin-openrc.sh 
[root@linux-node1 ~]# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | 1c9b0d790cee4e3e81b0a71c16b72fd2 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne   image public http://192.168.56.11:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0273eb69de7d48a4a0829af85579a1e3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1c9b0d790cee4e3e81b0a71c16b72fd2 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.56.11:9292        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne   image internal http://192.168.56.11:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a05eba9ba09a471ca5e7f328fa500e6c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1c9b0d790cee4e3e81b0a71c16b72fd2 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.56.11:9292        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne   image admin http://192.168.56.11:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2f89afebf0414db483e9067c24f249c3 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1c9b0d790cee4e3e81b0a71c16b72fd2 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.56.11:9292        |
+--------------+----------------------------------+
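Since all three image endpoints use the same URL, they can equally be created in one small loop (same effect as the three commands above):
[root@linux-node1 ~]# for iface in public internal admin; do
>   openstack endpoint create --region RegionOne image $iface http://192.168.56.11:9292
> done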
Add OS_IMAGE_API_VERSION to the environment files
[root@linux-node1 ~]# echo "export OS_IMAGE_API_VERSION=2" \
>   | tee -a admin-openrc.sh demo-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# glance image-list
+----+------+
| ID | Name |
+----+------+
If glance image-list produces the output above (an empty list), glance is installed correctly.


Upload an image
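The upload below assumes cirros-0.3.4-x86_64-disk.img is already in the current directory; if it is not, it can be fetched from the upstream CirrOS download site first:
[root@linux-node1 ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img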
[root@linux-node1 ~]# glance image-create --name "cirros" \
> --file cirros-0.3.4-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2015-12-27T10:30:12Z                 |
| disk_format      | qcow2                                |
| id               | ad3eb543-166c-48bc-8e2b-fb6a853d9b06 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | f4dc313fb5164d99972355fe93a44045     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2015-12-27T10:30:12Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
List the images
[root@linux-node1 ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| ad3eb543-166c-48bc-8e2b-fb6a853d9b06 | cirros |
+--------------------------------------+--------+
###################################################################################################################
Nova controller node (the components a VM needs in OpenStack: keystone, glance, nova, neutron)
Configure the nova.conf file
[root@linux-node1 ~]# grep -n '^[a-z]'  /etc/nova/nova.conf
61:rpc_backend=rabbit
124:my_ip=192.168.56.11
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1828:vncserver_listen=$my_ip
1832:vncserver_proxyclient_address=$my_ip
2213:connection=mysql://nova:nova@192.168.56.11/nova
2334:host=$my_ip
2542:auth_uri = http://192.168.56.11:5000
2543:auth_url = http://192.168.56.11:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3049:service_metadata_proxy=true
3053:metadata_proxy_shared_secret=neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.56.11
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack


Synchronize the database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@linux-node1 ~]# mysql -unova -pnova -h 192.168.56.11
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 37
Server version: 5.5.44-MariaDB-log MariaDB Server


Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> use nova
Database changed
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| tags                                       |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+
105 rows in set (0.00 sec)


[root@linux-node1 ~]# source admin-openrc.sh 
[root@linux-node1 ~]# openstack user create --domain default --password=nova nova
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 8503f85364a34893af574b4b3e036aa1 |
| name      | nova                             |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user nova admin
Enable the services at boot
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service \
> openstack-nova-cert.service openstack-nova-consoleauth.service \
> openstack-nova-scheduler.service openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
ln -s '/usr/lib/systemd/system/openstack-nova-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-api.service'
ln -s '/usr/lib/systemd/system/openstack-nova-cert.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-cert.service'
ln -s '/usr/lib/systemd/system/openstack-nova-consoleauth.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service'
ln -s '/usr/lib/systemd/system/openstack-nova-scheduler.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service'
ln -s '/usr/lib/systemd/system/openstack-nova-conductor.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service'
ln -s '/usr/lib/systemd/system/openstack-nova-novncproxy.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service'
Start all the services
[root@linux-node1 ~]# systemctl start openstack-nova-api.service \
> openstack-nova-cert.service openstack-nova-consoleauth.service \
> openstack-nova-scheduler.service openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
Register the compute service and its endpoints
[root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | e8fb4155650443a3a09796a3925c94d2 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | ac483255b396403c9c3e4fedd4350465           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | e8fb4155650443a3a09796a3925c94d2           |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 77f123d03e844f20abf05b18da65c675           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | e8fb4155650443a3a09796a3925c94d2           |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | a185298ca9804a549f97da9236701773           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | e8fb4155650443a3a09796a3925c94d2           |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+


Verify that it works
[root@linux-node1 ~]# openstack host list
+-------------+-------------+----------+
| Host Name   | Service     | Zone     |
+-------------+-------------+----------+
| linux-node1 | scheduler   | internal |
| linux-node1 | cert        | internal |
| linux-node1 | conductor   | internal |
| linux-node1 | consoleauth | internal |
+-------------+-------------+----------+
If these four services are listed, nova on the controller is working.
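An optional extra check is nova service-list, which shows the same services together with their status and state columns:
[root@linux-node1 ~]# nova service-list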
###################################################################################################################
Nova compute node
nova-compute normally runs on the compute nodes; it receives work over the message queue and manages the lifecycle of VMs.
nova-compute manages KVM through libvirt, and Xen through the XenAPI.
[root@linux-node2 ~]# grep -n '^[a-z]' /etc/nova/nova.conf 
61:rpc_backend=rabbit
124:my_ip=192.168.56.12
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1820:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html
1828:vncserver_listen=0.0.0.0
1832:vncserver_proxyclient_address=192.168.56.12
1835:vnc_enabled=true
1838:vnc_keymap=en-us
2213:connection=mysql://nova:nova@192.168.56.11/nova
2334:host=192.168.56.11
2542:auth_uri = http://192.168.56.11:5000
2543:auth_url = http://192.168.56.11:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
2727:virt_type=kvm
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.56.11
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack
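Note that virt_type=kvm (line 2727) requires hardware virtualization support on the compute node; a quick check is shown below. If it prints 0, set virt_type=qemu instead:
[root@linux-node2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo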


[root@linux-node2 ~]# systemctl enable libvirtd openstack-nova-compute
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@linux-node2 ~]# systemctl start libvirtd openstack-nova-compute
######################################################################################################################
Then check the registration status on linux-node1
[root@linux-node1 ~]# openstack host list
+-------------+-------------+----------+
| Host Name   | Service     | Zone     |
+-------------+-------------+----------+
| linux-node1 | scheduler   | internal |
| linux-node1 | cert        | internal |
| linux-node1 | conductor   | internal |
| linux-node1 | consoleauth | internal |
| linux-node2 | compute     | nova     |
+-------------+-------------+----------+
nova on the compute node is installed and registered successfully.
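Besides openstack host list, the per-service state can be checked as well; a sketch (run on linux-node1 with the admin credentials loaded):
# Every service should report state "up"; a "down" nova-compute usually
# points at rabbitmq connectivity problems or clock skew between the nodes.
nova service-list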


The image is in the ACTIVE state
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| ad3eb543-166c-48bc-8e2b-fb6a853d9b06 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+


Verify nova's connectivity to keystone; output like the following indicates success
[root@linux-node1 ~]# nova endpoints
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 36da860e8a764037ae36de815aadcc84 |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 907c268057db4cb3a4f5d50379b5ca47 |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | f58155e69a814ef68b395bf9493ec525 |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | 77f123d03e844f20abf05b18da65c675                              |
| interface | internal                                                      |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.56.11:8774/v2/f4dc313fb5164d99972355fe93a44045 |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | a185298ca9804a549f97da9236701773                              |
| interface | admin                                                         |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.56.11:8774/v2/f4dc313fb5164d99972355fe93a44045 |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | ac483255b396403c9c3e4fedd4350465                              |
| interface | public                                                        |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.56.11:8774/v2/f4dc313fb5164d99972355fe93a44045 |
+-----------+---------------------------------------------------------------+
WARNING: keystone has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | c63a949ecdf248b68f1e714e94d238ba |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:35357/v2.0  |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | c6ca8b1002ac4fb581295d8ed62b0951 |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:5000/v2.0   |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | e806479ecc114442b23360142b85bde6 |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:5000/v2.0   |
+-----------+----------------------------------+
###################################################################################################################
Neutron deployment
Register the network service
[root@linux-node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 3984fda1dd02410ca60d29d5ba2200fd |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ee4ceeb3619041f186e45a9ff092b7d2 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3984fda1dd02410ca60d29d5ba2200fd |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network internal http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 337595b0538044478ba592c90af8b7c9 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3984fda1dd02410ca60d29d5ba2200fd |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network admin http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 458b7c8f4dde4890b5046184844b462a |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3984fda1dd02410ca60d29d5ba2200fd |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+


[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/neutron.conf 
20:state_path = /var/lib/neutron
60:core_plugin = ml2
77:service_plugins = router
92:auth_strategy = keystone
360:notify_nova_on_port_status_changes = True
364:notify_nova_on_port_data_changes = True
367:nova_url = http://192.168.56.11:8774/v2
573:rpc_backend=rabbit
717:auth_uri = http://192.168.56.11:5000
718:auth_url = http://192.168.56.11:35357
719:auth_plugin = password
720:project_domain_id = default
721:user_domain_id = default
722:project_name = service
723:username = neutron
724:password = neutron
737:connection = mysql://neutron:[email protected]:3306/neutron
780:auth_url = http://192.168.56.11:35357
781:auth_plugin = password
782:project_domain_id = default
783:user_domain_id = default
784:region_name = RegionOne
785:project_name = service
786:username = nova
787:password = nova
818:lock_path = $state_path/lock
998:rabbit_host = 192.168.56.11
1002:rabbit_port = 5672
1014:rabbit_userid = openstack
1018:rabbit_password = openstack


[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/plugins/ml2/ml2_conf.ini
5:type_drivers = flat,vlan,gre,vxlan,geneve
12:tenant_network_types = vlan,gre,vxlan,geneve
18:mechanism_drivers = openvswitch,linuxbridge
27:extension_drivers = port_security
67:flat_networks = physnet1
120:enable_ipset = True


[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
9:physical_interface_mappings = physnet1:eth0
16:enable_vxlan = false
51:prevent_arp_spoofing = True
57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
61:enable_security_group = True


[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/dhcp_agent.ini
27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
31:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
52:enable_isolated_metadata = true


[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/metadata_agent.ini
4:auth_uri = http://192.168.56.11:5000
5:auth_url = http://192.168.56.11:35357
6:auth_region = RegionOne
7:auth_plugin = password
8:project_domain_id = default
9:user_domain_id = default
10:project_name = service
11:username = neutron
12:password = neutron
29:nova_metadata_ip = 192.168.56.11
52:metadata_proxy_shared_secret = neutron
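metadata_proxy_shared_secret is set to the literal string neutron here; outside a lab you would normally generate a random value and use the identical string on the nova side (service_metadata_proxy / metadata_proxy_shared_secret, shown further down in nova.conf). A sketch of generating one:
# Put the same value into /etc/neutron/metadata_agent.ini and
# into the [neutron] section of /etc/nova/nova.conf.
openssl rand -hex 10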


[root@linux-node1 ~]# grep -n '^[a-z]' /etc/nova/nova.conf 
61:rpc_backend=rabbit
124:my_ip=192.168.56.11
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1828:vncserver_listen=$my_ip
1832:vncserver_proxyclient_address=$my_ip
2213:connection=mysql://nova:[email protected]/nova
2334:host=$my_ip
2542:auth_uri = http://192.168.56.11:5000
2543:auth_url = http://192.168.56.11:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3049:service_metadata_proxy=true
3053:metadata_proxy_shared_secret=neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.56.11
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack


[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini


[root@linux-node1 ~]# openstack user create --domain default --password=neutron neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 28fb100f0efc4d648344614458e1bf7e |
| name      | neutron                          |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user neutron admin


Sync the database
[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
No handlers could be found for logger "neutron.quota"
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> juno, juno_initial
INFO  [alembic.runtime.migration] Running upgrade juno -> 44621190bc02, add_uniqueconstraint_ipavailability_ranges
INFO  [alembic.runtime.migration] Running upgrade 44621190bc02 -> 1f71e54a85e7, ml2_network_segments models change for multi-segment network.
INFO  [alembic.runtime.migration] Running upgrade 1f71e54a85e7 -> 408cfbf6923c, remove ryu plugin
INFO  [alembic.runtime.migration] Running upgrade 408cfbf6923c -> 28c0ffb8ebbd, remove mlnx plugin
INFO  [alembic.runtime.migration] Running upgrade 28c0ffb8ebbd -> 57086602ca0a, scrap_nsx_adv_svcs_models
INFO  [alembic.runtime.migration] Running upgrade 57086602ca0a -> 38495dc99731, ml2_tunnel_endpoints_table
INFO  [alembic.runtime.migration] Running upgrade 38495dc99731 -> 4dbe243cd84d, nsxv
INFO  [alembic.runtime.migration] Running upgrade 4dbe243cd84d -> 41662e32bce2, L3 DVR SNAT mapping
INFO  [alembic.runtime.migration] Running upgrade 41662e32bce2 -> 2a1ee2fb59e0, Add mac_address unique constraint
INFO  [alembic.runtime.migration] Running upgrade 2a1ee2fb59e0 -> 26b54cf9024d, Add index on allocated
INFO  [alembic.runtime.migration] Running upgrade 26b54cf9024d -> 14be42f3d0a5, Add default security group table
INFO  [alembic.runtime.migration] Running upgrade 14be42f3d0a5 -> 16cdf118d31d, extra_dhcp_options IPv6 support
INFO  [alembic.runtime.migration] Running upgrade 16cdf118d31d -> 43763a9618fd, add mtu attributes to network
INFO  [alembic.runtime.migration] Running upgrade 43763a9618fd -> bebba223288, Add vlan transparent property to network
INFO  [alembic.runtime.migration] Running upgrade bebba223288 -> 4119216b7365, Add index on tenant_id column
INFO  [alembic.runtime.migration] Running upgrade 4119216b7365 -> 2d2a8a565438, ML2 hierarchical binding
INFO  [alembic.runtime.migration] Running upgrade 2d2a8a565438 -> 2b801560a332, Remove Hyper-V Neutron Plugin
INFO  [alembic.runtime.migration] Running upgrade 2b801560a332 -> 57dd745253a6, nuage_kilo_migrate
INFO  [alembic.runtime.migration] Running upgrade 57dd745253a6 -> f15b1fb526dd, Cascade Floating IP Floating Port deletion
INFO  [alembic.runtime.migration] Running upgrade f15b1fb526dd -> 341ee8a4ccb5, sync with cisco repo
INFO  [alembic.runtime.migration] Running upgrade 341ee8a4ccb5 -> 35a0f3365720, add port-security in ml2
INFO  [alembic.runtime.migration] Running upgrade 35a0f3365720 -> 1955efc66455, weight_scheduler
INFO  [alembic.runtime.migration] Running upgrade 1955efc66455 -> 51c54792158e, Initial operations for subnetpools
INFO  [alembic.runtime.migration] Running upgrade 51c54792158e -> 589f9237ca0e, Cisco N1kv ML2 driver tables
INFO  [alembic.runtime.migration] Running upgrade 589f9237ca0e -> 20b99fd19d4f, Cisco UCS Manager Mechanism Driver
INFO  [alembic.runtime.migration] Running upgrade 20b99fd19d4f -> 034883111f, Remove allow_overlap from subnetpools
INFO  [alembic.runtime.migration] Running upgrade 034883111f -> 268fb5e99aa2, Initial operations in support of subnet allocation from a pool
INFO  [alembic.runtime.migration] Running upgrade 268fb5e99aa2 -> 28a09af858a8, Initial operations to support basic quotas on prefix space in a subnet pool
INFO  [alembic.runtime.migration] Running upgrade 28a09af858a8 -> 20c469a5f920, add index for port
INFO  [alembic.runtime.migration] Running upgrade 20c469a5f920 -> kilo, kilo
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes
INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework
INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac
INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.
INFO  [alembic.runtime.migration] Running upgrade 30018084ec99, 8675309a5c4f -> 4ffceebfada, network_rbac
INFO  [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables
INFO  [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
INFO  [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
INFO  [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
INFO  [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage
INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash
INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers
INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool
INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
  OK
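To confirm the migration really populated the neutron database, the tables can be listed with the credentials from the connection string above; a sketch:
mysql -u neutron -pneutron -h 192.168.56.11 -e 'USE neutron; SHOW TABLES;' | head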
 
Restart the nova-api service so it picks up the neutron settings:
[root@linux-node1 ~]# systemctl restart openstack-nova-api
Enable the neutron services at boot and start them
[root@linux-node1 ~]# systemctl enable neutron-server.service \
>   neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
>   neutron-metadata-agent.service
ln -s '/usr/lib/systemd/system/neutron-server.service' '/etc/systemd/system/multi-user.target.wants/neutron-server.service'
ln -s '/usr/lib/systemd/system/neutron-linuxbridge-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service'
ln -s '/usr/lib/systemd/system/neutron-dhcp-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service'
ln -s '/usr/lib/systemd/system/neutron-metadata-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service'
[root@linux-node1 ~]# systemctl restart neutron-server.service \
>   neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
>   neutron-metadata-agent.service


Check the Neutron agents
[root@linux-node1 ~]# neutron agent-list 
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host        | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| 6f8de3d4-09e7-4824-a469-ad3a6de45a20 | Linux bridge agent | linux-node1 | :-)   | True           | neutron-linuxbridge-agent |
| 7666c62c-f400-4528-9978-b1efbe0e8f6a | Metadata agent     | linux-node1 | :-)   | True           | neutron-metadata-agent    |
| d3901d1a-6cda-444c-b501-e46c55b2c5b3 | DHCP agent         | linux-node1 | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+


###############################################################################################################
Compute node (copy the neutron configuration files from the controller to the compute node)
[root@linux-node2 ~]# grep -n '^[a-z]'  /etc/neutron/neutron.conf
20:state_path = /var/lib/neutron
60:core_plugin = ml2
77:service_plugins = router
92:auth_strategy = keystone
360:notify_nova_on_port_status_changes = True
364:notify_nova_on_port_data_changes = True
367:nova_url = http://192.168.56.11:8774/v2
573:rpc_backend=rabbit
717:auth_uri = http://192.168.56.11:5000
718:auth_url = http://192.168.56.11:35357
719:auth_plugin = password
720:project_domain_id = default
721:user_domain_id = default
722:project_name = service
723:username = neutron
724:password = neutron
737:connection = mysql://neutron:[email protected]:3306/neutron
780:auth_url = http://192.168.56.11:35357
781:auth_plugin = password
782:project_domain_id = default
783:user_domain_id = default
784:region_name = RegionOne
785:project_name = service
786:username = nova
787:password = nova
818:lock_path = $state_path/lock
998:rabbit_host = 192.168.56.11
1002:rabbit_port = 5672
1014:rabbit_userid = openstack
1018:rabbit_password = openstack


[root@linux-node2 ~]# grep -n '^[a-z]'  /etc/neutron/plugins/ml2/linuxbridge_agent.ini
9:physical_interface_mappings = physnet1:eth0
16:enable_vxlan = false
51:prevent_arp_spoofing = True
57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
61:enable_security_group = True


[root@linux-node2 ~]# grep -n '^[a-z]'  /etc/neutron/plugins/ml2/ml2_conf.ini
5:type_drivers = flat,vlan,gre,vxlan,geneve
12:tenant_network_types = vlan,gre,vxlan,geneve
18:mechanism_drivers = openvswitch,linuxbridge
27:extension_drivers = port_security
67:flat_networks = physnet1
120:enable_ipset = True


[root@linux-node2 ~]# grep -n '^[a-z]'  /etc/nova/nova.conf 
61:rpc_backend=rabbit
124:my_ip=192.168.56.12
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1820:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html
1828:vncserver_listen=0.0.0.0
1832:vncserver_proxyclient_address=192.168.56.12
1835:vnc_enabled=true
1838:vnc_keymap=en-us
2213:connection=mysql://nova:[email protected]/nova
2334:host=192.168.56.11
2542:auth_uri = http://192.168.56.11:5000
2543:auth_url = http://192.168.56.11:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
2727:virt_type=kvm
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.56.11
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack


[root@linux-node2 ~]# systemctl restart openstack-nova-compute
[root@linux-node2 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@linux-node2 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@linux-node2 ~]# systemctl restart neutron-linuxbridge-agent.service
Check on the controller node
[root@linux-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host        | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| 6f8de3d4-09e7-4824-a469-ad3a6de45a20 | Linux bridge agent | linux-node1 | :-)   | True           | neutron-linuxbridge-agent |
| 7666c62c-f400-4528-9978-b1efbe0e8f6a | Metadata agent     | linux-node1 | :-)   | True           | neutron-metadata-agent    |
| 8d54c032-699e-4149-8980-284b127b08b2 | Linux bridge agent | linux-node2 | :-)   | True           | neutron-linuxbridge-agent |
| d3901d1a-6cda-444c-b501-e46c55b2c5b3 | DHCP agent         | linux-node1 | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
This shows the compute node's Linux bridge agent has successfully connected to the controller.
#####################################################################################################################################


Create a network
[root@linux-node1 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5eb49e52-08a9-4a12-9454-b3188a140a21 |
| mtu                       | 0                                    |
| name                      | flat                                 |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | f4dc313fb5164d99972355fe93a44045     |
+---------------------------+--------------------------------------+
Create a subnet
[root@linux-node1 ~]# neutron subnet-create flat 192.168.56.0/24 --name flat-subnet --allocation-pool start=192.168.56.100,end=192.168.56.200 --dns-nameserver 192.168.56.2 --gateway 192.168.56.2
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.56.100", "end": "192.168.56.200"} |
| cidr              | 192.168.56.0/24                                      |
| dns_nameservers   | 192.168.56.2                                         |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.56.2                                         |
| host_routes       |                                                      |
| id                | 1196139b-c975-410c-a038-d2889ec8f255                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | flat-subnet                                          |
| network_id        | 5eb49e52-08a9-4a12-9454-b3188a140a21                 |
| subnetpool_id     |                                                      |
| tenant_id         | f4dc313fb5164d99972355fe93a44045                     |
+-------------------+------------------------------------------------------+
View the networks and subnets
[root@linux-node1 ~]# neutron subnet-list 
+--------------------------------------+-------------+-----------------+------------------------------------------------------+
| id                                   | name        | cidr            | allocation_pools                                     |
+--------------------------------------+-------------+-----------------+------------------------------------------------------+
| 1196139b-c975-410c-a038-d2889ec8f255 | flat-subnet | 192.168.56.0/24 | {"start": "192.168.56.100", "end": "192.168.56.200"} |
+--------------------------------------+-------------+-----------------+------------------------------------------------------+
[root@linux-node1 ~]# source demo-openrc.sh 
[root@linux-node1 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa): 
[root@linux-node1 ~]# nova keypair-add --pub-key .ssh/id_rsa.pub mykey
[root@linux-node1 ~]# nova keypair-list 
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | de:1e:9d:2b:2e:aa:a9:3d:6d:e4:9e:62:7b:16:34:a5 |
+-------+-------------------------------------------------+
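Note that ssh-keygen -q -N "" above still prompts for the key file name; adding -f makes the key pair creation fully non-interactive (a sketch using the same default path):
ssh-keygen -q -N "" -f /root/.ssh/id_rsa
nova keypair-add --pub-key /root/.ssh/id_rsa.pub mykey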
Add two security group rules (ICMP and SSH)
[root@linux-node1 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@linux-node1 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
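The rules can be double-checked before booting anything; a sketch:
# Should show the ICMP (any type/code) rule and the TCP 22 rule added above.
nova secgroup-list-rules default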
List the instance flavors
[root@linux-node1 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Images
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| ad3eb543-166c-48bc-8e2b-fb6a853d9b06 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
Networks
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+------------------------------------------------------+
| id                                   | name | subnets                                              |
+--------------------------------------+------+------------------------------------------------------+
| 5eb49e52-08a9-4a12-9454-b3188a140a21 | flat | 1196139b-c975-410c-a038-d2889ec8f255 192.168.56.0/24 |
+--------------------------------------+------+------------------------------------------------------+
Security groups
[root@linux-node1 ~]# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id                                   | Name    | Description            |
+--------------------------------------+---------+------------------------+
| 715e16c4-a3b7-4121-a4c1-e44c0995063f | default | Default security group |
+--------------------------------------+---------+------------------------+
Boot an instance
[root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=5eb49e52-08a9-4a12-9454-b3188a140a21 --security-group default --key-name mykey hello-instance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          |                                               |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | mHWxqJuqxG4b                                  |
| config_drive                         |                                               |
| created                              | 2015-12-27T12:57:33Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 0723632f-18a0-4f61-9dd0-17e1aa97920d          |
| image                                | cirros (ad3eb543-166c-48bc-8e2b-fb6a853d9b06) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | hello-instance                                |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | eef39c42d1ae4853b07755c1143bf0c6              |
| updated                              | 2015-12-27T12:57:33Z                          |
| user_id                              | c6a2e48c7d834623933f161f19711ebb              |
+--------------------------------------+-----------------------------------------------+
Check the status of the new instance
[root@linux-node1 ~]# nova list
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks            |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| 0723632f-18a0-4f61-9dd0-17e1aa97920d | hello-instance | ACTIVE | -          | Running     | flat=192.168.56.101 |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
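If the instance stays in BUILD or drops to ERROR instead of going ACTIVE, the fault field and the compute log are the usual first stops; a sketch:
nova show hello-instance | grep -E 'status|fault'
# and on linux-node2:
tail -n 50 /var/log/nova/nova-compute.log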


[root@linux-node1 ~]# ssh [email protected]
The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established.
RSA key fingerprint is a9:52:a9:7b:fb:9b:c9:bb:a7:08:92:01:f7:5d:d8:cc.
Are you sure you want to continue connecting (yes/no)? ye
Please type 'yes' or 'no': yes
Warning: Permanently added '192.168.56.101' (RSA) to the list of known hosts.
$ whoami
cirros
The instance was created successfully and can be logged into.


Get the instance's VNC console URL from the command line
[root@linux-node1 ~]# nova get-vnc-console hello-instance novnc
+-------+------------------------------------------------------------------------------------+
| Type  | Url                                                                                |
+-------+------------------------------------------------------------------------------------+
| novnc | http://192.168.56.11:6080/vnc_auto.html?token=a3e0cf13-5232-41f4-a1a5-5598b2479114 |
+-------+------------------------------------------------------------------------------------+
Open http://192.168.56.11:6080/vnc_auto.html?token=a3e0cf13-5232-41f4-a1a5-5598b2479114 in a browser to reach the instance console.


###############################################################################################################################
[root@linux-node1 ~]# grep -n '^[A-Z]' /etc/openstack-dashboard/local_settings 
9:DEBUG = False
10:TEMPLATE_DEBUG = DEBUG
15:WEBROOT = '/dashboard/'
29:ALLOWED_HOSTS = ['*',]
92:LOCAL_PATH = '/tmp'
103:SECRET_KEY='3f29f265e53c94596fdf'
108:CACHES = {
115:CACHES = {
122:EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
138:OPENSTACK_HOST = "192.168.56.11"
139:OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
140:OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
170:OPENSTACK_KEYSTONE_BACKEND = {
201:OPENSTACK_HYPERVISOR_FEATURES = {
209:OPENSTACK_CINDER_FEATURES = {
216:OPENSTACK_NEUTRON_NETWORK = {
280:IMAGE_CUSTOM_PROPERTY_TITLES = {
292:IMAGE_RESERVED_CUSTOM_PROPERTIES = []
309:API_RESULT_LIMIT = 1000
310:API_RESULT_PAGE_SIZE = 20
313:SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
316:DROPDOWN_MAX_ITEMS = 30
320:TIME_ZONE = "Asia/Shanghai"
363:POLICY_FILES_PATH = '/etc/openstack-dashboard'
364:POLICY_FILES_PATH = '/etc/openstack-dashboard'
387:LOGGING = {
499:SECURITY_GROUP_RULES = {
650:REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES']
[root@linux-node1 ~]# systemctl restart httpd
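Since the dashboard relies on memcached for caching (the CACHES blocks above), memcached should also be running and both services enabled at boot; a sketch, assuming the stock Liberty dashboard setup:
systemctl enable httpd.service memcached.service
systemctl restart memcached.service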


http://192.168.56.11/dashboard/