Manually Installing OpenStack Yoga on an Ubuntu 20.04 Cluster

Table of Contents

    • Basic Configuration
      • The Bare Essentials
      • Software Configuration
    • OpenStack
      • keystone
        • X11 Forwarding
      • Glance
        • Controller Node
      • Placement
        • Controller Node
      • Nova
        • Controller Node
        • Compute Nodes
      • Neutron
        • Controller Node
        • Network Node
        • Compute Nodes
      • Horizon
      • Swift
        • Controller Node
        • Storage Node
          • Creating the Rings (Controller Node)
        • Final Steps
          • Errors
      • Kuryr-libnetwork
        • Controller Node
        • Compute Nodes
      • Zun
        • Prerequisites
        • Controller Node
          • An Error Appears
        • Compute Nodes
        • Verification

Basic Configuration

The Bare Essentials

I'm using Ubuntu 20.04 here, starting from a completely blank slate.
A few prerequisites are needed:

  • set the names during system installation
  • configure networking
  • configure ssh

Switching mirrors
apt mirror
Replace /etc/apt/sources.list with your own mirror before installing
pip mirror

Names
When installing Ubuntu 20.04, several names have to be filled in:

  • name: presumably the root user's name
  • computer's name: this is the hostname
  • username: the regular user's name

networking
Ubuntu apparently manages the network with netplan and NetworkManager now, but since CentOS doesn't use NM, and for convenience, I used the legacy networking service here.

Everything below is WRONG; it is kept only for the record, so skip ahead to the netplan section.
First, a few packages need to be installed: the legacy networking service (ifupdown), the bridge utilities, and the network tools (for ifconfig and the like).
Only with the bridge utilities installed can bridges be created properly; otherwise it errors out.

apt install ifupdown bridge-utils net-tools --allow-unauthenticated

My cluster environment is:

  • LAN subnet: 192.168.113.0/24
  • management bridge br-mgmt: 172.23.57.0/24

Configure the network environment. For how the bridges are built, see my all-in-one article; here is the config file /etc/network/interfaces directly.

filename:/etc/network/interfaces

auto lo
iface lo inet loopback

# The primary network interface

auto ens33
iface ens33 inet manual

# inside bridge network port

auto br-ens33
iface br-ens33 inet static
address 192.168.1.210
netmask 255.255.255.0
gateway 192.168.1.1
bridge_ports ens33
bridge_stp off
bridge_fd 1

auto br-mgmt
iface br-mgmt inet static
address 10.17.23.10
netmask 255.255.255.0
bridge_ports ens33
bridge_stp off
bridge_fd 2

The section below is the one that actually works.
Reference: configuring the network on Ubuntu via Netplan.
The bridge-based approach apparently doesn't work at all here: the primary bridge was fine, but the management bridge could never be pinged, and after a traceroute it looked like the pings never even left the host; I don't know why.
So I changed tack: just assign multiple IPs (multiple subnets) to the NIC directly.
Edit the config file /etc/netplan/01-network-manager-all.yaml:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp12s0:
      addresses:
        - 192.168.113.10/24
        - 172.23.57.10/24
      #gateway: 192.168.113.1  # no longer needed; the routes below replace it
      routes:
        - to: 0.0.0.0/0
          via: 192.168.113.1
        - to: 172.23.57.0/24
          via: 172.23.57.1
      nameservers:
        addresses: [114.114.114.114,8.8.4.4]

Then restart the network with systemctl restart NetworkManager.
Run netplan apply to apply the network configuration.
Then reboot the host so the configuration is applied and the corresponding routes are generated.
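A quick sanity check after the reboot (assuming the interface name enp12s0 from the netplan file above; adjust to your NIC):

ip -4 addr show enp12s0
ip route
ping -c 3 192.168.113.1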
ssh
To be able to access the machines, ssh needs to be configured.
Install ssh:

apt install openssh-server -y

Then edit the config file /etc/ssh/sshd_config:

Comment out: PermitRootLogin prohibit-password
Add: PermitRootLogin yes

Comment out: PasswordAuthentication no
Add: PasswordAuthentication yes

Restart the service:

service ssh restart

That covers the bare essentials; now you can ssh in and test things, for example.

Software Configuration

Refer to this article:
Ubuntu 20.04 OpenStack Yoga setup (all-in-one)
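For reference, a minimal sketch of the shared base services that article covers on the controller (NTP, MariaDB, RabbitMQ, memcached, etcd), following the official Yoga environment guide; the openstack RabbitMQ user and the 12345678 password are assumptions chosen to match the rest of this post:

apt install chrony mariadb-server python3-pymysql rabbitmq-server memcached etcd

rabbitmqctl add_user openstack 12345678
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Remember to point MariaDB and memcached at the management IP in their config files so the other nodes can reach them.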

OpenStack

keystone

I chose to install keystone on the main controller; the two sub-clusters share a single keystone.
First, a database is needed here.
Install the database following the steps above.
Create the database and grant privileges:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '12345678';

Install:

apt install keystone crudini

Edit the config file /etc/keystone/keystone.conf:

crudini --set /etc/keystone/keystone.conf database connection "mysql+pymysql://keystone:${password}@controller/keystone"
crudini --set /etc/keystone/keystone.conf token provider fernet

Sync the database:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Create an administrator user. In this multi-cluster setup the step differs a little: users should all be created on the main controller and distinguished by Region.
Here an admin user is created for the controller1 cluster's RegionThree (note that the command below uses RegionOne; adjust --bootstrap-region-id to the region you actually want).

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

At this point install the OpenStack client pieces, since some commands, such as creating endpoints and services, need to be run on the main controller.

apt install python3-openstackclient

A credentials script needs to be created here.
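The script itself is not shown in the original; below is a minimal sketch of an admin credentials file (the admin-openrc filename is my own choice, and ADMIN_PASS is the bootstrap password used above). Source it before running openstack commands:

# filename: admin-openrc (hypothetical)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_REGION_NAME=RegionOne   # adjust per cluster/region

After sourcing it, openstack token issue should return a token if keystone is healthy.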


Create the service project, a regular user, and so on:

openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password demo demo
openstack role create role_demo
openstack role add --project demo --user demo role_demo
X11 Forwarding

When I ran openstack commands inside Xshell, I got an error about needing X11 forwarding and a prompt to download Xmanager; I have no idea why.
The fix is to right-click the session, open its properties, find the Tunneling settings, disable X11 forwarding, and reconnect.

Glance

Glance Installation

Controller Node

Only after installing it did it occur to me that maybe this should have gone on the storage node...

Create the database:

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '12345678';

Create the service user and the service.
Note that a specific service name is used here; when keystone is shared like this, the service only needs to be created once. Multiple clusters share one service, so there is no need to repeat this.

openstack user create --domain default --password 12345678 glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image

Then create the endpoints:

openstack endpoint create --region RegionThree image public http://subcontroller1:9292
openstack endpoint create --region RegionThree image internal http://subcontroller1:9292
openstack endpoint create --region RegionThree image admin http://subcontroller1:9292

Edit the config file /etc/glance/glance-api.conf; in a multi-cluster setup, make sure the controller host after the @ in the connection string is written correctly.

[database]
connection = mysql+pymysql://glance:12345678@controller1/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 12345678

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[oslo_limit]
auth_url = http://controller:5000
auth_type = password
user_domain_id = default
username = MY_SERVICE
system_scope = all
password = 12345678
service_name = glance
region_name = RegionThree

Grant the permission:
openstack role add --user MY_SERVICE --user-domain Default --system all reader
Sync the database:

su -s /bin/sh -c "glance-manage db_sync" glance

Restart the service: service glance-api restart

Glance is now installed.
Upload an image file to test it:

glance image-create --name "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility=public

Check with glance image-list
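The same check also works through the unified client:

openstack image list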

Placement

Controller Node

First initialize the database and grants:

CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '12345678';

Create the user and the service.
If multiple clusters share one keystone, this only needs to be done once.

openstack user create --domain default --password 12345678 placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement

Create the endpoints:

openstack endpoint create --region RegionThree placement public http://subcontroller1:8778
openstack endpoint create --region RegionThree placement internal http://subcontroller1:8778
openstack endpoint create --region RegionThree placement admin http://subcontroller1:8778

Install placement:

apt install placement-api

Edit the config file /etc/placement/placement.conf:

[placement_database]
connection = mysql+pymysql://placement:12345678@controller1/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 12345678

Sync the database:

su -s /bin/sh -c "placement-manage db sync" placement

Restart Apache:

service apache2 restart

The installation is now complete; verify it by running the following command:

root@controller1 (admin) placement : # placement-status upgrade check
+-------------------------------------------+
| Upgrade Check Results                     |
+-------------------------------------------+
| Check: Missing Root Provider IDs          |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Incomplete Consumers               |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------

You can also install the odd little osc-placement package here:

pip3 install osc-placement
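osc-placement adds placement subcommands to the openstack client; for example:

openstack resource provider list
openstack resource class list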

Nova

Compute service

Controller Node

First create the databases:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '12345678';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '12345678';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '12345678';

Create the service user and the service.
Note that a specific service name is used here; when keystone is shared like this, the service only needs to be created once. Multiple clusters share one service, so there is no need to repeat this.

openstack user create --domain default --password 12345678 nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute

Create the endpoints:

openstack endpoint create --region RegionThree compute public http://controller1:8774/v2.1
openstack endpoint create --region RegionThree compute internal http://controller1:8774/v2.1
openstack endpoint create --region RegionThree compute admin http://controller1:8774/v2.1

Install the packages:

apt install nova-api nova-conductor nova-novncproxy nova-scheduler

Edit the config file /etc/nova/nova.conf:

[api_database]
connection = mysql+pymysql://nova:12345678@controller1/nova_api

[database]
connection = mysql+pymysql://nova:12345678@controller1/nova

[DEFAULT]
transport_url = rabbit://openstack:12345678@controller1:5672/

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 12345678

[DEFAULT]
my_ip = 172.23.47.10

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller1:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionThree
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 12345678

Sync the databases:

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

# verify
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+
|  Name |                 UUID                 |               Transport URL               |               Database Connection                | Disabled |
+-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                   none:/                  | mysql+pymysql://nova:****@controller1/nova_cell0 |  False   |
| cell1 | 0648a73b-fc8b-45cc-a025-0346be9f5bb8 | rabbit://openstack:****@controller1:5672/ |    mysql+pymysql://nova:****@controller1/nova    |  False   |
+-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+

Restart the services:

service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Verify:

# openstack compute service list
+----+----------------+-------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host        | Zone     | Status  | State | Updated At                 |
+----+----------------+-------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller1 | internal | enabled | up    | 2022-05-21T03:40:04.000000 |
|  4 | nova-scheduler | controller1 | internal | enabled | up    | 2022-05-21T03:39:59.000000 |
+----+----------------+-------------+----------+---------+-------+----------------------------+

# nova-status upgrade check
+--------------------------------------------------------------------+
| Upgrade Check Results                                              |
+--------------------------------------------------------------------+
| Check: Cells v2                                                    |
| Result: Success                                                    |
| Details: No host mappings or compute nodes were found. Remember to |
|   run command 'nova-manage cell_v2 discover_hosts' when new        |
|   compute hosts are deployed.                                      |
+--------------------------------------------------------------------+
| Check: Placement API                                               |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Cinder API                                                  |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Policy Scope-based Defaults                                 |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Policy File JSON to YAML Migration                          |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Older than N-1 computes                                     |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: hw_machine_type unset                                       |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
Compute Nodes

Install the software. systemd is included here because the installed version was too old; it can be upgraded to the latest, but an already-installed older version will not upgrade itself, so it is listed explicitly.

apt install nova-compute systemd

Edit the config file /etc/nova/nova.conf:

[DEFAULT]
transport_url = rabbit://openstack:12345678@controller1
my_ip = 172.23.47.40

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 12345678

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller1:6080/vnc_auto.html

[glance]
api_servers = http://controller1:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionThree
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 12345678

Edit the config file /etc/nova/nova-compute.conf:

[libvirt]
virt_type = qemu

Restart the service:

service nova-compute restart

Once it is installed, check directly from the controller node:

root@controller1 (admin) compute : # openstack compute service list
+----+----------------+-------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host        | Zone     | Status  | State | Updated At                 |
+----+----------------+-------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller1 | internal | enabled | up    | 2022-05-21T08:14:55.000000 |
|  4 | nova-scheduler | controller1 | internal | enabled | up    | 2022-05-21T08:14:49.000000 |
|  8 | nova-compute   | compute1    | nova     | enabled | up    | 2022-05-21T08:14:51.000000 |
+----+----------------+-------------+----------+---------+-------+----------------------------+

Then set up the remaining two compute nodes the same way.
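As the nova-status output above reminds us, each new compute host has to be mapped into the cell. Run this on the controller after the compute nodes come up (or set discover_hosts_in_cells_interval under [scheduler] in nova.conf to automate it):

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova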

Neutron

Install and configure for Ubuntu
The self-service network option is used here rather than the provider network.
This is split into three parts: the controller node, the network node, and the compute nodes.
The controller node runs neutron-server, the network node runs the agents, and the compute nodes run the linuxbridge agent.

Controller Node

Initialize the database:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '12345678';

Create the service user and the service.
Note that a specific service name is used here; when keystone is shared like this, the service only needs to be created once. Multiple clusters share one service, so there is no need to repeat this.

openstack user create --domain default --password 12345678 neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

Create the endpoints:

openstack endpoint create --region RegionThree network public http://subcontroller1:9696
openstack endpoint create --region RegionThree network internal http://subcontroller1:9696
openstack endpoint create --region RegionThree network admin http://subcontroller1:9696

The self-service network option is chosen here.
The controller node only needs neutron-server:

apt install neutron-server

Edit the config file /etc/neutron/neutron.conf:

[database]
connection = mysql+pymysql://neutron:12345678@controller1/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
auth_strategy = keystone
transport_url = rabbit://openstack:12345678@controller1
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 12345678

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionThree
project_name = service
username = nova
password = 12345678

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Edit the config file /etc/nova/nova.conf:

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionThree
project_name = service
username = neutron
password = 12345678
service_metadata_proxy = true
metadata_proxy_shared_secret = 12345678

Sync the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the services:

service nova-api restart
service neutron-server restart
Network Node

All of the agents are installed on the network node.
Install the agents:

apt install crudini neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

Edit the config file /etc/neutron/neutron.conf:

[database]
connection = mysql+pymysql://neutron:12345678@controller1/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
auth_strategy = keystone
transport_url = rabbit://openstack:12345678@controller1
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 12345678

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionThree
project_name = service
username = nova
password = 12345678

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Edit the config file /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

Edit the config file /etc/neutron/plugins/ml2/linuxbridge_agent.ini. The provider label can be changed, as long as it is used consistently; local_ip is the network node's management IP.

[linux_bridge]
physical_interface_mappings = provider:enp5s0

[vxlan]
enable_vxlan = true
local_ip = 172.23.47.20
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit the config file /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver = linuxbridge

Edit the config file /etc/neutron/dhcp_agent.ini:

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Edit the config file /etc/neutron/metadata_agent.ini; the secret here is the password written into the controller node's nova.conf and must match it.

[DEFAULT]
nova_metadata_host = controller1
metadata_proxy_shared_secret = 12345678

Restart the services:

service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart

Once everything is configured, check the agent status from the controller node. If there are errors, dig through the logs on the network node and see which service went down, and so on.

root@controller1 (admin) controller : # openstack network agent list
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host    | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| 3a0e2653-fa7e-4872-befe-ed7305636137 | L3 agent           | network | nova              | :-)   | UP    | neutron-l3-agent          |
| 7dd6732f-f371-495f-bfde-e4a0fe24b31c | Metadata agent     | network | None              | :-)   | UP    | neutron-metadata-agent    |
| d58c7c56-9be5-4218-840e-f38a8d243405 | Linux bridge agent | network | None              | :-)   | UP    | neutron-linuxbridge-agent |
| e18aa730-bf1b-4105-a710-3fbf3c0998ac | DHCP agent         | network | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
Compute Nodes

Install and configure compute node
The installation has to be done on all three compute nodes, substituting the IP, physical network interface, and so on for each.
Install the neutron agent:

apt install neutron-linuxbridge-agent

Edit the config file /etc/neutron/neutron.conf:

[DEFAULT]
transport_url = rabbit://openstack:12345678@controller1
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 12345678

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Edit the config file /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:enp5s0

[vxlan]
enable_vxlan = true
local_ip = 172.23.47.40
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit the config file /etc/nova/nova.conf:

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionThree
project_name = service
username = neutron
password = 12345678

Restart the services:

service nova-compute restart
service neutron-linuxbridge-agent restart

Afterwards, list the network agents on the controller node; the compute nodes are now recognized as well.

root@controller1 (admin) compute : # openstack network agent list
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host     | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| 003e0f36-ebc6-4744-b541-903626dd679a | Linux bridge agent | compute2 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 3a0e2653-fa7e-4872-befe-ed7305636137 | L3 agent           | network  | nova              | :-)   | UP    | neutron-l3-agent          |
| 52cb0d27-402c-4ee6-bb2f-09e89990b0e4 | Linux bridge agent | compute3 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 7dd6732f-f371-495f-bfde-e4a0fe24b31c | Metadata agent     | network  | None              | :-)   | UP    | neutron-metadata-agent    |
| d58c7c56-9be5-4218-840e-f38a8d243405 | Linux bridge agent | network  | None              | :-)   | UP    | neutron-linuxbridge-agent |
| e18aa730-bf1b-4105-a710-3fbf3c0998ac | DHCP agent         | network  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| f3bd552e-863b-41c4-ba21-7763546897d4 | Linux bridge agent | compute1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
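Not part of the original walkthrough, but a quick way to exercise the self-service setup once all agents are up; the network names, subnet range, and DNS server below are my own choices, and the provider label matches physical_interface_mappings above:

# external provider network (flat, on the "provider" physical network)
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

# tenant self-service network plus a router
openstack network create selfservice
openstack subnet create --network selfservice \
  --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 selfservice
openstack router create router
openstack router add subnet router selfservice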

Horizon

Installation Guide
Install it:

apt install openstack-dashboard

Edit the config file /etc/openstack-dashboard/local_settings.py:

OPENSTACK_HOST = "controller1"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://controller:5000/identity/v3"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
TIME_ZONE = "Asia/Shanghai"

Edit the config file /etc/apache2/conf-available/openstack-dashboard.conf and add the following if it is not already there:

WSGIApplicationGroup %{GLOBAL}

Reload the configuration:

systemctl reload apache2.service

Then visit controller1/horizon and log in with admin, 12345678, and the Default domain.
The interface seems to have been revamped, so the old dashboard URL has been replaced by horizon.

Swift

Object Storage Install Guide

Controller Node

Install and configure the controller node for Ubuntu
Create the service user directly:

openstack user create --domain default --password-prompt swift
openstack role add --project service --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store

Create the endpoints:

openstack endpoint create --region RegionThree object-store public http://subcontroller1:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionThree object-store internal http://subcontroller1:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionThree object-store admin http://subcontroller1:8080/v1

Install the software: apt-get install swift swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
Create the directory and copy in the config file (it is supposed to be downloaded, but the download is unreliable, so I fetched it in advance):

mkdir -p /etc/swift
# curl -o /etc/swift/proxy-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/proxy-server.conf-sample
cp ../proxy-server.conf /etc/swift/proxy-server.conf

Edit the config file /etc/swift/proxy-server.conf:

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller1:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 12345678
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = controller1:11211
Storage Node

Install and configure the storage nodes for Ubuntu and Debian
I only have one disk, so a loopback device is used for the installation.
Install the prerequisites first:

apt-get install xfsprogs rsync -y --allow-unauthenticated

Create the loopback device: size it at 100 GB, format it as an XFS filesystem, add a mount entry to fstab, then create the mount directory and mount it.

mkdir -p /srv
truncate -s 100GB /srv/swift-disk
mkfs.xfs /srv/swift-disk
echo '/srv/swift-disk /srv/node/sdb1 xfs loop,noatime 0 0' >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1

The mount point is written this way to match the devices path configured in the *-server.conf files: the controller combines that path with the device name to look up the mount point on the storage node, and if it cannot be found the logs fill with 503 errors.
The loopback device now exists; its details can be checked with blkid and df.
Below, loop6 is the loopback device I created.

root@storage:~# blkid
/dev/sda5: UUID="26d78ec3-ea1a-4c5a-b101-f6b17bc1ffb9" TYPE="ext4" PARTUUID="394b008d-05"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: UUID="0cb3ef32-5113-49b8-8723-675395a94a68" TYPE="xfs"
/dev/sda1: UUID="1737-3AD1" TYPE="vfat" PARTUUID="394b008d-01"
root@storage:~# df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev             4013976        0   4013976   0% /dev
tmpfs             809492     1644    807848   1% /run
/dev/sda5      479151816 11489596 443252876   3% /
tmpfs            4047444        0   4047444   0% /dev/shm
tmpfs               5120        4      5116   1% /run/lock
tmpfs            4047444        0   4047444   0% /sys/fs/cgroup
/dev/loop2         63488    63488         0 100% /snap/core20/1328
/dev/loop0         55552    55552         0 100% /snap/snap-store/558
/dev/loop3         44672    44672         0 100% /snap/snapd/14978
/dev/loop4         66816    66816         0 100% /snap/gtk-common-themes/1519
/dev/loop1           128      128         0 100% /snap/bare/5
/dev/loop5        254848   254848         0 100% /snap/gnome-3-38-2004/99
/dev/sda1         523248        4    523244   1% /boot/efi
tmpfs             809488       84    809404   1% /run/user/1000
tmpfs             809488        0    809488   0% /run/user/0
/dev/loop6      97608568   713596  96894972   1% /mnt/node/sdb1

Create or edit the file /etc/rsyncd.conf and add the following:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

Edit the config file /etc/default/rsync:

RSYNC_ENABLE=true

Start the service:

service rsync start

Moving on.
Install the software:

apt-get install swift swift-account swift-container swift-object -y --allow-unauthenticated

Download or copy in the config files:

cp ../account-server.conf /etc/swift/account-server.conf
cp ../container-server.conf /etc/swift/container-server.conf
cp ../object-server.conf /etc/swift/object-server.conf
# curl -o /etc/swift/account-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/account-server.conf-sample
# curl -o /etc/swift/container-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/container-server.conf-sample
# curl -o /etc/swift/object-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/object-server.conf-sample

Edit the config file /etc/swift/account-server.conf:

[DEFAULT]
...
bind_ip = 172.23.47.30
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

The other two are the same; just change the name and port.
Edit the config file /etc/swift/container-server.conf:

[DEFAULT]
...
bind_ip = 172.23.47.30
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit the config file /etc/swift/object-server.conf:

[DEFAULT]
...
bind_ip = 172.23.47.30
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Create the directories and set ownership:

mkdir -p /srv/node
chown -R swift:swift /srv/node

mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
Creating the Rings (Controller Node)

account.builder
Create the account ring. The device here is sdb1 because the loopback device above is mounted at /srv/node/sdb1, while the devices parent path written in the *-server.conf files is /srv/node, so the device parameter must be sdb1 for the combination to match the mount path on the storage node.

Three numbers are given at the end of the create command:

  • the first is the partition power: the ring gets 2^power partitions, e.g. 8 gives 2^8 = 256 and 10 gives 2^10 = 1024
  • the second is the number of replicas, 1 here; with 3 you would apparently need 3 devices, and when I set it to 3 with only one device it errored out
  • the third is the minimum number of hours between moving a partition more than once
swift-ring-builder account.builder create 10 1 1
swift-ring-builder account.builder add --region 3 --zone 1 --ip 172.23.47.30 --port 6202 --device sdb1 --weight 100

Inspect the ring that was created.
This builder file seems to be transient; it disappears after a reboot or power loss.

swift-ring-builder account.builder

Rebalance to make it take effect:

swift-ring-builder account.builder rebalance

container.builder

swift-ring-builder container.builder create 10 1 1
swift-ring-builder container.builder add --region 3 --zone 1 --ip 172.23.47.30 --port 6201 --device sdb1 --weight 100
swift-ring-builder container.builder rebalance

object.builder

swift-ring-builder object.builder create 10 1 1
swift-ring-builder object.builder add --region 3 --zone 1 --ip 172.23.47.30 --port 6200 --device sdb1 --weight 100
swift-ring-builder object.builder rebalance

Then send the generated *.ring.gz files to /etc/swift on every storage node.

scp *.ring.gz [email protected]:/etc/swift/
Final Steps

The swift.conf config file needs to be copied to the controller node and all storage nodes.
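The article does not show swift.conf itself; here is a minimal sketch following the official install guide. The two hash values are placeholders and must be replaced with your own unique random strings before deploying:

filename:/etc/swift/swift.conf

[swift-hash]
# unique, secret values; never change them after deployment
swift_hash_path_suffix = CHANGEME_SUFFIX
swift_hash_path_prefix = CHANGEME_PREFIX

[storage-policy:0]
name = Policy-0
default = yes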
Then set ownership on all nodes:

chown -R root:swift /etc/swift

Start on the controller node:

service memcached restart
service swift-proxy restart

Start the services on the storage node:

systemctl enable swift-account.service swift-account-auditor.service swift-account-reaper.service swift-account-replicator.service
systemctl start swift-account.service swift-account-auditor.service swift-account-reaper.service swift-account-replicator.service

systemctl enable swift-container.service swift-container-auditor.service swift-container-replicator.service swift-container-updater.service
systemctl start swift-container.service swift-container-auditor.service swift-container-replicator.service swift-container-updater.service

systemctl enable swift-object.service swift-object-auditor.service swift-object-replicator.service swift-object-updater.service
systemctl start swift-object.service swift-object-auditor.service swift-object-replicator.service swift-object-updater.service

Run the following command to verify:

swift stat
Errors

503
Checking the controller node's log /var/log/syslog shows errors like the following:

May 23 10:24:11 controller1 proxy-server: ERROR Insufficient Storage 172.23.47.30:6202/loop6 (txn: tx36fac85032214835994e6-00628af04b)
May 23 10:24:11 controller1 proxy-server: Account HEAD returning 503 for [507] (txn: tx36fac85032214835994e6-00628af04b)
May 23 10:24:11 controller1 proxy-server: - - 23/May/2022/02/24/11 HEAD /v1/AUTH_064d9534a0fa4e97844f4766b7b7786a%3Fformat%3Djson HTTP/1.0 503 - Swift - - - - tx36fac85032214835994e6-00628af04b - 0.0084 RL - 1653272651.345141411 1653272651.353558302 -
May 23 10:24:11 controller1 proxy-server: Account HEAD returning 503 for [] (txn: tx36fac85032214835994e6-00628af04b) (client_ip: 192.168.112.10)
May 23 10:24:11 controller1 proxy-server: 192.168.112.10 192.168.112.10 23/May/2022/02/24/11 HEAD /v1/AUTH_064d9534a0fa4e97844f4766b7b7786a%3Fformat%3Djson HTTP/1.0 503 - python-swiftclient-3.13.1 gAAAAABiiu5z3EHN... - - - tx36fac85032214835994e6-00628af04b - 0.5529 - - 1653272651.336066008 1653272651.889011860 -
May 23 10:24:12 controller1 proxy-server: Account HEAD returning 503 for [] (txn: tx3633142ed25d4f0eb57eb-00628af04c)
May 23 10:24:12 controller1 proxy-server: - - 23/May/2022/02/24/12 HEAD /v1/AUTH_064d9534a0fa4e97844f4766b7b7786a%3Fformat%3Djson HTTP/1.0 503 - Swift - - - - tx3633142ed25d4f0eb57eb-00628af04c - 0.0014 RL - 1653272652.895800829 1653272652.897246361 -
May 23 10:24:12 controller1 proxy-server: Account HEAD returning 503 for [] (txn: tx3633142ed25d4f0eb57eb-00628af04c) (client_ip: 192.168.112.10)
May 23 10:24:12 controller1 proxy-server: 192.168.112.10 192.168.112.10 23/May/2022/02/24/12 HEAD /v1/AUTH_064d9534a0fa4e97844f4766b7b7786a%3Fformat%3Djson HTTP/1.0 503 - python-swiftclient-3.13.1 gAAAAABiiu5z3EHN... - - - tx3633142ed25d4f0eb57eb-00628af04c - 0.0058 - - 1653272652.894711971 1653272652.900510550 -
May 23 10:24:14 controller1 proxy-server: Account HEAD returning 503 for [] (txn: txda53cbac119d4bbbb2334-00628af04e)
May 23 10:24:14 controller1 proxy-server: - - 23/May/2022/02/24/14 HEAD /v1/AUTH_064d9534a0fa4e97844f4766b7b7786a%3Fformat%3Djson HTTP/1.0 503 - Swift - - - - txda53cbac119d4bbbb2334-00628af04e - 0.0014 RL - 1653272654.906485319 1653272654.907923698 -
May 23 10:24:14 controller1 proxy-server: Account HEAD returning 503 for [] (txn: txda53cbac119d4bbbb2334-00628af04e) (client_ip: 192.168.112.10)

It seems to be complaining that loop6 has a problem. Checking the storage node's log,
/var/log/syslog, it says /srv/node/loop6 is not mounted, but mine should be /srv/node/sdb1; maybe this path is the issue?

May 23 12:01:10 storage account-replicator: diff:0 diff_capped:0 empty:0 hashmatch:0 no_change:0 remote_merge:0 rsync:0 ts_repl:0
May 23 12:01:18 storage object-auditor: Begin object audit "forever" mode (ZBF)
May 23 12:01:18 storage object-auditor: Object audit (ZBF) "forever" mode completed: 0.00s. Total quarantined: 0, Total errors: 0, Total files/sec: 0.00, Total bytes/sec: 0.00, Auditing time: 0.00, Rate: 0.00
May 23 12:01:24 storage object-auditor: Begin object audit "forever" mode (ZBF)
May 23 12:01:24 storage object-auditor: Object audit (ZBF) "forever" mode completed: 0.00s. Total quarantined: 0, Total errors: 0, Total files/sec: 0.00, Total bytes/sec: 0.00, Auditing time: 0.00, Rate: 0.00
May 23 12:01:24 storage object-auditor: Begin object audit "forever" mode (ALL)
May 23 12:01:24 storage object-auditor: Object audit (ALL) "forever" mode completed: 0.00s. Total quarantined: 0, Total errors: 0, Total files/sec: 0.00, Total bytes/sec: 0.00, Auditing time: 0.00, Rate: 0.00
May 23 12:01:26 storage container-replicator: Skipping: /srv/node/loop6 is not mounted
May 23 12:01:26 storage container-replicator: Beginning replication run
May 23 12:01:26 storage container-replicator: Replication run OVER
May 23 12:01:26 storage container-replicator: Attempted to replicate 0 dbs in 0.00090 seconds (0.00000/s)
May 23 12:01:26 storage container-replicator: Removed 0 dbs
May 23 12:01:26 storage container-replicator: 0 successes, 1 failures
May 23 12:01:26 storage container-replicator: diff:0 diff_capped:0 empty:0 hashmatch:0 no_change:0 remote_merge:0 rsync:0 ts_repl:0

This is in fact a mount-point problem: the controller's configuration looks for the mount point /srv/node/loop6, while the storage node is actually mounted at /mnt/node/sdb1, so they do not match. Fix the storage node's mount point, then rebuild the three rings with the device changed from loop6 to sdb1.
Restart the services and it is fixed.

And just like that it inexplicably started working.
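Once swift stat responds, a quick smoke test (the container and file names are my own):

swift post test-container
echo "hello swift" > hello.txt
swift upload test-container hello.txt
swift list test-container
swift download test-container hello.txt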
While restarting swift-proxy there were errors that look like a keystone/swift compatibility mismatch, but I cannot see where the actual problem is:

May 22 00:17:23 controller1 proxy-server: Pipeline was modified. New pipeline is "catch_errors gatekeeper healthcheck proxy-logging cache listing_formats container_sync bulk ratelimit authtoken keystoneauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server".
May 22 00:17:23 controller1 proxy-server: Starting Keystone auth_token middleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "bind_port" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "keep_idle" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "user" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "swift_dir" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "log_name" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "auth_url" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "project_domain_id" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "user_domain_id" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "project_name" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "username" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "password" is not known to keystonemiddleware
May 22 00:17:23 controller1 proxy-server: STDERR: The option "__name__" is not known to keystonemiddleware

Kuryr-libnetwork

The official documentation is here.

Controller Node

Just create a user:

openstack user create --domain default --password 12345678 kuryr
openstack role add --project service --user kuryr admin
Compute Nodes

Create the user and group:

groupadd --system kuryr
useradd --home-dir "/var/lib/kuryr" \
      --create-home \
      --system \
      --shell /bin/false \
      -g kuryr \
      kuryr
mkdir -p /etc/kuryr
chown kuryr:kuryr /etc/kuryr

Then install the dependencies and install kuryr itself.
A PBR-related error may appear here; if so, just set an environment variable: export PBR_VERSION=1.2.3
Downloading directly can be unreliable, so you can also download in advance and copy it in.

apt-get install python3-pip
cd /var/lib/kuryr
git clone -b master https://opendev.org/openstack/kuryr-libnetwork.git
chown -R kuryr:kuryr kuryr-libnetwork
cd kuryr-libnetwork
pip3 install -r requirements.txt
python3 setup.py install

Generate the config files:

su -s /bin/sh -c "./tools/generate_config_file_samples.sh" kuryr
su -s /bin/sh -c "cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf" kuryr

Edit the config file /etc/kuryr/kuryr.conf; the region needs to be specified here:

[DEFAULT]
bindir = /usr/local/libexec/kuryr

[neutron]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
username = kuryr
user_domain_name = Default
password = 12345678
project_name = service
project_domain_name = Default
auth_type = password
region_name=RegionThree

Create the service file /etc/systemd/system/kuryr-libnetwork.service:

[Unit]
Description = Kuryr-libnetwork - Docker network plugin for Neutron

[Service]
ExecStart = /usr/local/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf
CapabilityBoundingSet = CAP_NET_ADMIN
AmbientCapabilities = CAP_NET_ADMIN

[Install]
WantedBy = multi-user.target

Start the service:

systemctl enable kuryr-libnetwork
systemctl start kuryr-libnetwork
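Once docker is running on the node (it is installed in the Zun prerequisites below) and is configured to use the kuryr plugin, kuryr can be exercised by creating a docker network backed by Neutron; the subnet here is just an example borrowed from the upstream guide:

docker network create --driver kuryr --ipam-driver kuryr \
      --subnet 10.10.0.0/16 --gateway 10.10.0.1 test_net
docker network ls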

Zun

Prerequisites

Some prerequisites need to be installed here, such as docker and etcd.
Docker
Install it directly:

apt install docker-ce docker-ce-cli containerd.io -y --allow-unauthenticated

Start the services:

systemctl enable docker
systemctl enable containerd
systemctl start docker
systemctl start containerd
Controller Node

First initialize the database:

CREATE DATABASE zun;
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'localhost' IDENTIFIED BY '12345678';
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'%' IDENTIFIED BY '12345678';

Create the user and service:

openstack user create --domain default --password 12345678 zun
openstack role add --project service --user zun admin
openstack service create --name zun --description "Container Service" container

Create the endpoints:

openstack endpoint create --region RegionThree container public http://subcontroller1:9517/v1
openstack endpoint create --region RegionThree container admin http://subcontroller1:9517/v1
openstack endpoint create --region RegionThree container internal http://subcontroller1:9517/v1

Create the system user and default directories:

groupadd --system zun
useradd --home-dir "/var/lib/zun" \
      --create-home \
      --system \
      --shell /bin/false \
      -g zun \
      zun
mkdir -p /etc/zun
chown zun:zun /etc/zun

Install some essentials:

apt-get install python3-pip git -y --allow-unauthenticated

Install from source. You can download it here, but that is usually slow, so I downloaded it in advance and copied it over:

cd /var/lib/zun
#git clone https://opendev.org/openstack/zun.git
cp /xxx/zun-stable-yoga.tar.gz /var/lib/zun
tar -vxzf zun-stable-yoga.tar.gz

chown -R zun:zun zun
cd zun
export PBR_VERSION=1.2.3
pip3 install -r requirements.txt
python3 setup.py install

Generate the config files:

su -s /bin/sh -c "oslo-config-generator --config-file etc/zun/zun-config-generator.conf" zun
su -s /bin/sh -c "cp etc/zun/zun.conf.sample /etc/zun/zun.conf" zun
su -s /bin/sh -c "cp etc/zun/api-paste.ini /etc/zun" zun

Edit the config file /etc/zun/zun.conf.
The host_ip under [api] is best set to something reachable from outside. At first I used the internal 172 address, so the service listened on 172, but requests were sent to subcontroller1, which resolves to the 192 network; they did not match, so nothing could connect, and I simply changed it to 0.0.0.0.

[DEFAULT]
transport_url = rabbit://openstack:12345678@controller1

[api]
host_ip = 0.0.0.0
port = 9517

[database]
connection = mysql+pymysql://zun:12345678@controller1/zun

[keystone_auth]
memcached_servers = controller1:11211
www_authenticate_uri = http://controller:5000
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 12345678
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller1:11211
www_authenticate_uri = http://controller:5000
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 12345678
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[oslo_concurrency]
lock_path = /var/lib/zun/tmp

[oslo_messaging_notifications]
driver = messaging

[websocket_proxy]
wsproxy_host = 172.23.47.10
wsproxy_port = 6784
base_url = ws://controller1:6784/

Sync the database:

su -s /bin/sh -c "zun-db-manage upgrade" zun

Create the service file /etc/systemd/system/zun-api.service:

[Unit]
Description = OpenStack Container Service API

[Service]
ExecStart = /usr/local/bin/zun-api
User = zun

[Install]
WantedBy = multi-user.target

Create the service file /etc/systemd/system/zun-wsproxy.service:

[Unit]
Description = OpenStack Container Service Websocket Proxy

[Service]
ExecStart = /usr/local/bin/zun-wsproxy
User = zun

[Install]
WantedBy = multi-user.target

Start the services and check their status:

systemctl enable zun-api
systemctl enable zun-wsproxy

systemctl start zun-api
systemctl start zun-wsproxy

systemctl status zun-api
systemctl status zun-wsproxy
An Error Appears

ValueError: There must be at least one plugin active.
Fix:

cp -r /var/lib/zun/zun/zun/db/sqlalchemy/alembic /usr/local/lib/python3.8/dist-packages/zun/db/sqlalchemy/
cp /var/lib/zun/zun/zun/db/sqlalchemy/alembic.ini /usr/local/lib/python3.8/dist-packages/zun/db/sqlalchemy/
Compute Nodes

Create the system user and directories:

groupadd --system zun
useradd --home-dir "/var/lib/zun" \
      --create-home \
      --system \
      --shell /bin/false \
      -g zun \
      zun
mkdir -p /etc/zun
chown zun:zun /etc/zun
mkdir -p /etc/cni/net.d
chown zun:zun /etc/cni/net.d

Install the software:

apt-get install python3-pip git numactl

Install from source. You can download it here, but that is usually slow, so I downloaded it in advance and copied it over:

cd /var/lib/zun
#git clone https://opendev.org/openstack/zun.git
cp /xxx/zun-stable-yoga.tar.gz /var/lib/zun
tar -vxzf zun-stable-yoga.tar.gz

chown -R zun:zun zun
cd zun
export PBR_VERSION=1.2.3
pip3 install -r requirements.txt
python3 setup.py install

Generate the config files; if permissions are insufficient, just cp the files over directly:

su -s /bin/sh -c "oslo-config-generator --config-file etc/zun/zun-config-generator.conf" zun
su -s /bin/sh -c "cp etc/zun/zun.conf.sample /etc/zun/zun.conf" zun
su -s /bin/sh -c "cp etc/zun/rootwrap.conf /etc/zun/rootwrap.conf" zun
su -s /bin/sh -c "mkdir -p /etc/zun/rootwrap.d" zun
su -s /bin/sh -c "cp etc/zun/rootwrap.d/* /etc/zun/rootwrap.d/" zun
su -s /bin/sh -c "cp etc/cni/net.d/* /etc/cni/net.d/" zun

I am not sure what this step is for (it grants the zun user passwordless sudo access to the rootwrap helper):

echo "zun ALL=(root) NOPASSWD: /usr/local/bin/zun-rootwrap \
    /etc/zun/rootwrap.conf *" | sudo tee /etc/sudoers.d/zun-rootwrap

Edit the config file /etc/zun/zun.conf:

[DEFAULT]
transport_url = rabbit://openstack:12345678@controller1
state_path = /var/lib/zun

[database]
connection = mysql+pymysql://zun:12345678@controller1/zun

[keystone_auth]
memcached_servers = controller1:11211
www_authenticate_uri = http://controller:5000
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 12345678
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller1:11211
www_authenticate_uri= http://controller:5000
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 12345678
username = zun
auth_url = http://controller:5000
auth_type = password

[oslo_concurrency]
lock_path = /var/lib/zun/tmp

[compute]
host_shared_with_nova = true

Set ownership again:

chown zun:zun /etc/zun/zun.conf

Configure docker and kuryr:

mkdir -p /etc/systemd/system/docker.service.d

Edit /etc/systemd/system/docker.service.d/docker.conf:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller1:2379

Restart docker:

systemctl daemon-reload
systemctl restart docker

Edit the config file /etc/kuryr/kuryr.conf:

[DEFAULT]
capability_scope = global
process_external_connectivity = False

Restart the service:

systemctl restart kuryr-libnetwork

containerd
Generate the containerd config file:

containerd config default > /etc/containerd/config.toml

Edit the config file /etc/containerd/config.toml; obtain the group ID with getent group zun | cut -d: -f3:

[grpc]
gid = ZUN_GROUP_ID

Set ownership and restart the service:

chown zun:zun /etc/containerd/config.toml
systemctl restart containerd

cni
I am not sure what this is for (it appears to be the loopback CNI plugin that zun's CNI-based networking relies on).
This can also be downloaded in advance; the command just extracts loopback into the specified path. It will complain about an environment variable problem, but that apparently can be ignored.

mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz \
      | tar -C /opt/cni/bin -xzvf - ./loopback

Install the zun-cni binary:

install -o zun -m 0555 -D /usr/local/bin/zun-cni /opt/cni/bin/zun-cni

Create the service file /etc/systemd/system/zun-compute.service:

[Unit]
Description = OpenStack Container Service Compute Agent

[Service]
ExecStart = /usr/local/bin/zun-compute
User = zun

[Install]
WantedBy = multi-user.target

Create the service file /etc/systemd/system/zun-cni-daemon.service:

[Unit]
Description = OpenStack Container Service CNI daemon

[Service]
ExecStart = /usr/local/bin/zun-cni-daemon
User = zun

[Install]
WantedBy = multi-user.target

Start the services:

systemctl enable zun-compute
systemctl start zun-compute

systemctl enable zun-cni-daemon
systemctl start zun-cni-daemon

systemctl status zun-compute
systemctl status zun-cni-daemon
Verification

For verification, the client needs to be installed:

pip3 install python-zunclient

Run openstack appcontainer service list:

root@controller1 (admin) install-scripts : # openstack appcontainer service list
+----+----------+-------------+-------+----------+-----------------+----------------------------+-------------------+
| Id | Host     | Binary      | State | Disabled | Disabled Reason | Updated At                 | Availability Zone |
+----+----------+-------------+-------+----------+-----------------+----------------------------+-------------------+
|  1 | compute1 | zun-compute | up    | False    | None            | 2022-05-22T17:26:00.000000 | nova              |
|  2 | compute2 | zun-compute | up    | False    | None            | 2022-05-22T17:26:29.000000 | nova              |
|  3 | compute3 | zun-compute | up    | False    | None            | 2022-05-22T17:26:15.000000 | nova              |
+----+----------+-------------+-------+----------+-----------------+----------------------------+-------------------+
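As a further check, you can launch a test container; the container name and the selfservice network are assumptions (use whatever network exists in your cluster):

openstack appcontainer run --name test --net network=selfservice cirros ping -c 4 8.8.8.8
openstack appcontainer list
openstack appcontainer show test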
