This guide includes the steps to create a multi-node HA OpenStack cloud, with Ceph as the Glance and Cinder backend, Swift as the object store, and Open vSwitch as the Quantum plugin.
There are 4 types of nodes: Controller, Network, Compute and Swift.
Architecture:
IP address and disk allocation:
Host name    | HW model | Role           | em1 (external)  | em2 (mgmt)   | em3 (vm traffic) | em4 (storage)
R710-1       | R710     | Swift          | 119.100.200.134 | 10.10.10.1   |                  |
R710-2       | R710     | Controller_bak | 119.100.200.133 | 10.10.10.2   |                  | 10.30.30.2
R710-3       | R710     | Controller     | 119.100.200.132 | 10.10.10.3   |                  | 10.30.30.3
R610-4       | R610     | Network        | 119.100.200.131 | 10.10.10.4   | 10.20.20.4       | 10.30.30.4
R610-5       | R610     | Network_bak    | 119.100.200.130 | 10.10.10.5   | 10.20.20.5       | 10.30.30.5
R710L-6      | R710L    | Compute        | 119.100.200.135 | 10.10.10.6   | 10.20.20.6       | 10.30.30.6
R710-7       | R710     | Compute        | 119.100.200.136 | 10.10.10.7   | 10.20.20.7       | 10.30.30.7
R710-8       | R710     | Compute        | 119.100.200.137 | 10.10.10.8   | 10.20.20.8       | 10.30.30.8
R710L-9      | R710L    | Compute        | 119.100.200.138 | 10.10.10.9   | 10.20.20.9       | 10.30.30.9
VIP-APIs     |          | VIP            | 119.100.200.143 | 10.10.10.200 |                  |
VIP-Mysql    |          | VIP            |                 | 10.10.10.100 |                  |
VIP-Rabbitmq |          | VIP            |                 | 10.10.10.101 |                  |
#Append the Ceph node names (storage network) to /etc/hosts:
10.30.30.3 R710-3
10.30.30.2 R710-2
10.30.30.5 R610-5
|
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
|
apt-get install -y ntp
|
echo "server 10.10.10.3" >> /etc/ntp.conf
echo "server 10.10.10.2" >> /etc/ntp.conf
service ntp restart
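A quick way to confirm the node is actually syncing from the controllers (ntpq ships with the ntp package installed above):
ntpq -p
#10.10.10.3 and 10.10.10.2 should be listed as peers; an asterisk marks the currently selected source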
|
apt-get install -y vlan bridge-utils
|
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p
|
auto em1
iface em1 inet static
address 119.100.200.131
netmask 255.255.255.224
gateway 119.100.200.129
dns-nameservers 8.8.8.8
#Openstack management
auto em2
iface em2 inet static
address 10.10.10.5
netmask 255.255.255.0
#VM traffic
auto em3
iface em3 inet static
address 10.20.20.5
netmask 255.255.255.0
#Storage network for ceph
auto em4
iface em4 inet static
address 10.30.30.5
netmask 255.255.255.0
|
service networking restart
|
apt-get install -y openvswitch-switch openvswitch-datapath-dkms
|
#br-int will be used for VM integration
ovs-vsctl add-br br-int
#br-ex is used to make VMs accessible from the external network
ovs-vsctl add-br br-ex
#br-em3 is used to carry internal VM traffic
ovs-vsctl add-br br-em3
ovs-vsctl add-port br-em3 em3
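To confirm the bridges were created as intended, list them and check that em3 is attached to br-em3:
#br-int, br-ex and br-em3 should all appear in the output
ovs-vsctl show
ovs-vsctl list-ports br-em3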
|
apt-get -y install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent
|
[DEFAULT]
auth_strategy = keystone
rabbit_host = 10.10.10.101
rabbit_password=yourpassword
[keystone_authtoken]
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword
signing_dir = /var/lib/quantum/keystone-signing
|
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword
|
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:1100
integration_bridge = br-int
bridge_mappings = physnet1:br-em3
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
|
[DEFAULT]
auth_url = http://10.10.10.200:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword
nova_metadata_ip = 10.10.10.200
nova_metadata_port = 8775
metadata_proxy_shared_secret = howard
|
Defaults:quantum !requiretty
quantum ALL=NOPASSWD: ALL
|
cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
|
auto em1
iface em1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
|
#Internet connectivity will be lost after this step, but this won't affect OpenStack's operation
ovs-vsctl add-port br-ex em1
|
auto br-ex
iface br-ex inet static
address 119.100.200.130
netmask 255.255.255.224
gateway 119.100.200.129
dns-nameservers 8.8.8.8
|
service networking restart
cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
|
apt-get install -y keepalived haproxy
|
#Edit /etc/default/haproxy and leave HAProxy disabled at boot (Pacemaker will manage it):
ENABLED=0
|
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
#log loghost local0 info
maxconn 4096
#chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
#debug
#quiet
defaults
log global
maxconn 8000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
listen dashboard_cluster
bind 10.10.10.200:80
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:80 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:80 check inter 2000 rise 2 fall 5
listen dashboard_cluster_internet
bind 119.100.200.143:80
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:80 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:80 check inter 2000 rise 2 fall 5
listen glance_api_cluster
bind 10.10.10.200:9292
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:9292 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:9292 check inter 2000 rise 2 fall 5
listen glance_api_internet_cluster
bind 119.100.200.143:9292
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:9292 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:9292 check inter 2000 rise 2 fall 5
listen glance_registry_cluster
bind 10.10.10.200:9191
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:9191 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:9191 check inter 2000 rise 2 fall 5
listen glance_registry_internet_cluster
bind 119.100.200.143:9191
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:9191 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:9191 check inter 2000 rise 2 fall 5
listen keystone_admin_cluster
bind 10.10.10.200:35357
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:35357 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:35357 check inter 2000 rise 2 fall 5
listen keystone_internal_cluster
bind 10.10.10.200:5000
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:5000 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:5000 check inter 2000 rise 2 fall 5
listen keystone_public_cluster
bind 119.100.200.143:5000
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:5000 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:5000 check inter 2000 rise 2 fall 5
listen memcached_cluster
bind 10.10.10.200:11211
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:11211 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:11211 check inter 2000 rise 2 fall 5
listen nova_compute_api1_cluster
bind 10.10.10.200:8773
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:8773 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8773 check inter 2000 rise 2 fall 5
listen nova_compute_api1_internet_cluster
bind 119.100.200.143:8773
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:8773 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8773 check inter 2000 rise 2 fall 5
listen nova_compute_api2_cluster
bind 10.10.10.200:8774
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:8774 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8774 check inter 2000 rise 2 fall 5
listen nova_compute_api2_internet_cluster
bind 119.100.200.143:8774
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:8774 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8774 check inter 2000 rise 2 fall 5
listen nova_compute_api3_cluster
bind 10.10.10.200:8775
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:8775 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8775 check inter 2000 rise 2 fall 5
listen nova_compute_api3_internet_cluster
bind 119.100.200.143:8775
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:8775 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8775 check inter 2000 rise 2 fall 5
listen cinder_api_cluster
bind 10.10.10.200:8776
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:8776 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8776 check inter 2000 rise 2 fall 5
listen cinder_api_internet_cluster
bind 119.100.200.143:8776
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:8776 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:8776 check inter 2000 rise 2 fall 5
listen novnc_cluster
bind 10.10.10.200:6080
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:6080 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:6080 check inter 2000 rise 2 fall 5
listen novnc_internet_cluster
bind 119.100.200.143:6080
balance source
option tcpka
option tcplog
server R710-3 10.10.10.3:6080 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:6080 check inter 2000 rise 2 fall 5
listen quantum_api_cluster
bind 10.10.10.200:9696
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:9696 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:9696 check inter 2000 rise 2 fall 5
listen quantum_api_internet_cluster
bind 119.100.200.143:9696
balance source
option tcpka
option httpchk
option tcplog
server R710-3 10.10.10.3:9696 check inter 2000 rise 2 fall 5
server R710-2 10.10.10.2:9696 check inter 2000 rise 2 fall 5
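Before handing HAProxy over to Pacemaker, it's worth validating the configuration syntax; a quick sanity check, assuming the file is saved as /etc/haproxy/haproxy.cfg:
haproxy -c -f /etc/haproxy/haproxy.cfg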
|
service haproxy stop
|
apt-get install pacemaker corosync
|
corosync-keygen
#Copy generated key to another node
scp /etc/corosync/authkey R610-4:/etc/corosync/authkey
|
rrp_mode: active
interface {
# The following values need to be set based on your environment
ringnumber: 0
bindnetaddr: 10.10.10.5
mcastaddr: 226.94.1.3
mcastport: 5405
}
interface {
# The following values need to be set based on your environment
ringnumber: 1
bindnetaddr: 10.30.30.5
mcastaddr: 226.94.1.4
mcastport: 5405
}
|
#Edit /etc/default/corosync
START=yes
service corosync start
|
crm_mon
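Besides crm_mon, you can check that both redundant rings are healthy with corosync's own tool:
corosync-cfgtool -s
#Both ring 0 (10.10.10.x) and ring 1 (10.30.30.x) should report no faults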
|
cd /usr/lib/ocf/resource.d/heartbeat
wget https://raw.github.com/russki/cluster-agents/master/haproxy
chmod 755 haproxy
|
crm configure
property stonith-enabled=false
property no-quorum-policy=ignore
rsc_defaults resource-stickiness=100
rsc_defaults failure-timeout=0
rsc_defaults migration-threshold=10
property pe-warn-series-max="1000"
property pe-input-series-max="1000"
property pe-error-series-max="1000"
property cluster-recheck-interval="5min"
primitive vip-mgmt ocf:heartbeat:IPaddr2 params ip=10.10.10.200 cidr_netmask=24 op monitor interval=5s
primitive vip-internet ocf:heartbeat:IPaddr2 params ip=119.100.200.143 cidr_netmask=27 op monitor interval=5s
primitive haproxy ocf:heartbeat:haproxy params conffile="/etc/haproxy/haproxy.cfg" op monitor interval="5s"
colocation haproxy-with-vips INFINITY: haproxy vip-mgmt vip-internet
order haproxy-after-IP mandatory: vip-mgmt vip-internet haproxy
verify
commit
#Check pacemaker resource running
crm_mon -1
|
We use the R610-5 node as the 3rd Ceph monitor node.
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
apt-get update -y
apt-get install ceph
|
#R610-5
mkdir /var/lib/ceph/mon/ceph-c
|
During disk partitioning, leave around 200GB of free space in the volume group; we need it for the MySQL and RabbitMQ DRBD resources.
#Append the Ceph node names (storage network) to /etc/hosts:
10.30.30.3 R710-3
10.30.30.2 R710-2
10.30.30.5 R610-5
|
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
|
apt-get install -y ntp
|
#use 10.10.10.3 for controller_bak node
echo "server 10.10.10.2" >> /etc/ntp.conf
service ntp restart
|
apt-get install -y vlan bridge-utils
|
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p
|
auto em1
iface em1 inet static
address 119.100.200.132
netmask 255.255.255.224
gateway 119.100.200.129
dns-nameservers 8.8.8.8
#Openstack management
auto em2
iface em2 inet static
address 10.10.10.3
netmask 255.255.255.0
#Storage network for ceph
auto em4
iface em4 inet static
address 10.30.30.3
netmask 255.255.255.0
|
service networking restart
|
apt-get install -y mysql-server python-mysqldb
|
Make sure mysql has the same UID and GID on both controller nodes; if they differ, change them to match. |
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
|
#Comment following line:
#start on runlevel [2345]
|
service mysql stop
|
apt-get install rabbitmq-server
|
Make sure rabbitmq has the same UID and GID on both controller nodes; if they differ, change them to match. |
update-rc.d -f rabbitmq-server remove
|
service rabbitmq-server stop
|
apt-get install drbd8-utils xfsprogs
|
update-rc.d -f drbd remove
|
lvcreate R710-3-vg -n drbd0 -L 100G
lvcreate R710-3-vg -n drbd1 -L 10G
#Replace the VG name with R710-2-vg on node R710-2
|
modprobe drbd
#Add drbd to /etc/modules
echo "drbd" >> /etc/modules
|
resource drbd-mysql {
  device /dev/drbd0;
  meta-disk internal;
  on R710-2 {
    address 10.30.30.2:7788;
    disk /dev/mapper/R710--2--vg-drbd0;
  }
  on R710-3 {
    address 10.30.30.3:7788;
    disk /dev/mapper/R710--3--vg-drbd0;
  }
  syncer {
    rate 40M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
}
|
resource drbd-rabbitmq {
  device /dev/drbd1;
  meta-disk internal;
  on R710-2 {
    address 10.30.30.2:7789;
    disk /dev/mapper/R710--2--vg-drbd1;
  }
  on R710-3 {
    address 10.30.30.3:7789;
    disk /dev/mapper/R710--3--vg-drbd1;
  }
  syncer {
    rate 40M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
}
|
#[Both nodes] Once the configuration file is saved, we can test its correctness as follows:
drbdadm dump drbd-mysql
drbdadm dump drbd-rabbitmq
#[Both nodes] Create the metadata as follows:
drbdadm create-md drbd-mysql
drbdadm create-md drbd-rabbitmq
#[Both nodes] Bring the resources up:
drbdadm up drbd-mysql
drbdadm up drbd-rabbitmq
#[Both nodes] Check that the two DRBD nodes are communicating; the data will show as inconsistent because no initial synchronization has been made:
drbd-overview
#And the result will be similar to:
0:drbd-mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
|
#Do this on 1st node only:
drbdadm -- --overwrite-data-of-peer primary drbd-mysql
drbdadm -- --overwrite-data-of-peer primary drbd-rabbitmq
#And this should show something similar to:
0:drbd-mysql Connected Secondary/Secondary UpToDate/UpToDate C r-----
|
#Do this on 1st node only:
mkfs -t xfs /dev/drbd0
mkfs -t xfs /dev/drbd1
|
#Do following on 1st node only
#mysql
mkdir /mnt/mysql
chown mysql:mysql /mnt/mysql
mount /dev/drbd0 /mnt/mysql
mv /var/lib/mysql/* /mnt/mysql
umount /mnt/mysql
#rabbitmq
mkdir /mnt/rabbitmq
chown rabbitmq:rabbitmq /mnt/rabbitmq
scp -p /var/lib/rabbitmq/.erlang.cookie R710-2:/var/lib/rabbitmq/
ssh R710-2 chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
mount /dev/drbd1 /mnt/rabbitmq
cp -a /var/lib/rabbitmq/.erlang.cookie /mnt/rabbitmq
umount /mnt/rabbitmq
|
#Check the output of drbd-overview; run the following only after the synchronization has finished.
drbdadm secondary drbd-mysql
drbdadm secondary drbd-rabbitmq
|
apt-get install pacemaker corosync
|
corosync-keygen
#Copy generated key to another node
scp /etc/corosync/authkey R710-2:/etc/corosync/authkey
|
rrp_mode: active
interface {
# The following values need to be set based on your environment
ringnumber: 0
bindnetaddr: 10.10.10.2
mcastaddr: 226.94.1.1
mcastport: 5405
}
interface {
# The following values need to be set based on your environment
ringnumber: 1
bindnetaddr: 10.30.30.2
mcastaddr: 226.94.1.2
mcastport: 5405
}
|
#Edit /etc/default/corosync
START=yes
service corosync start
|
crm_mon
|
crm configure
property stonith-enabled=false
property no-quorum-policy=ignore
rsc_defaults resource-stickiness=100
primitive drbd-mysql ocf:linbit:drbd \
params drbd_resource="drbd-mysql" \
op monitor interval="50s" role="Master" timeout="30s" \
op monitor interval="60s" role="Slave" timeout="30s"
primitive fs-mysql ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/mnt/mysql" fstype="xfs" \
meta target-role="Started"
primitive mysql ocf:heartbeat:mysql \
params config="/etc/mysql/my.cnf" datadir="/mnt/mysql" binary="/usr/bin/mysqld_safe" pid="/var/run/mysqld/mysqld.pid" socket="/var/run/mysqld/mysqld.sock" log="/var/log/mysql/mysql.log" additional_parameters="--bind-address=10.10.10.100" \
op start interval="0" timeout="120s" \
op stop interval="0" timeout="120s" \
op monitor interval="15s" \
meta target-role="Started"
primitive vip-mysql ocf:heartbeat:IPaddr2 \
params ip="10.10.10.100" cidr_netmask="24"
group g-mysql fs-mysql vip-mysql mysql
ms ms-drbd-mysql drbd-mysql \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" is-managed="true" target-role="Started"
colocation c-fs-mysql-on-drbd inf: g-mysql ms-drbd-mysql:Master
order o-drbd-before-fs-mysql inf: ms-drbd-mysql:promote g-mysql:start
verify
commit
#Check pacemaker resource running
crm_mon
|
crm configure
primitive drbd-rabbitmq ocf:linbit:drbd \
params drbd_resource="drbd-rabbitmq" \
op monitor interval="50s" role="Master" timeout="30s" \
op monitor interval="60s" role="Slave" timeout="30s"
primitive fs-rabbitmq ocf:heartbeat:Filesystem \
params device="/dev/drbd1" directory="/mnt/rabbitmq" fstype="xfs" \
meta target-role="Started"
primitive rabbitmq ocf:rabbitmq:rabbitmq-server \
params mnesia_base="/mnt/rabbitmq"
primitive vip-rabbitmq ocf:heartbeat:IPaddr2 \
params ip="10.10.10.101" cidr_netmask="24"
group g-rabbitmq fs-rabbitmq vip-rabbitmq rabbitmq
ms ms-drbd-rabbitmq drbd-rabbitmq \
meta notify="true" master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
colocation c-fs-rabbitmq-on-drbd inf: g-rabbitmq ms-drbd-rabbitmq:Master
order o-drbd-before-fs-rabbitmq inf: ms-drbd-rabbitmq:promote g-rabbitmq:start
verify
commit
#Check pacemaker resource running
crm_mon
|
#Check rabbitmq is running on which node by:
crm_mon -1
#Change the password on the active rabbitmq node:
rabbitmqctl change_password guest yourpassword
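As a quick sanity check after the password change, confirm the broker is responding and guest is still the only listed user:
rabbitmqctl status
rabbitmqctl list_users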
|
#Connect to mysql via its VIP:
mysql -u root -p -h 10.10.10.100
#Keystone
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'yourpassword';
#Glance
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'yourpassword';
#Quantum
CREATE DATABASE quantum;
GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'yourpassword';
#Nova
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'yourpassword';
#Cinder
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'yourpassword';
quit;
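As a quick sanity check, each service account should now be able to reach its database through the VIP; for example, for keystone:
mysql -u keystone -pyourpassword -h 10.10.10.100 -e "SHOW DATABASES;"
#The output should include the keystone database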
|
The 2 controller nodes act as Ceph monitor (MON) and storage (OSD) nodes.
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
apt-get update -y
apt-get install ceph python-ceph
|
#R710-3
ssh-keygen -N '' -f ~/.ssh/id_rsa
ssh-copy-id R710-2
ssh-copy-id R610-5
|
#R710-3
mkdir /var/lib/ceph/osd/ceph-11
mkdir /var/lib/ceph/osd/ceph-12
mkdir /var/lib/ceph/osd/ceph-13
mkdir /var/lib/ceph/osd/ceph-14
mkdir /var/lib/ceph/mon/ceph-a
parted /dev/sdb mklabel msdos
parted /dev/sdc mklabel msdos
parted /dev/sdd mklabel msdos
parted /dev/sde mklabel msdos
#R710-2
mkdir /var/lib/ceph/osd/ceph-21
mkdir /var/lib/ceph/osd/ceph-22
mkdir /var/lib/ceph/osd/ceph-23
mkdir /var/lib/ceph/osd/ceph-24
mkdir /var/lib/ceph/mon/ceph-b
parted /dev/sdb mklabel msdos
parted /dev/sdc mklabel msdos
parted /dev/sdd mklabel msdos
parted /dev/sde mklabel msdos
|
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
[osd]
osd journal size = 1000
osd mkfs type = xfs
[mon.a]
host = R710-3
mon addr = 10.30.30.3:6789
[mon.b]
host = R710-2
mon addr = 10.30.30.2:6789
[mon.c]
host = R610-5
mon addr = 10.30.30.5:6789
[osd.11]
host = R710-3
devs = /dev/sdb
[osd.12]
host = R710-3
devs = /dev/sdc
[osd.13]
host = R710-3
devs = /dev/sdd
[osd.14]
host = R710-3
devs = /dev/sde
[osd.21]
host = R710-2
devs = /dev/sdb
[osd.22]
host = R710-2
devs = /dev/sdc
[osd.23]
host = R710-2
devs = /dev/sdd
[osd.24]
host = R710-2
devs = /dev/sde
[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring
[client.images]
keyring = /etc/ceph/ceph.client.images.keyring
#Copy ceph.conf to other 2 ceph nodes
scp /etc/ceph/ceph.conf R710-2:/etc/ceph
scp /etc/ceph/ceph.conf R610-5:/etc/ceph
|
cd /etc/ceph
mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs
service ceph -a start
|
ceph -s
#expected output:
health HEALTH_OK
monmap e1: 3 mons at {a=10.30.30.3:6789/0,b=10.30.30.2:6789/0,c=10.30.30.5:6789/0}, election epoch 30, quorum 0,1,2 a,b,c
osdmap e72: 8 osds: 8 up, 8 in
pgmap v5128: 1984 pgs: 1984 active+clean; 25057 MB data, 58472 MB used, 14835 GB / 14892 GB avail
mdsmap e1: 0/0/1 up
|
#R710-3
ceph osd pool create volumes 128
ceph osd pool create images 128
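Confirm both pools were created (they should still be empty at this point):
ceph osd lspools
rados df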
|
apt-get install -y keystone
|
[DEFAULT]
admin_token = yourpassword
[ssl]
enable = False
[signing]
token_format = UUID
[sql]
connection = mysql://keystone:[email protected]/keystone
|
service keystone restart
keystone-manage db_sync
|
Retrieve scripts:
wget https://github.com/abckey/OpenStack-Grizzly-Install-Guide/raw/master/KeystoneScripts/keystone_basic.sh
wget https://github.com/abckey/OpenStack-Grizzly-Install-Guide/raw/master/KeystoneScripts/keystone_endpoints.sh
|
Modify the ADMIN_PASSWORD variable to your own value.
Modify the SERVICE_TOKEN variable to your own value.
Modify the USER_PROJECT variable to your operation user and project name.
Modify the HOST_IP and EXT_HOST_IP variables to the HAProxy O&M VIP and external VIP before executing the scripts. In this example, they are 10.10.10.200 and 119.100.200.143.
Modify the SWIFT_PROXY_IP and EXT_SWIFT_PROXY_IP variables to the Swift proxy server O&M IP and external IP. In this example, they are 10.10.10.1 and 119.100.200.134.
Modify the MYSQL_USER and MYSQL_PASSWORD variables according to your setup.
Run scripts:
sh -x keystone_basic.sh
sh -x keystone_endpoints.sh
|
vi keystonerc
#Paste the following:
export OS_USERNAME=admin
export OS_PASSWORD=yourpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL="http://10.10.10.200:5000/v2.0/"
export PS1='[\u@\h \W(keystone_admin)]\$ '
# Source it:
source keystonerc
|
keystone user-list
|
apt-get install -y glance
|
[DEFAULT]
sql_connection = mysql://glance:[email protected]/glance
[keystone_authtoken]
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = yourpassword
[paste_deploy]
flavor= keystone
|
[DEFAULT]
sql_connection = mysql://glance:[email protected]/glance
[keystone_authtoken]
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = yourpassword
[paste_deploy]
flavor= keystone
|
#R710-3
ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.images |sudo tee /etc/ceph/ceph.client.images.keyring
chown glance:glance /etc/ceph/ceph.client.images.keyring
scp -p /etc/ceph/ceph.client.images.keyring R710-2:/etc/ceph/
ssh R710-2 chown glance:glance /etc/ceph/ceph.client.images.keyring
|
service glance-api restart; service glance-registry restart
|
glance-manage db_sync
|
glance image-create --name cirros-qcow2 --is-public true --container-format bare --disk-format qcow2 --copy-from https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
|
glance image-list
#Also you can check if it's stored on ceph rbd images pool:
rados -p images ls
|
apt-get install -y quantum-server
|
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:1100
integration_bridge = br-int
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
|
[DEFAULT]
rabbit_host = 10.10.10.101
rabbit_password=yourpassword
[keystone_authtoken]
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword
signing_dir = /var/lib/quantum/keystone-signing
|
service quantum-server restart
|
apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor
|
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = yourpassword
signing_dir = /tmp/keystone-signing-nova
|
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
#verbose=True
#debug=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
#Message queue and DB
rabbit_host=10.10.10.101
rabbit_password=yourpassword
nova_url=http://10.10.10.200:8774/v1.1/
sql_connection=mysql://nova:[email protected]/nova
# Auth
use_deprecated_auth=false
auth_strategy=keystone
# Imaging service
glance_api_servers=10.10.10.200:9292
image_service=nova.image.glance.GlanceImageService
# Vnc configuration
novncproxy_base_url=http://119.100.200.143:6080/vnc_auto.html
novncproxy_port=6080
#vncserver_proxyclient_address=10.10.10.3
#vncserver_listen=0.0.0.0
novncproxy_host=10.10.10.3
#Memcached
memcached_servers=10.10.10.3:11211,10.10.10.2:11211
# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.10.10.200:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=yourpassword
quantum_admin_auth_url=http://10.10.10.200:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
#If you want Quantum + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
#Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = howard
# Compute #
compute_driver=libvirt.LibvirtDriver
# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL
|
#Change line 40 from:
host = instance.get('host')
#To:
host = str(instance.get('host'))
|
key = str(key)
|
nova-manage db sync
|
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
|
nova-manage service list
|
apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
|
sed -i 's/false/true/g' /etc/default/iscsitarget
|
service iscsitarget start
service open-iscsi start
|
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.10.10.200
service_port = 5000
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = yourpassword
signing_dir = /var/lib/cinder
|
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
sql_connection = mysql://cinder:[email protected]/cinder
#iscsi_helper=ietadm
#iscsi_ip_address=10.10.10.3
rabbit_host = 10.10.10.101
rabbit_password=yourpassword
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid={uuid of secret} #The uuid will be generated on the 1st compute node; see the compute node config section
|
#Edit /etc/init/cinder-volume.conf and add the CEPH_ARGS env line shown below:
...
start on runlevel [2345]
stop on runlevel [!2345]
env CEPH_ARGS="--id volumes"
chdir /var/run
...
|
#R710-3
ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.volumes | sudo tee /etc/ceph/ceph.client.volumes.keyring
chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
scp -p /etc/ceph/ceph.client.volumes.keyring R710-2:/etc/ceph/
ssh R710-2 chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
|
cinder-manage db sync
|
cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
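As an optional end-to-end test of the Ceph backend (assuming the keystonerc created earlier has been sourced; the name test-vol is arbitrary), create a small volume and confirm it lands in the volumes pool:
cinder create --display-name test-vol 1
cinder list
#The volume should also appear as rbd objects in the pool:
rados -p volumes ls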
|
apt-get install -y openstack-dashboard memcached
#Remove ubuntu theme if you like:
dpkg --purge openstack-dashboard-ubuntu-theme
|
OPENSTACK_HOST = "10.10.10.200"
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '10.10.10.200:11211'
}
}
|
#Edit /etc/memcached.conf so memcached listens on the management IP (use 10.10.10.2 on the backup controller):
-l 10.10.10.3
|
service apache2 restart
service memcached restart
|
http://119.100.200.143/horizon
#Append the Ceph node names (storage network) to /etc/hosts:
10.30.30.3 R710-3
10.30.30.2 R710-2
10.30.30.5 R610-5
|
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
|
apt-get install -y ntp
|
echo "server 10.10.10.3" >> /etc/ntp.conf
echo "server 10.10.10.2" >> /etc/ntp.conf
service ntp restart
|
apt-get install -y vlan bridge-utils
|
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p
|
auto em1
iface em1 inet static
address 119.100.200.136
netmask 255.255.255.224
gateway 119.100.200.129
dns-nameservers 8.8.8.8
#Openstack management
auto em2
iface em2 inet static
address 10.10.10.7
netmask 255.255.255.0
#VM traffic
auto em3
iface em3 inet static
address 10.20.20.7
netmask 255.255.255.0
#Storage network for ceph
auto em4
iface em4 inet static
address 10.30.30.7
netmask 255.255.255.0
|
service networking restart
|
Recommended BIOS settings for the compute nodes:
Logical Processor | Enabled
Virtualization Technology | Enabled
Turbo Mode | Enabled
C1E | Disabled
C States | Disabled
apt-get install -y cpu-checker
kvm-ok
|
apt-get install -y kvm libvirt-bin pm-utils
|
#Edit /etc/libvirt/qemu.conf and set cgroup_device_acl as follows:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]
|
virsh net-destroy default
virsh net-undefine default
|
#Edit /etc/libvirt/libvirtd.conf:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
|
#Edit /etc/init/libvirt-bin.conf:
env libvirtd_opts="-d -l"
|
#Edit /etc/default/libvirt-bin:
libvirtd_opts="-d -l"
|
service libvirt-bin restart
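Since auth_tcp is set to none above, a minimal check that libvirtd now accepts TCP connections (assuming the default libvirt TCP port; this matters later for features such as live migration):
virsh -c qemu+tcp://127.0.0.1/system list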
|
apt-get install -y openvswitch-switch openvswitch-datapath-dkms
|
ovs-vsctl add-br br-int
ovs-vsctl add-br br-em3
ovs-vsctl add-port br-em3 em3
|
apt-get -y install quantum-plugin-openvswitch-agent
|
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:1100
integration_bridge = br-int
bridge_mappings = physnet1:br-em3
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
|
[DEFAULT]
auth_strategy = keystone
rabbit_hosts = 10.10.10.3:5672,10.10.10.2:5672
rabbit_password=yourpassword
rabbit_ha_queues=True
[keystone_authtoken]
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword
signing_dir = /var/lib/quantum/keystone-signing
|
service quantum-plugin-openvswitch-agent restart
|
wget -q -O - 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo "deb http://ceph.com/debian-cuttlefish/ precise main" >> /etc/apt/sources.list.d/ceph.list
apt-get update
apt-get install -y --only-upgrade librbd1
|
#R710-3
ceph auth get-key client.volumes | ssh 10.10.10.7 tee /home/sysadmin/client.volumes.key
ceph auth get-key client.volumes | ssh 10.10.10.8 tee /home/sysadmin/client.volumes.key
...
|
#R710-7
cd /home/sysadmin
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<usage type='ceph'>
<name>client.volumes secret</name>
</usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
<uuid of secret is output here>
sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key)
#Dump defined secret to xml, to be imported to other compute nodes
virsh secret-dumpxml <uuid of secret> > dump.xml
#Copy the dump.xml to other compute nodes
scp dump.xml 10.10.10.8:/home/sysadmin
...
|
#R710-8 and other compute nodes
virsh secret-define --file dump.xml
virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key)
|
apt-get install -y nova-compute-kvm nova-novncproxy
|
Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = yourpassword
signing_dir = /tmp/keystone-signing-nova
|
[DEFAULT]
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
|
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
#debug=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
# Message queue and DB connection
rabbit_host = 10.10.10.101
rabbit_password=yourpassword
sql_connection=mysql://nova:[email protected]/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
# Auth
use_deprecated_auth=false
auth_strategy=keystone
# Imaging service
glance_api_servers=10.10.10.200:9292
image_service=nova.image.glance.GlanceImageService
# Vnc configuration
novncproxy_base_url=http://119.100.200.143:6080/vnc_auto.html
vncserver_proxyclient_address=10.10.10.7
vncserver_listen=10.10.10.7
# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.10.10.200:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=yourpassword
quantum_admin_auth_url=http://10.10.10.200:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
#If you want Quantum + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
#Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = howard
# Compute #
compute_driver=libvirt.LibvirtDriver
# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL
|
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
|
nova-manage service list
|
In this case, we install an all-in-one Swift node with 4 internal disks simulating 4 zones. Neither the Swift proxy nor the node itself has HA protection.
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
|
apt-get install -y ntp
|
echo "server 10.10.10.3" >> /etc/ntp.conf
echo "server 10.10.10.2" >> /etc/ntp.conf
service ntp restart
|
auto em1
iface em1 inet static
address 119.100.200.134
netmask 255.255.255.224
gateway 119.100.200.129
dns-nameservers 8.8.8.8
#Openstack management
auto em2
iface em2 inet static
address 10.10.10.1
netmask 255.255.255.0
|
service networking restart
|
apt-get install swift swift-account swift-container swift-object python-swiftclient python-webob
|
[swift-hash]
swift_hash_path_suffix = howard
|
mkfs.xfs -i size=1024 /dev/sdb
mkfs.xfs -i size=1024 /dev/sdc
mkfs.xfs -i size=1024 /dev/sdd
mkfs.xfs -i size=1024 /dev/sde
echo "/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
echo "/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
echo "/dev/sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
echo "/dev/sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc
mkdir -p /srv/node/sdd
mkdir -p /srv/node/sde
mount /srv/node/sdb
mount /srv/node/sdc
mount /srv/node/sdd
mount /srv/node/sde
#Link each ring device (sdx1..sdx4) to its mounted disk
mkdir -p /srv/1/node /srv/2/node /srv/3/node /srv/4/node
ln -s /srv/node/sdb /srv/1/node/sdx1
ln -s /srv/node/sdc /srv/2/node/sdx2
ln -s /srv/node/sdd /srv/3/node/sdx3
ln -s /srv/node/sde /srv/4/node/sdx4
mkdir -p /etc/swift/object-server /etc/swift/container-server /etc/swift/account-server
rm -f /etc/swift/*server.conf
mkdir -p /var/cache/swift /var/cache/swift2 /var/cache/swift3 /var/cache/swift4 /etc/swift/keystone-signing-swift
mkdir -p /var/run/swift
chown -R swift:swift /etc/swift/ /srv/* /var/run/swift/ /var/cache/swift*
#Add following to /etc/rc.local before "exit 0"
mkdir -p /var/cache/swift /var/cache/swift2 /var/cache/swift3 /var/cache/swift4
chown -R swift:swift /var/cache/swift*
mkdir -p /var/run/swift
chown -R swift:swift /var/run/swift
|
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1
[account6012]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/account6012.lock
[account6022]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/account6022.lock
[account6032]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/account6032.lock
[account6042]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/account6042.lock
[container6011]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/container6011.lock
[container6021]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/container6021.lock
[container6031]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/container6031.lock
[container6041]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/container6041.lock
[object6010]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6010.lock
[object6020]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6020.lock
[object6030]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6030.lock
[object6040]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6040.lock
|
#Edit /etc/default/rsync
RSYNC_ENABLE=true
service rsync restart
|
# Uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.* /var/log/swift/all.log
# Uncomment the following to have hourly proxy logs for stats processing
#$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog
local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice /var/log/swift/proxy.error
local1.* ~
local2.*;local2.!notice /var/log/swift/storage1.log
local2.notice /var/log/swift/storage1.error
local2.* ~
local3.*;local3.!notice /var/log/swift/storage2.log
local3.notice /var/log/swift/storage2.error
local3.* ~
local4.*;local4.!notice /var/log/swift/storage3.log
local4.notice /var/log/swift/storage3.error
local4.* ~
local5.*;local5.!notice /var/log/swift/storage4.log
local5.notice /var/log/swift/storage4.error
local5.* ~
|
$PrivDropToGroup adm
|
mkdir -p /var/log/swift/hourly
chown -R syslog.adm /var/log/swift
chmod -R g+w /var/log/swift
service rsyslog restart
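You can verify the logging chain with a test message to the proxy facility (the message text is arbitrary):
logger -p local1.info "swift logging test"
tail -n 1 /var/log/swift/proxy.log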
|
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_port = 6012
user = swift
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]
|
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_port = 6022
user = swift
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]
|
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_port = 6032
user = swift
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]
|
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_port = 6042
user = swift
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]
|
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_port = 6011
user = swift
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]
|
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_port = 6021
user = swift
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]
|
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_port = 6031
user = swift
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]
|
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_port = 6041
user = swift
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]
|
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_port = 6010
user = swift
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
|
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_port = 6020
user = swift
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
|
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_port = 6030
user = swift
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
|
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_port = 6040
user = swift
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
|
cd /etc/swift
swift-ring-builder object.builder create 14 3 1
swift-ring-builder object.builder add z1-127.0.0.1:6010/sdx1 1
swift-ring-builder object.builder add z2-127.0.0.1:6020/sdx2 1
swift-ring-builder object.builder add z3-127.0.0.1:6030/sdx3 1
swift-ring-builder object.builder add z4-127.0.0.1:6040/sdx4 1
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder create 14 3 1
swift-ring-builder container.builder add z1-127.0.0.1:6011/sdx1 1
swift-ring-builder container.builder add z2-127.0.0.1:6021/sdx2 1
swift-ring-builder container.builder add z3-127.0.0.1:6031/sdx3 1
swift-ring-builder container.builder add z4-127.0.0.1:6041/sdx4 1
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder create 14 3 1
swift-ring-builder account.builder add z1-127.0.0.1:6012/sdx1 1
swift-ring-builder account.builder add z2-127.0.0.1:6022/sdx2 1
swift-ring-builder account.builder add z3-127.0.0.1:6032/sdx3 1
swift-ring-builder account.builder add z4-127.0.0.1:6042/sdx4 1
swift-ring-builder account.builder rebalance
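Running swift-ring-builder with just a builder file prints the ring's contents; use this to confirm each ring has 4 devices, 3 replicas and a balanced weight distribution:
swift-ring-builder object.builder
swift-ring-builder container.builder
swift-ring-builder account.builder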
|
apt-get install swift-proxy memcached python-keystoneclient
|
[DEFAULT]
bind_port = 8080
user = swift
workers = 8
log_facility = LOG_LOCAL1
#eventlet_debug = true
[pipeline:main]
pipeline = healthcheck cache tempauth authtoken keystone proxy-server
#pipeline = healthcheck cache tempauth proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
[filter:proxy-logging]
use = egg:swift#proxy_logging
[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin, SwiftOperator, Member
is_admin = true
cache = swift.cache
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = swift
admin_password = yourpassword
auth_host = 10.10.10.200
auth_port = 5000
auth_protocol = http
signing_dir = /etc/swift/keystone-signing-swift
admin_token = yourpassword
|
chown -R swift:swift /etc/swift
|
swift-init proxy start
swift-init all start
|
source /home/sysadmin/keystonerc
swift stat
|
You will see something similar to:
[root@R710-1 ~(keystone_admin)]# swift stat
Account: AUTH_22daa9971308497bbac6dc0fa51179fb
Containers: 0
Objects: 0
Bytes: 0
Accept-Ranges: bytes
X-Timestamp: 1366796485.27588
Content-Type: text/plain; charset=utf-8
|
To start your first VM, we first need to create an internal network, a router and an external network.
source /home/sysadmin/keystonerc
|
keystone tenant-list
+----------------------------------+---------+---------+
| id                               | name    | enabled |
+----------------------------------+---------+---------+
| e0da799e414f43a09aa5de7f317cf213 | admin   | True    |
| 56c5fbd23bf8440d8898797fe8537839 | demo    | True    |
| 4f6c6b1a141f44a4ad0523a5ddb3f862 | service | True    |
+----------------------------------+---------+---------+
|
quantum net-create --tenant-id 56c5fbd23bf8440d8898797fe8537839 net-1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 554e786e-8a37-4dcc-840b-ced0eb4fd985 |
| name                      | net-1                                |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1000                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 56c5fbd23bf8440d8898797fe8537839     |
+---------------------------+--------------------------------------+
|
quantum subnet-create --tenant-id 56c5fbd23bf8440d8898797fe8537839 --name subnet-1 net-1 192.168.100.0/24
Created a new subnet:
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.100.2", "end": "192.168.100.254"} |
| cidr             | 192.168.100.0/24                                     |
| dns_nameservers  |                                                      |
| enable_dhcp      | True                                                 |
| gateway_ip       | 192.168.100.1                                        |
| host_routes      |                                                      |
| id               | 91dcb379-1d79-44da-9d14-87df64e2e5e8                 |
| ip_version       | 4                                                    |
| name             | subnet-1                                             |
| network_id       | 554e786e-8a37-4dcc-840b-ced0eb4fd985                 |
| tenant_id        | 56c5fbd23bf8440d8898797fe8537839                     |
+------------------+------------------------------------------------------+
|
quantum router-create --tenant-id 56c5fbd23bf8440d8898797fe8537839 router-net-1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 6ec77746-fd71-4ee1-862e-0cbc6f893c8c |
| name                  | router-net-1                         |
| status                | ACTIVE                               |
| tenant_id             | 56c5fbd23bf8440d8898797fe8537839     |
+-----------------------+--------------------------------------+
|
quantum router-interface-add 6ec77746-fd71-4ee1-862e-0cbc6f893c8c subnet-1
Added interface to router 6ec77746-fd71-4ee1-862e-0cbc6f893c8c
|
quantum net-create external-net --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5235abad-f911-4a1d-98ad-d9cb3d2fcf84 |
| name                      | external-net                         |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1001                                 |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | e0da799e414f43a09aa5de7f317cf213     |
+---------------------------+--------------------------------------+
|
quantum subnet-create --name subnet-external external-net --allocation-pool start=108.166.166.66,end=108.166.166.66 108.166.166.64/26 -- --enable_dhcp=False
Created a new subnet:
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "108.166.166.66", "end": "108.166.166.66"} |
| cidr             | 108.166.166.64/26                                    |
| dns_nameservers  |                                                      |
| enable_dhcp      | False                                                |
| gateway_ip       | 108.166.166.65                                       |
| host_routes      |                                                      |
| id               | 329a3754-9a07-4842-8cd8-acdf4e5206a9                 |
| ip_version       | 4                                                    |
| name             | subnet-external                                      |
| network_id       | 5235abad-f911-4a1d-98ad-d9cb3d2fcf84                 |
| tenant_id        | e0da799e414f43a09aa5de7f317cf213                     |
+------------------+------------------------------------------------------+
|
quantum router-gateway-set 6ec77746-fd71-4ee1-862e-0cbc6f893c8c external-net
Set gateway for router 6ec77746-fd71-4ee1-862e-0cbc6f893c8c
|
#Check dhcp agent list
quantum agent-list
+--------------------------------------+--------------------+--------+-------+----------------+
| id                                   | agent_type         | host   | alive | admin_state_up |
+--------------------------------------+--------------------+--------+-------+----------------+
| 35b8b317-4108-4504-9a89-063231d008aa | Open vSwitch agent | R610-4 | :-)   | True           |
| 423fdb06-26b1-415b-b605-de9c313daf0f | Open vSwitch agent | R610-5 | :-)   | True           |
| 4efc2293-b42f-415e-89ad-5ce5265b1df4 | DHCP agent         | R610-4 | :-)   | True           |
| 553a32bf-b8aa-41e9-b1fc-d3fcccbc602a | L3 agent           | R610-4 | :-)   | True           |
| 88de90f3-d7d6-485e-ac00-b764a5d14e7f | DHCP agent         | R610-5 | :-)   | True           |
| 9fe37082-6154-461d-b955-b8cccf2aedc0 | Open vSwitch agent | R710-8 | :-)   | True           |
| d3fb2959-311c-403e-b91c-735c5e80ed65 | Open vSwitch agent | R710-7 | :-)   | True           |
| f9648c1a-e8e9-4f39-92c7-649f2f296440 | L3 agent           | R610-5 | :-)   | True           |
+--------------------------------------+--------------------+--------+-------+----------------+
#Then let's add net-1 to DHCP agents on R610-4 and R610-5 hosts
quantum dhcp-agent-network-add 4efc2293-b42f-415e-89ad-5ce5265b1df4 net-1
quantum dhcp-agent-network-add 88de90f3-d7d6-485e-ac00-b764a5d14e7f net-1
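Verify that net-1 is now served by the DHCP agents on both network nodes:
quantum dhcp-agent-list-hosting-net net-1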
|
Only the quantum-l3-agent service in this solution is a single point of failure; the quantum-dhcp-agent services run in active-active mode on the 2 network nodes. This section describes how to recover the l3-agent service after a network node failure.
#Check dhcp agent list
quantum agent-list
+--------------------------------------+--------------------+--------+-------+----------------+
| id                                   | agent_type         | host   | alive | admin_state_up |
+--------------------------------------+--------------------+--------+-------+----------------+
| 35b8b317-4108-4504-9a89-063231d008aa | Open vSwitch agent | R610-4 | :-)   | True           |
| 423fdb06-26b1-415b-b605-de9c313daf0f | Open vSwitch agent | R610-5 | :-)   | True           |
| 4efc2293-b42f-415e-89ad-5ce5265b1df4 | DHCP agent         | R610-4 | :-)   | True           |
| 553a32bf-b8aa-41e9-b1fc-d3fcccbc602a | L3 agent           | R610-4 | :-)   | True           |
| 88de90f3-d7d6-485e-ac00-b764a5d14e7f | DHCP agent         | R610-5 | :-)   | True           |
| 9fe37082-6154-461d-b955-b8cccf2aedc0 | Open vSwitch agent | R710-8 | :-)   | True           |
| d3fb2959-311c-403e-b91c-735c5e80ed65 | Open vSwitch agent | R710-7 | :-)   | True           |
| f9648c1a-e8e9-4f39-92c7-649f2f296440 | L3 agent           | R610-5 | :-)   | True           |
+--------------------------------------+--------------------+--------+-------+----------------+
|
We can see the L3 agents on R610-4 and R610-5 are both alive; this is the normal status.
quantum l3-agent-list-hosting-router router-net-1
+--------------------------------------+--------+----------------+-------+
| id                                   | host   | admin_state_up | alive |
+--------------------------------------+--------+----------------+-------+
| f9648c1a-e8e9-4f39-92c7-649f2f296440 | R610-5 | True           | :-)   |
+--------------------------------------+--------+----------------+-------+
|
We can see that our router "router-net-1" is now running on the R610-5 node.
# Firstly, remove the router from R610-5
quantum l3-agent-router-remove <L3 Agent ID of R610-5> router-net-1
# Secondly, add the router to R610-4
quantum l3-agent-router-add <L3 Agent ID of R610-4> router-net-1
#L3 Agent IDs can be read from output of "quantum agent-list"
#Then check again:
quantum l3-agent-list-hosting-router router-net-1
+--------------------------------------+--------+----------------+-------+
| id                                   | host   | admin_state_up | alive |
+--------------------------------------+--------+----------------+-------+
| 553a32bf-b8aa-41e9-b1fc-d3fcccbc602a | R610-4 | True           | :-)   |
+--------------------------------------+--------+----------------+-------+
You should see the router is now alive on R610-4.
|