Swift HA Architecture
Components:
Swift:
1. Proxy servers (swift-proxy-server)
2. Account servers (swift-account-server)
3. Container servers (swift-container-server)
4. Object servers (swift-object-server)
5. Configurable WSGI middleware that handles authentication (usually the Identity Service)
HA services:
HAProxy servers
Keepalived servers
Identity Service:
Keystone servers
Part 1: Environment Description
Networks:
Public: 192.168.128.0/24
Data network: 10.6.0.0/24
Replication network: 10.7.0.0/24
VIP: 192.168.128.55/32
Role assignment:
Keystone 192.168.128.35
Haproxy01 192.168.128.51/10.6.0.121
Haproxy02 192.168.128.52/10.6.0.122
Swift-proxy01 192.168.128.53/10.6.0.123
Swift-proxy02 192.168.128.54/10.6.0.124
Swift-storage01 192.168.128.56/10.6.0.126/10.7.0.126
Swift-storage02 192.168.128.57/10.6.0.127/10.7.0.127
Swift-storage03 192.168.128.58/10.6.0.128/10.7.0.128
Recommended /etc/hosts entries on every node:
192.168.128.35 controller
192.168.128.51 haproxy01
192.168.128.52 haproxy02
192.168.128.53 swift_proxy01
192.168.128.54 swift_proxy02
192.168.128.56 swift_storage01
192.168.128.57 swift_storage02
192.168.128.58 swift_storage03
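The entries above can be staged from a here-document and appended after review; a minimal sketch (the staging filename is arbitrary):

```shell
# Stage the recommended host entries in a scratch file, review, then append.
cat > hosts.swift-ha <<'EOF'
192.168.128.35 controller
192.168.128.51 haproxy01
192.168.128.52 haproxy02
192.168.128.53 swift_proxy01
192.168.128.54 swift_proxy02
192.168.128.56 swift_storage01
192.168.128.57 swift_storage02
192.168.128.58 swift_storage03
EOF

# After checking the contents, append on each node:
#   cat hosts.swift-ha >> /etc/hosts
```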
Create the credentials file (all nodes):
# cat swiftrc
export OS_USERNAME=swift
export OS_PASSWORD=password
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://controller:35357/v2.0
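The swiftrc file itself can be generated with a here-document and loaded into the shell before running any keystone or swift command:

```shell
# Create the credentials file and load it into the current shell.
cat > swiftrc <<'EOF'
export OS_USERNAME=swift
export OS_PASSWORD=password
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

. ./swiftrc
echo "will authenticate as $OS_USERNAME against $OS_AUTH_URL"
```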
【keystone node】
IP: 192.168.128.35
Hostname: controller
User: swift
Password: password
Tenant: service
【haproxy node 1】
IP: 192.168.128.51
Hostname: haproxy01
【haproxy node 2】
IP: 192.168.128.52
Hostname: haproxy02
【swift_proxy node 1】
IP: 192.168.128.53
Hostname: swift_proxy01
【swift_proxy node 2】
IP: 192.168.128.54
Hostname: swift_proxy02
【storage node 1】
IP: 192.168.128.56
Hostname: swift_storage01
【storage node 2】
IP: 192.168.128.57
Hostname: swift_storage02
【storage node 3】
IP: 192.168.128.58
Hostname: swift_storage03
Part 2: Repository Setup and Package Updates (all nodes)
1. Download the repository packages (all nodes):
wget http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
2. Install the repository packages:
# rpm -Uvh rdo-release-havana-6.noarch.rpm
# rpm -Uvh epel-release-6-8.noarch.rpm
3. Update all packages and reboot:
# yum upgrade && reboot
Part 3: Create and Verify the Object Storage User (keystone node)
1. Install openstack-utils:
# yum install openstack-utils
2. Using the Keystone identity system, create the user, then link it to the tenant and role:
# keystone user-create --name=swift --pass=password [email protected]
# keystone user-role-add --user=swift --tenant=service --role=admin
3. Create the Object Storage service:
# keystone service-create --name=swift --type=object-store --description="Object Storage Service"
4. Register the Object Storage endpoints (public, internal, and admin API) using the VIP address:
# keystone endpoint-create --service-id=$(keystone service-list | awk '/object-store/ {print $2}') \
  --publicurl='http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl='http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl=http://192.168.128.55:8080
+-------------+---------------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------------+
| adminurl | http://192.168.128.55:8080/ |
| id | 9e3ce428f82b40d38922f242c095982e |
| internalurl | http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s |
| publicurl | http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s |
| region | regionOne |
| service_id | eede9296683e4b5ebfa13f5166375ef6 |
+-------------+---------------------------------------------------+
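The `$(keystone service-list | awk '/object-store/ {print $2}')` in the endpoint-create command pulls the service id out of the table keystone prints. The extraction can be sanity-checked against canned output (the id below is the service_id from the table above):

```shell
# Canned `keystone service-list`-style row; awk splits on whitespace, so the
# leading "|" is field 1 and the id is field 2.
SERVICE_LIST='| eede9296683e4b5ebfa13f5166375ef6 | swift | object-store | Object Storage Service |'

SERVICE_ID=$(echo "$SERVICE_LIST" | awk '/object-store/ {print $2}')
echo "$SERVICE_ID"   # eede9296683e4b5ebfa13f5166375ef6
```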
5. Create the configuration directory on all nodes except the haproxy and keystone nodes:
# mkdir -p /etc/swift
# chown -R swift:swift /etc/swift
6. Copy /etc/swift/swift.conf from a proxy node to all nodes except the haproxy and keystone nodes.
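Step 6 can be scripted with scp; the sketch below only echoes the commands (a dry run) so the plan can be inspected before anything is copied. Node names are the ones from the hosts file.

```shell
# Nodes that need /etc/swift/swift.conf: everything except the haproxy and
# keystone nodes (and the proxy node the file is copied from).
NODES="swift_proxy02 swift_storage01 swift_storage02 swift_storage03"

for node in $NODES; do
    # Remove the leading echo to perform the copy for real.
    echo scp /etc/swift/swift.conf root@"$node":/etc/swift/
done > copy_plan.txt

cat copy_plan.txt
```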
Part 4: Install and Configure the Storage Nodes
【Storage node 1】
1. Install the required packages:
# yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
2. Set up an XFS filesystem for Swift storage
If no spare disk is available, a loopback device can emulate one:
dd if=/dev/zero of=/home/object-swift bs=1 count=0 seek=100G
# losetup /dev/loop2 /home/object-swift
# echo "losetup /dev/loop2 /home/object-swift" >> /etc/rc.local
Otherwise, partition and format a dedicated disk:
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdb1
# mount /dev/sdb1 /srv/node/sdb1
# chown -R swift:swift /srv/node
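The loopback alternative relies on `dd` with `count=0 seek=N` producing a sparse file: its apparent size is N, but it allocates blocks only as data is written. A small-scale illustration (1 GiB instead of 100G):

```shell
# Create a 1 GiB sparse file: no data is copied (count=0); dd just seeks
# to the 1 GiB mark and truncates the file there.
dd if=/dev/zero of=object-swift.img bs=1 count=0 seek=1G 2>/dev/null

# Apparent size vs. blocks actually allocated on disk.
stat -c 'size=%s blocks=%b' object-swift.img
```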
3. Create /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.7.0.126 # Replication Network
[account]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Note
The rsync service requires no authentication, so run it on a local, private network.
4. Edit /etc/xinetd.d/rsync and enable the service:
disable = false
5. Start the xinetd service:
# service xinetd start
6. Create the recon cache directory and set its ownership:
# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon
【Storage node 2】
1. Install the required packages:
# yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
2. Set up an XFS filesystem for Swift storage
If no spare disk is available, a loopback device can emulate one:
dd if=/dev/zero of=/home/object-swift bs=1 count=0 seek=100G
# losetup /dev/loop2 /home/object-swift
# echo "losetup /dev/loop2 /home/object-swift" >> /etc/rc.local
Otherwise, partition and format a dedicated disk:
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdb1
# mount /dev/sdb1 /srv/node/sdb1
# chown -R swift:swift /srv/node
3. Create /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.7.0.127 # Replication Network
[account]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Note
The rsync service requires no authentication, so run it on a local, private network.
4. Edit /etc/xinetd.d/rsync and enable the service:
disable = false
5. Start the xinetd service:
# service xinetd start
6. Create the recon cache directory and set its ownership:
# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon
【Storage node 3】
1. Install the required packages:
# yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
2. Set up an XFS filesystem for Swift storage
If no spare disk is available, a loopback device can emulate one:
dd if=/dev/zero of=/home/object-swift bs=1 count=0 seek=100G
# losetup /dev/loop2 /home/object-swift
# echo "losetup /dev/loop2 /home/object-swift" >> /etc/rc.local
Otherwise, partition and format a dedicated disk:
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdb1
# mount /dev/sdb1 /srv/node/sdb1
# chown -R swift:swift /srv/node
3. Create /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.7.0.128 # Replication Network
[account]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Note
The rsync service requires no authentication, so run it on a local, private network.
4. Edit /etc/xinetd.d/rsync and enable the service:
disable = false
5. Start the xinetd service:
# service xinetd start
6. Create the recon cache directory and set its ownership:
# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon
Part 5: Install and Configure the Proxy Nodes
【swift_proxy01】
1. Install the swift-proxy service:
# yum install openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token
2. Configure the memcached listen address:
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.128.53"
3. Start the service and enable it at boot:
# service memcached start
# chkconfig memcached on
4. Edit /etc/swift/proxy-server.conf:
[root@controller ~]# cat /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin, SwiftOperator
is_admin = true
cache = swift.cache
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = swift
admin_password = password
auth_host = controller
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift
[root@controller swift]# cat object-expirer.conf
[DEFAULT]
[object-expirer]
# auto_create_account_prefix = .
[pipeline:main]
pipeline = catch_errors cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211
[root@compute swift]# cat container-server.conf
[DEFAULT]
#bind_ip = 127.0.0.1
bind_port = 6001
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
5. Create the account, container, and object rings and add each storage node's device to them (a simple script):
[root@controller swift]# cat test01.sh
cd /etc/swift
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1
#### swift-storage01
swift-ring-builder account.builder add z1-10.6.0.126:6002/sdb1 100
swift-ring-builder container.builder add z1-10.6.0.126:6001/sdb1 100
swift-ring-builder object.builder add z1-10.6.0.126:6000/sdb1 100
####swift-storage02
swift-ring-builder account.builder add z2-10.6.0.127:6002/sdb1 100
swift-ring-builder container.builder add z2-10.6.0.127:6001/sdb1 100
swift-ring-builder object.builder add z2-10.6.0.127:6000/sdb1 100
##swift-storage03
swift-ring-builder account.builder add z3-10.6.0.128:6002/sdb1 100
swift-ring-builder container.builder add z3-10.6.0.128:6001/sdb1 100
swift-ring-builder object.builder add z3-10.6.0.128:6000/sdb1 100
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
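The `create 18 3 1` arguments are the partition power, replica count, and min_part_hours: the ring is divided into 2^18 partitions, each stored 3 times (one replica per zone here), and a given partition will not be moved again for at least 1 hour after a rebalance. The partition count follows directly from the power:

```shell
# part_power=18 gives 2^18 ring partitions.
PART_POWER=18
PARTITIONS=$((1 << PART_POWER))
echo "$PARTITIONS"   # 262144
```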
6. Copy account.ring.gz, container.ring.gz, and object.ring.gz to every storage node and to swift_proxy02.
7. Start the Proxy service and configure it to start when the system boots:
# service openstack-swift-proxy start
# chkconfig openstack-swift-proxy on
【swift_proxy02】
1. Install the swift-proxy service:
# yum install openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token
2. Configure the memcached listen address:
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.128.54"
3. Start the service and enable it at boot:
# service memcached start
# chkconfig memcached on
4. Edit /etc/swift/proxy-server.conf:
[root@controller ~]# cat /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin, SwiftOperator
is_admin = true
cache = swift.cache
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = swift
admin_password = password
auth_host = controller
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift
[root@controller swift]# cat object-expirer.conf
[DEFAULT]
[object-expirer]
# auto_create_account_prefix = .
[pipeline:main]
pipeline = catch_errors cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211
[root@compute swift]# cat container-server.conf
[DEFAULT]
#bind_ip = 127.0.0.1
bind_port = 6001
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
5. Start the Proxy service and configure it to start when the system boots:
# service openstack-swift-proxy start
# chkconfig openstack-swift-proxy on
Part 6: Start Services on the Storage Nodes
Create a simple script that starts all Swift services and enables them at boot (run it on each storage node):
[root@compute ~]# cat restart_swift.sh
#!/bin/bash
cd /etc/init.d
if [ "$#" -eq 1 ]; then
    Act=$1
    for service in openstack-swift*
    do
        service $service $Act
        chkconfig $service on
    done
else
    echo "Usage: $0 {start|stop|restart|status}"
fi
Note
To start all swift services at once, run the command:
# swift-init all start
Part 7: Install the HA Nodes
【haproxy01 node】
1. Install haproxy and keepalived:
# yum -y install keepalived haproxy
2. Configure haproxy:
# cd /etc/haproxy/
=======================================
# My own configuration
[root@swift_proxy02 haproxy]# cat haproxy.cfg
global                            # global settings
    log 127.0.0.1 local0          # send all logs to local syslog via facility local0
    #log loghost local0 info
    maxconn 4096                  # maximum concurrent connections
    uid haproxy                   # run as this user
    gid haproxy                   # run as this group
    daemon                        # run haproxy in the background
    nbproc 2                      # start 2 haproxy processes
    pidfile /var/run/haproxy.pid  # write all process ids to this pid file
    #debug
    #quiet
defaults                          # default settings
    #log global
    log 127.0.0.1 local3          # log destination
    mode http                     # layer 7 (http) mode; set tcp for layer 4 forwarding
    option httplog                # use the http log format
    option dontlognull
    option forwardfor             # add X-Forwarded-For so backends see the real client ip
    option httpclose              # close the http channel after each request; this haproxy version does not keep backend connections alive
    retries 3                     # mark a server down after 3 failed connections (see the check options below)
    option redispatch             # redirect requests to a healthy server when the assigned server is down
    maxconn 2000                  # maximum concurrent connections
    contimeout 5000               # connect timeout (ms)
    clitimeout 50000              # client timeout (ms)
    srvtimeout 50000              # server timeout (ms)
frontend http-in                  # frontend
    bind 192.168.128.55:8080
    mode http
    option httplog
    log global
    default_backend swift_proxy   # backend pool
backend swift_proxy               # backend
    balance roundrobin            # load-balancing algorithm
    server swift_proxy01 192.168.128.53:8080 check inter 2000 rise 3 fall 5
    server swift_proxy02 192.168.128.54:8080 check inter 2000 rise 3 fall 5
listen admin_stats
    bind 192.168.128.55:1080
    mode http
    log 127.0.0.1 local2 err
    stats refresh 30s
    stats uri /admin?stats
    stats auth admin:admin
#======================================
# A colleague's configuration
# cat haproxy.cfg
# This file managed by Puppet
global
chroot /var/lib/haproxy
daemon
group haproxy
log 192.168.128.55 local0
maxconn 4000
pidfile /var/run/haproxy.pid
stats socket /var/lib/haproxy/stats
user haproxy
defaults
log global
maxconn 8000
option redispatch
retries 3
stats enable
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
listen swift_proxy
bind 192.168.128.55:8080
#balance source
#option tcpka
#option httpchk
#option tcplog
mode http
stats enable
# stats auth username:password
balance roundrobin
option httpchk HEAD /healthcheck HTTP/1.0
option forwardfor
option httpclose
server swiftproxy01 192.168.128.53:8080 check inter 2000 rise 2 fall 5
server swiftproxy02 192.168.128.54:8080 check inter 2000 rise 2 fall 5
listen admin_stats
bind 192.168.128.55:1080
mode http
log 127.0.0.1 local2 err
stats refresh 30s
stats uri /admin?stats
stats auth admin:admin
3. Configure keepalived:
# cd /etc/keepalived/
# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id swift-proxy
}
! chk_http_port is referenced by track_script below but was never defined;
! define it so keepalived actually tracks haproxy health
vrrp_script chk_http_port {
    script "killall -0 haproxy"
    interval 2
    weight -40
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.128.55
    }
}
4. Start keepalived and haproxy, and enable them at boot:
# service keepalived start
# service haproxy start
# chkconfig keepalived on
# chkconfig haproxy on
5. Test haproxy:
http://192.168.128.55:1080/admin?stats
# Check the virtual IP:
ip addr show eth0
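Only the current MASTER holds the VIP, so the `ip addr` output tells you each node's role. The helper below sketches that check; it is exercised here on illustrative output rather than a live interface:

```shell
VIP=192.168.128.55

# Decide MASTER/BACKUP from `ip -o addr show <iface>`-style output in $1.
vip_state() {
    if echo "$1" | grep -q "inet $VIP/"; then
        echo MASTER
    else
        echo BACKUP
    fi
}

# Illustrative output from a node currently holding the VIP:
SAMPLE='2: eth0    inet 192.168.128.51/24 brd 192.168.128.255 scope global eth0
2: eth0    inet 192.168.128.55/32 scope global eth0'
vip_state "$SAMPLE"   # MASTER

# On a live HA node:
#   vip_state "$(ip -o addr show eth0)"
```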
【haproxy02 node】
1. Install haproxy and keepalived:
# yum -y install keepalived haproxy
2. Configure haproxy:
# mkdir -p /etc/haproxy
# Note: the VIP is not present on the BACKUP node, so allow haproxy to bind to it:
# echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf && sysctl -p
=======================================
# My own configuration
[root@swift_proxy02 haproxy]# cat haproxy.cfg
global                            # global settings
    log 127.0.0.1 local0          # send all logs to local syslog via facility local0
    #log loghost local0 info
    maxconn 4096                  # maximum concurrent connections
    uid haproxy                   # run as this user
    gid haproxy                   # run as this group
    daemon                        # run haproxy in the background
    nbproc 2                      # start 2 haproxy processes
    pidfile /var/run/haproxy.pid  # write all process ids to this pid file
    #debug
    #quiet
defaults                          # default settings
    #log global
    log 127.0.0.1 local3          # log destination
    mode http                     # layer 7 (http) mode; set tcp for layer 4 forwarding
    option httplog                # use the http log format
    option dontlognull
    option forwardfor             # add X-Forwarded-For so backends see the real client ip
    option httpclose              # close the http channel after each request; this haproxy version does not keep backend connections alive
    retries 3                     # mark a server down after 3 failed connections (see the check options below)
    option redispatch             # redirect requests to a healthy server when the assigned server is down
    maxconn 2000                  # maximum concurrent connections
    contimeout 5000               # connect timeout (ms)
    clitimeout 50000              # client timeout (ms)
    srvtimeout 50000              # server timeout (ms)
frontend http-in                  # frontend
    bind 192.168.128.55:8080
    mode http
    option httplog
    log global
    default_backend swift_proxy   # backend pool
backend swift_proxy               # backend
    balance roundrobin            # load-balancing algorithm
    server swift_proxy01 192.168.128.53:8080 check inter 2000 rise 3 fall 5
    server swift_proxy02 192.168.128.54:8080 check inter 2000 rise 3 fall 5
listen admin_stats
    bind 192.168.128.55:1080
    mode http
    log 127.0.0.1 local2 err
    stats refresh 30s
    stats uri /admin?stats
    stats auth admin:admin
#======================================
# A colleague's configuration
# cat haproxy.cfg
# This file managed by Puppet
global
chroot /var/lib/haproxy
daemon
group haproxy
log 192.168.128.55 local0
maxconn 4000
pidfile /var/run/haproxy.pid
stats socket /var/lib/haproxy/stats
user haproxy
defaults
log global
maxconn 8000
option redispatch
retries 3
stats enable
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
listen swift_proxy
bind 192.168.128.55:8080
#balance source
#option tcpka
#option httpchk
#option tcplog
mode http
stats enable
# stats auth username:password
balance roundrobin
option httpchk HEAD /healthcheck HTTP/1.0
option forwardfor
option httpclose
server swiftproxy01 192.168.128.53:8080 check inter 2000 rise 2 fall 5
server swiftproxy02 192.168.128.54:8080 check inter 2000 rise 2 fall 5
listen admin_stats
bind 192.168.128.55:1080
mode http
log 127.0.0.1 local2 err
stats refresh 30s
stats uri /admin?stats
stats auth admin:admin
3. Configure keepalived:
# cd /etc/keepalived/
# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id swift-proxy
}
! define the health-check script referenced by track_script
vrrp_script chk_http_port {
    script "killall -0 haproxy"
    interval 2
    weight -40
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 180
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.128.55
    }
}
4. Start keepalived and haproxy, and enable them at boot:
# service keepalived start
# service haproxy start
# chkconfig keepalived on
# chkconfig haproxy on
5. Test haproxy:
http://192.168.128.55:1080/admin?stats
# Check the virtual IP:
ip addr show eth0
Part 8: Verify the Storage Service
Install the Swift client tool, then load the credentials:
# source swiftrc
# swift stat
Account: AUTH_95d2477adc90453ea13d0b7d3571acaf
Containers: 3
Objects: 4
Bytes: 28889
Accept-Ranges: bytes
X-Timestamp: 1400134126.35649
X-Trans-Id: tx9cb1d3f2cc224cfc9344a-0053758b67
Content-Type: text/plain; charset=utf-8
# Upload a file
# touch test1.txt
# swift upload swift test1.txt
# Download it again
# swift download swift
Equivalent commands passing the credentials explicitly:
swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 list
swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 upload test test.txt
swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 list test --lh
swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 download test
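Listing alone does not prove data integrity; a checksum round trip does. The sketch below compares md5 digests of an original file and its downloaded copy. The swift commands are shown commented out so the comparison logic can be tried anywhere; a local cp stands in for the download.

```shell
# Compare two files by md5 digest; prints OK when they match.
verify_roundtrip() {
    orig=$(md5sum "$1" | awk '{print $1}')
    copy=$(md5sum "$2" | awk '{print $1}')
    if [ "$orig" = "$copy" ]; then echo OK; else echo CORRUPT; fi
}

echo "hello swift" > test1.txt
# On a live deployment:
#   swift upload swift test1.txt
#   swift download swift test1.txt -o test1.downloaded.txt
cp test1.txt test1.downloaded.txt   # stand-in for the swift download
verify_roundtrip test1.txt test1.downloaded.txt   # OK
```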