A few days ago I wrote up the basics of running a single-node Vault (Vault — installing and basic usage of the secrets management tool), but production deployments are almost always clusters.
The official site provides a cluster tutorial: https://learn.hashicorp.com/tutorials/vault/raft-storage
That tutorial, however, builds a pseudo-cluster on a single machine by simulating different addresses. It looks close to the real thing, yet deploying on separate machines surfaced quite a few problems, so this post records the steps I went through.
As in the cluster diagram from the official tutorial, we will start four Linux VMs. One is dedicated to serving the transit (auto-unseal) key; its data must be reachable from every other node, so MySQL is a good fit for its storage. One node acts as the leader and the remaining two as followers.
Their IPs are:
vault_1 (transit): 192.168.0.111
vault_2: 192.168.0.112
vault_3: 192.168.0.114
vault_4: 192.168.0.115
Make sure vault and jq are installed on all four machines.
On Ubuntu, jq can be installed directly with sudo apt install jq; for other distributions see http://www.voidcn.com/article/p-pfsncgdk-btx.html . If the GitHub address given there is unreachable, the mirror https://hub.fastgit.org/ can be used: append the repository path to that URL to reach a domestic clone.
Configure vault_1
Create config-vault_1.hcl
On the 192.168.0.111 machine, create config-vault_1.hcl in the directory the cluster will run from; mine is /home/zhaohy/vault/cluster/:
#storage "inmem" {}
storage "mysql" {
  address  = "192.168.0.111:3306"
  username = "root"
  password = "root"
  database = "vault"
}
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = true
}
disable_mlock = true
ui = true
As the configuration shows, the transit key material is stored in MySQL in a database named vault; if it does not exist, Vault automatically creates the vault database and a vault table with the following structure:
CREATE TABLE `vault` (
  `vault_key` varbinary(512) NOT NULL,
  `vault_value` mediumblob,
  PRIMARY KEY (`vault_key`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Start vault_1
VAULT_API_ADDR='http://192.168.0.111:8200' vault server -log-level=trace -config "config-vault_1.hcl" > "vault_1.log" 2>&1 &
Point the CLI at this node:
export VAULT_ADDR=http://192.168.0.111:8200
Initialize vault_1
vault operator init -format=json -key-shares 1 -key-threshold 1
which returns something like:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator init -format=json -key-shares 1 -key-threshold 1
{
  "unseal_keys_b64": [
    "EGrB0iaCiYmAHmpMZNVm1nOSIRoknb5L0Q4WJFmr/aI="
  ],
  "unseal_keys_hex": [
    "106ac1d226828989801e6a4c64d566d67392211a249dbe4bd10e162459abfda2"
  ],
  "unseal_shares": 1,
  "unseal_threshold": 1,
  "recovery_keys_b64": [],
  "recovery_keys_hex": [],
  "recovery_keys_shares": 5,
  "recovery_keys_threshold": 3,
  "root_token": "s.UL5VCIBhlAd3UlcdEc4ao6CQ"
}
Write the unseal_keys_b64 value to a file named unseal_key-vault_1 and the root_token to root_token-vault_1:
echo "EGrB0iaCiYmAHmpMZNVm1nOSIRoknb5L0Q4WJFmr/aI=" > unseal_key-vault_1
echo "s.UL5VCIBhlAd3UlcdEc4ao6CQ" > root_token-vault_1
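Rather than copying the values by hand, the two files can be produced directly from the init JSON. A minimal Python sketch (the official tutorial does the equivalent with jq; the sample string reuses the values from the run above):

```python
import json

def save_init_output(init_json: str, key_file: str, token_file: str) -> None:
    """Write the first base64 unseal key and the root token from
    `vault operator init -format=json` output into the given files."""
    data = json.loads(init_json)
    with open(key_file, "w") as f:
        f.write(data["unseal_keys_b64"][0] + "\n")
    with open(token_file, "w") as f:
        f.write(data["root_token"] + "\n")

# Sample init output, abbreviated to the two fields we need.
sample = ('{"unseal_keys_b64": ["EGrB0iaCiYmAHmpMZNVm1nOSIRoknb5L0Q4WJFmr/aI="],'
          ' "root_token": "s.UL5VCIBhlAd3UlcdEc4ao6CQ"}')
save_init_output(sample, "unseal_key-vault_1", "root_token-vault_1")
```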
Unseal vault_1
vault operator unseal "EGrB0iaCiYmAHmpMZNVm1nOSIRoknb5L0Q4WJFmr/aI="
Log in to vault_1
vault login "s.UL5VCIBhlAd3UlcdEc4ao6CQ"
The result looks like this:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault login "s.UL5VCIBhlAd3UlcdEc4ao6CQ"
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                  Value
---                  -----
token                s.UL5VCIBhlAd3UlcdEc4ao6CQ
token_accessor       YGqyl3RrJQThovVMhiTcMveL
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Enable the transit secrets engine and create the unseal key:
vault secrets enable transit
vault write -f transit/keys/unseal_key
That completes vault_1. Now copy the unseal key file and root token file over to the vault_2 machine (192.168.0.112):
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ scp root_token-vault_1 [email protected]:/home/zhaohy/vault/cluster/
[email protected]'s password:
root_token-vault_1 100% 27 16.4KB/s 00:00
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ scp unseal_key-vault_1 [email protected]:/home/zhaohy/vault/cluster/
[email protected]'s password:
unseal_key-vault_1 100% 45 22.7KB/s 00:00
Configure vault_2
Create config-vault_2.hcl
On the 192.168.0.112 machine, create config-vault_2.hcl in the cluster directory; mine is /home/zhaohy/vault/cluster/:
storage "raft" {
  path    = "/home/zhaohy/vault/cluster/raft-vault_2/"
  node_id = "vault_2"
}
listener "tcp" {
  address         = "0.0.0.0:8200"
  cluster_address = "192.168.0.112:8201"
  tls_disable     = true
}
seal "transit" {
  address = "http://192.168.0.111:8200"
  # token is read from the VAULT_TOKEN env variable
  # token = ""
  disable_renewal = "false"
  # Key configuration
  key_name   = "unseal_key"
  mount_path = "transit/"
}
ui = true
disable_mlock = true
cluster_addr = "http://192.168.0.112:8201"
With this configuration vault_2 stores its data under /home/zhaohy/vault/cluster/raft-vault_2/; make sure that directory exists, creating it if necessary.
Start vault_2
The start command must include vault_1's root token (it can be read from the root_token-vault_1 file):
VAULT_TOKEN='s.UL5VCIBhlAd3UlcdEc4ao6CQ' VAULT_API_ADDR='http://192.168.0.112:8200' vault server -log-level=trace -config "config-vault_2.hcl" > "vault_2.log" 2>&1 &
Point the CLI at this node:
export VAULT_ADDR=http://192.168.0.112:8200
Initialize vault_2
vault operator init -format=json -recovery-shares 1 -recovery-threshold 1
Note that the parameters differ from vault_1's init command.
This returns something like:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator init -format=json -recovery-shares 1 -recovery-threshold 1
{
  "unseal_keys_b64": [],
  "unseal_keys_hex": [],
  "unseal_shares": 1,
  "unseal_threshold": 1,
  "recovery_keys_b64": [
    "8WXkExlphKxUXFNpfRQaBnBTJw8NoQAmj8aQWtoWpac="
  ],
  "recovery_keys_hex": [
    "f165e413196984ac545c53697d141a067053270f0da100268fc6905ada16a5a7"
  ],
  "recovery_keys_shares": 1,
  "recovery_keys_threshold": 1,
  "root_token": "s.pFmyULKGAUFtoBO8KwkLTgkr"
}
Save the newly generated recovery_keys_b64 value to a recovery_key-vault_2 file and the root_token to a root_token-vault_2 file:
echo "8WXkExlphKxUXFNpfRQaBnBTJw8NoQAmj8aQWtoWpac=" > recovery_key-vault_2
echo "s.pFmyULKGAUFtoBO8KwkLTgkr" > root_token-vault_2
After a short wait there is no manual unsealing to do: the node talks to vault_1's transit engine and unseals itself automatically.
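Whether the auto-unseal has finished can be checked with vault status -format=json, whose output includes a "sealed" field. A small Python sketch of that check, run against illustrative payloads rather than a live node:

```python
import json

def is_unsealed(status_json: str) -> bool:
    """Return True when the JSON from `vault status -format=json`
    reports the node as unsealed (its "sealed" field is false)."""
    return json.loads(status_json).get("sealed") is False

# Illustrative payloads; a real check would feed in the actual CLI output.
print(is_unsealed('{"sealed": false}'))  # auto-unseal finished -> True
print(is_unsealed('{"sealed": true}'))   # still sealed -> False
```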
Log in to vault_2
vault login "s.pFmyULKGAUFtoBO8KwkLTgkr"
On success the output looks like this:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault login "s.pFmyULKGAUFtoBO8KwkLTgkr"
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                  Value
---                  -----
token                s.pFmyULKGAUFtoBO8KwkLTgkr
token_accessor       Rwb3geDtEn4ywn2MG4gzLRds
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Create a kv engine and store some test data
vault secrets enable -path=kv kv-v2
vault kv put kv/apikey webapp=ABB39KKPTWOR832JGNLS02
vault kv get kv/apikey
Check the cluster status
vault operator raft list-peers
which gives:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft list-peers
Node       Address               State     Voter
----       -------               -----     -----
vault_2    192.168.0.112:8201    leader    true
Only vault_2 is in the cluster so far, and it is the leader.
vault_2 is now configured.
Configure vault_3
On the 192.168.0.114 machine, create config-vault_3.hcl in the cluster directory; mine is /home/zhaohy/vault/cluster/:
storage "raft" {
  path    = "/home/zhaohy/vault/cluster/raft-vault_3/"
  node_id = "vault_3"
}
listener "tcp" {
  address         = "0.0.0.0:8200"
  cluster_address = "192.168.0.114:8201"
  tls_disable     = true
}
seal "transit" {
  address = "http://192.168.0.111:8200"
  # token is read from the VAULT_TOKEN env variable
  # token = ""
  disable_renewal = "false"
  # Key configuration
  key_name   = "unseal_key"
  mount_path = "transit/"
}
ui = true
disable_mlock = true
cluster_addr = "http://192.168.0.114:8201"
Make sure the data directory /home/zhaohy/vault/cluster/raft-vault_3/ exists, creating it if necessary.
Start vault_3
As with vault_2, the start command includes vault_1's root token:
VAULT_TOKEN='s.UL5VCIBhlAd3UlcdEc4ao6CQ' VAULT_API_ADDR='http://192.168.0.114:8200' vault server -log-level=trace -config "config-vault_3.hcl" > "vault_3.log" 2>&1 &
Point the CLI at this node:
export VAULT_ADDR="http://192.168.0.114:8200"
Join vault_3 to vault_2's cluster
vault_3 does not need to be initialized; it joins vault_2's cluster directly:
vault operator raft join http://192.168.0.112:8200
On success:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft join http://192.168.0.112:8200
Key       Value
---       -----
Joined    true
Log in with the root token of the leader, vault_2:
vault login "s.pFmyULKGAUFtoBO8KwkLTgkr"
On success:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault login "s.pFmyULKGAUFtoBO8KwkLTgkr"
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                  Value
---                  -----
token                s.pFmyULKGAUFtoBO8KwkLTgkr
token_accessor       Rwb3geDtEn4ywn2MG4gzLRds
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Check the cluster status
vault operator raft list-peers
which gives:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft list-peers
Node       Address               State       Voter
----       -------               -----       -----
vault_2    192.168.0.112:8201    leader      true
vault_3    192.168.0.114:8201    follower    true
vault_3 is now configured. Open http://192.168.0.114:8200/ui/ in a browser, enter vault_2's root token at the login screen, and you will see the same data as on vault_2.
Configure vault_4
On the 192.168.0.115 machine, create config-vault_4.hcl in the cluster directory:
storage "raft" {
  path    = "/home/zhaohy/vault/cluster/raft-vault_4/"
  node_id = "vault_4"
  retry_join {
    leader_api_addr = "http://192.168.0.112:8200"
  }
  retry_join {
    leader_api_addr = "http://192.168.0.114:8200"
  }
}
listener "tcp" {
  address         = "0.0.0.0:8200"
  cluster_address = "192.168.0.115:8201"
  tls_disable     = true
}
seal "transit" {
  address = "http://192.168.0.111:8200"
  # token is read from the VAULT_TOKEN env variable
  # token = ""
  disable_renewal = "false"
  # Key configuration
  key_name   = "unseal_key"
  mount_path = "transit/"
}
ui = true
disable_mlock = true
cluster_addr = "http://192.168.0.115:8201"
As before, make sure the storage directory /home/zhaohy/vault/cluster/raft-vault_4/ exists, creating it if necessary.
This configuration adds retry_join stanzas: with these, Vault joins the cluster automatically at startup, so no manual join command is needed.
Start vault_4
As with the others, include vault_1's root token:
VAULT_TOKEN='s.UL5VCIBhlAd3UlcdEc4ao6CQ' VAULT_API_ADDR='http://192.168.0.115:8200' vault server -log-level=trace -config "config-vault_4.hcl" > "vault_4.log" 2>&1 &
Once started, opening http://192.168.0.115:8200/ui/ takes you straight to the token login page, where vault_2's root token works. Alternatively, log in from the command line.
Point the CLI at this node:
export VAULT_ADDR="http://192.168.0.115:8200"
Log in and check the cluster status
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault login "s.pFmyULKGAUFtoBO8KwkLTgkr"
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                  Value
---                  -----
token                s.pFmyULKGAUFtoBO8KwkLTgkr
token_accessor       Rwb3geDtEn4ywn2MG4gzLRds
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft list-peers
Node       Address               State       Voter
----       -------               -----       -----
vault_2    192.168.0.112:8201    leader      true
vault_3    192.168.0.114:8201    follower    true
vault_4    192.168.0.115:8201    follower    true
vault_4 is now configured.
Conclusion
The Vault high-availability cluster is complete: data added or changed on any node takes effect on all the others.
To stop the Vault service on a machine:
ps aux | grep "vault server" | grep -v grep | awk '{print $2}' | xargs kill
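The filtering that pipeline performs can be sketched in Python as well; vault_pids below mirrors the grep/awk steps on captured ps aux output (the sample lines and PIDs are illustrative):

```python
def vault_pids(ps_output: str):
    """Extract PIDs (column 2 of `ps aux`) from lines mentioning
    "vault server", skipping the grep process itself (grep -v grep)."""
    return [int(line.split()[1])
            for line in ps_output.splitlines()
            if "vault server" in line and "grep" not in line]

# Illustrative `ps aux` excerpt with hypothetical PIDs:
sample = (
    "zhaohy   4242  0.1  1.2 123456 7890 ?  Sl  10:00 0:01 vault server -config config-vault_2.hcl\n"
    "zhaohy   5151  0.0  0.0  12345  678 ?  S+  10:05 0:00 grep vault server\n"
)
print(vault_pids(sample))  # -> [4242]
```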
To hand cluster leadership to another node, run the following on the vault_2 (192.168.0.112) machine:
vault operator step-down
For example:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft list-peers
Node       Address               State       Voter
----       -------               -----       -----
vault_2    192.168.0.112:8201    leader      true
vault_3    192.168.0.114:8201    follower    true
vault_4    192.168.0.115:8201    follower    true
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator step-down
Success! Stepped down: http://192.168.0.112:8200
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft list-peers
Node       Address               State       Voter
----       -------               -----       -----
vault_2    192.168.0.112:8201    follower    true
vault_3    192.168.0.114:8201    leader      true
vault_4    192.168.0.115:8201    follower    true
Leadership has moved from vault_2 to vault_3.
Removing a cluster node
vault operator raft remove-peer vault_4
For example:
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft list-peers
Node       Address               State       Voter
----       -------               -----       -----
vault_2    192.168.0.112:8201    follower    true
vault_3    192.168.0.114:8201    leader      true
vault_4    192.168.0.115:8201    follower    true
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft remove-peer vault_4
Peer removed successfully!
zhaohy@zhaohy-VirtualBox:~/vault/cluster$ vault operator raft list-peers
Node       Address               State       Voter
----       -------               -----       -----
vault_2    192.168.0.112:8201    follower    true
vault_3    192.168.0.114:8201    leader      true
To add vault_4 back, stop its Vault process, delete the data under its raft-vault_4/ directory, and start it again; thanks to retry_join it rejoins the cluster automatically.
Vault also offers a recovery-mode startup for troubleshooting, which is not covered here; see the official documentation for details: https://learn.hashicorp.com/tutorials/vault/raft-storage