mTLS
Access the Java service from the Sleep service.
PeerAuthentication defines whether traffic is tunneled over mTLS on its way to the sidecar.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  # Scope:
  # - mesh-wide: apply the policy in the Istio root namespace (e.g. istio-system)
  # - namespace-wide: apply it in a regular namespace, without a selector
  # - Service (workload) level: requires a non-empty selector.matchLabels and,
  #   for outbound traffic, a matching DestinationRule. This example is
  #   Service-level.
  namespace: foo
spec:
  # The workloads to match (the official docs call these "workloads").
  # If selector is omitted and a namespace is set, the policy is
  # namespace-wide.
  selector:
    matchLabels:
      app: finance
  mtls:
    # mode:
    # 1: UNSET      - inherit from the parent policy if one exists; otherwise
    #                 treated as PERMISSIVE
    # 2: DISABLE    - no mTLS tunnel is established
    # 3: PERMISSIVE - connections may be either plaintext or mTLS
    # 4: STRICT     - connections must be mTLS (TLS with a client certificate)
    mode: UNSET
  # Per-port mutual TLS settings
  portLevelMtls:
    8080:
      mode: DISABLE
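Since this example is Service-level, outbound traffic to the workload also needs a matching DestinationRule (as covered later in this note). Below is a minimal sketch; the name and the host finance.foo.svc.cluster.local are assumptions for illustration:

```yaml
# Hypothetical companion DestinationRule for the Service-level policy above.
# The name and host are assumptions, not from the original deployment.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: finance
  namespace: foo
spec:
  host: finance.foo.svc.cluster.local  # assumed Service for app: finance
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL  # originate mutual TLS towards this Service
```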
Create two pods in the mtls namespace and inject a sidecar into each; create one pod in the legacy namespace without a sidecar:
$ kubectl apply -f <(istioctl kube-inject -f ./deployment-v1.yaml) -n mtls
$ kubectl apply -f <(istioctl kube-inject -f ~/istio-1.6.0/samples/sleep/sleep.yaml) -n mtls
$ kubectl apply -f samples/sleep/sleep.yaml -n legacy
Verify the deployment:
[root@node4 mtls]# kubectl get pod -nlegacy -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpbin-779c54bf49-ng6hf 1/1 Running 0 10s 192.168.33.141 node5
[root@node4 mtls]# kubectl get pod -nmtls -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
java-service-deployv1-7c4d786879-f4nxp 2/2 Running 0 9m6s 192.168.139.11 node6
sleep-77747b8698-t7h92 2/2 Running 0 6m36s 192.168.33.147 node5
Test access from the sleep service in each of the two namespaces. The output below shows that both can reach the java-service:
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n mtls -o jsonpath={.items..metadata.name}) -c sleep -n mtls -- curl http://java-service:8080/demo/service/test
张三小企业一v1
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -n legacy -c sleep -- curl http://java-service.mtls.svc.cluster.local:8080/demo/service/test
张三小企业一v1
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
Test access from the sleep service in both namespaces again. This time only the sleep in the mtls namespace (the one with a sidecar) can reach the service:
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n mtls -o jsonpath={.items..metadata.name}) -c sleep -n mtls -- curl http://java-service:8080/demo/service/test
张三小企业一v1
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -n legacy -c sleep -- curl http://java-service.mtls.svc.cluster.local:8080/demo/service/test
curl: (56) Recv failure: Connection reset by peer
command terminated with exit code 56
Next, create a pod in the default namespace with automatic sidecar injection, and check whether two sidecar-injected workloads can reach each other:
[root@node4 mtls]# kubectl apply -f <(istioctl kube-inject -f ~/istio-1.6.0/samples/sleep/sleep.yaml) -n default
serviceaccount/sleep unchanged
service/sleep unchanged
deployment.apps/sleep configured
[root@node4 mtls]# kubectl get pod -n default | grep sleep
sleep-77747b8698-d8r9z 2/2 Running 0 160m
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n default -o jsonpath={.items..metadata.name}) -n default -c sleep -- curl http://java-service.mtls.svc.cluster.local:8080/demo/service/test
张三小企业一v1
As you can see, access between two sidecar-injected workloads works without any problem. Why this is possible is shown in the official examples below.
(To be added later…)
The sleep and httpbin pods in the foo and bar namespaces both have automatic sidecar injection enabled; legacy has no injection at all:
$ kubectl create ns foo
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
$ kubectl create ns bar
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n bar
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n bar
$ kubectl create ns legacy
$ kubectl apply -f samples/httpbin/httpbin.yaml -n legacy
$ kubectl apply -f samples/sleep/sleep.yaml -n legacy
With no authentication policy enabled, all pairs can access each other:
[root@node4 mtls]# for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 200
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200
Note: all traffic between services whose pods have sidecars injected already uses mutual TLS. For example, look at the response from httpbin's /headers endpoint: when mutual TLS is in use, the proxy injects the X-Forwarded-Client-Cert header into the upstream backend request, and the presence of this header is evidence that mutual TLS is being used:
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl http://httpbin.foo:8000/headers -s | grep X-Forwarded-Client-Cert
"X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=e73aeed3b40e36adef02331bacbc00e2c24d5ebcaaf5f730c3d3d89bef78b3b5;Subject=\"\";URI=spiffe://cluster.local/ns/foo/sa/sleep"
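As a side note, the caller's SPIFFE identity can be read straight out of that header value. A small shell sketch (the header string below is copied from the output above):

```shell
# Extract the caller's SPIFFE identity (the URI= element) from an
# X-Forwarded-Client-Cert value, copied from the curl output above.
xfcc='By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=e73aeed3b40e36adef02331bacbc00e2c24d5ebcaaf5f730c3d3d89bef78b3b5;Subject="";URI=spiffe://cluster.local/ns/foo/sa/sleep'
# Split the header on ';' and keep the element that starts with URI=.
caller=$(printf '%s' "$xfcc" | tr ';' '\n' | sed -n 's/^URI=//p')
echo "$caller"   # prints spiffe://cluster.local/ns/foo/sa/sleep
```

The By= element is the receiving workload's identity (httpbin) and URI= is the client's (sleep), so the header records both ends of the mTLS connection.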
For a pod without a sidecar the header does not exist, because traffic is sent in plaintext:
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl http://httpbin.legacy:8000/headers -s | grep X-Forwarded-Client-Cert
[root@node4 mtls]#
STRICT mode
Note: although Istio automatically upgrades all traffic between sidecars and services to mutual TLS, services can still receive plaintext traffic. To prevent non-mutual-TLS traffic across the whole mesh, set a mesh-wide peer authentication policy with the mutual TLS mode set to STRICT. A mesh-wide policy must not have a selector and must be applied in the root namespace:
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
This peer authentication policy takes effect: it configures every service in the mesh to accept only requests encrypted with TLS. Since it specifies no value for the selector field, the policy applies to all services in the mesh.
Run the test again:
[root@node4 mtls]# for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 000
command terminated with exit code 56
sleep.legacy to httpbin.bar: 000
command terminated with exit code 56
sleep.legacy to httpbin.legacy: 200
Except for the requests from sleep.legacy (which has no sidecar proxy), everything succeeds. This matches the expectation: only services whose pods have a sidecar injected can complete the request.
PERMISSIVE mode
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: PERMISSIVE
This policy allows both plaintext and mTLS requests.
Result:
[root@node4 simple]# for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 200
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200
Enable mutual TLS per namespace or Service
Namespace-wide policy
To enforce mutual TLS for all services within a namespace, use a namespace-wide policy. Its specification is the same as a mesh-wide policy's, except that you specify the target namespace under the metadata field.
For example, the following peer authentication policy enables STRICT mutual TLS for the foo namespace:
STRICT mode
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "foo"
spec:
  mtls:
    mode: STRICT
[root@node4 mtls]# for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 000
command terminated with exit code 56
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200
Because this policy applies only to services in the foo namespace, you see that only the request from sleep.legacy (no sidecar injected) to httpbin.foo fails.
To set a peer authentication policy for a specific Service, you must configure the selector field with labels that match the desired Service. Istio cannot aggregate Service-level policies for outbound mutual TLS traffic to a Service, so you also need to configure a DestinationRule to manage that behavior.
For example, the following peer authentication policy and DestinationRule enable strict mutual TLS for the httpbin.bar Service:
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "httpbin"
  namespace: "bar"
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: STRICT
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin"
spec:
  host: "httpbin.bar.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
Run the test command again. As expected, the request from sleep.legacy to httpbin.bar fails for the same reason:
[root@node4 mtls]# for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 200
sleep.legacy to httpbin.bar: 000
command terminated with exit code 56
sleep.legacy to httpbin.legacy: 200
The following peer authentication policy requires mutual TLS on all ports except port 80.
Note: the port value in the PeerAuthentication policy is the workload's container port, while the port in the DestinationRule is the Service's port. Also, you can only use portLevelMtls if the port is bound to a Service; Istio ignores it otherwise.
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "httpbin"
  namespace: "bar"
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: STRICT
  portLevelMtls:
    80:
      mode: DISABLE
DestinationRule
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin"
spec:
  host: httpbin.bar.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 8000
      tls:
        mode: DISABLE
A peer authentication policy for a specific Service takes precedence over the namespace-wide policy. You can test this by adding a policy that disables mutual TLS for the httpbin Service in the foo namespace. Note that you have already created a namespace-wide policy that enables mutual TLS for all services in namespace foo; now observe the requests from sleep.legacy.
Namespace-wide policy
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "foo"
spec:
  mtls:
    mode: STRICT
Run the test command:
[root@node4 mtls]# for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 000
command terminated with exit code 56
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200
Service-specific policy
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "overwrite-example"
  namespace: "foo"
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: DISABLE
DestinationRule
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "overwrite-example"
spec:
  host: httpbin.foo.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
Run the test command:
[root@node4 mtls]# kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
200
Re-run the request from sleep.legacy; this time it succeeds, confirming that the Service-specific policy overrides the namespace-wide policy.
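The precedence behavior demonstrated above (workload-specific over namespace-wide over mesh-wide, with PERMISSIVE as the final fallback) can be sketched as a toy shell function. This is not Istio code, only an illustration of the resolution order:

```shell
# Toy sketch of peer-authentication policy resolution (not Istio code):
# the most specific matching policy wins: workload > namespace > mesh.
# An empty string means "no policy at this level"; UNSET means "inherit".
effective_mode() {
  workload="$1"; namespace="$2"; mesh="$3"
  for p in "$workload" "$namespace" "$mesh"; do
    if [ -n "$p" ] && [ "$p" != "UNSET" ]; then
      echo "$p"
      return
    fi
  done
  # No mode set at any level: Istio falls back to PERMISSIVE.
  echo "PERMISSIVE"
}

# httpbin in foo: the namespace policy is STRICT, but the workload-level
# overwrite-example policy DISABLEs mTLS, so DISABLE wins.
effective_mode "DISABLE" "STRICT" ""   # prints DISABLE
```

This is why sleep.legacy could reach httpbin.foo at the end: the DISABLE set by the workload-level policy shadows the namespace's STRICT.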