This article uses Istio v1.9.2.
The Istio security architecture diagram is shown below:
As the diagram shows, the security work concentrates in the following areas:
end-user authentication based on JWT;
traffic security through the gateways (ingress/egress);
traffic security inside the mesh (authentication, authorization);
the security of Istio itself.
Unless explicitly configured otherwise, Istio installed with the official profiles (default/demo, etc.) enables mutual TLS (mTLS) by default. As the security architecture diagram shows, when services inside the mesh talk to each other, the data between the two Envoy proxies is encrypted with TLS.
The relevant source can be found on GitHub; the default mTLS setting is true:
EnableAutoMtls: &types.BoolValue{Value: true},
See the official reference for the detailed configuration.
Older versions configured this through Values.global.mtls.enabled and Values.global.mtls.auto; newer versions replace them with the PeerAuthentication resource and meshConfig.enableAutoMtls, see GitHub.
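For reference, a minimal sketch of the new-style setting applied as an IstioOperator overlay (installed with istioctl install -f; the file name is arbitrary):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableAutoMtls: true # the default; set to false to turn auto mTLS off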
By default Istio performs mTLS encryption with certificates and keys it generates itself, and pushes configuration to target endpoints asynchronously according to the authentication policies. Once a proxy receives the configuration, the new authentication requirements take effect immediately.
Create three namespaces, full, part, and legacy, deploy the httpbin and sleep services in each, and inject sidecars everywhere except legacy.
# kubectl create ns full
# kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n full
# kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n full
# kubectl create ns part
# kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n part
# kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n part
# kubectl create ns legacy
# kubectl apply -f samples/httpbin/httpbin.yaml -n legacy
# kubectl apply -f samples/sleep/sleep.yaml -n legacy
If traffic is mTLS-encrypted by Istio, the proxies automatically add the X-Forwarded-Client-Cert header, which can be inspected through the httpbin service's /headers endpoint.
# kubectl -nfull exec -it deploy/sleep -- curl -s http://httpbin.part:8000/headers | grep X-Forwarded-Client-Cert
"X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/full/sa/httpbin;Hash=aa013048768c74c5289c2ae4bbab4f944cb878d13e7dee78aa75d7b7930a34fc;Subject=\"\";URI=spiffe://cluster.local/ns/full/sa/sleep"
Use the following loop to have the services in all three namespaces call each other; every request succeeds:
# for from in "full" "part" "legacy"; do for to in "full" "part" "legacy"; do echo -e "sleep.${from} to httpbin.${to}";kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/headers" -s -w "response code: %{http_code}" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$';echo ; done; done
sleep.full to httpbin.full
URI=spiffe://cluster.local/ns/full/sa/sleep # mTLS
response code: 200
sleep.full to httpbin.part
URI=spiffe://cluster.local/ns/full/sa/sleep # mTLS
response code: 200
sleep.full to httpbin.legacy # plaintext (legacy has no sidecar)
response code: 200
sleep.part to httpbin.full
URI=spiffe://cluster.local/ns/part/sa/sleep # mTLS
response code: 200
sleep.part to httpbin.part
URI=spiffe://cluster.local/ns/part/sa/sleep # mTLS
response code: 200
sleep.part to httpbin.legacy # plaintext (legacy has no sidecar)
response code: 200
sleep.legacy to httpbin.full # permissive mode, plaintext
response code: 200
sleep.legacy to httpbin.part # permissive mode, plaintext
response code: 200
sleep.legacy to httpbin.legacy # plaintext
response code: 200
The results show that mTLS is applied only when both sides of a call carry sidecar proxies; in every other case the traffic is not encrypted.
This is a special mode Istio provides: permissive mode.
Permissive mode is also enabled by default alongside mutual TLS. It allows a service in the mesh to accept both plaintext and mutual TLS traffic, with the proxies deciding automatically whether to encrypt (when two HTTP services both carry sidecars, the sidecars talk mTLS by default; when one of them has no sidecar, the two can still communicate, in plaintext).
Permissive mode is one of the settings in Istio's PeerAuthentication custom resource (PA, peer authentication). The mode can be DISABLE (plaintext only), STRICT (mTLS only), PERMISSIVE, or UNSET (inherit the parent's setting if one exists, otherwise PERMISSIVE).
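For illustration, a minimal PA sketch that pins a single workload to PERMISSIVE (the resource name here is hypothetical):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: httpbin-permissive # hypothetical name
  namespace: part
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: PERMISSIVE # accept both plaintext and mTLS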
An authentication policy specifies the authentication requirements for workloads receiving requests in the Istio mesh. In fact, the PA resource only controls whether the receiving side enables TLS authentication; it carries no identity information itself. Identity requirements are configured through the RequestAuthentication resource.
The PA resource is mainly used to configure the authentication mode for a namespace or a specific workload, through the combination of its namespace and selector fields:
namespace is the root namespace, selector empty or unset: the configuration applies mesh-wide;
namespace is a specific namespace, selector empty or unset: the configuration applies to that namespace;
namespace is a specific namespace, selector non-empty: the configuration applies to the matching workloads.
Istio applies the narrowest matching policy to each workload, in this order:
workload-specific
namespace-wide
mesh-wide
Policies at different levels can be thought of as a parent-child relationship, which allows inheritance.
Create a STRICT-mode PA in the root namespace:
# kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
Run a slightly modified version of the command above:
# for from in "full" "part" "legacy"; do for to in "full" "part" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.full to httpbin.full: 200
sleep.full to httpbin.part: 200
sleep.full to httpbin.legacy: 200
sleep.part to httpbin.full: 200
sleep.part to httpbin.part: 200
sleep.part to httpbin.legacy: 200
sleep.legacy to httpbin.full: 000
command terminated with exit code 56
sleep.legacy to httpbin.part: 000
command terminated with exit code 56
sleep.legacy to httpbin.legacy: 200
Because an authentication policy specifies requirements for the workload receiving requests (the upstream host), a sidecar-equipped service in STRICT mTLS mode cannot accept plaintext traffic: the connection is reset and the curl client exits abnormally.
curl: (56) Recv failure: Connection reset by peer command terminated with exit code 56
Setting a STRICT-mode PA at a specific namespace scope works the same way.
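For example, restricting only the full namespace would look like this (the same shape as the mesh-wide policy, just scoped to one namespace):
# kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: full
spec:
  mtls:
    mode: STRICT
EOF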
If a use case calls for a workload-level policy on outbound mTLS traffic, the PA must be combined with a DestinationRule (DR), for example:
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin"
spec:
  host: "httpbin.part.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
For the tls.mode field, see the official reference: ISTIO_MUTUAL means the traffic is encrypted with certificates managed by Istio itself, while DISABLE disables the TLS connection.
Run the command again to test:
# for from in "full" "part" "legacy"; do for to in "full" "part" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.full to httpbin.full: 200
sleep.full to httpbin.part: 503
sleep.full to httpbin.legacy: 200
sleep.part to httpbin.full: 200
sleep.part to httpbin.part: 503
sleep.part to httpbin.legacy: 200
sleep.legacy to httpbin.full: 000
command terminated with exit code 56
sleep.legacy to httpbin.part: 000
command terminated with exit code 56
sleep.legacy to httpbin.legacy: 200
As discussed in another article on 503 errors, when TLS configuration conflicts inside the mesh, the request is returned to the client with status code 503 (here, part is configured to receive mTLS traffic, but the traffic sent to it is plaintext). Because legacy has no sidecar, the DR resource cannot take effect for services in that namespace, and curl still exits with error code 56.
Note: as the example shows, disabling mTLS through a DR on a sidecar-equipped workload is not the same thing as the default plaintext HTTP of a workload without a sidecar.
Adjust the policies again so that the following conditions hold:
# kubectl get -A pa
NAMESPACE      NAME      MODE      AGE
istio-system   default   STRICT    134m    # mesh-wide mTLS
part           default   DISABLE   2m31s   # mTLS disabled in the part namespace
# kubectl get -A dr
NAMESPACE   NAME      HOST               AGE
part        default   httpbin.part.svc   15m   # mTLS disabled only toward 'httpbin.part.svc'
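The resources behind this listing would look roughly like the following sketch (reconstructed from the names, namespaces, and modes shown above):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: part
spec:
  mtls:
    mode: DISABLE # the part namespace accepts plaintext only
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: part
spec:
  host: httpbin.part.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE # clients send plaintext toward this host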
# for from in "full" "part" "legacy"; do for to in "full" "part" "legacy"; do echo -e "sleep.${from} to httpbin.${to}";kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/headers" -s -w "response code: %{http_code}" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$';echo ; done; done
sleep.full to httpbin.full
URI=spiffe://cluster.local/ns/full/sa/sleep # full to full: mTLS
response code: 200
sleep.full to httpbin.part # full to part: plaintext, because of pa.part
response code: 200
sleep.full to httpbin.legacy # plaintext: legacy has no sidecar
response code: 200
sleep.part to httpbin.full # mTLS, because of the mesh-wide policy
URI=spiffe://cluster.local/ns/part/sa/sleep
response code: 200
sleep.part to httpbin.part # plaintext, because of pa.part and dr.part
response code: 200
sleep.part to httpbin.legacy # plaintext: legacy has no sidecar
response code: 200
sleep.legacy to httpbin.full # error: legacy sends plaintext, full requires mTLS
command terminated with exit code 56
response code: 000
sleep.legacy to httpbin.part # plaintext: part accepts only plaintext
response code: 200
sleep.legacy to httpbin.legacy # plaintext
response code: 200
These results confirm that PA policies take effect on the workload receiving requests, while DR resources apply to traffic headed for a destination.
With PA and DR you can also set authentication policy for individual ports:
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "httpbin"
  namespace: "part"
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: STRICT
  portLevelMtls:
    80: # port on the pod/container
      mode: DISABLE
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin"
  namespace: "part"
spec:
  host: httpbin.part.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 8000 # port on the Service
      tls:
        mode: DISABLE
Verify once more:
# for from in "full" "part" "legacy"; do for to in "full" "part" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.full to httpbin.full: 200
sleep.full to httpbin.part: 200
sleep.full to httpbin.legacy: 200
sleep.part to httpbin.full: 200
sleep.part to httpbin.part: 200
sleep.part to httpbin.legacy: 200
sleep.legacy to httpbin.full: 000
command terminated with exit code 56
sleep.legacy to httpbin.part: 200
sleep.legacy to httpbin.legacy: 200
Now services in legacy can reach the httpbin service in part in plaintext, because that specific port has been configured for plaintext.
Note that the port specified in a PA resource is the pod/container port, whereas the port in a DR is the Service port (which is normally the port you fill in), as the httpbin Service below shows (port 8000 maps to targetPort 80):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: httpbin
  name: httpbin
spec:
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: httpbin
When a policy is updated, Istio pushes the new policy to workloads almost in real time, but it cannot guarantee that every workload receives it at the same moment. When switching peer authentication policies, it is therefore best to use PERMISSIVE mode as a transition.
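A sketch of such a transition for one namespace (apply PERMISSIVE first, verify traffic, then tighten):
# kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: part
spec:
  mtls:
    mode: PERMISSIVE # step 1: accept both plaintext and mTLS
EOF
# after confirming all clients still succeed, change mode to STRICT and re-apply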
Istio provides an Ingress Gateway deployed at the edge of the mesh; traffic entering the cluster is managed by configuring Gateway resources.
Secure gateways can be configured through Istio's Secret Discovery Service (SDS): when a gateway serves multiple hosts or ingress domains, certificates can be added, removed, or updated dynamically, with no gateway pod restart. In older versions this feature had to be enabled explicitly; in 1.9 it is enabled by default.
Istio supports the following secret formats:
keys named tls.key and tls.crt (the format used in the examples below); for mutual TLS, the CA uses the ca.crt key;
keys named key and cert; for mutual TLS, the CA uses the cacert key;
a secret named <secret> with keys key and cert, plus a separate secret named <secret>-cacert whose CA key is cacert.
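For instance, a secret in the second format could be created like this (example-credential is a hypothetical name; the examples below use the first format instead):
# kubectl create -n istio-system secret generic example-credential --from-file=key=httpbin.key \
--from-file=cert=httpbin.crt --from-file=cacert=ca.crt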
Generate mutual client and server authentication certificates for the httpbin service in the full namespace (with the server domain set to httpbin.example.com); HTTPS certificate generation is widely documented online, and a sketch follows below.
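A minimal sketch with openssl, assuming a self-signed CA (the file names match those used in the commands below; adjust the subjects as needed):
# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
-subj '/O=example Inc./CN=example.com' -keyout ca.key -out ca.crt
# openssl req -out httpbin.csr -newkey rsa:2048 -nodes -keyout httpbin.key \
-subj '/CN=httpbin.example.com/O=httpbin organization'
# openssl x509 -req -sha256 -days 365 -CA ca.crt -CAkey ca.key -set_serial 0 \
-in httpbin.csr -out httpbin.crt
# openssl req -out client.csr -newkey rsa:2048 -nodes -keyout client.key \
-subj '/CN=client.example.com/O=client organization'
# openssl x509 -req -sha256 -days 365 -CA ca.crt -CAkey ca.key -set_serial 1 \
-in client.csr -out client.crt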
Create the corresponding secret in istio-system (the namespace where istio-ingressgateway lives):
kubectl create -n istio-system secret generic httpbin-credential --from-file=tls.key=httpbin.key \
--from-file=tls.crt=httpbin.crt --from-file=ca.crt=ca.crt
Determine the INGRESS_HOST and SECURE_INGRESS_PORT environment variables as described in the official example, then create a Gateway and VirtualService for the httpbin service.
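One common way to set them, assuming the istio-ingressgateway Service is of type LoadBalancer (use a node address and NodePort instead if there is no external load balancer):
# export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
The Gateway and VirtualService for httpbin: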
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpsgw
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: httpbin-credential # must match the secret name
    hosts:
    - httpbin.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpsgw
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        port:
          number: 8000 # Service port
        host: httpbin
Access the gateway from a client machine:
# curl -s -HHost:httpbin.example.com --resolve "httpbin.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST" --cacert ./ca.crt --cert ./client.crt --key ./client.key "https://httpbin.example.com:$SECURE_INGRESS_PORT/status/418"
-=[ teapot ]=-
_...._
.' _ _ `.
| ."` ^ `". _,
\_;`"---"`|//
| ;/
\_ _/
`"""`
Note: the httpbin service itself speaks plain HTTP inside the mesh. The gateway adds a TLS layer so that inbound requests must be HTTPS, while Istio's full set of HTTP traffic-management features remains available.
Deploy another HTTP service in the same namespace:
apiVersion: v1
kind: Service
metadata:
  name: goserver
spec:
  ports:
  - name: http
    port: 9091
    protocol: TCP
    targetPort: 8081
  selector:
    app: goserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: goserver
  template:
    metadata:
      labels:
        app: goserver
    spec:
      containers:
      - image: goserver:v1.0.1
        name: goserver
        ports:
        - containerPort: 8081
          protocol: TCP
Verify that by default this service is reached over plain HTTP inside the mesh:
# kubectl -nfull get po
NAME READY STATUS RESTARTS AGE
goserver-69f9c9f89-tqs52 2/2 Running 0 17h
httpbin-66cdbdb6c5-vhkdl 2/2 Running 2 6d22h
sleep-865cdd767b-6qtwg 2/2 Running 0 47h
# kubectl -nfull exec deploy/sleep -- curl goserver:9091/healthz -s
{"status":"healthy","hostName":"goserver-69f9c9f89-tqs52"}
Use the CA from earlier to sign mutual client and server authentication certificates for this service (with the server domain set to go.example.com; a signing sketch follows below), and create the corresponding secret in istio-system (the namespace where istio-ingressgateway lives):
kubectl create -n istio-system secret generic go-credential --from-file=tls.key=goserver.key \
--from-file=tls.crt=goserver.crt --from-file=ca.crt=ca.crt
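Following the same pattern as before, the signing step might look like this (a sketch reusing ca.crt/ca.key from the earlier CA):
# openssl req -out goserver.csr -newkey rsa:2048 -nodes -keyout goserver.key \
-subj '/CN=go.example.com/O=go organization'
# openssl x509 -req -sha256 -days 365 -CA ca.crt -CAkey ca.key -set_serial 2 \
-in goserver.csr -out goserver.crt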
Create a Gateway and VirtualService for the goserver service:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpsgw
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: httpbin-credential # must match the secret name
    hosts:
    - httpbin.example.com
  - port:
      number: 443
      name: https-go # server names must be unique within the gateway
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: go-credential # must match the secret name
    hosts:
    - go.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: goserver
spec:
  hosts:
  - go.example.com
  gateways:
  - httpsgw
  http:
  - match:
    - uri:
        exact: /healthz
    route:
    - destination:
        host: goserver
        port:
          number: 9091
After applying this configuration, access both services:
# curl -s -HHost:go.example.com --resolve "go.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST" --cacert ./ca.crt --cert ./client.crt --key ./client.key "https://go.example.com:$SECURE_INGRESS_PORT/healthz"
{"status":"healthy","hostName":"goserver-69f9c9f89-tqs52"}
# curl -s -HHost:httpbin.example.com --resolve "httpbin.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST" --cacert ./ca.crt --cert ./client.crt --key ./client.key "https://httpbin.example.com:$SECURE_INGRESS_PORT/status/418"
-=[ teapot ]=-
... ...
If a service inside the mesh already serves HTTPS itself, there is no need for the gateway to add another TLS layer; instead, the gateway's passthrough mode can be used.
# abridged manifests
kind: Service
metadata:
  name: nginx-https
spec:
  ports:
  - port: 443
    name: https
---
kind: Deployment
metadata:
  name: nginx-https
spec:
  template:
    spec:
      containers:
      - image: ymqytw/nginxhttps:1.5
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: nginx-https
      - name: configmap-volume
        configMap:
          name: nginx-https
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-https
data:
  default.conf: |-
    server {
      listen 443 ssl;
      ssl on;
      ssl_certificate /etc/nginx/ssl/server.crt; # certificate path
      ssl_certificate_key /etc/nginx/ssl/server.key; # private key path
      ssl_client_certificate /etc/nginx/ssl/ca.crt; # mutual TLS
      ssl_verify_client on; # mutual TLS
    }
---
kind: Secret
metadata:
  name: nginx-https
type: Opaque
data:
  server.crt: xxx
  server.key: yyy
  ca.crt: zzz
First verify locally that the service is healthy and serving HTTPS (here it is exposed via NodePort and resolved manually; the certificate domain is nginx.example.com):
# curl -HHost:nginx.example.com --resolve "nginx.example.com:24755:1xx.xx.xx.184" --cacert ./ca.crt --cert ./client.crt --key ./client.key https://nginx.example.com:24755 -I
HTTP/1.1 200 OK
Server: nginx/1.11.3
Configure the gateway in passthrough mode:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gw-pt
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - nginx.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-https
spec:
  hosts:
  - nginx.example.com
  gateways:
  - gw-pt
  tls:
  - match:
    - port: 443
      sniHosts:
      - nginx.example.com
    route:
    - destination:
        host: nginx-https # svc name
        port:
          number: 443 # svc port
Then test access through the gateway:
# curl -I -HHost:nginx.example.com --resolve "nginx.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST" --cacert ./ca.crt --cert ./client.crt --key ./client.key "https://nginx.example.com:$SECURE_INGRESS_PORT/"
HTTP/1.1 200 OK
Server: nginx/1.11.3
Other Istio features will be covered later as time permits.