Using Calico with Kubernetes NetworkPolicy for tenant isolation and intra-tenant workload isolation

Multi-tenant network isolation between namespaces

We often need to isolate networks on a per-tenant basis. Kubernetes provides NetworkPolicy for defining network policies, which lets us implement tenant isolation as well as isolation between workloads of the same tenant. NetworkPolicy offers policy-based network control that isolates applications and reduces the attack surface: it uses label selectors to emulate traditional segmented networks and controls traffic between them, as well as traffic from outside, through policies. NetworkPolicy itself, however, needs a third-party network plugin to enforce it, such as Calico, Romana, Weave Net, or Trireme.

Here we use Calico for the tests. Prepare two namespaces, named demo and local, and verify the network isolation between them.
Set up the network policies in two steps:
- 1. Create a default deny-all policy for each namespace, so that no pod can be reached by default.
- 2. Label the Deployments, Pods, and Services within a namespace, e.g. "environment: demo", and define NetworkPolicy ingress rules that admit only pods carrying that label.

1. Create the NetworkPolicy - policy.yaml

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: demo
spec:
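  # An empty podSelector selects every pod in this namespace; with no
  # ingress rules listed, all inbound traffic to these pods is denied.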
  podSelector: {}

---

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-demo
  namespace: demo
spec:
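  # Admit ingress to pods labeled environment: demo only from pods
  # (in this same namespace) that carry the same label.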
  podSelector:
    matchLabels:
      environment: demo
  ingress:
    - from:
      - podSelector:
          matchLabels:
            environment: demo

---

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: local
spec:
  podSelector: {}

---

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-local
  namespace: local
spec:
  podSelector:
    matchLabels:
      environment: local
  ingress:
    - from:
      - podSelector:
          matchLabels:
            environment: local
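
Apply the policies and confirm they exist in both namespaces (the demo and local namespaces must exist first; they are declared in demo.yaml and local.yaml below, so those manifests can also be applied beforehand):

kubectl apply -f policy.yaml
kubectl -n demo get networkpolicy
kubectl -n local get networkpolicy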

2. Write demo.yaml and local.yaml and create the corresponding pods

demo.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello-deployment
  namespace: demo
  labels:
    app: nginx-hello
    environment: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-hello
  template:
    metadata:
      labels:
        app: nginx-hello
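        # NetworkPolicy matches on pod labels, so environment must be set here on the pod template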
        environment: demo
    spec:
      containers:
      - name: nginx-hello
        image: qzschen/nginx-hello
        ports:
        - containerPort: 80

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-hello-service
  namespace: demo
spec:
  selector:
    app: nginx-hello
    environment: demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

local.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: local
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello-deployment
  namespace: local
  labels:
    app: nginx-hello
    environment: local
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-hello
  template:
    metadata:
      labels:
        app: nginx-hello
        environment: local
    spec:
      containers:
      - name: nginx-hello
        image: qzschen/nginx-hello
        ports:
        - containerPort: 80

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-hello-service
  namespace: local
spec:
  selector:
    app: nginx-hello
    environment: local
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
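
Apply both manifests and verify that the pods carry the environment label the policies match on:

kubectl apply -f demo.yaml -f local.yaml
kubectl -n demo get pods --show-labels
kubectl -n local get pods --show-labels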

Test results for multi-tenant network isolation by namespace

The results below show that tenants can be isolated from each other at the namespace level: pods in the same namespace can ping each other, while pods in different namespaces cannot.

[root@SCSP01539 policy]# ls
demo.yaml  local.yaml  policy.yaml
[root@SCSP01539 policy]# kubectl -n demo get po -o wide                                       
NAME                                      READY     STATUS    RESTARTS   AGE       IP               NODE
nginx-hello-deployment-5bfc847b48-mh62d   1/1       Running   0          6m        172.12.232.82    10.130.33.13
nginx-hello-deployment-5bfc847b48-sn7x4   1/1       Running   0          6m        172.12.180.175   10.130.33.8
[root@SCSP01539 policy]# kubectl -n local get po -o wide  
NAME                                      READY     STATUS    RESTARTS   AGE       IP              NODE
nginx-hello-deployment-5bd5585948-fp8kx   1/1       Running   0          6m        172.12.210.73   10.130.33.11
nginx-hello-deployment-5bd5585948-nm492   1/1       Running   0          6m        172.12.195.75   10.130.33.12
[root@SCSP01539 policy]# kubectl -n local exec -it nginx-hello-deployment-5bd5585948-fp8kx sh
sh-4.1# ping  172.12.180.175
PING 172.12.180.175 (172.12.180.175) 56(84) bytes of data.
^C
--- 172.12.180.175 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1298ms

sh-4.1# ping 172.12.195.75 
PING 172.12.195.75 (172.12.195.75) 56(84) bytes of data.
64 bytes from 172.12.195.75: icmp_seq=1 ttl=62 time=0.305 ms
64 bytes from 172.12.195.75: icmp_seq=2 ttl=62 time=0.198 ms
^C
--- 172.12.195.75 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1095ms
rtt min/avg/max/mdev = 0.198/0.251/0.305/0.055 ms
sh-4.1# ping 172.12.210.73
PING 172.12.210.73 (172.12.210.73) 56(84) bytes of data.
64 bytes from 172.12.210.73: icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from 172.12.210.73: icmp_seq=2 ttl=64 time=0.033 ms
^C
--- 172.12.210.73 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1239ms
rtt min/avg/max/mdev = 0.033/0.054/0.075/0.021 ms
sh-4.1# ping 172.12.232.82
PING 172.12.232.82 (172.12.232.82) 56(84) bytes of data.
^C
--- 172.12.232.82 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1229ms
[root@SCSP01539 policy]#  kubectl -n demo exec -it nginx-hello-deployment-5bfc847b48-mh62d sh
sh-4.1# ping 172.12.195.75
PING 172.12.195.75 (172.12.195.75) 56(84) bytes of data.
^C
--- 172.12.195.75 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2183ms

sh-4.1# ping 172.12.210.73
PING 172.12.210.73 (172.12.210.73) 56(84) bytes of data.
^C
--- 172.12.210.73 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4409ms

sh-4.1# ping 172.12.232.82
PING 172.12.232.82 (172.12.232.82) 56(84) bytes of data.
64 bytes from 172.12.232.82: icmp_seq=1 ttl=64 time=0.064 ms
64 bytes from 172.12.232.82: icmp_seq=2 ttl=64 time=0.043 ms
^C
--- 172.12.232.82 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1987ms
rtt min/avg/max/mdev = 0.043/0.053/0.064/0.012 ms
sh-4.1# ping 172.12.180.175
PING 172.12.180.175 (172.12.180.175) 56(84) bytes of data.
64 bytes from 172.12.180.175: icmp_seq=1 ttl=62 time=0.309 ms
64 bytes from 172.12.180.175: icmp_seq=2 ttl=62 time=0.231 ms
64 bytes from 172.12.180.175: icmp_seq=3 ttl=62 time=0.235 ms
^C
--- 172.12.180.175 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2602ms
rtt min/avg/max/mdev = 0.231/0.258/0.309/0.038 ms
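
The ping tests above only exercise ICMP; the same policies govern TCP as well. As an optional check against the Services (assuming curl is available in the nginx-hello image), something like the following should succeed from a pod labeled environment: demo and time out from a pod in the local namespace:

# from inside the demo namespace - allowed by the access-demo policy
kubectl -n demo exec -it nginx-hello-deployment-5bfc847b48-mh62d -- \
  curl -s --max-time 3 http://nginx-hello-service.demo/

# from the local namespace - blocked by the default-deny policy in demo
kubectl -n local exec -it nginx-hello-deployment-5bd5585948-fp8kx -- \
  curl -s --max-time 3 http://nginx-hello-service.demo/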

Network isolation between workloads within the same namespace

Take the demo namespace as an example: deploy several applications and give them different environment labels, demo1 and demo2, then adjust the NetworkPolicy to match the new labels. A sketch of the adjusted policies is shown below, followed by the application manifests demo1.yaml and demo2.yaml.
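
A minimal sketch of the per-label policies, assuming the default-deny policy for the demo namespace stays in place (the policy names access-demo1 / access-demo2 are placeholders):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-demo1        # assumed name
  namespace: demo
spec:
  podSelector:
    matchLabels:
      environment: demo1
  ingress:
    - from:
      - podSelector:
          matchLabels:
            environment: demo1

---

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-demo2        # assumed name
  namespace: demo
spec:
  podSelector:
    matchLabels:
      environment: demo2
  ingress:
    - from:
      - podSelector:
          matchLabels:
            environment: demo2

Each policy admits ingress only from pods carrying the same environment label, so demo1 and demo2 pods cannot reach each other even though they share a namespace.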

demo1.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: demo

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello1-deployment
  namespace: demo
  labels:
    app: nginx-hello1
    environment: demo1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-hello1
  template:
    metadata:
      labels:
        app: nginx-hello1
        environment: demo1
    spec:
      containers:
      - name: nginx-hello1
        image: qzschen/nginx-hello
        ports:
        - containerPort: 80

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-hello1-service
  namespace: demo
spec:
  selector:
    app: nginx-hello1
    environment: demo1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

demo2.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: demo

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello2-deployment
  namespace: demo
  labels:
    app: nginx-hello2
    environment: demo2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-hello2
  template:
    metadata:
      labels:
        app: nginx-hello2
        environment: demo2
    spec:
      containers:
      - name: nginx-hello2
        image: qzschen/nginx-hello
        ports:
        - containerPort: 80

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-hello2-service
  namespace: demo
spec:
  selector:
    app: nginx-hello2
    environment: demo2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
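
To reproduce the test below, apply the new manifests together with the per-label policies sketched above; judging from the pod list, the original nginx-hello deployment in demo was removed beforehand. A possible sequence (the file name policy-demo12.yaml is an assumption):

kubectl -n demo delete deployment nginx-hello-deployment   # clear out the original demo workload
kubectl apply -f demo1.yaml -f demo2.yaml
kubectl apply -f policy-demo12.yaml   # the access-demo1 / access-demo2 policies sketched above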

Test results for network isolation within the same namespace

The test results below show that, within a single namespace, traffic can also be segmented according to the tenant's workloads: when one tenant runs several workloads, traffic between different workloads is blocked while traffic within the same workload remains open.

[appuser@SCSP01539 policy]$ kubectl -n demo get po -o wide
NAME                                       READY     STATUS    RESTARTS   AGE       IP              NODE
nginx-hello1-deployment-85f4954599-5jtnm   1/1       Running   0          1m        172.12.210.77   10.130.33.11
nginx-hello1-deployment-85f4954599-8tdjc   1/1       Running   0          1m        172.12.232.86   10.130.33.13
nginx-hello2-deployment-7f497dc558-4zg99   1/1       Running   0          1m        172.12.232.87   10.130.33.13
nginx-hello2-deployment-7f497dc558-55rqt   1/1       Running   0          1m        172.12.210.78   10.130.33.11
[appuser@SCSP01539 policy]$ kubectl -n demo exec -it nginx-hello1-deployment-85f4954599-5jtnm sh
sh-4.1# ping 172.12.232.86
PING 172.12.232.86 (172.12.232.86) 56(84) bytes of data.
64 bytes from 172.12.232.86: icmp_seq=1 ttl=62 time=0.412 ms
64 bytes from 172.12.232.86: icmp_seq=2 ttl=62 time=0.238 ms
^C
--- 172.12.232.86 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1145ms
rtt min/avg/max/mdev = 0.238/0.325/0.412/0.087 ms
sh-4.1# ping 172.12.232.87
PING 172.12.232.87 (172.12.232.87) 56(84) bytes of data.
^C
--- 172.12.232.87 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1578ms

sh-4.1# ping 172.12.210.78
PING 172.12.210.78 (172.12.210.78) 56(84) bytes of data.
^C
--- 172.12.210.78 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1123ms

sh-4.1# ping 172.12.210.77
PING 172.12.210.77 (172.12.210.77) 56(84) bytes of data.
64 bytes from 172.12.210.77: icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from 172.12.210.77: icmp_seq=2 ttl=64 time=0.032 ms
^C
--- 172.12.210.77 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1155ms
rtt min/avg/max/mdev = 0.032/0.053/0.075/0.022 ms
sh-4.1# exit
exit
[appuser@SCSP01539 policy]$ kubectl -n demo exec -it nginx-hello2-deployment-7f497dc558-4zg99 sh
sh-4.1# ping 172.12.232.86
PING 172.12.232.86 (172.12.232.86) 56(84) bytes of data.
^C
--- 172.12.232.86 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1214ms

sh-4.1# ping 172.12.210.78
PING 172.12.210.78 (172.12.210.78) 56(84) bytes of data.
64 bytes from 172.12.210.78: icmp_seq=1 ttl=62 time=0.339 ms
64 bytes from 172.12.210.78: icmp_seq=2 ttl=62 time=0.236 ms
^C
--- 172.12.210.78 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1508ms
rtt min/avg/max/mdev = 0.236/0.287/0.339/0.054 ms
sh-4.1# ping 172.12.210.77
PING 172.12.210.77 (172.12.210.77) 56(84) bytes of data.
^C
--- 172.12.210.77 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1116ms
[appuser@SCSP01539 policy]$ kubectl -n local get po -o wide
NAME                                      READY     STATUS    RESTARTS   AGE       IP               NODE
nginx-hello-deployment-5bd5585948-5bkjz   1/1       Running   0          16m       172.12.210.76    10.130.33.11
nginx-hello-deployment-5bd5585948-jqncw   1/1       Running   0          16m       172.12.180.179   10.130.33.8
[appuser@SCSP01539 policy]$ kubectl -n local exec -it nginx-hello-deployment-5bd5585948-5bkjz sh
sh-4.1# ping 172.12.180.179
PING 172.12.180.179 (172.12.180.179) 56(84) bytes of data.
64 bytes from 172.12.180.179: icmp_seq=1 ttl=62 time=0.392 ms
64 bytes from 172.12.180.179: icmp_seq=2 ttl=62 time=0.227 ms
^C
--- 172.12.180.179 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1094ms
rtt min/avg/max/mdev = 0.227/0.309/0.392/0.084 ms
sh-4.1# ping 172.12.210.78
PING 172.12.210.78 (172.12.210.78) 56(84) bytes of data.
^C
--- 172.12.210.78 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1188ms

sh-4.1# ping 172.12.210.77
PING 172.12.210.77 (172.12.210.77) 56(84) bytes of data.
^C
--- 172.12.210.77 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1317ms
