Kubernetes Network Components Explained

Table of Contents

1. Kubernetes Network Components

1.1. The Flannel network plugin

1.2. The Calico network plugin

2. Environment Preparation

2.1. Host initialization

2.2. Deploy the Docker environment

3. Deploy the Kubernetes Cluster

3.1. Component overview

3.2. Configure the Alibaba Cloud YUM repository

3.3. Install kubelet, kubeadm, and kubectl

3.4. Configure init-config.yaml

3.5. Install the master node

3.6. Install the node (worker) nodes

3.7. Deploy the Calico network plugin

4. Calico Network Policy Basics

4.1. Create the service

4.2. Enable network isolation

4.3. Test network isolation

4.4. Allow access with a network policy

5. Advanced Calico Network Policy

5.1. Create the service

5.2. Deny all ingress traffic

5.3. Allow traffic to Nginx

5.4. Deny all egress traffic

5.5. Allow DNS egress traffic

5.6. Allow egress traffic to Nginx


1. Kubernetes Network Components

With the rise of Docker and containerization, cloud computing faces ever larger challenges, for example in network management and storage management. A data center typically runs hundreds or thousands of containers that the operations team has to manage centrally. In the world of cloud computing, compute is the foundation, storage is the most important, and the network is the most complex. Kubernetes does not implement the cluster network itself; it relies on third-party network plugins. This article focuses on one important member of the Kubernetes networking ecosystem: Calico.

1.1. The Flannel network plugin

Flannel is a network fabric designed by the CoreOS team for Kubernetes. Its job is to give the Docker containers created on different nodes of the cluster virtual IP addresses that are unique across the whole cluster. With the default Docker configuration, the Docker daemon on each node assigns container IPs on its own, so containers on different nodes may end up with the same private IP address.

Flannel's purpose is to re-plan how IP addresses are allocated across all nodes in the cluster, so that containers on different nodes receive addresses that belong to "the same private network" and do not overlap, and so that containers on different nodes can communicate directly over those private IPs.

Flannel is essentially an overlay network: it wraps TCP packets inside another network packet for routing and forwarding. It currently supports several forwarding backends, including UDP, VxLAN, AWS VPC, and GCE routes; the default inter-node forwarding mode is UDP.
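For reference, Flannel selects its backend in a small JSON document called net-conf.json, which the official manifest normally ships inside a ConfigMap. The sketch below is illustrative only (Flannel is not deployed in this lab, and the Pod CIDR is just an example); it picks the VxLAN backend instead of the default UDP:

    cat <<EOF > net-conf.json
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
    EOF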

(Figure 1: Flannel data forwarding between nodes)

After a packet leaves the source container, it is forwarded by the host's docker0 virtual bridge to the flannel0 virtual interface. flannel0 is a point-to-point virtual device, and the flanneld service listens on its other end.

Flannel maintains a routing table of the cluster's nodes in etcd. The flanneld service on the source host encapsulates the original payload in UDP and, based on its routing table, delivers it to the flanneld service on the destination node. There the packet is decapsulated and handed to the destination node's flannel0 interface, then forwarded on to that host's docker0 bridge, and finally routed by docker0 to the target container, just as in local container-to-container communication.
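If Flannel were deployed, the virtual device and the per-node routes it programs could be inspected with standard tools; a diagnostic sketch (the device name depends on the backend: flannel.1 for VxLAN, flannel0 for UDP):

    ip -d link show flannel.1
    ip route | grep flannel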

1.2. The Calico network plugin

Calico is a pure Layer 3, BGP-based networking solution for container-to-container connectivity. It integrates well with cloud platforms such as OpenStack, Kubernetes, AWS, and GCE. Virtualization platforms such as OpenStack and Docker need connectivity between workloads, but they also need isolation and access control between them, just as a service on the Internet may expose only port 80, or a public cloud keeps its tenants isolated from one another.

Most virtualization platforms implement container networking with Layer 2 isolation techniques, which have drawbacks: they depend on VLANs, bridges, and tunnels. Bridges add complexity, while VLAN isolation and tunnels consume extra resources and place requirements on the physical environment; as the network grows, the whole setup becomes increasingly complex. Calico instead treats each host like a router on the Internet: it uses BGP to synchronize routes and iptables to enforce security policy.

(Figure 2: Calico architecture)

(1) Components of the Calico network model

  1. Felix: the agent process that runs on every host. It is responsible for network-interface management and monitoring, routing, ARP management, ACL management and synchronization, and status reporting.
  2. etcd: a distributed key-value store that keeps the network metadata consistent and guarantees the accuracy of the Calico network state; it can be shared with the Kubernetes cluster's own etcd.
  3. BGP Client (BIRD): Calico deploys one BGP client per host, implemented with BIRD. BIRD is a separate, actively developed project that implements many dynamic routing protocols such as BGP, OSPF, and RIP. Its role in Calico is to watch the routes that Felix injects on the host and advertise them to the remaining hosts over BGP, which is how cross-node connectivity is achieved.
  4. BGP Route Reflector: in large networks, a full mesh of BGP clients does not scale, because every pair of nodes has to peer with each other, which needs on the order of N^2 connections. To address this, BGP Route Reflectors (RR) can be used: every BGP client peers only with designated RR nodes for route synchronization, which greatly reduces the number of connections.
  5. calicoctl: the Calico command-line management tool (see the sketch after this list).
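As a rough illustration of how these components are typically inspected on a running cluster (calicoctl has to be installed separately, and the output is omitted here):

    calicoctl node status          # BGP peering status reported by BIRD on this host
    calicoctl get nodes            # Calico's view of the cluster nodes
    calicoctl get ippool -o wide   # configured IP pools and their encapsulation settings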

(2) The two networking modes between Calico nodes

  1. IPIP

Taken literally, IPIP puts one IP packet inside another IP packet, i.e. it is a tunnel that encapsulates the IP layer inside the IP layer. It acts rather like a bridge built at the IP layer: an ordinary bridge works at the MAC layer and does not need IP at all, whereas IPIP uses the routers at both ends to build a tunnel that connects two otherwise unreachable networks point to point.
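In Calico, this choice is expressed on the IPPool resource. A minimal sketch, applied with calicoctl (the CIDR is an example and should match the cluster's podSubnet; ipipMode may also be CrossSubnet or Never):

    calicoctl apply -f - <<EOF
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: default-ipv4-ippool
    spec:
      cidr: 10.244.0.0/16
      ipipMode: Always
      natOutgoing: true
    EOF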

  2. BGP

The Border Gateway Protocol (BGP) is the core decentralized routing protocol of the Internet. It achieves reachability between autonomous systems (AS) by maintaining IP routing ("prefix") tables, and it is a path-vector protocol. BGP does not use the metrics of traditional interior gateway protocols (IGP); instead, it makes routing decisions based on paths, network policies, and rule sets, which is why it is better described as a path-vector protocol than as a conventional routing protocol. In everyday terms, BGP is what lets a data center merge multiple carrier uplinks (China Telecom, China Unicom, China Mobile, and so on) behind a single IP. The advantage of a BGP-capable data center is that a server needs only one IP address, and the best path to it is chosen by the backbone routers based on hop counts and other metrics, without consuming any resources on the server itself.
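Within Calico, node-to-node BGP behaviour is controlled by the BGPConfiguration resource. The sketch below follows the usual first step when introducing Route Reflectors: disable the default full mesh (the AS number is an example):

    calicoctl apply -f - <<EOF
    apiVersion: projectcalico.org/v3
    kind: BGPConfiguration
    metadata:
      name: default
    spec:
      logSeverityScreen: Info
      nodeToNodeMeshEnabled: false
      asNumber: 63400
    EOF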

2. Environment Preparation

Operating System   IP Address       Hostname     Components
CentOS 7.5         192.168.50.53    k8s-master   kubeadm, kubelet, kubectl, docker-ce
CentOS 7.5         192.168.50.51    k8s-node01   kubeadm, kubelet, kubectl, docker-ce
CentOS 7.5         192.168.50.50    k8s-node02   kubeadm, kubelet, kubectl, docker-ce

Note: every host should have at least 2 CPU cores and 2 GB of memory.

2.1. Host initialization

Disable the firewall and SELinux on all hosts.

    [root@localhost ~]# setenforce 0

    [root@localhost ~]# iptables -F

    [root@localhost ~]# systemctl stop firewalld

    [root@localhost ~]# systemctl disable firewalld

    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.

    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

    [root@localhost ~]# systemctl stop NetworkManager

    [root@localhost ~]# systemctl disable NetworkManager

    Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.

    Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.

    Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.

    [root@localhost ~]#  sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config

Set the hostname on each host (every host gets its own name) and add the hosts entries.

    [root@localhost ~]# hostname k8s-master

    [root@localhost ~]# bash

    [root@k8s-master ~]# cat << EOF >> /etc/hosts

    > 192.168.50.53 k8s-master

    > 192.168.50.51 k8s-node01

    > 192.168.50.50 k8s-node02

    > EOF

Apply the base host settings: install common tools, disable swap, and set the required kernel parameters.

    [root@k8s-master ~]# yum -y install vim wget net-tools lrzsz

    [root@k8s-master ~]#  swapoff -a

    [root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab

    [root@k8s-master ~]#  cat << EOF >> /etc/sysctl.conf

    > net.bridge.bridge-nf-call-ip6tables = 1

    > net.bridge.bridge-nf-call-iptables = 1

    > EOF

    [root@k8s-master ~]# modprobe br_netfilter

    [root@k8s-master ~]#  sysctl -p

    net.bridge.bridge-nf-call-ip6tables = 1

    net.bridge.bridge-nf-call-iptables = 1
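Note that the br_netfilter module loaded above does not persist across reboots by itself; an optional way to load it automatically at boot is a modules-load.d entry, for example:

    [root@k8s-master ~]# cat <<EOF > /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF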

2.2. Deploy the Docker environment

Deploy the Docker environment on all three hosts, because Kubernetes relies on Docker to run the containers it orchestrates.

    [root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    [root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

When installing Docker with YUM, the Alibaba Cloud YUM repository is recommended.

     [root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    [root@k8s-master ~]#  yum clean all && yum makecache fast

    [root@k8s-master ~]#  yum -y install docker-ce

    [root@k8s-master ~]# systemctl start docker

    [root@k8s-master ~]#  systemctl enable docker

    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Configure a registry mirror (on all hosts).

    [root@k8s-master ~]# cat << END > /etc/docker/daemon.json

    > {

    >         "registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]

    > }

    > END

    [root@k8s-master ~]#  systemctl daemon-reload

    [root@k8s-master ~]#  systemctl restart docker

3. Deploy the Kubernetes Cluster

3.1. Component overview

All three nodes need the following three components installed:

  1. kubeadm: the cluster bootstrap tool; the components it deploys run as containers
  2. kubectl: the command-line client used to talk to the Kubernetes API
  3. kubelet: the node agent that runs on every node and starts and manages the containers (Pods)

3.2. Configure the Alibaba Cloud YUM repository

When installing Kubernetes with YUM, the Alibaba Cloud YUM repository is recommended.

With the base environment and Docker prepared, the Kubernetes cluster can now be deployed with kubeadm. First, install kubelet, kubeadm, and kubectl.

    [root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo

    > [kubernetes]

    > name=Kubernetes

    > baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

    > enabled=1

    > gpgcheck=1

    > repo_gpgcheck=1

    > gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

    >        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

    > EOF

3.3. Install kubelet, kubeadm, and kubectl

On all hosts:

    [root@k8s-master ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

    [root@k8s-master ~]# systemctl enable kubelet

    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Right after installation, kubelet cannot be started with systemctl start kubelet; it only starts successfully once the node has been joined to a cluster or initialized as a master.
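If kubelet is started too early, the failure can be inspected in the unit status and the journal; a diagnostic sketch (output omitted):

    [root@k8s-master ~]# systemctl status kubelet
    [root@k8s-master ~]# journalctl -xeu kubelet --no-pager | tail -n 20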

3.4. Configure init-config.yaml

Kubeadm offers many configuration options. In a Kubernetes cluster the kubeadm configuration is stored in a ConfigMap, and it can also be written to a configuration file, which makes complex settings easier to manage. The configuration content is written to a file with the kubeadm config command.

Do this on the master node (192.168.50.53). Generate the default init-config.yaml file with the following command:

[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml

[root@k8s-master ~]# vim init-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.53                            # master node IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # changed to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                                   # added: Pod network CIDR
scheduler: {}

3.5. Install the master node

Pull the required images.

 

[root@k8s-master ~]#  kubeadm config images list --config init-config.yaml

registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0

registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0

registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0

registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0

registry.aliyuncs.com/google_containers/pause:3.2

registry.aliyuncs.com/google_containers/etcd:3.4.13-0

registry.aliyuncs.com/google_containers/coredns:1.7.0

A quick attempt to docker load locally saved image archives fails here ("invalid tar header" means the files being piped in are not image tarballs), so the images are pulled from the registry instead.

[root@k8s-master ~]#  ls | while read line

> do

> docker load < $line

> done

archive/tar: invalid tar header

archive/tar: invalid tar header

[root@k8s-master ~]# kubeadm config images pull --config=init-config.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2

[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0

[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0

Install the master node.

[root@k8s-master ~]# echo "1" > /proc/sys/net/ipv4/ip_forward

[root@k8s-master ~]# kubeadm init --config=init-config.yaml

 

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.53:6443 --token abcdef.0123456789abcdef \

    --discovery-token-ca-cert-hash sha256:1bf321292f0114664ef83862f09f15f38db5a1394d44b059bd21a68ffb0cfef5

Follow the instructions in the output.

By default, kubectl looks for its config file in the .kube directory under the home directory of the user running it. Here the admin.conf generated during the [kubeconfig] step of initialization is copied to .kube/config.

[root@k8s-master ~]#  mkdir -p $HOME/.kube

[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
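With the kubeconfig in place, access to the cluster can be sanity-checked, for example:

[root@k8s-master ~]# kubectl cluster-info

[root@k8s-master ~]# kubectl get cs

(kubectl get cs shows the componentstatuses; it is deprecated but still available in v1.20.)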

3.6. Install the node (worker) nodes

On each node, run the join command printed when the master was initialized.

[root@k8s-node1 ~]# kubeadm join 192.168.50.53:6443 --token abcdef.0123456789abcdef \

>     --discovery-token-ca-cert-hash sha256:1bf321292f0114664ef83862f09f15f38db5a1394d44b059bd21a68ffb0cfef5

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03

[WARNING Hostname]: hostname "k8s-node1" could not be reached

[WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 180.76.76.76:53: no such host

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 ~]# kubeadm join 192.168.50.53:6443 --token abcdef.0123456789abcdef \

>     --discovery-token-ca-cert-hash sha256:1bf321292f0114664ef83862f09f15f38db5a1394d44b059bd21a68ffb0cfef5

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the nodes.

[root@k8s-master ~]# kubectl get nodes

NAME         STATUS     ROLES                  AGE     VERSION

k8s-master   NotReady   control-plane,master   3m41s   v1.20.0

k8s-node1    NotReady   <none>                 12s     v1.20.0

k8s-node2    NotReady   <none>                 2m47s   v1.20.0

As mentioned earlier, no network-related configuration was applied when k8s-master was initialized, so the nodes cannot communicate over a Pod network yet and all of them report "NotReady". The nodes added with kubeadm join are nevertheless already visible on k8s-master.
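The missing network plugin can also be confirmed from the node conditions and the kube-system Pods, for example (output omitted):

[root@k8s-master ~]# kubectl describe node k8s-node1 | grep -i -A 3 network

[root@k8s-master ~]# kubectl get pod -n kube-system -o wide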

3.7. Deploy the Calico network plugin

Install the Calico network plugin.
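The calico.yaml manifest applied below is assumed to have been downloaded to the working directory beforehand. Around the Kubernetes v1.20 era it could be fetched roughly as follows (the URL and the matching Calico version may differ in your environment):

[root@k8s-master ~]# curl -O https://docs.projectcalico.org/manifests/calico.yaml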

[root@k8s-master ~]# kubectl apply -f calico.yaml

configmap/calico-config unchanged

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers configured

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged

clusterrole.rbac.authorization.k8s.io/calico-node configured

clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged

daemonset.apps/calico-node configured

serviceaccount/calico-node unchanged

deployment.apps/calico-kube-controllers configured

serviceaccount/calico-kube-controllers unchanged

poddisruptionbudget.policy/calico-kube-controllers created

Check the node status.

[root@k8s-master ~]# kubectl get nodes

NAME         STATUS     ROLES                  AGE     VERSION

k8s-master   Ready      control-plane,master   10m     v1.20.0

k8s-node1    NotReady   <none>                 7m18s   v1.20.0

k8s-node2    Ready      <none>                 9m53s   v1.20.0

Check the status of all Pods.

[root@k8s-master ~]#  kubectl get pod --all-namespaces

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE

kube-system   calico-kube-controllers-744cfdf676-t24r8   1/1     Running   0          26m

kube-system   calico-node-bqmc6                          1/1     Running   0          26m

kube-system   calico-node-ljfxt                          1/1     Running   0          26m

kube-system   calico-node-ztkvc                          1/1     Running   0          26m

kube-system   coredns-7f89b7bc75-bvfsq                   1/1     Running   0          31m

kube-system   coredns-7f89b7bc75-x8b2g                   1/1     Running   0          31m

kube-system   etcd-k8s-master                            1/1     Running   0          31m

kube-system   kube-apiserver-k8s-master                  1/1     Running   0          31m

kube-system   kube-controller-manager-k8s-master         1/1     Running   0          31m

kube-system   kube-proxy-bhqhn                           1/1     Running   0          31m

kube-system   kube-proxy-rs47s                           1/1     Running   0          28m

kube-system   kube-proxy-xgwrb                           1/1     Running   0          31m

kube-system   kube-scheduler-k8s-master                  1/1     Running   0          31m

[root@k8s-master ~]# kubectl get nodes

NAME         STATUS   ROLES                  AGE   VERSION

k8s-master   Ready    control-plane,master   31m   v1.20.0

k8s-node1    Ready    <none>                 28m   v1.20.0

k8s-node2    Ready    <none>                 31m   v1.20.0

4. Calico Network Policy Basics

4.1. Create the service

Create a namespace.

[root@k8s-master ~]# kubectl create ns policy-demo

namespace/policy-demo created

Create an Nginx Deployment with two replicas in the policy-demo namespace.

[root@k8s-master ~]# vim nginx-deployment.yaml

[root@k8s-master ~]# cat nginx-deployment.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx

  namespace: policy-demo

  labels:

    app: nginx

spec:

  replicas: 2

  selector:

    matchLabels:

      app: nginx

  template:

    metadata:

      labels:

        app: nginx

    spec:

      containers:

      - name: nginx

        image: nginx

        ports:

        - containerPort: 80

[root@k8s-master ~]# kubectl apply -f nginx-deployment.yaml

deployment.apps/nginx created

[root@k8s-master ~]# kubectl get pod -n policy-demo

NAME                     READY   STATUS              RESTARTS   AGE

nginx-7848d4b86f-2g5lv   0/1     ContainerCreating   0          22s

nginx-7848d4b86f-xpghc   0/1     ContainerCreating   0          22s

[root@k8s-master ~]# kubectl get pod -n policy-demo

NAME                     READY   STATUS    RESTARTS   AGE

nginx-7848d4b86f-2g5lv   1/1     Running   0          97s

nginx-7848d4b86f-xpghc   1/1     Running   0          97s

Expose port 80 of Nginx through a Service.

[root@k8s-master ~]# kubectl expose --namespace=policy-demo deployment nginx --port=80

service/nginx exposed

[root@k8s-master ~]# kubectl get all -n policy-demo

NAME                         READY   STATUS    RESTARTS   AGE

pod/nginx-7848d4b86f-2g5lv   1/1     Running   0          7m26s

pod/nginx-7848d4b86f-xpghc   1/1     Running   0          7m26s

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE

service/nginx   ClusterIP   10.98.167.122   <none>        80/TCP    27s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/nginx   2/2     2            2           7m26s

NAME                               DESIRED   CURRENT   READY   AGE

replicaset.apps/nginx-7848d4b86f   2         2         2       7m26s

Access the Nginx service from a busybox Pod.

[root@k8s-master ~]# kubectl run --namespace=policy-demo access --rm -ti --image busybox

If you don't see a command prompt, try pressing enter.

/ # wget -q nginx -O -

Welcome to nginx!

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.

For online documentation and support please refer to

nginx.org.

Commercial support is available at

nginx.com.

Thank you for using nginx.

/ # exit

Session ended, resume using 'kubectl attach access -c access -i -t' command when the pod is running

pod "access" deleted

4.2. Enable network isolation

Enable isolation in the policy-demo namespace; Calico will then block connections to the Pods in that namespace. The following command creates a NetworkPolicy that applies a default deny to all Pods in the policy-demo namespace.

[root@k8s-master ~]#  kubectl create -f - <<EOF

> kind: NetworkPolicy

> apiVersion: networking.k8s.io/v1

> metadata:

>   name: default-deny

>   namespace: policy-demo

> spec:

>   podSelector:

>     matchLabels: {}

> EOF

networkpolicy.networking.k8s.io/default-deny created

4.3. Test network isolation

With isolation enabled, all access to the Nginx service is blocked. Run the following commands to try to access the Nginx service again and observe the effect of the isolation.

[root@k8s-master ~]#  kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh

If you don't see a command prompt, try pressing enter.

/ # wget -q --timeout=5 nginx -O -

wget: download timed out

/ # exit

Session ended, resume using 'kubectl attach access -c access -i -t' command when the pod is running

pod "access" deleted

4.4. Allow access with a network policy

Use a NetworkPolicy to enable access to the Nginx service: allow incoming connections from the access Pod, but from nowhere else. The access-nginx network policy looks like this.

[root@k8s-master ~]# kubectl create -f - <<EOF

> kind: NetworkPolicy

> apiVersion: networking.k8s.io/v1

> metadata:

>   name: access-nginx

>   namespace: policy-demo

> spec:

>   podSelector:

>     matchLabels:

>       app: nginx

>   ingress:

>     - from:

>       - podSelector:

>           matchLabels:

>             run: access

> EOF

networkpolicy.networking.k8s.io/access-nginx created

A first attempt that starts the Pod with /bin/bash fails: the busybox image does not include bash, so the container never starts and kubectl eventually times out.

[root@k8s-master ~]# kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/bash

pod "access" deleted

error: timed out waiting for the condition

Access the service from the access Pod, this time with /bin/sh.

[root@k8s-master ~]#  kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh

If you don't see a command prompt, try pressing enter.

/ # wget -q --timeout=5 nginx -O -

Welcome to nginx!

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.

For online documentation and support please refer to

nginx.org.

Commercial support is available at

nginx.com.

Thank you for using nginx.

/ # exit

Session ended, resume using 'kubectl attach access -c access -i -t' command when the pod is running

pod "access" deleted

A Pod that does not carry the access label still cannot reach the service.

[root@k8s-master ~]# kubectl run --namespace=policy-demo cant-access --rm -ti --image busybox /bin/sh

If you don't see a command prompt, try pressing enter.

/ # wget -q --timeout=5 nginx -O -

wget: download timed out

/ # wget -q --timeout=5 nginx -O -

wget: download timed out

/ # exit

Session ended, resume using 'kubectl attach cant-access -c cant-access -i -t' command when the pod is running

pod "cant-access" deleted
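The podSelector in access-nginx matches the run: access label that kubectl run adds automatically; the cant-access Pod gets run: cant-access instead, so the policy does not select it. This can be checked while a test Pod is running, for example:

[root@k8s-master ~]# kubectl get pod -n policy-demo --show-labels

[root@k8s-master ~]# kubectl describe networkpolicy access-nginx -n policy-demo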

5. Advanced Calico Network Policy

5.1. Create the service

Delete the policy-demo namespace and create a new namespace, advanced-policy-demo.

[root@k8s-master ~]# kubectl delete ns policy-demo

namespace "policy-demo" deleted

[root@k8s-master ~]# kubectl create ns advanced-policy-demo

namespace/advanced-policy-demo created

Create the Nginx service from a YAML file.

[root@k8s-master ~]# vim nginx-deployment.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx

  namespace: advanced-policy-demo

  labels:

    app: nginx

spec:

  replicas: 2

  selector:

    matchLabels:

      app: nginx

  template:

    metadata:

      labels:

        app: nginx

    spec:

      containers:

      - name: nginx

        image: nginx

        ports:

        - containerPort: 80

[root@k8s-master ~]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created

[root@k8s-master ~]# kubectl expose --namespace=advanced-policy-demo deployment nginx --port=80
service/nginx exposed

Verify access to the service, and test outbound connectivity by fetching www.baidu.com.
[root@k8s-master ~]# kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ #  wget -q --timeout=5 nginx -O -



Welcome to nginx!



Welcome to nginx!


If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
nginx.org.

Commercial support is available at
nginx.com.

Thank you for using nginx.



/ # wget -q --timeout=5 www.baidu.com -O -

(the Baidu homepage HTML is returned, confirming outbound connectivity)

5.2. Deny all ingress traffic

Create a network policy that denies all ingress traffic, so that the Nginx service rejects incoming connections; then verify access again.

[root@k8s-master ~]# kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: advanced-policy-demo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
EOF
networkpolicy.networking.k8s.io/default-deny-ingress created

[root@k8s-master ~]# kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget -q --timeout=5 nginx -O -
wget: download timed out

/ # wget -q --timeout=5 www.baidu.com -O -

(the Baidu homepage HTML is still returned; outbound traffic is not affected by the ingress policy)


5.3. Allow traffic to Nginx

Run the following command to create a NetworkPolicy that allows traffic from any Pod in the advanced-policy-demo namespace to the Nginx Pods. Once the policy is created, the Nginx service can be reached again.

[root@k8s-master ~]# kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: advanced-policy-demo
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
    - from:
      - podSelector:
          matchLabels: {}
EOF
networkpolicy.networking.k8s.io/access-nginx created

[root@k8s-master ~]# kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget -q --timeout=5 nginx -O -



Welcome to nginx!



Welcome to nginx!


If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
nginx.org.

Commercial support is available at
nginx.com.

Thank you for using nginx.



5.4. Deny all egress traffic

Create a network policy that denies all egress traffic. Once it is in place, any inbound or outbound traffic that is not explicitly allowed by a policy is denied.

[root@k8s-master ~]# kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: advanced-policy-demo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
EOF
networkpolicy.networking.k8s.io/default-deny-egress created

[root@k8s-master ~]# kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
;; connection timed out; no servers could be reached

/ # wget -q --timeout=5 www.baidu.com -O -
wget: bad address 'www.baidu.com'


5.5. Allow DNS egress traffic

Run the following commands to add a label to the kube-system namespace and create a NetworkPolicy that allows DNS egress traffic from any Pod in the advanced-policy-demo namespace to the kube-system namespace.

[root@k8s-master ~]# kubectl label namespace kube-system name=kube-system
namespace/kube-system labeled
[root@k8s-master ~]# kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: advanced-policy-demo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
EOF
networkpolicy.networking.k8s.io/allow-dns-access created

[root@k8s-master ~]# kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server:        10.96.0.10
Address:    10.96.0.10:53

** server can't find nginx.advanced-policy-demo.svc.cluster.local: NXDOMAIN

*** Can't find nginx.svc.cluster.local: No answer
*** Can't find nginx.cluster.local: No answer
*** Can't find nginx.localdomain: No answer
*** Can't find nginx.advanced-policy-demo.svc.cluster.local: No answer
*** Can't find nginx.svc.cluster.local: No answer
*** Can't find nginx.cluster.local: No answer
*** Can't find nginx.localdomain: No answer

/ # nslookup www.baidu.com
Server:        10.96.0.10
Address:    10.96.0.10:53

Non-authoritative answer:
www.baidu.com    canonical name = www.a.shifen.com
Name:    www.a.shifen.com
Address: 39.156.66.18
Name:    www.a.shifen.com
Address: 39.156.66.14

*** Can't find www.baidu.com: No answer

Even though DNS egress traffic is now allowed, all other egress traffic from the Pods in the advanced-policy-demo namespace is still blocked, so the HTTP request made by wget still fails.

/ # wget -q --timeout=5 nginx -O -
wget: download timed out

5.6. Allow egress traffic to Nginx

Run the following command to create a NetworkPolicy that allows egress traffic from any Pod in the advanced-policy-demo namespace to Pods labelled app: nginx in the same namespace.

[root@k8s-master ~]# kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-advance-policy-ns
  namespace: advanced-policy-demo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: nginx
EOF
networkpolicy.networking.k8s.io/allow-egress-to-advance-policy-ns created

[root@k8s-master ~]# kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget -q --timeout=5 nginx -O -



Welcome to nginx!



Welcome to nginx!


If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
nginx.org.

Commercial support is available at
nginx.com.

Thank you for using nginx.




/ # wget -q --timeout=5 www.baidu.com -O -
wget: download timed out
The request to www.baidu.com times out because egress from this namespace is now allowed only to DNS and to Pods labelled app: nginx in the advanced-policy-demo namespace; every other outbound destination is denied.
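The combined effect of the policies created in this section can be reviewed with, for example:

[root@k8s-master ~]# kubectl get networkpolicy -n advanced-policy-demo

[root@k8s-master ~]# kubectl describe networkpolicy allow-egress-to-advance-policy-ns -n advanced-policy-demo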
