K8s: configuring the flannel network plugin

Docker network modes:
    bridge: the default container network
    joined: share another container's network namespace
    open: the container shares the host's network namespace directly
    none: no network configured at all

Kubernetes network communication:
    container-to-container: containers in the same Pod (shared network namespace)
    Pod-to-Pod: Pod IP <==> Pod IP
    Pod-to-Service: Pod IP <==> ClusterIP
    Service-to-external clients outside the cluster

K8s itself ships no network implementation; it delegates networking to third-party plugins.
The main network plugins are:
    flannel  (defaults to VXLAN for cross-node communication)
    calico
    canal
    kube-router
    -----
    Underlying solutions:
    virtual bridge
    multiplexing: MacVLAN
    hardware offload: SR-IOV (Single Root I/O Virtualization)

-------------------------------------------------------------------------

Pods on two different hosts communicate with each other via flannel.
VXLAN: Virtual eXtensible LAN
    V - Virtual
    X - eXtensible
    LAN - Local Area Network

flannel
    supports multiple backends:
    VXLAN
        1. vxlan (plain VXLAN encapsulation)
        2. DirectRouting (direct routes between hosts on the same subnet, VXLAN otherwise)
    host-gw: Host Gateway  # not recommended: only works within one layer-2 network and cannot cross routed networks; with tens of thousands of Pods it is prone to broadcast storms
    UDP: userspace encapsulation, poor performance
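
As a hedged illustration (this fragment is not from the cluster above), switching flannel to the host-gw backend only requires changing the Backend type in net-conf.json; the Network value mirrors the one used later in this document:

```
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
```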

View the CNI plugin configuration:

[root@master ~]# cat /etc/cni/net.d/10-flannel.conflist 
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true  # port mapping
      }
    }
  ]
}
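
The conflist above is ordinary JSON; a CNI runtime invokes each entry of "plugins" in order (flannel first, then portmap). As a quick hedged illustration, you can see the invocation order by parsing it:

```python
import json

# The same conflist as /etc/cni/net.d/10-flannel.conflist above
conflist = json.loads("""
{
  "name": "cbr0",
  "plugins": [
    {"type": "flannel",
     "delegate": {"hairpinMode": true, "isDefaultGateway": true}},
    {"type": "portmap",
     "capabilities": {"portMappings": true}}
  ]
}
""")

# Plugins run in list order: flannel sets up the network, portmap adds port mappings
print([p["type"] for p in conflist["plugins"]])  # ['flannel', 'portmap']
```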

Note: when the cluster is installed with kubeadm, the flannel plugin runs in containers (as a DaemonSet).

[root@master ~]# kubectl get daemonset -n kube-system
NAME                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     3         3         3         3            3           beta.kubernetes.io/arch=amd64     12d
kube-flannel-ds-arm       0         0         0         0            0           beta.kubernetes.io/arch=arm       12d
kube-flannel-ds-arm64     0         0         0         0            0           beta.kubernetes.io/arch=arm64     12d
kube-flannel-ds-ppc64le   0         0         0         0            0           beta.kubernetes.io/arch=ppc64le   12d
kube-flannel-ds-s390x     0         0         0         0            0           beta.kubernetes.io/arch=s390x     12d
kube-proxy                3         3         3         3            3           beta.kubernetes.io/
Check whether flannel is also installed on the other two node hosts:
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       IP              NODE      NOMINATED NODE
coredns-78fcdf6894-27npt               1/1       Running   1          12d       10.244.0.5      master    
coredns-78fcdf6894-mbg8n               1/1       Running   1          12d       10.244.0.4      master    
etcd-master                            1/1       Running   1          12d       192.168.68.10   master    
kube-apiserver-master                  1/1       Running   1          12d       192.168.68.10   master    
kube-controller-manager-master         1/1       Running   1          12d       192.168.68.10   master    
kube-flannel-ds-amd64-qdmsx            1/1       Running   0          12d       192.168.68.20   node1     
kube-flannel-ds-amd64-rhb49            1/1       Running   6          12d       192.168.68.30   node2     
kube-flannel-ds-amd64-sd6mr            1/1       Running   1          12d       192.168.68.10   master    
kube-proxy-g9n4d                       1/1       Running   1          12d       192.168.68.10   master    
kube-proxy-wrqt8                       1/1       Running   2          12d       192.168.68.30   node2     
kube-proxy-x7vc2                       1/1       Running   0          12d       192.168.68.20   node1     
kube-scheduler-master                  1/1       Running   1          12d       192.168.68.10   master    
kubernetes-dashboard-767dc7d4d-7rmp8   1/1       Running   0          2d        10.244.1.72     node1     
This is because the master stores flannel's configuration in the kube-flannel-cfg ConfigMap:
[root@master ~]# kubectl get configmap -n kube-system
NAME                                 DATA      AGE
coredns                              1         12d
extension-apiserver-authentication   6         12d
kube-flannel-cfg                     2         12d  # configuration file
kube-proxy                           2         12d
kubeadm-config                       1         12d
kubelet-config-1.11                  1         12d
kubernetes-dashboard-settings        1         2d

View kube-flannel-cfg:

[root@master ~]# kubectl get configmap kube-flannel-cfg -o json -n kube-system
{
    "apiVersion": "v1",
    "data": {
        "cni-conf.json": "{\n  \"name\": \"cbr0\",\n  \"plugins\": [\n    {\n      \"type\": \"flannel\",\n      \"delegate\": {\n        \"hairpinMode\": true,\n        \"isDefaultGateway\": true\n      }\n    },\n    {\n      \"type\": \"portmap\",\n      \"capabilities\": {\n        \"portMappings\": true\n      }\n    }\n  ]\n}\n",
        "net-conf.json": "{\n  \"Network\": \"10.244.0.0/16\",\n  \"Backend\": {\n    \"Type\": \"vxlan\"\n  }\n}\n"
    },
    "kind": "ConfigMap",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"cni-conf.json\":\"{\\n  \\\"name\\\": \\\"cbr0\\\",\\n  \\\"plugins\\\": [\\n    {\\n      \\\"type\\\": \\\"flannel\\\",\\n      \\\"delegate\\\": {\\n        \\\"hairpinMode\\\": true,\\n        \\\"isDefaultGateway\\\": true\\n      }\\n    },\\n    {\\n      \\\"type\\\": \\\"portmap\\\",\\n      \\\"capabilities\\\": {\\n        \\\"portMappings\\\": true\\n      }\\n    }\\n  ]\\n}\\n\",\"net-conf.json\":\"{\\n  \\\"Network\\\": \\\"10.244.0.0/16\\\",\\n  \\\"Backend\\\": {\\n    \\\"Type\\\": \\\"vxlan\\\"\\n  }\\n}\\n\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"flannel\",\"tier\":\"node\"},\"name\":\"kube-flannel-cfg\",\"namespace\":\"kube-system\"}}\n"
        },
        "creationTimestamp": "2018-09-04T15:21:06Z",
        "labels": {
            "app": "flannel",
            "tier": "node"
        },
        "name": "kube-flannel-cfg",
        "namespace": "kube-system",
        "resourceVersion": "1263",
        "selfLink": "/api/v1/namespaces/kube-system/configmaps/kube-flannel-cfg",
        "uid": "249399d6-b056-11e8-a432-000c29f33006"
    }
}


From the output above we can see:
the default backend is vxlan
the default Pod network is 10.244.0.0/16

flannel configuration parameters:
	Network: the CIDR network flannel uses for Pod networking, e.g.
	10.244.0.0/16 ->
		master: 10.244.0.0/24
		node01: 10.244.1.0/24
		node255: 10.244.255.0/24

	SubnetLen: the prefix length used when carving Network into per-node subnets; defaults to 24
	SubnetMin: e.g. 10.244.10.0/24, the lowest subnet that may be allocated
	SubnetMax: e.g. 10.244.100.0/24, the highest subnet that may be allocated
	Backend: vxlan, host-gw, or udp
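
The per-node carving described above can be sketched with Python's standard ipaddress module (a hedged illustration; flannel's real allocator lives in its subnet manager, and the function name below is my own):

```python
import ipaddress

def carve_subnets(network: str, subnet_len: int = 24):
    """Split the flannel Network into per-node subnets of SubnetLen bits."""
    net = ipaddress.ip_network(network)
    return list(net.subnets(new_prefix=subnet_len))

subnets = carve_subnets("10.244.0.0/16", 24)
print(subnets[0])    # 10.244.0.0/24 -> master
print(subnets[1])    # 10.244.1.0/24 -> node01
print(len(subnets))  # 256 possible /24 subnets
```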

##########################
Network test 1
##########################

[root@master manifests]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80


[root@master manifests]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE      NOMINATED NODE
myapp-deploy-67f6f6b4dc-2dqrp   1/1       Running   0          2m        10.244.2.85   node2     
myapp-deploy-67f6f6b4dc-cqttt   1/1       Running   0          2m        10.244.1.73   node1     
myapp-deploy-67f6f6b4dc-qqv7f   1/1       Running   0          2m        10.244.2.84   node2     
pod-sa-demo                     1/1       Running   0          3d        10.244.2.82   node2     

Both node1 and node2 run a myapp Pod.

On the master:
[root@master manifests]# kubectl exec -it myapp-deploy-67f6f6b4dc-2dqrp /bin/sh
In a new master window:
[root@master ~]# kubectl exec -it myapp-deploy-67f6f6b4dc-cqttt /bin/sh

Install the tcpdump packet-capture tool on both node1 and node2:

yum install -y tcpdump

[root@node2 ~]# brctl show cni0
bridge name	bridge id		STP enabled	interfaces
cni0		8000.0a580af40201	no		veth09de1518
							veth91f026fc
							vethb035fae2
The Pods' veth interfaces are attached to the cni0 bridge, so their traffic is forwarded through cni0.

On node1:
[root@node1 ~]# tcpdump -i cni0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:44:52.773662 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 364, length 64
11:44:52.773690 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 364, length 64
11:44:53.774519 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 365, length 64
11:44:53.774562 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 365, length 64
11:44:54.774933 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 366, length 64
11:44:54.774975 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 366, length 64

On node2:
[root@node2 ~]# tcpdump -i cni0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:45:25.798557 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 397, length 64
11:45:25.798958 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 397, length 64
11:45:26.799021 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 398, length 64
11:45:26.799405 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 398, length 64

Traffic enters through cni0 and leaves through flannel.1; by the time it reaches the physical NIC it has already been encapsulated as a VXLAN packet.

Capture on flannel.1:
[root@node1 ~]# tcpdump -i flannel.1 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
11:48:55.927311 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 606, length 64
11:48:55.927404 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 606, length 64
11:48:56.927997 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 607, length 64
11:48:56.928074 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 607, length 64
11:48:57.928449 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 608, length 64
11:48:57.928537 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 608, length 64
11:48:58.928862 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 609, length 64
11:48:58.928918 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 609, length 64



Capture directly on the physical NIC:
tcpdump -i ens33 -nn
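
On the physical NIC those ICMP packets travel inside UDP (flannel uses the Linux kernel's default VXLAN port 8472) with an 8-byte VXLAN header in front of the original frame. A minimal sketch of that header layout, based on the VXLAN format in RFC 7348 rather than anything captured from this cluster:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08000000  # "I" flag: the VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: 32-bit flags word, then VNI shifted left 8 bits."""
    return struct.pack(">II", VXLAN_FLAG_VNI_VALID, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, word = struct.unpack(">II", header)
    return word >> 8

hdr = vxlan_header(1)   # flannel's flannel.1 device defaults to VNI 1
print(len(hdr))         # 8
print(vxlan_vni(hdr))   # 1
```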


Edit the network mode in the kube-flannel-cfg ConfigMap directly:
kubectl edit configmap kube-flannel-cfg -n kube-system

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
        "Directrouting": true  # newly added
      }
    }


[root@master ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.10 metric 100 

As you can see, all cross-node traffic is still sent out via flannel.1.

[root@master ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.68.2    0.0.0.0         UG    100    0        0 ens33
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.68.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33


Check the configuration again:
kubectl get configmap kube-flannel-cfg -o json -n kube-system

The output does contain \"Directrouting\":true,
but it has not taken effect: a comma is missing after "vxlan", so the JSON is invalid.

 net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting":true
      }
    }
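
The failure mode above is easy to reproduce: without the comma, net-conf.json is simply not valid JSON, so flannel cannot parse the new key. A hedged illustration using Python's json module:

```python
import json

# The two variants of net-conf.json from above, flattened to one line each
broken = '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan" "Directrouting": true}}'
fixed  = '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan", "Directrouting": true}}'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print("broken config rejected:", e.msg)

conf = json.loads(fixed)
print(conf["Backend"]["Directrouting"])  # True
```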

After fixing the comma, check on node1: it still has not taken effect.
When direct routing is working, the routes show the physical interface directly instead of flannel.1:
[root@node1 ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100 

Let's try another way to configure the network:

https://github.com/coreos/flannel#flannel

Find the install command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Download the file and edit it:
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true  # newly added
      }
    }


Apply it:
[root@master flannel]# kubectl apply -f kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged


Now check the node:
[root@node1 ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100 
It still has not taken effect.


So we delete and re-create the flannel resources:
# WARNING: never do this in production; while flannel is gone, every Pod loses network connectivity
[root@master flannel]# kubectl delete -f kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.extensions "kube-flannel-ds-amd64" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted



Create it again:
[root@master flannel]# kubectl apply -f kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Checking again, the change has now taken effect: routes to the other nodes' Pod subnets go directly out the local physical NIC.

[root@node1 ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100 
10.244.0.0/24 via 192.168.68.10 dev ens33 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 192.168.68.30 dev ens33 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100 


All Pods are running normally:
[root@master flannel]# kubectl get pods -n kube-system -w
NAME                                   READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-27npt               1/1       Running   1          12d
coredns-78fcdf6894-mbg8n               1/1       Running   1          12d
etcd-master                            1/1       Running   1          12d
kube-apiserver-master                  1/1       Running   1          12d
kube-controller-manager-master         1/1       Running   1          12d
kube-flannel-ds-amd64-5lrjm            1/1       Running   0          2m
kube-flannel-ds-amd64-b8dfz            1/1       Running   0          2m
kube-flannel-ds-amd64-n45sn            1/1       Running   0          2m
kube-proxy-g9n4d                       1/1       Running   1          12d
kube-proxy-wrqt8                       1/1       Running   2          12d
kube-proxy-x7vc2                       1/1       Running   0          12d
kube-scheduler-master                  1/1       Running   1          12d
kubernetes-dashboard-767dc7d4d-7rmp8   1/1       Running   0          2d

##########################
Once more, ping node2's Pod from node1's Pod

[root@master manifests]# kubectl apply -f deploy-demo.yaml 
deployment.apps/myapp-deploy created
[root@master manifests]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE      NOMINATED NODE
myapp-deploy-67f6f6b4dc-6k25w   1/1       Running   0          8s        10.244.1.74   node1     
myapp-deploy-67f6f6b4dc-b28tl   1/1       Running   0          8s        10.244.2.86   node2     
myapp-deploy-67f6f6b4dc-g5n95   1/1       Running   0          8s        10.244.2.87   node2     
pod-sa-demo                     1/1       Running   0          4d        10.244.2.82   node2     

On the master:
[root@master manifests]# kubectl exec -it myapp-deploy-67f6f6b4dc-6k25w /bin/sh
[root@master ~]# kubectl exec -it myapp-deploy-67f6f6b4dc-b28tl /bin/sh

[root@master manifests]# kubectl exec -it myapp-deploy-67f6f6b4dc-6k25w /bin/sh
/ # ping 10.244.2.86   # node1's pod pings node2's pod
PING 10.244.2.86 (10.244.2.86): 56 data bytes
64 bytes from 10.244.2.86: seq=0 ttl=62 time=1.020 ms
64 bytes from 10.244.2.86: seq=1 ttl=62 time=0.225 ms
The ping succeeds.


Start capturing on both nodes:
[root@node1 ~]# tcpdump -i ens33 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
14:18:29.327728 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 58, length 64
14:18:29.327958 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 58, length 64
14:18:30.328669 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 59, length 64
14:18:30.328904 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 59, length 64
14:18:31.328810 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 60, length 64
14:18:31.329032 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 60, length 64
14:18:32.329177 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 61, length 64
14:18:32.329371 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 61, length 64

[root@node2 ~]# tcpdump -i ens33 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
14:19:17.368560 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 106, length 64
14:19:17.368623 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 106, length 64
14:19:18.369045 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 107, length 64
14:19:18.369105 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 107, length 64
14:19:19.369631 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 108, length 64
14:19:19.369689 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 108, length 64
14:19:20.370102 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 109, length 64
14:19:20.370141 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 109, length 64
This time it succeeds: the capture on the physical NICs shows the plain Pod IPs, with no VXLAN encapsulation.
This is effectively bridge-like forwarding, so the performance is excellent.
Everything is direct routing; it works because each node holds a host route for every other node's Pod subnet:

ip route show
10.244.0.0/24 via 192.168.68.10 dev ens33 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 192.168.68.30 dev ens33 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100 
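
Why those direct routes work can be sketched as a longest-prefix-match lookup over the node1 routing table above (a hedged illustration using Python's ipaddress module; the table is hard-coded from the output, and the function is my own):

```python
import ipaddress

# node1's routes after DirectRouting took effect: (prefix, next hop / device)
routes = [
    ("10.244.0.0/24", "via 192.168.68.10 dev ens33"),  # master's Pod subnet
    ("10.244.1.0/24", "dev cni0"),                     # local Pod subnet
    ("10.244.2.0/24", "via 192.168.68.30 dev ens33"),  # node2's Pod subnet
    ("0.0.0.0/0",     "via 192.168.68.2 dev ens33"),   # default route
]

def lookup(dst: str) -> str:
    """Longest-prefix match, the same rule the kernel applies."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Traffic for node2's Pod goes straight out ens33 to node2's physical IP
print(lookup("10.244.2.86"))  # via 192.168.68.30 dev ens33
```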
