Tungsten Fabric Primer | All About Installation (Part 1)

This article is part of the Tungsten Fabric Primer series: hands-on experience generously shared by a technical expert, compiled and translated by the TF Chinese community to help newcomers understand the full workflow of running, installing, integrating, and debugging TF. If you have relevant experience or questions, you are welcome to get in touch and discuss further with the community.
Author: Tatsuya Naganawa / Translator: TF Compilation Team

The "Up and Running" section described a setup with one controller and one vRouter, which covers neither HA nor overlay traffic. Let me describe a more realistic case: three controllers and two compute nodes per orchestrator (possibly with multiple NICs).

  • In this chapter I will use the opencontrailnightly:latest repo, since several features are not yet available in 5.0.1; note, however, that this repo can be a bit unstable in some cases.

HA Behavior of Tungsten Fabric Components

If you plan a setup that carries critical traffic, HA is always required.

Tungsten Fabric has a solid HA implementation, which is described in the following document:

  • http://www.opencontrail.org/opencontrail-architecture-documentation/#section2_7

One thing I want to add here: the cassandra keyspaces use different replication factors for configdb and analyticsdb.

  • configdb:https://github.com/Juniper/contrail-controller/blob/master/src/config/common/vnc_cassandra.py#L609
  • analytics:https://github.com/Juniper/contrail-analytics/blob/master/contrail-collector/db_handler.cc#L524

Since configdb data is replicated to all of the cassandra nodes, data is unlikely to be lost even if some node's disk crashes and has to be wiped. On the other hand, since the analyticsdb replication factor is always set to 2, data may be lost if two nodes lose their data at the same time.
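To make the operational consequence concrete, the usual cassandra rule is that up to RF−1 replicas of a row can be lost simultaneously without losing data. A minimal sketch (the three-node cluster size is an illustrative assumption):

```shell
# Tolerable simultaneous data losses per keyspace is RF - 1.
config_nodes=3              # illustrative config-database cluster size
configdb_rf=$config_nodes   # configdb is replicated to every config-database node
analyticsdb_rf=2            # analyticsdb RF is fixed at 2 regardless of cluster size
echo "configdb tolerates $((configdb_rf - 1)) simultaneous disk losses"
echo "analyticsdb tolerates $((analyticsdb_rf - 1)) simultaneous disk loss"
```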

Multi-NIC Installation

When installing Tungsten Fabric, a multi-NIC installation is required in many cases, for example with separate NICs for the management plane and the control/data plane.

  • Bonding is not covered here, since bond0 can be specified directly by the VROUTER_GATEWAY parameter.

Let me clarify an interesting vRouter behavior in this setup.

For controller/analytics nodes there is not much difference from a typical Linux installation, since Linux works well with multiple NICs and its own routing table (including the use of static routes).

On a vRouter node, on the other hand, you need to be aware that vRouter does not use the Linux routing table when sending packets; it always sends them to its gateway IP.

  • This can be set with the gateway parameter in contrail-vrouter-agent.conf, or with the VROUTER_GATEWAY environment variable of the vrouter-agent container.

So when setting up a multi-NIC installation, you need to be careful about whether VROUTER_GATEWAY has to be specified.

If it is not specified, and the Internet route (0.0.0.0/0) is covered by the management NIC rather than the data-plane NIC, the vrouter-agent container will pick the NIC that holds the node's default route, even though that is not the correct NIC.

In that case, you need to specify the VROUTER_GATEWAY parameter explicitly.
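As a minimal sketch of setting it explicitly, assuming a deployment where the vrouter-agent container reads an env file such as /etc/contrail/common_vrouter.env (the path, interface name, and IP below are placeholders for your data-plane NIC and its gateway):

```shell
# Pin the data-plane gateway so vrouter-agent does not fall back to the NIC
# holding the node's default route (values are placeholders).
cat >> /etc/contrail/common_vrouter.env <<'ENVEOF'
VROUTER_GATEWAY=10.1.1.254
PHYSICAL_INTERFACE=eth1
ENVEOF
```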

Because of this behavior, you also need to be careful when sending packets from a VM or container out of a NIC other than the one used by vRouter: again, vRouter does not check the Linux routing table, and it always uses the same NIC it uses to communicate with the other vRouters.

  • As far as I know, packets from link-local services, or with no gateway configured, show similar behavior.

In that case, you may need to use simple-gateway or SR-IOV.

  • https://github.com/Juniper/contrail-controller/wiki/Simple-Gateway

Sizing the Cluster

For general sizing of a Tungsten Fabric cluster, you can use the following table:

  • https://github.com/hartmutschroeder/contrailandrhosp10#21sizing-the-controller-nodes-and-vms

If the cluster is large, substantial resources are needed to keep the control plane stable.

Note that since release 5.1 the analytics database (and some analytics components) is optional, so if you only need the control plane of Tungsten Fabric, I recommend using version 5.1.

  • https://github.com/Juniper/contrail-analytics/blob/master/specs/analytics_optional_components.md

Cluster sizing matters, but there is no one convenient answer, since it depends on many factors.

  • I once deployed a kubernetes cluster with nearly 5,000 nodes (https://kubernetes.io/docs/setup/cluster-large/). It worked well with one controller node with 64 vCPUs and 58 GB of memory, although at the time I did not create many ports, policies, logical routers, and so on.
  • This wiki also describes some real-world experience with very large clusters: https://wiki.tungsten.io/display/TUN/KubeCon+NA+in+Seattle+2018

Since plenty of resources can be obtained from a cloud at any time, the best option is to simulate a cluster with the size and traffic you actually expect, and see whether it works well and where the bottlenecks are.

Tungsten Fabric has some good features for very large scale, such as multi-cluster setups based on MP-BGP between clusters, and BUM drop for L3-only virtual networks; these are probably the key to scalable and stable virtual networking.

  • https://bugs.launchpad.net/juniperopenstack/+bug/1471637

To illustrate the horizontal-scaling behavior of the control nodes, I created a cluster in AWS with 980 vRouters and 15 control nodes.

  • All control nodes have 4 vCPUs and 16 GB of memory.
The following instances.yaml was used to provision the controller nodes,
and non-nested.yaml for kubeadm (https://github.com/Juniper/contrail-container-builder/blob/master/kubernetes/manifests/contrail-non-nested-kubernetes.yaml) was used to provision the vRouters.

(venv) [root@ip-172-31-21-119 ~]# cat contrail-ansible-deployer/config/instances.yaml
provider_config:
  bms:
   ssh_user: centos
   ssh_public_key: /root/.ssh/id_rsa.pub
   ssh_private_key: /tmp/aaa.pem
   domainsuffix: local
   ntpserver: 0.centos.pool.ntp.org
instances:
  bms1:
   provider: bms
   roles:
      config_database:
      config:
      control:
      analytics:
      webui:
   ip: 172.31.21.119
  bms2:
   provider: bms
   roles:
     control:
     analytics:
   ip: 172.31.21.78
  bms3:
   provider: bms

 ...

  bms13:
   provider: bms
   roles:
     control:
     analytics:
   ip: 172.31.14.189
  bms14:
   provider: bms
   roles:
     control:
     analytics:
   ip: 172.31.2.159
  bms15:
   provider: bms
   roles:
     control:
     analytics:
   ip: 172.31.7.239
contrail_configuration:
  CONTRAIL_CONTAINER_TAG: r5.1
  KUBERNETES_CLUSTER_PROJECT: {}
  JVM_EXTRA_OPTS: "-Xms128m -Xmx1g"
global_configuration:
  CONTAINER_REGISTRY: tungstenfabric
(venv) [root@ip-172-31-21-119 ~]#

[root@ip-172-31-4-80 ~]# kubectl get node | head
NAME                                               STATUS     ROLES    AGE     VERSION
ip-172-31-0-112.ap-northeast-1.compute.internal    Ready      <none>   9m24s   v1.15.0
ip-172-31-0-116.ap-northeast-1.compute.internal    Ready      <none>   9m37s   v1.15.0
ip-172-31-0-133.ap-northeast-1.compute.internal    Ready      <none>   9m37s   v1.15.0
ip-172-31-0-137.ap-northeast-1.compute.internal    Ready      <none>   9m24s   v1.15.0
ip-172-31-0-141.ap-northeast-1.compute.internal    Ready      <none>   9m24s   v1.15.0
ip-172-31-0-142.ap-northeast-1.compute.internal    Ready      <none>   9m24s   v1.15.0
ip-172-31-0-151.ap-northeast-1.compute.internal    Ready      <none>   9m37s   v1.15.0
ip-172-31-0-163.ap-northeast-1.compute.internal    Ready      <none>   9m37s   v1.15.0
ip-172-31-0-168.ap-northeast-1.compute.internal    Ready      <none>   9m16s   v1.15.0
[root@ip-172-31-4-80 ~]# 
[root@ip-172-31-4-80 ~]# kubectl get node | grep -w Ready | wc -l
980
[root@ip-172-31-4-80 ~]# 


(venv) [root@ip-172-31-21-119 ~]# contrail-api-cli --host 172.31.21.119 ls virtual-router | wc -l
980
(venv) [root@ip-172-31-21-119 ~]#

With 15 control nodes, each had at most only 113 XMPP connections, so CPU usage was not very high (it peaked at only 5.4%).

[root@ip-172-31-21-119 ~]# ./contrail-introspect-cli/ist.py ctr nei | grep -w XMPP | wc -l
113
[root@ip-172-31-21-119 ~]#

top - 05:52:14 up 42 min,  1 user,  load average: 1.73, 5.50, 3.57
Tasks: 154 total,   1 running, 153 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.4 us,  2.9 sy,  0.0 ni, 94.6 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem : 15233672 total,  8965420 free,  2264516 used,  4003736 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 12407304 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                
32368 root      20   0  839848  55240  11008 S   7.6  0.4   0:21.40 contrail-collec                                                                                        
28773 root      20   0 1311252 132552  14540 S   5.3  0.9   1:40.72 contrail-contro                                                                                        
17129 polkitd   20   0   56076  22496   1624 S   3.7  0.1   0:11.42 redis-server                                                                                           
32438 root      20   0  248496  40336   5328 S   2.0  0.3   0:15.80 python                                                                                                 
18346 polkitd   20   0 2991576 534452  22992 S   1.7  3.5   4:56.90 java                                                                                                   
15344 root      20   0  972324  97248  35360 S   1.3  0.6   2:25.84 dockerd                                                                                                
15351 root      20   0 1477100  32988  12532 S   0.7  0.2   0:08.72 docker-containe                                                                                        
18365 centos    20   0 5353996 131388   9288 S   0.7  0.9   0:09.49 java                                                                                                   
19994 polkitd   20   0 3892836 127772   3644 S   0.7  0.8   1:34.55 beam.smp                                                                                               
17112 root      20   0    7640   3288   2456 S   0.3  0.0   0:00.24 docker-containe                                                                                        
24723 root      20   0  716512  68920   6288 S   0.3  0.5   0:01.75 node   

However, when 12 of those control nodes were stopped, the number of XMPP connections on each remaining control node rose to as many as 708, and CPU usage became much higher (21.6%).

So if you need to deploy a large number of nodes, you may need to plan the number of control nodes carefully.

[root@ip-172-31-21-119 ~]# ./contrail-introspect-cli/ist.py ctr nei | grep -w BGP
| ip-172-31-13-119.local               | 172.31.13.119 | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:10:47.527354 |
| ip-172-31-13-87.local                | 172.31.13.87  | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:10:08.610734 |
| ip-172-31-14-189.local               | 172.31.14.189 | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:16:34.953311 |
| ip-172-31-14-243.local               | 172.31.14.243 | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:06:12.379006 |
| ip-172-31-17-212.local               | 172.31.17.212 | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:03:15.650529 |
| ip-172-31-2-159.local                | 172.31.2.159  | 64512    | BGP      | internal  | Established | in sync         | 0          | n/a                         |
| ip-172-31-21-78.local                | 172.31.21.78  | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 05:58:15.068791 |
| ip-172-31-22-95.local                | 172.31.22.95  | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 05:59:43.238465 |
| ip-172-31-23-207.local               | 172.31.23.207 | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:02:24.922901 |
| ip-172-31-25-214.local               | 172.31.25.214 | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:04:52.624323 |
| ip-172-31-30-137.local               | 172.31.30.137 | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:05:33.020029 |
| ip-172-31-4-76.local                 | 172.31.4.76   | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:12:04.853319 |
| ip-172-31-7-239.local                | 172.31.7.239  | 64512    | BGP      | internal  | Established | in sync         | 0          | n/a                         |
| ip-172-31-9-245.local                | 172.31.9.245  | 64512    | BGP      | internal  | Active      | not advertising | 1          | 2019-Jun-29 06:07:01.750834 |
[root@ip-172-31-21-119 ~]# ./contrail-introspect-cli/ist.py ctr nei | grep -w XMPP | wc -l
708
[root@ip-172-31-21-119 ~]# 

top - 06:19:56 up  1:10,  1 user,  load average: 2.04, 2.47, 2.27
Tasks: 156 total,   2 running, 154 sleeping,   0 stopped,   0 zombie
%Cpu(s): 11.5 us,  9.7 sy,  0.0 ni, 78.4 id,  0.0 wa,  0.0 hi,  0.3 si,  0.2 st
KiB Mem : 15233672 total,  7878520 free,  3006892 used,  4348260 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 11648264 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                
32368 root      20   0  890920 145632  11008 S  15.6  1.0   3:25.34 contrail-collec                                                                                        
28773 root      20   0 1357728 594448  14592 S  13.0  3.9   9:00.69 contrail-contro                                                                                        
18686 root      20   0  249228  41000   5328 R  10.3  0.3   1:00.89 python                                                                                                 
15344 root      20   0  972324  97248  35360 S   9.0  0.6   3:26.60 dockerd                                                                                                
17129 polkitd   20   0  107624  73908   1644 S   8.3  0.5   1:50.81 redis-server                                                                                           
21458 root      20   0  248352  40084   5328 S   2.7  0.3   0:41.11 python                                                                                                 
18302 root      20   0    9048   3476   2852 S   2.0  0.0   0:05.32 docker-containe                                                                                        
28757 root      20   0  248476  40196   5328 S   1.7  0.3   0:37.21 python                                                                                                 
32438 root      20   0  248496  40348   5328 S   1.7  0.3   0:34.26 python                                                                                                 
15351 root      20   0 1477100  33204  12532 S   1.3  0.2   0:16.82 docker-containe                                                                                        
18346 polkitd   20   0 2991576 563864  25552 S   1.0  3.7   5:45.65 java                                                                                                   
19994 polkitd   20   0 3880472 129392   3644 S   0.7  0.8   1:51.54 beam.smp                                                                                               
28744 root      20   0 1373980 136520  12180 S   0.7  0.9   3:13.94 contrail-dns
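The two observed session counts are roughly consistent with a back-of-the-envelope estimate: each vRouter maintains XMPP sessions to two control nodes, spread across however many control nodes are alive. The even-spread assumption is a simplification, which is why the observed 113 and 708 differ somewhat from the computed averages:

```shell
# Average XMPP sessions per control node = vrouters * 2 / control_nodes
vrouters=980
for controls in 15 3; do
  echo "$vrouters $controls" | awk '{printf "controls=%2d avg_xmpp_per_node=%d\n", $2, int($1*2/$2)}'
done
```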

kubeadm

At the time of writing, ansible-deployer does not yet support K8s master HA.

  • https://bugs.launchpad.net/juniperopenstack/+bug/1761137

Since kubeadm already supports K8s master HA, let me describe how to integrate a kubeadm-based k8s install with a YAML-based Tungsten Fabric install.

  • https://kubernetes.io/docs/setup/independent/high-availability/
  • https://github.com/Juniper/contrail-ansible-deployer/wiki/Provision-Contrail-Kubernetes-Cluster-in-Non-nested-Mode

As with other CNIs, Tungsten Fabric can also be installed directly with "kubectl apply". To achieve this, though, some parameters, such as the controller nodes' IP addresses, need to be configured manually.

For this example setup I used five EC2 instances (all from the same AMI, ami-3185744e), each with 2 vCPUs, 8 GB of memory, and 20 GB of disk. The VPC CIDR is 172.31.0.0/16.

(on all nodes)
# cat << CONTENTS > install-k8s-packages.sh
bash -c 'cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
     https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'
setenforce 0
yum install -y kubelet kubeadm kubectl docker
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a
CONTENTS

# bash install-k8s-packages.sh

(on the first k8s master node)
yum -y install haproxy
# vi /etc/haproxy/haproxy.cfg
(append these lines at the end of the file)
listen kube
  mode tcp
  bind 0.0.0.0:1443
  server master1 172.31.13.9:6443
  server master2 172.31.8.73:6443
  server master3 172.31.32.58:6443
# systemctl start haproxy
# systemctl enable haproxy

# vi kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "ip-172-31-13-9"
controlPlaneEndpoint: "ip-172-31-13-9:1443"

# kubeadm init --config=kubeadm-config.yaml

(save these lines for later use)
  kubeadm join ip-172-31-13-9:1443 --token mlq9gw.gt5m13cbro6c8xsu \
    --discovery-token-ca-cert-hash sha256:677ea74fa03311a38ecb497d2f0803a5ea1eea85765aa2daa4503f24dd747f9a \
    --experimental-control-plane
  kubeadm join ip-172-31-13-9:1443 --token mlq9gw.gt5m13cbro6c8xsu \
    --discovery-token-ca-cert-hash sha256:677ea74fa03311a38ecb497d2f0803a5ea1eea85765aa2daa4503f24dd747f9a 

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

# cd /etc/kubernetes
# tar czvf /tmp/k8s-master-ca.tar.gz pki/ca.crt pki/ca.key pki/sa.key pki/sa.pub pki/front-proxy-ca.crt pki/front-proxy-ca.key pki/etcd/ca.crt pki/etcd/ca.key admin.conf
(scp the tar file to the 2nd and 3rd k8s master nodes)

(On 2nd and 3rd k8s master nodes)
# mkdir -p /etc/kubernetes/pki/etcd
# cd /etc/kubernetes
# tar xvf /tmp/k8s-master-ca.tar.gz

# kubeadm join ip-172-31-13-9:1443 --token mlq9gw.gt5m13cbro6c8xsu \
    --discovery-token-ca-cert-hash sha256:677ea74fa03311a38ecb497d2f0803a5ea1eea85765aa2daa4503f24dd747f9a \
    --experimental-control-plane

(on k8s nodes)
 - run the kubeadm join command saved previously
# kubeadm join ip-172-31-13-9:1443 --token mlq9gw.gt5m13cbro6c8xsu \
    --discovery-token-ca-cert-hash sha256:677ea74fa03311a38ecb497d2f0803a5ea1eea85765aa2daa4503f24dd747f9a

(on the first k8s master node)
# vi set-label.sh
masternodes=$(kubectl get node | grep -w master | awk '{print $1}')
agentnodes=$(kubectl get node | grep -v -w -e master -e NAME | awk '{print $1}')
for i in config configdb analytics webui control
do
 for masternode in ${masternodes}
 do
  kubectl label node ${masternode} node-role.opencontrail.org/${i}=
 done
done

for i in ${agentnodes}
do
 kubectl label node ${i} node-role.opencontrail.org/agent=
done

# bash set-label.sh
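To see which nodes the script above selects without touching a live cluster, the same grep/awk pipeline can be replayed against a captured `kubectl get node` listing (the hostnames below are sample data):

```shell
# Replay the node-selection logic from set-label.sh on sample output:
# masters carry the word "master" in the ROLES column; everything else
# (except the header line) becomes an agent node.
cat > /tmp/nodes.txt <<'NODEEOF'
NAME               STATUS     ROLES    AGE   VERSION
ip-172-31-13-9     NotReady   master   34m   v1.14.1
ip-172-31-17-120   Ready      <none>   30m   v1.14.1
ip-172-31-5-235    Ready      <none>   30m   v1.14.1
NODEEOF
masternodes=$(grep -w master /tmp/nodes.txt | awk '{print $1}')
agentnodes=$(grep -v -w -e master -e NAME /tmp/nodes.txt | awk '{print $1}')
echo "masters: $masternodes"
echo "agents: $agentnodes"
```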



# yum -y install git
# git clone https://github.com/Juniper/contrail-container-builder.git
# cd /root/contrail-container-builder/kubernetes/manifests
# cat << EOF > ../../common.env
CONTRAIL_CONTAINER_TAG=latest
CONTRAIL_REGISTRY=opencontrailnightly
EOF

# ./resolve-manifest.sh contrail-standalone-kubernetes.yaml > cni-tungsten-fabric.yaml 
# vi cni-tungsten-fabric.yaml
(manually modify these lines)
 - delete the lines that include ANALYTICS_API_VIP, CONFIG_API_VIP, and VROUTER_GATEWAY
 - set the lines that include ANALYTICS_NODES, ANALYTICSDB_NODES, CONFIG_NODES, CONFIGDB_NODES, CONTROL_NODES, CONTROLLER_NODES, RABBITMQ_NODES, and ZOOKEEPER_NODES properly, e.g. CONFIG_NODES: ip1,ip2,ip3
# kubectl apply -f cni-tungsten-fabric.yaml
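The manual edits above can also be scripted. A sketch, assuming GNU sed; the IPs are placeholders, and the substitution should be repeated for each of the other *_NODES variables:

```shell
# Drop the VIP/gateway lines and fill in CONFIG_NODES (placeholder IPs).
sed -i \
  -e '/ANALYTICS_API_VIP/d' -e '/CONFIG_API_VIP/d' -e '/VROUTER_GATEWAY/d' \
  -e 's/^\( *CONFIG_NODES:\).*/\1 172.31.13.9,172.31.8.73,172.31.32.58/' \
  cni-tungsten-fabric.yaml
```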

I will attach some original and modified yaml files for further reference.

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/cni-tungsten-fabric.yaml.orig
  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/cni-tungsten-fabric.yaml

After that, you finally have a kubernetes HA environment with Tungsten Fabric CNI that is (mostly) up.

Note: coredns is not active in this output; I will fix that later in this section.

[root@ip-172-31-13-9 ~]# kubectl get node -o wide
NAME                                               STATUS     ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
ip-172-31-13-9.ap-northeast-1.compute.internal     NotReady   master   34m   v1.14.1   172.31.13.9     <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://1.13.1
ip-172-31-17-120.ap-northeast-1.compute.internal   Ready      <none>   30m   v1.14.1   172.31.17.120   <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://1.13.1
ip-172-31-32-58.ap-northeast-1.compute.internal    NotReady   master   32m   v1.14.1   172.31.32.58    <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://1.13.1
ip-172-31-5-235.ap-northeast-1.compute.internal    Ready      <none>   30m   v1.14.1   172.31.5.235    <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://1.13.1
ip-172-31-8-73.ap-northeast-1.compute.internal     NotReady   master   31m   v1.14.1   172.31.8.73     <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://1.13.1
[root@ip-172-31-13-9 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                                                      READY   STATUS    RESTARTS   AGE     IP              NODE                                               NOMINATED NODE   READINESS GATES
kube-system   config-zookeeper-d897f                                                    1/1     Running   0          7m14s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   config-zookeeper-fvnbq                                                    1/1     Running   0          7m14s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   config-zookeeper-t5vjc                                                    1/1     Running   0          7m14s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-agent-cqpxc                                                      2/2     Running   0          7m12s   172.31.17.120   ip-172-31-17-120.ap-northeast-1.compute.internal   <none>           <none>
kube-system   contrail-agent-pv7c8                                                      2/2     Running   0          7m12s   172.17.0.1      ip-172-31-5-235.ap-northeast-1.compute.internal    <none>           <none>
kube-system   contrail-analytics-cfcx8                                                  3/3     Running   0          7m14s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-analytics-h5jbr                                                  3/3     Running   0          7m14s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-analytics-wvc5n                                                  3/3     Running   0          7m14s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   contrail-config-database-nodemgr-7f5h5                                    1/1     Running   0          7m14s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-config-database-nodemgr-bkmpz                                    1/1     Running   0          7m14s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-config-database-nodemgr-z6qx9                                    1/1     Running   0          7m14s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   contrail-configdb-5vd8t                                                   1/1     Running   0          7m14s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   contrail-configdb-kw6v7                                                   1/1     Running   0          7m14s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-configdb-vjv2b                                                   1/1     Running   0          7m14s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-controller-config-dk78j                                          5/5     Running   0          7m13s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-controller-config-jrh27                                          5/5     Running   0          7m14s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   contrail-controller-config-snxnn                                          5/5     Running   0          7m13s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-controller-control-446v8                                         4/4     Running   0          7m14s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-controller-control-fzpwz                                         4/4     Running   0          7m14s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   contrail-controller-control-tk52v                                         4/4     Running   1          7m14s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-controller-webui-94s26                                           2/2     Running   0          7m13s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   contrail-controller-webui-bdzbj                                           2/2     Running   0          7m13s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-controller-webui-qk4ww                                           2/2     Running   0          7m13s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-kube-manager-g6vsg                                               1/1     Running   0          7m12s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-kube-manager-ppjkf                                               1/1     Running   0          7m12s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   contrail-kube-manager-rjpmw                                               1/1     Running   0          7m12s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   coredns-fb8b8dccf-wmdw2                                                   0/1     Running   2          34m     10.47.255.252   ip-172-31-17-120.ap-northeast-1.compute.internal   <none>           <none>
kube-system   coredns-fb8b8dccf-wsrtl                                                   0/1     Running   2          34m     10.47.255.251   ip-172-31-17-120.ap-northeast-1.compute.internal   <none>           <none>
kube-system   etcd-ip-172-31-13-9.ap-northeast-1.compute.internal                       1/1     Running   0          33m     172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   etcd-ip-172-31-32-58.ap-northeast-1.compute.internal                      1/1     Running   0          32m     172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   etcd-ip-172-31-8-73.ap-northeast-1.compute.internal                       1/1     Running   0          30m     172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-apiserver-ip-172-31-13-9.ap-northeast-1.compute.internal             1/1     Running   0          33m     172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-apiserver-ip-172-31-32-58.ap-northeast-1.compute.internal            1/1     Running   1          32m     172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   kube-apiserver-ip-172-31-8-73.ap-northeast-1.compute.internal             1/1     Running   1          30m     172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-controller-manager-ip-172-31-13-9.ap-northeast-1.compute.internal    1/1     Running   1          33m     172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-controller-manager-ip-172-31-32-58.ap-northeast-1.compute.internal   1/1     Running   0          31m     172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   kube-controller-manager-ip-172-31-8-73.ap-northeast-1.compute.internal    1/1     Running   0          31m     172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-proxy-6ls9w                                                          1/1     Running   0          32m     172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   kube-proxy-82jl8                                                          1/1     Running   0          30m     172.31.5.235    ip-172-31-5-235.ap-northeast-1.compute.internal    <none>           <none>
kube-system   kube-proxy-bjdj9                                                          1/1     Running   0          31m     172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-proxy-nd7hq                                                          1/1     Running   0          31m     172.31.17.120   ip-172-31-17-120.ap-northeast-1.compute.internal   <none>           <none>
kube-system   kube-proxy-rb7nk                                                          1/1     Running   0          34m     172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-scheduler-ip-172-31-13-9.ap-northeast-1.compute.internal             1/1     Running   1          33m     172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   kube-scheduler-ip-172-31-32-58.ap-northeast-1.compute.internal            1/1     Running   0          31m     172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   kube-scheduler-ip-172-31-8-73.ap-northeast-1.compute.internal             1/1     Running   0          31m     172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   rabbitmq-9lp4n                                                            1/1     Running   0          7m12s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   rabbitmq-lxkgz                                                            1/1     Running   0          7m12s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   rabbitmq-wfk2f                                                            1/1     Running   0          7m12s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
kube-system   redis-h2x2b                                                               1/1     Running   0          7m13s   172.31.13.9     ip-172-31-13-9.ap-northeast-1.compute.internal     <none>           <none>
kube-system   redis-pkmng                                                               1/1     Running   0          7m13s   172.31.8.73     ip-172-31-8-73.ap-northeast-1.compute.internal     <none>           <none>
kube-system   redis-r68ks                                                               1/1     Running   0          7m13s   172.31.32.58    ip-172-31-32-58.ap-northeast-1.compute.internal    <none>           <none>
[root@ip-172-31-13-9 ~]# 


[root@ip-172-31-13-9 ~]# contrail-status 
Pod              Service         Original Name                          State    Id            Status             
                 redis           contrail-external-redis                running  8f38c94fc370  Up About a minute  
analytics        api             contrail-analytics-api                 running  2edde00b4525  Up About a minute  
analytics        collector       contrail-analytics-collector           running  c1d0c24775a6  Up About a minute  
analytics        nodemgr         contrail-nodemgr                       running  4a4c455cc0df  Up About a minute  
config           api             contrail-controller-config-api         running  b855ad79ace4  Up About a minute  
config           device-manager  contrail-controller-config-devicemgr   running  50d590e6f6cf  Up About a minute  
config           nodemgr         contrail-nodemgr                       running  6f0f64f958d9  Up About a minute  
config           schema          contrail-controller-config-schema      running  2057b21f50b7  Up About a minute  
config           svc-monitor     contrail-controller-config-svcmonitor  running  ba48df5cb7f9  Up About a minute  
config-database  cassandra       contrail-external-cassandra            running  1d38278d304e  Up About a minute  
config-database  nodemgr         contrail-nodemgr                       running  8e4f9315cc38  Up About a minute  
config-database  rabbitmq        contrail-external-rabbitmq             running  4a424e2f456c  Up About a minute  
config-database  zookeeper       contrail-external-zookeeper            running  4b46c83f1376  Up About a minute  
control          control         contrail-controller-control-control    running  17e4b9b9e3b8  Up About a minute  
control          dns             contrail-controller-control-dns        running  39fc34e19e13  Up About a minute  
control          named           contrail-controller-control-named      running  aef0bf56a0e2  Up About a minute  
control          nodemgr         contrail-nodemgr                       running  21f091df35d5  Up About a minute  
kubernetes       kube-manager    contrail-kubernetes-kube-manager       running  db661ef685b0  Up About a minute  
webui            job             contrail-controller-webui-job          running  0bf35b774aac  Up About a minute  
webui            web             contrail-controller-webui-web          running  9213ce050547  Up About a minute  

== Contrail control ==
control: active
nodemgr: active
named: active
dns: active

== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active

== Contrail kubernetes ==
kube-manager: backup

== Contrail analytics ==
nodemgr: active
api: active
collector: active

== Contrail webui ==
web: active
job: active

== Contrail config ==
svc-monitor: backup
nodemgr: active
device-manager: active
api: active
schema: backup

[root@ip-172-31-13-9 ~]# 

[root@ip-172-31-8-73 ~]# contrail-status 
Pod              Service         Original Name                          State    Id            Status             
                 redis           contrail-external-redis                running  39af38401d31  Up 2 minutes       
analytics        api             contrail-analytics-api                 running  29fa05f18927  Up 2 minutes       
analytics        collector       contrail-analytics-collector           running  994bffbe4c1f  Up About a minute  
analytics        nodemgr         contrail-nodemgr                       running  1eb143c7b864  Up About a minute  
config           api             contrail-controller-config-api         running  92ee8983bc81  Up About a minute  
config           device-manager  contrail-controller-config-devicemgr   running  7f9ab5d2a9ca  Up About a minute  
config           nodemgr         contrail-nodemgr                       running  c6a88b487031  Up About a minute  
config           schema          contrail-controller-config-schema      running  1fe2e2767dca  Up About a minute  
config           svc-monitor     contrail-controller-config-svcmonitor  running  ec1d66894036  Up About a minute  
config-database  cassandra       contrail-external-cassandra            running  80f394c8d1a8  Up 2 minutes       
config-database  nodemgr         contrail-nodemgr                       running  af9b70285564  Up About a minute  
config-database  rabbitmq        contrail-external-rabbitmq             running  edae18a7cf9f  Up 2 minutes       
config-database  zookeeper       contrail-external-zookeeper            running  f00c2e5d94ac  Up 2 minutes       
control          control         contrail-controller-control-control    running  6e3e22625a50  Up About a minute  
control          dns             contrail-controller-control-dns        running  b1b6b9649761  Up About a minute  
control          named           contrail-controller-control-named      running  f8aa237fca10  Up About a minute  
control          nodemgr         contrail-nodemgr                       running  bb0868390322  Up About a minute  
kubernetes       kube-manager    contrail-kubernetes-kube-manager       running  02e99f8b9490  Up About a minute  
webui            job             contrail-controller-webui-job          running  f5ffdfc1076f  Up About a minute  
webui            web             contrail-controller-webui-web          running  09c3f77223d3  Up About a minute  

== Contrail control ==
control: active
nodemgr: active
named: active
dns: active

== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active

== Contrail kubernetes ==
kube-manager: backup

== Contrail analytics ==
nodemgr: active
api: active
collector: active

== Contrail webui ==
web: active
job: active

== Contrail config ==
svc-monitor: backup
nodemgr: active
device-manager: backup
api: active
schema: backup

[root@ip-172-31-8-73 ~]# 

[root@ip-172-31-32-58 ~]# contrail-status 
Pod              Service         Original Name                          State    Id            Status             
                 redis           contrail-external-redis                running  44363e63f104  Up 2 minutes       
analytics        api             contrail-analytics-api                 running  aa8c5dc17c57  Up 2 minutes       
analytics        collector       contrail-analytics-collector           running  6856b8e33f34  Up 2 minutes       
analytics        nodemgr         contrail-nodemgr                       running  c1ec67695618  Up About a minute  
config           api             contrail-controller-config-api         running  ff95a8e3e4a9  Up 2 minutes       
config           device-manager  contrail-controller-config-devicemgr   running  abc0ad6b32c0  Up 2 minutes       
config           nodemgr         contrail-nodemgr                       running  c883e525205a  Up About a minute  
config           schema          contrail-controller-config-schema      running  0b18780b02da  Up About a minute  
config           svc-monitor     contrail-controller-config-svcmonitor  running  42e74aad3d3d  Up About a minute  
config-database  cassandra       contrail-external-cassandra            running  3994d9f51055  Up 2 minutes       
config-database  nodemgr         contrail-nodemgr                       running  781c5c93e662  Up 2 minutes       
config-database  rabbitmq        contrail-external-rabbitmq             running  849427f37237  Up 2 minutes       
config-database  zookeeper       contrail-external-zookeeper            running  fbb778620915  Up 2 minutes       
control          control         contrail-controller-control-control    running  85b2e8366a13  Up 2 minutes       
control          dns             contrail-controller-control-dns        running  b1f05dc6b8ee  Up 2 minutes       
control          named           contrail-controller-control-named      running  ca68ff0e118b  Up About a minute  
control          nodemgr         contrail-nodemgr                       running  cf8aaff71343  Up About a minute  
kubernetes       kube-manager    contrail-kubernetes-kube-manager       running  62022a542509  Up 2 minutes       
webui            job             contrail-controller-webui-job          running  28413e9f378b  Up 2 minutes       
webui            web             contrail-controller-webui-web          running  4a6edac6d596  Up 2 minutes       

== Contrail control ==
control: active
nodemgr: active
named: active
dns: active

== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active

== Contrail kubernetes ==
kube-manager: active

== Contrail analytics ==
nodemgr: active
api: active
collector: active

== Contrail webui ==
web: active
job: active

== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: backup
api: active
schema: active

[root@ip-172-31-32-58 ~]# 

[root@ip-172-31-5-235 ~]# contrail-status 
Pod      Service  Original Name           State    Id            Status        
vrouter  agent    contrail-vrouter-agent  running  48377d29f584  Up 2 minutes  
vrouter  nodemgr  contrail-nodemgr        running  77d7a409d410  Up 2 minutes  

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active

[root@ip-172-31-5-235 ~]# 

[root@ip-172-31-17-120 ~]# contrail-status 
Pod      Service  Original Name           State    Id            Status        
vrouter  agent    contrail-vrouter-agent  running  f97837959a0b  Up 3 minutes  
vrouter  nodemgr  contrail-nodemgr        running  4e48673efbcc  Up 3 minutes  

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active


[root@ip-172-31-13-9 ~]# ./contrail-introspect-cli/ist.py ctr nei
+--------------------------------------+---------------+----------+----------+-----------+-------------+------------+------------+-----------------------------+
| peer                                 | peer_address  | peer_asn | encoding | peer_type | state       | send_state | flap_count | flap_time                   |
+--------------------------------------+---------------+----------+----------+-----------+-------------+------------+------------+-----------------------------+
| ip-172-31-32-58.ap-                  | 172.31.32.58  | 64512    | BGP      | internal  | Established | in sync    | 0          | n/a                         |
| northeast-1.compute.internal         |               |          |          |           |             |            |            |                             |
| ip-172-31-8-73.ap-                   | 172.31.8.73   | 64512    | BGP      | internal  | Established | in sync    | 0          | n/a                         |
| northeast-1.compute.internal         |               |          |          |           |             |            |            |                             |
| ip-172-31-17-120.ap-                 | 172.31.17.120 | 0        | XMPP     | internal  | Established | in sync    | 5          | 2019-Apr-28 07:35:40.743648 |
| northeast-1.compute.internal         |               |          |          |           |             |            |            |                             |
| ip-172-31-5-235.ap-                  | 172.31.5.235  | 0        | XMPP     | internal  | Established | in sync    | 6          | 2019-Apr-28 07:35:40.251476 |
| northeast-1.compute.internal         |               |          |          |           |             |            |            |                             |
+--------------------------------------+---------------+----------+----------+-----------+-------------+------------+------------+-----------------------------+
[root@ip-172-31-13-9 ~]# 


[root@ip-172-31-13-9 ~]# ./contrail-introspect-cli/ist.py ctr route summary
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| name                                               | prefixes | paths | primary_paths | secondary_paths | infeasible_paths |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| default-domain:default-                            | 0        | 0     | 0             | 0               | 0                |
| project:__link_local__:__link_local__.inet.0       |          |       |               |                 |                  |
| default-domain:default-project:dci-                | 0        | 0     | 0             | 0               | 0                |
| network:__default__.inet.0                         |          |       |               |                 |                  |
| default-domain:default-project:dci-network:dci-    | 0        | 0     | 0             | 0               | 0                |
| network.inet.0                                     |          |       |               |                 |                  |
| default-domain:default-project:default-virtual-    | 0        | 0     | 0             | 0               | 0                |
| network:default-virtual-network.inet.0             |          |       |               |                 |                  |
| inet.0                                             | 0        | 0     | 0             | 0               | 0                |
| default-domain:default-project:ip-fabric:ip-       | 4        | 8     | 2             | 6               | 0                |
| fabric.inet.0                                      |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-pod-network | 4        | 8     | 2             | 6               | 0                |
| :k8s-default-pod-network.inet.0                    |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-service-    | 4        | 8     | 0             | 8               | 0                |
| network:k8s-default-service-network.inet.0         |          |       |               |                 |                  |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
[root@ip-172-31-13-9 ~]# 

After creating the cirros deployment, just as described in the "Up and Running" section, the two pods on different vRouter nodes can now ping each other.

  • The output looks the same, but this time MPLS encapsulation is used between the two vRouters!
[root@ip-172-31-13-9 ~]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
cirros-deployment-86885fbf85-pkzqz   1/1     Running   0          16s   10.47.255.249   ip-172-31-17-120.ap-northeast-1.compute.internal   <none>           <none>
cirros-deployment-86885fbf85-w4w6h   1/1     Running   0          16s   10.47.255.250   ip-172-31-5-235.ap-northeast-1.compute.internal    <none>           <none>
[root@ip-172-31-13-9 ~]# 
[root@ip-172-31-13-9 ~]# 
[root@ip-172-31-13-9 ~]# kubectl exec -it cirros-deployment-86885fbf85-pkzqz sh
/ # ping 10.47.255.250
PING 10.47.255.250 (10.47.255.250): 56 data bytes
64 bytes from 10.47.255.250: seq=0 ttl=63 time=3.376 ms
64 bytes from 10.47.255.250: seq=1 ttl=63 time=2.587 ms
64 bytes from 10.47.255.250: seq=2 ttl=63 time=2.549 ms
^C
--- 10.47.255.250 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 2.549/2.837/3.376 ms
/ # 
/ # 
/ # ip -o a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue \    link/ether 02:64:0d:41:b0:69 brd ff:ff:ff:ff:ff:ff
23: eth0    inet 10.47.255.249/12 scope global eth0\       valid_lft forever preferred_lft forever
23: eth0    inet6 fe80::489a:28ff:fedf:2e7b/64 scope link \       valid_lft forever preferred_lft forever
/ # 
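To confirm on the wire that MPLS encapsulation is actually being used, the overlay traffic can be captured on the vRouter node's fabric interface while the ping above is running. This is only a sketch: the interface name `eth0` and the default MPLS-over-UDP port 6635 are assumptions that may differ in your environment.

```shell
# On the vRouter node hosting one pod, watch tunneled traffic to the other node.
# MPLS-over-GRE appears as IP protocol 47 (gre);
# MPLS-over-UDP uses UDP destination port 6635 by default.
tcpdump -nn -i eth0 'host 172.31.5.235 and (proto gre or udp port 6635)'
```

The `encap: ['gre', 'udp']` field in the route output above lists the encapsulations the control node advertises for each path, so the capture should match one of them.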

[root@ip-172-31-13-9 ~]# ./contrail-introspect-cli/ist.py ctr route summary
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| name                                               | prefixes | paths | primary_paths | secondary_paths | infeasible_paths |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| default-domain:default-                            | 0        | 0     | 0             | 0               | 0                |
| project:__link_local__:__link_local__.inet.0       |          |       |               |                 |                  |
| default-domain:default-project:dci-                | 0        | 0     | 0             | 0               | 0                |
| network:__default__.inet.0                         |          |       |               |                 |                  |
| default-domain:default-project:dci-network:dci-    | 0        | 0     | 0             | 0               | 0                |
| network.inet.0                                     |          |       |               |                 |                  |
| default-domain:default-project:default-virtual-    | 0        | 0     | 0             | 0               | 0                |
| network:default-virtual-network.inet.0             |          |       |               |                 |                  |
| inet.0                                             | 0        | 0     | 0             | 0               | 0                |
| default-domain:default-project:ip-fabric:ip-       | 6        | 12    | 2             | 10              | 0                |
| fabric.inet.0                                      |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-pod-network | 6        | 12    | 4             | 8               | 0                |
| :k8s-default-pod-network.inet.0                    |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-service-    | 6        | 12    | 0             | 12              | 0                |
| network:k8s-default-service-network.inet.0         |          |       |               |                 |                  |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
[root@ip-172-31-13-9 ~]# 

[root@ip-172-31-13-9 ~]# ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s-default:k8s-default-pod-network:k8s-default-pod-network.inet.0 10.47.255.251

default-domain:k8s-default:k8s-default-pod-network:k8s-default-pod-network.inet.0: 6 destinations, 12 routes (4 primary, 8 secondary, 0 infeasible)

10.47.255.251/32, age: 0:08:37.590508, last_modified: 2019-Apr-28 07:37:16.031523
    [XMPP (interface)|ip-172-31-17-120.ap-northeast-1.compute.internal] age: 0:08:37.596128, localpref: 200, nh: 172.31.17.120, encap: ['gre', 'udp'], label: 25, AS path: None
    [BGP|172.31.32.58] age: 0:08:37.594533, localpref: 200, nh: 172.31.17.120, encap: ['gre', 'udp'], label: 25, AS path: None


[root@ip-172-31-13-9 ~]# ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s-default:k8s-default-pod-network:k8s-default-pod-network.inet.0 10.47.255.250

default-domain:k8s-default:k8s-default-pod-network:k8s-default-pod-network.inet.0: 6 destinations, 12 routes (4 primary, 8 secondary, 0 infeasible)

10.47.255.250/32, age: 0:01:50.135045, last_modified: 2019-Apr-28 07:44:06.371447
    [XMPP (interface)|ip-172-31-5-235.ap-northeast-1.compute.internal] age: 0:01:50.141480, localpref: 200, nh: 172.31.5.235, encap: ['gre', 'udp'], label: 25, AS path: None
    [BGP|172.31.32.58] age: 0:01:50.098328, localpref: 200, nh: 172.31.5.235, encap: ['gre', 'udp'], label: 25, AS path: None
[root@ip-172-31-13-9 ~]# 

Note: two changes are needed to make coredns active.

[root@ip-172-31-8-73 ~]# kubectl edit configmap -n kube-system coredns
-        forward . /etc/resolv.conf
+        forward . 10.47.255.253

# kubectl edit deployment -n kube-system coredns
 -> delete livenessProbe, readinessProbe
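The same two edits can also be applied non-interactively. This is a sketch under the assumption that coredns has a single container at index 0 in its pod template; the JSON-patch paths would need adjusting otherwise.

```shell
# 1) Point coredns at the Tungsten Fabric default DNS service
#    instead of the node's /etc/resolv.conf
kubectl -n kube-system get configmap coredns -o yaml \
  | sed 's|forward . /etc/resolv.conf|forward . 10.47.255.253|' \
  | kubectl apply -f -

# 2) Remove livenessProbe and readinessProbe from the coredns container
kubectl -n kube-system patch deployment coredns --type=json -p='[
  {"op":"remove","path":"/spec/template/spec/containers/0/livenessProbe"},
  {"op":"remove","path":"/spec/template/spec/containers/0/readinessProbe"}]'
```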

Finally, coredns is also active, and the cluster is fully up!

[root@ip-172-31-13-9 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                                                      READY   STATUS    RESTARTS   AGE
default       cirros-deployment-86885fbf85-pkzqz                                        1/1     Running   0          47m
default       cirros-deployment-86885fbf85-w4w6h                                        1/1     Running   0          47m
kube-system   config-zookeeper-l8m9l                                                    1/1     Running   0          24m
kube-system   config-zookeeper-lvtmq                                                    1/1     Running   0          24m
kube-system   config-zookeeper-mzlgm                                                    1/1     Running   0          24m
kube-system   contrail-agent-jc4x2                                                      2/2     Running   0          24m
kube-system   contrail-agent-psk2v                                                      2/2     Running   0          24m
kube-system   contrail-analytics-hsm7w                                                  3/3     Running   0          24m
kube-system   contrail-analytics-vgwcb                                                  3/3     Running   0          24m
kube-system   contrail-analytics-xbpwf                                                  3/3     Running   0          24m
kube-system   contrail-config-database-nodemgr-7xvnb                                    1/1     Running   0          24m
kube-system   contrail-config-database-nodemgr-9bznv                                    1/1     Running   0          24m
kube-system   contrail-config-database-nodemgr-lqtkq                                    1/1     Running   0          24m
kube-system   contrail-configdb-4svwg                                                   1/1     Running   0          24m
kube-system   contrail-configdb-gdvmc                                                   1/1     Running   0          24m
kube-system   contrail-configdb-sll25                                                   1/1     Running   0          24m
kube-system   contrail-controller-config-gmkpr                                          5/5     Running   0          24m
kube-system   contrail-controller-config-q6rvx                                          5/5     Running   0          24m
kube-system   contrail-controller-config-zbpjm                                          5/5     Running   0          24m
kube-system   contrail-controller-control-4m9fd                                         4/4     Running   0          24m
kube-system   contrail-controller-control-9klxh                                         4/4     Running   0          24m
kube-system   contrail-controller-control-wk6jp                                         4/4     Running   0          24m
kube-system   contrail-controller-webui-268bc                                           2/2     Running   0          24m
kube-system   contrail-controller-webui-57dbf                                           2/2     Running   0          24m
kube-system   contrail-controller-webui-z6c68                                           2/2     Running   0          24m
kube-system   contrail-kube-manager-6nh9d                                               1/1     Running   0          24m
kube-system   contrail-kube-manager-stqf5                                               1/1     Running   0          24m
kube-system   contrail-kube-manager-wqgl4                                               1/1     Running   0          24m
kube-system   coredns-7f865bd4f9-g8j8f                                                  1/1     Running   0          13s
kube-system   coredns-7f865bd4f9-zftsc                                                  1/1     Running   0          13s
kube-system   etcd-ip-172-31-13-9.ap-northeast-1.compute.internal                       1/1     Running   0          82m
kube-system   etcd-ip-172-31-32-58.ap-northeast-1.compute.internal                      1/1     Running   0          81m
kube-system   etcd-ip-172-31-8-73.ap-northeast-1.compute.internal                       1/1     Running   0          79m
kube-system   kube-apiserver-ip-172-31-13-9.ap-northeast-1.compute.internal             1/1     Running   0          82m
kube-system   kube-apiserver-ip-172-31-32-58.ap-northeast-1.compute.internal            1/1     Running   1          81m
kube-system   kube-apiserver-ip-172-31-8-73.ap-northeast-1.compute.internal             1/1     Running   1          80m
kube-system   kube-controller-manager-ip-172-31-13-9.ap-northeast-1.compute.internal    1/1     Running   1          83m
kube-system   kube-controller-manager-ip-172-31-32-58.ap-northeast-1.compute.internal   1/1     Running   0          80m
kube-system   kube-controller-manager-ip-172-31-8-73.ap-northeast-1.compute.internal    1/1     Running   0          80m
kube-system   kube-proxy-6ls9w                                                          1/1     Running   0          81m
kube-system   kube-proxy-82jl8                                                          1/1     Running   0          80m
kube-system   kube-proxy-bjdj9                                                          1/1     Running   0          81m
kube-system   kube-proxy-nd7hq                                                          1/1     Running   0          80m
kube-system   kube-proxy-rb7nk                                                          1/1     Running   0          83m
kube-system   kube-scheduler-ip-172-31-13-9.ap-northeast-1.compute.internal             1/1     Running   1          83m
kube-system   kube-scheduler-ip-172-31-32-58.ap-northeast-1.compute.internal            1/1     Running   0          80m
kube-system   kube-scheduler-ip-172-31-8-73.ap-northeast-1.compute.internal             1/1     Running   0          80m
kube-system   rabbitmq-b6rpx                                                            1/1     Running   0          24m
kube-system   rabbitmq-gn67t                                                            1/1     Running   0          24m
kube-system   rabbitmq-r8dvb                                                            1/1     Running   0          24m
kube-system   redis-5qvbv                                                               1/1     Running   0          24m
kube-system   redis-8mck5                                                               1/1     Running   0          24m
kube-system   redis-9d9dv                                                               1/1     Running   0          24m
[root@ip-172-31-13-9 ~]# 

[root@ip-172-31-13-9 ~]# kubectl get deployment -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           98m
[root@ip-172-31-13-9 ~]# 


[root@ip-172-31-13-9 ~]# ./contrail-introspect-cli/ist.py ctr route summary
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| name                                               | prefixes | paths | primary_paths | secondary_paths | infeasible_paths |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| default-domain:default-                            | 0        | 0     | 0             | 0               | 0                |
| project:__link_local__:__link_local__.inet.0       |          |       |               |                 |                  |
| default-domain:default-project:dci-                | 0        | 0     | 0             | 0               | 0                |
| network:__default__.inet.0                         |          |       |               |                 |                  |
| default-domain:default-project:dci-network:dci-    | 3        | 8     | 0             | 8               | 0                |
| network.inet.0                                     |          |       |               |                 |                  |
| default-domain:default-project:default-virtual-    | 0        | 0     | 0             | 0               | 0                |
| network:default-virtual-network.inet.0             |          |       |               |                 |                  |
| inet.0                                             | 0        | 0     | 0             | 0               | 0                |
| default-domain:default-project:ip-fabric:ip-       | 5        | 12    | 2             | 10              | 0                |
| fabric.inet.0                                      |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-pod-network | 5        | 14    | 4             | 10              | 0                |
| :k8s-default-pod-network.inet.0                    |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-service-    | 5        | 12    | 2             | 10              | 0                |
| network:k8s-default-service-network.inet.0         |          |       |               |                 |                  |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
[root@ip-172-31-13-9 ~]#

Since MP-BGP supports stitching between two clusters, these clusters can easily be extended into a multi-cluster environment.

  • Each cluster's prefixes are leaked into the other cluster as routes

I will describe the details of this setup in the appendix section.
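Stitching two clusters together essentially means registering each cluster's control nodes as BGP peers of the other. With Contrail's provisioning utility this could look like the sketch below; the script path, host name, and IP are placeholders for cluster B's control node (not taken from this setup), and the ASN follows the default 64512 seen in the neighbor table above.

```shell
# Run against cluster A's config API, once per control node of cluster B
# (and symmetrically on cluster B for cluster A's control nodes).
python /opt/contrail/utils/provision_control.py \
  --api_server_ip 172.31.13.9 --api_server_port 8082 \
  --host_name <cluster-b-control-node> \
  --host_ip <cluster-b-control-ip> \
  --router_asn 64512 \
  --oper add
```

After both sides are provisioned, each cluster's route tables should show the other cluster's pod prefixes as BGP-learned paths, similar to the `ist.py ctr route summary` output above.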

(Editor's note: in the next article, we will cover HA installation for OpenStack and vCenter, as well as the question of which tag to choose for a new installation.)

