The evolution of virtualization technology can be divided into three eras:
The physical-machine era: multiple applications run directly on one physical machine.
The drawback is that a memory overflow or unknown error in any one service can disrupt every service running on that machine.
The virtual-machine era: one physical machine hosts several VM instances, and each VM runs multiple applications.
This solves the previous problem, but it is heavyweight and clumsy, with poor resource utilization.
In addition, developers frequently run into test or production environments that differ from their local development environment; at best this means extra work keeping environments in sync, and at worst it means rework.
The container era: one physical machine runs many container instances, and each container runs an application together with its environment.
This addresses the problems above: the software and the environment it runs in are decoupled. After coding, the developer packages the program together with its environment into a container image via a Dockerfile, which eliminates environment inconsistency at the root.
Containers are also more lightweight, cross-platform, and reproducible.
Docker provides an open standard for containerizing applications, but as the number of containers grows, a new set of problems appears:
we urgently need a system for orchestrating containers at scale, and Kubernetes was born to fill that need.
Kubernetes is an architecture built on container technology whose goal is to automate resource management and maximize resource utilization across data centers.
Kubernetes offers complete cluster-management capabilities, including multi-layered security and admission control, multi-tenancy support, transparent service registration and service discovery, a built-in intelligent load balancer, powerful failure detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduler, and fine-grained resource quota management.
At the same time, Kubernetes ships with a rich set of management tools covering development, deployment, testing, operations, and monitoring. It is not only a new container-based distributed architecture solution, but also a one-stop platform for building and supporting distributed systems.
Its main features are as follows:
Service discovery and load balancing
Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.
Service discovery means clients can reach workloads through a stable name instead of individual Pod IPs; combined with health checks, Kubernetes only routes traffic to Pods that are ready.
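As a minimal sketch of what this looks like in practice (the name my-app and the port numbers below are placeholders, not from this article), a Service gives a labelled group of Pods a stable DNS name and spreads traffic across the ready ones:

apiVersion: v1
kind: Service
metadata:
  name: my-app              # placeholder; reachable in-cluster as my-app.<namespace>.svc
spec:
  selector:
    app: my-app             # traffic is balanced across all ready Pods carrying this label
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 8080      # port the containers actually listen on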
Storage orchestration
Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.
In plain terms, instead of each application carving out its own ad-hoc space and fragmenting storage, Kubernetes centralizes how storage is allocated, reclaimed, and consolidated.
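As an illustrative sketch (the claim name and size are placeholders), an application asks for storage through a PersistentVolumeClaim and leaves it to Kubernetes to find or provision a matching volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # Kubernetes binds this claim to a suitable volume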
Automated rollouts and rollbacks
You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
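For example (the image tag and replica count below are placeholders), a Deployment declares the desired state; changing the image and re-applying it triggers a controlled rolling update, and kubectl rollout undo deployment/my-app rolls it back:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                     # desired number of Pods
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate           # replace Pods gradually instead of all at once
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.21       # bump this tag and re-apply to roll out a new version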
Automatic bin packing
You tell Kubernetes how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions about placing and managing them.
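A hedged example of such a request (the numbers are arbitrary): the scheduler uses requests when picking a node, and limits cap what the container may consume at runtime.

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo            # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.21
      resources:
        requests:
          cpu: "250m"            # 0.25 CPU and 128Mi reserved for scheduling decisions
          memory: "128Mi"
        limits:
          cpu: "500m"            # hard ceilings enforced at runtime
          memory: "256Mi"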
Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to your user-defined health checks, and does not advertise them to clients until they are ready to serve.
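A minimal sketch of such a user-defined health check (the path and port are placeholders): a failing livenessProbe gets the container restarted, while a readinessProbe keeps it out of Service endpoints until it passes.

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: app
      image: nginx:1.21
      livenessProbe:             # restart the container if this check keeps failing
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:            # only route traffic to the Pod once this check passes
        httpGet:
          path: /
          port: 80
        periodSeconds: 5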
Secret and configuration management
Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration.
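As a sketch (the key names and value are placeholders), a Secret stores the sensitive value and a Pod consumes it as an environment variable, so the credential never gets baked into the image:

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  password: changeme            # placeholder value; stored base64-encoded by the API server
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
    - name: app
      image: nginx:1.21
      env:
        - name: DB_PASSWORD     # injected at runtime, not built into the image
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password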
In production, k8s is normally installed as a cluster rather than on a single machine.
A cluster is composed of:
N master nodes (usually an odd number);
you can think of them as a board of directors with N shareholders;
if the active master goes down, a new one is elected by majority vote;
N worker nodes;
where N >= 1.
Kubernetes official documentation
A diagram from the official documentation:
Official description:
This is the master node, responsible for decision-making (management) in the cluster.
The Master is the gateway and hub of the cluster: it exposes the API to users and clients, tracks the health of the other servers, schedules workloads in an optimal way, and orchestrates communication between the other components. It is the central point of contact between users or clients and the cluster, and carries out most of Kubernetes' centralized control logic. A single Master node can perform all of these functions, but for redundancy and load balancing, production environments usually deploy several of them. The Master node is like the queen bee of a hive.
The API server is the component of the Kubernetes control plane that exposes the Kubernetes API; it is the front end of the control plane.
The main implementation of the Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between them.
etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.
Your Kubernetes cluster's etcd database should always have a backup plan.
For more in-depth information about etcd, see the etcd documentation.
A control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on.
Factors taken into account for scheduling decisions include: individual and collective resource requirements of Pods, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, interference between workloads, and deadlines.
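As a hedged illustration of one such constraint (the label key and value are placeholders), a Pod can ask the scheduler to place it only on nodes carrying a specific label:

apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    disktype: ssd               # only nodes labeled disktype=ssd are considered
  containers:
    - name: app
      image: nginx:1.21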
The component that runs controller processes on the master node.
Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
These controllers include:
The cloud controller manager is a control plane component that embeds cloud-specific control logic. It lets you link your cluster into your cloud provider's API and separates the components that interact with that cloud platform from the components that only interact with your cluster.
cloud-controller-manager runs only the control loops that are specific to your cloud provider. If you run Kubernetes on your own premises, or in a learning environment on your local machine, the cluster does not need a cloud controller manager.
Like kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale it horizontally (run more than one copy) to improve performance or fault tolerance.
The following controllers all have dependencies on a cloud provider:
These are the node (worker) nodes, which provide the runtime environment for containers (they do the actual work).
A Node is a worker node of the Kubernetes cluster. It receives instructions from the Master and, following them, creates or removes Pod objects and adjusts network rules to route and forward traffic correctly. In theory a Node can be any form of compute device, but the Master uniformly abstracts and manages them as Node objects. Nodes are like the worker bees of the hive; in production they are usually numerous.
An agent that runs on every node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
kube-proxy is a network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on the node. These rules allow network communication to your Pods from network sessions inside or outside the cluster.
kube-proxy uses the operating system's packet filtering layer if one is available; otherwise it forwards the traffic itself.
The explanation that follows is only a loose, informal analogy, meant purely to aid understanding!
The official descriptions can be hard to digest; comparing them with the picture below should make things much easier to understand.
A company called Silicon Valley Group (the k8s cluster) decides to build an airplane. Silicon Valley headquarters (the k8s control plane, the Master node) gives the orders, and the various branch factories (the k8s Node nodes) carry them out.
One day the decision maker (controller manager) decides that building airplanes is a great business and could make serious money, so he signs all the agreements and contracts and tells the secretariat (API server) that these documents are important and must be kept in the company's core data archive (etcd). (After all, the big boss is hardly going to file them personally; that's what secretaries are for.)
With the grand airplane strategy settled, the work begins and tasks have to be assigned. Now our scheduler (scheduler) steps in, because only he knows the current situation of each branch factory and which one is suited to the airplane job; the scheduler finally decides to hand the airplane project to the west factory (a Node).
But the scheduler is upper management too and won't go announce it in person, so he tells the secretariat (API server) the decision and has them send out the notice; naturally, anything involving documents or data is also handed to the secretariat (API server) to be stored in the archive (etcd).
The secretariat (API server), grumbling to itself that all the dirty work lands on it, reluctantly notifies the manager of the west factory (kubelet) that the airplane project is now theirs, and walks off.
The west factory manager (kubelet) hears this and thinks he has just won the lottery; if this goes well, a promotion and a raise are surely coming, so he rallies his crew and they get straight to work.
The factory manager (kubelet) monitors everything happening on the current node and controls what the node should be doing.
A month later, the leadership wants to inspect the work, but they don't know which factory is building the airplane and wander into the east factory by mistake. They ask the gatekeeper (kube-proxy) where the airplane project is, and the east factory's gatekeeper tells them it's over at the west factory and points them there.
Gatekeepers (kube-proxy), after all, are extremely well informed: they gossip with each other constantly and keep each other up to date, so whichever factory's gatekeeper you ask, they know where everything is. kube-proxy instances share this information with one another.
The leadership then arrives at the west factory, finds progress slow, and asks why. The west factory reports that a certain part cannot be made in-house and only the cloud vendor (cloud provider API) has it. The leadership decides this cannot go on and has headquarters' external-relations department (cloud controller manager) negotiate with the cloud vendor (cloud provider API).
After some back and forth the vendor agrees, and the airplane project proceeds in full swing.
Points to note in the story above:
all interaction inside Silicon Valley headquarters goes through the secretariat (api server); the other components do not talk to each other directly, and it is also the only entry point through which the outside world interacts with headquarters. Finally, one more interaction diagram:
There are two ways to deploy k8s. This article uses the kubeadm approach, deployed on virtual machines.
Kubernetes clusters broadly fall into two types: one master with multiple workers, and multiple masters with multiple workers.
One master, multiple workers: one Master node and several Node nodes. Simple to set up, but the master is a single point of failure; suited to test environments.
Multiple masters, multiple workers: several Master nodes and several Node nodes. More effort to set up, but more secure and highly available; suited to production.
This article builds a one-master, two-worker cluster: one master node and two worker nodes. The software versions used are as follows:
Servers running k8s need to meet the following requirements:
Recommended for production:
master node: 8 cores, 16 GB RAM, 100 GB storage
node: 16 cores, 64 GB RAM, 500 GB storage
When installing the virtual machines, pay attention to the following settings:
Operating system environment: 2 CPUs, 2 GB RAM, 50 GB disk
Partitioning: automatic partitioning
Network configuration: set the network addresses as follows:
Enable network access
IP address: 192.168.56.100 (different on each host: 100, 101, and 102 respectively)
Subnet mask: 255.255.255.0
Default gateway: 192.168.56.2
DNS: 8.8.8.8
Hostname: set the hostnames as follows:
master node: k8s-master
node: k8s-node1
node: k8s-node2
Open the NIC configuration file /etc/sysconfig/network-scripts/ifcfg-ens33 with the vi editor:
# change BOOTPROTO to static
BOOTPROTO=static
# add the IP, gateway, and DNS addresses; the gateway can be found with "netstat -rn"
IPADDR=192.168.56.100
NETMASK=255.255.255.0
GATEWAY=192.168.56.2
DNS1=8.8.8.8
k8s and Docker generate a large number of iptables rules at runtime; to keep them from getting mixed up with the system's own rules, simply turn the system services off.
# run the following on all nodes
# stop iptables (skip if the service does not exist)
systemctl stop iptables
systemctl disable iptables
# stop the firewall
systemctl stop firewalld
systemctl disable firewalld
k8s requires the clocks of all nodes in the cluster to be exactly in sync, so use chronyd to synchronize time from the network.
# run the following on all nodes
# start the chronyd service
systemctl start chronyd
# enable it at boot
systemctl enable chronyd
# check the current time
date
If the times still differ after this, check whether the firewall has really been turned off:
firewall-cmd --state   # check whether the server-side firewall is off
After shutting down the server-side firewall, restart chronyd on the clients with systemctl restart chronyd, then re-run the commands above.
# run the following on all nodes
# master node
hostnamectl set-hostname k8s-master
# worker node 1
hostnamectl set-hostname k8s-node1
# worker node 2
hostnamectl set-hostname k8s-node2
# afterwards, verify on every node with:
hostname
Run the following on all nodes to add a hosts entry pointing at the master node. Do not copy it verbatim; change the IP to your own.
# run on all nodes; change the IP to your own master's IP
echo "192.168.56.100 cluster-endpoint" >> /etc/hosts
The master hostname is also needed during initialization, so add the following entries to hosts; the hostnames must match the ones set in step 3 above.
# the following is done on the master node only
echo "192.168.56.100 k8s-master" >> /etc/hosts
echo "192.168.56.101 k8s-node1" >> /etc/hosts
echo "192.168.56.102 k8s-node2" >> /etc/hosts
SELinux (Security-Enhanced Linux) is a mechanism, or security module, that provides access-control security policies; in short, it is a feature that confines users to the policies and rules set by the system administrator. Here SELinux needs to be put into permissive mode (effectively disabling it).
# run the following two commands on all nodes
# disable temporarily (reverts after a reboot); setenforce Permissive has the same effect
setenforce 0
# disable permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
The swap partition is virtual memory: when physical memory runs low, the system frees part of it by spilling to disk for running programs to use. Turning swap off improves k8s performance, and kubelet refuses to start by default while swap is enabled.
# run the following two commands on all nodes
# disable swap temporarily (reverts after a reboot)
swapoff -a
# disable swap permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
There may be bridged IPv4/IPv6 traffic on the node's network interfaces; load the br_netfilter module and enable the bridge-nf-call sysctls so that bridged traffic also passes through iptables/ip6tables and can be filtered and accounted for correctly.
Copy the following into the shell and run it as one block:
# run on all nodes
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Apply the configuration above:
# run on all nodes
sudo sysctl --system
# a reboot at this point is a good idea to avoid surprises during the later installation: reboot
In Kubernetes, a Service can be backed by one of two proxy modes: one based on iptables and one based on ipvs. Comparing the two, ipvs performs noticeably better, but using it requires loading the ipvs kernel modules manually.
Install ipset and ipvsadm:
# run on all nodes
yum install ipset ipvsadm -y
Write the modules that need to be loaded into a script file:
# run on all nodes
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make the script executable:
# run on all nodes
chmod +x /etc/sysconfig/modules/ipvs.modules
Run the script:
# run on all nodes
/bin/bash /etc/sysconfig/modules/ipvs.modules
Check that the modules loaded successfully:
# run on all nodes
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
On success the output looks like this:
Docker was already covered in the Docker chapter, so the commands are simply listed here; copy and run them as they are.
One thing to note: the registry-mirrors address below should be your own Alibaba Cloud mirror address, found at: Aliyun console - Container Registry - Image tools - Image accelerator - accelerator address.
# run all of the following on all nodes
sudo yum remove docker*
sudo yum install -y yum-utils
# configure the docker yum repository
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# install a specific version
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
# start docker and enable it at boot
systemctl enable docker --now
# docker registry mirror configuration
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://8d0dd2e1.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
After installation, check with docker ps; a successful result looks like this:
Run all of the following commands on all nodes.
Downloading from the official k8s repositories is very slow from mainland China, so the Aliyun mirror is used here instead.
# run all of the following on all nodes
# configure the repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Notes:
enabled=1: enable this repository
gpgcheck=0: whether to verify package GPG signatures (1 = on, 0 = off)
repo_gpgcheck=0: whether to verify the signature and integrity of the repository metadata (1 = on, 0 = off)
--disableexcludes=kubernetes: ignore any exclude rules that would otherwise filter out packages from the kubernetes repository
# run all of the following on all nodes
yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
Add the following to the file /etc/sysconfig/kubelet:
# run all of the following on all nodes
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
Notes: KUBELET_CGROUP_ARGS="--cgroup-driver=systemd" makes kubelet use the systemd cgroup driver, matching the native.cgroupdriver=systemd setting configured for Docker above; KUBE_PROXY_MODE="ipvs" is intended to switch kube-proxy to the ipvs mode whose kernel modules were loaded earlier.
# run all of the following on all nodes
# start kubelet now and enable it at boot
systemctl enable --now kubelet
The following steps are performed on the master node only.
Before initializing the master we need to pull a number of component images. They live on registries that are hard to reach from mainland China, so we swap the official registry for the Aliyun mirror. The command below writes a script named images.sh to the current directory; running it pulls the required images into Docker.
# run on the master node only
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
Make the script executable and run it:
# run on the master node only
chmod +x ./images.sh && ./images.sh
Note: the configuration and installation below are master-only; run them on the master node and not on the worker nodes.
Use kubeadm init to bootstrap the master node:
# run on the master node only
# change apiserver-advertise-address to your own master IP; leave the rest as-is
# also pay attention to the cidr ranges: pod-network-cidr must not overlap with the network that apiserver-advertise-address belongs to, otherwise you will run into trouble later (after the cluster comes up, coredns stays at 0/1 Running)!
kubeadm init \
--apiserver-advertise-address=192.168.56.100 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Notes: --apiserver-advertise-address is the master's own IP; --control-plane-endpoint is the cluster-endpoint hosts alias added earlier; --image-repository points image pulls at the Aliyun mirror; --kubernetes-version pins the version installed above; --service-cidr is the virtual IP range used for Services; --pod-network-cidr is the range Pod IPs are assigned from and must not overlap with the host or service networks.
The installation takes a few minutes. If the output below appears, initialization succeeded, but you are not done yet: to actually use the cluster, continue with the steps that follow.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join cluster-endpoint:6443 --token crlcho.us71d78fmo7an88n \
--discovery-token-ca-cert-hash sha256:8805c2e3c270f9650056453a14d6bc34687afa1fa68a520725d715d999bc45a6 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster-endpoint:6443 --token crlcho.us71d78fmo7an88n \
--discovery-token-ca-cert-hash sha256:8805c2e3c270f9650056453a14d6bc34687afa1fa68a520725d715d999bc45a6
Continue following the prompts above. To use the cluster you still need to run the following commands.
Remember: the commands here were copied from the output of the init step; run the ones from your own output.
# run on the master node only; these commands come from the output of the successful init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
To add worker nodes to the cluster, run the following command on each worker node; it contains the cluster's join token.
Remember: this command was copied from the output of the init step; run the one from your own output.
Run the following on the worker nodes only.
# run on the worker nodes only; this is the last command printed by the successful init
kubeadm join cluster-endpoint:6443 --token crlcho.us71d78fmo7an88n \
--discovery-token-ca-cert-hash sha256:8805c2e3c270f9650056453a14d6bc34687afa1fa68a520725d715d999bc45a6
After joining, run kubectl get nodes on the master. Besides the master there are now two worker nodes, and all of them are NotReady because the network plugin has not been installed yet; that is exactly the next step.
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 106s v1.20.9
k8s-node1 NotReady <none> 21s v1.20.9
k8s-node2 NotReady <none> 17s v1.20.9
Note that the join token is only valid for 24 hours; after that you have to generate a new one. Run the following on the master node to create a new token:
# this command must be run on the master node to generate a token
kubeadm token create --print-join-command
The command below is the one to run if you are adding additional master nodes:
kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
--discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
--control-plane
The following steps are performed on the master node only.
There are several network plugins to choose from; if you like, you can pick one from the official add-on list and install it from there: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Here we install the calico plugin:
# download calico.yaml to the current directory (saving it locally as calico-etcd.yaml)
wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml -O calico-etcd.yaml
# the site is hosted abroad; if you cannot reach it, the full content of calico.yaml is included at the end of this article; copy it into the file yourself with vi calico-etcd.yaml
# apply the network plugin
kubectl apply -f calico-etcd.yaml
On success, output like the following is printed:
[root@k8s-master ~]# kubectl apply -f calico-etcd.yaml
configmap/calico-config created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
After a few more minutes, all the nodes show as Ready:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 3m58s v1.20.9
k8s-node1 Ready <none> 2m33s v1.20.9
k8s-node2 Ready <none> 2m29s v1.20.9
Use kubectl get pod -A to check the status of all pods. If every pod is Running and the READY column shows "1/1" or "2/2", the setup succeeded. If any pod is not Running, or shows 0/1, something in the installation went wrong and needs to be investigated; useful commands for that are:
kubectl describe pod -n xxx xxx
kubectl logs xxxx
kubectl get pod -o wide -n default
A successful deployment looks like this:
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-7854b85cf7-zvfqs 1/1 Running 0 2m45s
kube-system calico-node-682k4 1/1 Running 0 2m45s
kube-system calico-node-8pnlw 1/1 Running 0 2m45s
kube-system calico-node-l6928 1/1 Running 0 2m45s
kube-system coredns-5897cd56c4-rwfct 1/1 Running 0 5m35s
kube-system coredns-5897cd56c4-rzz7t 1/1 Running 0 5m35s
kube-system etcd-k8s-master 1/1 Running 0 5m48s
kube-system kube-apiserver-k8s-master 1/1 Running 0 5m48s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 5m48s
kube-system kube-proxy-bmbg4 1/1 Running 0 4m27s
kube-system kube-proxy-skzct 1/1 Running 0 5m36s
kube-system kube-proxy-vlj2t 1/1 Running 0 4m23s
kube-system kube-scheduler-k8s-master 1/1 Running 0 5m48s
With the steps above completed, the k8s cluster environment is up and running.
The components below are optional; install them according to your own needs.
About NFS
Overview
Network File System (NFS) is a kernel-based file system. NFS transfers data between a server and its clients over the network using the Remote Procedure Call (RPC) mechanism, letting different machines share directories: a client host can access files on the server over the network as if they were local storage. Kubernetes' NFS storage mounts an export from a pre-existing NFS server into a Pod for its containers to use. Unlike emptyDir, an NFS volume is only unmounted, not deleted, when the Pod object terminates. NFS is also a shared file service, so it supports multiple concurrent mounts.
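As a hedged illustration (the server address and path below match the setup that follows; treat them as placeholders for your own values), a Pod can mount such an NFS export directly as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
spec:
  containers:
    - name: app
      image: nginx:1.21
      volumeMounts:
        - name: shared
          mountPath: /usr/share/nginx/html   # the export shows up here inside the container
  volumes:
    - name: shared
      nfs:
        server: 192.168.56.100               # the NFS server configured below
        path: /nfs/data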
# run on all nodes
yum install -y nfs-utils
# run all of the following on the master
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# create the shared directory, then start the nfs services
mkdir -p /nfs/data
# run on the master
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
# make the export configuration take effect
exportfs -r
# check that the configuration took effect
exportfs
If you see the following, it worked:
[root@k8s-master ~]# exportfs
/nfs/data <world>
Configure a default storage class for dynamic provisioning.
Before applying the YAML below, kubectl get sc returns:
[root@k8s-master ~]# kubectl get sc
No resources found
After applying the YAML below, running kubectl get sc again returns:
[root@k8s-master ~]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 26s
Note: the YAML below contains two IP addresses; change both to your own master's IP. (Run this on the master node only.)
Create a file named sc.yaml, copy in the content below, and finish by running kubectl apply -f sc.yaml.
## creates a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
archiveOnDelete: "true" ## 删除pv的时候,pv的内容是否要备份
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
# resources:
# limits:
# cpu: 10m
# requests:
# cpu: 10m
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 192.168.56.100 ## 指定自己nfs服务器地址
- name: NFS_PATH
value: /nfs/data ## nfs服务器共享的目录
volumes:
- name: nfs-client-root
nfs:
server: 192.168.56.100 ## 指定自己nfs服务器地址
path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
Run kubectl get pod -A and you should see:
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-6695fc778b-7lb6h 1/1 Running 0 13s
kube-system calico-kube-controllers-7854b85cf7-zvfqs 1/1 Running 0 20m
kube-system calico-node-682k4 1/1 Running 0 20m
kube-system calico-node-8pnlw 1/1 Running 0 20m
kube-system calico-node-l6928 1/1 Running 0 20m
kube-system coredns-5897cd56c4-rwfct 1/1 Running 0 23m
kube-system coredns-5897cd56c4-rzz7t 1/1 Running 0 23m
kube-system etcd-k8s-master 1/1 Running 0 23m
kube-system kube-apiserver-k8s-master 1/1 Running 0 23m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 23m
kube-system kube-proxy-bmbg4 1/1 Running 0 22m
kube-system kube-proxy-skzct 1/1 Running 0 23m
kube-system kube-proxy-vlj2t 1/1 Running 0 22m
kube-system kube-scheduler-k8s-master 1/1 Running 0 23m
Make sure that every service is in the Running state!
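To sanity-check the dynamic provisioner, one option (the claim name is a placeholder) is to create a PVC without a storageClassName; because nfs-storage is the default class, a PV should be provisioned for it automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim               # placeholder; delete it after the check
spec:
  accessModes:
    - ReadWriteMany              # NFS supports shared read-write access
  resources:
    requests:
      storage: 200Mi             # a small request is enough for the test

Apply it with kubectl apply -f, then kubectl get pvc should show the claim as Bound and kubectl get pv should list a matching volume.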
Metrics Server is an aggregator of resource-usage data for the Kubernetes cluster. Some Kubernetes components, such as kubectl top and HPA, depend on the resource metrics API (metrics API) and cannot work without it. Since Kubernetes 1.8, resource usage metrics have been exposed through the Metrics API; from Kubernetes 1.11 Heapster was deprecated and is no longer used, and metrics-server replaced it.
- The Metrics API returns the current resource usage of a given node or pod (it does not keep historical data)
- The Metrics API path is /apis/metrics.k8s.io/
- Using the Metrics API requires metrics-server to be deployed successfully in the cluster
In short, it is a cluster metrics monitoring component.
Create a file named metrics.yaml with the content below and apply it with kubectl apply -f metrics.yaml.
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
hostNetwork: true
containers:
- name: metrics-server
# image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
image: oyymmw/metrics-server:0.3.7
imagePullPolicy: IfNotPresent
args:
- --cert-dir=/tmp
- --secure-port=4443
- --metric-resolution=30s
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
- --logtostderr
ports:
- name: main-port
containerPort: 4443
protocol: TCP
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- name: tmp-dir
mountPath: /tmp
nodeSelector:
kubernetes.io/os: linux
kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: main-port
Running kubectl get pod -A again now shows:
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-6695fc778b-7lb6h 1/1 Running 0 7m59s
kube-system calico-kube-controllers-7854b85cf7-zvfqs 1/1 Running 0 28m
kube-system calico-node-682k4 1/1 Running 0 28m
kube-system calico-node-8pnlw 1/1 Running 0 28m
kube-system calico-node-l6928 1/1 Running 0 28m
kube-system coredns-5897cd56c4-rwfct 1/1 Running 0 31m
kube-system coredns-5897cd56c4-rzz7t 1/1 Running 0 31m
kube-system etcd-k8s-master 1/1 Running 0 31m
kube-system kube-apiserver-k8s-master 1/1 Running 0 31m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 31m
kube-system kube-proxy-bmbg4 1/1 Running 0 30m
kube-system kube-proxy-skzct 1/1 Running 0 31m
kube-system kube-proxy-vlj2t 1/1 Running 0 30m
kube-system kube-scheduler-k8s-master 1/1 Running 0 31m
kube-system metrics-server-567d4c49cb-5qnmb 1/1 Running 0 30s
At this point you can run the commands below to view cluster resource usage:
[root@k8s-master ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master 295m 14% 1123Mi 65%
k8s-node1 135m 6% 834Mi 48%
k8s-node2 129m 6% 819Mi 47%
[root@k8s-master ~]# kubectl top pod -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default nfs-client-provisioner-6695fc778b-7lb6h 2m 6Mi
kube-system calico-kube-controllers-7854b85cf7-zvfqs 2m 6Mi
kube-system calico-node-682k4 29m 28Mi
kube-system calico-node-8pnlw 31m 29Mi
kube-system calico-node-l6928 37m 28Mi
kube-system coredns-5897cd56c4-rwfct 4m 8Mi
kube-system coredns-5897cd56c4-rzz7t 6m 20Mi
kube-system etcd-k8s-master 24m 40Mi
kube-system kube-apiserver-k8s-master 83m 432Mi
kube-system kube-controller-manager-k8s-master 19m 76Mi
kube-system kube-proxy-bmbg4 1m 30Mi
kube-system kube-proxy-skzct 1m 25Mi
kube-system kube-proxy-vlj2t 1m 23Mi
kube-system kube-scheduler-k8s-master 5m 35Mi
kube-system metrics-server-567d4c49cb-5qnmb 3m 13Mi
One of the best things about a k8s cluster is that it recovers by itself: if the machines in the cluster go down for some reason, the cluster comes back up automatically after they reboot, which keeps it highly available. Let's test this by rebooting all three servers:
reboot
A few minutes later the k8s machines have recovered and every node is Ready again; this is the self-recovery at work:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 15h v1.20.9
k8s-node1 Ready <none> 13h v1.20.9
k8s-node2 Ready <none> 11h v1.20.9
Cause:
The issue lies in the pod-network-cidr range chosen when running kubeadm init: make sure the IPs of your host/main network are not inside the subnet you pick.
In other words, if your network runs in 192.168.*.*, use 10.0.0.0/16; if your network runs in 10.0.*.*, use 192.168.0.0/16. The picture below is from the official documentation.
My problem was exactly this: my network runs on 192.168.56.100, but I had set pod-network-cidr to 192.168.0.0/16, which already contains my host network, so for the init I changed pod-network-cidr to 10.244.0.0/16.
Fix:
Reset the kubeadm environment:
# run the following, then bootstrap the cluster again with kubeadm (repeat the "install the three k8s packages" steps in this article)
kubeadm reset
rm -rf $HOME/.kube
The error above appears when running kubectl top pod.
Fix:
Add the following startup arguments (add them in the YAML; see the metrics-server YAML in this article):
args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
If that still does not work, switch to a different metrics-server version.
centos7.9安装k8s_java叶新东老师的博客
CrashLoopBackOff、Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout (ishells.cn)
k8s coredns 显示 0/1 Running 问题排查
云原生Java架构师的第一课K8s+Docker+KubeSphere+DevOps
The content below was downloaded from: wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
(Note: CALICO_IPV4POOL_CIDR in this manifest defaults to 192.168.0.0/16; it is generally expected to match the --pod-network-cidr passed to kubeadm init, 10.244.0.0/16 in this article, so adjust it if needed.)
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Nodes are watched to monitor for deletions.
- apiGroups: [""]
resources:
- nodes
verbs:
- watch
- list
- get
# Pods are queried to check for existence.
- apiGroups: [""]
resources:
- pods
verbs:
- get
# IPAM resources are manipulated when nodes are deleted.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
verbs:
- list
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
# Needs access to update clusterinformations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- clusterinformations
verbs:
- get
- create
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# These permissions are only requried for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
# These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Block affinities must also be watchable by confd for route aggregation.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch
# The Calico IPAM migration needs to get daemonsets. These permissions can be
# removed if not upgrading from an installation using host-local IPAM.
- apiGroups: ["apps"]
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: calico/cni:v3.10.4
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: calico/cni:v3.10.4
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: calico/pod2daemon-flexvol:v3.10.4
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: calico/node:v3.10.4
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
- -bird-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: calico/kube-controllers:v3.10.4
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml