K8S-5--云原生基础/k8s基础及组件/二进制部署k8s集群

一、云原生基础:

CNCF 云原生容器生态系统概要:
http://dockone.io/article/3006

13年,docker项目正式发布
14年,kubernetes项目正式发布
15年,Google,Redhat微软牵头成立CNCF(云原生计算基金会)
18年,CNCF成立三周年有了195个成员,19个基金会项目和11个孵化项目。

云原生的定义:
https://github.com/cncf/toc/blob/main/DEFINITION.md#%E4%B8%AD%E6
云原生技术有利于各组织在公有云、私有云和混合云等新型动态环境中,构建和运行可弹性扩展的应用。云原生的代表技术包括容器、服务网格、微服务、不可变基础设施和声明式API。
这些技术能够构建容错性好、易于管理和便于观察的松耦合系统。结合可靠的自动化手段,云原生技术使工程师能够轻松地对系统作出频繁和可预测的重大变更。
云原生计算基金会(CNCF)致力于培育和维护一个厂商中立的开源生态系统,来推广云原生技术。我们通过将最前沿的模式民主化,让这些创新为大众所用。

云原生技术栈:
容器:以docker为代表的容器运行技术。
服务网格:比如Service Mesh等。
微服务:在微服务体系结构中,一个项目是由多个松耦合且可独立部署的较小组件或服务组成。
不可变基础设施:不可变基础设施可以理解为一个应用运行所需要的基本运行需求,不可变最基本的就是指运行服务的服务器在完成部署后,就不再进行更改,比如镜像等。
声明式API:描述应用程序的期望运行状态,并由系统来决定如何创建并维持这个状态,例如声明一个pod,由k8s执行创建并维持副本(声明式示例见下)。
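下面是一个声明式API的最小示例(示意用,镜像与副本数为假设值):只声明期望状态(3个nginx副本),由k8s控制器负责创建并持续维持该状态。
#cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3                #期望副本数,控制器负责维持
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.20    #假设的镜像,可替换为私有仓库地址
EOF
#kubectl get deploy,pod -l app=nginx-demo   #查看控制器维持的副本状态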

云原生特征:
符合12因素应用:12因素(The Twelve-Factor App)是一套构建应用程序的方法论.
1、基准代码:一份基准代码,多份部署(用同一个代码库进行版本控制,并可进行多次部署)。
2、依赖:显式地声明和隔离相互之间的依赖。
3、配置:在环境中存储配置。
4、后端服务:把后端服务当作一种附加资源。
5、构建,发布,运行:对程序执行构建或打包,并严格分离构建和运行。
6、进程:以一个或多个无状态进程运行应用。
7、端口绑定:通过端口绑定提供服务。
8、并发:通过进程模型进行扩展。
9、易处理:快速地启动,优雅地终止,最大程度上保持健壮性。
10、开发环境与线上环境等价:尽可能的保持开发,预发布,线上环境相同。
11、日志:将所有运行中进程和后端服务的输出流按照时间顺序统一收集存储和展示。
12、管理进程:一次性管理进程(数据备份等)应该和正常的常驻进程使用同样的运行环境。
面向微服务架构.
自服务敏捷架构.
基于API的协作.
抗脆弱性.

云原生CNCF官网:
https://www.cncf.io/

云原生景观图:
https://landscape.cncf.io/
毕业项目16个,包括kubernetes、Prometheus、harbor、etcd、envoy、coredns、rook、helm等
孵化项目24个
沙箱项目多个

二、k8s基础及组件介绍

官网:https://kubernetes.io/zh/
github:https://github.com/kubernetes/kubernetes
k8s设计架构:(架构图、各节点)
https://www.kubernetes.org.cn/kubernetes%E8%AE%BE%E8%AE%A1%E6%9E%B6%E6%9E%84
k8s官方中文文档:
http://docs.kubernetes.org.cn/

REST-API(网络接口):
https://github.com/Arachni/arachni/wiki/REST-API

组件介绍:
官网组件介绍:https://kubernetes.io/zh/docs/reference/command-line-tools-reference/

总体流程:
运维通过命令行kubectl或dashboard与apiserver进行交互:查看数据时由apiserver到etcd读取;创建容器时由kube-scheduler完成调度;容器副本要启动的个数由kube-controller-manager维护。
kubelet接收apiserver下发的指令,并管理本节点容器的生命周期,包括创建容器、删除容器、初始化容器、对容器做探针检测等;容器创建好后被封装在pod里,pod是k8s调度运行的最小单元。
kube-proxy是网络组件,维护当前节点的网络规则(iptables或ipvs),用户访问时数据包经由kube-proxy转发到pod,获取url、图片等服务数据。
master上的kube-controller-manager、kube-scheduler不直接与node通信,而是通过apiserver连接到etcd查询数据,得知node/pod的状态等。
kubelet通过kube-apiserver监听到pod绑定信息后,获取对应的pod清单并下载镜像、启动容器等。

1、kube-apiserver

提供了资源操作的唯一入口,并提供认证、授权、访问控制、API注册和发现等机制;
访问流程:身份认证(token)→ 验证权限/鉴权(增删改查读写)→ 验证数据/指令/参数是否合法 → 执行操作 → 返回结果。
k8s API Server提供了k8s各类资源对象(pod,RC,Service等)的增删改查及watch等HTTP Rest接口,是整个系统的数据总线和数据中心。

apiserver目前在master监听两个端口,通过--insecure-port int监听一个非安全的127.0.0.1本地端口(默认为8080):
该端口用于接收HTTP请求;
该端口默认值为8080,可以通过API Server的启动参数"--insecure-port"的值来修改默认值;
默认的IP地址为"localhost",可以通过启动参数"--insecure-bind-address"的值来修改该IP地址;
非认证或未授权的HTTP请求通过该端口访问API Server(kube-controller-manager、kube-scheduler)。

通过参数--bind-address=192.168.7.101监听一个对外访问且安全(https)的端口(默认为6443):
该端口默认值为6443,可通过启动参数"--secure-port"的值来修改默认值;
默认IP地址为非本地(Non-Localhost)网络端口,通过启动参数"--bind-address"设置该值;
该端口用于接收客户端、dashboard等外部HTTPS请求;
用于基于Token文件或客户端证书及HTTP Basic的认证;用于基于策略的授权;

kubernetes API Server的功能与使用:
提供了集群管理的REST API接口(包括认证授权、数据校验以及集群状态变更);
提供其他模块之间的数据交互和通信的枢纽(其他模块通过API Server查询或修改数据,只有API Server才直接操作etcd);
是资源配额控制的入口;
拥有完备的集群安全机制。

#curl 127.0.0.1:8080/apis #分组api
#curl 127.0.0.1:8080/api/v1 #带具体版本号的api
#curl 127.0.0.1:8080/ #返回核心api列表
#curl 127.0.0.1:8080/version #api版本信息
#curl 127.0.0.1:8080/healthz/etcd #与etcd的心跳监测
#curl 127.0.0.1:8080/apis/autoscaling/v1 #api的详细信息
#curl 127.0.0.1:8080/metrics #指标数据
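安全端口6443需要携带认证信息才能访问,下面是两种访问方式的示意(token与证书路径为假设值,以实际环境为准):
#方式一:使用客户端证书认证(证书路径以二进制部署常见的/etc/kubernetes/ssl/为例):
#curl --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/admin.pem --key /etc/kubernetes/ssl/admin-key.pem https://192.168.150.151:6443/api/v1
#方式二:使用ServiceAccount token认证(-k仅测试时跳过证书校验,TOKEN可取自有权限的ServiceAccount,如后文dashboard的admin-user):
#curl -k -H "Authorization: Bearer ${TOKEN}" https://192.168.150.188:6443/version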

#启动脚本
#vim /etc/systemd/system/kube-apiserver.service
  --bind-address=192.168.150.151 \  #外部监听端口
  --insecure-bind-address=127.0.0.1 \   #本机监听地址

2、kube-controller-manager

负责维护集群的状态,比如故障检测、自动扩展、滚动更新等;

Controller Manager作为集群内部的管理控制中心,非安全默认端口10252,
负责集群内的Node、Pod副本、服务端点(Endpoint)、命名空间(Namespace)、服务账号(ServiceAccount)、资源定额(ResourceQuota)的管理,
当某个Node意外宕机时,Controller Manager会及时发现并执行自动化修复流程,确保集群始终处于预期的工作状态。

pod 高可用机制:
node monitor period:节点监视周期(如5s)
node monitor grace period:节点监视器宽限期(如40s)
pod eviction timeout:pod驱逐超时时间(如5min)
每5s检查一次节点状态;超过40s没有收到节点心跳则将该节点标记为不可达;不可达超过5min仍未恢复,则驱逐该节点上的pod并在其他节点重建(对应的启动参数示例见下)。
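上述三个时间对应kube-controller-manager的启动参数,示意如下(数值为默认值,非本环境实际配置):
#vim /etc/systemd/system/kube-controller-manager.service
   --node-monitor-period=5s \            #节点监视周期
   --node-monitor-grace-period=40s \     #标记节点不可达前的宽限期
   --pod-eviction-timeout=5m0s \         #节点不可达后驱逐pod的超时时间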

#启动脚本
#vim /etc/systemd/system/kube-controller-manager.service
   --master=http://127.0.0.1:8080 \ #调用kube-apiserver的本地非安全端口8080进行通信

3、kube-scheduler

Scheduler负责Pod调度,在整个系统中起“承上启下”作用,
承上:负责接收Controller Manager创建的新的Pod,为其选择一个合适的Node;
启下:Node上的kubelet接管Pod的生命周期。

通过调度算法为pod选择合适的node节点并将信息写入etcd。

策略:
LeastRequestedPriority:优先从备选节点列表中选择资源消耗最小的节点(CPU+内存)
CalculateNodeLabelPriority:优先选择含有指定Label的节点
BalancedResourceAllocation:优先从备选节点列表中选择各项资源使用率最均衡的节点。

创建pod的调度流程:
第一步:创建pod后,逐个进行调度
第二步:过滤掉资源不足的node节点(预选)
第三步:在剩余可用的node节点中按策略打分筛选(优选)
第四步:选中得分最高的节点,并将绑定信息写入etcd(可通过给节点打标签影响调度,示例见下)
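若需要干预调度结果,常见做法是给节点打标签并在pod中使用nodeSelector,示意如下(标签键值与镜像为假设值):
#kubectl label nodes 192.168.150.161 disktype=ssd   #给节点打标签
#cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd            #只会被调度到带有该标签的节点
  containers:
  - name: nginx
    image: nginx:1.20        #假设的镜像,可替换为私有仓库地址
EOF
#kubectl get pod nginx-ssd -o wide   #验证pod落在打了标签的节点上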

#启动脚本:
#vim /etc/systemd/system/kube-scheduler.service
   --master=http://127.0.0.1:8080 \ #调用kube-apiserver的本地非安全端口8080进行通信

4、kubelet

负责维护容器的生命周期,同时也负责Volume(CVI)和网络(CNI)的管理;
向master汇报node的状态信息;接受指令并在Pod中创建docker容器;准备Pod所需的数据卷;返回pod的运行状态;在node节点执行容器健康检查(探针示例见下)。
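kubelet的容器健康检查通过探针实现,下面是一个livenessProbe的最小示意(路径、端口与阈值均为示例值):
#cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-probe
spec:
  containers:
  - name: nginx
    image: nginx:1.20              #假设的镜像
    livenessProbe:
      httpGet:
        path: /                    #kubelet周期性请求该路径
        port: 80
      initialDelaySeconds: 5       #容器启动后等待5s再开始探测
      periodSeconds: 10            #每10s探测一次
      failureThreshold: 3          #连续失败3次则重启容器
EOF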

在kubernetes集群中,每个Node节点都会启动kubelet进程,用来处理Master节点下发到本节点的任务,管理Pod和其中的容器。
kubelet会在API Server上注册节点信息,定期向Master汇报节点资源使用情况,并通过cAdvisor(顾问)监控容器和节点资源,可以把kubelet理解成Server/Agent架构中的agent,kubelet是Node上的pod管家。

#启动脚本:
#vim /etc/systemd/system/kubelet.service
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \ #配置文件
#kubelet cAdvisor默认在所有接口监听4194端口的请求,以下iptables规则限制仅内网可访问
ExecStartPost=/sbin/iptables -A INPUT -s 10.0.0.0/8 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -s 172.16.0.0/12 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -s 192.168.0.0/16 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -p tcp --dport 4194 -j DROP
Restart=on-failure
RestartSec=5
PLEG:即Pod Lifecycle Event Generator,pleg会记录Pod生命周期中的各种事件,如容器的启动、终止等。
Container GC:Kubernetes垃圾回收(Garbage Collection)机制由kubelet完成,kubelet定期清理不再使用的容器和镜像,每分钟进行一次容器GC,每五分钟进行一次镜像GC。
Pod Eviction 是k8s一个特色功能,它在某些场景下应用,如节点NotReady、Node节点资源不足,把pod驱逐至其它Node节点。
Kube-controller-manager:周期性检查所有节点状态,当节点处于NotReady状态超过一段时间后,驱逐该节点上所有pod。
Kubelet:周期性检查本节点资源,当资源不足时,按照优先级驱逐部分 pod。
SyncLoop:控制pod生命周期的主循环,驱动整个控制循环的事件有:pod更新事件、pod生命周期变化、kubelet本身设置的执行周期、定时清理事件等。
handlepods:kubelet针对pod的处理程序,如创建pod、删除pod等(GC与驱逐相关参数示例见下)。
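镜像GC与资源不足驱逐的阈值可通过kubelet启动参数调整,示意如下(数值为常见示例,非本环境实际配置):
#vim /etc/systemd/system/kubelet.service
  --image-gc-high-threshold=85 \     #镜像占用磁盘超过85%时触发镜像GC
  --image-gc-low-threshold=80 \      #镜像GC回收到80%以下为止
  --eviction-hard=memory.available<200Mi,nodefs.available<10% \   #资源不足时的硬驱逐阈值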

5、kube-proxy

负责为Service提供cluster内部的服务发现和负载均衡;
kube-proxy运行在每个节点上,监听API Server中服务对象的变化,再通过管理IPtables或者IPVS规则来实现网络的转发。
kube-proxy和service的关系:kube-proxy --watch--> k8s-apiserver
kube-proxy监听着k8s-apiserver,一旦service资源发生变化(调用k8s-api修改service信息),kube-proxy就会生成对应的负载调度调整,这样就保证了service的最新状态。

https://kubernetes.io/zh/docs/concepts/services-networking/service/ #service介绍
https://kubernetes.io/zh/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/ #使用IPVS

#启动脚本:
#vim /etc/systemd/system/kube-proxy.service
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig\ #配置文件
  --proxy-mode=iptables #配置转发模式

Kube-Proxy不同版本支持的工作模式:
UserSpace:v1.2之前为默认模式,之后逐步淘汰
IPtables:v1.1开始支持,v1.2开始为默认
IPVS:v1.9引入,v1.11为正式版本,需要安装ipvsadm、ipset工具包和加载ip_vs内核模块

iptables

Kube-Proxy 监听 Kubernetes Master 增加和删除Service以及Endpoint的消息。对于每一个Service,Kube Proxy 创建相应的IPtables规则,并将发送到Service Cluster IP的流量转发到Service 后端提供服务的Pod的相应端口上。
注:
虽然可以通过Service的ClusterIP和服务端口访问到后端Pod提供的服务,但该ClusterIP是ping不通的,
其原因是ClusterIP只是iptables中的规则,并不对应任何网络设备(查看规则的方法见下例);
IPVS模式的ClusterIP是可以ping通的。
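可以在节点上直接查看kube-proxy为Service生成的iptables规则,示意如下(10.100.0.2为ClusterIP示例值):
#iptables -t nat -L KUBE-SERVICES -n | head    #查看Service入口链
#iptables -t nat -S | grep 10.100.0.2          #查看某个ClusterIP相关的转发规则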

IPVS:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md%23ipvs
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md%23ipvs

IPVS相对iptables效率会更高一些,使用IPVS模式需要在运行Kube-Proxy的节点上安装ipvsadm、ipset工具包并加载ip_vs内核模块;当Kube-Proxy以IPVS代理模式启动时,会验证节点上是否安装了IPVS模块,如果未安装,则Kube-Proxy将回退到iptables代理模式。
使用ipvs模式时,Kube-Proxy会监视Kubernetes Service对象和Endpoints,调用宿主机内核Netlink接口相应地创建IPVS规则,并定期与Kubernetes Service、Endpoints对象同步IPVS规则,以确保IPVS状态与期望一致。访问服务时,流量将被重定向到其中一个后端Pod。IPVS使用哈希表作为底层数据结构并在内核空间中工作,这意味着IPVS可以更快地重定向流量,并且在同步代理规则时具有更好的性能。此外,IPVS为负载均衡算法提供了更多选项,例如:rr(轮询调度)、lc(最小连接数)、dh(目标哈希)、sh(源哈希)、sed(最短期望延迟)、nq(不排队调度)等(节点上的验证方法见下例)。
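IPVS模式下可在节点上确认内核模块与规则,示意如下:
#lsmod | grep ip_vs            #确认ip_vs相关内核模块已加载
#apt install ipvsadm ipset -y  #安装管理工具
#ipvsadm -Ln                   #查看kube-proxy生成的IPVS虚拟服务及其后端RealServer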

配置方式:
配置使用IPVS及指定调度算法:
https://kubernetes.io/zh/docs/reference/config-api/kube-proxy-config.v1alpha1/#ClientConnectionConfiguration

#vim /var/lib/kube-proxy/kube-proxy-config.yaml
#开启service会话保持:
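以上两处的示意配置大致如下(字段来自kube-proxy配置API与Service API,数值为示例):
#kube-proxy-config.yaml中指定ipvs模式及调度算法:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
mode: "ipvs"                  #代理模式:ipvs或iptables
ipvs:
  scheduler: "rr"             #IPVS调度算法,如rr/lc/sh等

#service会话保持在Service定义的spec中开启:
spec:
  sessionAffinity: ClientIP             #按客户端IP做会话保持
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800             #会话保持超时时间,默认10800秒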

6、其他:

Container runtime:
负责镜像管理以及Pod和容器的真正运行(CRI);

Ingress Controller:
为服务提供外网入口

Heapster:
提供资源监控

Federation:
提供跨可用区的集群

Fluentd-elasticsearch:
提供集群日志采集、存储与查询

三、基于二进制部署k8s高可用集群

基于二进制和ansible实现自动化部署k8s:
https://github.com/easzlab/kubeasz

资源推荐:

系统及版本:ubuntu 20.04.3、kubeasz v3.1.0、k8s v1.21.0、docker v19.03.15
node:   48C 256G SSD/2T 10g/25g网卡
master:16c 16G 200G
etcd: 8c 16G 150G/SSD

单master环境:
1master、2+node、1etcd

多master环境:
2master、2+node、3etcd、2haproxy、1keepalived、2harbor、2ansible(和master复用)

实战环境:
3master、1kubeasz(与master1复用)、3node、3etcd、2haproxy、2harbor

实验最小环境:
1master+kubeasz、2node、1etcd、1haproxy+keepalived、1harbor
2c/4g/40g

k8s高可用反向代理(haproxy及keepalived生产应用):
http://blogs.studylinux.net/?p=4579

本次实验准备服务器及对应IP(均为2c/4g/40g):
master+kubeasz部署节点:192.168.150.151、152、153(kubeasz复用master1)
harbor:192.168.150.154、155
etcd:192.168.150.156、157、158
haproxy+keepalived:192.168.150.159、160,vip:192.168.150.188
node:192.168.150.161、162

1、基础环境准备:

#ubuntu时间同步:
# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# cat /etc/default/locale
LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8
# crontab -l
 */5 * * * * /usr/sbin/ntpdate time1.aliyun.com &>/dev/null && hwclock -w

#配置apt源
[root@k8s-harbor1 ~]#vim /etc/apt/sources.list
# 默认注释了源码镜像以提高 apt update 速度,如有需要可自行取消注释
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# 预发布软件源,不建议启用
# deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
[root@k8s-harbor1 ~]#apt update

#在所有master、etcd、harbor、node节点安装docker:
#tar xvf docker-19.03.15-binary-install.tar.gz -C /usr/local/src/
#cd /usr/local/src/
#sh docker-install.sh 
#验证:
#docker version
#docker info
#systemctl status docker

2、harbor:

#1、安装启动harbor:
[root@k8s-harbor1 ~]#mkdir /apps
[root@k8s-harbor1 ~]#tar xvf harbor-offline-installer-v2.3.2.tgz -C /apps/
#签发证书:
[root@k8s-harbor1 ~]#cd /apps/harbor/
[root@k8s-harbor1 harbor]#mkdir certs && cd certs
[root@k8s-harbor1 certs]#openssl genrsa -out harbor-ca.key
[root@k8s-harbor1 certs]#openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=harbor.magedu.net" -days 7120 -out harbor-ca.crt 
#修改配置文件:
[root@k8s-harbor1 certs]#cd ..
[root@k8s-harbor1 harbor]#cp harbor.yml.tmpl harbor.yml
[root@k8s-harbor1 harbor]#vim harbor.yml
hostname: harbor.magedu.net #与"/CN=harbor.magedu.net"一致
http:
  port: 80
https:
  port: 443
  certificate: /apps/harbor/certs/harbor-ca.crt #证书路径
  private_key: /apps/harbor/certs/harbor-ca.key #key路径
harbor_admin_password: 123456  #密码
database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
data_volume: /data
#执行harbor的自动安装和启动:
[root@k8s-harbor1 harbor]#./install.sh --with-trivy

#2、验证:
#客户端节点访问验证:
[root@k8s-harbor2 ~]#mkdir /etc/docker/certs.d/harbor.magedu.net -p #同步证书
[root@k8s-harbor1 ~]#scp /apps/harbor/certs/harbor-ca.crt 192.168.150.155:/etc/docker/certs.d/harbor.magedu.net
[root@k8s-harbor2 ~]#vim /etc/hosts #添加host文件解析
192.168.150.154 harbor.magedu.net
[root@k8s-harbor2 ~]#systemctl restart docker #重启docker
#测试登录harbor
[root@k8s-harbor2 ~]#docker login harbor.magedu.net
admin
123456
Login Succeeded
#浏览器登录验证:
设置运行本地浏览器访问:
C:\Windows\System32\drivers\etc\hosts
192.168.150.154  harbor.magedu.net
浏览器验证访问:https://harbor.magedu.net/

#3、测试
#测试push镜像到harbor:
[root@k8s-harbor2 ~]#docker pull alpine
[root@k8s-harbor2 ~]#docker tag alpine harbor.magedu.net/library/alpine
[root@k8s-harbor2 ~]#docker push harbor.magedu.net/library/alpine
#浏览器验证镜像

3、haproxy+keepalived:

[root@k8s-ha1 ~]#apt install keepalived haproxy -y
#keepalived:
[root@k8s-ha1 ~]#cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
[root@k8s-ha1 ~]#vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
  
global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 3
        #unicast_src_ip 192.168.150.151
        #unicast_peer {
        #       192.168.150.160
        #}

    authentication {
        auth_type PASS
        auth_pass 123abc
    }
    virtual_ipaddress {
         192.168.150.188 dev eth0 label eth0:1
         192.168.150.189 dev eth0 label eth0:1
         192.168.150.190 dev eth0 label eth0:1
    }
}

[root@k8s-ha1 ~]#systemctl restart keepalived
[root@k8s-ha1 ~]#systemctl enable keepalived
#验证vip绑定成功
[root@k8s-ha1 ~]#ifconfig eth0:1
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.150.188  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:34:31:13  txqueuelen 1000  (Ethernet)
[root@k8s-master1 ~]#ping 192.168.150.188 #master1验证
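#若部署第二台ha节点(如192.168.150.160)实现keepalived高可用,其配置与上面基本一致,主要差异示意如下:
[root@k8s-ha2 ~]#vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state BACKUP              #备节点
    interface eth0
    virtual_router_id 1       #必须与主节点一致
    priority 80               #低于主节点的优先级
    advert_int 3
    #authentication与virtual_ipaddress部分与主节点保持一致
}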
#haproxy:
#修改为允许绑定本机不存在的 IP
[root@k8s-ha1 ~]#sysctl -a| grep net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 0
[root@k8s-ha1 ~]#echo "1" > /proc/sys/net/ipv4/ip_nonlocal_bind
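#上面echo的方式重启后会失效,如需持久化可写入sysctl配置:
[root@k8s-ha1 ~]#echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
[root@k8s-ha1 ~]#sysctl -p   #立即生效并持久化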
[root@k8s-ha1 ~]#vim /etc/haproxy/haproxy.cfg 
listen k8s-6443
  bind 192.168.150.188:6443
  mode tcp
  server k8s1 192.168.150.151:6443 check inter 3s fall 3 rise 5
  #server k8s2 192.168.150.152:6443 check inter 3s fall 3 rise 5
  #server k8s3 192.168.150.153:6443 check inter 3s fall 3 rise 5
[root@k8s-ha1 ~]#systemctl restart haproxy
[root@k8s-ha1 ~]#systemctl enable haproxy
[root@k8s-ha1 ~]#ss -ntl
LISTEN    0     491      192.168.150.188:6443      0.0.0.0:*  

4、kubeasz部署:

#安装ansible
#安装较旧版本的ansible
[root@k8s-master1 ~]#apt install ansible -y 
#安装较新版本的ansible
[root@k8s-master1 ~]#apt install python3-pip git -y
[root@k8s-master1 ~]#pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/
[root@k8s-master1 ~]#ansible --version
ansible [core 2.11.6] 
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
  jinja version = 2.10.1
  libyaml = True

#配置免密钥登录
[root@k8s-master1 ~]#ssh-keygen 
[root@k8s-master1 ~]#apt install sshpass
[root@k8s-master1 ~]#vim scp.sh
#!/bin/bash
#目标主机列表
IP="
192.168.150.151
192.168.150.152
192.168.150.153
192.168.150.154
192.168.150.155
192.168.150.156
192.168.150.157
192.168.150.158
192.168.150.159
192.168.150.160 
192.168.150.161
192.168.150.162
192.168.150.163
" 
for node in ${IP};do
  sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
        echo "${node} 密钥copy完成"
  else 
        echo "${node} 密钥copy失败"
  fi
done 

#同步docker证书脚本
[root@k8s-master1 ~]#sh scp.sh

#部署节点下载部署项目及组件
使用master1作为部署节点
[root@k8s-master1 ~]#export release=3.1.0
[root@k8s-master1 ~]#curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
[root@k8s-master1 ~]#chmod a+x ./ezdown
[root@k8s-master1 ~]#vim ezdown
DOCKER_VER=19.03.15
KUBEASZ_VER=3.1.0
K8S_BIN_VER=v1.21.0
[root@k8s-master1 ~]#./ezdown -D # 使用工具脚本下载
[root@k8s-master1 ~]#ll /etc/kubeasz/down/
total 1245988
drwxr-xr-x  2 root root      4096 Nov 25 09:46 ./
drwxrwxr-x 11 root root       209 Nov 25 09:42 ../
-rw-------  1 root root 451969024 Nov 25 09:44 calico_v3.15.3.tar
-rw-------  1 root root  42592768 Nov 25 09:44 coredns_1.8.0.tar
-rw-------  1 root root 227933696 Nov 25 09:45 dashboard_v2.2.0.tar
-rw-r--r--  1 root root  69158342 Nov 25 09:40 docker-20.10.5.tgz
-rw-------  1 root root  58150912 Nov 25 09:45 flannel_v0.13.0-amd64.tar
-rw-------  1 root root 124833792 Nov 25 09:44 k8s-dns-node-cache_1.17.0.tar
-rw-------  1 root root 179014144 Nov 25 09:46 kubeasz_3.1.0.tar
-rw-------  1 root root  34566656 Nov 25 09:45 metrics-scraper_v1.0.6.tar
-rw-------  1 root root  41199616 Nov 25 09:45 metrics-server_v0.3.6.tar
-rw-------  1 root root  45063680 Nov 25 09:45 nfs-provisioner_v4.0.1.tar
-rw-------  1 root root    692736 Nov 25 09:45 pause.tar
-rw-------  1 root root    692736 Nov 25 09:45 pause_3.4.1.tar

#生成ansible的host和config.yml文件并配置:
[root@k8s-master1 ~]#cd /etc/kubeasz
[root@k8s-master1 kubeasz]#./ezctl new k8s-01
[root@k8s-master1 kubeasz]#vim clusters/k8s-01/hosts 
[root@k8s-master1 kubeasz]#vim clusters/k8s-01/config.yml 
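#其中clusters/k8s-01/hosts需要重点确认节点分组与网络参数,以本环境为例,关键内容大致如下(示意,完整文件以ezctl new生成的为准):
[etcd]
192.168.150.156

[kube_master]
192.168.150.151

[kube_node]
192.168.150.161
192.168.150.162

CLUSTER_NETWORK="calico"            #网络插件,本文使用calico
SERVICE_CIDR="10.100.0.0/16"        #service网段
CLUSTER_CIDR="10.200.0.0/16"        #pod网段
NODE_PORT_RANGE="30000-40000"       #NodePort端口范围(示意值)
CLUSTER_DNS_DOMAIN="magedu.local"   #集群域名后缀,后文coredns配置需与其保持一致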

#部署k8s集群
[root@k8s-master1 kubeasz]#./ezctl help 
Usage: ezctl COMMAND [args]
-------------------------------------------------------------------------------------
Cluster setups:
    list		             to list all of the managed clusters
    checkout    <cluster>            to switch default kubeconfig of the cluster
    new         <cluster>            to start a new k8s deploy with name 'cluster'
    setup       <cluster>  <step>    to setup a cluster, also supporting a step-by-step way
    start       <cluster>            to start all of the k8s services stopped by 'ezctl stop'
    stop        <cluster>            to stop all of the k8s services temporarily
    upgrade     <cluster>            to upgrade the k8s cluster
    destroy     <cluster>            to destroy the k8s cluster
    backup      <cluster>            to backup the cluster state (etcd snapshot)
    restore     <cluster>            to restore the cluster state from backups
    start-aio		             to quickly setup an all-in-one cluster with 'default' settings

Cluster ops:
    add-etcd    <cluster>  <ip>      to add a etcd-node to the etcd cluster
    add-master  <cluster>  <ip>      to add a master node to the k8s cluster
    add-node    <cluster>  <ip>      to add a work node to the k8s cluster
    del-etcd    <cluster>  <ip>      to delete a etcd-node from the etcd cluster
    del-master  <cluster>  <ip>      to delete a master node from the k8s cluster
    del-node    <cluster>  <ip>      to delete a work node from the k8s cluster

Extra operation:
    kcfg-adm    <cluster>  <args>    to manage client kubeconfig of the k8s cluster

Use "ezctl help " for more information about a given command.
[root@k8s-master1 kubeasz]#./ezctl help setup   #查看分步安装帮助信息
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings 
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
	  ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master

[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 01 #准备CA和基础系统设置
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 02 #部署etcd集群
#验证etcd节点服务状态:
[root@k8s-etcd1 ~]#export NODE_IPS="192.168.150.156"
[root@k8s-etcd1 ~]#for ip in ${NODE_IPS}; do
   ETCDCTL_API=3 etcdctl \
   --endpoints=https://${ip}:2379  \
   --cacert=/etc/kubernetes/ssl/ca.pem \
   --cert=/etc/kubernetes/ssl/etcd.pem \
   --key=/etc/kubernetes/ssl/etcd-key.pem \
   endpoint health; done
https://192.168.150.156:2379 is healthy: successfully committed proposal: took = 6.343588ms


#部署docker
#配置harbor客户端证书
[root@k8s-master1 ~]#mkdir /etc/docker/certs.d/harbor.magedu.net -p
[root@k8s-harbor1 ~]#scp /apps/harbor/certs/harbor-ca.crt 192.168.150.151:/etc/docker/certs.d/harbor.magedu.net/
[root@k8s-master1 ~]#echo "192.168.150.154 harbor.magedu.net" >> /etc/hosts #添加harbor域名解析
[root@k8s-master1 ~]#systemctl restart docker #重启docker
[root@k8s-master1 ~]#docker login harbor.magedu.net #验证登录harbor
admin/123456
#!!报错:Error response from daemon: Get https://harbor.magedu.net/v2/: dial tcp 192.168.150.154:443: connect: connection refused
#原因:harbor没启动,执行以下操作后重新登录显示成功
[root@k8s-harbor1 harbor]#./install.sh --with-trivy

#同步docker-harbor证书脚本
[root@k8s-master1 ~]#vim scp_harbor.sh 
#!/bin/bash
#目标主机列表
IP="
192.168.150.151
192.168.150.152
192.168.150.153
192.168.150.154
192.168.150.155
192.168.150.156
192.168.150.157
192.168.150.158
192.168.150.159
192.168.150.160
192.168.150.161
192.168.150.162
192.168.150.163
"
for node in ${IP};do
  sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
        echo "${node} 密钥copy完成"
        echo "${node} 密钥copy完成,准备初始化....."
           ssh ${node} "mkdir /etc/docker/certs.d/harbor.magedu.net -p"
           echo "Harbor 证书目录创建成功!"
           scp /etc/docker/certs.d/harbor.magedu.net/harbor-ca.crt ${node}:/etc/docker/certs.d/harbor.magedu.net/harbor-ca.crt
           echo "Harbor 证书拷贝成功!"
           ssh ${node} "echo "192.168.150.154 harbor.magedu.net">>/etc/hosts"
           echo "host文件拷贝完成"
           #scp -r /root/.docker ${node}:/root/
           #echo "Harbor认证文件拷贝完成!"
  else
        echo "${node} 密钥copy失败"
  fi
done
[root@k8s-master1 ~]#sh scp_harbor.sh 
#测试上传pause镜像至harbor仓库:
[root@k8s-master1 kubeasz]#docker pull easzlab/pause-amd64:3.4.1
[root@k8s-master1 kubeasz]#docker tag easzlab/pause-amd64:3.4.1 harbor.magedu.net/baseimages/pause-amd64:3.4.1
#harbor上创建项目baseimages
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/pause-amd64:3.4.1

[root@k8s-master1 kubeasz]#vim clusters/k8s-01/config.yml 
48 # [containerd]基础容器镜像
49 SANDBOX_IMAGE: "harbor.magedu.net/baseimages/pause-amd64:3.4.1"

#安装docker
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 03 
#node节点验证docker:
[root@k8s-node2 ~]#docker version

#部署master节点
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 04  
#验证服务器
[root@k8s-master1 ~]#kubectl get node
NAME              STATUS                     ROLES    AGE     VERSION
192.168.150.151   Ready,SchedulingDisabled   master   7m29s   v1.21.0

#部署node节点
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 05  
#验证服务器
[root@k8s-master1 ~]#kubectl get node
NAME              STATUS                     ROLES    AGE     VERSION
192.168.150.151   Ready,SchedulingDisabled   master   9m30s   v1.21.0
192.168.150.161   Ready                      node     32s     v1.21.0
192.168.150.162   Ready                      node     32s     v1.21.0

#部署网络服务
[root@k8s-master1 kubeasz]#grep image roles/calico/templates/calico-v3.15.yaml.j2 
          image: calico/cni:v3.15.3
          image: calico/pod2daemon-flexvol:v3.15.3
          image: calico/node:v3.15.3
          image: calico/kube-controllers:v3.15.3
[root@k8s-master1 kubeasz]#docker pull calico/cni:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/cni:v3.15.3 harbor.magedu.net/baseimages/calico-cni:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-cni:v3.15.3

[root@k8s-master1 kubeasz]#docker pull calico/pod2daemon-flexvol:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/pod2daemon-flexvol:v3.15.3 harbor.magedu.net/baseimages/calico-pod2daemon-flexvol:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-pod2daemon-flexvol:v3.15.3

[root@k8s-master1 kubeasz]#docker pull calico/node:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/node:v3.15.3 harbor.magedu.net/baseimages/calico-node:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-node:v3.15.3

[root@k8s-master1 kubeasz]#docker pull calico/kube-controllers:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/kube-controllers:v3.15.3 harbor.magedu.net/baseimages/calico-kube-controllers:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-kube-controllers:v3.15.3

#修改镜像地址
[root@k8s-master1 kubeasz]#vim roles/calico/templates/calico-v3.15.yaml.j2 
[root@k8s-master1 kubeasz]#grep image roles/calico/templates/calico-v3.15.yaml.j2 
          image: harbor.magedu.net/baseimages/calico-cni:v3.15.3
          image: harbor.magedu.net/baseimages/calico-pod2daemon-flexvol:v3.15.3
          image: harbor.magedu.net/baseimages/calico-node:v3.15.3
          image: harbor.magedu.net/baseimages/calico-kube-controllers:v3.15.3

[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 06  
#!!报错:fatal: [192.168.150.162]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/usr/local/bin/kubectl get pod -n kube-system -o wide|grep 'flannel'|grep ' 192.168.150.162 '|awk '{print $3}'", "delta": "0:00:00.094963", "end": "2021-12-15 16:53:27.424853", "msg": "", "rc": 0, "start": "2021-12-15 16:53:27.329890",  "stderr": "", "stderr_lines": [], "stdout": "Init:0/1", "stdout_lines": ["Init:0/1"]}
#已解决:docker images和roles/calico/templates/calico-v3.15.yaml.j2镜像地址不一致,修改后执行成功

#验证
[root@k8s-master1 kubeasz]#calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.150.161 | node-to-node mesh | up    | 12:15:03 | Established |
| 192.168.150.162 | node-to-node mesh | up    | 12:15:04 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

#验证node节点路由
[root@k8s-node1 ~]#route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.150.2   0.0.0.0         UG    0      0        0 eth0
10.200.36.64    0.0.0.0         255.255.255.192 U     0      0        0 *
10.200.159.128  192.168.150.151 255.255.255.192 UG    0      0        0 tunl0
10.200.169.128  192.168.150.162 255.255.255.192 UG    0      0        0 tunl0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

#创建容器测试网络通信
[root@k8s-master1 ~]#docker pull alpine
[root@k8s-master1 ~]#docker tag alpine  harbor.magedu.net/baseimages/alpine
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/alpine
[root@k8s-master1 ~]#kubectl run net-test1 --image=harbor.magedu.net/baseimages/alpine sleep 360000
pod/net-test1 created
[root@k8s-master1 ~]#kubectl run net-test2 --image=harbor.magedu.net/baseimages/alpine sleep 360000
pod/net-test2 created
[root@k8s-master1 ~]#kubectl run net-test3 --image=harbor.magedu.net/baseimages/alpine sleep 360000
pod/net-test3 created
[root@k8s-master1 ~]#kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          15s   10.200.36.65     192.168.150.161   <none>           <none>
net-test2   1/1     Running   0          9s    10.200.36.66     192.168.150.161   <none>           <none>
net-test3   1/1     Running   0          3s    10.200.169.129   192.168.150.162   <none>           <none>
[root@k8s-master1 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.200.36.66
PING 10.200.36.66 (10.200.36.66): 56 data bytes
64 bytes from 10.200.36.66: seq=0 ttl=63 time=0.182 ms
64 bytes from 10.200.36.66: seq=1 ttl=63 time=0.101 ms
/ # ping 10.200.169.129
PING 10.200.169.129 (10.200.169.129): 56 data bytes
64 bytes from 10.200.169.129: seq=0 ttl=62 time=0.423 ms
64 bytes from 10.200.169.129: seq=1 ttl=62 time=0.632 ms

#集群管理
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 07 #可选步骤:安装coredns、dashboard等集群插件(cluster-addon),本文后面手动部署这些组件
#集群管理主要是添加master、添加node、删除master与删除node等节点管理及监控
#当前集群状态:
[root@k8s-master1 ~]#kubectl get node
NAME              STATUS                     ROLES    AGE   VERSION
192.168.150.151   Ready,SchedulingDisabled   master   13h   v1.21.0
192.168.150.161   Ready                      node     13h   v1.21.0
192.168.150.162   Ready                      node     13h   v1.21.0

#配置新节点的免密钥登录及同步docker-harbor证书
[root@k8s-master1 ~]#sh scp.sh 
[root@k8s-master1 ~]#sh scp_harbor.sh 

#添加及删除master
[root@k8s-master1 kubeasz]#./ezctl add-master k8s-01 192.168.150.152
[root@k8s-master1 kubeasz]#./ezctl del-master k8s-01 192.168.150.152
#node节点自动添加master
[root@k8s-node2 ~]#cat /etc/kube-lb/conf/kube-lb.conf
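#kube-lb是kubeasz在每个node节点部署的本地负载均衡(nginx),将本机对127.0.0.1:6443的apiserver请求转发到所有master;添加/删除master后该文件会自动更新,内容大致类似如下(示意,以实际生成的文件为准):
stream {
    upstream backend {
        server 192.168.150.151:6443 max_fails=2 fail_timeout=3s;
        server 192.168.150.152:6443 max_fails=2 fail_timeout=3s;
    }
    server {
        listen 127.0.0.1:6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}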

#添加及删除node
[root@k8s-master1 kubeasz]#./ezctl add-node k8s-01 192.168.150.163
[root@k8s-master1 kubeasz]#./ezctl del-node k8s-01 192.168.150.163

#添加及删除etcd
[root@k8s-master1 kubeasz]#./ezctl add-etcd k8s-01 192.168.150.158
[root@k8s-master1 kubeasz]#./ezctl del-etcd k8s-01 192.168.150.158

#验证集群状态
[root@k8s-master1 kubeasz]#kubectl get node
NAME              STATUS                     ROLES    AGE     VERSION
192.168.150.151   Ready,SchedulingDisabled   master   14h     v1.21.0
192.168.150.152   Ready,SchedulingDisabled   master   7m22s   v1.21.0
192.168.150.161   Ready                      node     14h     v1.21.0
192.168.150.162   Ready                      node     14h     v1.21.0
192.168.150.163   Ready                      node     2m3s    v1.21.0
#验证网络组件calico状态
[root@k8s-master1 ~]#calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.150.152 | node-to-node mesh | up    | 03:20:49 | Established |
| 192.168.150.161 | node-to-node mesh | up    | 03:21:33 | Established |
| 192.168.150.162 | node-to-node mesh | up    | 03:22:24 | Established |
| 192.168.150.163 | node-to-node mesh | up    | 03:23:28 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
#验证node节点路由
[root@k8s-node3 ~]#route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.150.2   0.0.0.0         UG    0      0        0 eth0
10.200.36.64    192.168.150.161 255.255.255.192 UG    0      0        0 tunl0
10.200.107.192  0.0.0.0         255.255.255.192 U     0      0        0 *
10.200.159.128  192.168.150.151 255.255.255.192 UG    0      0        0 tunl0
10.200.169.128  192.168.150.162 255.255.255.192 UG    0      0        0 tunl0
10.200.224.0    192.168.150.152 255.255.255.192 UG    0      0        0 tunl0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

#装错了怎么重装
[root@k8s-master1 kubeasz]#./ezctl destroy k8s-01

5、dns:

自定义 DNS 服务:
https://kubernetes.io/zh/docs/tasks/administer-cluster/dns-custom-nameservers/
负责为整个集群提供DNS服务,从而实现服务之间的访问。
用于解析k8s集群中service name所对应的IP地址。
sky-dns:第一代1.2之前
kube-dns:1.18之后不支持
coredns:1.18及后续版本使用

coredns:

https://github.com/coredns/coredns #版本较旧
https://github.com/coredns/deployment/tree/master/kubernetes #历史版本+最新版本

#下载k8s相关组件位置:
https://github.com/kubernetes/kubernetes -> releases -> CHANGELOG -> Downloads for v1.21.0:
kubernetes.tar.gz
kubernetes-client-linux-amd64.tar.gz
kubernetes-node-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz

[root@k8s-master1 ~]#tar xf kubernetes-client-linux-amd64.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#tar xf kubernetes-server-linux-amd64.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#tar xf kubernetes.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#tar xf kubernetes-node-linux-amd64.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#cd /usr/local/src/
[root@k8s-master1 src]#ll kubernetes/cluster/addons/dashboard/
total 24
drwxr-xr-x  2 root root   81 Dec  9  2020 ./
drwxr-xr-x 20 root root 4096 Dec  9  2020 ../
-rw-r--r--  1 root root  242 Dec  9  2020 MAINTAINERS.md
-rw-r--r--  1 root root  147 Dec  9  2020 OWNERS
-rw-r--r--  1 root root  281 Dec  9  2020 README.md
-rw-r--r--  1 root root 6878 Dec  9  2020 dashboard.yaml
[root@k8s-master1 src]#ll kubernetes/cluster/addons/dns
total 8
drwxr-xr-x  5 root root   71 Dec  9  2020 ./
drwxr-xr-x 20 root root 4096 Dec  9  2020 ../
-rw-r--r--  1 root root  129 Dec  9  2020 OWNERS
drwxr-xr-x  2 root root  147 Dec  9  2020 coredns/
drwxr-xr-x  2 root root  167 Dec  9  2020 kube-dns/
drwxr-xr-x  2 root root   48 Dec  9  2020 nodelocaldns/
[root@k8s-master1 src]#cd kubernetes/cluster/addons/dns/coredns/
[root@k8s-master1 coredns]#cp coredns.yaml.base /root/
[root@k8s-master1 coredns]#cd
[root@k8s-master1 ~]#mv coredns.yaml.base coredns-n56.yaml

[root@k8s-master1 ~]#vim coredns-n56.yaml 
70  kubernetes magedu.local in-addr.arpa ip6.arpa  #“__DNS__DOMAIN__”必须与文件/etc/kubeasz/clusters/k8s-01/hosts中的CLUSTER_DNS_DOMAIN="magedu.local"保持一致
135   image: harbor.magedu.net/baseimages/coredns:v1.8.3
139    memory: 256Mi  #最大内存,最好大一点
205   clusterIP: 10.100.0.2  #文件/etc/kubeasz/clusters/k8s-01/hosts中的SERVICE_CIDR="10.100.0.0/16",则该值为10.100.0.2
#配置Prometheus的端口暴露:将kube-dns这个Service改为NodePort类型,并为metrics端口(9153)固定nodePort(53端口保持默认):
spec:
  type: NodePort
  ports:
  - name: metrics
    port: 9153
    targetPort: 9153
    nodePort: 30009

#验证为10.100.0.2
[root@k8s-master1 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # cat /etc/resolv.conf 
nameserver 10.100.0.2
search default.svc.magedu.local svc.magedu.local magedu.local
options ndots:5

#查询yaml文件怎么写
[root@k8s-master1 ~]#kubectl explain service.spec.ports

#找机器把镜像文件下载下来
#docker pull k8s.gcr.io/coredns/coredns:v1.8.3
#docker save k8s.gcr.io/coredns/coredns:v1.8.3 > coredns-image-v1.8.3.tar.gz
#上传到master节点
[root@k8s-master1 ~]#docker load -i coredns-image-v1.8.3.tar.gz 
85c53e1bd74e: Loading layer [==================================================>]  43.29MB/43.29MB
Loaded image: k8s.gcr.io/coredns/coredns:v1.8.3
[root@k8s-master1 ~]#docker tag k8s.gcr.io/coredns/coredns:v1.8.3 harbor.magedu.net/baseimages/coredns:v1.8.3
#上传到harbor镜像仓库
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/coredns:v1.8.3
#修改coredns-n56.yaml的原镜像地址为harbor地址即可
135   image: harbor.magedu.net/baseimages/coredns:v1.8.3

#错误示范
[root@k8s-master1 ~]#kubectl apply -f coredns-n56.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@k8s-master1 ~]#kubectl get pod -A #起不来可能是版本不兼容,官方的yaml文件是1.8.0,但镜像是1.8.3的
kube-system   coredns-7578b5687d-xbzsp                  0/1     Running            0          2m47s
[root@k8s-master1 ~]#kubectl delete -f coredns-n56.yaml 

#正确示范
#改用与镜像v1.8.3匹配的coredns配置文件
[root@k8s-master1 ~]#kubectl apply -f coredns-n56.yaml 
serviceaccount/coredns configured
clusterrole.rbac.authorization.k8s.io/system:coredns configured
clusterrolebinding.rbac.authorization.k8s.io/system:coredns configured
configmap/coredns configured
deployment.apps/coredns configured
service/kube-dns configured
[root@k8s-master1 ~]#kubectl get pod -A
kube-system   coredns-6b68dbb944-9cpxp                  1/1     Running            0          13s

#测试外网通信
[root@k8s-master1 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping www.baidu.com
PING www.baidu.com (103.235.46.39): 56 data bytes
64 bytes from 103.235.46.39: seq=0 ttl=127 time=176.776 ms
64 bytes from 103.235.46.39: seq=1 ttl=127 time=173.360 ms
/ # ping kubernetes
PING kubernetes (10.100.0.1): 56 data bytes
64 bytes from 10.100.0.1: seq=0 ttl=64 time=1.079 ms
64 bytes from 10.100.0.1: seq=1 ttl=64 time=0.104 ms

[root@k8s-master1 ~]#kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                    AGE
default       kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP                                    37h
kube-system   kube-dns     NodePort    10.100.0.2   <none>        53:54724/UDP,53:54724/TCP,9153:30009/TCP   25m

#验证coredns指标数据:
http://192.168.150.161:30009/metrics 
http://192.168.150.162:30009/metrics 
http://192.168.150.163:30009/metrics 
coredns.yaml的主要配置参数(完整的Corefile示例见下):
errors: #错误日志输出到stdout。
health: #CoreDNS的运行状况报告为http://localhost:8080/health。
cache: #启用coredns缓存。
reload: #配置自动重新加载配置文件,如果修改了ConfigMap的配置,会在两分钟后生效。
loadbalance: #一个域名有多个记录时会被轮询解析。
cache 30 #缓存时间30秒。
kubernetes: #CoreDNS将根据指定的service domain名称在Kubernetes SVC中进行域名解析。
forward: #不属于Kubernetes集群域内的域名查询都转发到指定的服务器(/etc/resolv.conf)。
prometheus: #CoreDNS的指标数据可以配置Prometheus访问coredns service的9153端口(/metrics)进行收集。
ready: #当coredns服务启动完成后会进行状态监测,URL路径/ready返回200状态码,否则返回报错。
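这些插件参数组合在coredns的ConfigMap(Corefile)中,以本环境为例大致如下(示意,具体以coredns-n56.yaml实际内容为准):
Corefile: |
  .:53 {
      errors
      health {
          lameduck 5s
      }
      ready
      kubernetes magedu.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
          ttl 30
      }
      prometheus :9153
      forward . /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }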

6、dashboard:

git地址:https://github.com/kubernetes/dashboard
官网:https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/
Dashboard是基于网页的Kubernetes用户界面,可获取运行在集群中的应用的概览信息,也可以创建或者修改Kubernetes资源(如Deployment、Job、DaemonSet等),还可以对Deployment实现弹性伸缩、发起滚动升级、重启Pod或者使用向导创建新的应用。

确保版本兼容k8s:
https://github.com/kubernetes/dashboard -> releases

#两个镜像
kubernetesui/dashboard:v2.4.0
kubernetesui/metrics-scraper:v1.0.7
[root@k8s-master1 ~]#wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
[root@k8s-master1 ~]#docker pull kubernetesui/dashboard:v2.4.0
[root@k8s-master1 ~]#docker tag kubernetesui/dashboard:v2.4.0 harbor.magedu.net/baseimages/dashboard:v2.4.0
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/dashboard:v2.4.0
[root@k8s-master1 ~]#docker pull kubernetesui/metrics-scraper:v1.0.7
[root@k8s-master1 ~]#docker tag kubernetesui/metrics-scraper:v1.0.7 harbor.magedu.net/baseimages/metrics-scraper:v1.0.7
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/metrics-scraper:v1.0.7

[root@k8s-master1 ~]#vim recommended.yaml
190    image: harbor.magedu.net/baseimages/dashboard:v2.4.0
274    image: harbor.magedu.net/baseimages/metrics-scraper:v1.0.7
#暴露端口:将kubernetes-dashboard的Service改为NodePort类型并固定nodePort
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30002

[root@k8s-master1 ~]#kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

#访问dashboard网页:
https://192.168.150.161:30002/

#创建超级管理员权限获取token登录dashboard:
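#admin-user.yml的内容大致如下(dashboard常用的管理员账号定义:创建ServiceAccount并绑定cluster-admin集群角色):
[root@k8s-master1 ~]#vim admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard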
[root@k8s-master1 ~]#kubectl apply -f admin-user.yml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@k8s-master1 ~]#kubectl get secrets -A |grep admin
kubernetes-dashboard   admin-user-token-2w2wj                           kubernetes.io/service-account-token   3      61s
[root@k8s-master1 ~]#kubectl describe secrets admin-user-token-2w2wj -n kubernetes-dashboard
Name:         admin-user-token-2w2wj
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 9f60fa54-916c-49fb-a00f-96303eb3af88

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlpGeHU5ZDVtbGFMTG03bkl1UGxYaVVtWjBtcXgtTVA0Z0NLT1c3UWVvX0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJ3MndqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5ZjYwZmE1NC05MTZjLTQ5ZmItYTAwZi05NjMwM2ViM2FmODgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.SKrxRP5IZ21nwsIML3Ay8DEnewJiHihuxndqE1Z3-Dmx7Rk4r6uD-qH6vspCsbZkD87T75FOOdbSIu-LdBwUR9RSjj_ck2Yt8A_7zloWcBMg3rQ3zKcuGcf1vQpu8OpwNtXmHA3u0BYLXcBP4jk1VWBOXJrQbZ47lx-OSRjbc-W2MAmaP9fNvZZseg_ckzKWfpVFJEr0l4PE2IeIG37RNeJOMzDGUJlCg2zMmjXcbYTvuZdWl9c0Zi1RdXP4AA4IaH9ZVvURIAr39xzkKLqqDh3AVM_duqg-T7HNKOildRvx03scBpk87mh5IFkO1ImeRQfGy2kGfsfI3p4gp1ef2w
#使用该token登录
#设置token登录会话保持时间:
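#会话保持(token过期)时间通常通过dashboard容器的--token-ttl参数设置,在yaml的dashboard容器args中大致如下(3600秒为示例值):
    spec:
      containers:
        - name: kubernetes-dashboard
          image: harbor.magedu.net/baseimages/dashboard:v2.4.0
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=3600        #token过期/会话保持时间,单位秒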
[root@k8s-master1 ~]#kubectl apply -f dashboard-v2.3.1.yaml

#使用kubeconfig登录
[root@k8s-master1 ~]#cp .kube/config /opt/
[root@k8s-master1 ~]#vim /opt/config 
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlpGeHU5ZDVtbGFMTG03bkl1UGxYaVVtWjBtcXgtTVA0Z0NLT1c3UWVvX0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJ3MndqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5ZjYwZmE1NC05MTZjLTQ5ZmItYTAwZi05NjMwM2ViM2FmODgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.SKrxRP5IZ21nwsIML3Ay8DEnewJiHihuxndqE1Z3-Dmx7Rk4r6uD-qH6vspCsbZkD87T75FOOdbSIu-LdBwUR9RSjj_ck2Yt8A_7zloWcBMg3rQ3zKcuGcf1vQpu8OpwNtXmHA3u0BYLXcBP4jk1VWBOXJrQbZ47lx-OSRjbc-W2MAmaP9fNvZZseg_ckzKWfpVFJEr0l4PE2IeIG37RNeJOMzDGUJlCg2zMmjXcbYTvuZdWl9c0Zi1RdXP4AA4IaH9ZVvURIAr39xzkKLqqDh3AVM_duqg-T7HNKOildRvx03scBpk87mh5IFkO1ImeRQfGy2kGfsfI3p4gp1ef2w
[root@k8s-master1 ~]#mv /opt/config kubeconfig
[root@k8s-master1 ~]#sz kubeconfig
#登录dashboard时选择该文件登录
