Kubernetes
Kubernetes is a container cluster management system open-sourced by Google in 2014, commonly abbreviated as K8S.
Evolution of container orchestration:
1. mesos + zookeeper + marathon architecture
2. docker + swarm container cluster management
3. kubernetes open-source framework -> secondary development via its API; the name means "helmsman" (written in Go)
K8S handles the deployment, scaling, and management of containerized applications.
K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and a series of related features.
Deployment management: stateless and stateful applications, handled by controllers.
Service discovery: etcd, which acts as a distributed database (automatic service discovery). (In production, an etcd cluster starts at a minimum of three nodes.)
The goal of Kubernetes is to make deploying containerized applications simple and efficient.
Official website: http://www.kubernetes.io
Self-healing
Restarts failed containers when a node goes down, and replaces and redeploys containers to maintain the expected number of replicas; kills containers that fail health checks and withholds client requests from containers that are not yet ready, ensuring online services are not interrupted.
Elastic scaling
Quickly scales application instances up or down via commands, the UI, or automatically based on CPU usage, ensuring high availability under peak business load; reclaims resources during off-peak periods so services run at minimal cost.
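For example, with kubectl (the deployment name "web" is illustrative), manual and automatic scaling look like this:
kubectl scale deployment web --replicas=5                          #scale manually to 5 instances
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80 #scale automatically on CPU usage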
Automated rollouts and rollbacks
K8S updates applications with a rolling update strategy, updating one Pod at a time rather than deleting all Pods simultaneously; if a problem appears during the update, the change is rolled back, ensuring the upgrade does not affect the business.
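As a sketch (the deployment and image names are illustrative), a rolling update and rollback with kubectl look like this:
kubectl set image deployment/web nginx=nginx:1.17   #trigger a rolling update, one batch of Pods at a time
kubectl rollout status deployment/web               #watch the rollout progress
kubectl rollout undo deployment/web                 #roll back to the previous revision if something breaks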
Service discovery and load balancing
K8S gives a group of containers a unified access entry point (one entry for administrators, one for clients: an internal IP address and a DNS name) and load-balances across all associated containers, so users never have to worry about container IPs.
Secret and configuration management
Manages secrets and application configuration without exposing sensitive data inside images, improving the security of sensitive data. Commonly used configuration can also be stored in K8S for applications to consume.
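A minimal sketch with kubectl (names and values are illustrative):
kubectl create secret generic db-pass --from-literal=password=S3cret   #sensitive data, kept out of the image
kubectl create configmap app-conf --from-literal=log_level=info        #ordinary shared configuration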
Storage orchestration
Mounts external storage systems, whether local storage, public cloud (such as AWS), or network storage (such as NFS, GlusterFS, or Ceph), as part of the cluster's resources, greatly improving storage flexibility.
Batch processing
Provides one-off tasks and scheduled tasks, covering batch data processing and analysis scenarios.
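A minimal CronJob sketch (the schedule and image are illustrative; batch/v1beta1 matches the v1.12 cluster built later in this walkthrough):
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"        #run every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello"]
          restartPolicy: OnFailure
EOF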
Three machines: a master control node plus node1 and node2.
kubectl: the command-line tool administrators use to operate the cluster.
Two nodes: serve the business workloads.
Two entry points: one for client access, one for administrator access.
Two ways to manage resources: the kubectl command line and YAML files.
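As a sketch of the two styles (the nginx example is illustrative), the same resource can be created imperatively with kubectl or declaratively from a YAML file:
kubectl create deployment nginx --image=nginx   #imperative: everything on the command line
kubectl apply -f nginx-deployment.yaml          #declarative: nginx-deployment.yaml is a hypothetical manifest file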
Pod
The smallest deployable unit
A collection of one or more containers
Containers in a Pod share a network namespace
Pods are ephemeral
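A minimal Pod manifest as a sketch (the names and image are illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx      #containers listed here share the Pod's network namespace
    image: nginx
EOF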
Controllers
ReplicaSet: ensures the expected number of Pod replicas
Deployment: stateless application deployment
StatefulSet: stateful application deployment
DaemonSet: ensures every Node runs a copy of the same Pod
Job: one-off tasks
CronJob: scheduled tasks
Controllers are higher-level objects that deploy and manage Pods.
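For example, a minimal Deployment sketch (names and image are illustrative); the controller keeps three replicas of the Pod template running:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3                 #the controller maintains this replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF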
Service
Provides a stable access point so Pods are not "lost" as they are replaced
Defines an access policy for a group of Pods
Label: a tag attached to a resource, used to associate, query, and filter objects
Namespaces: logically isolate objects from one another
Annotations: descriptive metadata attached to objects
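A minimal Service sketch tying these concepts together (names are illustrative): the label selector picks the Pods, and the NodePort falls in the 30000-50000 range configured for the apiserver later in this walkthrough:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx        #associates the Service with Pods carrying this label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF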
Certificates used by each component:
etcd: ca.pem server.pem server-key.pem
flannel: ca.pem server.pem server-key.pem
kube-apiserver: ca.pem server.pem server-key.pem
kubelet: ca.pem ca-key.pem
kube-proxy: ca.pem kube-proxy.pem kube-proxy-key.pem
kubectl: ca.pem admin.pem admin-key.pem
etcd has the following characteristics:
Fully replicated: every node in the cluster has the complete data store available
Highly available: etcd can be used to avoid single points of hardware failure or network problems
Consistent: every read returns the latest write across multiple hosts
Simple: includes a well-defined, user-facing API (gRPC)
Fast: benchmarked at 10,000 writes per second
Reliable: uses the Raft algorithm to implement a strongly consistent, highly available service store
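As a quick illustration of the v2 etcdctl API used throughout this walkthrough (the key and value are arbitrary, and this assumes a local, non-TLS endpoint):
etcdctl set /test/key "hello"   #write a key
etcdctl get /test/key           #every read returns the latest write across the cluster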
The flannel network component
Overlay Network: a virtual network technique layered on top of the underlying network; hosts in the overlay are connected through virtual links.
VXLAN: encapsulates the original packet in UDP, wraps it with an outer header using the underlay network's IP/MAC, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers the data to the target address.
Flannel: one kind of overlay network. It likewise encapsulates the original packet inside another network packet for routing, forwarding, and communication; it currently supports UDP, VXLAN, AWS VPC, GCE routing, and other data forwarding methods.
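Once flannel is running (set up later in this walkthrough), the VXLAN parameters and the per-node subnet can be inspected on any node, for example:
ip -d link show flannel.1    #shows the vxlan id, local endpoint IP, and UDP port
cat /run/flannel/subnet.env  #the subnet flannel allocated to this node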
The master components are responsible for managing the Kubernetes cluster. They manage the lifecycle of Pods, the basic unit of deployment inside a Kubernetes cluster.
The node components run on the worker machines in Kubernetes and are managed by the master. A node can be a virtual machine (VM) or a physical machine; Kubernetes runs well on both.
Every node contains the components needed to run Pods.
On all nodes, flush the firewall rules and disable SELinux enforcement:
iptables -F
setenforce 0
1. Environment preparation
Official download: https://github.com/kubernetes/kubernetes/releases?after=v1.13.1
2. K8S deployment
Environment:
master:192.168.20.10 kube-apiserver kube-controller-manager kube-scheduler etcd
node1:192.168.20.20 kubelet kube-proxy docker flannel etcd
node2:192.168.20.30 kubelet kube-proxy docker flannel etcd
//Create the certificates
//On the master
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# su
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# ls //copied in from the host machine
etcd-cert.sh etcd.sh
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
//Download the certificate generation tools
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master etcd-cert]# bash cfssl.sh //download the official cfssl packages
Or
[root@master etcd-cert]# ls //packages downloaded and copied in
cfssl cfssl-certinfo cfssljson etcd-cert etcd.sh
[root@master etcd-cert]# mv cfssl* /usr/local/bin/
[root@master etcd-cert]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master etcd-cert]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
//Start creating the certificates
//cfssl: generates certificates; cfssljson: generates certificate files from JSON input;
cfssl-certinfo: displays certificate information
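For example, once ca.pem has been generated (below), its fields can be inspected with:
cfssl-certinfo -cert ca.pem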
//Define the CA configuration
[root@master etcd-cert]# cat > ca-config.json <<EOF
> {
> "signing": {
> "default": {
> "expiry": "87600h"
> },
> "profiles": {
> "www": {
> "expiry": "87600h",
> "usages": [
> "signing",
> "key encipherment",
> "server auth",
> "client auth"
> ]
> }
> }
> }
> }
> EOF
//Create the CA certificate signing request (CSR)
[root@master etcd-cert]# cat > ca-csr.json <<EOF
> {
> "CN": "etcd CA",
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "Beijing",
> "ST": "Beijing"
> }
> ]
> }
> EOF
//Generate the CA certificate, producing ca-key.pem and ca.pem
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/28 16:11:29 [INFO] generating a new CA key and certificate from CSR
2020/09/28 16:11:29 [INFO] generate received request
2020/09/28 16:11:29 [INFO] received CSR
2020/09/28 16:11:29 [INFO] generating key: rsa-2048
2020/09/28 16:11:30 [INFO] encoded CSR
2020/09/28 16:11:30 [INFO] signed certificate with serial number 307109152987071081700641248999918396111229161596
//Specify the addresses of the three etcd nodes for peer communication verification
[root@master etcd-cert]# cat > server-csr.json <<EOF
> {
> "CN": "etcd",
> "hosts": [
> "192.168.20.10", //master地址
> "192.168.20.20", //node1地址
> "192.168.20.30" //node2地址
> ],
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [ //名字要和上面定义的一样
> {
> "C": "CN",
> "L": "BeiJing",
> "ST": "BeiJing"
> }
> ]
> }
> EOF
//Generate the etcd server certificate, producing server-key.pem and server.pem
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/09/28 16:12:52 [INFO] generate received request
2020/09/28 16:12:52 [INFO] received CSR
2020/09/28 16:12:52 [INFO] generating key: rsa-2048
2020/09/28 16:12:52 [INFO] encoded CSR
2020/09/28 16:12:52 [INFO] signed certificate with serial number 538862372957746116117729195241060280056748061751
2020/09/28 16:12:52 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
//etcd binary releases: https://github.com/etcd-io/etcd/releases
//Copy in flannel-v0.10.0-linux-amd64.tar.gz, etcd-v3.3.10-linux-amd64.tar.gz, and kubernetes-server-linux-amd64.tar.gz
[root@master etcd-cert]# ls
ca-config.json etcd-cert.sh server-csr.json
ca.csr etcd-v3.3.10-linux-amd64.tar.gz server-key.pem
ca-csr.json flannel-v0.10.0-linux-amd64.tar.gz server.pem
ca-key.pem kubernetes-server-linux-amd64.tar.gz
ca.pem server.csr
[root@master etcd-cert]# mv *.tar.gz ../
[root@master etcd-cert]# cd ..
[root@master k8s]# ls
cfssl.sh etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
//Unpack etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
//Create the directories: config files, binaries, certificates
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
//Copy the certificates
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
//The script blocks here, waiting for the other nodes to join
[root@master k8s]# bash etcd.sh etcd01 192.168.20.10 etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
//Open another terminal session and you will see that the etcd process has started
[root@master k8s]# ps -ef | grep etcd
root 22521 1 2 16:41 ? 00:00:01 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.10:2380 --listen-client-urls=https://192.168.20.10:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.20.10:2379 --initial-advertise-peer-urls=https://192.168.20.10:2380 --initial-cluster=etcd01=https://192.168.20.10:2380,etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 22534 22330 0 16:42 pts/2 00:00:00 grep --color=auto etcd
//Copy the certificates to the node machines
[root@master k8s]# scp -r /opt/etcd/ root@192.168.20.20:/opt/
The authenticity of host '192.168.20.20 (192.168.20.20)' can't be established.
ECDSA key fingerprint is SHA256:M+6YSK2hm7e8JY4G1qYmT0X1UmIr280vvpa+1rW8IBc.
ECDSA key fingerprint is MD5:bd:01:e2:85:f0:b0:36:8c:49:64:08:30:6c:2d:a4:37.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.20.20' (ECDSA) to the list of known hosts.
root@192.168.20.20's password:
etcd 100% 509 237.4KB/s 00:00
etcd 100% 18MB 71.0MB/s 00:00
etcdctl 100% 15MB 81.1MB/s 00:00
ca-key.pem 100% 1679 1.0MB/s 00:00
ca.pem 100% 1265 1.4MB/s 00:00
server-key.pem 100% 1675 1.4MB/s 00:00
server.pem 100% 1338 1.8MB/s 00:00
[root@master k8s]# scp -r /opt/etcd/ root@192.168.20.30:/opt/
The authenticity of host '192.168.20.30 (192.168.20.30)' can't be established.
ECDSA key fingerprint is SHA256:YI9QBe63U8Cgwvdpz0mTaUAPrBP7p0NRMbrujvLhYm8.
ECDSA key fingerprint is MD5:2a:d0:1b:eb:fb:50:3f:a4:f4:f0:a0:59:9b:97:e5:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.20.30' (ECDSA) to the list of known hosts.
root@192.168.20.30's password:
etcd 100% 509 335.8KB/s 00:00
etcd 100% 18MB 81.8MB/s 00:00
etcdctl 100% 15MB 75.6MB/s 00:00
ca-key.pem 100% 1679 351.1KB/s 00:00
ca.pem 100% 1265 316.3KB/s 00:00
server-key.pem 100% 1675 1.2MB/s 00:00
server.pem 100% 1338 805.8KB/s 00:00
//Copy the systemd unit file to the node machines
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.20.20:/usr/lib/systemd/system/
root@192.168.20.20's password:
etcd.service 100% 923 283.8KB/s 00:00
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.20.30:/usr/lib/systemd/system/
root@192.168.20.30's password:
etcd.service
//Modify the configuration on node1
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# su
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" //名字改成etcd02
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.20:2380" //change to this node's address
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.20:2379" //change to this node's address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.20:2380" //change to this node's address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.20:2379" //change to this node's address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.10:2380,etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
//Modify the configuration on node2
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# su
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03" //名字改成etcd03
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.30:2380" //change to this node's address
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.30:2379" //change to this node's address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.30:2380" //change to this node's address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.30:2379" //change to this node's address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.10:2380,etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
//Start the cluster
[root@master k8s]# bash etcd.sh etcd01 192.168.20.10 etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@node1 ~]# systemctl start etcd.service
[root@node2 ~]# systemctl start etcd.service
//Check the status
[root@master k8s]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since 一 2020-09-28 17:23:02 CST; 55s ago
Main PID: 78752 (etcd)
Tasks: 13
CGroup: /system.slice/etcd.service
└─78752 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.10...
...
[root@node1 ~]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2020-09-28 17:22:50 CST; 2min 14s ago
Main PID: 22277 (etcd)
Tasks: 13
CGroup: /system.slice/etcd.service
└─22277 /opt/etcd/bin/etcd --name=etcd02 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.2...
...
[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2020-09-28 17:22:53 CST; 2min 16s ago
Main PID: 22366 (etcd)
Tasks: 14
CGroup: /system.slice/etcd.service
└─22366 /opt/etcd/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.3...
...
//Check the cluster health
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379" cluster-health
member 350b6ab68923a8a2 is healthy: got healthy result from https://192.168.20.20:2379
member 51ae3f86f3783687 is healthy: got healthy result from https://192.168.20.10:2379
member c05141f45e08d8ff is healthy: got healthy result from https://192.168.20.30:2379
cluster is healthy
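The member list can be checked the same way (a sketch, run from the same directory so the .pem paths resolve):
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379" member list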
//3. Docker engine deployment: deploy the Docker engine on all node machines
Install Docker on node1 and node2.
//4. flannel network configuration
//Write the allocated subnet range into etcd for flannel to use
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
//Verify the written value
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
//Copy the package to all node machines (flannel only needs to be deployed on the nodes)
[root@master etcd-cert]# cd ..
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.20.20:/root
root@192.168.20.20's password:
flannel-v0.10.0-linux-amd64.tar.gz 100% 9479KB 55.8MB/s 00:00
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.20.30:/root
root@192.168.20.30's password:
flannel-v0.10.0-linux-amd64.tar.gz 100% 9479KB 35.8MB/s 00:00
//Perform on all node machines (only node1 is shown here)
//Unpack
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
//Create the k8s working directories
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
//Write the flannel deployment script
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
//Enable the flannel network
[root@node1 ~]# bash flannel.sh https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
//Configure Docker to use flannel
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.39.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.39.1/24 --ip-masq=false --mtu=1450"
//Restart the Docker service
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
//Inspect the flannel network
[root@node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.39.1 netmask 255.255.255.0 broadcast 172.17.39.255
ether 02:42:9e:5b:d5:d1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.20.20 netmask 255.255.255.0 broadcast 192.168.20.255
inet6 fe80::f0c9:c17f:3e56:9bf5 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:ac:fe:ba txqueuelen 1000 (Ethernet)
RX packets 361114 bytes 227105212 (216.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 261561 bytes 29704749 (28.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.39.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::2062:10ff:fe72:d64d prefixlen 64 scopeid 0x20<link>
ether 22:62:10:72:d6:4d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 38 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 810 bytes 55986 (54.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 810 bytes 55986 (54.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:7e:c1:42 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
//Test by pinging the container on the other node's docker0 subnet, proving that flannel is routing traffic
[root@node1 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
75f829a71a1c: Pull complete
Digest: sha256:19a79828ca2e505eaee0ff38c2f3fd9901f4826737295157cc5212b7a372cd2b
Status: Downloaded newer image for centos:7
[root@2ab5e936498a /]# yum install net-tools -y
[root@2ab5e936498a /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.39.2 netmask 255.255.255.0 broadcast 172.17.39.255
ether 02:42:ac:11:27:02 txqueuelen 0 (Ethernet)
RX packets 16290 bytes 12483008 (11.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7815 bytes 425422 (415.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node2 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
75f829a71a1c: Pull complete
Digest: sha256:19a79828ca2e505eaee0ff38c2f3fd9901f4826737295157cc5212b7a372cd2b
Status: Downloaded newer image for centos:7
[root@c72893bc9690 /]# yum install net-tools -y
[root@c72893bc9690 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.63.2 netmask 255.255.255.0 broadcast 172.17.63.255
ether 02:42:ac:11:3f:02 txqueuelen 0 (Ethernet)
RX packets 16264 bytes 12482650 (11.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7783 bytes 423626 (413.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@2ab5e936498a /]# ping 172.17.63.2
PING 172.17.63.2 (172.17.63.2) 56(84) bytes of data.
64 bytes from 172.17.63.2: icmp_seq=1 ttl=62 time=2.55 ms
64 bytes from 172.17.63.2: icmp_seq=2 ttl=62 time=4.69 ms
64 bytes from 172.17.63.2: icmp_seq=3 ttl=62 time=0.383 ms
^C
--- 172.17.63.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 0.383/2.542/4.695/1.761 ms
[root@c72893bc9690 /]# ping 172.17.39.2
PING 172.17.39.2 (172.17.39.2) 56(84) bytes of data.
64 bytes from 172.17.39.2: icmp_seq=1 ttl=62 time=2.02 ms
64 bytes from 172.17.39.2: icmp_seq=2 ttl=62 time=0.917 ms
64 bytes from 172.17.39.2: icmp_seq=3 ttl=62 time=0.751 ms
^C
--- 172.17.39.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.751/1.231/2.027/0.567 ms
Deploying the master components
//On the master: generate the certificates for the apiserver
Copy master.zip into the /root/k8s directory.
[root@master k8s]# ls
cfssl.sh etcd-v3.3.10-linux-amd64 kubernetes-server-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz master.zip
etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
[root@master k8s]# unzip master.zip
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
Copy in the k8s-cert.sh script.
[root@master k8s-cert]# ls
k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.20.10", //master1
"192.168.20.40", //master2
"192.168.20.111", //vip
"192.168.20.50", //lb (master)
"192.168.20.60", //lb (backup)
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
//Generate the k8s certificates
[root@master k8s-cert]# bash k8s-cert.sh
2020/09/29 15:20:27 [INFO] generating a new CA key and certificate from CSR
2020/09/29 15:20:27 [INFO] generate received request
2020/09/29 15:20:27 [INFO] received CSR
2020/09/29 15:20:27 [INFO] generating key: rsa-2048
2020/09/29 15:20:28 [INFO] encoded CSR
2020/09/29 15:20:28 [INFO] signed certificate with serial number 572092143940477158442975741908760581653757414586
2020/09/29 15:20:28 [INFO] generate received request
2020/09/29 15:20:28 [INFO] received CSR
2020/09/29 15:20:28 [INFO] generating key: rsa-2048
2020/09/29 15:20:28 [INFO] encoded CSR
2020/09/29 15:20:28 [INFO] signed certificate with serial number 645411198364777330575133409297661007151065267201
2020/09/29 15:20:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/09/29 15:20:28 [INFO] generate received request
2020/09/29 15:20:28 [INFO] received CSR
2020/09/29 15:20:28 [INFO] generating key: rsa-2048
2020/09/29 15:20:28 [INFO] encoded CSR
2020/09/29 15:20:28 [INFO] signed certificate with serial number 382185722811839684332683631495065868107644288788
2020/09/29 15:20:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/09/29 15:20:28 [INFO] generate received request
2020/09/29 15:20:28 [INFO] received CSR
2020/09/29 15:20:28 [INFO] generating key: rsa-2048
2020/09/29 15:20:29 [INFO] encoded CSR
2020/09/29 15:20:29 [INFO] signed certificate with serial number 54367561030861349163097338268655276544563898262
2020/09/29 15:20:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls *.pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
//Unpack the kubernetes tarball
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
//Copy the key binaries
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# ls
apiextensions-apiserver kube-controller-manager.tar
cloud-controller-manager kubectl
cloud-controller-manager.docker_tag kubelet
cloud-controller-manager.tar kube-proxy
hyperkube kube-proxy.docker_tag
kubeadm kube-proxy.tar
kube-apiserver kube-scheduler
kube-apiserver.docker_tag kube-scheduler.docker_tag
kube-apiserver.tar kube-scheduler.tar
kube-controller-manager mounter
kube-controller-manager.docker_tag
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s/
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' //generate a random token
7c0a6952689f0769225e08a5d1f705b2
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
7c0a6952689f0769225e08a5d1f705b2,kubelet-bootstrap,10001,"system:kubelet-bootstrap" //token, user name, UID, group
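Generating the token and writing token.csv can also be done in one step, as a sketch:
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/cfg/token.csv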
//With the binaries, token, and certificates ready, start the apiserver
[root@master k8s]# bash apiserver.sh 192.168.20.10 https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
//Check that the process started successfully
[root@master k8s]# ps aux | grep kube-apiserver
//View the generated configuration file
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379 \
--bind-address=192.168.20.10 \
--secure-port=6443 \
--advertise-address=192.168.20.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
//The https port being listened on
[root@master k8s]# netstat -ntap | grep 6443
tcp 0 0 192.168.20.10:6443 0.0.0.0:* LISTEN 80347/kube-apiserve
tcp 0 0 192.168.20.10:6443 192.168.20.10:58068 ESTABLISHED 80347/kube-apiserve
tcp 0 0 192.168.20.10:58068 192.168.20.10:6443 ESTABLISHED 80347/kube-apiserve
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 80347/kube-apiserve
//Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# chmod +x controller-manager.sh
//Start the controller-manager
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
//Check the status of the master components
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
Deploying the node components
//On the master
//Copy kubelet and kube-proxy to the node machines
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.20.20:/opt/kubernetes/bin/
root@192.168.20.20's password:
kubelet 100% 168MB 60.4MB/s 00:02
kube-proxy 100% 48MB 59.5MB/s 00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.20.30:/opt/kubernetes/bin/
root@192.168.20.30's password:
kubelet 100% 168MB 96.8MB/s 00:01
kube-proxy 100% 48MB 96.0MB/s 00:00
//On the node machines (copy node.zip into the /root directory)
[root@node1 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip 公共 视频 文档 音乐
flannel.sh initial-setup-ks.cfg README.md 模板 图片 下载 桌面
[root@node1 ~]# unzip node.zip
Archive: node.zip
inflating: proxy.sh
inflating: kubelet.sh
//On the master
[root@master bin]# cd /root/k8s
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig
//Copy in the kubeconfig.sh file and rename it
[root@master kubeconfig]# ls
kubeconfig.sh
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
Delete the following section:
# Create the TLS bootstrapping token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
//Get the token (copy the token generated earlier)
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
7c0a6952689f0769225e08a5d1f705b2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
//Update the script with the token ID
[root@master kubeconfig]# vim kubeconfig
#----------------------
APISERVER=$1
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=7c0a6952689f0769225e08a5d1f705b2 \ //paste the token copied above
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
//Set the environment variable
[root@master kubeconfig]# vim /etc/profile
Append after the last line:
export PATH=$PATH:/opt/kubernetes/bin/
[root@master kubeconfig]# source /etc/profile
[root@master kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
//Generate the bootstrap.kubeconfig and kube-proxy.kubeconfig files
[root@master kubeconfig]# bash kubeconfig 192.168.20.10 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
//Copy the kubeconfig files to the node machines
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.20.20:/opt/kubernetes/cfg/
root@192.168.20.20's password:
bootstrap.kubeconfig 100% 2167 1.4MB/s 00:00
kube-proxy.kubeconfig 100% 6273 1.2MB/s 00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.20.30:/opt/kubernetes/cfg/
root@192.168.20.30's password:
bootstrap.kubeconfig 100% 2167 1.2MB/s 00:00
kube-proxy.kubeconfig 100% 6273 4.9MB/s 00:00
//Create the bootstrap role binding, granting permission to connect to the apiserver and request certificate signing
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
//On node1
[root@node1 ~]# bash kubelet.sh 192.168.20.20
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
//Verify that the kubelet service started
[root@node1 ~]# ps -aux | grep kube
root 79703 0.1 0.4 399640 18088 ? Ssl 14:23 0:22 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 97400 1.0 1.1 534300 42548 ? Ssl 17:30 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.20.20 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 97465 0.0 0.0 112728 984 pts/2 S+ 17:31 0:00 grep --color=auto kube
//On the master
//Check for the certificate signing request from node1
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 2m1s kubelet-bootstrap Pending
[root@master kubeconfig]# kubectl certificate approve node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY
certificatesigningrequest.certificates.k8s.io/node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY approved
//kubectl certificate subcommands:
approve: approve a certificate signing request
deny: deny a certificate signing request
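If several requests are pending at once, they can be approved in a batch (a sketch):
kubectl get csr -o name | xargs kubectl certificate approve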
//Check the certificate status again
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 3m11s kubelet-bootstrap Approved,Issued
//Pending: waiting for the cluster to issue a certificate to the node; Approved,Issued: the node has been allowed to join the cluster
//List the cluster nodes; node1 has joined successfully
[root@master kubeconfig]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.20.20 Ready <none> 68s v1.12.3
//On node1, start the kube-proxy service
[root@node1 ~]# bash proxy.sh 192.168.20.20
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since 二 2020-09-29 17:35:57 CST; 1min 24s ago
Main PID: 98700 (kube-proxy)
Tasks: 0
Memory: 7.8M
CGroup: /system.slice/kube-proxy.service
‣ 98700 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-...
9月 29 17:37:12 node1 kube-proxy[98700]: I0929 17:37:12.263279 98700 config.go:14...te
9月 29 17:37:13 node1 kube-proxy[98700]: I0929 17:37:13.437384 98700 config.go:14...te
9月 29 17:37:14 node1 kube-proxy[98700]: I0929 17:37:14.277055 98700 config.go:14...te
9月 29 17:37:15 node1 kube-proxy[98700]: I0929 17:37:15.451517 98700 config.go:14...te
9月 29 17:37:16 node1 kube-proxy[98700]: I0929 17:37:16.287927 98700 config.go:14...te
9月 29 17:37:17 node1 kube-proxy[98700]: I0929 17:37:17.464773 98700 config.go:14...te
9月 29 17:37:18 node1 kube-proxy[98700]: I0929 17:37:18.296889 98700 config.go:14...te
9月 29 17:37:19 node1 kube-proxy[98700]: I0929 17:37:19.474728 98700 config.go:14...te
9月 29 17:37:20 node1 kube-proxy[98700]: I0929 17:37:20.308835 98700 config.go:14...te
9月 29 17:37:21 node1 kube-proxy[98700]: I0929 17:37:21.489116 98700 config.go:14...te
Hint: Some lines were ellipsized, use -l to show in full.
//Deploying node2
//On node1
//Simply copy the existing /opt/kubernetes directory to the other node and modify it
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.20.30:/opt/
The authenticity of host '192.168.20.30 (192.168.20.30)' can't be established.
ECDSA key fingerprint is SHA256:YI9QBe63U8Cgwvdpz0mTaUAPrBP7p0NRMbrujvLhYm8.
ECDSA key fingerprint is MD5:2a:d0:1b:eb:fb:50:3f:a4:f4:f0:a0:59:9b:97:e5:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.20.30' (ECDSA) to the list of known hosts.
root@192.168.20.30's password:
flanneld 100% 235 209.8KB/s 00:00
bootstrap.kubeconfig 100% 2167 1.7MB/s 00:00
kube-proxy.kubeconfig 100% 6273 5.1MB/s 00:00
kubelet 100% 377 257.4KB/s 00:00
kubelet.config 100% 267 75.5KB/s 00:00
kubelet.kubeconfig 100% 2296 2.1MB/s 00:00
kube-proxy 100% 189 167.6KB/s 00:00
mk-docker-opts.sh 100% 2139 1.6MB/s 00:00
scp: /opt//kubernetes/bin/flanneld: Text file busy
kubelet 100% 168MB 106.7MB/s 00:01
kube-proxy 100% 48MB 113.7MB/s 00:00
kubelet.crt 100% 2185 2.1MB/s 00:00
kubelet.key 100% 1675 646.4KB/s 00:00
kubelet-client-2020-09-29-17-33-26.pem 100% 1273 273.7KB/s 00:00
kubelet-client-current.pem 100% 1273 304.1KB/s 00:00
//Copy the kubelet and kube-proxy service files to node2
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.20.30:/usr/lib/systemd/system/
root@192.168.20.30's password:
kubelet.service 100% 264 136.6KB/s 00:00
kube-proxy.service 100% 231 143.8KB/s 00:00
//On node2, make the modifications
//First delete the copied certificates; node2 will request its own certificates later
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# rm -rf *
//Modify the kubelet, kubelet.config, and kube-proxy configuration files
[root@node2 ssl]# cd ../cfg
[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.30 \ //change to this node's address
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.20.30 //change to this node's address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
[root@node2 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.30 \ //change to this node's address
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
//Start the services
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
//On the master, check for the new request
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 41m kubelet-bootstrap Approved,Issued
node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE 117s kubelet-bootstrap Pending //copy this request name
//Approve the request to allow the node to join the cluster
[root@master kubeconfig]# kubectl certificate approve node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE
certificatesigningrequest.certificates.k8s.io/node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE approved
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 42m kubelet-bootstrap Approved,Issued
node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE 2m45s kubelet-bootstrap Approved,Issued
//List the nodes in the cluster
[root@master kubeconfig]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.20.20 Ready <none> 39m v1.12.3
192.168.20.30 Ready <none> 12s v1.12.3
[root@node2 cfg]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since 二 2020-09-29 18:11:15 CST; 16min ago
Main PID: 99461 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
‣ 99461 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.20.30 --cluster-cidr=10.0.0.0/24 --p...
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.360327 99461 iptables.go:327] running iptables-save [-t filter]
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.362421 99461 iptables.go:327] running iptables-save [-t nat]
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.435050 99461 proxier.go:1472] Bind addr 10.0.0.1
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.458144 99461 iptables.go:391] running iptables-restore [-w 5 --noflush --counters]
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.461366 99461 proxier.go:672] syncProxyRules took 101.094914ms
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.461402 99461 bounded_frequency_runner.go:221] sync-runner: ran, next poss... in 30s
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.903731 99461 config.go:141] Calling handler.OnEndpointsUpdate
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.932189 99461 config.go:141] Calling handler.OnEndpointsUpdate
9月 29 18:27:21 node2 kube-proxy[99461]: I0929 18:27:21.917556 99461 config.go:141] Calling handler.OnEndpointsUpdate
9月 29 18:27:21 node2 kube-proxy[99461]: I0929 18:27:21.941538 99461 config.go:141] Calling handler.OnEndpointsUpdate
Hint: Some lines were ellipsized, use -l to show in full.