The main Kubernetes services can all be run by directly executing their binary files with startup parameters. On the Kubernetes Master, the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler service processes must be deployed; on each worker Node, the docker, kubelet, and kube-proxy service processes must be deployed.
Copy the Kubernetes binary executables to the /usr/bin directory, then create a systemd service configuration file for each service under /usr/lib/systemd/system; this completes the software installation. For Kubernetes to work properly, the startup parameters of each service must then be configured in detail.
Master node
Node components
About the etcd cluster
Anyone who has worked with distributed systems knows that one of their most fundamental problems is reaching consensus on shared information; only on that basis can service configuration management, service discovery, updates, synchronization, and so on be built. Solving these problems usually requires a distributed database system that guarantees consistency. The classic Apache ZooKeeper project, for example, achieves strong consistency with ZAB, a Paxos-style atomic broadcast protocol.
etcd was designed specifically for cluster environments. It adopts the simpler Raft consensus algorithm, likewise achieves strong data consistency, and supports cluster node state management and automatic service discovery.
etcd is maintained at github.com/coreos/etcd; the current releases are in the 3.x series.
Inspired by Apache ZooKeeper and the doozer project (doozer is a consistent distributed database implementation aimed at small amounts of data), etcd's design focuses on four factors: simplicity (an HTTP+JSON API that is easy to drive with curl), security (optional TLS client-certificate authentication), speed, and reliability (strong consistency via Raft).
Typically, a user starts etcd instances on several nodes and joins them into one cluster. Instances in the same cluster automatically keep one another's data consistent, which means applications distributed across those nodes also read consistent information.
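A minimal sketch of that guarantee, assuming the three-member cluster built later in this section is running and the TLS files are under /opt/etcd/ssl (the key name /demo/greeting is just an example): a value written through one member can be read back through another.
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379" set /demo/greeting hello ####write through member 1
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.20:2379" get /demo/greeting ####read through member 2: returns the same value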
About CA authentication
In a secure internal network, the Kubernetes components can talk to the Master over the kube-apiserver's insecure HTTP port; in any environment exposed beyond that, the HTTPS secure port must be used instead, with CA-issued certificates authenticating both sides. The certificates used by each component are listed below:
Component | Certificates |
---|---|
etcd | ca.pem, server.pem, server-key.pem |
flannel | ca.pem, server.pem, server-key.pem |
kube-apiserver | ca.pem, server.pem, server-key.pem |
kubelet | ca.pem, ca-key.pem |
kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
kubectl | ca.pem, admin.pem, admin-key.pem |
Node | IP address | OS / hardware | Components |
---|---|---|---|
master | 192.168.100.10 | CentOS 7.4 (2 cores / 2 threads, 4 GB RAM) | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
node1 | 192.168.100.20 | CentOS 7.4 (2 cores / 2 threads, 4 GB RAM) | kubelet, kube-proxy, docker, flannel, etcd |
node2 | 192.168.100.30 | CentOS 7.4 (2 cores / 2 threads, 4 GB RAM) | kubelet, kube-proxy, docker, flannel, etcd |
Deploying the master node
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# vim etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
#### Positional parameters: node name, IP address, and the other cluster members
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
#### Create the etcd configuration file
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380" ####etcd内部端口
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379" ####etcd对外提供的端口
#[Clustering],定义集群信息
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
## Generate the systemd unit file
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
## Certificate authentication between the cluster members is configured below
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
## Enable and start the service
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
[root@master k8s]# vim etcd-cert.sh
#### Define the CA signing configuration
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
#### CA certificate signing request (CSR)
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
#### Generate the CA certificate: produces ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#### Server CSR covering the three etcd nodes, used to authenticate traffic between them
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"192.168.100.10",
"192.168.100.20",
"192.168.100.30"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
#### Generate the etcd server certificate: produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# ll /usr/local/bin/
total 18808
-rwxr-xr-x 1 root root 10376657 Jan 16 2020 cfssl
-rwxr-xr-x 1 root root 6595195 Jan 16 2020 cfssl-certinfo
-rwxr-xr-x 1 root root 2277873 Jan 16 2020 cfssljson
[root@master k8s]# ./etcd-cert.sh
2020/11/24 19:44:30 [INFO] generating a new CA key and certificate from CSR
2020/11/24 19:44:30 [INFO] generate received request
2020/11/24 19:44:30 [INFO] received CSR
2020/11/24 19:44:30 [INFO] generating key: rsa-2048
2020/11/24 19:44:30 [INFO] encoded CSR
2020/11/24 19:44:30 [INFO] signed certificate with serial number 179234423085770384325949193674187086340408540536
2020/11/24 19:44:30 [INFO] generate received request
2020/11/24 19:44:30 [INFO] received CSR
2020/11/24 19:44:30 [INFO] generating key: rsa-2048
2020/11/24 19:44:30 [INFO] encoded CSR
2020/11/24 19:44:30 [INFO] signed certificate with serial number 55679034051257973314798172718159493120130262396
[root@master k8s]# ls
ca-config.json ca-csr.json ca.pem etcd.sh server.csr server-key.pem
ca.csr ca-key.pem etcd-cert.sh server-csr.json server.pem
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv c* etcd-cert
[root@master k8s]# mv s* etcd-cert
[root@master k8s]# ls etcd-cert
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
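To sanity-check a generated certificate, for instance to confirm the three node IPs made it into the SAN list, the cfssl-certinfo tool installed above can decode it:
[root@master k8s]# cfssl-certinfo -cert etcd-cert/server.pem ####prints the subject, SANs, and validity period as JSON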
[root@master k8s]# ls
etcd-cert etcd-cert.sh etcd.sh etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz ####unpack the etcd release
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p ####directories for config files, binaries, and certificates
[root@master k8s]# cp etcd-v3.3.10-linux-amd64/etcd* /opt/etcd/bin/ ####copy the etcd and etcdctl binaries into place (the service unit expects them here)
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/ ####copy the certificate files into place
[root@master k8s]# bash etcd.sh etcd01 192.168.100.10 etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380
####the start command blocks waiting for the other members; open a new session and etcd is already running
[root@master k8s]# ps -ef | grep etcd
root 12465 1 3 23:04 ? 00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.100.10:2380 --listen-client-urls=https://192.168.100.10:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.100.10:2379 --initial-advertise-peer-urls=https://192.168.100.10:2380 --initial-cluster=etcd01=https://192.168.100.10:2380,etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 12477 12344 0 23:04 pts/2 00:00:00 grep --color=auto etcd
Node operations
[root@master k8s]# scp -r /opt/etcd/ [email protected]:/opt/
[root@master k8s]# scp -r /opt/etcd/ [email protected]:/opt/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.20:2380" ####修改为node1的ip地址
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.20:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.20:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.20:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.10:2380,etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.30:2380" ####修改成node2的ip地址
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.30:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.30:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.10:2380,etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node1 ~]# systemctl start etcd
[root@node1 ~]# systemctl status etcd
[root@node2 ~]# systemctl start etcd ####start etcd on node2 as well
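Once all three members are running, the cluster state can be verified with the bundled etcdctl (v2 API, the default in etcd 3.3), from any node that has the certificates:
[root@master k8s]# cd /opt/etcd/ssl
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379" cluster-health ####each member should report healthy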
[root@node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node1 opt]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf 开启路由转发
[root@node1 opt]# sysctl -p
[root@node1 ~]# yum install -y docker-ce
[root@node1 ~]# systemctl start docker
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# mkdir -p /etc/docker
[root@node1 ~]# tee /etc/docker/daemon.json <<-'EOF' ####configure a registry mirror (image acceleration)
> {
> "registry-mirrors": ["https://13tjalqi.mirror.aliyuncs.com"]
> }
> EOF
{
"registry-mirrors": ["https://13tjalqi.mirror.aliyuncs.com"]
}
[root@node1 ~]# systemctl daemon-reload ####reload systemd unit files
[root@node1 ~]# systemctl restart docker
[root@master k8s]# cd /opt/etcd/ssl/
[root@master ssl]# ls
ca-key.pem ca.pem server-key.pem server.pem
####write the config with set, specifying vxlan as the network backend type
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
####use the get command to check that the written value is correct
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
####this output confirms the configuration is correct
[root@node1 opt]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p ####create the working directories for the later steps
[root@node1 opt]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz ####unpack
flanneld
mk-docker-opts.sh
README.md
[root@node1 opt]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/ ####move the executables into the target directory
[root@node1 opt]#
[root@node1 opt]# vim flannel.sh ####write the installation script
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
#### Generate the flanneld systemd service
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
####run the script, passing the etcd cluster endpoints
[root@node1 opt]# bash flannel.sh https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@node1 opt]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env ####added: environment file carrying the flannel subnet settings
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock ####added the DOCKER_NETWORK_OPTIONS variable
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node1 opt]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.70.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.70.1/24 --ip-masq=false --mtu=1450"
####bip sets docker0's subnet at startup
[root@node1 opt]# systemctl daemon-reload ####reload units and restart docker so the new subnet takes effect
[root@node1 opt]# systemctl restart docker
[root@node1 opt]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.70.1 netmask 255.255.255.0 broadcast 172.17.70.255
ether 02:42:84:20:54:c4 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.20 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 fe80::45f0:dfe5:cb28:7b57 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:62:a0:dd txqueuelen 1000 (Ethernet)
RX packets 1093825 bytes 1309774257 (1.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 414614 bytes 38489810 (36.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.70.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::6450:edff:fe90:4a9a prefixlen 64 scopeid 0x20<link>
ether 66:50:ed:90:4a:9a txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 37 overruns 0 carrier 0 collisions 0
####a flannel.1 interface with its own address has been created; next, ping the other node's flannel subnet to prove the two subnets are connected
[root@node1 opt]# ping 172.17.99.1 ####this is node2's address on its flannel subnet
PING 172.17.99.1 (172.17.99.1) 56(84) bytes of data.
64 bytes from 172.17.99.1: icmp_seq=1 ttl=64 time=0.329 ms
64 bytes from 172.17.99.1: icmp_seq=2 ttl=64 time=0.974 ms
^C
--- 172.17.99.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.329/0.651/0.974/0.323 ms
[root@node1 opt]#
[root@node1 opt]# docker run -it centos:7 /bin/bash
[root@d48678cdb07f /]# yum -y install net-tools
[root@d48678cdb07f /]# ifconfig
[root@d48678cdb07f /]# ping 172.17.99.2
PING 172.17.99.2 (172.17.99.2) 56(84) bytes of data.
64 bytes from 172.17.99.2: icmp_seq=1 ttl=62 time=0.478 ms
64 bytes from 172.17.99.2: icmp_seq=2 ttl=62 time=0.466 ms
^C
--- 172.17.99.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.466/0.472/0.478/0.006 ms
####tested again: containers on the two nodes can reach each other
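The subnet each node leased can also be confirmed from etcd: flanneld records its lease under /coreos.com/network/subnets. A quick look from the master, using the same TLS flags as the earlier set/get commands:
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379" ls /coreos.com/network/subnets ####should list 172.17.70.0-24 and 172.17.99.0-24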
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
#### The hosts list does not declare the Node addresses; k8s discovers nodes automatically.
#### 10.0.0.1 is the apiserver's cluster service IP; 192.168.100.10 is master1;
#### 192.168.100.40 is master2 (reserved for a later multi-master setup) and also the first load balancer;
#### 192.168.100.50 is the second load balancer; 192.168.100.100 is the VIP (virtual IP).
#### The annotations live out here because JSON allows no comments inside the heredoc.
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.100.10",
"192.168.100.40",
"192.168.100.50",
"192.168.100.100",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin ####generate the administrator certificate from admin-csr.json
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@master k8s-cert]# ls
admin.csr admin.pem ca-csr.json k8s-cert.sh kube-proxy-key.pem server-csr.json
admin-csr.json ca-config.json ca-key.pem kube-proxy.csr kube-proxy.pem server-key.pem
admin-key.pem ca.csr ca.pem kube-proxy-csr.json server.csr server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem ca.pem server-key.pem server.pem
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
etcd-cert etcd.sh etcd-v3.3.10-linux-amd64.tar.gz k8s-cert
etcd-cert.sh etcd-v3.3.10-linux-amd64 flannel-v0.10.0-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# ls /opt/kubernetes/bin/
kube-apiserver kube-controller-manager kubectl kube-scheduler
[root@master bin]# cd /opt/kubernetes/cfg/
[root@master cfg]# ls
[root@master cfg]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ') ####generate a random bootstrap token
[root@master cfg]# cat > token.csv << EOF
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
[root@master cfg]# ls
token.csv
[root@master cfg]# cat token.csv ####view the token file
c3448b11da03f11f07ade34d862fd428,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
####token, user name, UID, group
[root@master cfg]# cd /root/k8s/
[root@master k8s]# vim apiserver.sh
#!/bin/bash
MASTER_ADDRESS=$1
ETCD_SERVERS=$2
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
[root@master k8s]# bash apiserver.sh 192.168.100.10 https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master k8s]# ps aux | grep kube ####check the process
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver ####view the generated config file
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379 \
--bind-address=192.168.100.10 \
--secure-port=6443 \
--advertise-address=192.168.100.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
[root@master k8s]# netstat -ntap | grep 6443
tcp 0 0 192.168.100.10:6443 0.0.0.0:* LISTEN 15500/kube-apiserve
tcp 0 0 192.168.100.10:59616 192.168.100.10:6443 ESTABLISHED 15500/kube-apiserve
tcp 0 0 192.168.100.10:6443 192.168.100.10:59616 ESTABLISHED 15500/kube-apiserve
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 15500/kube-apiserve
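With both ports listening, a quick sanity check against the local insecure port (the /version endpoint is a standard apiserver route) confirms the server answers:
[root@master k8s]# curl http://127.0.0.1:8080/version ####returns the apiserver's version information as JSON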
[root@master k8s]# vim scheduler.sh
#!/bin/bash
MASTER_ADDRESS=$1
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
[root@master k8s]# chmod +x scheduler.sh
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku ####verify the services are running
[root@master k8s]# vim controller-manager.sh
#!/bin/bash
MASTER_ADDRESS=$1
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
[root@master k8s]# chmod +x controller-manager.sh
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
[root@master bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[email protected]'s password:
[root@master bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[email protected]'s password:
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv ####view the bootstrap token generated earlier
c3448b11da03f11f07ade34d862fd428,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master kubeconfig]# vim kubeconfig ####write the kubeconfig generation script
##apiserver address
APISERVER=$1
##certificate directory
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters (the token below is copied from /opt/kubernetes/cfg/token.csv;
# the comment cannot sit inside the backslash-continued command, so it lives here)
kubectl config set-credentials kubelet-bootstrap \
--token=c3448b11da03f11f07ade34d862fd428 \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Switch to the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/ ####add the kubernetes binaries to PATH
[root@master kubeconfig]# kubectl get cs ####check component status
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
[root@master kubeconfig]# bash kubeconfig 192.168.100.10 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
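Both generated files embed the CA certificate as base64, so they are self-contained and safe to copy to the nodes; either one can be inspected first with kubectl:
[root@master kubeconfig]# kubectl config view --kubeconfig=bootstrap.kubeconfig ####shows the cluster, user, and context entries with secrets redacted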
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
[email protected]'s password:
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
[email protected]'s password:
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
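The binding can be verified before moving on to the nodes:
[root@master kubeconfig]# kubectl get clusterrolebinding kubelet-bootstrap ####confirms the binding exists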
[root@node1 /]# mkdir /root/k8s
[root@node1 /]# cd /root/k8s/
[root@node1 k8s]# vim kubelet.sh
#!/bin/bash
NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
EOF
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@node1 k8s]# bash kubelet.sh 192.168.100.20
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node1 k8s]# ps aux | grep kube ####check that the service started
[root@master kubeconfig]# kubectl get csr ####a Pending request means the cluster is waiting to issue this node's certificate
NAME AGE REQUESTOR CONDITION
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA 111s kubelet-bootstrap Pending
[root@master kubeconfig]# kubectl certificate approve node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA ####approve the request
certificatesigningrequest.certificates.k8s.io/node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA approved
[root@master kubeconfig]# kubectl get csr ####check again: the node has been approved and its certificate issued
NAME AGE REQUESTOR CONDITION
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA 4m30s kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl get node ####list cluster nodes: node01 has joined successfully
NAME STATUS ROLES AGE VERSION
192.168.100.20 Ready <none> 2m8s v1.12.3
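After approval, the kubelet on node1 receives its client certificate: it writes kubelet.kubeconfig under /opt/kubernetes/cfg/ and kubelet-client-*.pem files under the --cert-dir (/opt/kubernetes/ssl). This can be confirmed on the node:
[root@node1 k8s]# ls /opt/kubernetes/cfg/kubelet.kubeconfig /opt/kubernetes/ssl/ ####the issued client certificate and kubeconfig appear here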
[root@node1 k8s]# vim proxy.sh
#!/bin/bash
NODE_ADDRESS=$1
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@node1 k8s]# bash proxy.sh 192.168.100.20 ####start the kube-proxy service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service
[root@node1 k8s]# systemctl status kube-proxy.service ####check the kube-proxy service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-11-25 22:48:57 CST; 45s ago
Main PID: 117262 (kube-proxy)
Memory: 7.5M
CGroup: /system.slice/kube-proxy.service
‣ 117262 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.100.20 --cluster-cidr=10.0.0....
Nov 25 22:49:33 node1 kube-proxy[117262]: I1125 22:49:33.406578 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:34 node1 kube-proxy[117262]: I1125 22:49:34.816066 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:35 node1 kube-proxy[117262]: I1125 22:49:35.430822 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:36 node1 kube-proxy[117262]: I1125 22:49:36.829488 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:37 node1 kube-proxy[117262]: I1125 22:49:37.465849 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:38 node1 kube-proxy[117262]: I1125 22:49:38.878549 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:39 node1 kube-proxy[117262]: I1125 22:49:39.510393 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:40 node1 kube-proxy[117262]: I1125 22:49:40.891883 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:41 node1 kube-proxy[117262]: I1125 22:49:41.547500 117262 config.go:141] Calling handler.OnEndpointsUpdate
Nov 25 22:49:42 node1 kube-proxy[117262]: I1125 22:49:42.927768 117262 config.go:141] Calling handler.OnEndpointsUpdate
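Since kube-proxy was started with --proxy-mode=ipvs, the virtual-server table it programs can be inspected with ipvsadm. This assumes the ipvsadm package and the ip_vs kernel modules are present; without them kube-proxy falls back to iptables mode:
[root@node1 k8s]# yum install -y ipvsadm
[root@node1 k8s]# ipvsadm -Ln ####lists the virtual servers and backends kube-proxy has programmed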
[root@node1 kubernetes]# scp -r /opt/kubernetes/ [email protected]:/opt/
[email protected]'s password:
[root@node1 kubernetes]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system/
[email protected]'s password:
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# ls
kubelet-client-2020-11-25-20-11-54.pem kubelet-client-current.pem kubelet.crt kubelet.key
[root@node2 ssl]# rm -rf *
[root@node2 ssl]# ls
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.30 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.100.30
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
[root@node2 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.30 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@master kubeconfig]# kubectl get csr ####node2's request is shown as Pending
NAME AGE REQUESTOR CONDITION
node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE 7m43s kubelet-bootstrap Pending
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA 3h kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl certificate approve node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE ####approve it
certificatesigningrequest.certificates.k8s.io/node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE approved
[root@master kubeconfig]# kubectl get csr ####node2 has been approved as well
NAME AGE REQUESTOR CONDITION
node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE 11m kubelet-bootstrap Approved,Issued
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA 3h4m kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl get node ####both nodes are now configured
NAME STATUS ROLES AGE VERSION
192.168.100.20 Ready <none> 3h1m v1.12.3
192.168.100.30 Ready <none> 104s v1.12.3