This lab uses five hosts: three as master nodes and two as worker (node) machines.
Node IP | OS version | hostname -f | Installed software |
---|---|---|---|
192.168.0.1 | RHEL7.4 | k8s-master01 | docker,etcd,flanneld,kube-apiserver,kube-controller-manager,kube-scheduler |
192.168.0.2 | RHEL7.4 | k8s-master02 | docker,etcd,flanneld,kube-apiserver,kube-controller-manager,kube-scheduler |
192.168.0.3 | RHEL7.4 | k8s-master03 | docker,etcd,flanneld,kube-apiserver,kube-controller-manager,kube-scheduler |
192.168.0.4 | RHEL7.4 | k8s-node01 | docker,flanneld,kubelet,kube-proxy |
192.168.0.5 | RHEL7.4 | k8s-node02 | docker,flanneld,kubelet,kube-proxy |
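The hostname -f column assumes each machine already has its hostname set and can resolve the others. A minimal sketch (not part of the original steps) to prepare that, run with the appropriate name on each host:
# e.g. on 192.168.0.4
hostnamectl set-hostname k8s-node01
# and on every host, add all five entries to /etc/hosts
cat >> /etc/hosts << EOF
192.168.0.1 k8s-master01
192.168.0.2 k8s-master02
192.168.0.3 k8s-master03
192.168.0.4 k8s-node01
192.168.0.5 k8s-node02
EOF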
Kubernetes worker nodes run the kubelet and kube-proxy components. Download the server binary package and distribute the two binaries to each node:
# wget https://dl.k8s.io/v1.15.3/kubernetes-server-linux-amd64.tar.gz
# tar xf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kubelet kube-proxy /k8s/kubernetes/bin/
# scp kubelet kube-proxy 192.168.0.4:/k8s/kubernetes/bin/
# scp kubelet kube-proxy 192.168.0.5:/k8s/kubernetes/bin/
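An equivalent loop form (our own variation, not from the original), convenient if more nodes are added later:
for node in 192.168.0.4 192.168.0.5; do
  scp kubelet kube-proxy ${node}:/k8s/kubernetes/bin/
done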
Create bootstrap tokens for kubelet TLS bootstrapping (one per master group):
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-master1 --kubeconfig ~/.kube/config
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-master2 --kubeconfig ~/.kube/config
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-master3 --kubeconfig ~/.kube/config
# kubeadm token list --kubeconfig ~/.kube/config
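The token pasted into the set-credentials command below (j53og3.p7bdy6ezasrlbszg) comes from this list. As an optional sketch, the token column can be captured into a shell variable instead of copying it by hand (BOOTSTRAP_TOKEN is our own name; this assumes the token just created is the last row of the output):
# skip the header line, take the first column of the last row
BOOTSTRAP_TOKEN=$(kubeadm token list --kubeconfig ~/.kube/config | awk 'NR>1 {print $1}' | tail -n 1)
echo ${BOOTSTRAP_TOKEN}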
kubectl config set-cluster kubernetes --certificate-authority=/k8s/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.0.3:6443 --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=j53og3.p7bdy6ezasrlbszg --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
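The kubelet's --bootstrap-kubeconfig flag below points at /k8s/kubernetes/cfg/bootstrap.kubeconfig on each node. The original does not show the copy step explicitly; assuming the same directory layout as the other files in this guide, it would look like:
cp bootstrap.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.0.4:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.0.5:/k8s/kubernetes/cfg/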
When the kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user carried by the bootstrap token must first be granted the system:node-bootstrapper role; only then does the kubelet have permission to create certificatesigningrequests.
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
Without this binding, the kubelet fails with an error like:
failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
# View the kubelet-bootstrap binding
# kubectl describe clusterrolebinding kubelet-bootstrap
# Delete the kubelet-bootstrap binding
# kubectl delete clusterrolebinding kubelet-bootstrap
cat << EOF > /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.0.4
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["101.254.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
Note: the other node uses the same configuration; only the address field needs to be changed to that node's IP.
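For example, on k8s-node02 (192.168.0.5) the only line that should need to change in this file is:
address: 192.168.0.5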
cat << EOF > /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.4 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
Note: the other node uses the same configuration; only the hostname-override address needs to be changed to that node's IP.
cat << EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
scp /k8s/kubernetes/cfg/kubelet* 192.168.0.5:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kubelet.service 192.168.0.5:/lib/systemd/system/kubelet.service
On the other node, change the corresponding address and hostname-override values.
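A possible one-liner (our own helper, assuming only the IP differs between the two nodes) to make that change on k8s-node02 after copying:
ssh 192.168.0.5 "sed -i 's/192.168.0.4/192.168.0.5/g' /k8s/kubernetes/cfg/kubelet /k8s/kubernetes/cfg/kubelet.config"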
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet
Note: swap must be disabled before starting the kubelet, otherwise it fails with "[ERROR Swap]: running with swap on is not supported. Please disable swap".
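A minimal way to disable swap, assuming a standard /etc/fstab entry:
swapoff -a
# comment out the swap line so it stays off after a reboot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab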
When the kubelet starts for the first time, it sends a certificate signing request to kube-apiserver; only after the request is approved does Kubernetes add the node to the cluster.
# View pending CSR requests
# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk 56s system:bootstrap:q3sb4h Pending
# kubectl get nodes
No resources found.
# Approve the CSR:
# kubectl certificate approve node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk
certificatesigningrequest.certificates.k8s.io/node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk approved
# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk 2m18s system:bootstrap:q3sb4h Approved,Issued
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.124.3.104   NotReady   <none>   0s   v1.15.0
# View the CSR details
# kubectl describe csr node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk
Name: node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk
Labels:
Annotations:
CreationTimestamp: Wed, 18 Sep 2019 22:54:27 +0800
Requesting User: system:bootstrap:q3sb4h
Status: Approved,Issued
Subject:
Common Name: system:node:10.124.3.104
Serial Number:
Organization: system:nodes
Events:
Create three ClusterRoleBindings, used respectively to automatically approve client certificates, renew client certificates, and renew server certificates:
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
# Apply the configuration
# kubectl apply -f csr-crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/auto-approve-csrs-for-group created
clusterrolebinding.rbac.authorization.k8s.io/node-client-cert-renewal created
clusterrole.rbac.authorization.k8s.io/approve-node-server-renewal-csr created
clusterrolebinding.rbac.authorization.k8s.io/node-server-cert-renewal created
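An optional check that the bindings and role were created:
kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
kubectl get clusterrole approve-node-server-renewal-csr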
# Check the ports the kubelet is listening on
# netstat -lnpt|grep kubelet
tcp 0 0 127.0.0.1:39037 0.0.0.0:* LISTEN 18584/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 18584/kubelet
tcp 0 0 10.124.3.105:10250 0.0.0.0:* LISTEN 18584/kubelet
tcp 0 0 10.124.3.105:10255 0.0.0.0:* LISTEN 18584/kubelet
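The kubelet's healthz endpoint (port 10248, bound to 127.0.0.1 above) can be used for a quick local check on the node itself:
curl -s http://127.0.0.1:10248/healthz
# expected output: ok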
kube-proxy runs on every node; it watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem -ca-key=/k8s/kubernetes/ssl/ca-key.pem -config=/k8s/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
# cp kube-proxy*.pem /k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.0.4:/k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.0.5:/k8s/kubernetes/ssl/
kubectl config set-cluster kubernetes --certificate-authority=/k8s/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.0.1:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
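Optionally inspect the generated kubeconfig (embedded certificates are shown as DATA+OMITTED/REDACTED):
kubectl config view --kubeconfig=kube-proxy.kubeconfig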
# Copy the kube-proxy kubeconfig file to all node machines
cp kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.0.4:/k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.0.5:/k8s/kubernetes/cfg/
cat << EOF > /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.4 \
--cluster-cidr=101.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
cat << EOF > /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy
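As described above, kube-proxy programs the service load-balancing rules. A quick sanity check after it starts (assuming the default iptables proxy mode) is to look at its listening ports and the KUBE-SERVICES chain it maintains:
netstat -lnpt | grep kube-proxy
iptables -t nat -L KUBE-SERVICES -n | head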