Kubernetes 1.8.0 Test Environment Installation and Deployment
Date: 2017-11-22
In this example the worker node is node-134.
Distribute the RPM packages and certificates. The RPM packages in this example come from the rpms directory of the tarball provided by mritd:
## Distribute the RPM packages:
$ cd ~/k8s/rpms/
$ scp kubernetes-node-1.8.0-1.el7.centos.x86_64.rpm \
kubernetes-client-1.8.0-1.el7.centos.x86_64.rpm \
root@172.18.169.134.134:~
$ scp kubernetes-node-1.8.0-1.el7.centos.x86_64.rpm \
kubernetes-client-1.8.0-1.el7.centos.x86_64.rpm \
root@172.18.169.134:~
$ ssh root@172.18.169.134 yum install -y kubernetes-node-1.8.0-1.el7.centos.x86_64.rpm \
kubernetes-client-1.8.0-1.el7.centos.x86_64.rpm
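A quick optional sanity check that the packages installed correctly on the node:
$ ssh root@172.18.169.134 kubelet --version
$ ssh root@172.18.169.134 kube-proxy --version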
## Distribute the Kubernetes CA certificate (k8s-root-ca.pem):
$ cd /etc/kubernetes/ssl/
$ ssh root@172.18.169.134 mkdir /etc/kubernetes/ssl
$ scp k8s-root-ca.pem root@172.18.169.134:/etc/kubernetes/ssl
## Distribute the bootstrap.kubeconfig and kube-proxy.kubeconfig files, or regenerate both config files on the node
## Method 1: distribute
$ cd /etc/kubernetes/
$ scp *.kubeconfig root@172.18.169.134:/etc/kubernetes
## Method 2: generate the corresponding kubeconfig files on the node
## kubelet bootstrap config file
$ # Set cluster parameters
$ kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
$ # Set client authentication parameters
$ kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
$ # Set context parameters
$ kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
$ # Set the default context
$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
$ mv bootstrap.kubeconfig /etc/kubernetes/
#### Note: ${BOOTSTRAP_TOKEN} must be the token field from the apiserver token file created earlier
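If in doubt, the token can be read back from the token file created during the master setup (a sketch; the path /etc/kubernetes/token.csv is assumed from the "04 - Master setup" step):
$ # On a master: the token is the first comma-separated field of token.csv
$ cut -d',' -f1 /etc/kubernetes/token.csv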
## kube-proxy config file
$ # Set cluster parameters
$ kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
$ # Set client authentication parameters
$ kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
$ # Set context parameters
$ kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
$ # Set the default context
$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
$ mv kube-proxy.kubeconfig /etc/kubernetes/
### Set owner and group
$ ssh root@172.18.169.134 chown -R kube:kube /etc/kubernetes/ssl
Notes:
${KUBE_APISERVER} is https://127.0.0.1:6443 (the local Nginx proxy configured below).
${BOOTSTRAP_TOKEN} is the token created on the master nodes; see "04 - Master setup".
The CN in the kube-proxy.pem certificate is system:kube-proxy; kube-apiserver's predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs (the binding can be inspected as shown below).
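A read-only check of the predefined binding from any master (system:node-proxier is part of the default RBAC bootstrap policy):
$ kubectl get clusterrolebinding system:node-proxier -o yaml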
The shared service config file (/etc/kubernetes/config in the RPM layout):
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
# KUBE_MASTER="--master=http://127.0.0.1:8080"
The kubelet config file (/etc/kubernetes/kubelet in the RPM layout):
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=172.18.169.134"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node.134"
# location of the api-server
# KUBELET_API_SERVER=""
# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
--cluster-dns=10.254.0.2 \
--resolv-conf=/etc/resolv.conf \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--fail-swap-on=false \
--cert-dir=/etc/kubernetes/ssl \
--cluster-domain=cluster.local. \
--hairpin-mode=promiscuous-bridge \
--serialize-image-pulls=false \
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
Notes:
--fail-swap-on=false: without this option kubelet may fail to start on machines with swap enabled; see the CHANGELOG (the first item under "before upgrading") for details.
KUBELET_HOSTNAME: the name set here must also be written to the hosts file (see the example after these notes).
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0: since pulling from gcr.io requires a proxy, the image can be imported from the images provided by mritd via docker load -i.
--kubeconfig: specifies the kubeconfig file loaded at startup.
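For example, matching the hostname override used in this walkthrough (adjust the IP and name to your environment):
$ echo "172.18.169.134 node.134" >> /etc/hosts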
The kube-proxy config file (/etc/kubernetes/proxy in the RPM layout):
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=172.18.169.134 \
--hostname-override=node.134 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--cluster-cidr=10.254.0.0/16"
Distribute and load the pause image on every node:
for IP in `seq 131 134`; do
  ssh root@172.18.169.$IP mkdir ~/images
  scp ~/k8s/images/* root@172.18.169.$IP:~/images
  ssh root@172.18.169.$IP docker load -i ~/images/gcr.io_google_containers_pause-amd64_3.0.tar
done
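Optionally confirm the image landed on each node afterwards:
ssh root@172.18.169.134 docker images gcr.io/google_containers/pause-amd64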
Since the HA scheme is implemented with an Nginx reverse proxy, every Node runs a local Nginx instance that load-balances requests to the Masters.
nginx.conf
# Create the config directory
mkdir -p /etc/nginx
# Write the proxy config
cat << EOF >> /etc/nginx/nginx.conf
error_log stderr notice;
worker_processes auto;
events {
    multi_accept on;
    use epoll;
    worker_connections 1024;
}
stream {
    upstream kube_apiserver {
        least_conn;
        server 172.18.169.131:6443;
        server 172.18.169.132:6443;
        server 172.18.169.133:6443;
    }
    server {
        listen 0.0.0.0:6443;
        proxy_pass kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
EOF
# Update permissions
chmod +r /etc/nginx/nginx.conf
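The config can be syntax-checked with the same image before wiring up systemd (a sketch using nginx's built-in test mode):
docker run --rm -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.13.5-alpine nginx -t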
The server entries list the IPs of the three master nodes.
nginx-proxy.service
cat << EOF >> /etc/systemd/system/nginx-proxy.service
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service
[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
-v /etc/nginx:/etc/nginx \\
--name nginx-proxy \\
--net=host \\
--restart=on-failure:5 \\
--memory=512M \\
nginx:1.13.5-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s
[Install]
WantedBy=multi-user.target
EOF
Finally, just start the Nginx proxy:
systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
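To confirm the local proxy actually forwards to the masters (an optional check; an authentication error from the apiserver still proves the TLS path works):
docker ps | grep nginx-proxy
curl -k https://127.0.0.1:6443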
With everything in place, the Node can now be added. First, because we use TLS Bootstrapping, a ClusterRoleBinding has to be created:
# Run this on any master
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Then start the kubelet:
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
Because TLS Bootstrapping is used, the kubelet does not join the cluster immediately after starting; instead it submits a certificate request. systemctl status kubelet shows the following in the log output:
Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
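To follow the kubelet's bootstrap progress in real time on the node:
journalctl -u kubelet -f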
Check the certificate signing requests on any master node:
[root@node-131 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog 1m kubelet-bootstrap Pending
Then simply approve it:
[root@node-131 ssl]# kubectl certificate approve node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog
certificatesigningrequest "node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog" approved
[root@node-131 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog 2m kubelet-bootstrap Approved,Issued
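When several nodes bootstrap at once, every outstanding CSR can be approved in one shot (a convenience sketch, reasonable for a test environment since it approves all pending requests):
kubectl get csr -o name | xargs kubectl certificate approve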
Check whether the node has joined:
[root@node-131 ssl]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node.134 Ready <none> 31s v1.8.0
Check the client certificate pairs automatically generated on the node (kubelet-client.crt, kubelet-client.key, kubelet.crt, kubelet.key):
[root@node-134 kubernetes]# ls /etc/kubernetes/ssl/
k8s-root-ca.pem kubelet-client.crt kubelet-client.key kubelet.crt kubelet.key
Finally, start kube-proxy:
systemctl start kube-proxy
systemctl enable kube-proxy
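A quick way to confirm kube-proxy is doing its job (assuming the default iptables proxy mode, which creates the KUBE-SERVICES NAT chain):
iptables -t nat -S KUBE-SERVICES | head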
If you want the Masters to also act as Nodes, install the kubernetes-node RPM package on each Master; the configuration is basically the same as above. The differences are that a Master does not need to run nginx for load balancing, and the API Server address in bootstrap.kubeconfig and kube-proxy.kubeconfig should be changed to the current Master's IP, as sketched below.
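A minimal sketch of that change, assuming the kubeconfig files live in /etc/kubernetes and using master 172.18.169.131 as a hypothetical example (substitute each master's own IP):
sed -i 's#server: https://127.0.0.1:6443#server: https://172.18.169.131:6443#g' \
    /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig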
Verification:
[root@node-131 ssl]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node.131 Ready <none> 3s v1.8.0
node.132 Ready <none> 8s v1.8.0
node.133 Ready <none> 8s v1.8.0
node.134 Ready <none> 9m v1.8.0
This completes the node deployment.
Other posts in this series:
01 - Environment preparation
02 - etcd cluster setup
03 - The kubectl management tool
04 - Master setup
05 - Node setup
06 - addon-calico
07 - addon-kubedns
08 - addon-dashboard
09 - addon-kube-prometheus
10 - addon-EFK
11 - addon-Harbor
12 - addon-ingress-nginx
13 - addon-traefik
References:
https://mritd.me/2017/10/09/set-up-kubernetes-1.8-ha-cluster/
https://github.com/opsnull/follow-me-install-kubernetes-cluster