Deploying a Kubernetes 1.5.2 Cluster on CentOS 7.3.1611
Note: k8s 1.5.3 and 1.4.9 were released just 12 hours ago; the installation steps should be much the same.
References
Kubernetes权威指南 (The Definitive Guide to Kubernetes), 2nd edition
http://jevic.blog.51cto.com/2183736/1881455
https://my.oschina.net/u/1791060/blog/830023
http://blog.csdn.net/lic95/article/details/55015284
https://coreos.com/etcd/docs/latest/clustering.html
The posts below walk through a simple but systematic test of the k8s 1.5.x series: cluster deployment, creating Pods, DNS resolution, the dashboard, monitoring, reverse proxying, storage, and log collection. Mutual TLS with self-built certificates turned out to be impractical, so it is not covered. The series deploys everything from binary release tarballs, and applies to 1.5.2, 1.5.3, 1.5.4, and later releases; just remember to keep the sample URLs on GitHub up to date.
k8s cluster installation and deployment
http://jerrymin.blog.51cto.com/3002256/1898243
k8s cluster RC, SVC, and POD deployment
http://jerrymin.blog.51cto.com/3002256/1900260
k8s cluster add-ons: kubernetes-dashboard and kube-dns
http://jerrymin.blog.51cto.com/3002256/1900508
k8s cluster monitoring with heapster
http://jerrymin.blog.51cto.com/3002256/1904460
k8s cluster reverse-proxy load-balancing components
http://jerrymin.blog.51cto.com/3002256/1904463
k8s cluster volume mounts: NFS
http://jerrymin.blog.51cto.com/3002256/1906778
k8s cluster volume mounts: GlusterFS
http://jerrymin.blog.51cto.com/3002256/1907274
k8s cluster log collection with the ELK stack
http://jerrymin.blog.51cto.com/3002256/1907282
Architecture
k8s-master: etcd, kubernetes-server/client
k8s-node1: docker, kubernetes-node/client, flannel
k8s-node2: docker, kubernetes-node/client, flannel
I. Versions installed by YUM
On CentOS 7.3.1611, yum currently installs:
kubernetes-1.4.0-0.1.git87d9d8d.el7
Installs kubernetes-master, kubernetes-node, kubernetes-client, and their dependencies
kubernetes-master-1.4.0-0.1.git87d9d8d.el7
Provides three binaries: kube-apiserver, kube-controller-manager, kube-scheduler
kubernetes-node-1.4.0-0.1.git87d9d8d.el7
Pulls in many dependencies, including docker-1.12.5-14.el7.centos; provides kubelet and kube-proxy
kubernetes-client-1.4.0-0.1.git87d9d8d.el7
Provides one binary: kubectl
kubernetes-unit-test-1.4.0-0.1.git87d9d8d.el7
Pulls in many dependencies, including etcd-3.0.15-1.el7, golang, gcc, glibc, rsync, etc.
flannel-0.5.5-2.el7
Provides one binary: flanneld
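For reference, the whole yum stack can be pulled in with the package names above (a sketch; versions will match whatever the CentOS 7 repositories currently carry):
[root@k8s-master ~]# yum install -y kubernetes-master kubernetes-client etcd
[root@k8s-node1 ~]# yum install -y kubernetes-node kubernetes-client flannel
This document does not take the yum route; the packages are listed only for comparison.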
II. Installing the latest binary releases (the approach taken in this document)
GitHub download pages:
etcd: https://github.com/coreos/etcd/releases
flannel: https://github.com/coreos/flannel/releases
kubernetes: https://github.com/kubernetes/kubernetes/releases
docker: https://docs.docker.com/engine/installation/linux/centos/
k8s 1.5.2
https://dl.k8s.io/v1.5.2/kubernetes-server-linux-amd64.tar.gz
The server tarball provides 11 binaries: hyperkube, kubectl, kubelet, kube-scheduler, kubeadm, kube-controller-manager, kube-discovery, kube-proxy, kube-apiserver, kube-dns, kubefed
https://dl.k8s.io/v1.5.2/kubernetes-client-linux-amd64.tar.gz
The client tarball provides two binaries: kubectl and kubefed (kubelet and kube-proxy are not included; see the node section)
etcd 3.1.0
https://github.com/coreos/etcd/releases/download/v3.1.0/etcd-v3.1.0-linux-amd64.tar.gz
docker 1.13.1
https://get.docker.com/builds/Linux/x86_64/docker-1.13.1.tgz
flannel 0.7.0
https://github.com/coreos/flannel/releases/download/v0.7.0/flannel-v0.7.0-linux-amd64.tar.gz
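Assuming the URLs above, all artifacts can be fetched in one pass on any host with internet access (a sketch):
[root@k8s-master ~]# wget https://dl.k8s.io/v1.5.2/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master ~]# wget https://dl.k8s.io/v1.5.2/kubernetes-client-linux-amd64.tar.gz
[root@k8s-master ~]# wget https://github.com/coreos/etcd/releases/download/v3.1.0/etcd-v3.1.0-linux-amd64.tar.gz
[root@k8s-master ~]# wget https://get.docker.com/builds/Linux/x86_64/docker-1.13.1.tgz
[root@k8s-master ~]# wget https://github.com/coreos/flannel/releases/download/v0.7.0/flannel-v0.7.0-linux-amd64.tar.gz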
III. Deploying the environment
1. Preparation
1) Minimal OS install, then yum update to bring the system up to the latest CentOS 7.3.1611
2) Set the hostname and /etc/hosts on every node
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.17.3.20 k8s-master
172.17.3.7 k8s-node1
172.17.3.8 k8s-node2
3) Sync the clock
[root@k8s-master ~]# ntpdate ntp1.aliyun.com && hwclock -w
4) Disable SELinux and the firewall
[root@k8s-master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
[root@k8s-master ~]# systemctl disable firewalld; systemctl stop firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
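After the reboot in the next step, both changes can be double-checked:
[root@k8s-master ~]# getenforce
Disabled
[root@k8s-master ~]# systemctl is-active firewalld
inactive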
5) Reboot the servers
2. Master node deployment
1) Deploy the etcd service (single node for now)
[root@k8s-master ~]# tar zxvf etcd-v3.1.0-linux-amd64.tar.gz -C /usr/local/
[root@k8s-master ~]# mv /usr/local/etcd-v3.1.0-linux-amd64/ /usr/local/etcd
[root@k8s-master ~]# ln -s /usr/local/etcd/etcd /usr/local/bin/etcd
[root@k8s-master ~]# ln -s /usr/local/etcd/etcdctl /usr/local/bin/etcdctl
Create the systemd unit file /usr/lib/systemd/system/etcd.service:
[Unit]
Description=Etcd Server
After=network.target
[Service]
WorkingDirectory=/data/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd
Type=notify
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
WorkingDirectory is the directory where etcd stores its data; it must be created before the etcd service is started.
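For example:
[root@k8s-master ~]# mkdir -p /data/etcd /etc/etcd
(/etc/etcd holds the configuration file shown next.)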
Default single-node etcd configuration (advertising 0.0.0.0 is tolerable for this single-node test, but in practice ETCD_ADVERTISE_CLIENT_URLS should carry a routable address such as http://172.17.3.20:2379):
[root@k8s-master ~]# cat /etc/etcd/etcd.conf
ETCD_NAME=k8s1
ETCD_DATA_DIR="/data/etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
Start the etcd service:
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable etcd.service
[root@k8s-master ~]# systemctl start etcd.service
Check the etcd service:
[root@k8s-master ~]# etcdctl cluster-health
member 869f0c691c5458a3 is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[root@k8s-master ~]# etcdctl member list
869f0c691c5458a3: name=k8s1 peerURLs=http://172.17.3.20:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
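A simple read/write smoke test also works; the key name here is arbitrary (etcdctl in etcd 3.1 speaks the v2 API by default, matching the flannel commands used later):
[root@k8s-master ~]# etcdctl set /test/message hello
hello
[root@k8s-master ~]# etcdctl get /test/message
hello
[root@k8s-master ~]# etcdctl rm /test/message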
2) Deploy the kube-apiserver service
Install kube-apiserver:
[root@k8s-master ~]# tar zxvf kubernetes-server-linux-amd64.tar.gz -C /usr/local/
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-apiserver /usr/local/bin/kube-apiserver
Create symlinks for the other binaries while we are at it:
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/hyperkube /usr/local/bin/hyperkube
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubeadm /usr/local/bin/kubeadm
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-controller-manager /usr/local/bin/kube-controller-manager
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubectl /usr/local/bin/kubectl
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-discovery /usr/local/bin/kube-discovery
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-dns /usr/local/bin/kube-dns
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubefed /usr/local/bin/kubefed
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubelet /usr/local/bin/kubelet
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-proxy /usr/local/bin/kube-proxy
[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-scheduler /usr/local/bin/kube-scheduler
Configure the shared Kubernetes config file:
[root@k8s-master ~]# cat /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=false"
KUBE_LOG_DIR="--log-dir=/data/logs/kubernetes"
KUBE_LOG_LEVEL="--v=2"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://172.17.3.20:8080"
Create the systemd unit file /usr/lib/systemd/system/kube-apiserver.service:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-apiserver startup parameters:
[root@k8s-master ~]# cat /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=" "
Start the kube-apiserver service:
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master ~]# systemctl start kube-apiserver.service
Verify the service in a browser:
http://172.17.3.20:8080/
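curl works too; /healthz and /version are standard endpoints on the insecure port:
[root@k8s-master ~]# curl http://172.17.3.20:8080/healthz
ok
[root@k8s-master ~]# curl http://172.17.3.20:8080/version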
3) Deploy the kube-controller-manager service
Create the systemd unit file /usr/lib/systemd/system/kube-controller-manager.service:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_LOG_DIR \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-controller-manager startup parameters:
[root@k8s-master ~]# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
Start the kube-controller-manager service:
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master ~]# systemctl start kube-controller-manager
4) Deploy the kube-scheduler service
Create the systemd unit file /usr/lib/systemd/system/kube-scheduler.service:
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_LOG_DIR \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-scheduler startup parameters:
[root@k8s-master ~]# cat /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""
Start the kube-scheduler service:
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master ~]# systemctl start kube-scheduler
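With all three control-plane services up, kubectl can confirm their health in one command (-s points kubectl at the insecure API port); the output should look roughly like:
[root@k8s-master ~]# kubectl -s http://172.17.3.20:8080 get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}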
3. Node deployment
1) Install docker (or yum install docker)
[root@k8s-node1 ~]# tar zxvf docker-1.13.1.tgz -C /usr/local
From here on we assume docker is installed and running, to make later testing easier:
[root@k8s-node1 ~]# systemctl start docker.service
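Note that the static tgz ships only the binaries (it unpacks a docker/ directory containing dockerd, docker, docker-runc, etc.) and no systemd unit file, so the systemctl command above presupposes some wiring; a minimal sketch:
[root@k8s-node1 ~]# cp /usr/local/docker/* /usr/local/bin/
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/dockerd
LimitNOFILE=65536
Restart=on-failure
[Install]
WantedBy=multi-user.target
(This unit is extended in the flannel section below to pick up the flannel-generated network options.)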
2) Install the Kubernetes node components
Install kubelet and kube-proxy:
[root@k8s-node1 ~]# tar zxvf kubernetes-client-linux-amd64.tar.gz -C /usr/local/
[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kubectl /usr/local/bin/kubectl
[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kubefed /usr/local/bin/kubefed
The client tarball does not ship kube-proxy or kubelet; copy them over from the server tarball first (a sketch follows after the symlinks):
[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kube-proxy /usr/local/bin/kube-proxy
[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kubelet /usr/local/bin/kubelet
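A sketch of copying the two missing binaries over from the master, assuming the server tarball was unpacked there as in the master section:
[root@k8s-master ~]# scp /usr/local/kubernetes/server/bin/kubelet /usr/local/kubernetes/server/bin/kube-proxy k8s-node1:/usr/local/kubernetes/client/bin/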
3) Deploy the kubelet service
Configure the shared Kubernetes config file:
[root@k8s-node1 ~]# cat /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=false"
KUBE_LOG_DIR="--log-dir=/data/logs/kubernetes"
KUBE_LOG_LEVEL="--v=2"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://172.17.3.20:8080"
Create the systemd unit file /usr/lib/systemd/system/kubelet.service:
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/data/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_LOG_DIR \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Configure the kubelet startup parameters:
[root@k8s-node1 ~]# cat /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=k8s-node1"
KUBELET_API_SERVER="--api-servers=http://172.17.3.20:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
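As with etcd, the kubelet's WorkingDirectory and the log directory from /etc/kubernetes/config must exist before the service starts:
[root@k8s-node1 ~]# mkdir -p /data/kubelet /data/logs/kubernetes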
Start the kubelet service:
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node1 ~]# systemctl start kubelet.service
4) Deploy the kube-proxy service
Create the systemd unit file /usr/lib/systemd/system/kube-proxy.service:
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_LOG_DIR \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-proxy startup parameters:
[root@k8s-node1 ~]# cat /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
Start the kube-proxy service:
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node1 ~]# systemctl start kube-proxy.service
Verify that the node has registered with the master (kubectl defaults to http://localhost:8080, so on a node point it at the master explicitly):
[root@k8s-node1 ~]# kubectl -s http://172.17.3.20:8080 get nodes
NAME STATUS AGE
k8s-node1 Ready 9m
4. Configure the network
1) Write the flannel network configuration into etcd:
[root@k8s-master ~]# etcdctl set /k8s/network/config '{ "Network": "10.1.0.0/16" }'
{ "Network": "10.1.0.0/16" }
[root@k8s-master ~]# etcdctl get /k8s/network/config
{ "Network": "10.1.0.0/16" }
2) Install flannel (the v0.7.0 tarball unpacks flanneld and mk-docker-opts.sh directly into the target directory, which must exist first):
[root@k8s-node1 ~]# mkdir -p /usr/local/flannel
[root@k8s-node1 ~]# tar zxvf flannel-v0.7.0-linux-amd64.tar.gz -C /usr/local/flannel
[root@k8s-node1 ~]# ln -s /usr/local/flannel/flanneld /usr/local/bin/flanneld
[root@k8s-node1 ~]# ln -s /usr/local/flannel/mk-docker-opts.sh /usr/local/bin/mk-docker-opts.sh
3) Configure flannel (the fiddliest part; the wrapper script and unit file below are modeled on the configuration a yum install generates)
Create the systemd unit file /usr/lib/systemd/system/flanneld.service:
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/local/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
where flanneld-start is the following wrapper script:
[root@k8s-node1 ~]# cat /usr/local/bin/flanneld-start
#!/bin/sh
exec /usr/local/bin/flanneld \
-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
-etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
"$@"
Edit the flannel sysconfig file and point it at etcd:
[root@k8s-node1 ~]# cat /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://172.17.3.20:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
4) Start flannel
Note: stop docker before starting flannel, so that flannel can take over the docker0 bridge:
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable flanneld.service
[root@k8s-node1 ~]# systemctl stop docker.service
[root@k8s-node1 ~]# systemctl start flanneld.service
Once the flanneld service is running, it carves a subnet for this node out of the network configured in etcd. That subnet is meant for docker, but docker needs some extra plumbing before it can use it; essentially, a few key variables have to be handed over so that docker sees them at startup.
Before starting docker, make those variables take effect, e.g. source /run/flannel/docker and source /run/flannel/subnet.env:
[root@k8s-node1 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.1.89.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.1.89.1/24 --ip-masq=true --mtu=1472"
[root@k8s-node1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.89.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
Make sure docker starts with --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}; only then does docker0 become a subnet of flannel0. These startup options are produced by ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker, whose flags mean:
-k  Set the combined options key to this value (default DOCKER_OPTS=)
-d  Path to Docker env file to write to. Defaults to /run/docker_opts.env
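One way to hand these variables to docker is the same trick the yum-packaged docker.service uses: load the generated file via EnvironmentFile and put $DOCKER_NETWORK_OPTIONS on the daemon command line (a sketch, continuing the minimal unit from the docker install step):
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/local/bin/dockerd $DOCKER_NETWORK_OPTIONS
Remember to run systemctl daemon-reload after editing the unit.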
Only then start docker:
[root@k8s-node1 ~]# systemctl start docker.service
5) Confirm the result
Once everything is up, check that docker0's IP address falls inside flannel0's subnet.
After the network comes up, node1 and node2 gain a number of new route entries, and iptables rules are added automatically (even though we disabled firewalld earlier). These rules let the docker0 networks on different nodes reach one another, so the nodes talk to containers via physical NIC -> flannel0 -> docker0.
[root@k8s-node1 ~]# ip addr
6: flannel0:
link/none
inet 10.1.89.0/16 scope global flannel0
valid_lft forever preferred_lft forever
7: docker0:
link/ether 02:42:f1:e4:7c:a3 brd ff:ff:ff:ff:ff:ff
inet 10.1.89.1/24 scope global docker0
valid_lft forever preferred_lft forever
[root@k8s-node2 ~]# ip addr
6: docker0:
link/ether 02:42:33:a8:38:21 brd ff:ff:ff:ff:ff:ff
inet 10.1.8.1/24 scope global docker0
valid_lft forever preferred_lft forever
7: flannel0:
link/none
inet 10.1.8.0/16 scope global flannel0
valid_lft forever preferred_lft forever
From node1, pinging node2's docker0 address should succeed:
[root@k8s-node1 ~]# ping 10.1.8.1
PING 10.1.8.1 (10.1.8.1) 56(84) bytes of data.
64 bytes from 10.1.8.1: icmp_seq=1 ttl=62 time=0.498 ms
64 bytes from 10.1.8.1: icmp_seq=2 ttl=62 time=0.463 ms
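For a container-level check, start a throwaway container on each node and ping across; busybox is used here as an example image, and 10.1.8.2 below stands in for whatever address the container on k8s-node2 actually reports:
[root@k8s-node2 ~]# docker run -it --rm busybox sh
/ # ip addr show eth0
[root@k8s-node1 ~]# docker run -it --rm busybox ping -c 2 10.1.8.2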