The automation scripts have been uploaded to GitHub; they currently support versions 1.7.2, 1.8.4, and 1.9.1:
https://github.com/zhuchuangang/k8s-install-scripts/tree/master/kubeadm
This article uses v1.7.2 as the example.
Environment:
Hostname | IP |
---|---|
k8s-master | 172.16.120.151 |
k8s-node01 | 172.16.120.152 |
k8s-node02 | 172.16.120.153 |
==Pinning the VMware Fusion VM IPs on macOS:
sudo vi /Library/Preferences/VMware\ Fusion/vmnet8/dhcpd.conf
Append the following at the end of the file:==
host CentOS01{
hardware ethernet 00:0C:29:15:5C:F1;
fixed-address 172.16.120.151;
}
host CentOS02{
hardware ethernet 00:0C:29:D1:C4:9A;
fixed-address 172.16.120.152;
}
host CentOS03{
hardware ethernet 00:0C:29:C2:A6:93;
fixed-address 172.16.120.153;
}
The fixed-address values must fall within the subnet declared in dhcpd.conf; restart VMware Fusion after making the change.
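As a quick sanity check (a sketch under assumed defaults: the sample subnet mirrors the environment above, and the script reads a local sample file rather than the real dhcpd.conf), you can verify that every fixed-address sits inside the declared subnet:

```shell
#!/bin/sh
# Sketch: confirm each fixed-address in a dhcpd.conf lies in the declared
# subnet. Writes a small sample file; point CONF at the real
# /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf in practice.
CONF=${CONF:-./dhcpd.conf.sample}

cat > "$CONF" <<'EOF'
subnet 172.16.120.0 netmask 255.255.255.0 {
    range 172.16.120.128 172.16.120.254;
}
host CentOS01 { hardware ethernet 00:0C:29:15:5C:F1; fixed-address 172.16.120.151; }
EOF

# First three octets of the declared subnet (assumes a /24, as here).
prefix=$(awk '/^subnet/ {print $2}' "$CONF" | cut -d. -f1-3)

ok_count=0
for ip in $(grep -o 'fixed-address [0-9.]*' "$CONF" | awk '{print $2}'); do
    case $ip in
        "$prefix".*) echo "$ip OK"; ok_count=$((ok_count+1)) ;;
        *)           echo "$ip OUTSIDE $prefix.0/24" ;;
    esac
done
```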
Set the hostname on each machine (matching the table above):
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node01
hostnamectl --static set-hostname k8s-node02
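The three machines do not resolve each other's hostnames out of the box; a common companion step (an addition, not in the original post) is to append entries like these to /etc/hosts on every machine:

```
172.16.120.151 k8s-master
172.16.120.152 k8s-node01
172.16.120.153 k8s-node02
```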
Disable the firewall and SELinux, and allow bridged traffic to traverse iptables (the sed edit only takes effect after a reboot, so setenforce 0 is included to turn SELinux off immediately):
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
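The two echo writes above do not survive a reboot. A persistent equivalent (the file name is an arbitrary choice, not from the original post) is to drop the settings into /etc/sysctl.d and reload with `sysctl --system`:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```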
Configure the yum repositories. Since Google is blocked, use the repositories mirrored on Aliyun.
#docker yum repo
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=https://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF
#kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
Docker installation:
Kubernetes 1.6 was not tested and validated against Docker 1.13 or the newer Docker 17.03, so install Docker 1.12, the version officially recommended by Kubernetes.
#list the available docker versions
yum list docker-engine --showduplicates
#install docker
yum install -y docker-engine-1.12.6-1.el7.centos.x86_64
Kubernetes installation:
#list the available versions
yum list kubeadm --showduplicates
yum list kubernetes-cni --showduplicates
yum list kubelet --showduplicates
yum list kubectl --showduplicates
#install the packages
yum install -y kubernetes-cni-0.5.1-0.x86_64 kubelet-1.7.2-0.x86_64 kubectl-1.7.2-0.x86_64 kubeadm-1.7.2-0.x86_64
Alternatively, use a server outside the firewall (e.g. an Aliyun US-West instance): configure the yum repository there and download the rpm packages with yumdownloader.
Configure the kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
Download the four rpm packages: kubelet, kubeadm, kubectl, and kubernetes-cni:
yumdownloader kubelet kubeadm kubectl kubernetes-cni
Upload the downloaded rpm packages to the target servers and install them (socat is a dependency of kubeadm):
yum install -y socat
rpm -ivh *.rpm
The image versions each kubeadm release depends on are listed at:
https://kubernetes.io/docs/admin/kubeadm/
Since Google is blocked, the official Google images have been mirrored to Aliyun and can be pulled directly from within China:
registry.cn-hangzhou.aliyuncs.com/szss_k8s/etcd-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-apiserver-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-controller-manager-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-proxy-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-scheduler-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/pause-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-sidecar-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-kube-dns-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-dnsmasq-nanny-amd64
The script below downloads the images and uploads them to the mirror:
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail
KUBE_VERSION=v1.7.2
KUBE_PAUSE_VERSION=3.0
ETCD_VERSION=3.0.17
DNS_VERSION=1.14.4
GCR_URL=gcr.io/google_containers
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/szss_k8s
images=(kube-proxy-amd64:${KUBE_VERSION}
kube-scheduler-amd64:${KUBE_VERSION}
kube-controller-manager-amd64:${KUBE_VERSION}
kube-apiserver-amd64:${KUBE_VERSION}
pause-amd64:${KUBE_PAUSE_VERSION}
etcd-amd64:${ETCD_VERSION}
k8s-dns-sidecar-amd64:${DNS_VERSION}
k8s-dns-kube-dns-amd64:${DNS_VERSION}
k8s-dns-dnsmasq-nanny-amd64:${DNS_VERSION})
for imageName in "${images[@]}"; do
  docker pull "$GCR_URL/$imageName"
  docker tag "$GCR_URL/$imageName" "$ALIYUN_URL/$imageName"
  docker push "$ALIYUN_URL/$imageName"
  docker rmi "$ALIYUN_URL/$imageName"
done
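On the cluster machines the reverse direction may be needed if a component ignores KUBE_REPO_PREFIX: pull each image from the Aliyun mirror and retag it under its original gcr.io name. The sketch below (an addition, not from the original post; the image list is abbreviated) only generates the docker commands into a file, so they can be reviewed and then run with `sh gen-pull.sh` on a machine that has docker:

```shell
#!/bin/sh
# Generate (not execute) the node-side pull/retag commands into gen-pull.sh.
GCR_URL=gcr.io/google_containers
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/szss_k8s

: > gen-pull.sh
for image in kube-proxy-amd64:v1.7.2 kube-scheduler-amd64:v1.7.2 \
             kube-controller-manager-amd64:v1.7.2 kube-apiserver-amd64:v1.7.2 \
             pause-amd64:3.0 etcd-amd64:3.0.17; do
    echo "docker pull $ALIYUN_URL/$image"                >> gen-pull.sh
    echo "docker tag $ALIYUN_URL/$image $GCR_URL/$image" >> gen-pull.sh
    echo "docker rmi $ALIYUN_URL/$image"                 >> gen-pull.sh
done
```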
Configure the pod infrastructure (pause) image for the kubelet:
cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/szss_k8s/pause-amd64:3.0"
EOF
Docker 1.12.6 uses cgroup-driver=cgroupfs, so the kubelet must be switched to match:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
See https://github.com/kubernetes/kubernetes/issues/43805 for the background.
Enable and start docker and the kubelet:
systemctl enable docker
systemctl enable kubelet
systemctl start docker
systemctl start kubelet
First run the init step on the master. --apiserver-advertise-address is the master IP, and the network given to --pod-network-cidr must match the one configured in kube-flannel.yml (used during the flannel installation below).
export KUBE_REPO_PREFIX="registry.cn-hangzhou.aliyuncs.com/szss_k8s"
export KUBE_ETCD_IMAGE="registry.cn-hangzhou.aliyuncs.com/szss_k8s/etcd-amd64:3.0.17"
kubeadm init --apiserver-advertise-address=172.16.120.151 --kubernetes-version=v1.7.2 --pod-network-cidr=10.244.0.0/16
Output:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [k8s-node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.120.151]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 140.504534 seconds
[token] Using token: 242b80.86d585ebd6358b08
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 242b80.86d585ebd6358b08 172.16.120.151:6443
Configure kubectl as instructed (running as root here, so without sudo):
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install flannel on the master node:
kubectl --namespace kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
rm -rf kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
sed -i 's/quay.io\/coreos\/flannel:v0.8.0-amd64/registry.cn-hangzhou.aliyuncs.com\/szss_k8s\/flannel:v0.8.0-amd64/g' ./kube-flannel.yml
kubectl --namespace kube-system apply -f ./kube-flannel.yml
Verify with:
$kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
Each node machine must go through installation steps 1-3 above; after that, run the following on it to join the cluster:
export KUBE_REPO_PREFIX="registry.cn-hangzhou.aliyuncs.com/szss_k8s"
export KUBE_ETCD_IMAGE="registry.cn-hangzhou.aliyuncs.com/szss_k8s/etcd-amd64:3.0.17"
kubeadm join --token 242b80.86d585ebd6358b08 172.16.120.151:6443 --skip-preflight-checks
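A kubeadm bootstrap token always has the form [a-z0-9]{6}.[a-z0-9]{16}; a quick format check before joining (a sketch, not part of the original post) catches copy-paste mistakes:

```shell
#!/bin/sh
# Validate the bootstrap token format before running kubeadm join.
TOKEN=242b80.86d585ebd6358b08
if echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
    echo "token format OK"
else
    echo "token format INVALID" >&2
fi
```

For the token above this prints "token format OK".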
Verify the nodes with:
$kubectl get nodes
NAME STATUS AGE VERSION
k8s-node01 Ready 9h v1.7.2
k8s-node02 Ready 9h v1.7.2
References:
【Installing Kubernetes 1.6 with kubeadm】http://blog.frognew.com/2017/04/kubeadm-install-kubernetes-1.6.html
【Quickly deploying a Kubernetes 1.7 cluster on Red Hat 7/CentOS 7 with kubeadm】http://dockone.io/article/2514
【A recommended installation approach for China】http://zerosre.com/2017/05/11/k8s新版本安装/
【Installing k8s happily from within China】https://my.oschina.net/xdatk/blog/895645
【Highly available Kubernetes cluster deployment based on kubeadm】https://github.com/cookeem/kubeadm-ha/blob/master/README_CN.md