VirtualBox, two NICs per VM, giving each host both an intranet and an external network. (There is a pitfall here, see below!)
Host | Intranet IP / NAT IP | Spec | FQDN
---|---|---|---
master | 192.168.56.4 / 10.0.3.15 | 2 CPU, 1 GB | master.smokelee.com
node1 | 192.168.56.5 / 10.0.3.15 | 1 CPU, 1 GB | node1.smokelee.com
node2 | 192.168.56.6 / 10.0.3.15 | 1 CPU, 1 GB | node2.smokelee.com
Adjust /etc/hosts (all hosts)
192.168.56.4 master.smokelee.com
192.168.56.5 node1.smokelee.com
192.168.56.6 node2.smokelee.com
Disable the swap partition (all hosts)
Temporarily:
swapoff -a
Permanently:
vim /etc/fstab
Find the swap entry and comment it out with #.
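The fstab edit can also be scripted with sed instead of vim; a minimal sketch, demonstrated on a temporary copy (on the real hosts the file is /etc/fstab, and the assumption is that the swap entry contains the word "swap"):

```shell
# Demo copy of a typical CentOS 7 fstab
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comment out every line that mounts swap
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line gains a leading #; the root filesystem line is untouched.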
Adjust the kernel parameters in /etc/sysctl.conf (all hosts)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Disable SELinux and firewalld (all hosts).
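A sketch of the usual commands for this step; the persistent SELinux change is shown below on a temporary copy of the config file (apply the same sed to the real /etc/selinux/config on each host):

```shell
# On the real hosts, stop enforcing now and kill the firewall:
#   setenforce 0
#   systemctl stop firewalld && systemctl disable firewalld
# Persistent SELinux change, demonstrated on a demo copy of /etc/selinux/config:
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.demo
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /tmp/selinux.demo
grep '^SELINUX=' /tmp/selinux.demo
```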
Adjust the yum repositories (all hosts)
# Switch to the Aliyun CentOS 7 repo
yum install wget -y
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Add the Kubernetes repo
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# save and quit vim
# Refresh the repo cache
yum clean all
yum makecache
If an older version was ever installed on a host, it must be removed first. Earlier posts in this no-brainer series used 1.5.2:
yum remove kubernetes-master kubernetes-node etcd flannel
Install the base components
yum install kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
If a conflict occurs, remove 1.5.2 or anything older first. Readers who followed earlier no-brainer k8s posts may have the distro's default 1.5.2 packages installed.
Download the images
Create download_img.sh (script by loong576 from www.kubernetes.org.cn):
#!/bin/bash
# The registry is also loong576's Aliyun mirror; I'm lazy ^_^
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
docker pull $url/$imagename
docker tag $url/$imagename k8s.gcr.io/$imagename
docker rmi -f $url/$imagename
done
chmod +x download_img.sh && ./download_img.sh
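How the script derives the image names: kubeadm config images list prints full paths such as k8s.gcr.io/kube-proxy:v1.16.4 (format assumed here), and the awk -F '/' splits on the slash to keep only the name:tag part. A quick check of that parsing on one sample line:

```shell
# Sample line as printed by `kubeadm config images list`
echo "k8s.gcr.io/kube-proxy:v1.16.4" | awk -F '/' '{print $2}'
# → kube-proxy:v1.16.4
```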
Check after it runs:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.16.4 3722a80984a0 2 months ago 217 MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.16.4 3722a80984a0 2 months ago 217 MB
k8s.gcr.io/kube-controller-manager v1.16.4 fb4cca6b4e4c 2 months ago 163 MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.16.4 fb4cca6b4e4c 2 months ago 163 MB
k8s.gcr.io/kube-scheduler v1.16.4 2984964036c8 2 months ago 87.3 MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.16.4 2984964036c8 2 months ago 87.3 MB
k8s.gcr.io/kube-proxy v1.16.4 091df896d78f 2 months ago 86.1 MB
registry.aliyuncs.com/google_containers/kube-proxy v1.16.4 091df896d78f 2 months ago 86.1 MB
k8s.gcr.io/etcd 3.3.15-0 b2756210eeab 5 months ago 247 MB
registry.aliyuncs.com/google_containers/etcd 3.3.15-0 b2756210eeab 5 months ago 247 MB
k8s.gcr.io/coredns 1.6.2 bf261d157914 6 months ago 44.1 MB
registry.aliyuncs.com/google_containers/coredns 1.6.2 bf261d157914 6 months ago 44.1 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742 kB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742 kB
Note: download_img.sh automatically re-tags the images, so the later install steps will never try to pull from Google. Import the images onto the worker nodes yourself, especially kube-proxy and pause:
docker save k8s.gcr.io/pause:3.1 > pause.tar
docker save k8s.gcr.io/kube-proxy:v1.16.4 > proxy.tar
Run on each node:
docker load < pause.tar
docker load < proxy.tar
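Copying the tarballs from the master to both nodes can be looped over the hostnames from the table above; printed here as a dry run (drop the echo to actually copy, and adjust the target directory as needed):

```shell
# Dry run: print the scp commands instead of executing them
for n in node1.smokelee.com node2.smokelee.com; do
  echo scp pause.tar proxy.tar root@"$n":/root/
done
```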
Deploy the control plane
kubeadm init --apiserver-advertise-address=192.168.56.4 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.4 --service-cidr=10.254.0.0/16 --pod-network-cidr=10.244.0.0/16
Parameter notes:
--apiserver-advertise-address: the address the API server advertises, here the master's intranet IP
--image-repository: pull the control-plane images from the Aliyun mirror instead of k8s.gcr.io
--kubernetes-version: must match the installed kubeadm/kubelet version (v1.16.4)
--service-cidr: the virtual IP range assigned to Services
--pod-network-cidr: the Pod network range; 10.244.0.0/16 is flannel's default
When output like the following appears, run the final command (the one starting with kubeadm) on node1 and node2 to join them to the cluster:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.4:6443 --token l6q448.jrqk9pj1ipi8fgn9 \
--discovery-token-ca-cert-hash sha256:78a10e6e6bca9c0090d9e9b2002b01d135b2a5c70f8240a7954e0d58f8d0052f
The master is not Ready
Run kubectl get node
The master shows NotReady.
Install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
If, after a node joins, images cannot be downloaded and Pods fail to run, it is usually because the images came from the Aliyun mirror. Re-tag them so Docker finds them under the expected names:
$docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
$docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.16.4 k8s.gcr.io/kube-proxy:v1.16.4
Note: if you ran the script after 2.3 carefully, this problem should not occur in theory.
Install
$yum install kubelet-1.16.4 kubeadm-1.16.4 docker
Import the image tarballs (see the two docker load lines in 2.3).
Join the cluster (see the command starting with kubeadm in 2.4).
For more detail, see the companion article "k8s no-brainer: analyzing and fixing flannel cross-node connectivity" (《k8s无脑-分析flannel跨Node不通的分析,解决办法》).
Only the problems hit during this deployment are covered below.
Pick any flannel Pod instance and inspect its output:
$docker ps -a  (can be run on any node)
Find the container running flannel.
$docker logs 9895357d488d  (the flannel container)
I0226 17:21:01.510702 1 main.go:514] Determining IP address of default interface
I0226 17:21:01.513557 1 main.go:527] Using interface with name enp0s8 and address 10.0.3.15
I0226 17:21:01.513576 1 main.go:544] Defaulting external address to interface address (10.0.3.15)
I0226 17:21:01.612040 1 kube.go:126] Waiting 10m0s for node controller to sync
$ip route show
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 metric 101
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.254.49.0/24 dev docker0 proto kernel scope link src 10.254.49.1
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.5 metric 100
Flannel went off "Determining" the default-route interface on its own and picked the external NIC (enp0s8) instead of the intranet NIC (enp0s3). Too clever by half!
Solution A
Modify the flannel configuration live
# Download the config file locally
$wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# Open the live resource for editing
$kubectl edit -f kube-flannel.yml # edit the resource in place
Find the line image: quay.io/coreos/flannel:v0.11.0-amd64
In the args: section just below it, add:
- --iface=enp0s3
# Careful! This is my setup: all three VMs use the same NIC layout, so every node is set to this interface.
After saving, the system automatically rolls the change out to all the flannel Pods. Once the rollout finishes, cross-node access works.
Solution B
Change the default route to the intranet NIC
Temporary:
route add default gw 192.168.56.4
Permanent (watch the file names):
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
DEFROUTE=yes
# save with :w
vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEFROUTE=no
# save with :w
service network restart
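After the restart you can confirm which NIC now owns the default route. The parsing below is demonstrated on a captured sample line (hypothetical; on a real host, pipe the output of ip route show default into the same awk):

```shell
# Sample default-route line after the fix
echo "default via 192.168.56.4 dev enp0s3 proto static metric 100" \
  | awk '{for(i=1;i<NF;i++) if($i=="dev") print $(i+1)}'
# → enp0s3
```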