Installing a Kubernetes Cluster with kubeadm

We will install Kubernetes 1.13.3 on CentOS 7 using kubeadm.

 

I. Architecture Planning

ip               hostname   role
192.168.13.41    k8s        master
192.168.13.123   k8s1       node

II. Set the Hostnames

Use hostnamectl to set the hostname to k8s on the master and k8s1 on the node:

hostnamectl set-hostname k8s     # on 192.168.13.41 (master)
hostnamectl set-hostname k8s1    # on 192.168.13.123 (node)

III. Edit /etc/hosts

vim /etc/hosts

Add the following entries on both machines:

192.168.13.41 k8s
192.168.13.123 k8s1
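The two host entries can also be appended in one pass so every node gets an identical mapping. A small sketch, using the addresses from the architecture plan; HOSTS defaults to a scratch file here so it is safe to dry-run, and you would point it at /etc/hosts on the real machines:

```shell
# Append the cluster's host entries (from the architecture plan) in one pass.
# HOSTS defaults to a scratch file for a safe dry run; set HOSTS=/etc/hosts
# on the real nodes.
HOSTS="${HOSTS:-./hosts.demo}"
cat >> "$HOSTS" <<'EOF'
192.168.13.41 k8s
192.168.13.123 k8s1
EOF
grep k8s "$HOSTS"   # confirm both entries landed
```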

IV. Install Kubernetes with kubeadm

1. Disable the firewall

If the firewall is enabled on the hosts, the ports required by each Kubernetes component must be opened; see the "Check required ports" section of the official Installing kubeadm guide. For simplicity, disable the firewall on every node:

sudo systemctl stop firewalld.service     # stop firewalld now
sudo systemctl disable firewalld.service  # keep it from starting at boot
sudo firewall-cmd --state                 # verify it is not running

2. Disable SELinux

sudo setenforce 0                # turn enforcement off immediately
sudo vi /etc/selinux/config      # set SELINUX=disabled so it persists across reboots

3. Turn off swap

$ swapoff -a    # also comment out the swap line in /etc/fstab so it stays off after a reboot

4. Fix the bridged-traffic routing problem

$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
$ sysctl -p /etc/sysctl.d/k8s.conf

Possible problem: "is an unknown key"

Error output:

error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key

Fix:

sudo modprobe bridge
sudo lsmod | grep bridge
sudo sysctl -p /etc/sysctl.d/k8s.conf

Possible problem: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

Error output:

[root@localhost ~]# sysctl -p /etc/sysctl.d/k8s.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

Fix:

modprobe br_netfilter
ls /proc/sys/net/bridge
sudo sysctl -p /etc/sysctl.d/k8s.conf
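Both fixes above boil down to "make sure the bridge netfilter module is loaded before applying the sysctl file". A combined sketch of that ordering; the DRY_RUN guard is an assumption added here so the logic can be read and exercised without root:

```shell
# Load br_netfilter only when its sysctl keys are missing, then apply the
# settings. DRY_RUN=1 (the default here) only prints the privileged
# commands; set DRY_RUN=0 on a real node to execute them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

if [ ! -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    run modprobe br_netfilter
fi
run sysctl -p /etc/sysctl.d/k8s.conf
```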

5. Install Docker (Aliyun mirror)

$ curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
$ systemctl enable docker && systemctl start docker

6. Install kubelet, kubeadm, and kubectl (Aliyun mirror)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Note: this installs the latest packages from the repo; to match the v1.13.3 images used below, you can pin the versions instead: yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3

7. Run the initialization on the master

kubeadm init --kubernetes-version=1.13.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.13.41

kubeadm pulls the control-plane images from Google's k8s.gcr.io registry, which is generally unreachable from mainland China, so this step usually fails. Based on the error output, pull the equivalent images from a domestic mirror:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

Re-tag these images to the names kubeadm expects:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
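The fourteen pull/tag commands can also be driven from a single image list. A sketch: the docker invocations are echoed so it can be dry-run on any machine; on the docker host, pipe the output to sh or drop the echo:

```shell
# One list drives both the pulls and the retags; the image names and
# versions match the commands above. kubeadm expects target names
# without the -amd64 suffix.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
IMAGES="kube-apiserver-amd64:v1.13.3 kube-controller-manager-amd64:v1.13.3
kube-scheduler-amd64:v1.13.3 kube-proxy-amd64:v1.13.3
etcd-amd64:3.2.24 pause:3.1 coredns:1.2.6"
for img in $IMAGES; do
    target="k8s.gcr.io/$(echo "$img" | sed 's/-amd64//')"
    echo "docker pull $MIRROR/$img"
    echo "docker tag $MIRROR/$img $target"
done
```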

Re-run the initialization on the master:

kubeadm init --kubernetes-version=1.13.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.13.41

Note: the master needs the images below (this example pulls them from a private registry at 192.168.28.230:8551; substitute your own registry, or use the Aliyun images re-tagged above):

docker pull 192.168.28.230:8551/kube-apiserver:v1.13.3
docker pull 192.168.28.230:8551/kube-controller-manager:v1.13.3
docker pull 192.168.28.230:8551/kube-scheduler:v1.13.3
docker pull 192.168.28.230:8551/kube-proxy:v1.13.3
docker pull 192.168.28.230:8551/pause:3.1
docker pull 192.168.28.230:8551/etcd:3.2.24
docker pull 192.168.28.230:8551/coredns:1.2.6

The node only needs the following images:

docker pull 192.168.28.230:8551/kube-proxy:v1.13.3
docker pull 192.168.28.230:8551/pause:3.1
docker pull 192.168.28.230:8551/coredns:1.2.6

8. Create the kube directory and add the kubectl config (on the master)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Install the flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

You can also download the manifest first and then run kubectl apply against the local file.

10. Join the node

On the node, run the join command printed by kubeadm init (your token and hash will differ):

kubeadm join 192.168.13.41:6443 --token z928ge.lhvxdqjfjrcw69lo --discovery-token-ca-cert-hash sha256:c441e0100cca097d27bd5364c56ef2f88499581b8d12c104ac0456b4157a2f4b
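If the token has expired, running kubeadm token create --print-join-command on the master prints a fresh join line. The sha256 value is the hash of the cluster CA's public key; the sketch below shows that derivation. It generates a throwaway demo CA so it can run anywhere — on the real master, set CA_CRT=/etc/kubernetes/pki/ca.crt and skip the generation step:

```shell
# Derive the --discovery-token-ca-cert-hash value from a CA certificate.
# CA_CRT defaults to a throwaway self-signed cert generated here purely
# for demonstration; on the master use /etc/kubernetes/pki/ca.crt.
CA_CRT="${CA_CRT:-./demo-ca.crt}"
if [ ! -f "$CA_CRT" ]; then
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -keyout ./demo-ca.key -out "$CA_CRT" -subj /CN=demo 2>/dev/null
fi
HASH=$(openssl x509 -pubkey -in "$CA_CRT" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```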

Then list the nodes on the master:

[root@k8s ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
k8s    Ready    master   36m   v1.13.3
k8s1   Ready    <none>   19m   v1.13.3
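A quick scripted check that every node reached Ready. The kubectl output is stubbed with a heredoc matching the listing above so the parsing can be shown standalone; on the master, replace the heredoc with the real kubectl get nodes output:

```shell
# Print any node whose STATUS column is not Ready; empty output means
# the whole cluster is up. The heredoc stands in for `kubectl get nodes`.
not_ready=$(awk 'NR > 1 && $2 != "Ready" { print $1 }' <<'EOF'
NAME   STATUS   ROLES    AGE   VERSION
k8s    Ready    master   36m   v1.13.3
k8s1   Ready    <none>   19m   v1.13.3
EOF
)
[ -z "$not_ready" ] && echo "all nodes Ready"
```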

Reposted from: http://www.16boke.com/article/detail/251
