I. Prerequisites
Environment: either VirtualBox or VMware works. In a virtual machine, be sure to give the VM at least 2 CPU cores.
My setup: OS CentOS 7.7, 4 GB RAM, 80 GB disk.
Steps:
1. If you are on a virtual machine, disable the firewall first; on a cloud host you can simply open the required ports instead.
systemctl stop firewalld
systemctl disable firewalld
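For the cloud-host case, here is a sketch of opening just the ports Kubernetes needs instead of disabling firewalld entirely. The port list is the standard v1.18-era control-plane set (not from the original text); the script only prints the firewall-cmd commands, so review them and pipe the output to sh to apply:

```shell
# Standard control-plane ports: API server (6443), etcd (2379-2380),
# kubelet (10250), kube-scheduler (10251), kube-controller-manager (10252).
cmds=""
for port in 6443 2379-2380 10250 10251 10252; do
  cmds="${cmds}firewall-cmd --permanent --add-port=${port}/tcp
"
done
cmds="${cmds}firewall-cmd --reload"
# Only prints the commands; pipe to `sh` on the real host to run them.
printf '%s\n' "$cmds"
```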
2. Disable SELinux:
setenforce 0
To make the change permanent, edit /etc/selinux/config (vi /etc/selinux/config) and set:
SELINUX=disabled
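The config edit can also be done non-interactively with sed. Shown here against a temporary copy so the sketch is safe to run anywhere; on the real host, point SEFILE at /etc/selinux/config:

```shell
# Demo file standing in for /etc/selinux/config (assumption: the real file
# currently reads SELINUX=enforcing; sed flips it to disabled).
SEFILE=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$SEFILE"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$SEFILE"
grep '^SELINUX=' "$SEFILE"   # prints SELINUX=disabled
```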
3. Create /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
4. Apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
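Steps 3 and 4 can be combined into a single heredoc. The sketch writes to a temporary file so it runs anywhere; on the real host the target is /etc/sysctl.d/k8s.conf:

```shell
CONF=$(mktemp)   # stands in for /etc/sysctl.d/k8s.conf
cat > "$CONF" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# On the real host (as root): modprobe br_netfilter && sysctl -p "$CONF"
grep -c '= 1' "$CONF"   # prints 3
```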
II. Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Then run: yum makecache fast
Install the latest Docker version: yum install docker-ce. At the time of writing that is 18.09; check the installed version with docker -v.
If you get a "container-selinux >= 2.9" dependency error, install container-selinux-2.9-4.el7.noarch.rpm first.
Start Docker and enable it at boot: systemctl start docker && systemctl enable docker
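Optionally, Docker can also be pointed at a registry mirror so later image pulls are faster. This is not in the original steps and the mirror URL below is a placeholder; on the real host the file is /etc/docker/daemon.json, followed by systemctl restart docker:

```shell
DAEMON_JSON=$(mktemp)   # stands in for /etc/docker/daemon.json
# The mirror URL is a placeholder; substitute one you trust.
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF
grep -q 'registry-mirrors' "$DAEMON_JSON" && echo "mirror configured"
```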
III. Install kubeadm and kubelet
1. Configure the Kubernetes yum repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Then run: yum makecache
2. Install kubeadm and the related tools:
yum install -y kubelet kubeadm kubectl kubernetes-cni
3. Enable and start kubelet:
systemctl enable kubelet && systemctl start kubelet
4. List the images kubeadm will use:
[root@localhost ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
5. Pull the images and retag them; the versions must match the list above exactly:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2 k8s.gcr.io/kube-apiserver:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2 k8s.gcr.io/kube-controller-manager:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2 k8s.gcr.io/kube-scheduler:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2 k8s.gcr.io/kube-proxy:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
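The seven pull/tag pairs above can also be generated from the image list instead of typed by hand. This sketch prints the docker commands from a hard-coded list matching the output of kubeadm config images list; pipe the output to sh to execute them:

```shell
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
# In real use, derive this list from:
#   images=$(kubeadm config images list | sed 's|^k8s.gcr.io/||')
images="kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 kube-scheduler:v1.18.2 kube-proxy:v1.18.2 pause:3.2 etcd:3.4.3-0 coredns:1.6.7"
cmds=""
for img in $images; do
  cmds="${cmds}docker pull $MIRROR/$img
docker tag $MIRROR/$img k8s.gcr.io/$img
"
done
# Only prints the commands; pipe to `sh` to actually pull and tag.
printf '%s' "$cmds"
```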
docker images should now show each image under both names:
k8s.gcr.io/kube-proxy v1.18.2 0d40868643c6 4 weeks ago 117MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.18.2 0d40868643c6 4 weeks ago 117MB
k8s.gcr.io/kube-apiserver v1.18.2 6ed75ad404bd 4 weeks ago 173MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.18.2 6ed75ad404bd 4 weeks ago 173MB
k8s.gcr.io/kube-controller-manager v1.18.2 ace0a8c17ba9 4 weeks ago 162MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.18.2 ace0a8c17ba9 4 weeks ago 162MB
k8s.gcr.io/kube-scheduler v1.18.2 a3099161e137 4 weeks ago 95.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.18.2 a3099161e137 4 weeks ago 95.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 months ago 683kB
k8s.gcr.io/pause 3.2 80d28bedfe5d 3 months ago 683kB
k8s.gcr.io/coredns 1.6.7 67da37a9a360 3 months ago 43.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 3 months ago 43.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 6 months ago 288MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 6 months ago 288MB
6. Settings before running init
Recent Kubernetes releases require system swap to be off; with the default configuration kubelet will not start otherwise. The kubelet flag --fail-swap-on=false can lift this restriction, but disabling swap is cleaner.
Turn swap off:
swapoff -a
Edit /etc/fstab and comment out the swap mount so it stays off across reboots; confirm with free -m that swap shows 0.
To tune swappiness, add the line vm.swappiness=0 to /etc/sysctl.d/k8s.conf,
then run sysctl -p /etc/sysctl.d/k8s.conf to apply it.
Reload systemd and restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
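Two optional checks for this step, sketched below. The fstab sed pattern in the comment is my assumption about a typical swap line, so review it before running on a real host:

```shell
# Persist the change: comment out the swap line in /etc/fstab, e.g.
#   sed -i '/ swap / s/^[^#]/#&/' /etc/fstab
# Then verify via the kernel's own counter; SwapTotal should read 0 kB
# after `swapoff -a`.
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "SwapTotal: ${swap_kb} kB"
```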
7. Initialize Kubernetes. Replace 192.168.x.x with your machine's IP address:
kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.x.x --ignore-preflight-errors=Swap
The following output indicates a successful install:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.124.7:6443 --token 8k3uq2.my3wljoai0zleev4 \
--discovery-token-ca-cert-hash sha256:e48902fd88184c9733d4eabc8719b0a50eadae8433e1dbc8326884cebff27ddc
8. Run the commands from the output above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
9. Check the cluster health:
kubectl get cs
The master node does not schedule regular workloads by default. To build an all-in-one single-node cluster, remove the master taint:
kubectl taint nodes --all node-role.kubernetes.io/master-
which should print something like: node/master untainted
10. Check the node status:
[root@k8s-master software]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 3m39s v1.18.2
The node is NotReady because the CNI network plugin has not been installed yet.
11. Install the CNI network plugin (flannel).
docker pull quay.io/coreos/flannel:v0.12.0-amd64
Running that line fails because quay.io is unreachable from inside China, so pull from the quay-mirror.qiniu.com mirror and retag the image yourself:
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
Inspecting the pending pod shows that it depends on version v0.12.0, which is why that tag is used here.
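As a side note on how the required flannel version can be determined: the manifest's image: line names the exact tag. A small extraction sketch against a sample snippet (in real use, grep the downloaded kube-flannel.yml; the snippet format is an assumption about that file):

```shell
# Sample of the image line as it appears in kube-flannel.yml.
manifest='      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64'
# Pull the version tag out of the image reference.
version=$(printf '%s\n' "$manifest" | sed -n 's/.*flannel:\(v[0-9.]*\)-amd64/\1/p')
echo "$version"   # v0.12.0
```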
mkdir -p /etc/cni/net.d/
vi /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The kubectl apply command may fail with "raw.githubusercontent.com was refused". This turns out to be caused by DNS pollution, and adding a hosts entry fixes it:
sudo vim /etc/hosts
Add the line 199.232.68.133 raw.githubusercontent.com, then rerun
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
12. Check the system pods:
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-gnfbr 1/1 Running 1 32m
coredns-66bff467f8-t7hp2 1/1 Running 1 32m
etcd-master 1/1 Running 1 32m
kube-apiserver-master 1/1 Running 1 32m
kube-controller-manager-master 1/1 Running 1 32m
kube-flannel-ds-amd64-9p96z 1/1 Running 1 24m
kube-proxy-9bp2q 1/1 Running 1 32m
kube-scheduler-master 1/1 Running 1 32m
All pods in Running state means everything is working.
13. Check the node status again:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 32m v1.18.2
The node is now in Ready state: the single-node Kubernetes deployment is complete.
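If you script the install, the Ready check above can be automated with a small wait loop. The parsing is exercised against sample kubectl output so the sketch runs without a cluster:

```shell
# is_ready succeeds once any node's STATUS column reads exactly "Ready"
# (the header line is skipped).
is_ready() { echo "$1" | awk 'NR>1 && $2=="Ready" {found=1} END {exit !found}'; }

sample="NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32m   v1.18.2"
is_ready "$sample" && echo "node is Ready"
# Real use: until is_ready "$(kubectl get nodes)"; do sleep 5; done
```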
Handy commands for troubleshooting and cleanup:
journalctl -f -u kubelet    # follow the kubelet logs
kubectl get pod -n kube-system    # list all system pods
kubectl get pods -o wide
kubectl describe pod kube-flannel-ds-amd64-2dqlf -n kube-system    # kube-flannel-ds-amd64-2dqlf is the pod name
kubeadm reset    # tear the cluster down to start over
rm -rf /root/.kube/config