Linux: Deploying a k8s Cluster with kubeadm

Official documentation

I. Configuration

1. Base environment

master    192.168.1.40
node01    192.168.1.41
node02    192.168.1.42

2. Disable the firewall and SELinux

[root@master ~]# systemctl  stop firewalld
[root@master ~]# systemctl  disable firewalld
[root@master ~]# setenforce 0
setenforce: SELinux is disabled
[root@master ~]# iptables -F   //flush existing rules
[root@master ~]# iptables-save	//prints the current rules; redirect to a file if you want to keep them
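
The setenforce change above only lasts until the next reboot. To make it permanent on hosts where SELinux is actually enabled, the config file can be edited as well (a minimal sketch assuming the standard CentOS 7 path; run it on all three machines):

[root@master ~]# sed -ri 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config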

3. Disable swap

[root@master ~]# swapoff -a
[root@master ~]# vim /etc/fstab
......
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
......
[root@master ~]# free -h    //verify that swap is really off
              total        used        free      shared  buff/cache   available
Mem:           2.7G        534M        1.7G         13M        562M        2.0G
Swap:            0B          0B          0B
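
Editing /etc/fstab by hand works; if you prefer a one-liner, something like the following comments out every active swap entry (a sketch, back the file up first):

[root@master ~]# sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab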

4. Configure name resolution and passwordless SSH

[root@master ~]# vim  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.40 master
192.168.1.41 node01
192.168.1.42 node02
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:zxzn04epGiUqZ26QYgOyPWZ0VET5wJTWZ9XRstFvr0g root@master
The key's randomart image is:
+---[RSA 2048]----+
|     *=+   ....+ |
|    . * . o   + o|
|   . . o o     +.|
|. o .   .     . o|
| = o   .S o o  ..|
|. = + o  = *E. o.|
| o o o..+ =.o.+..|
|       =.  ..o.. |
|       .. ...    |
+----[SHA256]-----+
[root@master ~]# ssh-copy-id root@node01
[root@master ~]# ssh-copy-id root@node02
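
With the keys distributed, /etc/hosts can be pushed to both nodes in one loop so that all three machines resolve the same names (assuming the host names above):

[root@master ~]# for h in node01 node02; do scp /etc/hosts $h:/etc/hosts; done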

5. Enable the iptables bridging feature

  • The worker nodes need this step as well!
[root@master ~]# vim /etc/sysctl.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

PS: If sysctl -p reports "No such file or directory", load the br_netfilter module and run it again:

[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
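
modprobe only loads br_netfilter for the current boot. On CentOS 7 (systemd) the module can be loaded automatically at startup, for example:

[root@master ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@master ~]# lsmod | grep br_netfilter	//verify the module is currently loaded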

6. Install Docker and add the Kubernetes yum repository

Install Docker

[root@master ~]#  yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum makecache fast
[root@master ~]# yum -y install docker-ce
[root@master ~]# systemctl  start docker
[root@master ~]# systemctl  enable  docker
[root@master ~]# vim /etc/docker/daemon.json   //configure a registry mirror (accelerator)
{
    "registry-mirrors": ["https://z1pa8k3e.mirror.aliyuncs.com"]
}
[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl restart docker
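
Docker has to be installed and configured the same way on node01 and node02. To confirm the mirror configuration was picked up after the restart, docker info can be checked:

[root@master ~]# docker info | grep -A1 "Registry Mirrors"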

Add the Kubernetes yum repository

[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master ~]# scp -rp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/
kubernetes.repo                                                       100%  274    15.4KB/s   00:00    
[root@master ~]# scp -rp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/
kubernetes.repo         
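
Before installing the packages, a quick check that the new repository is visible on each machine:

[root@master ~]# yum repolist | grep -i kubernetes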

II. Deploy the master node

1. Install components on the master node

PS: three components: kubectl, kubelet and kubeadm

[root@master ~]# yum -y install kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
[root@master ~]# systemctl enable kubelet
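
You can confirm that the pinned 1.15.0 versions were actually installed before moving on:

[root@master ~]# rpm -qa | grep -E 'kubelet|kubeadm|kubectl'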

2. Import the images

[root@master ~]# unzip images.zip 
Archive:  images.zip
   creating: images/
  inflating: images/coredns-1-3-1.tar  
  inflating: images/etcd-3-3-10.tar  
  inflating: images/flannel-0.11.0.tar  
  inflating: images/kube-apiserver-1-15.tar  
  inflating: images/kube-controller-1-15.tar  
  inflating: images/kube-proxy-1-15.tar  
  inflating: images/kube-scheduler-1-15.tar  
  inflating: images/pause-3-1.tar    
[root@master ~]# vim images.sh
#!/bin/bash
# load every image archive under /root/images into Docker
for i in /root/images/*
do
    docker load < $i
done
echo -e "\e[1;31mImport complete\e[0m"
[root@master ~]# sh images.sh 
fb61a074724d: Loading layer  479.7kB/479.7kB
c6a5fc8a3f01: Loading layer  40.05MB/40.05MB
......
Import complete

Or, as a one-liner:

[root@node01 ~]# for i in /root/images/*;do docker load < $i;done
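
If you do not have the offline images.zip archive, the control-plane images can usually be pulled online instead. The mirror repository below is an assumption, substitute whichever mirror you can reach; the flannel image still has to be obtained separately, and if you pull from a mirror you should either pass the same --image-repository to kubeadm init or retag the images to their k8s.gcr.io names:

[root@master ~]# kubeadm config images pull --kubernetes-version v1.15.0 --image-repository registry.aliyuncs.com/google_containers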

3. Initialize the cluster

[root@master ~]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/15 --ignore-preflight-errors=Swap
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.40:6443 --token nqxl15.utwr7aw0avjqrtri \
    --discovery-token-ca-cert-hash sha256:8f58ed3303f919778b5d8ad13f0c839b3a55cf0bb4e4da33644eeff63bd4c3dc 

4. Configure kubectl for the current user (a non-root user runs the same commands with sudo, as shown in the init output)

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
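
A quick sanity check that kubectl can now reach the API server:

[root@master ~]# kubectl cluster-info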

5. Deploy the network add-on (flannel)

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
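
If the GitHub URL is not reachable from the server, download kube-flannel.yml on another machine and apply it from a local file. The flannel pods can then be watched until they are Running (app=flannel is the label used by that upstream manifest; adjust if yours differs):

[root@master ~]# kubectl -n kube-system get pods -l app=flannel -w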

6. Check the status

1. Check the cluster node status

[root@master ~]# kubectl  get  nodes
NAME     STATUS     ROLES    AGE     VERSION
master   Ready      master   19m     v1.15.0

2. Check pod status (make sure every pod is Running)

[root@master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-2s2zz         1/1     Running   1          92m
kube-system   coredns-5c98db65d4-nmbhj         1/1     Running   1          92m
kube-system   etcd-master                      1/1     Running   1          92m
kube-system   kube-apiserver-master            1/1     Running   1          92m
kube-system   kube-controller-manager-master   1/1     Running   2          92m
kube-system   kube-flannel-ds-45xp6            1/1     Running   0          80m
kube-system   kube-flannel-ds-ld5xz            1/1     Running   2          80m
kube-system   kube-flannel-ds-wv6wb            1/1     Running   1          92m
kube-system   kube-proxy-7fz2d                 1/1     Running   0          80m
kube-system   kube-proxy-7jmkb                 1/1     Running   1          80m
kube-system   kube-proxy-kmgmd                 1/1     Running   1          92m
kube-system   kube-scheduler-master            1/1     Running   2          92m
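
Optionally, the health of the control-plane components can be checked as well (the componentstatuses resource still exists in v1.15):

[root@master ~]# kubectl get cs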

III. Deploy the worker nodes

1. Install components on the worker nodes

PS: the kubelet and kubeadm components

[root@node01 ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0

[root@node02 ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0

2. Import the images

[root@master images]# scp -rp kube-proxy-1-15.tar flannel-0.11.0.tar  pause-3-1.tar node01:/root/
kube-proxy-1-15.tar                                                  100%   80MB  80.2MB/s   00:01    
flannel-0.11.0.tar                                                   100%    0     0.0KB/s   00:00    
pause-3-1.tar                                                        100%  737KB  25.7MB/s   00:00    
[root@master images]# scp -rp kube-proxy-1-15.tar flannel-0.11.0.tar  pause-3-1.tar node02:/root/
kube-proxy-1-15.tar                                                  100%   80MB  26.8MB/s   00:03    
flannel-0.11.0.tar                                                   100%    0     0.0KB/s   00:00    
pause-3-1.tar                                                        100%  737KB   5.9MB/s   00:00 
[root@node01 ~]# docker load < flannel-0.11.0.tar 
[root@node01 ~]# docker load < kube-proxy-1-15.tar 
[root@node01 ~]# docker load < pause-3-1.tar 

[root@node02 ~]# docker load < flannel-0.11.0.tar 
[root@node02 ~]# docker load < kube-proxy-1-15.tar 
[root@node02 ~]# docker load < pause-3-1.tar 

3. Join the cluster

[root@node01 ~]# kubeadm join 192.168.1.40:6443 --token nqxl15.utwr7aw0avjqrtri     --discovery-token-ca-cert-hash sha256:8f58ed3303f919778b5d8ad13f0c839b3a55cf0bb4e4da33644eeff63bd4c3dc 

[root@node02 ~]# kubeadm join 192.168.1.40:6443 --token nqxl15.utwr7aw0avjqrtri     --discovery-token-ca-cert-hash sha256:8f58ed3303f919778b5d8ad13f0c839b3a55cf0bb4e4da33644eeff63bd4c3dc 
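
The bootstrap token printed by kubeadm init is only valid for 24 hours by default. If it has expired by the time a node joins, a fresh join command can be generated on the master:

[root@master ~]# kubeadm token create --print-join-command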

4. Check the cluster from the master

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   93m   v1.15.0
node01   Ready    <none>   80m   v1.15.0
node02   Ready    <none>   81m   v1.15.0

IV. Settings

1. Enable kubectl command-line auto-completion

[root@master images]# yum install -y bash-completion
[root@master images]# source /usr/share/bash-completion/bash_completion
[root@master images]# source <(kubectl completion bash)
[root@master images]# echo "source <(kubectl completion bash)" >> ~/.bashrc
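
The same completion mechanism also works for kubeadm, if you want it:

[root@master images]# source <(kubeadm completion bash)
[root@master images]# echo "source <(kubeadm completion bash)" >> ~/.bashrc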

2. Set the tab width in vim

[root@master ~]# vim .vimrc
set tabstop=2

vim reads ~/.vimrc automatically the next time it starts, so nothing needs to be sourced in the shell.
