Installing Kubernetes (K8S) on CentOS

I. Environment Preparation

Docker must be installed on all of the hosts below first (see the earlier "CentOS 7 install Docker" post).

master     192.168.2.100

node1      192.168.2.101

node2      192.168.2.102

1. Set the master hostname and the hosts file

hostnamectl set-hostname master

cat <<EOF >> /etc/hosts
192.168.2.100 master
192.168.2.101 node1
192.168.2.102 node2
EOF

2. Disable SELinux and the firewall

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

systemctl stop firewalld && systemctl disable firewalld
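
The sed command above only takes effect after a reboot; to turn SELinux off for the current session and confirm both changes, you can run:

# disable SELinux immediately (the config edit above covers reboots)
setenforce 0

# verify: expect Permissive (or Disabled after a reboot) and "inactive"
getenforce
systemctl is-active firewalld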

3. Disable swap

# temporary (current session only)
swapoff -a

# permanent: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab
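
A quick check that swap is really off (both commands should report no active swap):

free -m       # the Swap line should show 0 total
swapon -s     # should print nothing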

4. Enable bridge filtering and IP forwarding

cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# load the br_netfilter module
modprobe br_netfilter

# apply the settings
sysctl -p /etc/sysctl.d/k8s.conf
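
modprobe only loads br_netfilter for the current boot. One way to make it persistent is a modules-load.d entry (the file name br_netfilter.conf is an arbitrary choice); afterwards the new kernel parameters can be double-checked:

# load br_netfilter automatically on every boot
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF

# verify the module and the sysctl values
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward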

5. Change the Docker cgroup driver

# silences the kubeadm cgroup-driver warning
cat <<EOF > /etc/docker/daemon.json
{
    "registry-mirrors": ["http://hub-mirror.c.163.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# restart Docker for the change to take effect
systemctl restart docker
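
Confirm that Docker picked up the new cgroup driver:

# should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"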

6. Enable IPVS

yum install ipset ipvsadm -y

cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

# verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

7. Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

8. Install kubelet, kubeadm, and kubectl

# list available versions
yum list kubelet --showduplicates | sort -r 

# install a specific version
yum install -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2 
  • kubelet runs on every node in the cluster and is responsible for starting Pods and containers

  • kubeadm is the command-line tool that initializes and bootstraps the cluster

  • kubectl is the command-line client for talking to the cluster: deploying and managing applications, inspecting resources, and creating, deleting, and updating components
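
To confirm that the three installed binaries match the requested version:

kubelet --version                   # Kubernetes v1.17.2
kubeadm version -o short            # v1.17.2
kubectl version --client --short    # Client Version: v1.17.2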

9. Enable and start kubelet

After starting kubelet, its status shows that it is not actually running; the reason is that "/var/lib/kubelet/config.yaml" does not exist yet. This can be ignored for now, since the file is created automatically once kubeadm init has been run.

# start kubelet and enable it on boot
systemctl start kubelet && systemctl enable kubelet

# check the status
systemctl status kubelet

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 日 2019-03-31 16:18:55 CST; 7s ago
     Docs: https://kubernetes.io/docs/
  Process: 4564 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 4564 (code=exited, status=255)

II. Master Installation

1. Initialize the cluster

kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.2 \
--apiserver-advertise-address 192.168.2.100 \
--pod-network-cidr=10.244.0.0/16

--image-repository: the registry to pull control-plane images from (an Aliyun mirror here)

--kubernetes-version: the Kubernetes version, which must match the packages installed above

--apiserver-advertise-address: the master address the API server advertises on

--pod-network-cidr: the Pod network range; 10.244.0.0/16 matches the flannel deployment used below
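
The same settings can also be expressed as a kubeadm configuration file and passed with --config. A minimal sketch for the v1beta2 kubeadm API used by 1.17 (the file name kubeadm-config.yaml is arbitrary):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.100
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.17.2
networking:
  podSubnet: 10.244.0.0/16
EOF

# equivalent to the flag-based invocation above
kubeadm init --config kubeadm-config.yaml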

On success, "/var/lib/kubelet/config.yaml" is created automatically.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# save this command; it is needed later when the worker nodes join the cluster
kubeadm join 192.168.2.100:6443 --token 8a1x7a.84gh8ghc9c3z7uak \
    --discovery-token-ca-cert-hash sha256:16ebeae9143006938c81126050f8fc8527d2a6b1c4991d07b9282f47cf4203d6 

2. Configure kubectl (load the admin kubeconfig)

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
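
kubectl should now be able to talk to the API server:

kubectl cluster-info
kubectl get nodes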

3. Check component status

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok  

4. Check the node

# the status is NotReady because the pod network has not been installed yet
[root@master ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE    VERSION
master         NotReady   master   105s   v1.17.2

5. Install the pod network

A pod network must be installed for the cluster to work; without it, pods cannot communicate with each other. Kubernetes supports several network add-ons; flannel is used here.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
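
The flannel DaemonSet needs a short while to pull its image and start; it can be watched until everything is Running:

# press Ctrl-C to stop watching
kubectl get pods -n kube-system -w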

6. Check pod status and make sure all pods are Running

# list pods in all namespaces
[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE    IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-9d85f5447-qghnb          1/1     Running   1          63m    10.244.0.5      master   <none>           <none>
kube-system   coredns-9d85f5447-xqsl2          1/1     Running   1          63m    10.244.0.4      master   <none>           <none>
kube-system   etcd-master                      1/1     Running   1          63m    192.168.2.100   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   1          63m    192.168.2.100   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   1          63m    192.168.2.100   master   <none>           <none>
kube-system   kube-flannel-ds-amd64-52n6m      1/1     Running   0          9m9s   192.168.2.100   master   <none>           <none>
kube-system   kube-proxy-xk7gq                 1/1     Running   1          63m    192.168.2.100   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   1          63m    192.168.2.100   master   <none>           <none>

7. Check the node again; the status is now Ready

[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   66m   v1.17.2

8. The master installation is complete

III. Node Installation

Following the environment preparation in Part I above, install Kubernetes on node1 and node2; be sure to set each node's own hostname and the corresponding hosts entries.

1. Join the cluster

# if the join command was lost, list the tokens on the master
kubeadm token list

# if the token has expired, generate a new join command on the master
kubeadm token create --print-join-command

# run the printed command on node1 and node2
kubeadm join 192.168.2.100:6443 --token j88bsx.o0ugzfnxqdl5s58e \
--discovery-token-ca-cert-hash sha256:16ebeae9143006938c81126050f8fc8527d2a6b1c4991d07b9282f47cf4203d6
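
Once the join finishes on a node, a quick check on both sides:

# on the node: kubelet should now be active (running)
systemctl status kubelet

# on the master: the new nodes show up (Ready once flannel is running on them)
kubectl get nodes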

2. Check the node networks

# node1 has been allocated the subnet 10.244.1.0/24
[root@node1 ~]# ifconfig | grep -A 6 flannel
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 10.244.1.0
        inet6 fe80::d4ce:d0ff:fed9:4dd3  prefixlen 64  scopeid 0x20<link>
        ether d6:ce:d0:d9:4d:d3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
[root@node1 ~]# 

# node2 has been allocated the subnet 10.244.2.0/24
[root@node2 ~]# ifconfig | grep -A 6 flannel
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.2.0  netmask 255.255.255.255  broadcast 10.244.2.0
        inet6 fe80::8a3:8cff:fe4b:7692  prefixlen 64  scopeid 0x20<link>
        ether 0a:a3:8c:4b:76:92  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)

3. Check the nodes

[root@master ~]# kubectl get nodes

# output
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   23m   v1.17.2
node1    Ready    <none>   18m   v1.17.2
node2    Ready    <none>   15m   v1.17.2

4. Check the pods

[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-9d85f5447-7wvwr          1/1     Running   0          17m     10.244.0.2      master   <none>           <none>
kube-system   coredns-9d85f5447-jm7t8          1/1     Running   0          17m     10.244.0.3      master   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          18m     192.168.2.100   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          18m     192.168.2.100   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          18m     192.168.2.100   master   <none>           <none>
kube-system   kube-flannel-ds-8qnct            1/1     Running   0          16m     192.168.2.100   master   <none>           <none>
kube-system   kube-flannel-ds-bnljr            1/1     Running   1          9m54s   192.168.2.102   node2    <none>           <none>
kube-system   kube-flannel-ds-txw2v            1/1     Running   0          13m     192.168.2.101   node1    <none>           <none>
kube-system   kube-proxy-998w5                 1/1     Running   0          17m     192.168.2.100   master   <none>           <none>
kube-system   kube-proxy-cddmq                 1/1     Running   0          13m     192.168.2.101   node1    <none>           <none>
kube-system   kube-proxy-dkc82                 1/1     Running   0          9m54s   192.168.2.102   node2    <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          18m     192.168.2.100   master   <none>           <none>

5. How to remove a node

# delete the node2 node (run on the master)
[root@master ~]# kubectl delete node node2

# reset the kubeadm state (run on node2)
[root@node2 ~]# kubeadm reset

# rejoin using the full join command (see the sketch below)
[root@node2 ~]# kubeadm join
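
A worked end-to-end example for node2 is sketched below; the token and hash are placeholders, so use the command printed by your own master:

# on the master: (optionally) drain, then delete the node
kubectl drain node2 --ignore-daemonsets --delete-local-data
kubectl delete node node2

# on node2: wipe the old kubeadm state
kubeadm reset -f

# on the master: print a fresh join command
kubeadm token create --print-join-command

# on node2: run the printed command, e.g.
kubeadm join 192.168.2.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>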

IV. Configure IPVS

1. Change the kube-proxy mode

[root@master ~]# kubectl edit cm kube-proxy -n kube-system

apiVersion: v1
data:
  config.conf: |-
    ---
    metricsBindAddress: ""
    mode: "ipvs"                # change this to ipvs
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false

2. Restart the kube-proxy pods

[root@master ~]# kubectl delete pod \
$(kubectl get pod -n kube-system | grep kube-proxy | \
awk '{print $1}') -n kube-system
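
The kube-proxy DaemonSet recreates the deleted pods automatically; confirm they are back and note one of the new pod names for the next step:

kubectl get pod -n kube-system | grep kube-proxy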

3. Verify

[root@master ~]# kubectl logs kube-proxy-k9lwp -n kube-system
I1025 12:20:39.081445       1 node.go:135] Successfully retrieved node IP: 192.168.2.100
I1025 12:20:39.081559       1 server_others.go:172] Using ipvs Proxier.    # ipvs is now in use
W1025 12:20:39.081791       1 proxier.go:420] IPVS scheduler not specified, use rr by default
I1025 12:20:39.081985       1 server.go:571] Version: v1.17.2
I1025 12:20:39.082522       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1025 12:20:39.085741       1 config.go:313] Starting service config controller
I1025 12:20:39.089351       1 shared_informer.go:197] Waiting for caches to sync for service config
I1025 12:20:39.087017       1 config.go:131] Starting endpoints config controller
I1025 12:20:39.089388       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1025 12:20:39.189968       1 shared_informer.go:204] Caches are synced for endpoints config 
I1025 12:20:39.190031       1 shared_informer.go:204] Caches are synced for service config
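
The IPVS rule table itself can be inspected with ipvsadm (installed earlier in step 6 of the environment preparation):

# list virtual servers and their backends
ipvsadm -Ln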
