Cloud Native Engineer - 3. Kubernetes Setup - kubeadm


II. Setting up Kubernetes

1. Installing with kubeadm

kubeadm is the official open-source tool for quickly standing up a Kubernetes cluster, and it is currently the most convenient and recommended approach. The two commands kubeadm init and kubeadm join are enough to create a cluster. When kubeadm initializes Kubernetes, all of the control-plane components run as pods, so they can recover from failures on their own. Because kubeadm is essentially an automated deployment script, it hides a lot of detail: you get little exposure to the individual components, and without a solid understanding of the Kubernetes architecture, problems can be hard to troubleshoot. kubeadm suits scenarios where Kubernetes needs to be deployed frequently or a high degree of automation is required.
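The end-to-end flow, in a nutshell (a minimal sketch; the real join command, token, and hash are printed by your own init run, the values below are only placeholders):

# on the first control-plane node
kubeadm init --pod-network-cidr=10.244.0.0/16
# on every other node, paste the join command that init printed, e.g.:
kubeadm join <apiserver-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>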

1.1 Base environment configuration
1.1.1 Configure the network interface:

vim /etc/sysconfig/network-scripts/ifcfg-xxx

BOOTPROTO=static #static means a static IP address
ONBOOT=yes #bring the interface up at boot; must be yes
IPADDR=192.168.100.10 #host IP address; must be in the same subnet as your machine (x.x.x.0 is the network address and cannot be used as a host IP)
NETMASK=255.255.255.0 #subnet mask
GATEWAY=192.168.100.2 #gateway
DNS1=8.8.8.8 #DNS
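Apply and verify the change (assuming the classic network service of CentOS 7; on NetworkManager-only systems use nmcli instead):

systemctl restart network      # reload the static IP configuration
ip addr show                   # confirm the new address is on the interface
ping -c 3 8.8.8.8              # confirm the gateway and DNS path work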

1.1.2 Configure hostnames, /etc/hosts, and passwordless SSH

On each node, set its hostname (use a different name per node, e.g. master, node1): hostnamectl set-hostname master && bash

Configure /etc/hosts on every node: vi /etc/hosts

<ip> <hostname>   # one line per node

Configure passwordless SSH between the nodes:

ssh-keygen #press Enter at every prompt; do not set a passphrase

ssh-copy-id <ip-or-hostname>

1.1.3 Disable swap and adjust kernel parameters

swapoff -a #disable swap for the running system

vim /etc/fstab

comment out the swap line (prefix it with #) so swap stays disabled after reboot
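The same edit can be scripted; the sed below simply comments out every fstab line that mentions swap:

sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m    # the Swap line should show 0 after swapoff -a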

modprobe br_netfilter

echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

1.1.4 Disable the firewall and SELinux

systemctl stop firewalld ; systemctl disable firewalld

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
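The sed above only takes effect on the next boot; to make SELinux permissive immediately without rebooting:

setenforce 0      # switch to permissive for the running system
getenforce        # should now print Permissive (or Disabled after a reboot)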

1.1.5 Configure the repo sources

yum install yum-utils -y
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

1.1.6 Configure time synchronization on every node

yum install ntpdate -y

ntpdate cn.pool.ntp.org

crontab -e
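A sample entry to add in crontab -e so the clock is re-synced periodically (the schedule and NTP server are only examples; adjust as needed):

* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org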

service crond restart

1.1.7 Enable IPVS

Create /etc/sysconfig/modules/ipvs.modules with the following content:

#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
 /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
 if [ $? -eq 0 ]; then   # only load modules that actually exist on this kernel
 /sbin/modprobe ${kernel_module}
 fi
done

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

scp /etc/sysconfig/modules/ipvs.modules node:/etc/sysconfig/modules/

IPVS (IP Virtual Server) implements transport-layer load balancing, i.e. layer-4 LAN switching, as part of the Linux kernel. Running on a host in front of a cluster of real servers, IPVS forwards TCP- and UDP-based requests to the real servers and makes their services appear as a single virtual service on one IP address.

kube-proxy supports two modes, iptables and ipvs. The ipvs mode was introduced in Kubernetes v1.8, reached beta in v1.9, and became generally available in v1.11. iptables support was added back in v1.1 and has been kube-proxy's default mode since v1.2. Both are built on netfilter, but ipvs uses hash tables, so once the number of services grows large the speed advantage of hash lookups shows, improving service performance.

ipvs therefore offers better scalability and performance for large clusters, along with more sophisticated load-balancing algorithms, server health checks, and connection retry support.
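Once the cluster is up with kube-proxy in ipvs mode (configured later via the kubeadm config), the mode can be verified roughly like this (a sketch; it relies on ipvsadm, which is installed in 1.1.8):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode   # should show mode: ipvs
ipvsadm -Ln                                                           # lists the virtual servers that kube-proxy programmed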

1.1.8 Install base packages and Docker

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

yum install docker-ce docker-ce-cli containerd.io -y

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://3ri333r1.mirror.aliyuncs.com","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Docker's cgroup driver is set to systemd above (its default is cgroupfs) because kubelet defaults to systemd; the two must match.
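A quick check that Docker is actually using the systemd driver after the restart:

docker info | grep -i "cgroup driver"    # should print: Cgroup Driver: systemd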

1.1.9 Install the Kubernetes components

yum install -y kubelet kubeadm kubectl
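To avoid version skew with the kubernetesVersion pinned later in kubeadm-config.yaml (v1.20.6), the packages can also be installed at an explicit version; the version string here is only an example, match it to the release you intend to run:

yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6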

systemctl enable kubelet && systemctl start kubelet

1.1.10 (multi-master) kube-apiserver high availability with keepalived + nginx

Create /etc/yum.repos.d/epel.repo:

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

yum install nginx keepalived nginx-mod-stream -y

vim /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the apiserver on the two master nodes
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.3.110:6443;   # Master1 APISERVER IP:PORT
       server 192.168.3.111:6443;   # Master2 APISERVER IP:PORT
    }
    
    server {
       listen 16443; # nginx shares the master nodes with the apiserver, so this port must not be 6443 or it would conflict
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}

vim /etc/keepalived/keepalived.conf

global_defs { 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33  # change to the actual interface name
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # priority; set 90 on the backup server
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP: must be an unused host address in the node subnet (192.168.3.199/24 is only an example)
    virtual_ipaddress { 
        192.168.3.199/24
    } 
    track_script {
        check_nginx
    } 
}

vim /etc/keepalived/check_nginx.sh

#!/bin/bash
#1. Check whether nginx is alive
counter=`ps -C nginx --no-header | wc -l`
if [ $counter -eq 0 ]; then
    #2. If not, try to start it
    service nginx start
    sleep 2
    #3. Check the nginx status again after 2 seconds
    counter=`ps -C nginx --no-header | wc -l`
    #4. If nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service  keepalived stop
    fi
fi

chmod +x /etc/keepalived/check_nginx.sh

systemctl daemon-reload

yum install nginx-mod-stream -y

systemctl start nginx

systemctl start keepalived

systemctl enable nginx keepalived

systemctl status keepalived

Stop nginx/keepalived on node 1 and the VIP fails over to node 2; once node 1 is back up, the VIP moves back.
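A quick way to watch the failover (assuming the ens33 interface and the VIP configured in keepalived.conf):

ip addr show ens33 | grep <VIP>     # the VIP should appear on the current MASTER node
systemctl stop nginx                # on node 1: check_nginx.sh stops keepalived and the VIP moves to node 2
systemctl start nginx keepalived    # bring node 1 back; the VIP returns because it has the higher priority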

1.1.11 Initialize the Kubernetes cluster with kubeadm

Create kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.40.199:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.40.180
  - 192.168.40.181
  - 192.168.40.182
  - 192.168.40.199
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
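With kubeadm-config.yaml in place, the first control plane can be initialized from it (a sketch; adding --upload-certs would let kubeadm distribute the control-plane certificates itself instead of the manual scp later in this section):

kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification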

Or initialize directly from the command line:

kubeadm init --apiserver-advertise-address=$MASTER_IP \
             --pod-network-cidr=192.168.0.0/16 \
             --service-cidr=10.96.0.0/12 \
             --image-repository registry.aliyuncs.com/google_containers \
             --control-plane-endpoint=$MASTER_IP:6443 >> ./join.txt

On the worker nodes, run the kubeadm join command that was saved in join.txt.

#Write the kubectl config file; this is what authorizes kubectl against the cluster

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
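kubectl should now work from this node:

kubectl get nodes    # the control-plane node appears, typically NotReady until the CNI plugin is installed in 1.1.12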

#Copy the certificates to the control-plane node that is about to join (master2)

On master2: cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p  ~/.kube/

On master1, copy the certificates over:

scp /etc/kubernetes/pki/ca.crt  master2:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/ca.key  master2:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.key  master2:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.pub  master2:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.crt  master2:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.key  master2:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/etcd/ca.crt  master2:/etc/kubernetes/pki/etcd/

scp /etc/kubernetes/pki/etcd/ca.key  master2:/etc/kubernetes/pki/etcd/
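The same copy can be expressed as a small loop (a sketch; master2 stands for the joining control-plane node and assumes the target directories created above already exist):

for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
  scp /etc/kubernetes/pki/$f master2:/etc/kubernetes/pki/
done
scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/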

Join the additional control plane and the worker nodes:

On master1: kubeadm token create --print-join-command

On master2: kubeadm join 192.168.3.110:16443 --token zwzcks.u4jd8lj56wpckcwv --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728 --control-plane --ignore-preflight-errors=SystemVerification

On worker nodes: kubectl label node node1 node-role.kubernetes.io/worker=worker

A node whose ROLES column shows <none> is still a worker node; the label above only changes what ROLES displays.

If joining the extra control plane fails with an error:

kubectl -n kube-system edit cm kubeadm-config

Add the following under kubernetesVersion:

controlPlaneEndpoint: "10.0.0.210:6443"
### save and re-run the join

1.1.12 Install the network plugin - Calico

The online quickstart these manifests come from is: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/quickstart

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/custom-resources.yaml
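Wait for Calico to come up before deploying workloads; a rough check (the namespaces are the operator-install defaults):

kubectl get pods -n tigera-operator
kubectl get pods -n calico-system        # all pods should reach Running
kubectl get nodes                        # nodes switch from NotReady to Ready once the CNI is up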

1.1.13 Verify that pod networking works

#With two masters and no workers, remove the control-plane taints so pods can be scheduled

[root@master2 ~]# kubectl describe node master1 |grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
[root@master2 ~]# kubectl describe node master2 |grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
[root@master2 ~]# kubectl taint node master1 node-role.kubernetes.io/control-plane-
node/master1 untainted
[root@master2 ~]# kubectl taint node master2 node-role.kubernetes.io/control-plane-
node/master2 untainted

kubectl run busybox --image docker.io/library/busybox:latest --restart=Never --rm -it -- sh

/ # nslookup kubernetes.default.svc.cluster.local

Being able to resolve cluster DNS and ping an external site such as baidu.com means the network is fine.

#Test an nginx service

apiVersion: v1  #the Pod resource belongs to the core v1 API group
kind: Pod  #we are creating a Pod
metadata:  #metadata
  name: demo-nginx  #pod name
  namespace: default  #namespace the pod belongs to
  labels:
    app: myapp  #label on the pod
    env: dev    #label on the pod
spec:
  containers:      #containers is a list of objects; several entries may follow
  - name:  demo-nginx  #container name
    ports:
    - containerPort: 80
    image: docker.io/library/nginx:latest   #image used by the container
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: demo-nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
  selector:
    app: myapp
    env: dev
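Apply the manifest and hit the NodePort (assuming the two documents above were saved as nginx-demo.yaml; any node IP works for the curl):

kubectl apply -f nginx-demo.yaml
kubectl get pods,svc -o wide
curl http://<node-ip>:30080      # should return the nginx welcome page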

kubectl exec -it podname -- /bin/bash   or   kubectl exec -it -n <namespace> podname -- <command>   #exec into the pod for testing

1.1.14 Install the Kubernetes dashboard UI
kubectl apply -f https://www.kubebiz.com/raw/KubeBiz/Kubernetes%20Dashboard/v2.7.0/recommended.yaml
[root@master2 k8s]# kubectl get pods -n kubernetes-dashboard
[root@master2 k8s]# kubectl get svc -n kubernetes-dashboard
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
#change the service to NodePort and set the exposed port
  - nodePort: 30443 
  type: NodePort
#Create the dashboard token (ServiceAccount and ClusterRoleBinding)

kubectl apply -f https://www.kubebiz.com/raw/KubeBiz/Kubernetes%20Dashboard/v2.7.0/admin-user.yaml
#Get a login token
kubectl -n kubernetes-dashboard create token admin-user
#Permanent access: bind the dashboard ServiceAccount to cluster-admin
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard

kubectl get secret -n kubernetes-dashboard

kubectl describe secret kubernetes-dashboard-token-name -n kubernetes-dashboard
#Access the dashboard with a kubeconfig file

cd /etc/kubernetes/pki/

#Create the cluster entry

kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.3.110:6443" --embed-certs=true --kubeconfig=/root/dashboard-admin.conf

#Create the credentials entry (requires the token value)

DASHBOARD_ADMIN_TOKEN=`kubectl -n kubernetes-dashboard create token admin-user`
kubectl config set-credentials dashboard-admin --token=$DASHBOARD_ADMIN_TOKEN --kubeconfig=/root/dashboard-admin.conf
#Create the context
kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
#Switch current-context to dashboard-admin@kubernetes (the context used to access the cluster)
kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
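The resulting kubeconfig can be tested with kubectl, or uploaded on the dashboard login page via its Kubeconfig option:

kubectl get nodes --kubeconfig=/root/dashboard-admin.conf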
1.1.15 Deploy metrics-server (change the apiserver manifest on every control-plane node)

metrics-server is a cluster-wide aggregator of resource usage data. It only exposes the data (it provides no storage) and is mainly an implementation of the resource metrics API.

Adjust the apiserver configuration under /etc/kubernetes/manifests.

This is a Kubernetes 1.17 behaviour: clusters on 1.16 can skip it, 1.17 and later need to add it. The flag enables API aggregation, which allows the Kubernetes API to be extended without modifying the core Kubernetes code.

vim /etc/kubernetes/manifests/kube-apiserver.yaml  

Under the - command: section, add:  - --enable-aggregator-routing=true
#Reload the configuration
kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
#Delete any apiserver pod stuck in CrashLoopBackOff
kubectl get pods -n kube-system
kubectl delete pods -n kube-system <kube-apiserver-pod-name>
#Download the official metrics-server manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.2/components.yaml
#Before applying components.yaml, edit the metrics-server container spec:
add the argument  - --kubelet-insecure-tls
change the image to registry.aliyuncs.com/google_containers/metrics-server:v0.6.2, then kubectl apply -f components.yaml
[root@master2 k8s]# kubectl top nodes
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master1   165m         4%     2945Mi          38%
master2   191m         4%     1461Mi          18%
