ubuntu20.10+k8s

Official Kubernetes documentation (Chinese): https://kubernetes.io/zh/docs/home/

Contents: 1. What each k8s component does; 2. The scheduling flow when a pod is created; 3. Deploying a k8s cluster from binaries

    Network plugin, CoreDNS, dashboard

1. What each k8s component does

A k8s cluster is made up of Master and Node machines.

Master components (Control Plane Components):

        kube-apiserver: receives and processes all requests. It is the main implementation of the Kubernetes API server (the front end of the control plane) and listens on port 6443 by default. Its functions: 1. exposes RESTful interfaces for create/read/update/delete and watch operations on every resource object; 2. is the entry point for all requests, from clients and from the other k8s components alike; every access goes through the apiserver; 3. is the component that talks to etcd; 4. authenticates clients (by token or certificate), authorizes them, validates the data, performs the operation, and returns the result.

        kube-scheduler: handles scheduling of container-creation requests. It watches for newly created pods that have no node assigned and, based on the current resources of the existing nodes, decides which node each pod should run on.

        kube-controller-manager: the control-plane component that runs the controller processes. Its main job is to keep the actual state matching the desired state, for example maintaining the declared number of pod replicas by starting containers until the expected count is reached, and to keep the controllers healthy.

        etcd cluster: etcd is a key-value store with built-in distributed clustering. Here it holds the state of every component in the k8s cluster. The etcd cluster talks only to the apiserver, which guarantees data consistency.

Node components

        kubelet: the agent on each node. 1. It is the core component through which a node talks to the master: it runs container health checks on the node, collects node information, and reports the node's status (memory, CPU, and so on) to the master; 2. it receives pod specs (placed by the scheduler via the apiserver) and calls the container runtime (ultimately runc) to create, delete, and monitor the pod's containers; 3. it prepares the volumes a pod needs; 4. it reports pod status, and if a pod fails its checks the kubelet tries to restart it.

        kube-proxy: the network component on each node; it creates and maintains IPVS or iptables rules. If the operating system has iptables or IPVS available, then once kube-proxy receives the IP-and-port bindings from the apiserver it implements them as iptables or IPVS rules; otherwise it forwards the traffic itself. It also keeps communicating with the apiserver so that Service changes are reflected on the node in real time.

        Container Runtime: the software that actually runs containers. It can be Docker, containerd, CRI-O, etc.

kubectl is a command-line client for managing k8s.

dashboard is a web UI through which you can browse and manage k8s, though its management features are limited.

CoreDNS: performs name resolution inside k8s; through its forwarding, names outside the cluster can also be resolved from inside. List the Services with kubectl get svc -A: each Service name should, in principle, be resolvable (and directly reachable) through CoreDNS.
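A minimal sketch of how a Service name becomes a DNS name CoreDNS can answer. The helper function is hypothetical; the FQDN pattern is the standard Kubernetes one, and `huahualin.local` is the cluster domain configured later in this post:

```python
# Sketch: build the DNS name that CoreDNS answers for a Service
# (hypothetical helper; the name.namespace.svc.domain pattern is standard k8s).
def service_fqdn(name, namespace="default", cluster_domain="cluster.local"):
    return f"{name}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("nginx"))  # nginx.default.svc.cluster.local
print(service_fqdn("kube-dns", "kube-system", "huahualin.local"))
```

Inside a pod, the short name `nginx` is expanded through the search domains in /etc/resolv.conf to this FQDN.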



2. The scheduling flow when a pod is created

a. A client (kubectl or the dashboard) sends a request to kube-apiserver to create some pods; kube-apiserver records the request in etcd.

b. kube-scheduler keeps communicating with the apiserver; the apiserver tells it that etcd now contains a task to create n pods, and the scheduler then runs its scheduling algorithm to assign each pod, in turn, to a node that meets its requirements.

c. How does the scheduler choose a node?

     For the first pod, the scheduler first filters out the nodes that don't qualify, for example any node without enough free resources for the pod. It then picks the best node from the remainder; by default that is the node with the lowest resource consumption. If the pod specifies a label, nodes carrying that label are preferred, or the most resource-balanced node on the candidate list is chosen. Once a node is selected for the first pod, the result is written to etcd through the apiserver, and the chosen node's recorded utilization is immediately updated (recomputed after subtracting the resources allotted to the first pod), again via the apiserver into etcd. Only then is a node assigned to the second pod, and so on.
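The two phases described above (filter out infeasible nodes, then score the rest) can be sketched as follows. The node data and the "most headroom wins" score are illustrative assumptions, not the real kube-scheduler code:

```python
# Toy model of the scheduling flow: predicates (filter), then priorities (score).
def schedule(pod, nodes):
    # Predicate phase: drop nodes that cannot fit the pod's requests.
    feasible = [n for n in nodes
                if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]
    if not feasible:
        return None  # no node qualifies; the pod would stay Pending
    # Priority phase: prefer the node with the most resources left after placement.
    best = max(feasible,
               key=lambda n: (n["free_cpu"] - pod["cpu"]) + (n["free_mem"] - pod["mem"]))
    # Record the allocation immediately, so the next pod sees the updated
    # utilization (the role etcd plays in the flow above).
    best["free_cpu"] -= pod["cpu"]
    best["free_mem"] -= pod["mem"]
    return best["name"]

nodes = [{"name": "node1", "free_cpu": 4, "free_mem": 8},
         {"name": "node2", "free_cpu": 8, "free_mem": 16}]
print(schedule({"cpu": 2, "mem": 4}, nodes))  # node2 (most headroom)
```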

3. Deploying a k8s cluster from binaries

    Network plugin, CoreDNS, dashboard

OS: Ubuntu 20.10

Installation docs: search GitHub for kubeasz and pick the first result:

https://github.com/easzlab/kubeasz

Environment:

192.168.241.31 k8s-master1  can be a VM; mostly CPU-bound: 16C, 16 GB RAM, 200 GB disk

192.168.241.32  k8s-master2

192.168.241.33  k8s-master3

192.168.241.34 www.harbor1.com  k8s-harbor1

192.168.241.35 www.harbor2.com  k8s-harbor2

192.168.241.36  k8s-etcd1   disk-I/O-bound: needs fast disks since it mostly stores key-value data; 8C, 16 GB RAM, 150 GB SSD

192.168.241.37  k8s-etcd2

192.168.241.43  k8s-etcd3

192.168.241.38  k8s-ha1

192.168.241.39  k8s-ha2

192.168.241.40  k8s-node1    usually physical machines: 48C, 256 GB RAM, 2 TB SSD, 10G/25G NIC

192.168.241.41  k8s-node2

192.168.241.42  k8s-node3

# Install harbor first and make it highly available: keepalived + haproxy + harbor

vip: 192.168.241.110:8080

Machines to deploy on:

k8s-harbor1:  install harbor

k8s-harbor2: install harbor

k8s-ha1: install keepalived + haproxy

k8s-ha2: install keepalived + haproxy

For installing harbor on k8s-harbor1 and k8s-harbor2, see the article "docker namespace cgroup dockerfile harbor".

On k8s-ha1 and k8s-ha2:

apt install keepalived haproxy -y

# Allow binding to addresses not currently present on the host (a kernel parameter):

vi  /etc/sysctl.conf  

Add the line: net.ipv4.ip_nonlocal_bind = 1

sysctl -p

Edit the config: vi /etc/haproxy/haproxy.cfg

        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256

        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults

        log    global

        mode    http

        option  httplog

        option  dontlognull

        timeout connect 5000

        timeout client  50000

        timeout server  50000

        errorfile 400 /etc/haproxy/errors/400.http

        errorfile 403 /etc/haproxy/errors/403.http

        errorfile 408 /etc/haproxy/errors/408.http

        errorfile 500 /etc/haproxy/errors/500.http

        errorfile 502 /etc/haproxy/errors/502.http

        errorfile 503 /etc/haproxy/errors/503.http

        errorfile 504 /etc/haproxy/errors/504.http

listen harbor-80

    bind 192.168.241.38:80

    mode tcp

    balance source

    server harbor01 192.168.241.34:80 check inter 3s fall 3 rise 5

    server harbor02 192.168.241.35:80 check inter 3s fall 3 rise 5

listen harbor-8080

    bind 192.168.241.110:8080

    mode tcp

    balance source

    server harbor01 192.168.241.34:80 check inter 3s fall 3 rise 5

    server harbor02 192.168.241.35:80 check inter 3s fall 3 rise 5
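`balance source` hashes the client's source address so the same client keeps landing on the same backend (which keeps harbor sessions sticky). A rough sketch of the idea, not HAProxy's actual hash function:

```python
import hashlib

# Sketch of source-hash load balancing: a client IP maps deterministically
# to one backend (illustrative; HAProxy uses its own hash internally).
def pick_backend(client_ip, backends):
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["192.168.241.34:80", "192.168.241.35:80"]
print(pick_backend("10.0.0.7", backends))  # same client, same backend every time
```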

k8s-ha1:

root@k8s-ha1:~# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

  notification_email {

    acassen

  }

  notification_email_from [email protected]

  smtp_server 192.168.200.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

vrrp_instance VI_1 {

    state MASTER

    interface ens38

    garp_master_delay 10

    smtp_alert

    virtual_router_id 51

    priority 100

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.241.110 dev ens38 label ens38:0

        192.168.241.111 dev ens38 label ens38:1

        192.168.241.112 dev ens38 label ens38:2

        192.168.241.113 dev ens38 label ens38:3

    }

}

k8s-ha2:

root@k8s-ha2:~# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

  notification_email {

    acassen

  }

  notification_email_from [email protected]

  smtp_server 192.168.200.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

vrrp_instance VI_1 {

    state BACKUP

    interface ens38

    garp_master_delay 10

    smtp_alert

    virtual_router_id 51

    priority 90

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.241.110 dev ens38 label ens38:0

        192.168.241.111 dev ens38 label ens38:1

        192.168.241.112 dev ens38 label ens38:2

        192.168.241.113 dev ens38 label ens38:3

    }

}

# On ha1 and ha2, enable keepalived and haproxy at boot, and start them

systemctl enable --now haproxy

systemctl enable --now keepalived

To verify the keepalived config took effect, run ss -tnl; if 192.168.241.110:8080 shows up, the VIP binding is active.
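keepalived's failover behavior boils down to: among the alive routers sharing a virtual_router_id, the one with the highest priority holds the VIP. A toy model under that assumption (real keepalived also handles preemption, advert timers, and so on):

```python
# Toy model of VRRP election: highest-priority alive router owns the VIP.
def vip_holder(routers):
    alive = [r for r in routers if r["alive"]]
    return max(alive, key=lambda r: r["priority"])["name"] if alive else None

routers = [{"name": "k8s-ha1", "priority": 100, "alive": True},
           {"name": "k8s-ha2", "priority": 90,  "alive": True}]
print(vip_holder(routers))      # k8s-ha1 holds 192.168.241.110
routers[0]["alive"] = False     # ha1 goes down
print(vip_holder(routers))      # k8s-ha2 takes over
```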


Configuration on harbor1 and harbor2:

vi /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry "www.harbor1.com" --insecure-registry "www.harbor2.com"   --insecure-registry "192.168.241.110:8080" 

systemctl daemon-reload

systemctl restart docker

Log docker in to the VIP 192.168.241.110:8080:

Run the command and enter the account and password: docker login 192.168.241.110:8080


Verify:

Log in to 192.168.241.110:8080 in a browser and create an nginx project.

Push an image from the host where harbor1 is installed.


### Installing k8s with kubeasz

Enable root login over SSH:

root@k8s-master2:~# echo  'PermitRootLogin yes'  >> /etc/ssh/sshd_config

root@k8s-master2:~# sudo -i

systemctl reload sshd

Set up passwordless SSH to all hosts:

# Push the local SSH key to every host listed (one IP per line) in a file under the current directory.

read -p "please input your host ip list file: " rootip

read -p "please input host password: " hostpass

for ip in $(cat "$PWD/$rootip")

do

sshpass -p "$hostpass" ssh -o StrictHostKeyChecking=no root@"$ip" "rm -rf /root/.ssh" && sshpass -p "$hostpass" ssh-copy-id -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa.pub root@"$ip" && echo "$ip is ok." || echo "$ip is failure."

done

# Install ansible and fetch the kubeasz installer

apt update

apt install -y python3-pip git

export release=3.1.0

curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown

vi ezdown

DOCKER_VER=19.03.15

## Download all images and binaries; per the config they are saved under /etc/kubeasz by default, and the binaries in its bin/ directory can be used directly

dyl@k8s-master1:~$ sudo bash ./ezdown -D

cd /etc/kubeasz/

bin/docker version

### Create the cluster (by name)

sudo ./ezctl new k8s-01

### This file is effectively ansible's inventory. Leave one master and one node out, so you can later practice joining them via the add-node workflow; configure the master, node, and etcd groups plus the HA VIP.

sudo vi clusters/k8s-01/hosts

[etcd]

192.168.241.36

192.168.241.37

192.168.241.43

# master node(s)

[kube_master]

#192.168.241.31

192.168.241.32

192.168.241.33

# work node(s)

[kube_node]

192.168.241.40

192.168.241.41

#192.168.241.42

[ex_lb]  # external HA load balancer

192.168.241.38 LB_ROLE=backup EX_APISERVER_VIP=192.168.241.110 EX_APISERVER_PORT=6443

192.168.241.39 LB_ROLE=master EX_APISERVER_VIP=192.168.241.110 EX_APISERVER_PORT=6443

CLUSTER_NETWORK="calico"  # network plugin

SERVICE_CIDR="10.100.0.0/16"  ## this CIDR must not overlap any network already in use

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking

CLUSTER_CIDR="10.200.0.0/16"

# NodePort Range

NODE_PORT_RANGE="30000-65000"

# Cluster DNS Domain

CLUSTER_DNS_DOMAIN="huahualin.local"

# Binaries Directory

bin_dir="/usr/local/bin"
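The warning on SERVICE_CIDR is easy to check mechanically: neither the service CIDR nor the pod CIDR may overlap each other or the node network. A quick check with Python's standard library, using this cluster's networks:

```python
import ipaddress

# The three networks used in this cluster; adjust to your environment.
host_net     = ipaddress.ip_network("192.168.241.0/24")  # node network
service_cidr = ipaddress.ip_network("10.100.0.0/16")     # SERVICE_CIDR
cluster_cidr = ipaddress.ip_network("10.200.0.0/16")     # CLUSTER_CIDR (pods)

for net in (service_cidr, cluster_cidr):
    assert not net.overlaps(host_net), f"{net} collides with the node network"
assert not service_cidr.overlaps(cluster_cidr), "service and pod CIDRs overlap"
print("CIDRs are disjoint")
```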

## Configure the cluster's config.yml, which holds the per-component settings

First open config.yml and find the base container image; pull it, re-tag it, push it to the local registry, then point the base-image setting at the local copy:

cat  clusters/k8s-01/config.yml

# [containerd] base container image

SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"

docker pull easzlab/pause-amd64:3.4.1

sudo docker  tag easzlab/pause-amd64:3.4.1 192.168.241.110:8080/baseimages/pause-amd64:3.4.1

Add the registry as trusted (in docker's daemon.json):

"insecure-registries": ["192.168.241.110:8080"]

Restart the docker service:

sudo systemctl daemon-reload

sudo systemctl restart docker

Log in:

sudo docker login 192.168.241.110:8080

Push the image:

sudo docker  push 192.168.241.110:8080/baseimages/pause-amd64:3.4.1
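The pull/tag/push sequence above just rewrites the registry prefix while keeping the image name and tag. A hypothetical helper showing that renaming rule:

```python
# Hypothetical helper showing the rule behind `docker tag`: keep the image
# name and tag, swap in the local registry host and a project path.
def retag(image, registry, project="baseimages"):
    name_and_tag = image.rsplit("/", 1)[-1]  # drop the original namespace
    return f"{registry}/{project}/{name_and_tag}"

print(retag("easzlab/pause-amd64:3.4.1", "192.168.241.110:8080"))
# 192.168.241.110:8080/baseimages/pause-amd64:3.4.1
```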

# [containerd] base container image

#SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"

SANDBOX_IMAGE: "192.168.241.110:8080/baseimages/pause-amd64:3.4.1"

# [docker] trusted HTTP registries

#INSECURE_REG: '["127.0.0.1/8"]'

INSECURE_REG: '["127.0.0.1/8","www.harbor1.com"]'

# max pods per node

MAX_PODS: 300

## Choose the network plugin. If using flannel, pull it ahead of time from quay.io:

sudo docker pull quay.io/coreos/flannel:v0.14.0

# [calico] Setting CALICO_IPV4POOL_IPIP="off" improves network performance; see docs/setup/calico.md for the constraints. IPIP is an overlay model: container IPs are encapsulated inside the host's packets, which costs some performance but works across subnets.

CALICO_IPV4POOL_IPIP: "Always"

# install coredns automatically

#dns_install: "yes"

dns_install: "no"

# install metrics server automatically

metricsserver_install: "no"

metricsVer: "v0.3.6"

# install dashboard automatically

dashboard_install: "no"

dashboardVer: "v2.2.0"

dashboardMetricsScraperVer: "v1.0.6"

## Step 1: prepare the environment

root@k8s-master1:/etc/kubeasz# ./ezctl setup k8s-01 01

## Step 2: install etcd

root@k8s-master1:/etc/kubeasz# ./ezctl setup k8s-01 02

Verify the etcd cluster is usable: on any node where etcd is installed, run the commands below; "successfully" in the output means the install is healthy:

root@k8s-etcd2:~# export NODE_IPS="192.168.241.36 192.168.241.37 192.168.241.43"

root@k8s-etcd2:~# ls /etc/kubernetes/ssl/

ca.pem        etcd-key.pem  etcd.pem     

root@k8s-etcd2:~# for ip in ${NODE_IPS} ;do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379  --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health;done

https://192.168.241.36:2379 is healthy: successfully committed proposal: took = 24.944883ms

https://192.168.241.37:2379 is healthy: successfully committed proposal: took = 30.880253ms

https://192.168.241.43:2379 is healthy: successfully committed proposal: took = 22.769771ms

## Step 3: install the container runtime (docker)

docker's settings live in 03.runtime.yml:

root@k8s-master1:/etc/kubeasz# vi playbooks/03.runtime.yml

Because our docker version is 19.03, comment out this condition under the block; otherwise the unmet sub-condition makes the play fail, and the later tasks never run.



./ezctl setup k8s-01 03   # container runtime

./ezctl setup k8s-01 04   # kube-master

./ezctl setup k8s-01 05   # kube-node

./ezctl setup k8s-01 06   # network plugin

root@k8s-master1:/etc/kubeasz# vi roles/docker/tasks/main.yml      # comment out the lines below


# Install coredns

root@k8s-master1:~# kubectl apply -f coredns-huahualin.yaml



wget https://dl.k8s.io/v1.21.5/kubernetes.tar.gz

wget https://dl.k8s.io/v1.21.5/kubernetes-client-linux-amd64.tar.gz

wget https://dl.k8s.io/v1.21.5/kubernetes-server-linux-amd64.tar.gz

wget https://dl.k8s.io/v1.21.5/kubernetes-node-linux-amd64.tar.gz

tar xf kubernetes-client-linux-amd64.tar.gz

tar xf kubernetes-node-linux-amd64.tar.gz

tar xf kubernetes-server-linux-amd64.tar.gz

tar xf kubernetes.tar.gz

cd kubernetes/

cd cluster/addons/dns/coredns/

cp coredns.yaml.base /root/

cd /root/

mv coredns.yaml.base coredns-huahualin.yaml

vi coredns-huahualin.yaml

# Replace __DNS__DOMAIN__ with the same domain as the Cluster DNS Domain in /etc/kubeasz/clusters/k8s-01/hosts

        #kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa {

        kubernetes huahualin.local  in-addr.arpa ip6.arpa {

        # Records k8s is not authoritative for (i.e., not under huahualin.local) go to the dedicated internal DNS configured here; anything that one is not authoritative for is forwarded on to Internet DNS.

        #forward . /etc/resolv.conf {

        forward . 223.6.6.6 {

        ## Pull the coreDNS image from a domestic mirror, re-tag it, and push it to the local registry:

        ##docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.3

        ##docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.3  www.harbor1.com/baseimages/coredns:1.8.3

        ##docker push www.harbor1.com/baseimages/coredns:1.8.3

      containers:

      - name: coredns

        image: www.harbor1.com/baseimages/coredns:1.8.3

        imagePullPolicy: IfNotPresent

        resources:

          limits:

            #memory: __DNS__MEMORY__LIMIT__

            memory: 256Mi

    ## Set clusterIP; by convention it is the second IP of the service network, i.e., the second address of the SERVICE_CIDR 10.100.0.0/16 configured in hosts: 10.100.0.2

      #clusterIP: __DNS__SERVER__

  clusterIP: 10.100.0.2

## To expose the metrics port for Prometheus, the Service must be switched to NodePort:

spec:

  type: NodePort

  selector:

    k8s-app: kube-dns

  #clusterIP: __DNS__SERVER__

  clusterIP: 10.100.0.2

  ports:

  - name: dns

    port: 53

    protocol: UDP

  - name: dns-tcp

    port: 53

    protocol: TCP

  - name: metrics

    port: 9153

    protocol: TCP

    targetPort: 9153

    nodePort: 30009
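The clusterIP convention above (second address of SERVICE_CIDR; the first conventionally goes to the `kubernetes` Service itself) can be computed rather than hand-derived:

```python
import ipaddress

def dns_cluster_ip(service_cidr):
    """Second usable address of the service CIDR, per the convention above."""
    hosts = ipaddress.ip_network(service_cidr).hosts()
    next(hosts)              # first address: typically the `kubernetes` Service
    return str(next(hosts))  # second address: kube-dns clusterIP

print(dns_cluster_ip("10.100.0.0/16"))  # 10.100.0.2
```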


### Access the dashboard:

# On node2, check whether port 30002 is listening: ss -tnl

# Browse to https://192.168.241.58:30002

Authentication is required:

Create an account on the master:

vi admin-user.yaml

apiVersion: v1

kind: ServiceAccount

metadata:

  name: admin-user

  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: admin-user

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: admin-user

  namespace: kubernetes-dashboard

root@k8s-master1:~# kubectl apply -f admin-user.yaml

serviceaccount/admin-user created

clusterrolebinding.rbac.authorization.k8s.io/admin-user created

root@k8s-master1:~# kubectl get secret -A |grep admin

kubernetes-dashboard  admin-user-token-25dtt                          kubernetes.io/service-account-token  3      31s

root@k8s-master1:~# kubectl  describe secret admin-user-token-25dtt -n kubernetes-dashboard

Name:        admin-user-token-25dtt

Namespace:    kubernetes-dashboard

Labels:     

Annotations:  kubernetes.io/service-account.name: admin-user

              kubernetes.io/service-account.uid: 4d4c3775-1d02-4f26-b7b5-58d1ab309025

Type:  kubernetes.io/service-account-token

Data

====

ca.crt:    1350 bytes

namespace:  20 bytes

token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImxmT3VNSGNRWTQ5ZEJIRUcwNi1rRG1YMGtNR1JDcUgybmNxSVl1SmdUWHMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTI1ZHR0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0ZDRjMzc3NS0xZDAyLTRmMjYtYjdiNS01OGQxYWIzMDkwMjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Z1TO62EHBTd2JIKmu5cqNvHejAhbjPHvrhy1DEk6-LoW10hm60X0z558ifIF5-MtZTdhmqoVXjsbUn8tawGAsZ-i7PH8RD10hOqUUKKrvdTZ-vXOBpoqksocsPSsmjRzNO7tSNzSbiSMU8pGR9EbQNPH7vIqi2I0i7747Kb789iLFZ7drKH2iowf_aE-jSXpq8L-b8Vu_rmkqJJtHBuxzUGYMuqSSOIf9dA5dOkzGCfE7xwIpKFrViLs_uneoSCFCHbOIeIhYZ8pOJKBDQc-5yff4Tuagxo7ruWx3tZQbBmHWlpmNFsMHuJG5yGhyHnRofsBCJPfqfOshJ3pVnPfQw
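Copying the token out of the `kubectl describe` output by hand is error-prone; a small parser for the output layout shown above (it assumes the token line starts with "token:"):

```python
# Pull the 'token:' field out of `kubectl describe secret` output
# (assumes the layout shown above; whitespace after the colon is arbitrary).
def extract_token(describe_output):
    for line in describe_output.splitlines():
        if line.startswith("token:"):
            return line.split(None, 1)[1]
    return None

sample = "ca.crt:    1350 bytes\nnamespace:  20 bytes\ntoken:      eyJhbGciOi...snipped"
print(extract_token(sample))  # eyJhbGciOi...snipped
```

Paste the extracted token into the dashboard's login form.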



