[K3S Part 1] Deploying a K3S Cluster (Single Master)

Contents

K3S Embedded Components and Versions

K3S Architecture Topology

Online Quick Install (Quick-Start - Install Script)

Offline Install (Air-Gap Install)

Advanced Configuration

Private Registry Configuration

Using Docker as the Container Runtime

Running Client Commands as a Non-root User

Enabling the Traefik Dashboard

Publishing Services via Ingress

FAQ

 

K3S Embedded Components and Versions

As the table below shows, K3S embeds almost every key component a K8S cluster needs. The components from Kubernetes down to Flannel are compiled into the single k3s binary; those from Metrics-server down to Local-path-provisioner are delivered as containers. The embedded Kubernetes component includes kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. Which services start depends on the arguments passed: /usr/local/bin/k3s server starts the services needed on a master node, while /usr/local/bin/k3s agent starts those needed on a worker node.

In addition, the k3s binary bundles the configuration that K8S and its components need at startup.
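
The single-binary packaging works like a multi-call tool: the sub-command (or the name of the symlink the binary is invoked through, as with the kubectl/crictl/ctr symlinks the installer creates) selects the behavior. A toy illustration of the pattern, not k3s's actual code:

```shell
cd "$(mktemp -d)"
cat > k3s-demo <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
  kubectl) echo "acting as kubectl" ;;
  *)
    case "$1" in
      server) echo "starting apiserver, controller-manager, scheduler, kubelet, kube-proxy" ;;
      agent)  echo "starting kubelet, kube-proxy" ;;
      *)      echo "usage: k3s-demo server|agent" ;;
    esac ;;
esac
EOF
chmod +x k3s-demo
./k3s-demo server            # master-node role
ln -sf k3s-demo kubectl      # same trick as /usr/local/bin/kubectl -> k3s
./kubectl                    # dispatches on the invoked name
```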

Component                v1.18.17+k3s1   v1.19.9+k3s1    v1.20.5+k3s1
Kubernetes               v1.18.17        v1.19.9         v1.20.5
Kine                     -               -               v0.6.0
SQLite                   3.33.0          3.33.0          3.33.0
Etcd                     -               v3.4.13-k3s1    v3.4.13-k3s1
Containerd               v1.3.10-k3s2    v1.4.4-k3s1     v1.4.4-k3s1
Runc                     v1.0.0-rc10     v1.0.0-rc92     v1.0.0-rc92
Flannel                  v0.11.0-k3s.2   v0.12.0-k3s1    v0.12.0-k3s1
Metrics-server           v0.3.6          v0.3.6          v0.3.6
Traefik                  1.7.19          1.7.19          1.7.19
CoreDNS                  v1.6.9          v1.6.9          v1.8.0
Helm-controller          v0.7.3          v0.7.3          v0.8.3
Local-path-provisioner   v0.0.11         v0.0.14         v0.0.19

K3S Architecture Topology

The diagram below appears to show the architecture topology of v1.18.17+k3s1 and earlier. The Tunnel Proxy concept works like this: Running load balancer 127.0.0.1:44975 -> [k3s-node01:6443]. The k3s agent talks to the local 127.0.0.1:44975, which solves the communication problem between the Master Cluster and the Node Cluster.

[Figure 1: K3S architecture topology]

Online Quick Install (Quick-Start - Install Script)

# Install the master
$ curl -sfL https://get.k3s.io > install_k3s.sh
$ sh install_k3s.sh
# or
$ curl -sfL https://get.k3s.io | sh -

# Install a worker node
$ K3S_URL=https://k3s-node01:6443 K3S_TOKEN=K10bd1ee9cf94da0a0f02a3797a114aa2b27b2c0b10c9fe9e2abfb4bca2166a0439::server:378661cd25787d1e9c09d16ba77b50f4 sh install_k3s.sh
# or
$ curl -sfL https://get.k3s.io | K3S_URL=https://k3s-node01:6443 K3S_TOKEN=K10bd1ee9cf94da0a0f02a3797a114aa2b27b2c0b10c9fe9e2abfb4bca2166a0439::server:378661cd25787d1e9c09d16ba77b50f4 sh -
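
The two values the worker join needs both come from the master; a sketch that assembles the join command (the token path is the k3s default, and the token value used when the file is unreadable is a placeholder):

```shell
token_file=/var/lib/rancher/k3s/server/node-token   # generated by the master install
if [ -r "$token_file" ]; then
  K3S_TOKEN=$(cat "$token_file")
else
  K3S_TOKEN='<paste the token from the master here>'
fi
join_cmd="K3S_URL=https://k3s-node01:6443 K3S_TOKEN=$K3S_TOKEN sh install_k3s.sh"
echo "$join_cmd"
```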

K3S_TOKEN comes from the token file generated during the master install: /var/lib/rancher/k3s/server/node-token. k3s-node01 is the master's hostname, which must be made resolvable on each worker node via /etc/hosts. The install output looks like this:

[sudo] password for xiao: 
[INFO]  Finding release for channel stable
[INFO]  Using v1.20.5+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.20.5+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.20.5+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[sudo] password for xiao: 
Rancher K3s Common (stable)                                                                550  B/s | 998  B     00:01    
Dependencies resolved.
===========================================================================================================================
 Package                 Architecture Version                                        Repository                       Size
===========================================================================================================================
Installing:
 k3s-selinux             noarch       0.2-1.el7_8                                    rancher-k3s-common-stable        13 k
Installing dependencies:
 container-selinux       noarch       2:2.155.0-1.module_el8.3.0+699+d61d9c41        AppStream                        51 k
Enabling module streams:
 container-tools                      rhel8                                                                               
……
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

The worker-node install output is the same as above; the only difference is the service started: k3s-agent.service (/usr/local/bin/k3s agent) on workers versus k3s.service (/usr/local/bin/k3s server) on the master.

Reference: Quick-Start Guide

Offline Install (Air-Gap Install)

# Copy the k3s binary into place and make it executable
$ sudo mv k3s /usr/local/bin/
$ sudo chmod a+x /usr/local/bin/k3s

# Put the airgap image tarball where k3s looks for it
$ sudo mkdir -p /var/lib/rancher/k3s/agent/images/
$ sudo cp ./k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/

# Download the install script (or fetch it another way and upload it)
$ curl -sfL https://get.k3s.io > install_k3s.sh

# Install the K3S dependencies (the RPMs can be downloaded ahead of time and installed offline)
$ yum install -y container-selinux selinux-policy-base
$ yum install -y https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.2-1.el7_8.noarch.rpm

# Install K3S (worker node shown here)
$ INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://k3s-node01:6443 K3S_TOKEN=K10bd1ee9cf94da0a0f02a3797a114aa2b27b2c0b10c9fe9e2abfb4bca2166a0439::server:378661cd25787d1e9c09d16ba77b50f4 sh install_k3s.sh
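
Since the air-gapped host cannot reach GitHub, the installer's usual checksum verification is worth doing by hand after copying the binary over (sha256sum-amd64.txt comes from the same release page as the binary). A sketch of the mechanics, demonstrated on a scratch file:

```shell
# On the real node, in the directory holding the downloaded k3s binary:
#   sha256sum --ignore-missing -c sha256sum-amd64.txt
cd "$(mktemp -d)"
echo 'payload' > artifact            # stand-in for the k3s binary
sha256sum artifact > sums.txt        # stand-in for sha256sum-amd64.txt
sha256sum -c sums.txt                # prints "artifact: OK" on a match
```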

The install output:

[sudo] password for xiao: 
[INFO]  Skipping k3s download and verify
[INFO]  Skipping installation of SELinux RPM

[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

Reference: Air-Gap Install

Advanced Configuration

Private Registry Configuration

K3S uses containerd as its container runtime by default and does not install Docker. Like Docker, it accesses image registries over HTTPS by default, so an insecure registry needs extra configuration. Since the runtime is not Docker, this cannot be done through /etc/docker/daemon.json; instead, configure the following on each worker node:

mkdir /etc/rancher/k3s/
vi /etc/rancher/k3s/registries.yaml
mirrors:
  192.168.35.100:
    endpoint:
      - "http://192.168.35.100"
systemctl restart k3s-agent
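
A variant of the same file for an HTTPS registry with a self-signed certificate: the registries.yaml schema also accepts a configs section with a TLS block. A sketch, staged to a local file first (192.168.35.100 is the example registry address from above):

```shell
cd "$(mktemp -d)"      # stage locally, then install as root on the worker
cat > registries.yaml <<'EOF'
mirrors:
  "192.168.35.100":
    endpoint:
      - "https://192.168.35.100"
configs:
  "192.168.35.100":
    tls:
      insecure_skip_verify: true
EOF
# sudo cp registries.yaml /etc/rancher/k3s/registries.yaml
# sudo systemctl restart k3s-agent
cat registries.yaml
```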

Reference: Private Registry Configuration

Using Docker as the Container Runtime

# Install Docker first
$ curl https://releases.rancher.com/install-docker/19.03.sh | sh

# Then install K3S with the --docker flag
$ curl -sfL https://get.k3s.io | sh -s - --docker

Running Client Commands as a Non-root User

$ kubectl get node
WARN[2021-04-09T19:22:42.934501246+08:00] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions 
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

$ ctr c ls
ctr: failed to dial "/run/k3s/containerd/containerd.sock": connection error: desc = "transport: error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: permission denied"
# ll /run/k3s/containerd/containerd.sock
srw-rw----. 1 root root 0 Apr  7 17:48 /run/k3s/containerd/containerd.sock

$ crictl ps -a
FATA[2021-04-09T19:01:20.029650977+08:00] load config file: open /var/lib/rancher/k3s/agent/etc/crictl.yaml: permission denied 

$ ll /var/lib/rancher/k3s/agent/etc/crictl.yaml
-rw-------. 1 root root  61 Apr  9 12:33 crictl.yaml

As shown above, a non-root user cannot run kubectl, ctr, or crictl by default, because the config and socket files they depend on have restrictive permissions that regular users cannot read. You could simply loosen those file permissions, but for security's sake it is better not to. The steps below let a regular user run kubectl; the other two commands still require root.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ vi ~/.bash_profile 
export KUBECONFIG=~/.kube/config
$ source ~/.bash_profile
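
As an alternative to copying the kubeconfig, the server can be told to write it world-readable, which is what the warning message above hints at. A sketch; note that this exposes cluster-admin credentials to every local user, so it only suits test boxes:

```shell
# At install time, pass the flag through the install script:
#   curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
# The install script also accepts the same setting as the environment
# variable K3S_KUBECONFIG_MODE. The lines below just record the mode used:
mode=644
echo "write-kubeconfig-mode: $mode"
```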

Enabling the Traefik Dashboard

Edit traefik.yaml and add the dashboard values to the Helm values (any YAML under /var/lib/rancher/k3s/server/manifests/ is loaded and applied automatically; when traefik.yaml changes, Helm picks it up and reinstalls Traefik):

vi /var/lib/rancher/k3s/server/manifests/traefik.yaml
……
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
  valuesContent: |-
……
    metrics:
      prometheus:
        enabled: true
    dashboard:
      enabled: true
      domain: "traefik.k8s.testing"
……

To verify, run the following commands, then open http://traefik.k8s.testing/dashboard/ in a browser (screenshot below):

$ kubectl get svc -n kube-system
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP    PORT(S)   AGE
……
traefik-dashboard    ClusterIP   10.43.62.217             80/TCP    5h59m
$ kubectl get ingress -n kube-system
NAME                CLASS    HOSTS                 ADDRESS         PORTS   AGE
traefik-dashboard      traefik.k8s.testing   192.168.35.13   80      5h59m
$ kubectl describe ingress traefik-dashboard -n kube-system
Name:             traefik-dashboard
Namespace:        kube-system
Address:          192.168.35.13
Default backend:  default-http-backend:80 ()
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  traefik.k8s.testing  
                          traefik-dashboard:dashboard-http (10.42.0.14:8080)
Annotations:           meta.helm.sh/release-name: traefik
                       meta.helm.sh/release-namespace: kube-system
Events:                
# Although the traefik Pod exposes several services, it runs only a single traefik container; enabling the dashboard updates traefik's config file:
$ kubectl describe deploy/traefik -n kube-system
……
    Args:
      --configfile=/config/traefik.toml
……
    Mounts:
      /config from config (rw)
……
  Volumes:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      traefik
    Optional:  false
# So traefik's config comes from the ConfigMap named "traefik"; enabling the dashboard changes that object as follows:
$ kubectl describe cm traefik -n kube-system
……
[entryPoints]
……
  [entryPoints.traefik]
  address = ":8080"
……
[api]
  entryPoint = "traefik"
  dashboard = true
……

[Figure 2: Traefik dashboard]

Publishing Services via Ingress

The YAML below defines the Tomcat app's Service. When configuring the Ingress you will reference either spec.ports.name or spec.ports.port, which map to ingress.spec.rules.http.paths.backend.service.port.name and ingress.spec.rules.http.paths.backend.service.port.number respectively; the two serve the same purpose and cannot both be set.

apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
#  type: NodePort
#  ports:
#    - port: 8080
#      nodePort: 30080
  type: ClusterIP
  ports:
  - name: tomcat-http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: myweb
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat-app
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: demo.k8s.testing
    http:
      paths:
      - pathType: "Prefix"
        path: "/demo/"
        backend:
          service: 
            name: myweb
            port: 
#              number: 8080
              name: tomcat-http
$ kubectl apply -f tomcat-ingress.yaml 
ingress.networking.k8s.io/tomcat-app created
$ kubectl get ingress
NAME         CLASS    HOSTS              ADDRESS         PORTS   AGE
tomcat-app      demo.k8s.testing   192.168.35.13   80      3m17s
$ kubectl describe ingress/tomcat-app
Name:             tomcat-app
Namespace:        default
Address:          192.168.35.13
Default backend:  default-http-backend:80 ()
Rules:
  Host              Path  Backends
  ----              ----  --------
  demo.k8s.testing  
                    /demo/   myweb:tomcat-http (10.42.2.14:8080,10.42.2.15:8080)
Annotations:        kubernetes.io/ingress.class: traefik
Events:             

Open http://demo.k8s.testing/demo/ in a browser (you must use the Host configured in the Ingress; accessing the node IP directly does not work). The result:

[Figure 3: Tomcat demo published via Ingress]
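
If demo.k8s.testing is not resolvable on the client machine, curl can supply the routing key directly instead of editing /etc/hosts (192.168.35.13 is the node address shown above); a sketch:

```shell
# Either send the Host header by hand:
#   curl -H 'Host: demo.k8s.testing' http://192.168.35.13/demo/
# or pin the name for this one request:
#   curl --resolve demo.k8s.testing:80:192.168.35.13 http://demo.k8s.testing/demo/
hdr='Host: demo.k8s.testing'
echo "$hdr"
```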

FAQ

Problem: kubectl get node does not list the newly added worker node, and the k3s-agent service logs errors:
Apr 08 10:50:45 k3s-node02 k3s[29057]: time="2021-04-08T10:50:45+08:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Apr 08 10:50:45 k3s-node02 k3s[29057]: time="2021-04-08T10:50:45+08:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/926943070e893920b703e893777d0cdc577dea7609f819abb14a>
Apr 08 10:50:47 k3s-node02 k3s[29057]: time="2021-04-08T10:50:47.570021324+08:00" level=info msg="Starting k3s agent v1.20.5+k3s1 (355fff30)"
Apr 08 10:50:47 k3s-node02 k3s[29057]: time="2021-04-08T10:50:47.570588595+08:00" level=info msg="Module overlay was already loaded"
Apr 08 10:50:47 k3s-node02 k3s[29057]: time="2021-04-08T10:50:47.570643821+08:00" level=info msg="Module nf_conntrack was already loaded"
Apr 08 10:50:47 k3s-node02 k3s[29057]: time="2021-04-08T10:50:47.570657372+08:00" level=info msg="Module br_netfilter was already loaded"
Apr 08 10:50:47 k3s-node02 k3s[29057]: time="2021-04-08T10:50:47.584352720+08:00" level=info msg="Running load balancer 127.0.0.1:44975 -> [k3s-node01:6443]"
Apr 08 10:50:47 k3s-node02 k3s[29057]: time="2021-04-08T10:50:47.586248643+08:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:44975/cacerts\": read tcp 127.0.0.1:57280->127.0.0.1:44975: read: connection reset by peer"
Apr 08 10:50:49 k3s-node02 k3s[29057]: time="2021-04-08T10:50:49.588085938+08:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:44975/cacerts\": read tcp 127.0.0.1:57280->127.0.0.1:44975: read: connection reset by peer"

Solution: disable firewalld on all nodes.
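
Disabling firewalld outright is the blunt fix; if the firewall must stay on, opening the ports k3s needs should also work (port list per the K3s requirements documentation). A sketch:

```shell
# 6443/tcp: apiserver/supervisor, 8472/udp: Flannel VXLAN, 10250/tcp: kubelet
#   sudo firewall-cmd --permanent --add-port=6443/tcp
#   sudo firewall-cmd --permanent --add-port=8472/udp
#   sudo firewall-cmd --permanent --add-port=10250/tcp
#   sudo firewall-cmd --reload
ports='6443/tcp 8472/udp 10250/tcp'
echo "$ports"
```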

Problem: running $ curl -sfL https://get.k3s.io | K3S_URL=https://k3s-node01:6443 K3S_TOKEN=K10bd1ee9cf94da0a0f02a3797a114aa2b27b2c0b10c9fe9e2abfb4bca2166a0439::server:378661cd25787d1e9c09d16ba77b50f4 sh -
fails with:

curl: (52) Empty reply from server
Solution: add "192.168.35.13 k3s-node01" to /etc/hosts.
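
A sketch of an idempotent version of this fix, shown against a scratch file (point it at /etc/hosts, as root, on the real node):

```shell
hosts_file=$(mktemp)   # use /etc/hosts (as root) on the real worker
grep -q 'k3s-node01' "$hosts_file" || echo '192.168.35.13 k3s-node01' >> "$hosts_file"
cat "$hosts_file"      # re-running the line above adds nothing the second time
```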
