1.1 k8s Installation

1. Preface

Based on a CSDN article; I went through the installation myself and hit the pitfalls along the way.

2. Environment Preparation

2.1 Server preparation

Buy pay-as-you-go ECS instances on Alibaba Cloud; it is much more convenient than building VMs on your own machine.
Buy 3 instances: 1 as the master and 2 as workers.

2.2 Installing dependencies
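
I didn't write this step down in detail. Roughly, what the later sections rely on is the etcd v3.3.9 release and the Kubernetes server binary tarball; the Kubernetes version below is only an example (the exact version isn't recorded here), so substitute whichever release you actually use:

wget https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz    # binaries end up in kubernetes/server/bin

The server tarball contains all the binaries used below: kube-apiserver, kube-controller-manager, kube-scheduler and kubectl for the master, kubelet and kube-proxy for the nodes.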

2.3 Pre-installation configuration

  • Stop and disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
  • Disable SELinux:
setenforce 0
  • Create the file /etc/sysctl.d/k8s.conf and add the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
  • Run the following commands to make the changes take effect:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
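  • (Optional) Verify that the module is loaded and the settings took effect; all three sysctl values should print as 1:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward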
  • Install Docker
    Step 1: Install some necessary system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2: Add the software repository information

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3: Update the cache and install Docker CE

sudo yum makecache fast
sudo yum -y install docker-ce

Step 4: Start the Docker service

sudo service docker start

Step 5: Enable Docker to start on boot

sudo systemctl enable docker

Step 6: Configure the Alibaba Cloud registry mirror (image accelerator):

mkdir -p /etc/docker

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload

systemctl restart docker
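
To confirm the accelerator is in effect, the mirror URL should show up under Registry Mirrors in the Docker info output:

docker info | grep -A 1 "Registry Mirrors"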

Step 7: Adjust the SSH configuration so that crt (the SSH client) sessions do not time out

1. Edit the sshd_config file:
vim /etc/ssh/sshd_config
ClientAliveInterval 0     # interval between server keep-alive messages (0 disables them)
ClientAliveCountMax 3     # how many unanswered keep-alive messages are allowed

Finally, restart the service:
service sshd restart

3. Installing the Master Components

3.1 Installing etcd

Before installing the other k8s components on the master, etcd has to be installed first. We already downloaded it earlier; now unpack it:

tar -zxvf etcd-v3.3.9-linux-amd64.tar.gz

Then change into the extracted directory and copy etcd and etcdctl into /usr/bin:

cd etcd-v3.3.9-linux-amd64
cp etcd etcdctl /usr/bin/

Next, edit the etcd service unit file:

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=etcd.service
 
[Service]
Type=notify
TimeoutStartSec=0
Restart=always
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
 
[Install]
WantedBy=multi-user.target

Create the two directories referenced in the configuration above:

mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/

Edit the environment file:

vi /etc/etcd/etcd.conf

ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.28.8.193:2379"

Finally, start the etcd service and verify that it is healthy:

systemctl daemon-reload
systemctl start etcd.service
etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://172.28.8.193:2379
cluster is healthy
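
As an extra smoke test (the etcdctl bundled with etcd 3.3 defaults to the v2 API), write a key and read it back:

etcdctl set /k8s-test ok
etcdctl get /k8s-test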

3.2 Installing kube-apiserver

First, go into the server/bin directory of the Kubernetes package we unpacked earlier and copy the kube-apiserver binary into /usr/bin:

cp kube-apiserver /usr/bin/

Edit the service unit file:

vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service
 
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver  \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_LOG \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Edit the environment file:

mkdir -p /etc/kubernetes
vi /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

Start the service and verify that it is running:

systemctl daemon-reload
systemctl start kube-apiserver.service
netstat -tnlp | grep kube-api
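
If the port is listening, a quick sanity check against the insecure port configured above is:

curl http://127.0.0.1:8080/healthz
ok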

3.3 Installing kube-controller-manager

First, copy the kube-controller-manager binary into /usr/bin:

cp kube-controller-manager /usr/bin/

Edit the unit file:

vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service
 
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Edit the environment file:

vi /etc/kubernetes/controller-manager

KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_CONTROLLER_MANAGER_ARGS=""

Start the service and verify that it is running:

systemctl daemon-reload
systemctl start kube-controller-manager.service
netstat -lntp | grep kube-controll
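
The controller manager also serves a health endpoint on its default insecure port (10252 in this version range), so an additional optional check is:

curl http://127.0.0.1:10252/healthz
ok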

3.4 Installing kube-scheduler

First, copy the kube-scheduler binary into /usr/bin:

cp kube-scheduler /usr/bin/

Edit the unit file:

vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the environment configuration file:

vi /etc/kubernetes/scheduler

KUBE_MASTER="--master=http://172.28.8.193:8080"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/var/log/kubernetes --v=2"

Start the service and verify that it is running:

systemctl daemon-reload
systemctl start kube-scheduler.service
netstat -lntp | grep kube-schedule
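
With all three control-plane components up, their status can also be checked through the API server. This assumes kubectl (shipped in the same server/bin directory) has been copied to /usr/bin as well; with no kubeconfig, kubectl falls back to http://localhost:8080, which matches the insecure port configured above:

cp kubectl /usr/bin/
kubectl get componentstatuses

scheduler, controller-manager and etcd-0 should all report Healthy.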

4. Installing the Node Components

4.1 Installing kube-proxy

On each node, first copy the kube-proxy and kubelet binaries from the server/bin directory into /usr/bin (the unit files below expect them there), then edit the proxy unit file /usr/lib/systemd/system/kube-proxy.service:

vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service
 
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

After editing, save and quit with :wq. Next, edit the two configuration files specified by the EnvironmentFile entries. Before editing them, create the configuration directory:

mkdir -p /etc/kubernetes

All of the configuration files that follow live in this directory:

vi /etc/kubernetes/proxy

KUBE_PROXY_ARGS=""

vi /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_LOG_DIR="--log-dir=/var/log/kubernetes"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://172.28.8.193:8080"

Save and quit both files with :wq.
Next, start the service and verify that it started successfully:

systemctl daemon-reload
systemctl start kube-proxy.service
netstat -lntp | grep kube-proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      27624/kube-proxy    
tcp6       0      0 :::10256                :::*                    LISTEN
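
As a rough health check (kube-proxy's healthz endpoint listens on 10256 by default), the following should print 200:

curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:10256/healthz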

4.2 Installing kubelet

Similar to the above, first edit the service unit file:

vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
 
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

The directory specified by the WorkingDirectory parameter above needs to be created:

mkdir -p /var/lib/kubelet

Next, edit the configuration file:

vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=172.28.8.193"
KUBELET_API_SERVER="--api-servers=http://172.28.8.193:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"

In the configuration above, --hostname-override is the name this node registers under. Note that the unit's ExecStart only passes $KUBELET_ARGS to the kubelet, so the API server address it actually uses comes from the kubeconfig referenced by --kubeconfig; make sure cluster.server there points at the master's IP (getting this wrong is exactly the problem diagnosed in the verification section at the end).
Now edit the kubeconfig:

vi /var/lib/kubelet/kubeconfig

apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://172.31.153.198:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context

Finally, start and verify the service:

swapoff -a
systemctl daemon-reload
systemctl start kubelet.service
netstat -tnlp | grep kubelet
tcp        0      0 127.0.0.1:38496         0.0.0.0:*               LISTEN      27972/kubelet       
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      27972/kubelet       
tcp6       0      0 :::10250                :::*                    LISTEN      27972/kubelet       
tcp6       0      0 :::10255                :::*                    LISTEN
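
Once the kubelet is running, go back to the master and check whether the node has registered (again assuming kubectl is in /usr/bin there):

kubectl get nodes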

5. Verification

5.1 kubectl get nodes does not work

On the master, kubectl get nodes reported: No resources found.
Cause: the kubelet on the node was misconfigured; cluster.server in /var/lib/kubelet/kubeconfig should be the master's IP, but I had set it to the node's own IP.
Diagnosis: run systemctl status kubelet on the node and look at the error log, for example:

Jun 09 13:05:44 iZm5e4q7wqqi371ly7w5f6Z kubelet[28714]: E0609 13:05:44.022260   28714 kubelet.go:2244] node "izm5e4q7wqqi371ly7w5f6z" not found
Jun 09 13:05:44 iZm5e4q7wqqi371ly7w5f6Z kubelet[28714]: E0609 13:05:44.106022   28714 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get http://172.31.153.200:8080/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 172.31.153.200:8080: connect: connection refused
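
The fix, as a sketch (substitute your own master IP):

# on the node: point cluster.server at the master, then restart the kubelet
vi /var/lib/kubelet/kubeconfig        # set server: http://<master-ip>:8080
systemctl restart kubelet.service

# back on the master
kubectl get nodes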
