A Checklist for Manually Building a k8s Cluster

Goal

Build a 1-master, 2-node k8s cluster without certificate-based authentication.

Prepare 3 hosts:

  • 9.1 serves as the k8s cluster's master; 9.2 and 9.3 are nodes:
    1. 192.168.9.1
    2. 192.168.9.2
    3. 192.168.9.3
  • etcd cluster setup: see the detailed guide
  • Flannel installation and configuration: see the detailed guide
  • Download k8s v1.7.4 (download link)

Disable SELinux

  1. Check the current status: /usr/sbin/sestatus -v
  2. Set SELINUX=disabled: vi /etc/selinux/config
  3. Reboot the machine (the commands are sketched below)
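
A minimal non-interactive sketch of the three steps above, assuming the stock /etc/selinux/config layout; run it on all three hosts:

/usr/sbin/sestatus -v                                         # check the current mode
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # persist across reboots
setenforce 0                                                  # optionally drop to permissive right away
reboot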

Overall Plan

Install on the master

  1. kube-apiserver
  2. kube-scheduler
  3. kube-controller-manager

Install on the nodes

  1. kubelet
  2. kube-proxy

Directory Layout

/app/k8s/bin           holds all k8s executables; once created, add it to the system PATH
/app/k8s/conf          holds the k8s configuration files
/app/k8s/kubelet_data  holds the kubelet's data files
/app/k8s/certs         holds certificate files; this guide skips certificate-based authentication, but the directory still needs to exist

Initialize the directories on all three servers

mkdir -p /app/k8s/{bin,conf,kubelet_data,certs}
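
Since /app/k8s/bin has to end up on the system PATH (as noted above), one way to do it, assuming Bash login shells, is a profile.d snippet:

echo 'export PATH=$PATH:/app/k8s/bin' > /etc/profile.d/k8s.sh
source /etc/profile.d/k8s.sh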

Installing the executables

  1. Extract the downloaded kubernetes-server-linux-amd64.tar.gz and move kube-apiserver, kube-controller-manager, kubectl, and kube-scheduler into /app/k8s/bin on the master node, 192.168.9.1
  2. Extract the same tarball and move kubelet and kube-proxy into /app/k8s/bin on the node machines, 192.168.9.2/3 (see the sketch after this list)
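
A sketch of both steps, assuming the standard kubernetes/server/bin/ layout inside the v1.7.4 server tarball:

tar -xzf kubernetes-server-linux-amd64.tar.gz
# on the master (192.168.9.1):
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /app/k8s/bin/
# on the nodes (192.168.9.2/3):
cp kubernetes/server/bin/{kubelet,kube-proxy} /app/k8s/bin/
chmod +x /app/k8s/bin/*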

Common k8s configuration:

vi /app/k8s/conf/config
On all three machines, create the /app/k8s/conf and /app/k8s/certs directories and the /app/k8s/conf/config configuration file

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.9.1:8080"

Configuring the kube-apiserver

vi /app/k8s/conf/apiserver
Configure this on the master node, 192.168.9.1, only

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
# KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--insecure-bind-address=192.168.9.1"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# The dir of cert files
KUBE_CERT_DIR="--cert-dir=/app/k8s/certs"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.9.1:2379,http://192.168.9.2:2379,http://192.168.9.3:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Configure the apiserver's systemd unit file

vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/apiserver
ExecStart=/app/k8s/bin/kube-apiserver \
            $KUBE_CERT_DIR \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
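
As a quick sanity check, the apiserver's insecure port should answer on the standard /healthz endpoint:

curl http://192.168.9.1:8080/healthz
# expected response: ok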

Configuring controller-manager

vi /app/k8s/conf/controller-manager
Configure this on the master node, 192.168.9.1, only

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""

Configure controller-manager's systemd unit file

vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/controller-manager
ExecStart=/app/k8s/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
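
As with the other components, it is worth confirming that the service actually came up:

systemctl status kube-controller-manager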

Configuring the scheduler

vi /app/k8s/conf/scheduler
Master node only

###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS=""

Configure the scheduler's systemd unit file:

vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/scheduler
ExecStart=/app/k8s/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

Verify that the master node is functioning properly

kubectl -s 192.168.9.1:8080 get componentstatuses
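
If everything is healthy, the output should look roughly like this (the etcd entries depend on your etcd cluster):

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}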

Configuring the nodes

Configuring the kubeconfig

vi /app/k8s/conf/kubeconfig
Every node needs this kubeconfig. It could be generated with kubectl on the master and copied out to each node (a sketch follows the YAML below), but it is just as easy to create the YAML file directly with vi.

apiVersion: v1
clusters:
- cluster:
    server: http://192.168.9.1:8080
  name: default
- cluster:
    server: http://192.168.9.1:8080
  name: kubernetes
contexts:
- context:
    cluster: default
    user: ""
  name: default
- context:
    cluster: kubernetes
    user: ""
  name: kubernetes
current-context: default
kind: Config
preferences: {}
users: []
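
If you prefer to generate the file instead of hand-writing it, the standard kubectl config subcommands produce an equivalent result, roughly:

kubectl config set-cluster default --server=http://192.168.9.1:8080 --kubeconfig=/app/k8s/conf/kubeconfig
kubectl config set-context default --cluster=default --kubeconfig=/app/k8s/conf/kubeconfig
kubectl config use-context default --kubeconfig=/app/k8s/conf/kubeconfig
# then copy the file to /app/k8s/conf/ on each node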

Configuring the kubelet

vi /app/k8s/conf/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.9.2"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Set hostname-override according to each node's actual identity
KUBELET_HOSTNAME="--hostname-override=k8s-node-9.2"

# pod infrastructure container
# Set pod-infra-container-image to the actual image address in your (private) registry
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS="--cert-dir=/app/k8s/certs --kubeconfig=/app/k8s/conf/kubeconfig --require-kubeconfig=true --root-dir=/app/k8s/kubelet_data --container-runtime-endpoint=unix:///app/k8s/kubelet_data/dockershim.sock"
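
The file above is written for 192.168.9.2. On 192.168.9.3 only two values change (k8s-node-9.3 is an assumed name following the same pattern):

KUBELET_ADDRESS="--address=192.168.9.3"
KUBELET_HOSTNAME="--hostname-override=k8s-node-9.3"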

Configure the kubelet's systemd unit file:

vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/kubelet
ExecStart=/app/k8s/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
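
If the kubelet fails to start, its logs land in the systemd journal (a consequence of --logtostderr=true), so they can be tailed with:

journalctl -u kubelet -f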

Configuring kube-proxy

vi /app/k8s/conf/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS=""

Configure kube-proxy's systemd unit file:

vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/proxy
ExecStart=/app/k8s/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

Check node status from the master node

kubectl -s 192.168.9.1:8080 get nodes
If every node reports a Ready status, the k8s cluster setup is complete.
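
With both nodes registered, the output should look roughly like this (ages will differ):

NAME           STATUS    AGE       VERSION
k8s-node-9.2   Ready     1m        v1.7.4
k8s-node-9.3   Ready     1m        v1.7.4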
