This guide builds a K8S cluster in which every role (etcd, master, node) is deployed as a cluster of nodes, as shown in the diagram below.

[Figure 1: kubernetes 1.12.0 binary-install cluster topology]


etcd IP allocation:

etcd1: 172.16.0.5
etcd2: 172.16.0.6
etcd3: 172.16.0.7
vip:   172.16.0.100 (keepalived virtual IP)
Packages installed:
etcd, (nginx|haproxy), keepalived, chrony client
Note: nginx or haproxy provides load balancing for the apiservers; nginx is used here. The chrony client handles time synchronization; ntp works as well.

master IP allocation

master1: 172.16.0.2
master2: 172.16.0.3
master3: 172.16.0.4
Packages installed:
kube-apiserver, kube-scheduler, kube-controller-manager, docker (private registry), chrony server, cfssl
Note: the chrony server and cfssl only need to be installed on master1.

node IP allocation

node1: 172.16.0.8
node2: 172.16.0.9
node3: 172.16.0.10
Packages installed:
kubelet, kube-proxy, docker, chrony client

Note: the variables used throughout the steps below are:
ip=$(ifconfig eth0 | awk '$1=="inet"{print $2}')   # IP address of the server currently being installed
clusterip="169.169.0.0"                            # cluster IP network segment (used as ${clusterip}/16)
vip="172.16.0.100"
dockerstoreip="172.16.0.2"
etcd1ip="172.16.0.5"
etcd2ip="172.16.0.6"
etcd3ip="172.16.0.7"

Installation sources and software versions are as follows

1. Set the yum download source to the Aliyun mirror
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF


2. Rebuild the yum cache
yum -y install epel-release
yum clean all
yum makecache
yum -y update (best to update all systems before installing)


3. Software versions
centos 7.5
docker 1.11.3
kubernetes 1.12.0 (download the latest binary tar package from the official site)
chrony (installed via yum)
cfssl (installed via yum, used to create certificates)
nginx (compiled from source, with TCP forwarding support)
keepalived (installed via yum)
flannel 0.7.1 (installed via yum)

1. chrony time server

Before installing the K8s cluster, make sure time is synchronized on every node, otherwise the components will report clock-skew errors after they start. chrony can act as both the time server and the time client.
yum -y install chrony

  1. Server configuration
    #vim /etc/chrony.conf
    server ntp1.aliyun.com iburst #upstream time source; inside China the Aliyun source ntp1.aliyun.com can be added
    server ntp2.aliyun.com iburst
    server 172.16.0.2 iburst
    allow 172.16.0.0/24 #network segment of clients allowed to synchronize
    local stratum 10 #keep serving time even when the upstream sources are unreachable
  2. Client configuration
    vim /etc/chrony.conf
    server 172.16.0.2 iburst #IP of the chrony server; remove the other server lines and keep the rest unchanged
  3. Start the service on all machines
    systemctl enable chronyd
    systemctl start chronyd
    systemctl status chronyd

System tuning

Tuning covers:
disabling unnecessary services
kernel/system parameter tuning
/etc/security/limits.d/20-nproc.conf
/etc/sysctl.conf (a sketch follows this list)
disabling selinux
hardening account/login security
upgrading system packages to the latest versions and keeping the system clean
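
As a sketch (parameter values assumed, not taken from the original), the kernel settings commonly applied on K8s nodes via /etc/sysctl.conf or a drop-in file are:

cat > /etc/sysctl.d/k8s.conf << EOF
# let bridged pod traffic pass through iptables (needed by kube-proxy/flannel)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# allow the node to forward pod traffic
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter
sysctl --system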

Set up a local CA server with cfssl

Just download the cfssl binary tools; they are essentially three commands. Here they are installed on master1, where all the CA-related certificates are generated and then distributed.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfsslinfo

About the TLS certificates and keys

Of all the steps in configuring kubernetes, this is the easiest one to get wrong and the hardest to troubleshoot, yet it comes first and matters most: every kubernetes component uses TLS certificates to encrypt its communication. This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) and all other certificates.

The generated CA certificates and key files are:
ca-key.pem
ca.pem
kubernetes-key.pem
kubernetes.pem
kube-proxy.pem
kube-proxy-key.pem
admin.pem
admin-key.pem


Components and the certificates they use:
etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem
kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem, token.csv (not a certificate, but required at startup, otherwise kubelet reports errors when it starts)
kubelet: uses ca.pem
kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem
kubectl: uses ca.pem, admin-key.pem, admin.pem
kube-controller-manager: uses ca-key.pem, ca.pem
flannel: uses ca.pem, kubernetes-key.pem, kubernetes.pem (only after TLS verification succeeds will etcd hand out the IP range assigned to flannel)
Note: all of the certificate creation below is done on master1 (172.16.0.2) and then distributed to every host in the cluster. The certificates only need to be created once; when adding a new node later, simply copy the /etc/kubernetes/pki/ directory to the new node, for example with the scp loop below.
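
For example, a simple distribution loop (host list taken from the IP plan above; root ssh access between the hosts is assumed):

for host in 172.16.0.3 172.16.0.4 172.16.0.5 172.16.0.6 172.16.0.7 172.16.0.8 172.16.0.9 172.16.0.10; do
  ssh root@${host} "mkdir -p /etc/kubernetes/pki /etc/kubernetes/cfg"
  scp /root/cfssl/*.pem root@${host}:/etc/kubernetes/pki/
done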


That concludes the preparation before deploying K8S; the detailed deployment steps follow, preferably in order.


  1. Create the CA certificate signing request

    For consistent configuration and maintenance, create the following directories
    mkdir /root/cfssl #all generated certificates are kept here; create on master1 only
    mkdir /etc/kubernetes/pki #create on every server; holds the certificates the services load
    mkdir /etc/kubernetes/cfg #create on every server; holds the configuration files of the services

Create the CA root certificate signing request file

cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Create the CA configuration file

cat > ca-config.json << EOF
{
    "signing": {
        "default": {
            "expiry": "876000h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "876000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "client": {
                "expiry": "876000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "876000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

Generate the CA certificate and private key (ca.pem, ca-key.pem)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Create the kubernetes certificate signing request file

cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.0.2",
    "172.16.0.3",
    "172.16.0.4",
    "172.16.0.5",
    "172.16.0.6",
    "172.16.0.7",
    "172.16.0.8",
    "172.16.0.9",
    "172.16.0.10",
    "172.16.0.100",
    "169.169.0.1",
    "10.10.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the kubernetes certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Create the admin certificate signing request file

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
  "algo": "rsa",
  "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Generate the admin certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json|cfssljson -bare admin

Create the kube-proxy certificate signing request file

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the kube-proxy client certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

That is the creation process for every certificate the services need; following the "Components and the certificates they use" list above, place each certificate into /etc/kubernetes/pki on the corresponding servers.

Generate the kubeconfig files
Run this on a single master only. It produces the kubectl kubeconfig (saved in /root/.kube/config),
the bootstrap.kubeconfig used by kubelet on the nodes, copied to /etc/kubernetes/cfg on every node,
and the kube-proxy.kubeconfig used by kube-proxy, also copied to /etc/kubernetes/cfg on every node.

Create the kubectl kubeconfig

vip="172.16.0.100"
export KUBE_APISERVER="https://${vip}:6443"

Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

Set the client credentials
kubectl config set-credentials admin \
 --client-certificate=/etc/kubernetes/pki/admin.pem \
 --embed-certs=true \
 --client-key=/etc/kubernetes/pki/admin-key.pem

Set the context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

Set the default context
kubectl config use-context kubernetes

Create the kubelet bootstrap.kubeconfig

create token.csv
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <
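A typical completion of this step, following the standard kubelet TLS-bootstrapping layout (the user name kubelet-bootstrap, uid 10001 and group shown here are conventional values to adjust as needed):

cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cp token.csv /etc/kubernetes/cfg/    # the apiserver reads it via --token-auth-file

# build bootstrap.kubeconfig for the node kubelets
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# once the apiserver is running, allow this user to request node certificates
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap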

Create the kube-proxy kubeconfig

Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

Set the client credential parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig 

Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

I. Install etcd on the etcd servers

etcd setup

yum -y install etcd
Create the etcd systemd service

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
ExecStart=/bin/bash -c "GOMAXPROCS=\$(nproc) /usr/bin/etcd \
--name=\"\${ETCD_NAME}\" \
--data-dir=\"\${ETCD_DATA_DIR}\" \
--listen-client-urls=\"\${ETCD_LISTEN_CLIENT_URLS}\" \
--cert-file=\"\${ETCD_CERT_FILE}\" \
--key-file=\"\${ETCD_KEY_FILE}\" \
--peer-cert-file=\"\${ETCD_PEER_CERT_FILE}\" \
--peer-key-file=\"\${ETCD_PEER_KEY_FILE}\" \
--trusted-ca-file=\"\${ETCD_TRUSTED_CA_FILE}\" \
--peer-trusted-ca-file=\"\${ETCD_PEER_TRUSTED_CA_FILE}\" \
--initial-advertise-peer-urls=\"\${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" \
--listen-peer-urls=\"\${ETCD_LISTEN_PEER_URLS}\" \
--advertise-client-urls=\"\${ETCD_ADVERTISE_CLIENT_URLS}\" \
--initial-cluster-token=\"\${ETCD_INITIAL_CLUSTER_TOKEN}\" \
--initial-cluster=\"\${ETCD_INITIAL_CLUSTER}\" \
--initial-cluster-state=\"\${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF


  • Create the etcd configuration file

    cat > /etc/etcd/etcd.conf << EOF
    #[Member]
    #ETCD_CORS=""
    # change etcd3 to this host's name so the data directories are easy to tell apart
    ETCD_DATA_DIR="/var/lib/etcd/etcd3.etcd"
    #ETCD_WAL_DIR=""
    ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
    ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://${ip}:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    # this host's name
    ETCD_NAME="${HOSTNAME}"
    #ETCD_SNAPSHOT_COUNT="100000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    #ETCD_QUOTA_BACKEND_BYTES="0"
    #ETCD_MAX_REQUEST_BYTES="1572864"
    #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
    #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
    #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_DISCOVERY_SRV=""
    ETCD_INITIAL_CLUSTER="etcd1=https://${etcd1ip}:2380,etcd2=https://${etcd2ip}:2380,etcd3=https://${etcd3ip}:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    #ETCD_STRICT_RECONFIG_CHECK="true"
    #ETCD_ENABLE_V2="true"
    #[Proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    #[Security]
    ETCD_CERT_FILE="/etc/kubernetes/pki/kubernetes.pem"
    ETCD_KEY_FILE="/etc/kubernetes/pki/kubernetes-key.pem"
    #ETCD_CLIENT_CERT_AUTH="false"
    ETCD_TRUSTED_CA_FILE="/etc/kubernetes/pki/ca.pem"
    #ETCD_AUTO_TLS="false"
    ETCD_PEER_CERT_FILE="/etc/kubernetes/pki/kubernetes.pem"
    ETCD_PEER_KEY_FILE="/etc/kubernetes/pki/kubernetes-key.pem"
    #ETCD_PEER_CLIENT_CERT_AUTH="false"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/pki/ca.pem"
    #ETCD_PEER_AUTO_TLS="false"
    #[Logging]
    #ETCD_DEBUG="false"
    #ETCD_LOG_PACKAGE_LEVELS=""
    #ETCD_LOG_OUTPUT="default"
    #[Unsafe]
    #ETCD_FORCE_NEW_CLUSTER="false"
    #[Version]
    #ETCD_VERSION="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    #[Profiling]
    #ETCD_ENABLE_PPROF="false"
    #ETCD_METRICS="basic"
    #[Auth]
    #ETCD_AUTH_TOKEN="simple"
    EOF

Install this on all three machines as above; that completes the etcd cluster setup.
Note: the ${ip} variable in the configuration file can be set in advance to the IP address of the machine being installed.

Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

How to check that the cluster was installed successfully

etcdctl --endpoints=https://${etcd1ip}:2379,https://${etcd2ip}:2379,https://${etcd3ip}:2379 --ca-file=/etc/kubernetes/pki/ca.pem --cert-file=/etc/kubernetes/pki/kubernetes.pem --key-file=/etc/kubernetes/pki/kubernetes-key.pem member list
Get familiar with the options in etcdctl -h; without the certificate flags above the command returns an error, which you can verify yourself.

Create the IP range that flannel will assign to its interfaces; it must first be registered in the etcd cluster

etcdctl --endpoints=https://${etcd1ip}:2379,https://${etcd2ip}:2379,https://${etcd3ip}:2379 \
--ca-file=/etc/kubernetes/pki/ca.pem \
--cert-file=/etc/kubernetes/pki/kubernetes.pem \
--key-file=/etc/kubernetes/pki/kubernetes-key.pem \
mk /atomic.io/network/config '{"Network":"10.10.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
Note: the /atomic.io/network path is the one referenced by the FLANNEL_ETCD_PREFIX field of flannel's /etc/sysconfig/flanneld (installed below); the two must match.

Check that the flannel network range was created successfully

etcdctl --endpoints=https://${etcd1ip}:2379,https://${etcd2ip}:2379,https://${etcd3ip}:2379 \
--ca-file=/etc/kubernetes/pki/ca.pem \
--cert-file=/etc/kubernetes/pki/kubernetes.pem \
--key-file=/etc/kubernetes/pki/kubernetes-key.pem \
get /atomic.io/network/config

etcd is now installed and the internal network range for pods has been registered successfully.


II. Install the flannel network plugin

Install this plugin on all node servers and on master1; the other nodes do not need it (since in my case the cloud platform's load balancer replaces nginx+keepalived, it is installed on all master nodes as well).

Make sure etcd has started normally before installing flannel.

flannel plugin setup

yum -y install flannel
flannel systemd service

cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=\${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=\${FLANNEL_ETCD_PREFIX} \
  \$FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

flannel configuration file

# flannel talks to etcd over TLS, so it also needs the ca.pem, kubernetes.pem and kubernetes-key.pem certificates
cat > /etc/sysconfig/flanneld << EOF
# Flanneld configuration options
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.16.0.5:2379,https://172.16.0.6:2379,https://172.16.0.7:2379"
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment; must match the path created in etcd above
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass; the TLS certificates are required here
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/pki/ca.pem -etcd-certfile=/etc/kubernetes/pki/kubernetes.pem -etcd-keyfile=/etc/kubernetes/pki/kubernetes-key.pem"
EOF

Start the flannel service

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

flannel is now installed successfully.


III. Install docker

Installation nodes: master1 (which hosts the private registry) and all node servers

docker installation

Install docker from yum
yum -y install docker

docker systemd service file

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target 
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/run/containers/registries.conf
# the two lines below are added; these files are generated by flannel's mk-docker-opts.sh at startup, and sourcing them lets docker join the flannel network
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/flannel/subnet.env
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
      $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

Configure the flannel network first and only then start docker, so that the IP range docker receives is the one created in etcd earlier.

systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl status docker

docker is now installed.


Set up a private docker registry

Run the following command on master1 to start the registry container

docker run -d -p 5000:5000 --restart=always --name="docker-image" --hostname="master1" -v /dockerstore/docker-image:/registry -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/registry registry

To import downloaded images into the local registry, make the following change and then restart docker.
Edit this configuration file so docker trusts the local (insecure) registry:

cat > /etc/docker/daemon.json << EOF
{
"insecure-registries": ["172.16.0.2:5000"]
}
EOF

Import images into the local registry

docker tag k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 172.16.0.2:5000/k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker push 172.16.0.2:5000/k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

IV. Install nginx + keepalived

Build and start nginx first, then set up keepalived.

keepalived setup

Install keepalived
yum -y install keepalived

nginx health-check script
cat > /etc/keepalived/check_nginx.sh << EOF
#!/bin/bash
# if nginx is not running, stop keepalived so the VIP fails over to another node
flag=\$(systemctl status nginx &> /dev/null;echo \$?)
if [[ \$flag != 0 ]];then
        echo "nginx is down,close the keepalived"
        systemctl stop keepalived
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh

keepalived configuration file

cat > /etc/keepalived/keepalived.conf<
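
A minimal keepalived.conf sketch for this topology. The VIP (172.16.0.100) and the check_nginx.sh script come from this document; the interface name, router_id, state, priority and auth password are assumptions to adapt per host:

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id etcd1                      # set per host
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER                         # MASTER on one LB node, BACKUP on the other two
    interface eth0                       # interface carrying the 172.16.0.x network
    virtual_router_id 51
    priority 100                         # use a lower value on the BACKUP nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        172.16.0.100                     # the apiserver VIP used throughout this document
    }
    track_script {
        check_nginx
    }
}
EOF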

keepalived systemd service file

cat >/usr/lib/systemd/system/keepalived.service<
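
The yum package already installs a keepalived unit; a sketch of what it looks like, in case it needs to be recreated (values follow the stock CentOS unit, treat them as assumptions):

cat > /usr/lib/systemd/system/keepalived.service << EOF
[Unit]
Description=LVS and VRRP High Availability Monitor
After=network-online.target syslog.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/etc/sysconfig/keepalived
ExecStart=/usr/sbin/keepalived \$KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP \$MAINPID

[Install]
WantedBy=multi-user.target
EOF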

Start keepalived

systemctl daemon-reload
systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived

keepalived setup is complete.


nginx setup

Compile and install nginx; it provides TCP load balancing for the apiservers and the port-80 entry point for external access

cd /root/nginx
path=$(pwd)
# install the GCC toolchain
yum -y install gcc gcc-c++ patch

# unpack the modules to be compiled in (download them ahead of time)
tar zxvf $path/tengine-2.2.0.tar.gz
tar zxvf $path/tar/openssl-1.0.2p.tar.gz -C $path/src/
tar zxvf $path/tar/zlib-1.2.11.tar.gz -C $path/src/
tar zxvf $path/tar/nginx-accesskey.tar.gz -C $path/src/
tar zxvf $path/tar/lua-nginx-module-0.9.5rc2.tar.gz -C $path/src/
tar zxvf $path/tar/ngx_devel_kit-0.2.19.tar.gz -C $path/src/
unzip $path/tar/pcre-8.40.zip -d $path/src/
unzip $path/tar/waf.zip -d $path/src/
unzip $path/tar/nginx_tcp_proxy_module-master.zip -d $path/src/
unzip $path/tar/ngx_cache_purge-master.zip -d $path/src/

# environment variables
echo "export LD_LIBRARY_PATH=/usr/local/luajit/lib:\$LD_LIBRARY_PATH" >> /etc/profile 
echo "export LUAJIT_INC=/usr/local/luajit/include/luajit-2.0" >> /etc/profile
echo "export LUAJIT_LIB=/usr/local/luajit/lib" >> /etc/profile && source /etc/profile

tar zxvf $path/tar/LuaJIT-2.0.5.tar.gz -C $path/src/
cd $path/src/LuaJIT-2.0.5
make PREFIX=/usr/local/luajit
make install PREFIX=/usr/local/luajit

tengine
useradd -s /sbin/nologin nginx
cd $path/tengine-2.2.0
patch -p1 < $path/src/nginx_tcp_proxy_module-master/tcp.patch

./configure --user=nginx --group=nginx \
--prefix=/usr/local/nginx \
--lock-path=/var/run/nginx.lock \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx.pid \
--add-module=$path/src/ngx_devel_kit-0.2.19 \
--add-module=$path/src/lua-nginx-module-0.9.5rc2 \
--add-module=$path/src/ngx_cache_purge-master \
--add-module=$path/src/nginx-accesskey-2.0.3 \
--add-module=$path/src/nginx_tcp_proxy_module-master \
--with-pcre=$path/src/pcre-8.40 \
--with-openssl=$path/src/openssl-1.0.2p \
--with-zlib=$path/src/zlib-1.2.11 \
--with-http_ssl_module \
--with-http_realip_module \
--with-http_gzip_static_module \
--with-http_stub_status_module \
--with-http_flv_module \
--with-http_mp4_module \
--with-http_gunzip_module \
--with-http_random_index_module \
--with-http_secure_link_module \
--with-http_auth_request_module \
--with-http_v2_module \
--with-http_addition_module \
--with-http_sub_module \
--with-file-aio \
--with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' \
--with-ld-opt=-Wl,-rpath,/usr/local/lib

make
make install
echo "alias ng=\"cd /usr/local/nginx/vhost/\"" >> /etc/bashrc
source /etc/bashrc
source /etc/profile
mv $path/src/waf /usr/local/nginx/conf/

ln -s /usr/local/luajit/lib/libluajit-5.1.so.2 /lib64/libluajit-5.1.so.2

Create an nginx systemd service

cat > /usr/lib/systemd/system/nginx.service << EOF
[Unit]
Description=nginx - high performance web server 
After=network.target remote-fs.target nss-lookup.target

[Service] 
Type=forking 
PIDFile=/var/run/nginx.pid 
ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf 
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf 
ExecReload=/usr/local/nginx/sbin/nginx -s reload 
ExecStop=/usr/local/nginx/sbin/nginx -s stop 
ExecQuit=/usr/local/nginx/sbin/nginx -s quit 
PrivateTmp=true

[Install] 
WantedBy=multi-user.target
EOF
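
The nginx.conf content itself is not shown in this document. With the nginx_tcp_proxy_module patched in above, a TCP load-balancing block for the three apiservers could look like the following sketch (directive values are assumptions; the block sits outside the http {} section):

cat >> /usr/local/nginx/conf/nginx.conf << EOF
tcp {
    upstream apiservers {
        # the three kube-apiservers behind the VIP
        server 172.16.0.2:6443;
        server 172.16.0.3:6443;
        server 172.16.0.4:6443;
        check interval=3000 rise=2 fall=5 timeout=1000 type=tcp;
    }
    server {
        listen 6443;
        proxy_pass apiservers;
    }
}
EOF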

Start nginx

systemctl daemon-reload
systemctl enable nginx
systemctl start nginx

nginx setup is complete.

V. Set up kube-apiserver, kube-controller-manager and kube-scheduler on the masters

Perform these steps on all three master servers.
Copy the binaries to /usr/local/bin

  • cd kubernetes/server/bin/
  • cp kube-apiserver /usr/local/bin
  • cp kube-controller-manager /usr/local/bin
  • cp kube-scheduler /usr/local/bin
  • cp kubectl /usr/local/bin

kube-apiserver setup

kube-apiserver configuration file

cat >/etc/kubernetes/cfg/kube-apiserver<
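
A sketch of what KUBE_API_ARGS typically contains in this layout. The etcd endpoints, certificate paths, the 169.169.0.0/16 service range, the log directory and token.csv come from this document; every other flag value is an assumption to adapt:

cat > /etc/kubernetes/cfg/kube-apiserver << EOF
KUBE_API_ARGS="--etcd-servers=https://${etcd1ip}:2379,https://${etcd2ip}:2379,https://${etcd3ip}:2379 \\
  --bind-address=${ip} --secure-port=6443 \\
  --service-cluster-ip-range=169.169.0.0/16 \\
  --service-node-port-range=30000-50000 \\
  --authorization-mode=Node,RBAC \\
  --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/cfg/token.csv \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem \\
  --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/pki/ca.pem \\
  --etcd-certfile=/etc/kubernetes/pki/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/pki/kubernetes-key.pem \\
  --allow-privileged=true \\
  --logtostderr=false --log-dir=/var/log/kubernetes/apiserver --v=2"
EOF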

Create the log directory
mkdir -p /var/log/kubernetes/apiserver/
kube-apiserver systemd service file

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/cfg/kube-apiserver
ExecStart=/usr/local/bin/kube-apiserver  $KUBE_API_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Check that kube-apiserver can be reached

Test from the master1 machine
curl https://172.16.0.2:6443/version --cert  /etc/kubernetes/pki/ca.pem --key /etc/kubernetes/pki/ca-key.pem --cacert /etc/kubernetes/pki/ca.pem 
{
  "major": "1",
  "minor": "12+",
  "gitVersion": "v1.12.0-rc.2",
  "gitCommit": "5a80e28431c7469d677c5b17277266d1da4e5c8d",
  "gitTreeState": "clean",
  "buildDate": "2018-09-21T21:52:36Z",
  "goVersion": "go1.10.4",
  "compiler": "gc",
  "platform": "linux/amd64"

    测试vip访问
    curl https://172.16.0.100:6443/version --cert  /etc/kubernetes/pki/ca.pem --key /etc/kubernetes/pki/ca-key.pem --cacert /etc/kubernetes/pki/ca.pem 
{
  "major": "1",
  "minor": "12+",
  "gitVersion": "v1.12.0-rc.2",
  "gitCommit": "5a80e28431c7469d677c5b17277266d1da4e5c8d",
  "gitTreeState": "clean",
  "buildDate": "2018-09-21T21:52:36Z",
  "goVersion": "go1.10.4",
  "compiler": "gc",
  "platform": "linux/amd64"

kube-controller-manager setup

kube-controller-manager configuration file

cat >/etc/kubernetes/cfg/kube-controller-manager <
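
A sketch of the kube-controller-manager arguments consistent with this setup; every value is an assumption to adapt (the kubeconfig path is the one produced in the kubectl kubeconfig step above):

cat > /etc/kubernetes/cfg/kube-controller-manager << EOF
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/root/.kube/config \\
  --service-cluster-ip-range=169.169.0.0/16 \\
  --cluster-cidr=10.10.0.0/16 \\
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/pki/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --leader-elect=true --v=2"
EOF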

kube-controller-manager systemd service file

cat > /usr/lib/systemd/system/kube-controller-manager.service <
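
A minimal unit-file sketch, following the same pattern as the kube-apiserver unit above (the environment variable name matches the configuration sketch):

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/cfg/kube-controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF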

Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

kube-scheduler setup

kube-scheduler configuration file

cat > /etc/kubernetes/cfg/kube-scheduler <
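
A sketch of the kube-scheduler arguments, again with assumed values:

cat > /etc/kubernetes/cfg/kube-scheduler << EOF
KUBE_SCHEDULER_ARGS="--kubeconfig=/root/.kube/config --leader-elect=true --v=2"
EOF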

kube-scheduler systemd service file

cat > /usr/lib/systemd/system/kube-scheduler.service <
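
And a matching unit-file sketch:

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/cfg/kube-scheduler
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF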

Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

That is every service that needs to be installed on the three master servers.


VI. Install kubelet and kube-proxy on all node servers

kubelet setup

kubelet configuration file

cat > /etc/kubernetes/cfg/kubelet.conf << EOF
KUBELET_ARGS="--cgroup-driver=systemd \
--hostname-override=${ip} \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--cert-dir=/etc/kubernetes/pki \
--bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \
--register-node=true \
--fail-swap-on=false \
--pod-infra-container-image=${dockerstoreip}:5000/k8s.gcr.io/pause-amd64:3.1"
# the flag above points at the pause base image in the private registry; without it, later pod creation fails
EOF

kubelet systemd service file

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/cfg/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

kube-proxy setup

kube-proxy configuration file

cat > /etc/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_ARGS="--bind-address=${ip} --hostname-override=${ip} --cluster-cidr=${clusterip}/16 --kubeconfig=/etc/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

kube-proxy systemd service file

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

The official kubernetes-server-linux-amd64.tar.gz package already includes the add-ons installed below, along with the corresponding yaml files; you only need to pull the matching container images.

coredns setup

#cd kubernetes/cluster/addons/dns/coredns
#cp coredns.yaml.base coredns.yaml
#vim coredns.yaml and edit the following
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {    # change the zone to cluster.local
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }

 containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6   # keep as-is if this image can be pulled; otherwise point it at your registry
        imagePullPolicy: IfNotPresent

spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.2    # set your own DNS cluster IP; it must fall inside the 169.169.0.0 service range

After these changes, apply the manifest

kubectl apply -f coredns.yaml

To verify, create a test pod and check DNS resolution inside it

#kubectl run nginx --image=nginx   # assumed example command; on 1.12 the client warns:
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
#kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-84c94b7c87-p4n2f   1/1     Running   0          9s
#kubectl exec -it nginx-84c94b7c87-p4n2f /bin/sh
# cat /etc/resolv.conf
nameserver 169.169.0.2      # this is the coredns clusterIP configured above
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

dashboard setup

Before the following steps, push the kubernetes-dashboard-amd64:v1.10.0 image into the private registry
#cd kubernetes/cluster/addons/dashboard
#ls
dashboard-configmap.yaml  dashboard-controller.yaml  dashboard-rbac.yaml  dashboard-secret.yaml  dashboard-service.yaml
#vim dashboard-controller.yaml
containers:
      - name: kubernetes-dashboard
        image: 172.16.0.2:5000/k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0  # change the image to the copy in the local registry
        resources:
#kubectl create -f .
#kubectl get pod,svc,ep -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/coredns-779dfc4d59-25ntp                1/1     Running   0          8h
pod/coredns-779dfc4d59-6bbl2                1/1     Running   0          8h
pod/coredns-779dfc4d59-vjw99                1/1     Running   0          8h
pod/kubernetes-dashboard-66bddbb896-sjg92   1/1     Running   0          2d20h

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   169.169.0.2     <none>        53/UDP,53/TCP   4d21h
service/kubernetes-dashboard   ClusterIP   169.169.1.125   <none>        443/TCP         2d20h

NAME                                ENDPOINTS                                              AGE
endpoints/kube-controller-manager   <none>                                                 9d
endpoints/kube-dns                  10.10.2.2:53,10.10.54.2:53,10.10.62.2:53 + 3 more...   4d21h
endpoints/kube-scheduler            <none>                                                 9d
endpoints/kubernetes-dashboard      10.10.62.3:8443                                        2d20h
The dashboard was created successfully.

Open it in a browser at the URL below; if you run into other problems, see my separate document on understanding K8s permissions
https://masterip:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.
After entering the username, password and token, the following page appears

The error shows the permissions are wrong, so bind the dashboard service account to cluster-admin:
#vim cluster-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
#kubectl create -f cluster-rbac.yaml
Reopen the page and it now displays normally.

[Figure 2: kubernetes dashboard web UI]

That completes the K8s web UI setup.