Kubernetes Installation

1. Environment Overview and Component Versions

1.1 Nodes

IP                Role
192.168.149.136   master & node, etcd
192.168.149.137   master & node, etcd
192.168.149.138   master & node, etcd
1.2 Components

Component              Version          Notes
kubernetes             v1.11.0-beta.2   kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, kubectl
etcd                   v3.3.4           cluster data store
flannel                v0.10.0          overlay network (vxlan backend)
docker-ce              latest           container runtime, installed from the docker-ce yum repo
cfssl                  R1.2             certificate generation tooling
kubernetes-dashboard   v1.7.1           web UI
2. System Initialization and Certificate Creation

 

2.1 System Initialization

Disable SELinux and firewalld (run on all three servers):

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0    # take effect immediately; the config change below applies after reboot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

2.1.1 Update /etc/hosts

192.168.149.136 k8s-136
192.168.149.137 k8s-137
192.168.149.138 k8s-138

ansible k8s -m copy -a 'src=/etc/hosts dest=/etc/hosts'  # distribute the hosts file with ansible
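The ansible commands used throughout this walkthrough assume an inventory group named k8s that contains the three nodes and that SSH key access already works. A minimal sketch of such an inventory (the /etc/ansible/hosts location is the ansible default; adjust to your setup):

# /etc/ansible/hosts (illustrative)
[k8s]
192.168.149.136
192.168.149.137
192.168.149.138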

2.1.2 Install base packages

yum remove firewalld python-firewall -y
yum install jq socat psmisc bash-completion -y

2.1.3 Kernel parameters

vim  /etc/sysctl.d/99-sysctl.conf

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1

ansible k8s -m copy -a 'src=/etc/sysctl.d/99-sysctl.conf dest=/etc/sysctl.d/99-sysctl.conf'  # distribute with ansible

modprobe br_netfilter
sysctl -p /etc/sysctl.d/99-sysctl.conf 
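A quick way to confirm the settings are active on every node, plus an optional step so br_netfilter is loaded at boot (a minimal sketch; both values should report 1):

ansible k8s -a 'sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables'
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # optional: load the module automatically at boot (run on each node)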

 

2.2 Certificate Creation

2.2.1 Download the certificate tools

mkdir /usr/local/k8s/{bin,crts} -p 
cd /usr/local/k8s/bin/
wget -O cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /usr/local/k8s/bin/cfssl*
# add the tools to PATH
echo 'export PATH=$PATH:/usr/local/k8s/bin' > /etc/profile.d/k8s.sh
. /etc/profile.d/k8s.sh

Note: the PATH step must be done on all three nodes.
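A quick sanity check that the tools are reachable on each node (the exact output depends on the release):

cfssl version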

 

2.2.2 Create the self-signed root CA

cd /usr/local/k8s/crts/  # all certificates are created in this directory; the generated .pem files are later moved to /etc/kubernetes/ssl/

CA config file:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF


Notes:
The config file provides signing options for the CA. Multiple profiles can be defined; here a single profile named "kubernetes" is defined that can sign both server and client certificates, with a validity of 87600 hours (10 years).
  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
  • server auth: clients can use this CA to verify certificates presented by servers;
  • client auth: servers can use this CA to verify certificates presented by clients.
 

CA certificate signing request (CSR) file:

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Notes:
  • CN: Common Name; kube-apiserver extracts this field from a certificate as the requesting User Name. Browsers use it to verify whether a site is legitimate;
  • O: Organization; kube-apiserver extracts this field as the Group the requesting user belongs to.
 

Generate the certificate and key from the request file with cfssl:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ca.pem      # the CA certificate
ca-key.pem  # the CA private key
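Optionally inspect the new CA and confirm the subject matches ca-csr.json and that it is flagged as a CA:

openssl x509 -noout -subject -issuer -dates -in ca.pem
openssl x509 -noout -text -in ca.pem | grep -A1 'Basic Constraints'   # should show CA:TRUE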

 

2.3 Issue Certificates with the CA

2.3.1 Create the Kubernetes certificate

Create the kubernetes certificate signing request:

cat > kubernetes-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.149.136",
      "api-dev.k8s",
      "10.254.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "ShangHai",
            "L": "ShangHai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Notes:
  • If the hosts field is not empty, it must list every IP or domain name authorized to use the certificate. Because this certificate is later used by the kubernetes master components (and as an etcd client certificate), the master host IP is listed above;
  • It must also include the ClusterIP that kube-apiserver registers for the "kubernetes" service, normally the first IP of the service-cluster-ip-range passed to kube-apiserver (10.254.0.1 here).
 

 

Generate the kubernetes certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

2.3.2 Create the admin certificate

kubectl uses this certificate; it carries the highest privileges in the cluster.

Create the admin certificate signing request:

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Notes:
  • kube-apiserver later uses RBAC to authorize requests from clients (kubelet, kube-proxy, Pods, ...);
  • kube-apiserver predefines some RoleBindings for RBAC; for example cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants all kube-apiserver APIs;
  • The O field sets this certificate's Group to system:masters. When kubectl accesses kube-apiserver with this certificate, authentication succeeds because it is signed by our CA, and because the group system:masters is pre-authorized, it is granted access to all APIs.
 

 

Generate the admin certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

2.3.3 Create the kube-proxy certificate

Create the kube-proxy certificate signing request:

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Notes:
  • CN sets the certificate's User to system:kube-proxy;
  • The predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants the proxy-related kube-apiserver APIs.
 

Generate the kube-proxy client certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

2.3.4 Verify the certificates

Taking the kubernetes certificate as an example.

Verify the certificate with openssl:

openssl x509 -noout -text -in kubernetes.pem

Notes:
  • Confirm the Issuer field matches ca-csr.json;
  • Confirm the Subject field matches kubernetes-csr.json;
  • Confirm the X509v3 Subject Alternative Name field matches the hosts list in kubernetes-csr.json;
  • Confirm the X509v3 Key Usage and Extended Key Usage fields match the kubernetes profile in ca-config.json.

Verify with cfssl-certinfo:

cfssl-certinfo -cert kubernetes.pem
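You can also confirm that every issued certificate chains back to the CA (a minimal check; run it from the directory that still holds the .pem files, each line should report OK):

openssl verify -CAfile ca.pem kubernetes.pem admin.pem kube-proxy.pem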

2.3.5 Distribute the certificates

Copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl on every machine. The node role does not strictly need all of them, but since all three machines act as both master and node, distribute everything.

ansible  k8s -a  'mkdir -p /etc/kubernetes/ssl'
mv *.pem /etc/kubernetes/ssl/
ansible k8s -m copy -a 'src=/etc/kubernetes/ssl/ dest=/etc/kubernetes/ssl/'

2.4 Configure kubectl

2.4.1 Get the binaries

wget https://dl.k8s.io/v1.11.0-beta.2/kubernetes-server-linux-amd64.tar.gz   # contains all of the server binaries
tar -xvf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kubectl /usr/local/k8s/bin/
chmod +x /usr/local/k8s/bin/kubectl

Note: install kubectl on all three nodes.

 

2.4.2 Create the kubectl kubeconfig file

  • kubectl reads a kubeconfig-format configuration file and does not take certificates on the command line, so the certificate contents have to be embedded into a kubeconfig file for kubectl to use;
  • kube-controller-manager and kube-scheduler later reuse this file. That violates the principle of least privilege, but the default controller-manager and scheduler role bindings do not grant enough permissions to be used directly, so we wait for upstream to improve this.
cd /etc/kubernetes/
# set the apiserver address
export KUBE_APISERVER="https://api-dev.k8s"  # the master API domain; for now it can simply be resolved via /etc/hosts: 192.168.149.136 api-dev.k8s

# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER}

# set client credential parameters
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem

# set context parameters and switch to the context
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin

kubectl config use-context kubernetes
cat ~/.kube/config

# copy the config for other components to use
cp ~/.kube/config /etc/kubernetes/ssl/admin.kubeconfig

Notes:
  • The admin.pem certificate's O field is system:masters; kube-apiserver's predefined RoleBinding cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants access to all kube-apiserver APIs;
  • The generated kubeconfig is saved to ~/.kube/config.

Run these steps on all three nodes.
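To double-check what was written (a quick inspection; the embedded certificate and key data are shown in encoded/redacted form because of --embed-certs):

kubectl config view
kubectl config get-contexts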

 

2.5 Create kubeconfig Files

Processes on the Node machines such as kubelet and kube-proxy must authenticate and be authorized when talking to kube-apiserver on the Master.

Since Kubernetes 1.4, kube-apiserver supports TLS Bootstrapping, which issues TLS certificates to clients so you no longer have to generate a certificate for each one; currently this feature only issues certificates for kubelet.

2.5.1 Create the TLS Bootstrapping Token

Token auth file

The token can be any string containing 128 bits of randomness; generate it with a secure random number generator:

cd /etc/kubernetes/
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Distribute token.csv to /etc/kubernetes/ on all machines (Master and Node):

ansible k8s -m copy -a 'src=/etc/kubernetes/token.csv dest=/etc/kubernetes/token.csv'

2.5.2 Create the kubelet bootstrapping kubeconfig file

cd /etc/kubernetes
# set the apiserver address
export KUBE_APISERVER="https://api-dev.k8s"
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# set client credential parameters (token auth)
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Notes:
  • With --embed-certs=true the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig;
  • No client certificate or key is specified when setting the credentials; they are issued later by kube-apiserver through TLS bootstrapping.

 

2.5.3 Create the kube-proxy kubeconfig file

# set the apiserver address
export KUBE_APISERVER="https://api-dev.k8s"
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

# set client credential parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Notes:
  • --embed-certs=true is used for both the cluster and credential steps, so the contents of the certificate-authority, client-certificate and client-key files are all embedded into the generated kube-proxy.kubeconfig;
  • The CN of kube-proxy.pem is system:kube-proxy; the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants the proxy-related kube-apiserver APIs.

2.5.4 Distribute the kubeconfig files

Distribute the two kubeconfig files to /etc/kubernetes/ on all Node machines:

ansible k8s -m copy -a 'src=/etc/kubernetes/bootstrap.kubeconfig dest=/etc/kubernetes/bootstrap.kubeconfig'
ansible k8s -m copy -a 'src=/etc/kubernetes/kube-proxy.kubeconfig dest=/etc/kubernetes/kube-proxy.kubeconfig'

3. etcd Cluster

etcd is an open-source project started by the CoreOS team and written in Go. It is a distributed key-value store that provides reliable distributed coordination through distributed locks, leader election and write barriers. Kubernetes stores all of its data in etcd. CoreOS recommends a cluster of 5 members; this setup uses 3 nodes.

3.1 Install and Configure the etcd Cluster

There are three ways to bootstrap an etcd cluster: Static, etcd Discovery and DNS Discovery. See the official documentation for the Discovery methods; only the Static method is shown here.

3.1.1 TLS certificates

TLS certificates are needed for encrypted communication within the etcd cluster.

Create the etcd certificate signing request:

cat > /usr/local/k8s/crts/etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.149.136",
    "192.168.149.137",
    "192.168.149.138"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Notes:
The hosts field must contain the IPs of all three machines above, otherwise certificate validation will fail later.

Generate the etcd certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Notes:
Generating the certificate requires ca.pem and ca-key.pem, which were moved to /etc/kubernetes/ssl/ earlier; copy them back into /usr/local/k8s/crts first.

Distribute the certificates:

mkdir /etc/etcd/ssl -p
cp etcd*.pem /etc/etcd/ssl/
ansible k8s -a 'mkdir /etc/etcd/ssl -p'
ansible k8s -m copy -a 'src=/etc/etcd/ssl/ dest=/etc/etcd/ssl'

3.1.2 Download the binaries

Download the latest release from https://github.com/coreos/etcd/releases and put the binaries under /usr/local/k8s/bin:

wget https://github.com/coreos/etcd/releases/download/v3.3.4/etcd-v3.3.4-linux-amd64.tar.gz
tar -xf etcd-v3.3.4-linux-amd64.tar.gz 
cp etcd-v3.3.4-linux-amd64/{etcd,etcdctl} /usr/local/k8s/bin/


Notes:
Install the same way on the other two machines; the /usr/local/k8s/bin directory has to be created there first.

3.1.3 Create the etcd configuration file

Example configuration for node etcd3 (192.168.149.138); the per-node values listed in the notes below must be changed on each host.
cat > /etc/etcd/etcd.conf << EOF
# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/etcd3.etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.149.138:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.149.138:2379,http://192.168.149.138:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.149.138:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.149.136:2380,etcd2=https://192.168.149.137:2380,etcd3=https://192.168.149.138:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.149.138:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"

# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"

# [logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF



Notes:
  • On each host, adjust ETCD_NAME, ETCD_DATA_DIR, ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS and ETCD_ADVERTISE_CLIENT_URLS to that host's name and IP;
  • etcd's working/data directory is /var/lib/etcd, which must exist before the service is started;
  • For secure communication, the etcd server key pair (cert-file/key-file), the peer key pair and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file) and the client CA (trusted-ca-file) are all specified;
  • When --initial-cluster-state is new, the value of --name must appear in --initial-cluster.

3.1.4 Create the etcd systemd unit file

Create the working directory and user:

mkdir -p /var/lib/etcd
useradd -s /sbin/nologin etcd
chown -R etcd.etcd /var/lib/etcd
chown -R etcd.etcd /etc/etcd/
chown -R etcd.etcd /usr/local/k8s/bin/etcd* 

systemd unit file:

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/local/k8s/bin/etcd
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

 

3.1.5 Start the etcd cluster

 

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

Notes:
Start all three nodes at roughly the same time; the first member waits for the others to join before the cluster becomes healthy.

 

3.2 Verify the etcd Cluster

Check cluster health:

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem --endpoints "https://192.168.149.136:2379" cluster-health

member 394157f0d2d9f30 is healthy: got healthy result from https://192.168.149.138:2379
member 3560f8fafc5c65c8 is healthy: got healthy result from https://192.168.149.136:2379
member b4757a94270a479f is healthy: got healthy result from https://192.168.149.137:2379
cluster is healthy

List the cluster members and see which node is the leader:

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem --endpoints "https://192.168.149.136:2379" member list

394157f0d2d9f30: name=etcd3 peerURLs=https://192.168.149.138:2380 clientURLs=https://192.168.149.138:2379 isLeader=true
3560f8fafc5c65c8: name=etcd1 peerURLs=https://192.168.149.136:2380 clientURLs=https://192.168.149.136:2379 isLeader=false
b4757a94270a479f: name=etcd2 peerURLs=https://192.168.149.137:2380 clientURLs=https://192.168.149.137:2379 isLeader=false
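The commands above go through etcdctl's v2 API. With etcd 3.3 you can also query the v3 API, which uses different flag names (--cacert/--cert/--key); a minimal sketch:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.149.136:2379,https://192.168.149.137:2379,https://192.168.149.138:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/kubernetes.pem \
  --key=/etc/kubernetes/ssl/kubernetes-key.pem \
  endpoint health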

 

 

4. Deploy the Flannel Network

 

flannel is an overlay network tool from CoreOS that solves cross-host communication for Docker clusters. The idea: reserve a network segment in advance, give each host a slice of it, and assign every container a distinct IP from its host's slice, so all containers appear to share one flat network; underneath, packets are encapsulated and forwarded over UDP, VXLAN, etc.

 

flannel project: https://github.com/coreos/flannel

 

4.1 Create the TLS Key and Certificate

The etcd cluster has mutual TLS enabled, so flanneld needs a CA and key pair for talking to etcd.

Create the flanneld certificate signing request:

 

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the flanneld certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

Create the configuration directory:

mkdir -p /etc/flanneld/ssl
mv flanneld*.pem /etc/flanneld/ssl

4.2 Write the Cluster Pod Network Into etcd

This step only needs to be performed once, when the Flannel network is first deployed; it does not have to be repeated when deploying flanneld on the other nodes.
etcdctl \
  --endpoints=https://192.168.149.136:2379,https://192.168.149.137:2379,https://192.168.149.138:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  set /kubernetes/network/config '{"Network":"'172.18.0.0/16'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

Notes:
The Pod network written here (${CLUSTER_CIDR}, 172.18.0.0/16) must match the --cluster-cidr option of kube-controller-manager.

4.3 Install and Configure flanneld

Download the latest flanneld binary from the flannel releases page.

4.3.1 Download the binaries

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
mkdir flannel
tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz  -C flannel
mv flannel/{flanneld,mk-docker-opts.sh} /usr/local/k8s/bin/

4.3.2 Create the flanneld systemd unit file

cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/k8s/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/flanneld/ssl/flanneld.pem \\
  -etcd-keyfile=/etc/flanneld/ssl/flanneld-key.pem \\
  -etcd-endpoints=https://192.168.149.136:2379,https://192.168.149.137:2379,https://192.168.149.138:2379 \\
  -etcd-prefix=/kubernetes/network
ExecStartPost=/usr/local/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

Notes:
  • The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/docker; docker later reads the parameters in that file when configuring the docker0 bridge (see the sketch below);
  • flanneld talks to the other nodes over the interface of the system default route; on machines with several interfaces (internal and public), use the --iface option to choose the interface (the unit file above does not set it).
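For reference, after flanneld and mk-docker-opts.sh have run, /run/flannel/docker contains variables along these lines (the values are illustrative, taken from the subnet ranges used in this setup):

cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.18.79.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.18.79.1/24 --ip-masq=true --mtu=1450"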
 

4.4 Install flanneld on the Other Nodes

Distribute the certificates:

 ansible k8s -a 'mkdir -p /etc/flanneld/ssl'
 ansible k8s -m copy -a 'src=/etc/flanneld/ssl/ dest=/etc/flanneld/ssl'

Install flanneld:

ansible k8s -m copy -a 'src=/usr/local/k8s/bin/flanneld  dest=/usr/local/k8s/bin mode=0755'
ansible k8s -m copy -a 'src=/usr/local/k8s/bin/mk-docker-opts.sh  dest=/usr/local/k8s/bin mode=0755'

Distribute the systemd unit file:

ansible k8s -m copy -a 'src=/usr/lib/systemd/system/flanneld.service  dest=/usr/lib/systemd/system/flanneld.service'

4.5 Start and Check flanneld

Start flanneld:

ansible k8s -m systemd -a 'name=flanneld daemon_reload=yes enabled=yes state=started'

Check the Pod subnets assigned to each flanneld:

# view the cluster Pod network (/16)
etcdctl \
  --endpoints=https://192.168.149.136:2379,https://192.168.149.137:2379,https://192.168.149.138:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  get /kubernetes/network/config

{"Network":"172.18.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}

# list the allocated Pod subnets (/24)
etcdctl \
  --endpoints=https://192.168.149.136:2379,https://192.168.149.137:2379,https://192.168.149.138:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  ls /kubernetes/network/subnets

/kubernetes/network/subnets/172.18.55.0-24
/kubernetes/network/subnets/172.18.14.0-24
/kubernetes/network/subnets/172.18.79.0-24

# view the flanneld host IP and network parameters for a given Pod subnet

etcdctl \
  --endpoints=https://192.168.149.136:2379,https://192.168.149.137:2379,https://192.168.149.138:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  get /kubernetes/network/subnets/172.18.14.0-24

{"PublicIP":"192.168.149.138","BackendType":"vxlan","BackendData":{"VtepMAC":"e2:b3:fb:8b:54:3e"}}

After flanneld has been deployed on every node, check the routes to the allocated Pod subnets:

ip route

default via 192.168.149.2 dev ens33 proto dhcp metric 100 
172.18.14.0/24 via 172.18.14.0 dev flannel.1 onlink 
172.18.55.0/24 via 172.18.55.0 dev flannel.1 onlink 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 
192.168.149.0/24 dev ens33 proto kernel scope link src 192.168.149.136 metric 100

5. Deploy kube-master

 

The kubernetes master runs the following components:

 

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

For now these three components are deployed on the same machine, and on all three machines (a highly available master comes later).
- kube-scheduler, kube-controller-manager and kube-apiserver are tightly coupled;
- Only one kube-scheduler and one kube-controller-manager process is active at a time; when several copies run, a leader is chosen by election.

The master communicates with Pods on the node machines over the Pod network, so the Flannel network must be deployed on the master nodes as well.

Note: deploy these components on all three master nodes.

5.1 TLS Certificates

The kubernetes certificate and key generated earlier:

ls /etc/kubernetes/ssl/kubernetes* -l

-rw-------. 1 root root 1679 Jun 18 23:16 /etc/kubernetes/ssl/kubernetes-key.pem
-rw-r--r--. 1 root root 1635 Jun 18 23:16 /etc/kubernetes/ssl/kubernetes.pem

5.2 Install the Binaries

The server tarball downloaded earlier already contains all the binaries:

cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} /usr/local/k8s/bin/

5.3 Configure kube-apiserver

5.3.1 Create the audit policy file

cat > /etc/kubernetes/audit-policy.yaml << EOF
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
EOF

5.3.2 Create the kube-apiserver systemd unit file

# add the kube user and grant permissions
useradd -s /sbin/nologin kube
chown -R kube.kube /etc/kubernetes/
chown -R kube.kube /usr/local/k8s/bin/kube*
mkdir /var/log/kube-audit/
chown -R kube.kube /var/log/kube-audit/
usermod -G etcd kube
chmod -R 777 /etc/etcd/  # kube-apiserver must be able to read the etcd client certificates; this is a common pitfall

# systemd unit file

cat  > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
User=kube
ExecStart=/usr/local/k8s/bin/kube-apiserver \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \\
  --advertise-address=192.168.149.136 \\
  --bind-address=0.0.0.0 \\
  --insecure-bind-address=192.168.149.136 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1 \\
  --kubelet-https=true \\
  --enable-bootstrap-token-auth \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --service-node-port-range=30000-32766 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  --etcd-servers=https://192.168.149.136:2379,https://192.168.149.137:2379,https://192.168.149.138:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kube-audit/audit.log \\
  --event-ttl=1h \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Notes:
  • Since 1.6, kube-apiserver uses the etcd v3 API and storage format;
  • --authorization-mode=RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
  • kube-scheduler and kube-controller-manager usually run on the same machine as kube-apiserver and talk to it over the insecure port;
  • kubelet, kube-proxy and kubectl run on the Node machines; when they access kube-apiserver over the secure port they must first pass TLS authentication and then RBAC authorization;
  • kube-proxy and kubectl obtain RBAC authorization through the User/Group embedded in their certificates;
  • When kubelet TLS bootstrapping is used, do not set --kubelet-certificate-authority, --kubelet-client-certificate or --kubelet-client-key, otherwise kube-apiserver later fails to verify kubelet certificates with "x509: certificate signed by unknown authority";
  • The admission plugin list must include ServiceAccount, otherwise cluster add-ons fail to deploy; in 1.10 the --admission-control flag was renamed to --enable-admission-plugins;
  • --bind-address must not be 127.0.0.1;
  • --service-cluster-ip-range specifies the Service ClusterIP range, which must not be routable;
  • --service-node-port-range=${NODE_PORT_RANGE} specifies the NodePort range;
  • By default kubernetes objects are stored under /registry in etcd; this prefix can be changed with --etcd-prefix;
  • Since kube-apiserver 1.8, --authorization-mode must include Node (i.e. --authorization-mode=Node,RBAC), otherwise Nodes cannot register.

Official reference: kube-apiserver

 

5.3.3 Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
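A quick liveness check against the insecure port (a minimal check; port 8080 is only bound on the address given by --insecure-bind-address):

curl http://192.168.149.136:8080/healthz
# expected output: ok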

5.4 Deploy kube-controller-manager

5.4.1 Create the kube-controller-manager systemd unit file

cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
User=kube
ExecStart=/usr/local/k8s/bin/kube-controller-manager \\
  --address=127.0.0.1 \\
  --master=http://api-dev.k8s:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --cluster-cidr=172.18.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Notes:
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine;
  • --master=http://${MASTER_URL}: communicate with kube-apiserver over http (the insecure port);
  • --cluster-cidr specifies the Pod CIDR of the cluster, which must be routable between Nodes (flanneld takes care of this);
  • --service-cluster-ip-range specifies the Service CIDR of the cluster, which must not be routable between Nodes and must match the kube-apiserver setting;
  • The --cluster-signing-* certificate and key are used to sign the certificates created for TLS bootstrap;
  • --root-ca-file is used to verify the kube-apiserver certificate; when it is set, this CA certificate is placed into Pod containers' ServiceAccount secrets;
  • --leader-elect=true lets the kube-controller-manager instances of a multi-master setup elect a single active process.
 

5.4.2 Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

Note: the server entry in /etc/kubernetes/bootstrap.kubeconfig, /etc/kubernetes/kube-proxy.kubeconfig and /etc/kubernetes/ssl/admin.kubeconfig must be changed to server: http://api-dev.k8s:8080, otherwise kube-controller-manager will have trouble starting.

5.5 Deploy kube-scheduler

5.5.1 Create the kube-scheduler systemd unit file

cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
User=kube
ExecStart=/usr/local/k8s/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --master=http://api-dev.k8s:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Notes:
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine;
  • --master=http://${MASTER_URL}: communicate with kube-apiserver over http (the insecure port);
  • --leader-elect=true lets the kube-scheduler instances of a multi-master setup elect a single active process.
 

5.5.2 Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

5.6 Verify the Master Node

For a temporary test, point the kubectl config at the insecure apiserver port:

cd ~
vim .kube/config
# change the server entry to: server: http://api-dev.k8s:8080   (the default is server: https://api-dev.k8s)
kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
Deploy the remaining master nodes the same way; until the API servers sit behind a highly available endpoint, it is best not to enable the services on the other masters.

 

6. Deploy kube-node

A kubernetes Node runs the following components:

  • docker
  • kubelet
  • kube-proxy

 

Note: deploy the node components on all three machines.

6.1 Install docker-ce (run on all three nodes)

6.1.1 Uninstall old versions

yum remove docker docker-common  docker-selinux  docker-engine

6.1.2 Install Docker CE

# install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2

# Use the following command to set up the stable repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# changed mirror to aliyun
sed -i 's@https://download.docker.com/@https://mirrors.aliyun.com/docker-ce/@g' /etc/yum.repos.d/docker-ce.repo

# install docker-ce
yum install docker-ce -y

6.1.3 Custom docker configuration

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://1bdb58cb.m.daocloud.io"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

6.1.4 Hook docker up to the flanneld network

vim /usr/lib/systemd/system/docker.service
# add the EnvironmentFile line and change ExecStart as shown:
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
systemctl daemon-reload
systemctl enable docker
systemctl restart flanneld.service
systemctl restart docker.service
ifconfig

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.18.79.1  netmask 255.255.255.0  broadcast 172.18.79.255
        ether 02:42:93:12:e6:51  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.18.79.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::84d2:d0ff:fed7:7b99  prefixlen 64  scopeid 0x20<link>
        ether 86:d2:d0:d7:7b:99  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27 overruns 0  carrier 0  collisions 0

6.2 Deploy kubelet

6.2.1 kubelet bootstrapping kubeconfig

RBAC authorization

When kubelet starts it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role, otherwise kubelet has no permission to create certificate signing requests (certificatesigningrequests):

The following two commands can be run on any one of the masters.

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Notes:
--user=kubelet-bootstrap is the user name specified in /etc/kubernetes/token.csv, and the same user is written into /etc/kubernetes/bootstrap.kubeconfig.

A further RBAC binding is needed for Node requests:

kubectl create clusterrolebinding kubelet-nodes --clusterrole=system:node --group=system:nodes

The kubeconfig files themselves were created in section 2.5.

6.2.2 Distribute the kubelet and kube-proxy binaries

cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/k8s/bin/
scp kubernetes/server/bin/kubelet 192.168.149.137:/usr/local/k8s/bin/
scp kubernetes/server/bin/kube-proxy 192.168.149.137:/usr/local/k8s/bin/
scp kubernetes/server/bin/kubelet 192.168.149.138:/usr/local/k8s/bin/
scp kubernetes/server/bin/kube-proxy 192.168.149.138:/usr/local/k8s/bin/

6.2.3 Create the kubelet systemd unit file (on all three nodes)

Create the working directory:

mkdir /var/lib/kubelet

systemd unit file:

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/k8s/bin/kubelet \\
  --address=192.168.149.136 \\
  --hostname-override=k8s-136 \\
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster-dns=10.254.0.2 \\
  --cluster-domain=dns-dev.k8s \\
  --hairpin-mode promiscuous-bridge \\
  --allow-privileged=true \\
  --pod-infra-container-image=lc13579443/pause-amd64 \\
  --serialize-image-pulls=false \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Notes:
  • --address must not be 127.0.0.1, otherwise Pods calling kubelet's API will fail, because 127.0.0.1 inside a Pod points at the Pod itself rather than at kubelet;
  • If --hostname-override is set, kube-proxy must set the same value, otherwise the Node will not be found;
  • --bootstrap-kubeconfig points at the bootstrap kubeconfig; kubelet uses the user name and token in it to send the TLS bootstrapping request to kube-apiserver;
  • After the administrator approves the CSR, kubelet creates the certificate and key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and writes the file given by --kubeconfig (creating it automatically);
  • It is recommended to put the kube-apiserver address in the --kubeconfig file; if --api-servers is not given, --require-kubeconfig used to be required for kubelet to read the apiserver address from the kubeconfig (the flag is deprecated in 1.10+), otherwise kubelet cannot find the API server and kubectl get nodes shows no Node;
  • --cluster-dns specifies the kubedns Service IP (it can be allocated now and used when the kubedns service is created later); --cluster-domain specifies the domain suffix; both must be set for either to take effect;
  • --pod-infra-container-image is the pause (infra container) image; point it at your own or a private registry if needed.

6.2.4 Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Notes:
In /etc/kubernetes, the server entry of bootstrap.kubeconfig, kubelet.kubeconfig and kube-proxy.kubeconfig must be set to server: http://api-dev.k8s:8080, otherwise kubelet will not start (or will keep flapping between up and down).
 

6.2.5 Approve the kubelet TLS certificate requests

When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the Node only joins the cluster once the request has been approved.

View pending CSR requests:

kubectl get csr

NAME                                                   AGE       REQUESTOR          CONDITION
node-csr-qE_pX2eoY6u7UbXtGjFHk_fN9BvdaEn6hR_Ww3trQP0   1m        system:unsecured   Pending

Approve the CSR:

kubectl certificate approve node-csr-qE_pX2eoY6u7UbXtGjFHk_fN9BvdaEn6hR_Ww3trQP0

certificatesigningrequest.certificates.k8s.io/node-csr-qE_pX2eoY6u7UbXtGjFHk_fN9BvdaEn6hR_Ww3trQP0 approved

Notes:
This can be done in one go: kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

The kubelet kubeconfig file and key pair are then generated automatically:

ls /etc/kubernetes/

kubelet.kubeconfig

ls /etc/kubernetes/ssl

kubelet.key kubelet.crt

Check the node:

kubectl get node

NAME      STATUS    ROLES     AGE       VERSION
k8s-136   Ready     <none>    3m        v1.11.0-beta.2

 

6.3 Deploy kube-proxy (install on every node)

The kube-proxy certificate and key were already created earlier.

6.3.1 Install dependencies

yum install conntrack-tools ipvsadm -y   # ipvs is not used in this walkthrough

6.3.2 Create the kube-proxy systemd unit file

Create the working directory:

mkdir -p /var/lib/kube-proxy 

systemd unit file:

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/k8s/bin/kube-proxy \\
--bind-address=192.168.149.136 \\
--hostname-override=k8s-136 \\
--cluster-cidr=10.254.0.0/16 \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF



Notes:
  • --hostname-override must match the kubelet value, otherwise kube-proxy will not find the Node after it starts and will not create any iptables rules;
  • --cluster-cidr must match kube-apiserver's --service-cluster-ip-range;
  • kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
  • The file given by --kubeconfig embeds the kube-apiserver address, user name, certificate and key used for the request and for authentication;
  • The predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants the proxy-related kube-apiserver APIs.
 

6.3.3 Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
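Once kube-proxy is running it should have programmed its chains into the nat table (a minimal check; the KUBE-* chains are created by kube-proxy itself):

iptables -t nat -S | grep KUBE- | head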

7. Verify Cluster Functionality

Define the yaml file (save the following as nginx-ds.yaml):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Save the file (as a yaml manifest it does not need execute permission).

Create the Pod and Service:

kubectl create -f nginx-ds.yaml

service/nginx-ds created
daemonset.extensions/nginx-ds created

Check the Pods and Service:

kubectl get pods -o wide


NAME             READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-ds-7r9gd   1/1       Running   0          9m        172.18.2.2    k8s-136
nginx-ds-j5frg   1/1       Running   0          9m        172.18.63.2   k8s-138
nginx-ds-q8q47   1/1       Running   0          9m        172.18.65.2   k8s-137
kubectl get svc

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1       <none>        443/TCP        4h
nginx-ds     NodePort    10.254.100.251   <none>        80:31225/TCP   10m
From the output:

Service IP: 10.254.100.251
Service port: 80
NodePort: 31225

Run on every Node:
curl 10.254.100.251
curl k8s-136:31225
# both commands should print the nginx welcome page, which shows the Nodes are working


Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Inspect a single pod:

 kubectl get -o json pod nginx-ds-7r9gd


{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2018-06-22T11:31:51Z",
"generateName": "nginx-ds-",
"labels": {
"app": "nginx-ds",
"controller-revision-hash": "3018936695",
"pod-template-generation": "1"
},
"name": "nginx-ds-7r9gd",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "DaemonSet",
"name": "nginx-ds",
"uid": "db516972-760f-11e8-a4de-000c2978986d"
}
],
"resourceVersion": "21176",
"selfLink": "/api/v1/namespaces/default/pods/nginx-ds-7r9gd",
"uid": "dbb4df0a-760f-11e8-a4de-000c2978986d"
},
"spec": {
"containers": [
{
"image": "nginx:1.7.9",
"imagePullPolicy": "IfNotPresent",
"name": "my-nginx",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-5pnsj",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"nodeName": "k8s-136",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists"
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists"
},
{
"effect": "NoSchedule",
"key": "node.kubernetes.io/disk-pressure",
"operator": "Exists"
},
{
"effect": "NoSchedule",
"key": "node.kubernetes.io/memory-pressure",
"operator": "Exists"
}
],
"volumes": [
{
"name": "default-token-5pnsj",
"secret": {
"defaultMode": 420,
"secretName": "default-token-5pnsj"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2018-06-22T11:31:52Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-06-22T11:33:24Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": null,
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-06-22T11:31:52Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://e16c6db41396d359ccee609e6619bef4dd42f8f9f9696542267a540ea85a6401",
"image": "nginx:1.7.9",
"imageID": "docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451",
"lastState": {},
"name": "my-nginx",
"ready": true,
"restartCount": 0,
"state": {
"running": {
"startedAt": "2018-06-22T11:33:23Z"
}
}
}
],
"hostIP": "192.168.149.136",
"phase": "Running",
"podIP": "172.18.2.2",
"qosClass": "BestEffort",
"startTime": "2018-06-22T11:31:52Z"
}
}

 

8. Deploy the Dashboard

Kubernetes ships a basic graphical UI, the Dashboard. Essentially everything you can do with kubectl can also be done through this UI; both ultimately call the Kubernetes API. Deploying the Dashboard is simple: it is just one Pod (Deployment) plus one Service. The latest dashboard yaml can be found upstream: https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Because the kubernetes-dashboard and registry.access.redhat.com/rhel7/pod-infrastructure images could not be downloaded during installation, both were fetched in advance, loaded from tarballs, and pushed to a private registry to pull from at deploy time.

8.1 Run a Docker Registry

Pull the registry image:
docker pull registry

By default the registry stores images inside the container under /var/lib/registry, so they are lost if the container is removed. It is therefore usual to mount a local directory onto /var/lib/registry, as below:

mkdir /opt/registry/
docker run -d -p 5000:5000 -v /opt/registry/:/var/lib/registry registry   # map the port and mount the storage directory
iptables -I INPUT 1 -p tcp --dport 5000 -j ACCEPT   # only needed if a firewall is running

This gives us a private registry listening on port 5000 of this machine (192.168.149.136), i.e. 192.168.149.136:5000.
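Because this registry is served over plain HTTP, Docker on every node that pulls from it normally has to trust it as an insecure registry. A sketch of the extra key to merge into the /etc/docker/daemon.json created in section 6.1.3, followed by a quick check that the registry API answers:

# add to /etc/docker/daemon.json on each node, then: systemctl restart docker
#   "insecure-registries": ["192.168.149.136:5000"]

curl http://192.168.149.136:5000/v2/_catalog
# expected once images have been pushed: {"repositories":[...]}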

8.2 Push the Images to the Private Registry

Copy the two pre-downloaded image tarballs to /opt.

Load the images:

docker load < dashboard.tar
docker load < podinfrastructure.tar

Re-tag the images:

docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1 192.168.149.136:5000/google_containers/kubernetes-dashboard-amd64:latest
docker tag registry.access.redhat.com/rhel7/pod-infrastructure:latest 192.168.149.136:5000/rhel7/pod-infrastructure:latest

Push them to the local registry:

docker push 192.168.149.136:5000/google_containers/kubernetes-dashboard-amd64:latest
docker push 192.168.149.136:5000/rhel7/pod-infrastructure:latest

8.3 Deploy the Dashboard

Edit kubernetes-dashboard.yaml as follows:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.149.136:5000/google_containers/kubernetes-dashboard-amd64:latest  # change to your own private registry address
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://192.168.149.136:8080   # apiserver address (insecure port)
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

Create it:

kubectl create -f kubernetes-dashboard.yaml

Once the Deployment and Service have been created, check the mapped NodePort:

kubectl --namespace=kube-system get svc

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.254.216.212   <none>        80:32315/TCP   51m

Check the pods:

kubectl get pod --all-namespaces

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       nginx-ds-pgm45                          1/1       Running   0          2d
default       nginx-ds-qgvwj                          1/1       Running   0          2d
default       nginx-ds-rxjkr                          1/1       Running   0          2d
kube-system   kubernetes-dashboard-558bc688d7-5frjq   1/1       Running   0          52m

Note the --namespace=kube-system flag. As mentioned earlier, a fresh Kubernetes install has two namespaces by default: default and kube-system. User workloads live in default, which is also the namespace kubectl uses by default; Kubernetes' own system components live in kube-system.

Open the dashboard page:

http://192.168.149.136:32315

(screenshot: the Kubernetes dashboard UI)

Reposted from: https://www.cnblogs.com/quanloveshui/p/9197785.html
