Deploying Kubernetes from Binaries on CentOS 7

  • Build a DNS server
    • (1) Edit the main config file /etc/named.conf
    • (2) Zone definitions in /etc/named.rfc1912.zones
    • (3) Host-domain zone file /var/named/host.com.zone
    • (4) Business-domain zone file /var/named/od.com.zone
    • (5) Verify resolution
  • Install Docker
    • (1) Configure the package repository
    • (2) Configure a registry mirror (daemon.json)
  • Install the private registry Harbor
    • (1) Download and install
    • (2) Adjust configuration parameters
    • (3) Start Harbor with docker-compose
  • Install nginx
    • (1) Download and install
    • (2) Configuration file
    • (3) Create a registry project and push an image
  • Build a private CA
    • cfssl download URLs
    • Create the certificate directory
    • (1) Generate the signing config template
    • (2) Create the CA certificate signing request file
    • (3) Generate the CA certificate and key
    • (4) Create the etcd certificate signing request file
    • (5) Generate the etcd certificate and key
  • Install etcd
    • etcd download URL
    • (1) Copy the certificates and private key
    • (2) Create the etcd startup script
    • (3) Install supervisor to manage the background process
  • Install kube-apiserver
    • Download URL
    • (1) Issue the client certificate (api-server{client}-->etcd{server})
      • Create the certificate signing request (CSR) JSON file (client)
      • Create the certificate signing request (CSR) JSON file (server)
    • (2) Copy the certificate files (private keys mode 600)
    • (3) Create the kube-apiserver startup script
  • Deploy a layer-4 reverse proxy
    • (1) Install nginx
  • Deploy keepalived for an active-active kube-apiserver
    • (1) Install with yum
  • Deploy kube-controller-manager and kube-scheduler
    • (1) kube-controller-manager startup script
    • (2) Create the supervisor config
    • (3) kube-scheduler startup script
    • (4) Create the supervisor config
  • Deploy the kubelet (worker-node service)
    • (1) Create the certificate request
    • (2) Copy the certificates
    • (3) Create the kubeconfig
      • set-cluster: cluster parameters
      • set-credentials: client authentication parameters
      • set-context: context parameters
      • use-context: default context
    • (4) Bind the node role (RBAC)
    • (5) Start the kubelet service
    • (6) Create the supervisor config
  • Deploy kube-proxy
    • (1) Issue the certificate
    • (2) Copy the certificates
    • (3) Create the kubeconfig
      • set-cluster: cluster parameters
      • set-credentials: client authentication parameters
      • set-context: context parameters
      • use-context: default context
    • (4) Load the ipvs modules
    • (5) Create the startup script
    • (6) Create the supervisor config
    • (7) Verify cluster availability

Nodes:
192.168.108.128 7-11 (DNS)
192.168.108.129 7-12
192.168.108.130 7-21
192.168.108.131 7-22
192.168.108.132 7-200

Install basic tools

yum install epel-release -y
yum install wget net-tools telnet tree nmap lrzsz bind-utils -y
yum install bind -y    (only needed on the DNS node; here I use 192.168.108.128)

Build a DNS server


  • Install on node 128

(1) Edit the main config file /etc/named.conf

Items to change:
listen-on port 53 { any; };    listen address
allow-query     { any; };      allow queries from any host
recursion yes;                 enable recursive queries
named-checkconf                check the config; no output means no errors
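For reference, a minimal sketch of how those entries sit inside the options block of /etc/named.conf (only the lines mentioned above are changed; the rest of the stock file stays as shipped):

options {
        listen-on port 53 { any; };      # listen on all interfaces
        allow-query     { any; };        # any host may query
        recursion yes;                   # answer recursive queries
        ...
};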

(2) Zone definitions in /etc/named.rfc1912.zones

zone "host.com" IN {
	type master;
	file "host.com.zone"; # forward zone (file name is up to you)
	allow-update { 192.168.108.128; };
};
zone "od.com" IN {
	type master;
	file "od.com.zone";	# forward zone (file name is up to you)
	allow-update { 192.168.108.128; };
};

(3) Host-domain zone file /var/named/host.com.zone

$ORIGIN host.com.
$TTL 600

@       IN SOA  dns.host.com. dnsadmin.host.com. (
                                        2021070601;  serial		# zone serial number
                                        10800; 		 refresh 	# refresh interval
                                        900;		 retry		# retry interval
                                        604800;		 expire		# expiry
                                        86400; 		 minimum	# negative-answer cache TTL
					)

		        NS      dns.host.com.
$TTL 60

dns     		A       192.168.108.128   
ceshi-128       A       192.168.108.128 
ceshi-129       A       192.168.108.129
ceshi-130       A       192.168.108.130 
ceshi-131       A       192.168.108.131 
ceshi-132       A       192.168.108.132

(4) Business-domain zone file /var/named/od.com.zone

$ORIGIN od.com.
$TTL 600

@       IN SOA  dns.od.com. dnsadmin.od.com. (
                                        2021070601; serial
                                        10800; 		refresh
                                        900; 		retry
                                        604800; 	expire
                                        86400; 		minimum
					)

		        NS      dns.od.com.
$TTL 60

dns     	A       192.168.108.128  

A record: maps an FQDN to an IPv4 address
CNAME: alias record
MX record: mail exchanger
NS record: name-server record; to delegate a subdomain to a specific DNS server, add an NS record
AAAA record: maps an FQDN to an IPv6 address
SOA record: start of authority; NS may list several name servers, the SOA marks which one is primary
PTR record: reverse lookup, maps an IPv4 address back to an FQDN
TTL: cache lifetime

(5) Verify resolution

Restart the DNS service
[root@ceshi-128 ~]# systemctl restart named
[root@ceshi-128 ~]# systemctl enable named

Point the NIC at the new DNS server (entries written only into /etc/resolv.conf are lost after the instance restarts)
[root@ceshi-128 ~]# echo "DNS1=192.168.108.128" >> /etc/sysconfig/network-scripts/ifcfg-ens33

Restart the network service
[root@ceshi-128 ~]# systemctl restart network

Test name resolution
[root@ceshi-128 ~]# dig -t A ceshi-130.host.com +short
192.168.108.130
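The record types listed above can be queried the same way; for instance the NS and SOA records defined in od.com.zone (expected answers follow from that zone file and are shown here for illustration):

[root@ceshi-128 ~]# dig -t NS od.com @192.168.108.128 +short
dns.od.com.
[root@ceshi-128 ~]# dig -t SOA od.com @192.168.108.128 +short
dns.od.com. dnsadmin.od.com. 2021070601 10800 900 604800 86400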

Install Docker

(1) Configure the package repository


  • Install on nodes 130, 131, 132

[root@ceshi-132 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@ceshi-132 ~]# yum makecache fast
[root@ceshi-132 ~]# yum -y install docker-ce 
[root@ceshi-132 ~]# systemctl start docker

(2) Configure a registry mirror (daemon.json)

Note: JSON does not allow comments; the # annotations below are explanatory only and must be removed from the real /etc/docker/daemon.json.
[root@ceshi-132 docker]# cat daemon.json
{
    "graph": "/data/docker",
    "storage-driver": "overlay2", # storage driver to use
    # private/insecure registries
    "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"], 
    "registry-mirrors": ["https://hub-mirror.c.163.com"], # domestic mirror for faster pulls
    "bip": "192.168.108.1/24",	# docker bridge IP, e.g. 172.7.21.1/24 (use a different value on each node)
    "exec-opts": ["native.cgroupdriver=systemd"], # runtime exec options
    "live-restore": true	# keep containers running while the docker daemon restarts
 }
[root@ceshi-132 docker]# systemctl daemon-reload
[root@ceshi-132 docker]# systemctl restart docker

[root@ceshi-132 ~]# docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete 
Digest: sha256:df5f5184104426b65967e016ff2ac0bfcd44ad7899ca3bbcf8e44e4461491a9e
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest

Install the private registry Harbor


  • Install on node 132

(1) Download and install

[root@ceshi-132 ~]# wget https://github.com/goharbor/harbor/releases/download/v2.2.1/harbor-offline-installer-v2.2.1.tgz
[root@ceshi-132 ~]# tar -xf harbor-offline-installer-v2.2.1.tgz -C /usr/local/
[root@ceshi-132 ~]# mv harbor/ harbor-v2.2.1
[root@ceshi-132 ~]# ln -s /usr/local/harbor-v2.2.1/ /usr/local/harbor 

(2) Adjust configuration parameters

[root@ceshi-132 ~]# mkdir -p  /data/harbor/logs
[root@ceshi-132 ~]# yum install docker-compose -y
[root@ceshi-132 ~]# vi /usr/local/harbor/harbor.yml
Items to change:
hostname: harbor.od.com
http:
	port: 180
harbor_admin_password: 12345
log:
  local:
    location: /data/harbor/logs
[root@ceshi-132 ~]# /usr/local/harbor/install.sh


Installation error:
prepare base dir is set to /usr/local/harbor-v2.2.1
Error happened in config validation...
ERROR:root:Error: The protocol is https but attribute ssl_cert is not set

Fix:
comment out the https/ssl section in harbor.yml,
then rerun install.sh

(3) Start Harbor with docker-compose

[root@ceshi-132 harbor]# pwd
/usr/local/harbor
[root@ceshi-132 harbor]# docker-compose up -d
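To confirm the stack came up, list the Harbor containers; every service should report an "Up" state (service names differ slightly between Harbor releases, so treat the listing as illustrative):

[root@ceshi-132 harbor]# docker-compose ps
[root@ceshi-132 harbor]# docker-compose logs -f        # follow the logs if any service is unhealthy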

Install nginx

(1) Download and install


  • Install on node 132

Install dependencies:
[root@ceshi-132 harbor]#yum install gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel -y
[root@ceshi-132 harbor]# wget http://tengine.taobao.org/download/tengine-2.2.0.tar.gz
[root@ceshi-132 harbor]# tar xf tengine-2.2.0.tar.gz -C /usr/local
[root@ceshi-132 harbor]# cd /usr/local/tengine-2.2.0/
[root@ceshi-132 harbor]# ./configure 
[root@ceshi-132 harbor]#  make && make install

(2) Configuration file

vi /usr/local/nginx/conf.d/harbor.od.com.conf

    server {
        listen       80;
        server_name  harbor.od.com;				# our self-hosted domain
        client_max_body_size 1000m;
        location / {
                proxy_pass http://127.0.0.1:180; # proxy to the local Harbor port
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }    
 }
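Note that a source-built tengine/nginx ships only /usr/local/nginx/conf/nginx.conf and has no conf.d directory, so for the server block above to be loaded you need to create the directory and include it from the http block; a minimal sketch:

[root@ceshi-132 harbor]# mkdir -p /usr/local/nginx/conf.d
# inside the http { ... } block of /usr/local/nginx/conf/nginx.conf add:
    include /usr/local/nginx/conf.d/*.conf;
[root@ceshi-132 harbor]# /usr/local/nginx/sbin/nginx            # start it (or ../sbin/nginx -s reload if already running)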
 
On the DNS node (128), add an A record so the harbor domain resolves:


Edit /var/named/od.com.zone
(screenshot: od.com.zone with a new record `harbor  A  192.168.108.132` added and the serial bumped)

[root@ceshi-128 named]# systemctl restart named
[root@ceshi-128 named]# dig -t A harbor.od.com +short
192.168.108.132

(3) Create a registry project and push an image

(screenshot: create a project named public in the Harbor web UI)

Pull the nginx image
[root@ceshi-132 ~]# docker pull nginx:1.7.9
Tag the image for the private registry
[root@ceshi-132 ~]# docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
Log in to the private registry
[root@ceshi-132 ~]# docker login harbor.od.com
Username:admin
Password:******
[root@ceshi-132 ~]# docker push harbor.od.com/public/nginx:v1.7.9
The push refers to repository [harbor.od.com/public/nginx]
5f70bf18a086: Layer already exists 
4b26ab29a475: Layer already exists 
ccb1d68e3fb7: Layer already exists 
e387107e2065: Layer already exists 
63bf84221cce: Layer already exists 
e02dce553481: Layer already exists 
dea2e4984e29: Layer already exists 
v1.7.9: digest: sha256:b1f5935eb2e9e2ae89c0b3e2e148c19068d91ca502e857052f14db230443e4c2 size: 3012

(screenshot: the pushed nginx:v1.7.9 image listed under the public project in the Harbor UI)

Build a private CA


  • Work on node 132

cfssl download URLs:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*

Create the certificate directory


  • Node 132

[root@ceshi-132 ~]# mkdir -p /opt/certs/

(1) Generate the signing config template

[root@ceshi-132 certs]# cat ca-config.json 
{
    "signing": {
        "default": {
            "expiry": "175200h" #过期时间
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth" #表示客户端可以用该证书对服务端验证
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth" #表示服务端可以用该证书对服务端验证
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

(2) Create the CA certificate signing request file

[root@ceshi-132 certs]# cat ca-csr.json 
{
    "CN": "k8s-ca", 			# Common Name; kube-apiserver takes it as the requesting user name, and browsers use this field to validate the cert
    "hosts": [
    ],
    "key": {
        "algo": "rsa", 			# key algorithm
        "size": 2048
    },
    "names": [
        {
            "C": "CN", 			# country
            "ST": "shanghai",	# state/province: Shanghai
            "L": "shanghai",	# city: Shanghai
            "O": "od", 			# organization
            "OU": "ops"			# organizational unit
        }
    ],
    "ca": {
        "expiry": "175200h"		# validity
    }
}

(3) Generate the CA certificate and key

[root@ceshi-132 certs]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@ceshi-132 certs]# ll
-rw-r--r-- 1 root root  836 Jul 13 00:57 ca-config.json
-rw-r--r-- 1 root root  993 Jul 13 17:04 ca.csr
-rw-r--r-- 1 root root  327 Jul 13 01:09 ca-csr.json
-rw------- 1 root root 1675 Jul 13 17:04 ca-key.pem
-rw-r--r-- 1 root root 1342 Jul 13 17:04 ca.pem
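If you want to inspect what was just issued (subject, validity, usages), cfssl-certinfo prints the certificate fields as JSON:

[root@ceshi-132 certs]# cfssl-certinfo -cert ca.pem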

(4) Create the etcd certificate signing request file

[root@ceshi-132 certs]# cat etcd-peer-csr.json 
{
    "CN": "k8s-etcd",
    "hosts": [
	"192.168.108.128",
	"192.168.108.129",
	"192.168.108.130",
	"192.168.108.131"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

(5) Generate the etcd certificate and key

[root@ceshi-132 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssljson -bare etcd-peer
[root@ceshi-132 certs]# ll ./etcd-peer*
-rw-r--r-- 1 root root 1062 Jul 13 17:23 ./etcd-peer.csr
-rw-r--r-- 1 root root  359 Jul 13 17:11 ./etcd-peer-csr.json
-rw------- 1 root root 1679 Jul 13 17:23 ./etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Jul 13 17:23 ./etcd-peer.pem

Install etcd


  • Install on nodes 129, 130, 131

etcd download URL:

[root@ceshi-129 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
[root@ceshi-129 ~]# tar -xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt
[root@ceshi-129 opt]# cd /opt/
[root@ceshi-129 opt]# mv etcd-v3.1.20-linux-amd64 etcd-v3.1.20
[root@ceshi-129 opt]# ln -s /opt/etcd-v3.1.20/ /opt/etcd

(1) Copy the certificates and private key

[root@ceshi-129 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@ceshi-129 opt]# scp root@ceshi-132:/opt/certs/ca.pem /opt/etcd/certs/
[root@ceshi-129 opt]# scp root@ceshi-132:/opt/certs/etcd-peer.pem /opt/etcd/certs/
[root@ceshi-129 opt]# scp root@ceshi-132:/opt/certs/etcd-peer-key.pem /opt/etcd/certs/
[root@ceshi-129 opt]# useradd etcd -s /sbin/nologin -M
[root@ceshi-129 opt]# chown -R etcd.etcd /opt/etcd/certs

Copy ca.pem, etcd-peer.pem and etcd-peer-key.pem from the CA host into /opt/etcd/certs/; the private key must stay mode 600.
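If the key lost its restrictive mode during the copy, tighten it by hand:

[root@ceshi-129 opt]# chmod 600 /opt/etcd/certs/etcd-peer-key.pem
[root@ceshi-129 opt]# ls -l /opt/etcd/certs/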

(2) Create the etcd startup script

[root@ceshi-129 etcd]# cat etcd-server-startup.sh
#!/usr/bin/env bash
./etcd --name etcd-server-7-12 \
       --data-dir /data/etcd/etcd-server \	# data directory
       --listen-peer-urls https://192.168.108.129:2380 \	# peer traffic inside the etcd cluster (2380)
       --listen-client-urls https://192.168.108.129:2379,http://127.0.0.1:2379 \ # client traffic (2379)
       --quota-backend-bytes 8000000000 \	# backend storage quota
       --initial-advertise-peer-urls https://192.168.108.129:2380 \	
       --advertise-client-urls https://192.168.108.129:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-7-12=https://192.168.108.129:2380,etcd-server-7-21=https://192.168.108.130:2380,etcd-server-7-22=https://192.168.108.131:2380 \
       --ca-file ./certs/ca.pem \	# CA certificate
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \	# require client certificates
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \	# CA used to verify peers
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
*The trailing # annotations are for reading only; remove them in the real script and keep exactly one space before each line-continuation backslash, otherwise startup fails.*
[root@ceshi-129 etcd]# chmod +x etcd-server-startup.sh 
[root@ceshi-129 etcd]# chown -R etcd.etcd /opt/etcd-v3.1.20/
[root@ceshi-129 etcd]# chown -R etcd.etcd /data/etcd/
[root@ceshi-129 etcd]# chown -R etcd.etcd /data/logs/etcd-server/ 

(3) Install supervisor to manage the background process


supervisorctl status          # show the state of all managed processes
supervisorctl stop <name>     # stop a program
supervisorctl start <name>    # start a program
supervisorctl restart <name>  # restart a program
supervisorctl update          # load the new config after editing the ini files
supervisorctl reload          # restart every program defined in the config


[root@ceshi-129 etcd]# yum install supervisor -y
[root@ceshi-129 etcd]# systemctl start supervisord
[root@ceshi-129 etcd]# systemctl enable supervisord
[root@ceshi-129 supervisord.d]# cat /etc/supervisord.d/etcd-server.ini
[program:etcd-server]  ; program name shown by supervisorctl (like a my.cnf section; several are allowed)
command=/opt/etcd/etcd-server-startup.sh
numprocs=1             ; number of process copies to start (def 1)
directory=/opt/etcd    ; directory to cd into before exec (def no cwd)
autostart=true         ; start at supervisord start (default: true)
autorestart=true       ; restart on unexpected exit (default: true)
startsecs=30           ; seconds the program must stay up to count as started (def. 1)
startretries=3         ; max number of serial start failures (default 3)
exitcodes=0,2          ; 'expected' exit codes (default 0,2)
stopsignal=QUIT        ; signal used to stop the process (default TERM)
stopwaitsecs=10        ; max seconds to wait before SIGKILL (default 10)
user=etcd              ; run as this user
redirect_stderr=true   ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB  ; max log file size before rotation (default 50MB)
stdout_logfile_backups=4      ; number of rotated log files to keep (default 10)
stdout_capture_maxbytes=1MB   ; capture-mode pipe size (default 0)
; if the child process spawns children of its own, add these to avoid orphan processes
;killasgroup=true
;stopasgroup=true
[root@ceshi-129 supervisord.d]# supervisorctl update
[root@ceshi-129 supervisord.d]# supervisorctl status
etcd-server                      STARTING
[root@ceshi-129 supervisord.d]# supervisorctl status
etcd-server                      RUNNING   pid 2264, uptime 0:33:31
[root@ceshi-129 supervisord.d]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1223/master         
tcp        0      0 192.168.108.129:2379    0.0.0.0:*               LISTEN      2265/./etcd         
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      2265/./etcd         
tcp        0      0 192.168.108.129:2380    0.0.0.0:*               LISTEN      2265/./etcd         
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      986/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1223/master         
tcp6       0      0 :::22                   :::*                    LISTEN      986/sshd        
[root@ceshi-129 etcd]# supervisorctl status
etcd-server                      RUNNING   pid 2264, uptime 1 day, 1:02:36
[root@ceshi-130 etcd]# supervisorctl status
etcd-server                      RUNNING   pid 2764, uptime 0:01:17
[root@ceshi-131 etcd]# supervisorctl status
etcd-server                      RUNNING   pid 2405, uptime 0:01:17

Check the cluster status; node 129 is the leader
[root@ceshi-129 etcd]# ./etcdctl member list 
1bd0c15cd2b58ad4: name=etcd-server-7-22 peerURLs=https://192.168.108.131:2380 clientURLs=http://127.0.0.1:2379,https://192.168.108.131:2379 isLeader=false
d68de1494b70edda: name=etcd-server-7-12 peerURLs=https://192.168.108.129:2380 clientURLs=http://127.0.0.1:2379,https://192.168.108.129:2379 isLeader=true
e18afb5e95efe1ea: name=etcd-server-7-21 peerURLs=https://192.168.108.130:2380 clientURLs=http://127.0.0.1:2379,https://192.168.108.130:2379 isLeader=false
  • Repeat the steps above on the other two nodes, changing only the etcd-server-7-* name and the IP addresses
  • --initial-cluster needs no changes; this flag lists every member of the cluster
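Besides member list, the v2 etcdctl shipped with etcd 3.1 can also report per-member health; with http://127.0.0.1:2379 kept in the client URLs no extra flags are needed:

[root@ceshi-129 etcd]# ./etcdctl cluster-health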

Install kube-apiserver


  • Install on nodes 130, 131

Download URL:

[root@ceshi-130 ~]# wget https://dl.k8s.io/v1.15.10/kubernetes-server-linux-amd64.tar.gz
[root@ceshi-130 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz -C /opt/
[root@ceshi-130 opt]# mv kubernetes/ kubernetes-v1.15.10
[root@ceshi-130 opt]# ln -s /opt/kubernetes-v1.15.10/ /opt/kubernetes
[root@ceshi-130 opt]# cd kubernetes
Remove the source tarball
[root@ceshi-130 kubernetes]# rm -fr kubernetes-src.tar.gz
[root@ceshi-130 kubernetes]# cd server/bin/
Remove the bundled docker image tarballs
[root@ceshi-130 bin]# rm -fr ./*.tar
[root@ceshi-130 bin]# rm -fr ./*_tag
[root@ceshi-130 bin]# ls
apiextensions-apiserver  cloud-controller-manager  hyperkube  kubeadm  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler  mounter

(1) Issue the client certificate (api-server{client}-->etcd{server})

Create the certificate signing request (CSR) JSON file (client)


  • Node 132

[root@ceshi-132 certs]# cat /opt/certs/client-csr.json 
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "shanghai",
            "L": "shanghai",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@ceshi-132 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssljson -bare client

Create the certificate signing request (CSR) JSON file (server)


  • Node 132

[root@ceshi-132 certs]# cat api-server-csr.json 
{
    "CN": "k8s-apiserver",
    "hosts": [
      "127.0.0.1",
      "192.168.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local",
      "192.168.108.128",
      "192.168.108.129",
      "192.168.108.130",
      "192.168.108.131",
      "192.168.108.132",
      "192.168.108.133"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "beijing",
            "ST": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@ceshi-132 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server api-server-csr.json | cfssljson -bare apiserver
[root@ceshi-132 certs]# ll api*
-rw-r--r-- 1 root root 1281 Jul 17 00:29 apiserver.csr
-rw-r--r-- 1 root root  640 Jul 17 00:29 api-server-csr.json
-rw------- 1 root root 1679 Jul 17 00:29 apiserver-key.pem
-rw-r--r-- 1 root root 1631 Jul 17 00:29 apiserver.pem

(2) Copy the certificate files (private keys mode 600)


  • Nodes 130, 131

[root@ceshi-130 bin]# mkdir cert
[root@ceshi-130 bin]# cd cert/
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/apiserver-key.pem .
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/apiserver.pem .
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/ca.pem .
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/ca-key.pem .
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/client.pem .
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/client-key.pem .
[root@ceshi-130 cert]# ll
-rw------- 1 root root 1679 Jul 19 10:34 apiserver-key.pem
-rw-r--r-- 1 root root 1631 Jul 19 10:34 apiserver.pem
-rw------- 1 root root 1675 Jul 19 10:34 ca-key.pem
-rw-r--r-- 1 root root 1342 Jul 19 10:34 ca.pem
-rw------- 1 root root 1679 Jul 19 10:35 client-key.pem
-rw-r--r-- 1 root root 1363 Jul 19 10:35 client.pem
[root@ceshi-130 bin]# mkdir conf
Configure audit logging for kube-apiserver
[root@ceshi-130 conf]# vi audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

(3) Create the kube-apiserver startup script

[root@ceshi-130 bin]# vi kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \ # number of apiserver instances
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \ # audit log path
  --audit-policy-file ./conf/audit.yaml \	# audit policy file
  --authorization-mode RBAC \	# authorization mode: role-based access control
  --client-ca-file ./cert/ca.pem \	# CA certificate
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://192.168.108.129:2379,https://192.168.108.130:2379,https://192.168.108.131:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2 # log verbosity
(As with the etcd script, strip the inline # annotations before running; text after a continuation backslash breaks the command.)
[root@ceshi-130 bin]# chmod +x kube-apiserver.sh
[root@ceshi-130 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver/audit-log
Register it with supervisor
[root@ceshi-130 supervisord.d]# vi /etc/supervisord.d/api-server.ini
On the other node only the [program:...] label needs to differ, e.g. kube-apiserver-7-21 vs kube-apiserver-7-22

[program:kube-apiserver-7-22]
command=/opt/kubernetes/server/bin/kube-apiserver.sh            ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                            ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log        ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)

[root@ceshi-131 supervisord.d]# supervisorctl update
kube-apiserver-7-22: added process group
[root@ceshi-131 bin]# supervisorctl status
etcd-server                      RUNNING   pid 2405, uptime 2 days, 19:47:56
kube-apiserver-7-22              RUNNING   pid 3545, uptime 0:03:32
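As a quick sanity check: in 1.15 kube-apiserver still serves the insecure local port 8080 by default (the same port kube-controller-manager and kube-scheduler talk to below), so a plain curl on the node should answer "ok":

[root@ceshi-131 bin]# curl http://127.0.0.1:8080/healthz
ok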

Deploy a layer-4 reverse proxy


  • Install on nodes 128, 129

(1) Install nginx

[root@ceshi-128 ~]# yum install gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel -y
[root@ceshi-128 ~]# tar -xf tengine-2.3.3.tar.gz -C /usr/local/
[root@ceshi-128 ~]# cd /usr/local/tengine-2.3.3/
[root@ceshi-128 tengine-2.3.3]# ./configure --with-stream
[root@ceshi-128 tengine-2.3.3]# make && make install
[root@ceshi-128 conf]# cat /usr/local/nginx/conf/nginx.conf
Append the stream block at the end of the file; it must not go inside the http block, because http is layer 7
stream {  # layer-4 reverse proxy
	upstream kube-apiserver { # backend apiserver addresses
	server 192.168.108.130:6443	max_fails=3 fail_timeout=30s;
	server 192.168.108.131:6443	max_fails=3 fail_timeout=30s;
	}
	server {
		listen 7443;	# local listen port
		proxy_connect_timeout 2s;
		proxy_timeout 900s;
		proxy_pass kube-apiserver;

	}

}
[root@ceshi-128 conf]# ../sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
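Since this nginx was compiled from source there is no systemd unit yet, so the systemctl commands below only work after one is created; a minimal sketch, assuming the default --prefix=/usr/local/nginx paths:

[root@ceshi-128 ~]# cat /etc/systemd/system/nginx.service
[Unit]
Description=nginx (layer-4 proxy for kube-apiserver)
After=network.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target

[root@ceshi-128 ~]# systemctl daemon-reload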
[root@ceshi-128 logs]# systemctl enable nginx
[root@ceshi-128 logs]# systemctl status nginx

Deploy keepalived for an active-active kube-apiserver


  • Install on nodes 128, 129

(1) Install with yum

[root@ceshi-128 ~]# yum install keepalived -y
[root@ceshi-128 ~]# cat /etc/keepalived/check_port.sh 
#!/bin/bash
# if the given port is no longer listening, stop keepalived so the VIP fails over
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
	SUM=$(ss -tnl | grep $CHK_PORT | wc -l)
	if [ $SUM -eq 0 ];then	
		systemctl stop keepalived
	fi
else
	echo "check port cannot be empty"
fi
[root@ceshi-128 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived


global_defs {
    router_id lb02 # identifier for this node
}

vrrp_script check {
    script "/etc/keepalived/check_port.sh 7443"
    interval  5

}

vrrp_instance VI_1 {
    state MASTER    # this node's role is master
    interface ens33  # interface the VIP binds to
    virtual_router_id 40    # master and backup must share the same virtual router id
    priority 150            # the higher priority becomes master
    advert_int 3            # advertisement (heartbeat) interval
	nopreempt				# non-preemptive mode
    authentication {
        auth_type PASS      # authentication type
        auth_pass 1111      # password
    }
    
    virtual_ipaddress {
        192.168.108.133            # virtual IP
    }
    
    track_script  {
        check
    }
}
[root@ceshi-128 ~]# systemctl enable keepalived
[root@ceshi-128 ~]# systemctl start keepalived
[root@ceshi-128 ~]# ip a | grep 192.168.108.*
    inet 192.168.108.128/24 brd 192.168.108.255 scope global noprefixroute ens33
    inet 192.168.108.133/32 scope global ens33
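The configuration above is the MASTER side (128). On 129 only the role-specific values change; a sketch of the lines that differ, assuming the rest of the file is identical (the router_id value here is just an example):

global_defs {
    router_id lb01          # a different identifier
}
vrrp_instance VI_1 {
    state BACKUP            # backup role
    priority 100            # lower than the master's 150
    ...
}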

Deploy kube-controller-manager and kube-scheduler


  • Install on the same nodes as kube-apiserver (130, 131)

(1) kube-controller-manager startup script

[root@ceshi-130 bin]# vi /opt/kubernetes/server/bin/kube-controller-manager.sh
#!/bin/bash
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir  /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./cert/ca.pem \
--v 2
[root@ceshi-130 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager

(2) Create the supervisor config

[root@ceshi-130 ~]# vi /etc/supervisord.d/kube-controller-manager.ini
# on the other node change the name, e.g. kube-controller-manager-7-22
[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; retstart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                                       ; emit events on stdout writes (default false)
[root@ceshi-130 ~]# supervisorctl update
kube-controller-manager-7-21: added process group

(3) kube-scheduler startup script

[root@ceshi-130 bin]# vi /opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/bash
./kube-scheduler \
--leader-elect \
--log-dir  /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \ # local traffic to the apiserver's insecure port needs no certificates; cross-host access would
--v 2
[root@ceshi-130 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler

(4) Create the supervisor config

[root@ceshi-130 ~]# vi /etc/supervisord.d/kube-scheduler.ini
# on the other node change the name, e.g. kube-scheduler-7-22
[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                               ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                     ; directory to cwd to before exec (def no cwd)
autostart=true                                                           ; start at supervisord start (default: true)
autorestart=true                                                         ; retstart at unexpected quit (default: true)
startsecs=30                                                             ; number of secs prog must stay running (def. 1)
startretries=3                                                           ; max # of serial start failures (default 3)
exitcodes=0,2                                                            ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                          ; signal used to kill process (default TERM)
stopwaitsecs=10                                                          ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                ; setuid to this UNIX account to run the program
redirect_stderr=true                                                     ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                              ; emit events on stdout writes (default false)
[root@ceshi-130 ~]# supervisorctl update
kube-scheduler-7-21: added process group
[root@ceshi-130 ~]# supervisorctl status
etcd-server                      RUNNING   pid 1640, uptime 0:02:55
kube-apiserver-7-21              RUNNING   pid 1644, uptime 0:02:55
kube-controller-manager-7-21     RUNNING   pid 1631, uptime 0:02:55
kube-scheduler-7-21              RUNNING   pid 1641, uptime 0:02:55
[root@ceshi-130 ~]# ln -s  /opt/kubernetes/server/bin/kubectl  /usr/bin/kubectl
Check the control-plane components
[root@ceshi-130 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
controller-manager   Healthy   ok                   
etcd-2               Healthy   {"health": "true"}

Deploy the kubelet (worker-node service)

(1) Create the certificate request


  • On node 132

[root@ceshi-132 certs]# cat kubelet-csr.json 
{
    "CN": "k8s-kubelet",
    "hosts": [ # must list every IP planned for the cluster
	"127.0.0.1",
	"192.168.108.128",
	"192.168.108.129",
	"192.168.108.130",
	"192.168.108.131",
	"192.168.108.132",
	"192.168.108.133",
	"192.168.108.134",
	"192.168.108.135",
	"192.168.108.136"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "shanghai",
            "L": "shanghai",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@ceshi-132 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssljson -bare kubelet

(2) Copy the certificates

  • On the two api-server nodes (130, 131)
[root@ceshi-130 cert]# pwd
/opt/kubernetes/server/bin/cert
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/kubelet-key.pem .
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/kubelet.pem .

(3) Create the kubeconfig

set-cluster: cluster parameters

[root@ceshi-130 conf]# kubectl config set-cluster myk8s 	\
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.108.133:7443 \ # the VIP in front of the api-servers
--kubeconfig=kubelet.kubeconfig

set-credentials: client authentication parameters

[root@ceshi-130 conf]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
--client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig

set-context: context parameters

[root@ceshi-130 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \ # cluster name from set-cluster
--user=k8s-node \ # user name from set-credentials
--kubeconfig=kubelet.kubeconfig

use-context: set the default context

[root@ceshi-130 conf]# kubectl config use-context myk8s-context \
--kubeconfig=kubelet.kubeconfig
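To check that the generated kubelet.kubeconfig is complete (cluster, user and context all present, certificate data embedded), dump it with the sensitive fields redacted:

[root@ceshi-130 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig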

(4) Bind the node role (RBAC)

[root@ceshi-130 conf]# cat k8s-node.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node # binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole # the built-in cluster role
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User # grant the k8s-node user the node role
  name: k8s-node

[root@ceshi-130 conf]# kubectl create -f k8s-node.yaml 
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
[root@ceshi-130 conf]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   91s

pause base image (required by the kubelet)

  • Node 132
[root@ceshi-132 ~]# docker pull kubernetes/pause
[root@ceshi-132 ~]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@ceshi-132 ~]# docker push harbor.od.com/public/pause:latest

(5) Start the kubelet service

[root@ceshi-130 bin]# pwd
/opt/kubernetes/server/bin
[root@ceshi-130 bin]# cat kubelet.sh 
#!/bin/bash
./kubelet \
--anonymous-auth=false \ # no anonymous access
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \ # do not refuse to start when swap is enabled
--client-ca-file ./cert/ca.pem \ # CA certificate
--tls-cert-file ./cert/kubelet.pem \ # kubelet certificate
--tls-private-key-file ./cert/kubelet-key.pem \ # private key
--address 192.168.108.130 \ # node address
--hostname-override ceshi-130.host.com \ # node name
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ./conf/kubelet.kubeconfig \ # the kubeconfig created above
--log-dir /data/logs/kubernetes/kube-kubelet \ # log directory
--pod-infra-container-image harbor.od.com/public/pause:latest \ # pause image from the private registry
--root-dir /data/kubelet

[root@ceshi-130 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet  /data/kubelet
[root@ceshi-130 bin]# chmod +x kubelet.sh

(6) Create the supervisor config

[root@ceshi-130 bin]# vi /etc/supervisord.d/kubelet.ini
On the other node change the name to [program:kube-kubelet-7-22]
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

[root@ceshi-130 bin]# supervisorctl update
kube-kubelet-7-21: added process group
[root@ceshi-130 bin]# supervisorctl status
etcd-server                      RUNNING   pid 1140, uptime 0:14:21
kube-apiserver-7-21              RUNNING   pid 1142, uptime 0:14:21
kube-controller-manager-7-21     RUNNING   pid 1136, uptime 0:14:21
kube-kubelet-7-21                RUNNING   pid 2795, uptime 0:07:02
kube-scheduler-7-21              RUNNING   pid 1141, uptime 0:14:21
[root@ceshi-130 bin]# kubectl get node
NAME                 STATUS   ROLES    AGE     VERSION
ceshi-130.host.com   Ready    <none>   8m31s   v1.15.10
ceshi-131.host.com   Ready    <none>   2m14s   v1.15.10
[root@ceshi-130 bin]# kubectl label node ceshi-130.host.com node-role.kubernetes.io/master=
[root@ceshi-130 bin]# kubectl label node ceshi-130.host.com node-role.kubernetes.io/node=
[root@ceshi-130 bin]# kubectl get nodes
NAME                 STATUS   ROLES         AGE     VERSION
ceshi-130.host.com   Ready    master,node   12m     v1.15.10
ceshi-131.host.com   Ready    <none>        6m42s   v1.15.10

Deploy kube-proxy


  • Deploy on nodes 130, 131

(1) Issue the certificate

  • Node 132
[root@ceshi-132 certs]# vi kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
[root@ceshi-132 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssljson -bare kube-proxy-client
[root@ceshi-132 certs]# ll kube-*
-rw-r--r-- 1 root root 1009 Jul 20 15:58 kube-proxy-client.csr
-rw------- 1 root root 1679 Jul 20 15:58 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1379 Jul 20 15:58 kube-proxy-client.pem
-rw-r--r-- 1 root root  215 Jul 20 15:55 kube-proxy-csr.json

(2) Copy the certificates

  • Nodes 130, 131
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/kube-proxy-client-key.pem .
[root@ceshi-130 cert]# scp root@ceshi-132:/opt/certs/kube-proxy-client.pem .

(3) Create the kubeconfig

set-cluster: cluster parameters

[root@ceshi-130 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.108.133:7443 \ # the api-server VIP
  --kubeconfig=kube-proxy.kubeconfig

set-credentials: client authentication parameters

[root@ceshi-130 conf]# kubectl config set-credentials k8s-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

set-context: context parameters

[root@ceshi-130 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-proxy \ # must match the name given to set-credentials above
  --kubeconfig=kube-proxy.kubeconfig

use-context: set the default context

[root@ceshi-130 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

(4) Load the ipvs kernel modules

[root@ceshi-130 ~]# vi ipvs.sh
#!/bin/bash
# load every ip_vs* module shipped with the running kernel
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i    # the module exists, load it
  fi
done
[root@ceshi-130 ~]# chmod +x ipvs.sh
[root@ceshi-130 ~]# ./ipvs.sh 
[root@ceshi-130 ~]# lsmod | grep ip_vs


ip_vs_wrr  		weighted round robin
ip_vs_wlc 		weighted least connections
ip_vs_sh		source-address hashing
ip_vs_dh		destination-address hashing
ip_vs_sed		shortest expected delay
ip_vs_nq		never queue
ip_vs_rr		round robin
ip_vs_lc		least connections

(5) Create the startup script

[root@ceshi-130 bin]# vi kube-proxy.sh
#!/bin/bash
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override 192.168.108.130 \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
[root@ceshi-130 bin]# chmod +x kube-proxy.sh

(6) Create the supervisor config

[root@ceshi-130 bin]# vi /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-proxy/kube-proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@ceshi-130 bin]# mkdir -p /data/logs/kubernetes/kube-proxy/
[root@ceshi-130 bin]# supervisorctl update
[root@ceshi-130 bin]# supervisorctl status
etcd-server                      RUNNING   pid 1140, uptime 1:55:18
kube-apiserver-7-21              RUNNING   pid 1142, uptime 1:55:18
kube-controller-manager-7-21     RUNNING   pid 1136, uptime 1:55:18
kube-kubelet-7-21                RUNNING   pid 2795, uptime 1:47:59
kube-proxy-7-21                  RUNNING   pid 21521, uptime 0:14:38
kube-scheduler-7-21              RUNNING   pid 1141, uptime 1:55:18

Error when starting kube-proxy:

Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope

This usually means kube-proxy fell back to anonymous auth because the user in its kubeconfig does not match the credentials entry (see the set-context note above).

Workaround 1: grant the anonymous user cluster-admin (very permissive)

kubectl create clusterrolebinding anonymous-cluster-myk8s --clusterrole=cluster-admin --user=system:anonymous
then restart kube-proxy

[root@ceshi-130 bin]# yum install ipvsadm -y
[root@ceshi-130 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.108.130:6443         Masq    1      0          0         
  -> 192.168.108.131:6443         Masq    1      0          0   

(7) Verify cluster availability

[root@ceshi-130 ~]# vi nginx-ceshi.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 33333

[root@ceshi-130 ~]# kubectl create -f nginx-ceshi.yaml 
daemonset.extensions/nginx-ds created
[root@ceshi-130 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-25fff   1/1     Running   0          2m7s
nginx-ds-fcj25   1/1     Running   0          2m7s
[root@ceshi-130 ~]# kubectl get  pods -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
nginx-ceshi-g6f7f   1/1     Running   1          14h   172.7.200.3   ceshi-131.host.com   <none>           <none>
nginx-ceshi-pdp5q   1/1     Running   1          14h   172.7.200.2   ceshi-130.host.com   <none>           <none>
nginx-ds-25fff      1/1     Running   1          15h   172.7.200.1   ceshi-131.host.com   <none>           <none>
nginx-ds-fcj25      1/1     Running   1          15h   172.7.200.3   ceshi-130.host.com   <none>           <none>
[root@ceshi-130 ~]# curl 172.7.200.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
