kubernetes-----Deploying a multi-master binary cluster

Table of contents

1. Analysis of the multi-master binary cluster

2. Analysis of the lab environment

3. Deployment steps

Set up a single-master k8s cluster

Set up the master2 node

Deploy the load balancers

Point the nodes at the VIP and create a pod

Set up the k8s Dashboard


1. Analysis of the multi-master binary cluster

[Figure 1: multi-master cluster architecture]

  • Unlike the single-master binary cluster, a multi-master cluster makes the master role highly available: if master1 goes down, the load balancer moves the VIP to master2, which keeps the masters reliable.
  • The key point of a multi-node setup is a single central address. When building the single-master cluster, the VIP address (192.168.43.100 in this lab) was already written into the k8s-cert.sh script. The masters' apiservers sit behind this VIP; when a new node joins, it does not contact a master directly but sends its apiserver request to the VIP, which then forwards it to one of the masters. The master that handles the request issues the certificate to that node.
  • Putting a load balancer in front relieves the request pressure the nodes put on the masters and reduces master resource usage.
     

2. Analysis of the lab environment

Role        IP address           OS & resources        Components
master1     192.168.43.101/24    CentOS 7.4 (2C 2G)    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master2     192.168.43.104/24    CentOS 7.4 (2C 2G)    kube-apiserver, kube-controller-manager, kube-scheduler
node1       192.168.43.102/24    CentOS 7.4 (2C 2G)    kubelet, kube-proxy, docker, flannel, etcd
node2       192.168.43.103/24    CentOS 7.4 (2C 2G)    kubelet, kube-proxy, docker, flannel, etcd
nginx_lbm   192.168.43.105/24    CentOS 7.4 (2C 2G)    nginx, keepalived
nginx_lbb   192.168.43.106/24    CentOS 7.4 (2C 2G)    nginx, keepalived
VIP         192.168.43.100/24    -                     -
  • This lab builds on the single-master deployment and adds a master2.
  • nginx provides the load balancing, and keepalived makes the load balancers themselves highly available.

Note: since version 1.9, nginx supports layer-4 forwarding (load balancing) through the added stream module.

  • keepalived provides the virtual IP address in front of the masters, which the nodes use to connect to the apiserver.

3. Deployment steps

Set up a single-master k8s cluster

  • See the earlier single-master cluster deployment.

Set up the master2 node

Operations on master1

  • Copy the relevant files and scripts
## Recursively copy everything under /opt/kubernetes and /opt/etcd to master2
[root@master ~]# scp -r /opt/kubernetes/ root@192.168.43.104:/opt/
The authenticity of host '192.168.43.104 (192.168.43.104)' can't be established.
ECDSA key fingerprint is SHA256:AJdR3BBN9kCSEk3AVfaZuyrxhNMoDnzGMOMWlP1gUaQ.
ECDSA key fingerprint is MD5:d4:ab:7b:82:c3:99:b8:5d:61:f2:dc:af:06:38:e7:6c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.43.104' (ECDSA) to the list of known hosts.
root@192.168.43.104's password: 
token.csv                                                 100%   84     5.2KB/s   00:00    
kube-apiserver                                            100%  934   353.2KB/s   00:00    
kube-scheduler                                            100%   94    41.2KB/s   00:00    
kube-controller-manager                                   100%  483   231.5KB/s   00:00    
kube-apiserver                                            100%  184MB  19.4MB/s   00:09    
kubectl                                                   100%   55MB  24.4MB/s   00:02    
kube-controller-manager                                   100%  155MB  26.7MB/s   00:05    
kube-scheduler                                            100%   55MB  31.1MB/s   00:01    
ca-key.pem                                                100% 1679   126.0KB/s   00:00    
ca.pem                                                    100% 1359   514.8KB/s   00:00    
server-key.pem                                            100% 1675   501.4KB/s   00:00    
server.pem                                                100% 1643   649.4KB/s   00:00    

## master2 needs the etcd certificates, otherwise the apiserver cannot start
[root@master ~]# scp -r /opt/etcd/ root@192.168.43.104:/opt/
root@192.168.43.104's password: 
etcd                                                      100%  516    64.2KB/s   00:00    
etcd                                                      100%   18MB  25.7MB/s   00:00    
etcdctl                                                   100%   15MB  25.9MB/s   00:00    
ca-key.pem                                                100% 1675   118.8KB/s   00:00    
ca.pem                                                    100% 1265   603.2KB/s   00:00    
server-key.pem                                            100% 1675   675.3KB/s   00:00    
server.pem                                                100% 1338   251.5KB/s   00:00    
[root@master ~]# 

## Copy the systemd unit files to master2
[root@master ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.43.104:/usr/lib/systemd/system/
root@192.168.43.104's password: 
kube-apiserver.service                                    100%  282    30.3KB/s   00:00    
kube-controller-manager.service                           100%  317    45.9KB/s   00:00    
kube-scheduler.service                                    100%  281   151.7KB/s   00:00    

Operations on master2

  • Basic environment setup
## Change the hostname
[root@localhost ~]# hostnamectl set-hostname master2
[root@localhost ~]# su

## Permanently disable the firewall and SELinux
[root@master2 ~]# systemctl stop firewalld
[root@master2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master2 ~]# setenforce 0
[root@master2 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 

## Disable NetworkManager so the IP address does not change
systemctl stop NetworkManager
systemctl disable NetworkManager 
  • Change the IP addresses in the kube-apiserver config
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vi kube-apiserver 


KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.43.101:2379,https://192.168.43.102:2379,https://192.168.43.103:2379 \
## Change the bind address to master2's local address
--bind-address=192.168.43.104 \
--secure-port=6443 \
## Change the externally advertised address
--advertise-address=192.168.43.104 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

  • Start the services and verify
## Start the apiserver service
[root@master2 cfg]# systemctl start kube-apiserver.service 
[root@master2 cfg]# systemctl enable kube-apiserver.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

## Start the controller-manager service
[root@master2 cfg]# systemctl start kube-controller-manager.service 
[root@master2 cfg]# systemctl enable kube-controller-manager.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

## Start the scheduler service
[root@master2 cfg]# systemctl start kube-scheduler.service 
[root@master2 cfg]# systemctl enable kube-scheduler.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

## Add the k8s binaries to the global PATH
[root@master2 cfg]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
[root@master2 cfg]# source /etc/profile

## List the cluster nodes; if they show up, master2 was added successfully
[root@master2 cfg]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.43.102   Ready    <none>   26h   v1.12.3
192.168.43.103   Ready    <none>   26h   v1.12.3
[root@master2 cfg]# 

Note: a master can only be added this way if its address (and the VIP) was already listed in server-csr.json when the single-master cluster was deployed; otherwise the certificate does not cover it and the new master cannot join, as sketched below.
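For reference, a sketch of the hosts list that server-csr.json needs to contain for this lab: 192.168.43.101 and 192.168.43.104 are the masters, 192.168.43.100 is the VIP, and 192.168.43.105/106 are the load balancers. The service IP and hostname entries below follow the usual template and are assumptions here; the real file comes from your own k8s-cert.sh.

"hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.43.101",
    "192.168.43.104",
    "192.168.43.100",
    "192.168.43.105",
    "192.168.43.106",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
],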

Deploy the load balancers

The following is done on both nginx_lbm and nginx_lbb; nginx_lbm is used as the example.

  • Prepare the keepalived configuration template
## Copy keepalived.conf to both nginx_lbm and nginx_lbb
[root@nginx_lbm ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  keepalived.conf  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@nginx_lbm ~]# cat keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # notification recipients 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {        ## path of the script that checks the nginx service
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface eth0
    virtual_router_id 51 # VRRP router ID; each instance must be unique 
    priority 100    # priority; set this to 90 on the backup server 
    advert_int 1    # VRRP advertisement (heartbeat) interval, 1 second by default 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress {         ## the VIP address
        10.0.0.188/24 
    } 
    track_script {    
        check_nginx
    } 
}

[root@nginx_lbm ~]# 

  • Disable security features
systemctl stop firewalld.service
setenforce 0
systemctl stop NetworkManager
## these can also be disabled permanently, as shown below
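To make these settings permanent, the same commands used on master2 earlier apply here as well:

systemctl disable firewalld
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
systemctl disable NetworkManager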
  • Configure the nginx yum repository and install nginx
[root@nginx_lbm ~]# cat /etc/yum.repos.d/nginx.repo 
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

## Refresh the yum repository metadata
[root@nginx_lbm ~]# yum list
## Install nginx
[root@nginx_lbm ~]# yum install nginx -y
  • Edit the nginx configuration, add the load-balancing block, and start the service
## nginx 1.9 and later supports layer-4 load balancing via the stream module
## earlier versions only supported layer 7 (the http upstream)
## Configure load balancing
[root@nginx_lbm ~]# vi /etc/nginx/nginx.conf 
## Insert the following below line 12, between the events block and the http block
     12 stream {
     13 
     14    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
     15     access_log  /var/log/nginx/k8s-access.log  main;        ## log file location
     16 
     17     upstream k8s-apiserver {
     18 # master1's IP address and port
     19         server 192.168.43.101:6443;
     20 # master2's IP address and port
     21         server 192.168.43.104:6443;
     22     }
     23     server {
     24                 listen 6443;
     25                 proxy_pass k8s-apiserver;
     26     }
     27     }

## Check that the configuration file is valid
[root@nginx_lbm ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

## Edit the default page so the master and backup can be told apart
[root@nginx_lbm ~]# cd /usr/share/nginx/html/
[root@nginx_lbm html]# ls
50x.html  index.html
[root@nginx_lbm html]# vi index.html

master

or

backup

## Start the nginx service
[root@nginx_lbm html]# systemctl start nginx

## Install the keepalived service
[root@nginx_lbm html]# yum install keepalived -y

## Overwrite the keepalived configuration with the prepared template
[root@nginx_lbm ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes

Note: everything above is done on both nginx_lbm and nginx_lbb.

Operations on nginx_lbm

  • Configure the keepalived service
[root@nginx_lbm ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # notification recipients 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 
## script that checks whether nginx has stopped, which decides whether keepalived keeps running
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER         ## set to MASTER on nginx_lbm
    interface ens33        ## the network interface name
    virtual_router_id 51     ## VRRP router ID; each instance must be unique
    priority 100        ## 100 on the master, 90 on the backup
    advert_int 1    
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.43.100/24         ## the VIP
    } 
## run the check script defined above
    track_script {
        check_nginx
    } 
}

[root@nginx_lbm ~]# vi /etc/nginx/check_nginx.sh  ## this health-check script has to be created by hand
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
# count of nginx processes
## 2 while nginx is running, 0 once nginx has stopped

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# if the count is 0, stop keepalived

[root@nginx_lbm ~]# chmod +x /etc/nginx/check_nginx.sh        ## make the script executable

## Start the keepalived service
[root@nginx_lbm ~]# systemctl start keepalived.service
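For reference, the process count the script relies on can be checked by hand; assuming the default single worker process, nginx normally contributes two processes (master plus worker) while running and zero once it has stopped:

ps -ef | grep nginx | egrep -cv "grep|$$"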


  • Check the addresses; the VIP is present on nginx_lbm
[root@nginx_lbm ~]# ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ba5a:8436:895c:4285/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff

Operations on nginx_lbb

  • Configure the keepalived service
## Edit the keepalived configuration file
[root@nginx_lbb ~]# vi /etc/keepalived/keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # notification recipients 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP        ## unlike nginx_lbm, the state here is BACKUP
    interface ens33
    virtual_router_id 51        
    priority 90         ## priority 90, lower than nginx_lbm
    advert_int 1    
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.43.100/24 
    } 
    track_script {
        check_nginx
    } 
}

[root@nginx_lbb ~]# vi /etc/nginx/check_nginx.sh 
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")	

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi


[root@nginx_lbb ~]# chmod +x /etc/nginx/check_nginx.sh 


## Start the service
[root@nginx_lbb ~]# systemctl start keepalived.service 
  • Check the VIP

[Figure 2]

Verify VIP failover between the load balancers

  • Stop the nginx service on nginx_lbm
## Kill all nginx processes
[root@nginx_lbm ~]# pkill nginx

## Check the status of nginx and keepalived
[root@nginx_lbm ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since 三 2020-04-29 08:40:27 CST; 6s ago
     Docs: http://nginx.org/en/docs/
  Process: 4085 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=1/FAILURE)
 Main PID: 1939 (code=exited, status=0/SUCCESS)


## the keepalived service was stopped automatically as well
[root@nginx_lbm ~]# systemctl status keepalived.service 
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) Send...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:40:27 nginx_lbm Keepalived[2202]: Stopping
4月 29 08:40:27 nginx_lbm systemd[1]: Stopping LVS and VRRP High Availab....
4月 29 08:40:27 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) sent...
4月 29 08:40:27 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) remo...
4月 29 08:40:28 nginx_lbm systemd[1]: Stopped LVS and VRRP High Availabi....
Hint: Some lines were ellipsized, use -l to show in full.


## Check the addresses: the VIP is gone
[root@nginx_lbm ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ba5a:8436:895c:4285/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
  • Check the VIP on nginx_lbm and nginx_lbb

[Figure 3]

The screenshots above show that the active/standby failover works.

  • Recover the VIP: on nginx_lbm, start nginx first, then keepalived
## Start nginx and keepalived on nginx_lbm
[root@nginx_lbm ~]# systemctl start nginx
[root@nginx_lbm ~]# systemctl start keepalived

## Check the addresses again; the VIP has returned to nginx_lbm
[root@nginx_lbm ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ba5a:8436:895c:4285/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff

Point the nodes at the VIP and create a pod

Modify the node1 and node2 configuration files

  • node2 is shown as the example; every node must be changed (a scripted alternative is sketched after this block)
[root@node2 ~]# cd /opt/kubernetes/cfg/
[root@node2 cfg]# ls
bootstrap.kubeconfig  kubelet.config      kube-proxy.kubeconfig
flanneld              kubelet.kubeconfig
kubelet               kube-proxy
[root@node2 cfg]# vi bootstrap.kubeconfig 
server: https://192.168.43.100:6443
# change the server line to the VIP address
[root@node2 cfg]# vi kubelet.kubeconfig 
server: https://192.168.43.100:6443
# change the server line to the VIP address
[root@node2 cfg]# vi kube-proxy.kubeconfig 
server: https://192.168.43.100:6443
# change the server line to the VIP address

## Quick self-check of the files in the current directory
[root@node2 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.43.100:6443
kubelet.kubeconfig:    server: https://192.168.43.100:6443
kube-proxy.kubeconfig:    server: https://192.168.43.100:6443


## Restart the services
[root@node2 cfg]# systemctl restart kubelet.service 
[root@node2 cfg]# systemctl restart kube-proxy.service 
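Instead of editing each file by hand, the same change can be scripted; a sketch, assuming the server lines previously pointed at master1 (https://192.168.43.101:6443):

cd /opt/kubernetes/cfg
sed -i 's#https://192.168.43.101:6443#https://192.168.43.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
systemctl restart kubelet.service kube-proxy.service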
  • On nginx_lbm, check the nginx log to confirm that the nodes are reaching the VIP
[root@nginx_lbm ~]# cd /var/log/nginx/
[root@nginx_lbm nginx]# ls
access.log  error.log  k8s-access.log
[root@nginx_lbm nginx]# tail -f k8s-access.log 
192.168.43.102 192.168.43.101:6443 - [29/Apr/2020:08:49:41 +0800] 200 1119
192.168.43.102 192.168.43.102:6443, 192.168.43.101:6443 - [29/Apr/2020:08:49:41 +0800] 200 0, 1119
192.168.43.103 192.168.43.102:6443, 192.168.43.101:6443 - [29/Apr/2020:08:50:08 +0800] 200 0, 1120
192.168.43.103 192.168.43.101:6443 - [29/Apr/2020:08:50:08 +0800] 200 1121

With the load balancer in place, node traffic goes through it instead of hitting a master directly, which greatly reduces the load on the masters.

Create a pod on the master and test it

  • Create a pod
[root@master ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
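As the warning notes, the deployment generator of kubectl run is deprecated; an equivalent, non-deprecated way to create the same deployment (not used in this walkthrough) would be:

kubectl create deployment nginx --image=nginx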
  • Check the pod status
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-8qt6q   1/1     Running   0          24m

## Common pod states:
ContainerCreating    the pod is being created
Running              the pod is running
  • Bind the cluster's anonymous user to the cluster-admin role (so that pod logs can be viewed)
[root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
  • Check the pod's network details: the node it runs on and the container's IP address
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
nginx-dbddb74b8-8qt6q   1/1     Running   0          30m   172.17.36.2   192.168.43.102   
  • View the pod's logs
## Access the pod from node1, since the pod was scheduled onto node1
## use the container IP obtained above
[root@node1 ~]# curl 172.17.36.2

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

[root@node1 ~]# 

## View the pod's log on the master
[root@master ~]# kubectl logs nginx-dbddb74b8-8qt6q
172.17.36.1 - - [29/Apr/2020:13:37:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
[root@master ~]# 

Set up the k8s Dashboard

Operations on master1

  • Download the YAML files needed for the web UI from the official dashboard addon repository:
  • https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

[Figure 4]

There are two ways to create pod resources in k8s:

Command: kubectl run [podname] --image=[imageName]

YAML file: kubectl create -f xxx.yaml (a minimal example follows)
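As a minimal illustration of the YAML route (the file name and labels below are made up for this example; it creates roughly the same nginx deployment as the kubectl run command used earlier):

# nginx-demo.yaml (hypothetical example file)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx

kubectl create -f nginx-demo.yaml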

  • Upload the YAML files
## Create a dashboard directory
[root@master ~]# mkdir dashboard
[root@master ~]# cd dashboard/
## Upload the files
[root@master dashboard]# ls
dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
dashboard-controller.yaml  dashboard-secret.yaml  k8s-admin.yaml
  • Create the required resources; they must be created in the following order:
## Run these in /root/dashboard
## Create the dashboard resources
## -f specifies the YAML file

# RBAC: authorize API access and role-based access control
kubectl create -f dashboard-rbac.yaml

# Secrets: encryption and security settings
kubectl create -f dashboard-secret.yaml

# ConfigMap for the web application
kubectl create -f dashboard-configmap.yaml

# The controller
kubectl create -f dashboard-controller.yaml

# The Service that exposes the dashboard for access
kubectl create -f dashboard-service.yaml

## Check resources with
kubectl get [kind] -n [namespace]
  • Check the pod created in the kube-system namespace
## -n specifies the namespace
[root@master dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-65f974f565-bwmlx   1/1     Running   0          47s

## List all pod and service resources in that namespace
## and see how the dashboard is exposed
[root@master dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-bwmlx   1/1     Running   0          82s

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.199   <none>        443:30001/TCP   70s

From the above, the dashboard can be reached at:

https://<node IP>:30001/ (use kubectl get pods -o wide -n kube-system to see which node the dashboard pod is on)

For example: https://192.168.43.102:30001/

However, logging in to the dashboard requires a token, so one is generated below.
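To see which node the dashboard pod landed on (the dashboard runs in the kube-system namespace, so -n kube-system must be specified):

kubectl get pods -n kube-system -o wide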

  • Generate a self-signed certificate
## Script that generates the certificate (the heredoc writes dashboard-csr.json)
[root@master dashboard]# vi dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
   "CN": "Dashboard",
   "hosts": [],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "BeiJing",
           "ST": "BeiJing"
       }
   ]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system


## Run the script
[root@master dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
[root@master dashboard]# ls
dashboard-cert.sh          dashboard.csr       dashboard.pem          dashboard-service.yaml
dashboard-configmap.yaml   dashboard-csr.json  dashboard-rbac.yaml    k8s-admin.yaml
dashboard-controller.yaml  dashboard-key.pem   dashboard-secret.yaml



## Add the certificate arguments to the controller YAML; mind the YAML formatting and use spaces, not tabs
[root@master dashboard]# vi dashboard-controller.yaml 
# append the following below line 47
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem


## Redeploy the controller; the pod may be rescheduled to a different node afterwards, so check its location again
[root@master dashboard]# kubectl apply -f dashboard-controller.yaml 
  • Generate the login token
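The k8s-admin.yaml file itself is not reproduced in this walkthrough; in this kind of setup it typically defines a dashboard-admin ServiceAccount in kube-system and binds it to the cluster-admin ClusterRole, roughly as sketched below (an assumption; verify against your own file):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io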
## Create the service account that provides the token
[root@master dashboard]# kubectl create -f k8s-admin.yaml 

## The secret's kind and namespace come from k8s-admin.yaml (see the sketch above)
## Save the token
[root@master dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-4zpgd        kubernetes.io/service-account-token   3      66s
default-token-pdn6p                kubernetes.io/service-account-token   3      39h
kubernetes-dashboard-certs         Opaque                                11     11m
kubernetes-dashboard-key-holder    Opaque                                2      15m
kubernetes-dashboard-token-4whmf   kubernetes.io/service-account-token   3      15m

## Display the token and copy it
[root@master dashboard]# kubectl describe secret dashboard-admin-token-4zpgd -n kube-system
Name:         dashboard-admin-token-4zpgd
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 36095d9f-89bd-11ea-bb1a-000c29ce5f24

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNHpwZ2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzYwOTVkOWYtODliZC0xMWVhLWJiMWEtMDAwYzI5Y2U1ZjI0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.wRx71hNjdAOuaG8EPEr_yWaAmw_CF-aXwVFk7XeXwW2bzDLRh0RfQV-7nyBbw-wcPVXLbpoWNSYuHFS0vXHWGezk9ssERnErDXjE164H0lR8LkD1NekUQqB8L9jqW9oAZrZ0CkAxUIuijG14BjbAIV5wXmT1aKsK2sZTC0u-IjDcIT2UhjU3LvSL0Fzi4zyEvfl5Yf0Upx6dZ7yNpUd13ziNIP4KJ5DjWesIK-34IG106Kf6y1ehmRdW1Sg0HNvopXhFJPAhp-BkEz_SCmsf89_RDNVBTBSRWCgZdQC78B2VshbJqMRZOIV2IprBFhYKK6AeOY6exCyk1HWQRKFMRw
[root@master dashboard]# 

Log in to the dashboard

[Figure 5]

In production environments, token-based login is generally used.

[Figure 6]

