Kubernetes Study (Part 2): Planning and Deploying etcd and the Master

Planning the cluster and deploying the etcd and master components

  • 1. Production k8s platform planning: multi-cluster HA
  • 2. Test environment planning
  • 3. Three official deployment methods
  • 4. Deploying a single-master cluster
    • 4.1 Cluster layout
    • 4.2 Initializing the servers
    • 4.2 Installing etcd
      • 4.2.1 Encryption concepts
      • 4.2.2 SSL certificates and related concepts
      • 4.3 Issuing certificates for etcd
    • 4.4 Deploying etcd
      • 4.4.1 Installing etcd
      • 4.4.2 The etcd.service file
      • 4.4.3 The etcd.conf file
      • 4.4.4 Distributing the certificates
      • 4.4.5 Starting etcd
    • 4.5 Deploying the master

1. Production k8s platform planning: multi-cluster HA

Master nodes: 3 recommended
etcd nodes: an odd number, typically 3, 5 or 7
Worker nodes: the more the better; scale out according to the workload
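Why odd member counts: etcd commits a write only after a quorum of floor(n/2)+1 members acknowledge it, so a fourth or sixth member adds load without adding fault tolerance. A quick shell illustration (plain arithmetic, nothing etcd-specific):

```shell
# etcd commits a write only after a quorum (n/2 + 1, integer division) of
# members acknowledge it, so fault tolerance is n minus the quorum size.
for n in 3 4 5 6 7; do
    quorum=$(( n / 2 + 1 ))
    tolerance=$(( n - quorum ))
    echo "members=$n quorum=$quorum tolerates=$tolerance failure(s)"
done
# members=4 tolerates only 1 failure, same as members=3: the extra even
# member buys no additional safety.
```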

2. Test environment planning

Test resources are limited, so to make the most of the virtual machines the deployment is laid out as follows:

Role                                IP                               Components
k8s-master01/LoadBalancer(master)   192.168.8.21                     kube-apiserver/kube-controller-manager/kube-scheduler/etcd/Nginx(L4)
k8s-master02/LoadBalancer(slave)    192.168.8.22, 192.168.8.20(VIP)  kube-apiserver/kube-controller-manager/kube-scheduler/Nginx(L4)
k8s-node01                          192.168.8.23                     kubelet/kube-proxy/docker/etcd
k8s-node02                          192.168.8.24                     kubelet/kube-proxy/docker/etcd

3. Three official deployment methods

minikube
- Minikube is a tool that quickly spins up a single-node Kubernetes locally; it is only meant for trying Kubernetes out
- Docs: https://kubernetes.io/docs/setup/minikube/
kubeadm
- kubeadm is also a tool; it provides kubeadm init and kubeadm join to bootstrap a Kubernetes cluster quickly
- Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
Binaries
- Recommended: download the release binaries from the official repository and deploy every component by hand to assemble the cluster
- Download: https://github.com/kubernetes/kubernetes

4. Deploying a single-master cluster

4.1 Cluster layout

Master node
Hostname: k8s-master01
IP: 192.168.8.21/24
Worker node 1
Hostname: k8s-node01
IP: 192.168.8.23/24
Worker node 2
Hostname: k8s-node02
IP: 192.168.8.24/24
k8s version: v1.9.11
Installation method: offline, from binaries
OS: CentOS 7.6

4.2 Initializing the servers

1. Disable the firewall (shown on the master node; repeat on every node)

[root@k8s-master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master ~]# systemctl stop firewalld

2. Disable the swap partition
Comment out the swap line in /etc/fstab

[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# vi /etc/fstab
[root@k8s-master ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Mar 19 17:18:10 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/bel-root    /                       xfs     defaults        0 0
UUID=8ddd9b13-2b6a-4706-969e-e80478adbaf0 /boot                   xfs     defaults        0 0
#/dev/mapper/bel-swap    swap                    swap    defaults        0 0
[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:            972         125         631           7         215         666
Swap:             0           0           0
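The manual edit above can be scripted. The sketch below uses a hypothetical `disable_swap_in_fstab` helper that takes the file path as a parameter, so it can be tried on a copy before touching the real /etc/fstab:

```shell
# Comment out any active swap entry so the change survives reboots.
# disable_swap_in_fstab is a helper invented here; it takes the file path
# so it can be exercised on a copy first.
disable_swap_in_fstab() {
    sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)|#\1|' "$1"
}

# On a real node:
#   cp /etc/fstab /etc/fstab.bak   # keep a backup
#   disable_swap_in_fstab /etc/fstab
#   swapoff -a                     # also disable swap for the running system
```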

3. Set the hostname

[root@k8s-master01 ~]# hostnamectl set-hostname k8s-master01
[root@k8s-master01 ~]#

4. Configure name resolution

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.21 k8s-master01
192.168.8.23 k8s-node01
192.168.8.24 k8s-node02
192.168.8.22 k8s-master02
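Every node needs the same entries, so a small idempotent helper (a sketch; `add_hosts_entries` is a name invented here) avoids duplicates when re-run:

```shell
# Append the cluster's name entries to a hosts file only when missing, so
# the helper can be re-run safely on every node. add_hosts_entries is a
# name invented here; pass /etc/hosts on a real node.
add_hosts_entries() {
    hosts_file=$1
    while read -r entry; do
        grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
    done <<'EOF'
192.168.8.21 k8s-master01
192.168.8.22 k8s-master02
192.168.8.23 k8s-node01
192.168.8.24 k8s-node02
EOF
}
```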

5. Configure time synchronization
master01 serves as the time server

  • Install chrony on master01
[root@k8s-master01 ~]# yum install chrony
base                                                     | 3.6 kB     00:00
extras                                                   | 2.9 kB     00:00
kernel-bek                                               | 2.9 kB     00:00
kernel-lt                                                | 2.9 kB     00:00
kernel-ml                                                | 2.9 kB     00:00
updates                                                  | 3.3 kB     00:00
(1/8): kernel-bek/x86_64/primary_db                        |  13 kB   00:00
(2/8): base/x86_64/group_gz                                | 161 kB   00:00
(3/8): updates/x86_64/updateinfo                           |  31 kB   00:00
(4/8): extras/x86_64/primary_db                            | 187 kB   00:01
(5/8): base/x86_64/primary_db                              | 6.0 MB   00:09
(6/8): kernel-lt/x86_64/primary_db                         |  12 MB   00:15
(7/8): updates/x86_64/primary_db                           |  12 MB   00:22
(8/8): kernel-ml/x86_64/primary_db                         |  28 MB   00:28
Package chrony-3.2-2.el7.x86_64 already installed and latest version
Nothing to do

  • Edit /etc/chrony.conf:
    server 127.127.1.0 iburst # use this host itself as the upstream time source
    allow 192.168.8.0/24 # allow this subnet to sync from us
    local stratum 10
[root@k8s-master01 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 127.127.1.0 iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
allow 192.168.8.0/24

# Serve time even if not synchronized to a time source.
local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

  • Start the chronyd service
[root@k8s-master01 ~]# systemctl start chronyd
[root@k8s-master01 ~]# systemctl enable chronyd
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
[root@k8s-master01 ~]# ss -unl |grep 123
UNCONN     0      0            *:123                      *:*
[root@k8s-master01 ~]#

All other nodes act as time clients:
1. Install chrony
2. Set server to 192.168.8.21 in /etc/chrony.conf
3. Start the service
4. Verify synchronization with chronyc sources
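Client steps 1 through 4 can be sketched as a script; `point_chrony_at` is a name invented here, and it assumes the stock CentOS chrony.conf whose upstream entries look like `server ... iburst`:

```shell
# Re-point a chrony config at the internal time server. point_chrony_at is
# a name invented here; it rewrites every "server ..." line and appends one
# if none existed.
point_chrony_at() {
    conf=$1 server=$2
    sed -i "s|^server .*|server $server iburst|" "$conf"
    grep -q "^server $server iburst" "$conf" || echo "server $server iburst" >> "$conf"
}

# On each client node:
#   point_chrony_at /etc/chrony.conf 192.168.8.21
#   systemctl restart chronyd && chronyc sources
```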

[root@k8s-node01 ~]# yum install chrony
base                                                     | 3.6 kB     00:00
extras                                                   | 2.9 kB     00:00
kernel-bek                                               | 2.9 kB     00:00
kernel-lt                                                | 2.9 kB     00:00
kernel-ml                                                | 2.9 kB     00:00
updates                                                  | 3.3 kB     00:00
(1/8): kernel-bek/x86_64/primary_db                        |  13 kB   00:00
(2/8): base/x86_64/group_gz                                | 161 kB   00:00
(3/8): updates/x86_64/updateinfo                           |  31 kB   00:00
(4/8): extras/x86_64/primary_db                            | 187 kB   00:00
(5/8): base/x86_64/primary_db                              | 6.0 MB   00:07
(6/8): updates/x86_64/primary_db                           |  12 MB   00:19
(7/8): kernel-lt/x86_64/primary_db                         |  12 MB   00:21
(8/8): kernel-ml/x86_64/primary_db                         |  28 MB   00:23
Package chrony-3.2-2.el7.x86_64 already installed and latest version
Nothing to do
[root@k8s-node01 ~]# vi /etc/chrony.conf
[root@k8s-node01 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.8.21 iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
[root@k8s-node01 ~]# systemctl start chronyd
[root@k8s-node01 ~]# systemctl enable chronyd
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
[root@k8s-node01 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* k8s-master01                 10   6    17    15  -2233ns[ +166us] +/- 1366us
[root@k8s-node01 ~]#

6. Disable SELinux
Set SELINUX=disabled in /etc/selinux/config

[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# vi /etc/selinux/config
[root@k8s-master01 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
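The same edit in script form, as a sketch; `disable_selinux` is a name invented here and the file path is a parameter so the change can be rehearsed on a copy:

```shell
# Scripted form of the manual edit above: force SELINUX=disabled in the
# config file. disable_selinux is a helper invented here.
disable_selinux() {
    sed -i 's|^SELINUX=.*|SELINUX=disabled|' "$1"
}

# On a node: setenforce 0; disable_selinux /etc/selinux/config
```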

4.2 Installing etcd

4.2.1 Encryption concepts

  • Symmetric encryption: the same key is used to encrypt and decrypt
  • Asymmetric encryption: a public/private key pair performs encryption and decryption
  • One-way hashing: data can be digested but never decrypted, e.g. md5
[root@k8s-master01 ~]# md5sum /etc/passwd
eec5e3e592b01531a22115271ae65a53  /etc/passwd

4.2.2 SSL certificates and related concepts

1. Where SSL certificates come from:

  • Bought from a third-party authority on the public internet; typically used for services that external users access
  • Self-signed; browsers will flag the site as untrusted, so these are usually confined to internal environments

2. PKI (Public Key Infrastructure). A complete PKI consists of:

  • End entity (the applicant)
  • Registration Authority (RA)
  • Certificate Authority (CA)
  • Certificate Revocation List (CRL)
  • Certificate repository

3. Tools for running your own CA:

  1. openssl
  2. cfssl

4.3 Issuing certificates for etcd

The flow in brief:

  1. Create a certificate authority
  2. Fill in a request form that lists the IPs of the etcd nodes
  3. Ask the certificate authority to sign the certificate

Downloading the cfssl tools
1. Download them to /root/cert

[root@k8s-master01 ~]# mkdir cert
[root@k8s-master01 ~]# cd cert
[root@k8s-master01 cert]# ls
cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
[root@k8s-master01 cert]# cp cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
[root@k8s-master01 cert]# cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master01 cert]# cp cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master01 cert]# ll /usr/local/bin/
total 18808
-rw-r--r--. 1 root root 10376657 Apr 20 18:59 cfssl
-rw-r--r--. 1 root root  6595195 Apr 20 18:58 cfssl-certinfo
-rw-r--r--. 1 root root  2277873 Apr 20 18:59 cfssljson
[root@k8s-master01 cert]# chmod +x /usr/local/bin/cfssl*
[root@k8s-master01 cert]# ll /usr/local/bin/
total 18808
-rwxr-xr-x. 1 root root 10376657 Apr 20 18:59 cfssl
-rwxr-xr-x. 1 root root  6595195 Apr 20 18:58 cfssl-certinfo
-rwxr-xr-x. 1 root root  2277873 Apr 20 18:59 cfssljson

2. Create the certificate authority

  • Create the CA config file
[root@k8s-master01 cert]# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ],
                "expiry": "87600h"
            }
        }
    }
}

  • Create the CA certificate signing request
[root@k8s-master01 cert]# cat ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "SiChuan",
            "L": "ChengDu",
            "O": "k8s",
            "OU": "4Paradigm"
        }
    ]
}

  • Generate the CA certificate and private key
[root@k8s-master01 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/04/20 19:14:09 [INFO] generating a new CA key and certificate from CSR
2020/04/20 19:14:09 [INFO] generate received request
2020/04/20 19:14:09 [INFO] received CSR
2020/04/20 19:14:09 [INFO] generating key: rsa-2048
2020/04/20 19:14:10 [INFO] encoded CSR
2020/04/20 19:14:10 [INFO] signed certificate with serial number 589663934261916779803792102866555957044511047602
[root@k8s-master01 cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

3. Create the etcd certificate signing request

[root@k8s-master01 cert]# cat etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
        "192.168.8.21",
        "192.168.8.23",
        "192.168.8.24"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "SiChuan",
            "L": "ChengDu",
            "O": "k8s",
            "OU": "4Paradigm"
        }
    ]
}
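Before asking the CA to sign, it is worth confirming that every etcd node IP really appears in the hosts list; a certificate missing a node's IP will fail peer TLS verification later. A grep-based sketch (`check_csr_hosts` is a name invented here):

```shell
# Fail early if an etcd node IP is missing from the CSR's "hosts" list.
# check_csr_hosts is a helper invented here for this sanity check.
check_csr_hosts() {
    csr=$1; shift
    for ip in "$@"; do
        grep -q "\"$ip\"" "$csr" || { echo "missing $ip in $csr"; return 1; }
    done
    echo "all etcd node IPs present"
}

# Usage: check_csr_hosts etcd-csr.json 192.168.8.21 192.168.8.23 192.168.8.24
```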

4. Request the etcd certificate from the CA

[root@k8s-master01 cert]# cfssl gencert  -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2020/04/20 19:40:58 [INFO] generate received request
2020/04/20 19:40:58 [INFO] received CSR
2020/04/20 19:40:58 [INFO] generating key: rsa-2048
2020/04/20 19:40:59 [INFO] encoded CSR
2020/04/20 19:40:59 [INFO] signed certificate with serial number 679870118084575764165193307253491539626416002130
2020/04/20 19:40:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 cert]# ls
ca-config.json  ca-csr.json  ca.pem    etcd-csr.json  etcd.pem
ca.csr          ca-key.pem   etcd.csr  etcd-key.pem

4.4 Deploying etcd

etcd is deployed on three VMs: one member each on master01, node01 and node02.

4.4.1 Installing etcd

Install etcd with yum on master01, node01 and node02

[root@k8s-master01 ~]# ps -ef|grep etcd
root      19778  19699  0 22:24 pts/1    00:00:00 grep --color=auto etcd
[root@k8s-node02 ~]# yum install etcd
Resolving Dependencies
--> Running transaction check
---> Package etcd.x86_64 0:3.3.11-2.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package      Arch           Version                       Repository      Size
================================================================================
Installing:
 etcd         x86_64         3.3.11-2.el7.centos           extras          10 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 10 M
Installed size: 45 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/extras/packages/etcd-3.3.11-2.el7.centos.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID f70e5c1d: NOKEY
Public key for etcd-3.3.11-2.el7.centos.x86_64.rpm is not installed
etcd-3.3.11-2.el7.centos.x86_64.rpm                        |  10 MB   00:04
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-BCLinux-7
Importing GPG key 0xF70E5C1D:
 Userid     : "BCLinux-7 "
 Fingerprint: a250 cd47 74c7 f967 c2b7 02d8 36ad 8f05 f70e 5c1d
 Package    : bclinux-release-7-6.1905.el7.bclinux.x86_64 (@anaconda/7.6)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-BCLinux-7
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : etcd-3.3.11-2.el7.centos.x86_64                              1/1
  Verifying  : etcd-3.3.11-2.el7.centos.x86_64                              1/1

Installed:
  etcd.x86_64 0:3.3.11-2.el7.centos

Complete!

4.4.2 The etcd.service file

Configure the unit file on master01, node01 and node02.
It is identical on all three nodes.

[root@k8s-master01 etcd]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd \
        --name=\"${ETCD_NAME}\" \
        --data-dir=\"${ETCD_DATA_DIR}\" \
        --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" \
        --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" \
        --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" \
        --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" \
        --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" \
        --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" \
        --cert-file=\"${ETCD_CERT_FILE}\" \
        --key-file=\"${ETCD_KEY_FILE}\" \
        --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" \
        --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" \
        --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" \
        --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" \
        --peer-client-cert-auth \
        --client-cert-auth"
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

4.4.3 The etcd.conf file

Configure this on master01, node01 and node02; the only differences are ETCD_NAME and the https:// IP addresses

[root@k8s-master01 etcd]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.8.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.8.21:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd01"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.8.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.8.21:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.8.21:2380,etcd02=https://192.168.8.23:2380,etcd03=https://192.168.8.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
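Only ETCD_NAME and the node's own listen/advertise URLs differ between the three files, so the node01/node02 copies can be derived from the master's. A sed sketch (`make_etcd_conf` is a name invented here; review its output before deploying):

```shell
# Derive node01/node02 configs from the master copy: only ETCD_NAME and the
# node's own listen/advertise URLs change, while ETCD_INITIAL_CLUSTER stays
# identical. make_etcd_conf is a sketch helper invented here.
make_etcd_conf() {
    src=$1 dst=$2 name=$3 ip=$4
    sed -e "s|^ETCD_NAME=.*|ETCD_NAME=\"$name\"|" \
        -e "s|^ETCD_LISTEN_PEER_URLS=.*|ETCD_LISTEN_PEER_URLS=\"https://$ip:2380\"|" \
        -e "s|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS=\"https://$ip:2379\"|" \
        -e "s|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://$ip:2380\"|" \
        -e "s|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://$ip:2379\"|" \
        "$src" > "$dst"
}

# make_etcd_conf /etc/etcd/etcd.conf etcd.conf.node01 etcd02 192.168.8.23
# make_etcd_conf /etc/etcd/etcd.conf etcd.conf.node02 etcd03 192.168.8.24
```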

4.4.4 Distributing the certificates

Required on master01, node01 and node02:

[root@k8s-master01 etcd]# mkdir -p /etc/etcd/ssl
[root@k8s-master01 etcd]# ls
etcd.conf  ssl
[root@k8s-master01 etcd]# cp /root/cert/{ca,etcd,etcd-key}.pem ssl/

4.4.5 Starting etcd

Run on master01, node01 and node02

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart etcd
[root@k8s-master01 ~]# systemctl enable etcd
[root@k8s-master01 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-04-21 17:13:06 CST; 12s ago
     Docs: https://github.com/coreos
 Main PID: 32898 (etcd)
   CGroup: /system.slice/etcd.service
           └─32898 /usr/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.8.21:2380

Apr 21 17:13:06 k8s-master01 etcd[32898]: established a TCP streaming connection with peer ad1fe97d38022a78 (stream Message reader)
Apr 21 17:13:06 k8s-master01 etcd[32898]: established a TCP streaming connection with peer 1f8487ed3c45c380 (stream Message writer)
Apr 21 17:13:06 k8s-master01 etcd[32898]: established a TCP streaming connection with peer 1f8487ed3c45c380 (stream MsgApp v2 reader)
Apr 21 17:13:06 k8s-master01 etcd[32898]: established a TCP streaming connection with peer ad1fe97d38022a78 (stream Message writer)
Apr 21 17:13:06 k8s-master01 etcd[32898]: established a TCP streaming connection with peer ad1fe97d38022a78 (stream MsgApp v2 writer)
Apr 21 17:13:06 k8s-master01 etcd[32898]: 81e8cbf8670abb56 initialzed peer connection; fast-forwarding 8 ticks (election ticks 10...peer(s)
Apr 21 17:13:06 k8s-master01 etcd[32898]: published {Name:etcd01 ClientURLs:[https://192.168.8.21:2379]} to cluster 268325ffd11c23ee
Apr 21 17:13:06 k8s-master01 etcd[32898]: ready to serve client requests
Apr 21 17:13:06 k8s-master01 etcd[32898]: serving client requests on 192.168.8.21:2379
Apr 21 17:13:06 k8s-master01 systemd[1]: Started Etcd Server.
Hint: Some lines were ellipsized, use -l to show in full.

4.5 Deploying the master

Deployed on master01.
Download kubernetes-server-linux-amd64.tar.gz (the server binaries) beforehand.
1. Unpack the archive

[root@k8s-master01 ~]# tar -xvf kubernetes-server-linux-amd64.tar.gz
kubernetes/
kubernetes/kubernetes-src.tar.gz
kubernetes/addons/
kubernetes/LICENSES
kubernetes/server/
kubernetes/server/bin/
kubernetes/server/bin/kube-scheduler.tar
kubernetes/server/bin/mounter
kubernetes/server/bin/cloud-controller-manager.docker_tag
kubernetes/server/bin/kubeadm
kubernetes/server/bin/kube-apiserver
kubernetes/server/bin/kube-controller-manager
kubernetes/server/bin/kubectl
kubernetes/server/bin/kube-aggregator
kubernetes/server/bin/cloud-controller-manager.tar
kubernetes/server/bin/kube-scheduler.docker_tag
kubernetes/server/bin/kube-aggregator.docker_tag
kubernetes/server/bin/kube-proxy
kubernetes/server/bin/kube-scheduler
kubernetes/server/bin/kube-controller-manager.tar
kubernetes/server/bin/kube-proxy.tar
kubernetes/server/bin/kube-proxy.docker_tag
kubernetes/server/bin/apiextensions-apiserver
kubernetes/server/bin/kube-controller-manager.docker_tag
kubernetes/server/bin/cloud-controller-manager
kubernetes/server/bin/kube-aggregator.tar
kubernetes/server/bin/kube-apiserver.docker_tag
kubernetes/server/bin/hyperkube
kubernetes/server/bin/kubelet
kubernetes/server/bin/kube-apiserver.tar

2. Move the kubernetes directory to /opt

[root@k8s-master01 ~]# mv kubernetes /opt/

3. Create three directories under /opt/kubernetes

[root@k8s-master01 kubernetes]# mkdir logs
[root@k8s-master01 kubernetes]# mkdir ssl
[root@k8s-master01 kubernetes]# mkdir cfg

4. Copy the pem files to /opt/kubernetes/ssl

[root@k8s-master01 kubernetes]# cp /root/cert/*.pem ssl/
[root@k8s-master01 kubernetes]# cd ssl/
[root@k8s-master01 ssl]# ls
ca-key.pem  ca.pem  etcd-key.pem  etcd.pem
[root@k8s-master01 ssl]# mv etcd.pem server.pem
[root@k8s-master01 ssl]# mv etcd-key.pem server-key.pem
[root@k8s-master01 ssl]# ll
total 16
-rwxr-xr-x. 1 root root 1679 Apr 21 18:42 ca-key.pem
-rwxr-xr-x. 1 root root 1367 Apr 21 18:42 ca.pem
-rwxr-xr-x. 1 root root 1679 Apr 21 18:42 server-key.pem
-rwxr-xr-x. 1 root root 1436 Apr 21 18:42 server.pem

5. Configure the kube-apiserver.service file

[root@k8s-master01 cfg]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/server/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

6. Configure the kube-controller-manager.service file

[root@k8s-master01 cfg]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/server/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@k8s-master01 cfg]#

7. Configure the kube-scheduler.service file

[root@k8s-master01 cfg]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/server/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@k8s-master01 cfg]#

8. Configure kube-apiserver.conf

[root@k8s-master01 cfg]# pwd
/opt/kubernetes/cfg
[root@k8s-master01 cfg]# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.8.21:2379,https://192.168.8.23:2379,https://192.168.8.24:2379 \
--bind-address=192.168.8.21 \
--secure-port=6443 \
--advertise-address=192.168.8.21 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
#--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
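A missing certificate file is a common reason for kube-apiserver to crash-loop immediately after this step. The sketch below (`check_cert_paths` is a name invented here) extracts every .pem/.csv path referenced in the conf and verifies each one exists:

```shell
# Verify every certificate/key/token path referenced in a conf file exists
# before starting the service. check_cert_paths is a helper invented here.
check_cert_paths() {
    rc=0
    for f in $(grep -o '/[^ "\\]*\.\(pem\|csv\)' "$1" | sort -u); do
        if [ -f "$f" ]; then echo "ok      $f"; else echo "MISSING $f"; rc=1; fi
    done
    return $rc
}

# Usage: check_cert_paths /opt/kubernetes/cfg/kube-apiserver.conf
```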

9. Configure kube-controller-manager.conf

[root@k8s-master01 cfg]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=876000h0m0s"

10. Configure kube-scheduler.conf

[root@k8s-master01 cfg]# cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"

11. Enable TLS bootstrapping
Generate a token by hand

[root@k8s-master01 cfg]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
0e9aeaafa88e00cca92906288e65bd07

Create token.csv

[root@k8s-master01 cfg]# cat token.csv
0e9aeaafa88e00cca92906288e65bd07,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
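The token generation and token.csv steps can be combined into one helper; `gen_token_csv` is a name invented here and reuses the exact od pipeline shown above:

```shell
# Generate a random 32-hex-char bootstrap token and write token.csv in one
# step. gen_token_csv is a helper invented here; in this setup the real
# output path is /opt/kubernetes/cfg/token.csv.
gen_token_csv() {
    out=$1
    token=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
    echo "${token},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > "$out"
}

# Usage: gen_token_csv /opt/kubernetes/cfg/token.csv
```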

Grant the kubelet-bootstrap user its cluster role

[root@k8s-master01 cfg]# /opt/kubernetes/server/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:kubelet-bootstrap --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
[root@k8s-master01 cfg]# cp /opt/kubernetes/server/bin/kubectl /usr/local/bin/

12. Start the Kubernetes services
Start the apiserver

[root@k8s-master01 cfg]# systemctl restart kube-apiserver.service
[root@k8s-master01 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-04-21 19:53:05 CST; 7s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 34325 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─34325 /opt/kubernetes/server/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https:

Apr 21 19:53:05 k8s-master01 systemd[1]: Started Kubernetes API Server.
Apr 21 19:53:08 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:08 log.go:33: [restful/swagger] listing is available at http
Apr 21 19:53:08 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:08 log.go:33: [restful/swagger] https://192.168.8.21:6443/sw
Apr 21 19:53:09 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:09 log.go:33: [restful/swagger] listing is available at http
Apr 21 19:53:09 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:09 log.go:33: [restful/swagger] https://192.168.8.21:6443/sw

Start the controller-manager and scheduler services

[root@k8s-master01 cfg]# systemctl start kube-controller-manager.service
[root@k8s-master01 cfg]# systemctl start kube-scheduler.service
[root@k8s-master01 cfg]# ps -ef|grep kube
root      34325      1 23 19:53 ?        00:00:09 /opt/kubernetes/server/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubern2379 --bind-address=192.168.8.21 --secure-port=6443 --advertise-address=192.168.8.21 --allow-privileged=true --service-cluster-ip-range=10.odeRestriction --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmistrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/ope=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --sertfile=/etc/etcd/ssl/etcd.pem --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=1
root      34377      1  7 19:53 ?        00:00:00 /opt/kubernetes/server/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/o-cidrs=true --cluster-cidr=10.244.0.0/16 --service-cluster-ip-range=10.0.0.0/24 --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --clservice-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=876000h0m0s
root      34406      1 11 19:53 ?        00:00:00 /opt/kubernetes/server/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubern
root      34415  10657  0 19:53 pts/1    00:00:00 grep --color=auto kube
[root@k8s-master01 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-04-21 19:53:05 CST; 51s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 34325 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─34325 /opt/kubernetes/server/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https:

Apr 21 19:53:05 k8s-master01 systemd[1]: Started Kubernetes API Server.
Apr 21 19:53:08 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:08 log.go:33: [restful/swagger] listing is available at http
Apr 21 19:53:08 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:08 log.go:33: [restful/swagger] https://192.168.8.21:6443/sw
Apr 21 19:53:09 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:09 log.go:33: [restful/swagger] listing is available at http
Apr 21 19:53:09 k8s-master01 kube-apiserver[34325]: [restful] 2020/04/21 19:53:09 log.go:33: [restful/swagger] https://192.168.8.21:6443/sw
[root@k8s-master01 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-04-21 19:53:36 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 34377 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─34377 /opt/kubernetes/server/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-ele

Apr 21 19:53:36 k8s-master01 systemd[1]: Started Kubernetes Controller Manager.
Apr 21 19:53:47 k8s-master01 kube-controller-manager[34377]: E0421 19:53:47.057645   34377 core.go:74] Failed to start service controller:
[root@k8s-master01 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-04-21 19:53:43 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 34406 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─34406 /opt/kubernetes/server/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --mast

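Beyond `systemctl status`, it is worth probing the API server directly and waiting until it actually answers. A minimal sketch, assuming the apiserver still serves its default local insecure port 8080 (which the controller-manager above also uses via `--master=127.0.0.1:8080`); adjust the URL for your setup:

```shell
#!/bin/sh
# Poll a health endpoint until it answers "ok" or the retries run out.
# http://127.0.0.1:8080/healthz assumes the apiserver's default local
# insecure port; this is an assumption, adjust for your cluster.
wait_healthy() {
  url="$1"
  tries="${2:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" 2>/dev/null | grep -qx 'ok'; then
      echo "healthy: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unhealthy after $tries tries: $url" >&2
  return 1
}

# On the master itself you would run:
#   wait_healthy http://127.0.0.1:8080/healthz
```

Polling with a bounded retry count is more robust than a one-shot `curl` right after `systemctl start`, since the apiserver takes a few seconds to come up (see the swagger log lines above).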
Enable the three services to start on boot:

[root@k8s-master01 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master01 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master01 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
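The three `systemctl enable` calls above can be collapsed into one loop. A minimal sketch; it defaults to a dry run (printing the commands via `echo`) so it can be previewed safely, and on the real master you would run it with `SYSTEMCTL=systemctl`:

```shell
#!/bin/sh
# Enable the three control-plane units in one pass.
# SYSTEMCTL defaults to echo (dry run); set SYSTEMCTL=systemctl on
# the actual master to apply the changes for real.
enable_units() {
  for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    "${SYSTEMCTL:-echo}" enable "${svc}.service"
  done
}

enable_units
```

Keeping the unit names in one list also makes it easy to reuse the same loop for `start`, `status`, or `restart` when debugging.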

Verify that the processes are running:

[root@k8s-master01 cfg]# ps -ef|grep kube
root      34325      1  5 19:53 ?        00:00:51 /opt/kubernetes/server/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.8.21:2379,https://192.168.8.23:2379,https://192.168.8.24:2379 --bind-address=192.168.8.21 --secure-port=6443 --advertise-address=192.168.8.21 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 #--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/etcd/ssl/ca.pem --etcd-certfile=/etc/etcd/ssl/etcd.pem --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log
root      34377      1  2 19:53 ?        00:00:20 /opt/kubernetes/server/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect=true --master=127.0.0.1:8080 --address=127.0.0.1 --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 --service-cluster-ip-range=10.0.0.0/24 --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=876000h0m0s
root      34406      1  0 19:53 ?        00:00:07 /opt/kubernetes/server/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --master=127.0.0.1:8080 --address=127.0.0.1
root      34619  10657  0 20:08 pts/1    00:00:00 grep --color=auto kube
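The kube-apiserver line in the `ps` output above is one very long string; printing one token per line makes the flag list much easier to audit. A minimal sketch (the shortened command string at the bottom is illustrative only):

```shell
#!/bin/sh
# Print each whitespace-separated token of a command line on its own
# line, so long flag lists like kube-apiserver's become readable.
split_flags() {
  printf '%s\n' "$1" | tr ' ' '\n'
}

# Illustrative, shortened version of the apiserver command line:
split_flags 'kube-apiserver --logtostderr=false --v=2 --secure-port=6443'
```

On the live host you could instead read the real command line directly, e.g. `tr '\0' '\n' < /proc/34325/cmdline` (substituting the PID from your own `ps` output).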
