Deploying a Kubernetes 1.10.4 Cluster from Binaries on CentOS 7.5 (etcd + Calico + Dashboard)

I. Environment

OS                 Hostname      Node / role            IP              Components
CentOS 7.5 x86_64  k8s-master    master/etcd/registry   192.168.168.2   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, docker, calico-image
CentOS 7.5 x86_64  work-node01   node01/etcd            192.168.168.3   kube-proxy, kubelet, etcd, docker, calico
CentOS 7.5 x86_64  work-node02   node02/etcd            192.168.168.4   kube-proxy, kubelet, etcd, docker, calico

Description of the cluster components:

Master node:
The master consists of four main components: the APIServer, the scheduler, the controller-manager and etcd.

APIServer: the APIServer exposes the RESTful Kubernetes API and is the single entry point for management operations; every create, delete, update or query of a resource goes through the APIServer before being persisted to etcd. kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer.

scheduler: the scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with default scheduling algorithms and also keeps the interface open, so users can plug in scheduling algorithms of their own.

controller manager: if the APIServer does the front-office work, the controller manager handles the back office. Every resource type has a corresponding controller, and the controller manager is what runs them. For example, when we create a Pod through the APIServer, the APIServer's job is done once the Pod object has been created; from then on it is the controllers that keep the Pod's actual state in line with its desired state.

etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.
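
Because kubectl is only a REST client of the APIServer, you can watch the underlying API calls for yourself; an optional sketch (the -v verbosity flag is a standard kubectl option, and the curl variant reuses the admin certificate generated later in section VII):

kubectl get nodes -v=8        # prints the GET https://.../api/v1/nodes requests kubectl issues
curl --cacert /opt/kubernetes/ssl/ca.pem \
     --cert /opt/kubernetes/ssl/admin.pem \
     --key /opt/kubernetes/ssl/admin-key.pem \
     https://192.168.168.2:6443/api/v1/nodes     # the same resource fetched directly from the APIServer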

Node nodes:
Each node runs two main Kubernetes components, kubelet and kube-proxy, alongside the Docker runtime.

kube-proxy: this component implements service discovery and reverse proxying inside Kubernetes. kube-proxy forwards TCP and UDP connections and, by default, uses a round-robin algorithm to spread client traffic across the backend Pods of a Service. For service discovery it uses etcd's watch mechanism to track changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so backend Pod IP changes are invisible to callers. kube-proxy also supports session affinity.

kubelet: kubelet is the master's agent on each node and the most important component on a node. It maintains and manages all containers on that node, except containers that were not created through Kubernetes. In essence, it drives the Pods' actual running state toward the desired state.

II. Pre-installation preparation on the three hosts

1) Update packages and kernel
yum -y update
2) Disable the firewall
systemctl disable firewalld.service
3) Disable SELinux
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
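
A non-interactive equivalent (a sketch: setenforce turns SELinux off for the running system, the sed edit makes it permanent after reboot):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config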
4) Install common tools
yum -y install net-tools ntpdate conntrack-tools
5) Tune kernel parameters
net.ipv4.ip_local_port_range = 30000 60999
net.netfilter.nf_conntrack_max = 26214400
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 3600
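
The guide lists the values but not where to persist them; a minimal sketch that writes them to a sysctl drop-in (the file name k8s.conf is arbitrary):

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_local_port_range = 30000 60999
net.netfilter.nf_conntrack_max = 26214400
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 3600
EOF
modprobe nf_conntrack     # the net.netfilter.* keys only appear once the conntrack module is loaded
sysctl --system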

III. Set the hostnames of the three hosts

1) k8s-master
hostnamectl --static set-hostname  k8s-master
2) work-node01
hostnamectl --static set-hostname  work-node01
3) work-node02

hostnamectl --static set-hostname  work-node02

IV. Create the CA certificates

1. Create directories for generating and storing certificates (on all three hosts)

 
  1. mkdir /root/ssl

  2. mkdir -p /opt/kubernetes/{conf,bin,ssl,yaml}

2. Set environment variables (on all three hosts)

 
  1. vi /etc/profile.d/kubernetes.sh

  2. K8S_HOME=/opt/kubernetes

  3. export PATH=$K8S_HOME/bin/:$PATH

  4. source /etc/profile.d/kubernetes.sh

3. Install CFSSL and copy it to node01 and node02

 
  1. cd /root/ssl

  2. wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

  3. wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

  4. wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

  5. chmod +x cfssl*

  6. mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo

  7. mv cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson

  8. mv cfssl_linux-amd64 /opt/kubernetes/bin/cfssl

 
  1. scp /opt/kubernetes/bin/cfssl* 192.168.168.3:/opt/kubernetes/bin

  2. scp /opt/kubernetes/bin/cfssl* 192.168.168.4:/opt/kubernetes/bin

4. Create the JSON config file used to generate the CA

 
  1. cd /root/ssl

  2. cat > ca-config.json << EOF

  3. {

  4.   "signing": {

  5.     "default": {

  6.       "expiry": "87600h"

  7.     },

  8.     "profiles": {

  9.       "kubernetes": {

  10.         "usages": [

  11.             "signing",

  12.             "key encipherment",

  13.             "server auth",

  14.             "client auth"

  15.         ],

  16.         "expiry": "87600h"

  17.       }

  18.     }

  19.   }

  20. }

  21. EOF

server auth means the client can use this CA to verify certificates presented by the server

client auth means the server can use this CA to verify certificates presented by the client

5. Create the JSON config file for the CA certificate signing request (CSR)

 
  1. cat > ca-csr.json << EOF

  2. {

  3.   "CN": "kubernetes",

  4.   "key": {

  5.     "algo": "rsa",

  6.     "size": 2048

  7.   },

  8.   "names": [

  9.     {

  10.       "C": "CN",

  11.       "ST": "BeiJing",

  12.       "L": "BeiJing",

  13.       "O": "k8s",

  14.       "OU": "System"

  15.     }

  16.   ]

  17. }

  18. EOF

6. Generate the CA certificate and private key

 
  1. # cfssl gencert -initca ca-csr.json | cfssljson -bare ca

  2. # ls ca*

  3. ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
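
Optionally, the generated CA can be inspected with the cfssl-certinfo binary installed in step 3 to confirm its subject and expiry:

cfssl-certinfo -cert ca.pem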

Distribute the certificates to the other nodes with scp

 
  1. # cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl

  2. # scp ca.csr ca.pem ca-key.pem ca-config.json root@192.168.168.3:/opt/kubernetes/ssl

  3. # scp ca.csr ca.pem ca-key.pem ca-config.json root@192.168.168.4:/opt/kubernetes/ssl

7. Create the etcd certificate request

 
  1. # cat > etcd-csr.json << EOF

  2. {

  3. "CN": "etcd",

  4. "hosts": [

  5. "127.0.0.1",

  6. "192.168.168.2",

  7. "192.168.168.3",

  8. "192.168.168.4",

  9. "k8s-master",

  10. "work-node01",

  11. "work-node02"

  12. ],

  13. "key": {

  14. "algo": "rsa",

  15. "size": 2048

  16. },

  17. "names": [

  18. {

  19. "C": "CN",

  20. "ST": "BeiJing",

  21. "L": "BeiJing",

  22. "O": "k8s",

  23. "OU": "System"

  24. }

  25. ]

  26. }

  27. EOF

8. Generate the etcd certificate and private key

 
  1. # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

  2. # ls etc*

  3. etcd.csr  etcd-key.pem  etcd.pem

Distribute the certificate files

 
  1. # cp /root/ssl/etcd*.pem /opt/kubernetes/ssl

  2. # scp /root/ssl/etcd*.pem root@192.168.168.3:/opt/kubernetes/ssl

  3. # scp /root/ssl/etcd*.pem root@192.168.168.4:/opt/kubernetes/ssl

V. Install and configure the etcd cluster (the three hosts must be time-synchronized first)
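
The heading notes that the three hosts must be time-synchronized before the etcd cluster is built; a minimal sketch using the ntpdate installed in section II (ntp.aliyun.com is only an example NTP server, any reachable one will do):

ntpdate ntp.aliyun.com
# optionally re-sync every 30 minutes via cron
echo '*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1' >> /var/spool/cron/root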

1. Edit /etc/hosts (on all three hosts)

vi /etc/hosts

 
  1. # echo '192.168.168.2 k8s-master

  2. 192.168.168.3 work-node01

  3. 192.168.168.4 work-node02' >> /etc/hosts

2. Download the etcd package

wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz

3. Unpack and install etcd (same steps on all three hosts)

 
  1. mkdir /var/lib/etcd

  2. tar -zxvf etcd-v3.3.7-linux-amd64.tar.gz

  3. cp etcd-v3.3.7-linux-amd64/{etcd,etcdctl} /opt/kubernetes/bin

4. Create the etcd systemd unit

 
  1. cat > /usr/lib/systemd/system/etcd.service << EOF

  2. [Unit]

  3. Description=Etcd Server

  4. After=network.target

  5.  
  6. [Service]

  8. WorkingDirectory=/var/lib/etcd

  9. EnvironmentFile=/opt/kubernetes/conf/etcd.conf

  10. # set GOMAXPROCS to number of processors

  11. ExecStart=/bin/bash -c "GOMAXPROCS=1 /opt/kubernetes/bin/etcd"

  12. Type=notify

  13.  
  14. [Install]

  15. WantedBy=multi-user.target

  16. EOF

Distribute etcd.service to the node machines

 
  1. scp /usr/lib/systemd/system/etcd.service root@192.168.168.3:/usr/lib/systemd/system/etcd.service

  2. scp /usr/lib/systemd/system/etcd.service root@192.168.168.4:/usr/lib/systemd/system/etcd.service

5. Edit etcd.conf on k8s-master (192.168.168.2)

vi /opt/kubernetes/conf/etcd.conf

 
  1. #[Member]

  2. #ETCD_CORS=""

  3. ETCD_DATA_DIR="/var/lib/etcd/k8s-master.etcd"

  4. #ETCD_WAL_DIR=""

  5. ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"

  6. ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379,https://127.0.0.1:4001"

  7. #ETCD_MAX_SNAPSHOTS="5"

  8. #ETCD_MAX_WALS="5"

  9. ETCD_NAME="k8s-master"

  10. #ETCD_SNAPSHOT_COUNT="100000"

  11. #ETCD_HEARTBEAT_INTERVAL="100"

  12. #ETCD_ELECTION_TIMEOUT="1000"

  13. #ETCD_QUOTA_BACKEND_BYTES="0"

  14. #ETCD_MAX_REQUEST_BYTES="1572864"

  15. #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"

  16. #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"

  17. #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"

  18. #

  19. #[Clustering]

  20. #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

  21. ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.168.2:2380"

  22. ETCD_ADVERTISE_CLIENT_URLS="https://192.168.168.2:2379"

  23. #ETCD_DISCOVERY=""

  24. #ETCD_DISCOVERY_FALLBACK="proxy"

  25. #ETCD_DISCOVERY_PROXY=""

  26. #ETCD_DISCOVERY_SRV=""

  27. ETCD_INITIAL_CLUSTER="k8s-master=https://192.168.168.2:2380,work-node01=https://192.168.168.3:2380,work-node02=https://192.168.168.4:2380"

  28. ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

  29. ETCD_INITIAL_CLUSTER_STATE="new"

  30. #ETCD_STRICT_RECONFIG_CHECK="true"

  31. #ETCD_ENABLE_V2="true"

  32. #

  33. #[Proxy]

  34. #ETCD_PROXY="off"

  35. #ETCD_PROXY_FAILURE_WAIT="5000"

  36. #ETCD_PROXY_REFRESH_INTERVAL="30000"

  37. #ETCD_PROXY_DIAL_TIMEOUT="1000"

  38. #ETCD_PROXY_WRITE_TIMEOUT="5000"

  39. #ETCD_PROXY_READ_TIMEOUT="0"

  40. #

  41. #[Security]

  42. CLIENT_CERT_AUTH="true"

  43. ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"

  44. ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

  45. ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  46. PEER_CLIENT_CERT_AUTH="true"

  47. ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"

  48. ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

  49. ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  50. #

  51. #[Logging]

  52. #ETCD_DEBUG="false"

  53. #ETCD_LOG_PACKAGE_LEVELS=""

  54. #ETCD_LOG_OUTPUT="default"

  55. #

  56. #[Unsafe]

  57. #ETCD_FORCE_NEW_CLUSTER="false"

  58. #

  59. #[Version]

  60. #ETCD_VERSION="false"

  61. #ETCD_AUTO_COMPACTION_RETENTION="0"

  62. #

  63. #[Profiling]

  64. #ETCD_ENABLE_PPROF="false"

  65. #ETCD_METRICS="basic"

  66. #

  67. #[Auth]

  68. #ETCD_AUTH_TOKEN="simple"

6. Edit etcd.conf on work-node01 (192.168.168.3)

vi /opt/kubernetes/conf/etcd.conf

 
  1. #[Member]

  2. #ETCD_CORS=""

  3. ETCD_DATA_DIR="/var/lib/etcd/work-node01.etcd"

  4. #ETCD_WAL_DIR=""

  5. ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"

  6. ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379,https://127.0.0.1:4001"

  7. #ETCD_MAX_SNAPSHOTS="5"

  8. #ETCD_MAX_WALS="5"

  9. ETCD_NAME="work-node01"

  10. #ETCD_SNAPSHOT_COUNT="100000"

  11. #ETCD_HEARTBEAT_INTERVAL="100"

  12. #ETCD_ELECTION_TIMEOUT="1000"

  13. #ETCD_QUOTA_BACKEND_BYTES="0"

  14. #ETCD_MAX_REQUEST_BYTES="1572864"

  15. #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"

  16. #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"

  17. #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"

  18. #

  19. #[Clustering]

  20. #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

  21. ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.168.3:2380"

  22. ETCD_ADVERTISE_CLIENT_URLS="https://192.168.168.3:2379"

  23. #ETCD_DISCOVERY=""

  24. #ETCD_DISCOVERY_FALLBACK="proxy"

  25. #ETCD_DISCOVERY_PROXY=""

  26. #ETCD_DISCOVERY_SRV=""

  27. ETCD_INITIAL_CLUSTER="k8s-master=https://192.168.168.2:2380,work-node01=https://192.168.168.3:2380,work-node02=https://192.168.168.4:2380"

  28. ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

  29. ETCD_INITIAL_CLUSTER_STATE="new"

  30. #ETCD_STRICT_RECONFIG_CHECK="true"

  31. #ETCD_ENABLE_V2="true"

  32. #

  33. #[Proxy]

  34. #ETCD_PROXY="off"

  35. #ETCD_PROXY_FAILURE_WAIT="5000"

  36. #ETCD_PROXY_REFRESH_INTERVAL="30000"

  37. #ETCD_PROXY_DIAL_TIMEOUT="1000"

  38. #ETCD_PROXY_WRITE_TIMEOUT="5000"

  39. #ETCD_PROXY_READ_TIMEOUT="0"

  40. #

  41. #[Security]

  42. CLIENT_CERT_AUTH="true"

  43. ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"

  44. ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

  45. ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  46. PEER_CLIENT_CERT_AUTH="true"

  47. ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"

  48. ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

  49. ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  50. #

  51. #[Logging]

  52. #ETCD_DEBUG="false"

  53. #ETCD_LOG_PACKAGE_LEVELS=""

  54. #ETCD_LOG_OUTPUT="default"

  55. #

  56. #[Unsafe]

  57. #ETCD_FORCE_NEW_CLUSTER="false"

  58. #

  59. #[Version]

  60. #ETCD_VERSION="false"

  61. #ETCD_AUTO_COMPACTION_RETENTION="0"

  62. #

  63. #[Profiling]

  64. #ETCD_ENABLE_PPROF="false"

  65. #ETCD_METRICS="basic"

  66. #

  67. #[Auth]

  68. #ETCD_AUTH_TOKEN="simple"

7. Edit etcd.conf on work-node02 (192.168.168.4)

vi /opt/kubernetes/conf/etcd.conf

 
  1. #[Member]

  2. #ETCD_CORS=""

  3. ETCD_DATA_DIR="/var/lib/etcd/work-node02.etcd"

  4. #ETCD_WAL_DIR=""

  5. ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"

  6. ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379,https://127.0.0.1:4001"

  7. #ETCD_MAX_SNAPSHOTS="5"

  8. #ETCD_MAX_WALS="5"

  9. ETCD_NAME="work-node02"

  10. #ETCD_SNAPSHOT_COUNT="100000"

  11. #ETCD_HEARTBEAT_INTERVAL="100"

  12. #ETCD_ELECTION_TIMEOUT="1000"

  13. #ETCD_QUOTA_BACKEND_BYTES="0"

  14. #ETCD_MAX_REQUEST_BYTES="1572864"

  15. #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"

  16. #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"

  17. #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"

  18. #

  19. #[Clustering]

  20. #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

  21. ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.168.4:2380"

  22. ETCD_ADVERTISE_CLIENT_URLS="https://192.168.168.4:2379"

  23. #ETCD_DISCOVERY=""

  24. #ETCD_DISCOVERY_FALLBACK="proxy"

  25. #ETCD_DISCOVERY_PROXY=""

  26. #ETCD_DISCOVERY_SRV=""

  27. ETCD_INITIAL_CLUSTER="k8s-master=https://192.168.168.2:2380,work-node01=https://192.168.168.3:2380,work-node02=https://192.168.168.4:2380"

  28. ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

  29. ETCD_INITIAL_CLUSTER_STATE="new"

  30. #ETCD_STRICT_RECONFIG_CHECK="true"

  31. #ETCD_ENABLE_V2="true"

  32. #

  33. #[Proxy]

  34. #ETCD_PROXY="off"

  35. #ETCD_PROXY_FAILURE_WAIT="5000"

  36. #ETCD_PROXY_REFRESH_INTERVAL="30000"

  37. #ETCD_PROXY_DIAL_TIMEOUT="1000"

  38. #ETCD_PROXY_WRITE_TIMEOUT="5000"

  39. #ETCD_PROXY_READ_TIMEOUT="0"

  40. #

  41. #[Security]

  42. CLIENT_CERT_AUTH="true"

  43. ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"

  44. ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

  45. ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  46. PEER_CLIENT_CERT_AUTH="true"

  47. ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"

  48. ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

  49. ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  50. #

  51. #[Logging]

  52. #ETCD_DEBUG="false"

  53. #ETCD_LOG_PACKAGE_LEVELS=""

  54. #ETCD_LOG_OUTPUT="default"

  55. #

  56. #[Unsafe]

  57. #ETCD_FORCE_NEW_CLUSTER="false"

  58. #

  59. #[Version]

  60. #ETCD_VERSION="false"

  61. #ETCD_AUTO_COMPACTION_RETENTION="0"

  62. #

  63. #[Profiling]

  64. #ETCD_ENABLE_PPROF="false"

  65. #ETCD_METRICS="basic"

  66. #

  67. #[Auth]

  68. #ETCD_AUTH_TOKEN="simple"

8. Start etcd on every etcd node and enable it at boot

 
  1. # systemctl daemon-reload

  2. # systemctl enable etcd

  3. # systemctl start etcd.service

  4. # systemctl status etcd.service

9. Verify the etcd cluster from each node

 
  1. # etcd --version    //check the installed etcd version

  2. etcd Version: 3.3.7

  3. Git SHA: 56536de55

  4. Go Version: go1.9.6

  5. Go OS/Arch: linux/amd64

Check the etcd cluster health

 
  1. # etcdctl --endpoints=https://192.168.168.2:2379 \

  2.   --ca-file=/opt/kubernetes/ssl/ca.pem \

  3.   --cert-file=/opt/kubernetes/ssl/etcd.pem \

  4.   --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

  5. member b1840b0a404e1103 is healthy: got healthy result from https://192.168.168.2:2379

  6. member d15b66900329a12d is healthy: got healthy result from https://192.168.168.4:2379

  7. member f9794412c46a9cb0 is healthy: got healthy result from https://192.168.168.3:2379

  8. cluster is healthy

List the etcd cluster members

 
  1. # etcdctl --endpoints=https://192.168.168.2:2379 \

  2. --ca-file=/opt/kubernetes/ssl/ca.pem \

  3. --cert-file=/opt/kubernetes/ssl/etcd.pem \

  4. --key-file=/opt/kubernetes/ssl/etcd-key.pem member list

  5. b1840b0a404e1103: name=k8s-master peerURLs=https://192.168.168.2:2380 clientURLs=https://192.168.168.2:2379 isLeader=false

  6. d15b66900329a12d: name=work-node02 peerURLs=https://192.168.168.4:2380 clientURLs=https://192.168.168.4:2379 isLeader=true

  7. f9794412c46a9cb0: name=work-node01 peerURLs=https://192.168.168.3:2380 clientURLs=https://192.168.168.3:2379 isLeader=false

VI. Install docker-engine on the three hosts

1. For the detailed steps see "Installing Docker on Oracle Linux 7"

2. Configure Docker to use the etcd cluster

Edit Docker's daemon.json file

 
  1. # vi /etc/docker/daemon.json

  2. {

  3. "exec-opts": ["native.cgroupdriver=cgroupfs"],

  4. "registry-mirrors": ["https://wghlmi3i.mirror.aliyuncs.com"],

  5. "cluster-store": "etcd://192.168.168.2:2379,192.168.168.3:2379,192.168.168.4:2379"

  6. }

Note: each node's docker bip subnet must be unique, e.g. 172.26.1.1/24 on node01 and 172.26.2.1/24 on node02.
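
For example, on node01 the bip can simply be added to the daemon.json above (a sketch; the 172.26.x.x subnets are the ones suggested in the note, pick a different one per node):

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "registry-mirrors": ["https://wghlmi3i.mirror.aliyuncs.com"],
  "cluster-store": "etcd://192.168.168.2:2379,192.168.168.3:2379,192.168.168.4:2379",
  "bip": "172.26.1.1/24"
}
EOF
systemctl restart docker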

Edit the docker systemd unit file

 
  1. # vi /usr/lib/systemd/system/docker.service

  2. ExecStart=/usr/bin/dockerd

  3. change it to

  4. ExecStart=/usr/bin/dockerd --tlsverify \

  5. --tlscacert=/opt/kubernetes/ssl/ca.pem \

  6. --tlscert=/opt/kubernetes/ssl/etcd.pem \

  7. --tlskey=/opt/kubernetes/ssl/etcd-key.pem -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

 
  1. # systemctl daemon-reload

  2. # systemctl enable docker.service

  3. # systemctl start docker.service

  4. # systemctl status docker.service

3. Test the Docker TLS configuration
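
A minimal way to verify the TLS listener (a sketch run from the master, reusing the etcd client certificate configured above; the --tls* flags and -H are standard docker CLI options):

docker --tlsverify \
  --tlscacert=/opt/kubernetes/ssl/ca.pem \
  --tlscert=/opt/kubernetes/ssl/etcd.pem \
  --tlskey=/opt/kubernetes/ssl/etcd-key.pem \
  -H tcp://192.168.168.3:2375 version
# both Client and Server sections should be printed if the TLS setup is working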

VII. Install and configure the Kubernetes cluster

1. Download the Kubernetes server binaries (this walkthrough uses v1.10.4)

kubernetes-server-linux-amd64.tar.gz

2. Unpack the Kubernetes tarball; it creates a kubernetes directory

tar -zxvf kubernetes-server-linux-amd64.tar.gz

3. Configure k8s-master (192.168.168.2)

1) Copy the k8s binaries into /opt/kubernetes/bin

 
  1. # cp -r /opt/software/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubeadm} /opt/kubernetes/bin/

  2.  
  3. # scp /opt/software/kubernetes/server/bin/{kubectl,kube-proxy,kubelet} root@192.168.168.3:/opt/kubernetes/bin/

  4. # scp /opt/software/kubernetes/server/bin/{kubectl,kube-proxy,kubelet} root@192.168.168.4:/opt/kubernetes/bin/

2) Create the JSON config file for the Kubernetes CSR:

 
  1. # cd /root/ssl

  2. # cat > kubernetes-csr.json << EOF

  3. {

  4. "CN": "kubernetes",

  5. "hosts": [

  6. "127.0.0.1",

  7. "192.168.168.2",

  8. "192.168.168.3",

  9. "192.168.168.4",

  10. "10.1.0.1",

  11. "localhost",

  12. "kubernetes",

  13. "kubernetes.default",

  14. "kubernetes.default.svc",

  15. "kubernetes.default.svc.cluster",

  16. "kubernetes.default.svc.cluster.local"

  17. ],

  18. "key": {

  19. "algo": "rsa",

  20. "size": 2048

  21. },

  22. "names": [

  23. {

  24. "C": "CN",

  25. "ST": "BeiJing",

  26. "L": "BeiJing",

  27. "O": "k8s",

  28. "OU": "System"

  29. }

  30. ]

  31. }

  32. EOF

Note: 10.1.0.1 is the first IP of the service-cluster CIDR.
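
Once the API server and kubectl are configured (steps 11 and 16 below) you can confirm this, since the built-in kubernetes Service is always assigned the first address of the service CIDR:

kubectl get svc kubernetes
# expect CLUSTER-IP 10.1.0.1, i.e. the first address of 10.1.0.0/16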

3) Generate the Kubernetes certificate and key in /root/ssl and distribute them to the nodes

 
  1. # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

  2.  
  3. # cp kubernetes*.pem /opt/kubernetes/ssl/

  4. # scp kubernetes*.pem root@192.168.168.3:/opt/kubernetes/ssl/

  5. # scp kubernetes*.pem root@192.168.168.4:/opt/kubernetes/ssl/

4) Create the JSON config file for the admin certificate CSR

 
  1. # cat > admin-csr.json << EOF

  2. {

  3. "CN": "admin",

  4. "hosts": [],

  5. "key": {

  6. "algo": "rsa",

  7. "size": 2048

  8. },

  9. "names": [

  10. {

  11. "C": "CN",

  12. "ST": "BeiJing",

  13. "L": "BeiJing",

  14. "O": "system:masters",

  15. "OU": "System"

  16. }

  17. ]

  18. }

  19. EOF

5) Generate the admin certificate and key

 
  1. # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

  2. # cp admin*.pem /opt/kubernetes/ssl/

6) Create the client token file used by kube-apiserver

 
  1. # mkdir /opt/kubernetes/token    //do the same on every k8s node

  2.  
  3. # export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')    //generate a random bootstrap token

  4. # cat > /opt/kubernetes/token/bootstrap-token.csv << EOF

  5. ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"

  6. EOF

7) Create the basic username/password auth file

 
  1. # vi /opt/kubernetes/token/basic-auth.csv    //add the following lines

  2. admin,admin,1

  3. readonly,readonly,2

8) Create the kube-apiserver systemd unit

 
  1. # vi /usr/lib/systemd/system/kube-apiserver.service

  2. [Unit]

  3. Description=Kubernetes API Server

  4. Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  5. After=network.target

  6. After=etcd.service

  7.  
  8. [Service]

  9. EnvironmentFile=/opt/kubernetes/conf/kube.conf

  10. EnvironmentFile=/opt/kubernetes/conf/apiserver.conf

  11. ExecStart=/opt/kubernetes/bin/kube-apiserver \

  12.         $KUBE_LOGTOSTDERR \

  13.         $KUBE_LOG_LEVEL \

  14.         $KUBE_ETCD_SERVERS \

  15.         $KUBE_API_ADDRESS \

  16.         $KUBE_API_PORT \

  17.         $KUBELET_PORT \

  18.         $KUBE_ALLOW_PRIV \

  19.         $KUBE_SERVICE_ADDRESSES \

  20.         $KUBE_ADMISSION_CONTROL \

  21.         $KUBE_API_ARGS

  22. Restart=on-failure

  23. RestartSec=5

  24. Type=notify

  25. LimitNOFILE=65536

  26.  
  27. [Install]

  28. WantedBy=multi-user.target

mkdir /var/log/kubernetes              //do the same on every k8s node
mkdir /var/log/kubernetes/apiserver

9) Create the kube.conf file

 
  1. # vi /opt/kubernetes/conf/kube.conf

  2. ###

  3. # kubernetes system config

  4. #

  5. # The following values are used to configure various aspects of all

  6. # kubernetes services, including

  7. #

  8. # kube-apiserver.service

  9. # kube-controller-manager.service

  10. # kube-scheduler.service

  11. # kubelet.service

  12. # kube-proxy.service

  13. # logging to stderr means we get it in the systemd journal

  14. KUBE_LOGTOSTDERR="--logtostderr=true"

  15. #

  16. # journal message level, 0 is debug

  17. KUBE_LOG_LEVEL="--v=2"    //log level

  18. #

  19. # Should this cluster be allowed to run privileged docker containers

  20. KUBE_ALLOW_PRIV="--allow-privileged=true"

  21. #

  22. # How the controller-manager, scheduler, and proxy find the apiserver

  23. #KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"

  24. KUBE_MASTER="--master=http://127.0.0.1:8080"

Note: this file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy; on the node machines, comment out the KUBE_MASTER line.

Distribute kube.conf to each node

 
  1. # scp kube.conf root@192.168.168.3:/opt/kubernetes/conf/

  2. # scp kube.conf root@192.168.168.4:/opt/kubernetes/conf/
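
As the note above says, KUBE_MASTER should then be commented out on the node machines; a one-line sketch to run on each node after copying the file:

sed -i 's/^KUBE_MASTER=/#KUBE_MASTER=/' /opt/kubernetes/conf/kube.conf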

10) Create the advanced audit policy

 
  1. cat > /opt/kubernetes/yaml/audit-policy.yaml << EOF

  2. # Log all requests at the Metadata level.

  3. apiVersion: audit.k8s.io/v1beta1

  4. kind: Policy

  5. rules:

  6. - level: Metadata

  7. EOF

11) Create the kube-apiserver config file and start it

 
  1. # vi /opt/kubernetes/conf/apiserver.conf

  2. ###

  3. ## kubernetes system config

  4. ##

  5. ## The following values are used to configure the kube-apiserver

  6. ##

  7. #

  8. ## The address on the local server to listen to.

  9. #KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"

  10. KUBE_API_ADDRESS="--advertise-address=192.168.168.2 --bind-address=192.168.168.2 --insecure-bind-address=127.0.0.1"

  11. #

  12. ## The port on the local server to listen on.

  13. #KUBE_API_PORT="--port=8080"

  14. #

  15. ## Port minions listen on

  16. #KUBELET_PORT="--kubelet-port=10250"

  17. #

  18. ## Comma separated list of nodes in the etcd cluster

  19. KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379"

  20. #

  21. ## Address range to use for services

  22. KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.1.0.0/16"

  23. #

  24. ## default admission control policies

  25. KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction"

  26. #

  27. ## Add your own!

  28. KUBE_API_ARGS="--authorization-mode=Node,RBAC --runtime-config=rbac.authorization.k8s.io/v1 --kubelet-https=true --anonymous-auth=false --enable-bootstrap-token-auth --basic-auth-file=/opt/kubernetes/token/basic-auth.csv --token-auth-file=/opt/kubernetes/token/bootstrap-token.csv --service-node-port-range=30000-32767 --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/kubernetes/ssl/ca.pem --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem --allow-privileged=true --enable-swagger-ui=true --apiserver-count=3 --audit-policy-file=/opt/kubernetes/yaml/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/kubernetes/apiserver/api-audit.log --log-dir=/var/log/kubernetes/apiserver --event-ttl=1h"

 
  1. # systemctl daemon-reload

  2. # systemctl enable kube-apiserver

  3. # systemctl start kube-apiserver

  4. # systemctl status kube-apiserver

12) Create the kube-controller-manager systemd unit

 
  1. # vi /usr/lib/systemd/system/kube-controller-manager.service

  2. [Unit]

  3. Description=Kubernetes Controller Manager

  4. Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  5.  
  6. [Service]

  7. EnvironmentFile=/opt/kubernetes/conf/kube.conf

  8. EnvironmentFile=/opt/kubernetes/conf/controller-manager.conf

  9. ExecStart=/opt/kubernetes/bin/kube-controller-manager \

  10.         $KUBE_LOGTOSTDERR \

  11.         $KUBE_LOG_LEVEL \

  12.         $KUBE_MASTER \

  13.         $KUBE_CONTROLLER_MANAGER_ARGS

  14. Restart=on-failure

  15. RestartSec=5

  16.  
  17. [Install]

  18. WantedBy=multi-user.target

mkdir /var/log/kubernetes/controller-manager

13) Create the kube-controller-manager config file and start it

 
  1. # vi /opt/kubernetes/conf/controller-manager.conf

  2. ###

  3. # The following values are used to configure the kubernetes controller-manager

  4. #

  5. # defaults from config and apiserver should be adequate

  6. #

  7. # Add your own!

  8. KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.1.0.0/16 --cluster-cidr=10.2.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --leader-elect=true --log-dir=/var/log/kubernetes/controller-manager"

Note: --service-cluster-ip-range specifies the CIDR used for cluster Services; this network must not be routable between the Nodes, and the value must match the one given to kube-apiserver.

 
  1. # systemctl daemon-reload

  2. # systemctl enable kube-controller-manager

  3. # systemctl start kube-controller-manager

  4. # systemctl status kube-controller-manager

14) Create the kube-scheduler systemd unit

 
  1. # vi /usr/lib/systemd/system/kube-scheduler.service

  2. [Unit]

  3. Description=Kubernetes Scheduler

  4. Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  5.  
  6. [Service]

  7. EnvironmentFile=/opt/kubernetes/conf/kube.conf

  8. EnvironmentFile=/opt/kubernetes/conf/scheduler.conf

  9. ExecStart=/opt/kubernetes/bin/kube-scheduler \

  10.             $KUBE_LOGTOSTDERR \

  11.             $KUBE_LOG_LEVEL \

  12.             $KUBE_MASTER \

  13.             $KUBE_SCHEDULER_ARGS

  14. Restart=on-failure

  15. RestartSec=5

  16.  
  17. [Install]

  18. WantedBy=multi-user.target

mkdir /var/log/kubernetes/scheduler

15) Create the kube-scheduler config file and start it

 
  1. # vi /opt/kubernetes/conf/scheduler.conf

  2. ###

  3. # kubernetes scheduler config

  4. #

  5. # default config should be adequate

  6. #

  7. # Add your own!

  8. KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1 --log-dir=/var/log/kubernetes/scheduler"

 
  1. # systemctl daemon-reload

  2. # systemctl enable kube-scheduler

  3. # systemctl start kube-scheduler

  4. # systemctl status kube-scheduler

16) Create the kubectl kubeconfig file

Set the cluster parameters

 
  1. # kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.168.2:6443

  2. Cluster "kubernetes" set.

Set the client credentials

 
  1. # kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem

  2. User "admin" set.

Set the context

 
  1. # kubectl config set-context kubernetes --cluster=kubernetes --user=admin

  2. Context "kubernetes" created.

Use the context as the default

 
  1. # kubectl config use-context kubernetes

  2. Switched to context "kubernetes".

17) Check the health of the components

 
  1. # kubectl get cs

  2. NAME                 STATUS    MESSAGE             ERROR

  3. controller-manager   Healthy   ok                  

  4. scheduler            Healthy   ok                  

  5. etcd-0               Healthy   {"health":"true"}   

  6. etcd-1               Healthy   {"health":"true"}   

  7. etcd-2               Healthy   {"health":"true"}

18) Create role bindings

 
  1. # kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin --clusterrole cluster-admin

  2. clusterrolebindings.rbac.authorization.k8s.io "kube-system-cluster-admin"

Note: on versions before kubernetes 1.10.4, use the following command

 
  1. # kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

  2. clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created

19) Create the kubelet bootstrapping kubeconfig

Set the cluster parameters

 
  1. # kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.168.2:6443 --kubeconfig=bootstrap.kubeconfig

  2. Cluster "kubernetes" set.

Set the client credentials

 
  1. # kubectl config set-credentials kubelet-bootstrap --token=1cd425206a373f7cc75c958fd363e3fe --kubeconfig=bootstrap.kubeconfig

  2. User "kubelet-bootstrap" set.

The token value is the 128-bit string generated on the master when the token file was created.
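
If the token was not written down, it can be read back from the file created in step 6; a sketch (the token is the first comma-separated field of bootstrap-token.csv):

BOOTSTRAP_TOKEN=$(cut -d, -f1 /opt/kubernetes/token/bootstrap-token.csv)
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig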

Set the context

 
  1. # kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig

  2. Context "default" created.

Use the context as the default

 
  1. # kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

  2. Switched to context "default".

Distribute the generated bootstrap.kubeconfig to the nodes

 
  1. cp bootstrap.kubeconfig /opt/kubernetes/conf/

  2.  
  3. scp bootstrap.kubeconfig root@192.168.168.3:/opt/kubernetes/conf/

  4. scp bootstrap.kubeconfig root@192.168.168.4:/opt/kubernetes/conf/

20) Create the JSON config file for the kube-proxy certificate CSR

 
  1. # cd /root/ssl

  2. # cat > kube-proxy-csr.json << EOF

  3. {

  4.   "CN": "system:kube-proxy",

  5.   "hosts": [],

  6.   "key": {

  7.     "algo": "rsa",

  8.     "size": 2048

  9.   },

  10.   "names": [

  11.     {

  12.       "C": "CN",

  13.       "ST": "BeiJing",

  14.       "L": "BeiJing",

  15.       "O": "k8s",

  16.       "OU": "System"

  17.     }

  18.   ]

  19. }

  20. EOF

21) Generate the kube-proxy certificate and key

 
  1. # cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

  2.  
  3. # cp kube-proxy*.pem /opt/kubernetes/ssl/

  4. # scp kube-proxy*.pem root@192.168.168.3:/opt/kubernetes/ssl/

  5. # scp kube-proxy*.pem root@192.168.168.4:/opt/kubernetes/ssl/

22) Create the kube-proxy kubeconfig

Set the cluster parameters

 
  1. # kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.168.2:6443 --kubeconfig=kube-proxy.kubeconfig

  2. Cluster "kubernetes" set.

Set the client credentials

 
  1. # kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

  2. User "kube-proxy" set.

Set the context

 
  1. # kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

  2. Context "default" created.

Use the context as the default

 
  1. # kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

  2. Switched to context "default".

Distribute kube-proxy.kubeconfig to the node machines

 
  1. cp /root/ssl/kube-proxy.kubeconfig /opt/kubernetes/conf/

  2. scp /root/ssl/kube-proxy.kubeconfig root@192.168.168.3:/opt/kubernetes/conf/

  3. scp /root/ssl/kube-proxy.kubeconfig root@192.168.168.4:/opt/kubernetes/conf/

4. Configure work-node01/02

1) Install ipvsadm and related tools (on every node)

yum install -y ipvsadm ipset bridge-utils

2) Create the kubelet working directory (on every node)

mkdir /var/lib/kubelet

3) Create the kubelet config file

 
  1. # vi /opt/kubernetes/conf/kubelet.conf

  2. ###

  3. ## kubernetes kubelet (minion) config

  4. #

  5. ## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

  6. KUBELET_ADDRESS="--address=192.168.168.3"

  7. #

  8. ## The port for the info server to serve on

  9. #KUBELET_PORT="--port=10250"

  10. #

  11. ## You may leave this blank to use the actual hostname

  12. KUBELET_HOSTNAME="--hostname-override=work-node01"

  13. #

  14. ## pod infrastructure container

  15. KUBELET_POD_INFRA_CONTAINER="--pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

  16. #

  17. ## 

  18. #KUBELET_API_SERVER="--api-servers=https://192.168.168.2:6443"

  19. #

  20. ## Add your own!

  21. KUBELET_ARGS="--cluster-dns=10.1.0.2 --experimental-bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig --cert-dir=/opt/kubernetes/ssl --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --allow-privileged=true --cluster-domain=cluster.local. --hairpin-mode=hairpin-veth --fail-swap-on=false --serialize-image-pulls=false --log-dir=/var/log/kubernetes/kubelet"

Note: set KUBELET_ADDRESS to each node's own IP and KUBELET_HOSTNAME to each node's hostname. KUBELET_POD_INFRA_CONTAINER may point at a private registry instead, e.g. KUBELET_POD_INFRA_CONTAINER="--pod_infra_container_image={private-registry-ip}:80/k8s/pause-amd64:v3.0". The directory given in cni-bin-dir is populated automatically when the calico network is created.

mkdir /var/log/kubernetes/kubelet    //do the same on every node

Distribute kubelet.conf to the other node (adjust the per-node values noted above)

scp /opt/kubernetes/conf/kubelet.conf root@192.168.168.4:/opt/kubernetes/conf/

4) Create the CNI network config file

 
  1. mkdir -p /etc/cni/net.d

  2. cat >/etc/cni/net.d/10-calico.conf << EOF

  3. {

  4.     "name": "calico-k8s-network",

  5.     "cniVersion": "0.6.0",

  6.     "type": "calico",

  7.     "etcd_endpoints": "https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379",

  8.     "etcd_key_file": "/opt/kubernetes/ssl/etcd-key.pem",

  9.     "etcd_cert_file": "/opt/kubernetes/ssl/etcd.pem",

  10.     "etcd_ca_cert_file": "/opt/kubernetes/ssl/ca.pem",

  11.     "log_level": "info",

  12.     "mtu": 1500,

  13.     "ipam": {

  14.         "type": "calico-ipam"

  15.     },

  16.     "policy": {

  17.         "type": "k8s"

  18.     },

  19.     "kubernetes": {

  20.         "kubeconfig": "/opt/kubernetes/conf/kubelet.conf"

  21.     }

  22. }

  23. EOF

Distribute it to the other node

scp /etc/cni/net.d/10-calico.conf root@192.168.168.4:/etc/cni/net.d

5) Create the kubelet systemd unit and start it

 
  1. # vi /usr/lib/systemd/system/kubelet.service

  2. [Unit]

  3. Description=Kubernetes Kubelet Server

  4. Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  5. After=docker.service

  6. Requires=docker.service

  7.  
  8. [Service]

  9. WorkingDirectory=/var/lib/kubelet

  10. EnvironmentFile=/opt/kubernetes/conf/kube.conf

  11. EnvironmentFile=/opt/kubernetes/conf/kubelet.conf

  12. ExecStart=/opt/kubernetes/bin/kubelet \

  13.             $KUBE_LOGTOSTDERR \

  14.             $KUBE_LOG_LEVEL \

  15.             $KUBELET_ADDRESS \

  16.             $KUBELET_PORT \

  17.             $KUBELET_HOSTNAME \

  18.             $KUBE_ALLOW_PRIV \

  19.             $KUBELET_POD_INFRA_CONTAINER \

  20.             $KUBELET_ARGS

  21. Restart=on-failure

  22. RestartSec=5

  23.  
  24. [Install]

  25. WantedBy=multi-user.target

Distribute kubelet.service to the other node

scp /usr/lib/systemd/system/kubelet.service root@192.168.168.4:/usr/lib/systemd/system/
 
  1. systemctl daemon-reload

  2. systemctl enable kubelet

  3. systemctl start kubelet

  4. systemctl status kubelet

Note: if kubelet.kubeconfig is not generated automatically under /opt/kubernetes/conf/, you can rename the master's $HOME/.kube/config to kubelet.kubeconfig and copy it into /opt/kubernetes/conf/ on each node.
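
A sketch of that workaround, run on the master (it assumes kubectl was configured in step 16, so $HOME/.kube/config exists):

cp $HOME/.kube/config /opt/kubernetes/conf/kubelet.kubeconfig
scp /opt/kubernetes/conf/kubelet.kubeconfig root@192.168.168.3:/opt/kubernetes/conf/
scp /opt/kubernetes/conf/kubelet.kubeconfig root@192.168.168.4:/opt/kubernetes/conf/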

6) View the CSR requests (run on k8s-master)

 
  1. kubectl get csr

  2. NAME AGE REQUESTOR CONDITION

  3. node-csr-0Vg_d__0vYzmrMn7o2S7jsek4xuQJ2v_YuCKwWN9n7M 4h kubelet-bootstrap Pending

7) Approve the kubelet TLS certificate requests (run on k8s-master)

 
  1. # kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve

  2. certificatesigningrequest.certificates.k8s.io "node-csr-0Vg_d__0vYzmrMn7o2S7jsek4xuQJ2v_YuCKwWN9n7M" approved

8) Check node status; Ready means everything is working (run on k8s-master)

 
  1. # kubectl get node

  2. NAME         STATUS    ROLES     AGE       VERSION

  3. work-node01   Ready     <none>    11h       v1.10.4

  4. work-node02   Ready     <none>    11h       v1.10.4

9) Create the kube-proxy working directory (on every node)

mkdir /var/lib/kube-proxy

10) Create the kube-proxy systemd unit

 
  1. # vi /usr/lib/systemd/system/kube-proxy.service

  2. [Unit]

  3. Description=Kubernetes Kube-Proxy Server

  4. Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  5. After=network.target

  6.  
  7. [Service]

  8. WorkingDirectory=/var/lib/kube-proxy

  9. EnvironmentFile=/opt/kubernetes/conf/kube.conf

  10. EnvironmentFile=/opt/kubernetes/conf/kube-proxy.conf

  11. ExecStart=/opt/kubernetes/bin/kube-proxy \

  12.             $KUBE_LOGTOSTDERR \

  13.             $KUBE_LOG_LEVEL \

  14.             $KUBE_MASTER \

  15.             $KUBE_PROXY_ARGS

  16. Restart=on-failure

  17. RestartSec=5

  18. LimitNOFILE=65536

  19.  
  20. [Install]

  21. WantedBy=multi-user.target

Distribute kube-proxy.service to the other node

scp /usr/lib/systemd/system/kube-proxy.service root@192.168.168.4:/usr/lib/systemd/system/

11) Create the kube-proxy config file and start it

 
  1. # vi /opt/kubernetes/conf/kube-proxy.conf

  2. ###

  3. # kubernetes proxy config

  4.  
  5. # default config should be adequate

  6.  
  7. # Add your own!

  8. KUBE_PROXY_ARGS="--bind-address=192.168.168.3 --hostname-override=work-node01 --kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig --masquerade-all --feature-gates=SupportIPVSProxyMode=true --proxy-mode=ipvs --ipvs-min-sync-period=5s --ipvs-sync-period=5s --ipvs-scheduler=rr --log-dir=/var/log/kubernetes/kube-proxy"

Note: set bind-address to each node's own IP and hostname-override to each node's hostname.

Distribute kube-proxy.conf to the other node

scp /opt/kubernetes/conf/kube-proxy.conf root@192.168.168.4:/opt/kubernetes/conf/
 
  1. # systemctl daemon-reload

  2. # systemctl enable kube-proxy

  3. # systemctl start kube-proxy

  4. # systemctl status kube-proxy  

12) Check the LVS (IPVS) state

 
  1. # ipvsadm -L -n

  2. IP Virtual Server version 1.2.1 (size=4096)

  3. Prot LocalAddress:Port Scheduler Flags

  4. -> RemoteAddress:Port Forward Weight ActiveConn InActConn

  5. TCP 10.1.0.1:443 rr persistent 10800

  6. -> 192.168.168.2:6443 Masq 1 0 0

5. Configure the calico network

1) Download the calico components

Run on the master node:

 
  1. # wget -N -P /usr/bin/ https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl

  2. # chmod +x /usr/bin/calicoctl

  3. # mkdir -p /etc/calico/yaml

  4. # docker pull quay.io/calico/node:v3.1.3

Run on each node:

 
  1. # wget -N -P /usr/bin/ https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl

  2. # chmod +x /usr/bin/calicoctl

  3. # mkdir -p /etc/calico/conf

  4. # mkdir -p /opt/cni/bin

  5. # wget -N -P /opt/cni/bin https://github.com/projectcalico/cni-plugin/releases/download/v3.1.3/calico

  6. # wget -N -P /opt/cni/bin https://github.com/projectcalico/cni-plugin/releases/download/v3.1.3/calico-ipam

  7. # chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam

  8. # docker pull quay.io/calico/node:v3.1.3

Check the calico docker image

 
  1. # docker images

  2. REPOSITORY TAG IMAGE ID CREATED SIZE

  3. quay.io/calico/node v3.1.3 7eca10056c8e 6 weeks ago 248MB

2) Create the calicoctl/etcd configuration file

 
  1. # vi /etc/calico/calicoctl.cfg

  2. apiVersion: projectcalico.org/v3

  3. kind: CalicoAPIConfig

  4. metadata:

  5. spec:

  6.   datastoreType: "etcdv3"

  7.   etcdEndpoints: "https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379"

  8.   etcdKeyFile: "/opt/kubernetes/ssl/etcd-key.pem"

  9.   etcdCertFile: "/opt/kubernetes/ssl/etcd.pem"

  10.   etcdCACertFile: "/opt/kubernetes/ssl/ca.pem"

3) Fetch calico.yaml (on the master)

 
  1. wget -N -P /etc/calico/yaml https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml

  2. wget -N -P /etc/calico/yaml https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml

Note: the version used here matches the calico docker image version.

Edit calico.yaml

 
  1. # replace the etcd endpoints

  2. sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379\"@gi' calico.yaml

  3.  
  4. # replace the etcd certificates

  5. export ETCD_CERT=`cat /opt/kubernetes/ssl/etcd.pem | base64 | tr -d '\n'`

  6. export ETCD_KEY=`cat /opt/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n'`

  7. export ETCD_CA=`cat /opt/kubernetes/ssl/ca.pem | base64 | tr -d '\n'`

  8.  
  9. sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml

  10. sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml

  11. sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

  12.  
  13. sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml

  14. sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml

  15. sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml

  16.  
  17. # the settings above are required; the file also has a parameter for the pod network CIDR, adjust it to your environment:

  18. - name: CALICO_IPV4POOL_CIDR

  19. value: "10.100.0.0/16"

Apply the calico resources

 
  1. kubectl apply -f /etc/calico/yaml/rbac.yaml

  2. kubectl create -f /etc/calico/yaml/calico.yaml

4) Create the calico config file (on each node)

 
  1. # vi /etc/calico/conf/calico.conf

  2. CALICO_NODENAME="work-node01"

  3. ETCD_ENDPOINTS=https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379

  4. ETCD_CA_CERT_FILE="/opt/kubernetes/ssl/ca.pem"

  5. ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

  6. ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  7. CALICO_IP="192.168.168.3"

  8. CALICO_IP6=""

  9. CALICO_NETWORKING_BACKEND=bird

  10. CALICO_AS="65412"

  11. CALICO_NO_DEFAULT_POOLS=""

  12. CALICO_LIBNETWORK_ENABLED=true

  13. CALICO_IPV4POOL_IPIP="Always"

  14. IP_AUTODETECTION_METHOD=interface=ens34

  15. IP6_AUTODETECTION_METHOD=interface=ens34

Note: CALICO_NODENAME is each node's hostname, CALICO_IP is each node's interface IP, and IP_AUTODETECTION_METHOD / IP6_AUTODETECTION_METHOD name each node's outward-facing interface.

Distribute calico.conf to the other node

scp /etc/calico/conf/calico.conf root@192.168.168.4:/etc/calico/conf/

5) Create the calico-node systemd unit

 
  1. # vi /usr/lib/systemd/system/calico-node.service

  2. [Unit]

  3. Description=calico-node

  4. After=docker.service

  5. Requires=docker.service

  6.  
  7. [Service]

  8. User=root

  9. PermissionsStartOnly=true

  10. EnvironmentFile=/etc/calico/conf/calico.conf

  11. ExecStart=/usr/bin/docker run --net=host --privileged --name=calico-node \

  12.  -e NODENAME=${CALICO_NODENAME} \

  13.  -e ETCD_ENDPOINTS=${ETCD_ENDPOINTS} \

  14.  -e ETCD_CA_CERT_FILE=${ETCD_CA_CERT_FILE} \

  15.  -e ETCD_CERT_FILE=${ETCD_CERT_FILE} \

  16.  -e ETCD_KEY_FILE=${ETCD_KEY_FILE} \

  17.  -e IP=${CALICO_IP} \

  18.  -e IP6=${CALICO_IP6} \

  19.  -e CALICO_NETWORKING_BACKEND=${CALICO_NETWORKING_BACKEND} \

  20.  -e AS=${CALICO_AS} \

  21.  -e NO_DEFAULT_POOLS=${CALICO_NO_DEFAULT_POOLS} \

  22.  -e CALICO_LIBNETWORK_ENABLED=${CALICO_LIBNETWORK_ENABLED} \

  23.  -e CALICO_IPV4POOL_IPIP=${CALICO_IPV4POOL_IPIP} \

  24.  -e IP_AUTODETECTION_METHOD=${IP_AUTODETECTION_METHOD} \

  25.  -e IP6_AUTODETECTION_METHOD=${IP6_AUTODETECTION_METHOD} \

  26.  -v /opt/kubernetes/ssl:/opt/kubernetes/ssl \

  27.  -v /var/log/calico:/var/log/calico \

  28.  -v /run/docker/plugins:/run/docker/plugins \

  29.  -v /lib/modules:/lib/modules \

  30.  -v /var/run/calico:/var/run/calico \

  31.  quay.io/calico/node:v3.1.3

  32.  
  33. ExecStop=/usr/bin/docker rm -f calico-node

  34. Restart=always

  35. RestartSec=10

  36.  
  37. [Install]

  38. WantedBy=multi-user.target

mkdir /var/log/calico     //do the same on every node

Note: NODENAME is each host's hostname and IP is that host's outward-facing interface IP.

Distribute calico-node.service to the other node

scp /usr/lib/systemd/system/calico-node.service root@192.168.168.4:/usr/lib/systemd/system/
 
  1. # systemctl daemon-reload

  2. # systemctl enable calico-node

  3. # systemctl start calico-node

  4. # systemctl status calico-node  

6) Start calico-node as a Docker container on each host

k8s-master:

calicoctl node run --node-image=quay.io/calico/node:v3.1.3 --ip=192.168.168.2

work-node01/node02:

calicoctl node run --node-image=quay.io/calico/node:v3.1.3 --ip=192.168.168.3
calicoctl node run --node-image=quay.io/calico/node:v3.1.3 --ip=192.168.168.4

Check that calico-node started

 
  1. # docker ps

  2. CONTAINER ID   IMAGE           COMMAND     CREATED       STATUS      PORTS      NAMES

  3. a1981d2dae6d   quay.io/calico/node:v3.1.3  "start_runit" 17 seconds ago   Up 17 seconds           calico-node

7) View calico node information

 
  1. # calicoctl get node -o wide

  2. NAME         ASN         IPV4               IPV6   

  3. k8s-master   (unknown)   192.168.168.2/32          

  4. work-node01   65412      192.168.168.3/24          

  5. work-node02   65412      192.168.168.4/24          

8) View BGP peer status

 
  1. # calicoctl node status

  2. Calico process is running.

  3.  
  4. IPv4 BGP status

  5. +---------------+-------------------+-------+----------+-------------+

  6. | PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |

  7. +---------------+-------------------+-------+----------+-------------+

  8. | 192.168.168.2 | node-to-node mesh | up    | 07:41:40 | Established |

  9. | 192.168.168.3 | node-to-node mesh | up    | 07:41:43 | Established |

  10. +---------------+-------------------+-------+----------+-------------+

  11.  
  12. IPv6 BGP status

  13. No IPv6 peers found.

6. Deploy the Kubernetes Dashboard (on the master)

1) Create the service account and role binding required by the dashboard

 
# vi /opt/kubernetes/yaml/dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

kubectl create -f /opt/kubernetes/yaml/dashboard-rbac.yaml

2) Create the dashboard Deployment

 
# vi /opt/kubernetes/yaml/dashboard-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

kubectl create -f /opt/kubernetes/yaml/dashboard-deployment.yaml

3) Create a Service to expose the dashboard

 
# vi /opt/kubernetes/yaml/dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

kubectl create -f /opt/kubernetes/yaml/dashboard-service.yaml

4) Check the status

Check the Service:

 
  1. # kubectl get svc -n kube-system

  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

  3. kubernetes-dashboard NodePort 10.1.193.97 <none> 80:31988/TCP 59m

Note: in PORT(S), 31988 is the NodePort on which the dashboard is reachable.
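
A quick reachability check from any machine that can reach the nodes (a sketch; 31988 is the NodePort shown above and will differ on your cluster):

curl -I http://192.168.168.3:31988
# an HTTP 200 response means the Service, the kube-proxy rules and the dashboard Pod are all working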

Check the pods:

 
  1. # kubectl get pods -n kube-system

  2. NAME READY STATUS RESTARTS AGE

  3. calico-kube-controllers-98989846-bgdkv 1/1 Running 2 1h

  4. calico-node-2x8m6 2/2 Running 6 1h

  5. calico-node-hrgg9 2/2 Running 5 1h

  6. kubernetes-dashboard-b9f5f9d87-stz4s 1/1 Running 1 1h

Check everything in the kube-system namespace:

 
  1. # kubectl get all -n kube-system

  2. NAME READY STATUS RESTARTS AGE

  3. pod/calico-kube-controllers-98989846-bgdkv 1/1 Running 2 1h

  4. pod/calico-node-2x8m6 2/2 Running 6 1h

  5. pod/calico-node-hrgg9 2/2 Running 5 1h

  6. pod/kubernetes-dashboard-b9f5f9d87-stz4s 1/1 Running 1 1h

  7.  
  8. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

  9. service/kubernetes-dashboard NodePort 10.1.193.97 <none> 80:31988/TCP 1h

  10.  
  11. NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE

  12. daemonset.apps/calico-node 2 2 2 2 2 2h

  13.  
  14. NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

  15. deployment.apps/calico-kube-controllers 1 1 1 1 2h

  16. deployment.apps/kubernetes-dashboard 1 1 1 1 1h

  17.  
  18. NAME DESIRED CURRENT READY AGE

  19. replicaset.apps/calico-kube-controllers-98989846 1 1 1 1h

  20. replicaset.apps/kubernetes-dashboard-b9f5f9d87 1 1 1 1h

5) Access the dashboard through a node IP (http://192.168.168.3:31988)
