Kubernetes Single-Node Installation


I. Environment Preparation
II. Kubernetes Install
Master Configuration
1. Install the CFSSL tools
2. Generate etcd certificates
3. Install and start etcd
4. Install Docker
5. Install Kubernetes
6. Generate and distribute Kubernetes certificates
7. Service configuration
8. Install the node components on the master
Node Configuration
1. Install Docker
2. Distribute certificates
3. Node configuration
4. Create the nginx proxy
5. Certificate approval
Calico
1. Introduction to Calico
2. Install and configure Calico
DNS
Deploy DNS horizontal autoscaling


Tip: to reuse this guide in your own environment, only the IP addresses need to be changed; do not delete anything else.

I. Environment Preparation

In this guide Kubernetes is installed as a single-node deployment rather than as a multi-master cluster.

The environment preparation steps must be performed on both the master and the node.

The environment is as follows:

IP              Hostname  Role     Services
192.168.30.147  master    master   etcd, kube-apiserver, kube-controller-manager, kube-scheduler (if the master will not also run as a node, the following are not needed: docker, kubelet, kube-proxy, calico)
192.168.30.148  node      node     docker, kubelet, kube-proxy, nginx (the node components on the master can do without nginx)

Kubernetes component version: v1.11
Docker version: v17.03
etcd version: v3.2.22
Calico version: v3.1.3
DNS version: 1.14.7


Kubernetes version

This guide uses v1.11.

Check the OS and kernel version

➜ cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

➜ uname -a
3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

#We will upgrade the kernel below

Tip: the following operations must be executed on both servers.

Set the hostname and the /etc/hosts entries

    192.168.30.148 node1
    192.168.30.147 master
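
For example, set each machine's hostname and append the two entries above to /etc/hosts on both machines (a sketch; the hostnames must match the entries):

hostnamectl set-hostname master   # on 192.168.30.147
hostnamectl set-hostname node1    # on 192.168.30.148
echo "192.168.30.148 node1"  >> /etc/hosts
echo "192.168.30.147 master" >> /etc/hosts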

Set up passwordless SSH trust between the master and the node
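
For example, generate a key on the master and copy it to the node (this assumes root SSH access; repeat from the node if trust is needed in both directions):

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@192.168.30.148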


Set up time synchronization

yum -y install ntp
 systemctl enable ntpd
 systemctl start ntpd
 ntpdate -u cn.pool.ntp.org
 hwclock --systohc
 timedatectl set-timezone Asia/Shanghai

Disable the swap partition

➜ swapoff -a      #temporarily disable swap
➜ vim /etc/fstab  #permanently disable swap
# swap was on /dev/sda11 during installation
UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none  swap    sw      0       0
#Comment out the swap entry and you are done
#If you skip this step, kubelet will fail to start
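
The fstab edit can also be done non-interactively; a sketch, double-check the result afterwards:

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
grep swap /etc/fstab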

Configure the Yum repositories

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
yum install wget vim lsof net-tools lrzsz -y

Disable the firewall and SELinux

 systemctl stop firewalld
 systemctl disable firewalld
 setenforce 0
 sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config


Upgrade the kernel

Don't ask why: the stock 3.10 kernel causes problems for Docker and Kubernetes, so we move to a newer mainline kernel.

yum update
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y &&
sed -i s/saved/0/g /etc/default/grub &&
grub2-mkconfig -o /boot/grub2/grub.cfg && reboot

#The new kernel does not take effect until you reboot!

(If the upgrade fails, search for "Kubernetes 升级内核失败" for troubleshooting notes.)

Check the kernel version

➜ uname -a
Linux master 4.17.6-1.el7.elrepo.x86_64 #1 SMP Wed Jul 11 17:24:30 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux


Set kernel parameters

echo "* soft nofile 190000" >>     /etc/security/limits.conf
echo "* hard nofile 200000" >>     /etc/security/limits.conf
echo "* soft nproc 252144" >>     /etc/security/limits.conf
echo "* hadr nproc 262144" >>     /etc/security/limits.conf
tee /etc/sysctl.conf <<-'EOF'
# System default settings live in     /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new     settings here, or in an     /etc/sysctl.d/.conf file
#
# For more information, see sysctl.conf(5)     and sysctl.d(5).

net.ipv4.tcp_tw_recycle = 0
net.ipv4.ip_local_port_range = 10000 61000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_forward = 1
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_mem = 131072  262144  524288
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_low_latency = 0
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_mem = 131072  262144  524288
net.ipv4.tcp_rmem = 8760  256960  4088000
net.ipv4.tcp_wmem = 8760  256960  4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_syn_retries = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
echo "options nf_conntrack hashsize=819200"     >> /etc/modprobe.d/mlx4.conf 
modprobe br_netfilter
sysctl -p





II. Kubernetes Install

Master Configuration

1. Install the CFSSL tools

Notes on the certificate types:

client certificate: used by a server to authenticate its clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client

server certificate: used by the server and verified by clients to establish the server's identity, e.g. the docker daemon, kube-apiserver

peer certificate: a dual-purpose certificate used for communication between etcd cluster members


Install the CFSSL tools

➜ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/bin/cfssl

➜ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/bin/cfssljson

➜ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Generate etcd certificates

etcd is the primary datastore of the Kubernetes cluster, so it must be installed and started before the Kubernetes services.

Create the CA certificate

#Create a directory for generating the etcd certificates; please keep the paths consistent with this guide

➜ mkdir /root/etcd_ssl && cd /root/etcd_ssl

cat > etcd-root-ca-csr.json << EOF
{
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "beijing",
      "ST": "beijing",
      "C": "CN"
    }
  ],
  "CN": "etcd-root-ca"
}
EOF

etcd signing configuration (etcd-gencert.json)

cat >  etcd-gencert.json << EOF  
{                                 
  "signing": {                    
    "default": {                  
      "expiry": "87600h"           
    },                            
    "profiles": {                 
      "etcd": {             
        "usages": [               
            "signing",            
            "key encipherment",   
            "server auth", 
            "client auth"  
        ],  
        "expiry": "87600h"  
      }  
    }  
  }  
}  
EOF

The expiry is set to 87600h (10 years).

ca-config.json: can define multiple profiles with different expiry times, usage scenarios and other parameters; a specific profile is selected later when signing certificates;

signing: indicates the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;

server auth: the client can use this CA to verify certificates presented by servers;

client auth: the server can use this CA to verify certificates presented by clients;

etcd certificate signing request

cat > etcd-csr.json << EOF
{
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "beijing",
      "ST": "beijing",
      "C": "CN"
    }
  ],
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.30.147"
  ]
}
EOF

# The hosts field must list the master's address

Generate the root CA

cfssl gencert --initca=true etcd-root-ca-csr.json \
| cfssljson --bare etcd-root-ca

Generate the etcd certificate

cfssl gencert --ca etcd-root-ca.pem \
--ca-key etcd-root-ca-key.pem \
--config etcd-gencert.json \
--profile=etcd etcd-csr.json | cfssljson --bare etcd

The etcd certificates are now as follows:

➜ ll
total 36
-rw-r--r-- 1 root root 1765 Jul 12 10:48 etcd.csr
-rw-r--r-- 1 root root  282 Jul 12 10:48 etcd-csr.json
-rw-r--r-- 1 root root  471 Jul 12 10:48 etcd-gencert.json
-rw------- 1 root root 3243 Jul 12 10:48 etcd-key.pem
-rw-r--r-- 1 root root 2151 Jul 12 10:48 etcd.pem
-rw-r--r-- 1 root root 1708 Jul 12 10:48 etcd-root-ca.csr
-rw-r--r-- 1 root root  218 Jul 12 10:48 etcd-root-ca-csr.json
-rw------- 1 root root 3243 Jul 12 10:48 etcd-root-ca-key.pem
-rw-r--r-- 1 root root 2078 Jul 12 10:48 etcd-root-ca.pem

3. Install and start etcd

Only the kube-apiserver and the Controller Manager need to connect to etcd.

yum install etcd -y    # or upload the rpm package and install it with rpm -ivh

Distribute the etcd certificates
 ➜ mkdir -p /etc/etcd/ssl && cd /root/etcd_ssl

Check the etcd certificates

➜ ll /root/etcd_ssl/
total 36
-rw-r--r--. 1 root root 1765 Jul 20 10:46 etcd.csr
-rw-r--r--. 1 root root  282 Jul 20 10:42 etcd-csr.json
-rw-r--r--. 1 root root  471 Jul 20 10:40 etcd-gencert.json
-rw-------. 1 root root 3243 Jul 20 10:46 etcd-key.pem
-rw-r--r--. 1 root root 2151 Jul 20 10:46 etcd.pem
-rw-r--r--. 1 root root 1708 Jul 20 10:46 etcd-root-ca.csr
-rw-r--r--. 1 root root  218 Jul 20 10:40 etcd-root-ca-csr.json
-rw-------. 1 root root 3243 Jul 20 10:46 etcd-root-ca-key.pem
-rw-r--r--. 1 root root 2078 Jul 20 10:46 etcd-root-ca.pem


Copy the certificates into place

mkdir -p /etc/etcd/ssl
\cp *.pem /etc/etcd/ssl/
chown -R etcd:etcd /etc/etcd/ssl
chown -R etcd:etcd /var/lib/etcd
chmod -R 644 /etc/etcd/ssl/
chmod 755 /etc/etcd/ssl/


Edit the etcd configuration on the master

➜ cp /etc/etcd/etcd.conf{,.bak} && >/etc/etcd/etcd.conf

Write the new /etc/etcd/etcd.conf as sketched below.

###You need to replace 192.168.30.147 with your master's address
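
A minimal single-node etcd.conf consistent with the certificate paths and the 2379/2380 listeners used in this guide (the member name, data directory and cluster token are illustrative defaults):

cat > /etc/etcd/etcd.conf << EOF
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.30.147:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.147:2379,https://127.0.0.1:2379"
# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.147:2380"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.30.147:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.147:2379"
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
EOF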

Start etcd

systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd

Test that etcd is working

export ETCDCTL_API=3
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.30.147:2379 endpoint health

A healthy response looks like:

[root@master ~]# export ETCDCTL_API=3
[root@master ~]# etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem     --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.30.147:2379  endpoint health
https://192.168.30.147:2379 is healthy: successfully committed proposal: took = 643.432µs

Check the etcd ports (2379/2380)

➜ netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address         State           PID/Program name    
tcp        0      0 192.168.30.147:2379          0.0.0.0:*               LISTEN          2016/etcd           
tcp        0      0 127.0.0.1:2379              0.0.0.0:*               LISTEN          2016/etcd           
tcp        0      0 192.168.30.147:2380          0.0.0.0:*               LISTEN          2016/etcd           
tcp        0      0 0.0.0.0:22                  0.0.0.0:*               LISTEN      965/sshd                
tcp        0      0 127.0.0.1:25                0.0.0.0:*               LISTEN          1081/master         
tcp6       0      0 :::22                       :::*                    LISTEN      965/sshd                
tcp6       0      0 ::1:25                      :::*                    LISTEN          1081/master         
udp        0      0 127.0.0.1:323               0.0.0.0:*                               721/chronyd         
udp6       0      0 ::1:323                     :::*                                    721/chronyd 
##### etcd is now installed and configured

4. Install Docker

#!/bin/bash
export docker_version=17.03.2
yum install -y yum-utils  device-mapper-persistent-data lvm2  bash-completion
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache all
version=$(yum list docker-ce.x86_64 --showduplicates | sort -r|grep  ${docker_version}|awk '{print $2}')
yum -y install --setopt=obsoletes=0 docker-ce-${version}  docker-ce-selinux-${version}


Because downloads from the official mirrors often time out, the packages have been uploaded; you can simply use the provided installation packages instead.

Docker and Kubernetes package download (password: 1zov)

Install and adjust the configuration

Enable Docker at boot and start it

systemctl enable docker 
systemctl start docker 

Patch the Docker systemd unit
sed -i '/ExecStart=\/usr\/bin\/dockerd/i\ExecStartPost=\/sbin/iptables -I FORWARD -s 0.0.0.0\/0 -d 0.0.0.0\/0 -j ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/dockerd/s/$/\-\-storage\-driver\=overlay2/g' /usr/lib/systemd/system/docker.service

Restart Docker
systemctl daemon-reload 
systemctl restart docker

If an older version was previously installed, remove it first and then install the new one

yum remove docker \
docker-common \
docker-selinux \
docker-engine

5. Install Kubernetes

How to download Kubernetes

The kubernetes.tar.gz archive contains the Kubernetes service binaries, documentation and examples; kubernetes-src.tar.gz contains the full source code. You can also just download kubernetes-server-linux-amd64.tar.gz from the Server Binaries section, which contains all of the service binaries Kubernetes needs to run.

Kubernetes download: https://github.com/kubernetes/kubernetes/releases
Docker and Kubernetes package download (password: 1zov)

Kubernetes setup

tar xf kubernetes-server-linux-amd64.tar.gz
for i in hyperkube kube-apiserver kube-scheduler kubelet kube-controller-manager kubectl kube-proxy;do
cp ./kubernetes/server/bin/$i /usr/bin/
chmod 755 /usr/bin/$i
done
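
A quick sanity check that the binaries are in place (the output should report v1.11.x):

➜ kubectl version --client --short
➜ kubelet --version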


6. Generate and distribute Kubernetes certificates

Create a directory for the certificates

mkdir /root/kubernets_ssl && cd /root/kubernets_ssl 

k8s-root-ca-csr.json (root CA signing request)

cat > k8s-root-ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

k8s-gencert.json (signing configuration)

cat >  k8s-gencert.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

kubernetes-csr.json (API server certificate signing request)

# In the hosts field, list every node IP you intend to use (the master here). Create the kubernetes certificate signing request file kubernetes-csr.json:

cat >kubernetes-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "192.168.30.147",
        "localhost",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.loca    l"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

kube-proxy-csr.json (kube-proxy certificate signing request)

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

admin-csr.json (admin certificate signing request)

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Generate the Kubernetes certificates

➜ cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca

➜ for targetName in kubernetes admin kube-proxy; do
    cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done

#Generate the bootstrap configuration
#Note: everywhere KUBE_APISERVER and BOOTSTRAP_TOKEN appear in the configuration below, write the literal values rather than the variable names

export KUBE_APISERVER="https://192.168.30.147:6443"
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "Token: ${BOOTSTRAP_TOKEN}"

With the token generated, create token.csv, generate bootstrap.kubeconfig and kube-proxy.kubeconfig with kubectl config, and write the audit policy audit-policy.yaml. Then fill in the component configuration files under /etc/kubernetes: config, apiserver, controller-manager and scheduler for step 7 (service configuration), plus kubelet and proxy for step 8 (running the node components on the master, which mirrors the node-side files shown in the Node Configuration section below). After writing the files, start kube-apiserver, kube-controller-manager and kube-scheduler and enable them at boot.

Node Configuration
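
The bootstrap pieces follow the standard TLS-bootstrapping layout. A sketch of token.csv and the two kubeconfig files, run from /root/kubernets_ssl with the certificates generated above (everything beyond the documented token.csv format is illustrative):

# token.csv: token,user,uid,"group"
cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# bootstrap.kubeconfig, used by kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig, using the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the resulting *.kubeconfig files, token.csv, audit-policy.yaml and the certificates into /etc/kubernetes and /etc/kubernetes/ssl on the master; the distribution loop in the next section copies the same files to the node.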
1. Install Docker
Same as on the master.

2. Distribute certificates
We need to copy the Kubernetes and etcd certificates from the master to the node.
Although no etcd runs on the node, network components such as calico or flannel need to reach etcd, so the etcd certificates are required there as well.

Copy hyperkube, kubelet, kubectl and kube-proxy from the master to the node:

for i in hyperkube kubelet kubectl kube-proxy;do
scp ./kubernetes/server/bin/$i 192.168.30.148:/usr/bin/
ssh 192.168.30.148 chmod 755 /usr/bin/$i
done

##The IP here is the node's IP

Distribute the Kubernetes certificates
cd into the Kubernetes certificate directory:

cd /root/kubernets_ssl/
for IP in 192.168.30.148;do
    ssh $IP mkdir -p /etc/kubernetes/ssl
    scp *.pem $IP:/etc/kubernetes/ssl
    scp *.kubeconfig token.csv audit-policy.yaml $IP:/etc/kubernetes
    ssh $IP useradd -s /sbin/nologin kube
    ssh $IP chown -R kube:kube /etc/kubernetes/ssl
done
Distribute the etcd certificates

for IP in 192.168.30.148;do
    cd /root/etcd_ssl
    ssh $IP mkdir -p /etc/etcd/ssl
    scp *.pem $IP:/etc/etcd/ssl
    ssh $IP chmod -R 644 /etc/etcd/ssl/*
    ssh $IP chmod 755 /etc/etcd/ssl
done
Set file permissions on the node

ssh root@192.168.30.148 mkdir -p /var/log/kube-audit /usr/libexec/kubernetes &&
ssh root@192.168.30.148 chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes &&
ssh root@192.168.30.148 chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes
3. Node configuration
On the node, the configuration files also live in /etc/kubernetes.
The node only needs the config, kubelet and proxy configuration files, modified as follows.

#config: common configuration

Note: the config file (and the kubelet and proxy files below) does not define the API Server address, because kubelet and kube-proxy are started with the --require-kubeconfig option, which makes them read the API Server address from the *.kubeconfig files and ignore whatever the config file sets;
so any address set in the config file is effectively unused.
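
For reference, a sketch of the three node-side files, consistent with the cluster DNS address (10.254.0.2), cluster domain (cluster.local) and pause image used elsewhere in this guide. The exact variable names depend on the systemd unit files you use and are illustrative:

# /etc/kubernetes/config -- common flags for all components
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"

# /etc/kubernetes/kubelet -- for the node role on the master, use the master's IP and hostname instead
KUBELET_ADDRESS="--address=192.168.30.148"
KUBELET_HOSTNAME="--hostname-override=node"
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-dns=10.254.0.2 \
              --cluster-domain=cluster.local \
              --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

# /etc/kubernetes/proxy -- kube-proxy reaches the API server via its kubeconfig (127.0.0.1:6443)
KUBE_PROXY_ARGS="--bind-address=192.168.30.148 \
                 --hostname-override=node \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=172.16.0.0/16"   # pod CIDR, matches the Calico pool configured later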

cat > /etc/kubernetes/config </etc/kubernetes/kubelet </etc/kubernetes/proxy < /etc/nginx/nginx.conf </etc/systemd/system/nginx-proxy.service <master:sun-sr-https (ESTABLISHED)
nginx   1925     root    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
lsof: no pwd entry for UID 100
nginx   1934      100    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
lsof: no pwd entry for UID 100
nginx   1935      100    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
Start kubelet and kube-proxy
It is best to restart kube-proxy before starting kubelet.

systemctl restart kube-proxy
systemctl enable kube-proxy

systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet
Remember to check the kubelet status!

5. Certificate approval
Because TLS bootstrapping is used, kubelet does not join the cluster immediately after starting; instead it first submits a certificate signing request.

At this point you only need to approve the request on the master.
# List the CSRs

➜  kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-l9d25   2m        kubelet-bootstrap   Pending

If kubelet has been configured and started on both machines, two CSRs will appear here: one for the master and one for the node.
# Approve the certificate

kubectl certificate approve csr-l9d25  
#csr-l9d25 is the name of the CSR

Or approve everything that is pending: kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

# List the nodes
Once the certificates have been issued:

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     <none>    40m       v1.11.0
node      Ready     <none>    39m       v1.11.0
After approval, the kubelet kubeconfig file and its key pair are generated automatically:

$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2280 Nov  7 10:26 /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Nov  7 10:26 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root  227 Nov  7 10:22 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1115 Nov  7 10:16 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Nov  7 10:16 /etc/kubernetes/ssl/kubelet.key
#Notes:
If the apiserver is not running, nothing that follows will work.
The IP addresses configured in kubelet are always those of the local machine (on the master use the master's, on the node the node's).
On the node, start nginx-proxy before starting kube-proxy; the 127.0.0.1:6443 address configured in kube-proxy is really master:6443 behind the proxy.

Calico

1. Introduction to Calico

Calico is an interesting virtual-networking solution: it builds the network entirely from routing rules and announces routes over the BGP protocol.
Its advantage is that the network formed by the endpoints is a plain layer-3 network; packet flow is governed entirely by routing rules, with no overlay overhead.
Calico endpoints can migrate between hosts, and ACLs are implemented.
Its drawbacks: the number of routes equals the number of containers, which can easily exceed what routers, layer-3 switches, or even the nodes themselves can handle, limiting how far the network can grow.
Each node carries a very large number of iptables rules and routes, which makes operations and troubleshooting difficult.
By design Calico cannot support VPCs; containers can only obtain IPs from the address pools Calico manages.
Calico currently has no traffic-control capability, so a few containers may monopolize a node's bandwidth.
The network size is bounded by the scale of the BGP network.

Terminology

endpoint: a network interface attached to the Calico network
AS: an autonomous system, which exchanges routing information with other ASes via BGP
iBGP: a BGP speaker inside an AS that exchanges routes with the iBGP and eBGP speakers of the same AS
eBGP: a BGP speaker at the border of an AS that exchanges routes with the iBGP speakers of its own AS and the eBGP speakers of other ASes

workloadEndpoint: an endpoint used by a virtual machine or container
hostEndpoint: the address of a physical machine (node)

2. Install and configure Calico
Calico is now fairly simple to deploy; you only need to create a couple of yaml files.

# Fetch calico.yaml; we use version 3.1, since lower versions have bugs

wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
# Replace the etcd address; here the master IP is the etcd address

sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://192.168.30.147:2379\"@gi' calico.yaml
# Replace the etcd certificates
Modify the etcd-related settings; the main changes are listed below (the etcd certificate contents must be base64 encoded)

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`


sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml
# Set the Calico address pool; it must not overlap with the cluster (service) IP range or the host network

sed -i s/192.168.0.0/172.16.0.0/g calico.yaml
Then modify the kubelet configuration (covered below).


Perform the deployment; note that with RBAC enabled, the ClusterRole and ClusterRoleBinding must be created separately:
https://www.kubernetes.org.cn/1879.html
RoleBinding and ClusterRoleBinding:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding

##Tip: some images have to be pulled from Docker Hub / gcr.io; here you can import them from the provided archive instead
Image download (password: ibyt)
Import the images (required on both master and node)
pause.tar

Without the images imported, pod creation times out:

Events:
  Type     Reason                  Age               From               Message
  ----     ------                  ----              ----               -------
  Normal   Scheduled               51s               default-scheduler  Successfully assigned default/nginx-deployment-7c5b578d88-lckk2 to node
  Warning  FailedCreatePodSandBox  5s (x3 over 43s)  kubelet, node      Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection timed out
Tip: because the Calico images are hosted abroad, they have already been exported; import them with docker load. The output looks like:
Loading layer [==================================================>] 4.403 MB/4.403 MB
ddc4cb8dae60: Loading layer [==================================================>]  7.84 MB/7.84 MB
77087b8943a2: Loading layer [==================================================>] 249.3 kB/249.3 kB
c7227c83afaf: Loading layer [==================================================>] 4.801 MB/4.801 MB
2e0e333a66b6: Loading layer [==================================================>] 231.8 MB/231.8 MB
Loaded image: quay.io/calico/node:v3.1.3

The master should have the following images:
[root@master ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                    v3.1.3              7eca10056c8e        7 weeks ago         248 MB
quay.io/calico/kube-controllers        v3.1.3              240a82836573        7 weeks ago         55 MB
quay.io/calico/cni                     v3.1.3              9f355e076ea7        7 weeks ago         68.8 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        2 years ago         747 kB
[root@master ~]#


The node should have the following images:
[root@node ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                    v3.1.3              7eca10056c8e        7 weeks ago         248 MB
quay.io/calico/cni                     v3.1.3              9f355e076ea7        7 weeks ago         68.8 MB
nginx                                  1.13.5-alpine       ea7bef82810a        9 months ago        15.5 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        2 years ago         747 kB
Create the pods and RBAC objects

kubectl apply -f rbac.yaml 
kubectl create -f calico.yaml


The Calico documentation requires kubelet to run with the CNI network plugin (--network-plugin=cni), and kube-proxy
must not be started with --masquerade-all (it conflicts with Calico policy), so every kubelet and proxy configuration file must be modified.

#Modify every kubelet configuration and add the following runtime argument

vim /etc/kubernetes/kubelet
              --network-plugin=cni \
              
#Note: at this step it is best to restart the kubelet and docker services to avoid errors caused by stale configuration
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

After starting, check the pods:


[root@master ~]# kubectl get pod -o wide --namespace=kube-system
NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
calico-node-8977h                           2/2       Running   0          2m        192.168.60.25   node
calico-node-bl9mf                           2/2       Running   0          2m        192.168.60.24   master
calico-policy-controller-79bc74b848-7l6zb   1/1       Running   0          2m        192.168.60.24   master
Pod yaml reference: https://mritd.me/2017/07/31/calico-yml-bug/



calicoctl
Since calicoctl 1.0, everything calicoctl manages is a resource; the ip pools, profiles, policies and so on of earlier versions are all resources. Resources are defined in yaml or json, created or applied with calicoctl create/apply, and inspected with calicoctl get.

Download calicoctl

wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl
chmod +x calicoctl 
mv calicoctl /usr/bin/

#If you cannot download it, scroll up: it is also included in the Baidu cloud package
Check that calicoctl is installed correctly

[root@master yaml]# calicoctl version
Version:      v1.3.0
Build date:   
Git commit:   d2babb6
Configure the calicoctl datastore

[root@master ~]# mkdir -p /etc/calico/
#Edit the calicoctl configuration file


The default download is version 3.1; change the version number in the URL to get 2.6 instead.
The 2.6-style configuration is as follows:
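
A sketch matching the etcd endpoint and certificates used above. Note this is the older calicoApiConfig/etcdv2 format; calicoctl v3.x instead expects apiVersion: projectcalico.org/v3 with datastoreType etcdv3:

cat > /etc/calico/calicoctl.cfg << EOF
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "https://192.168.30.147:2379"
  etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem"
  etcdCertFile: "/etc/etcd/ssl/etcd.pem"
  etcdCACertFile: "/etc/etcd/ssl/etcd-root-ca.pem"
EOF

calicoctl get nodes should then list the cluster nodes if the datastore is reachable.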

Create a test Service and Deployment to verify the network.

cat > test.service.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31000
  type: NodePort
EOF


##The exposed NodePort is 31000

Edit the deploy file

cat > test.deploy.yaml << EOF
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.0-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF
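
Apply both files and check that the pods come up and the NodePort answers (a quick smoke test; the curl target uses the node's IP as an example):

kubectl create -f test.service.yaml -f test.deploy.yaml
kubectl get pods -o wide
kubectl get svc nginx-service
curl -I http://192.168.30.148:31000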


Kube-DNS Installation


kube-dns download:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.in

Download it manually and rename it (to kube-dns.yaml).


##Newer versions add quite a lot of new content; if you are worried about editing it incorrectly, use the package provided above. The file ties into the kubelet configuration, e.g. 10.254.0.2 and cluster.local

##It is recommended to use the yaml provided here


sed -i 's/$DNS_DOMAIN/cluster.local/gi' kube-dns.yaml
sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kube-dns.yaml

Import the images

docker load -i kube-dns.tar

##You can also skip importing the images; by default they are pulled from wherever the yaml points. If you use the imported images, make sure the yaml references exactly the same ones!
Create the pod

kubectl create -f kube-dns.yaml

#You need to adjust the image addresses in the yaml so they match the locally imported images
Check the pods

[root@master ~]# kubectl get pods --namespace=kube-system 
NAME                                      READY     STATUS    RESTARTS   AGE
calico-kube-controllers-b49d9b875-8bwz4   1/1       Running   0          3h
calico-node-5vnsh                         2/2       Running   0          3h
calico-node-d8gqr                         2/2       Running   0          3h
kube-dns-864b8bdc77-swfw5                 3/3       Running   0          2h
Verification

#Create a set of pods and a Service and check that pod-to-pod networking works

[root@master test]# cat demo.deploy.yml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: daocloud.io/library/tomcat:6.0-jre7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        
While we are at it, verify connectivity inside and outside the cluster:

kubectl get pod --all-namespaces -o wide
kubectl get svc -o wide

kubectl exec -it <pod-name> bash
Once inside the container:
curl -I <pod-ip>:8080
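
To check that kube-dns itself resolves service names, a throwaway busybox pod works well (a sketch; the image tag is just an example):

kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes.default
# the lookup should go through 10.254.0.2 and return the ClusterIP of the kubernetes service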

Deploy DNS horizontal autoscaling
Download from GitHub:
GitHub: https://github.com/kubernetes/kubernetes/tree/release-1.8/cluster/addons/dns-horizontal-autoscaler

What dns-horizontal-autoscaler-rbac.yaml does:
it simply creates three resources, a ServiceAccount, a ClusterRole and a ClusterRoleBinding; that is, it creates the account, creates the role with its permissions, and binds the account to the role.

Import the image first, otherwise pulling it is too slow

### needed on both node and master

[root@node ~]# docker load -i gcr.io_google_containers_cluster-proportional-autoscaler-amd64_1.1.2-r2.tar
3fb66f713c9f: Loading layer 4.221 MB/4.221 MB
a6851b15f08c: Loading layer 45.68 MB/45.68 MB
Loaded image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2


Check the image:
[root@master ~]# docker images|grep cluster
gcr.io/google_containers/cluster-proportional-autoscaler-amd64   1.1.2-r2            7d892ca550df        13 months ago       49.6 MB
Make sure the yaml references this image.

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

You also need to download an rbac file:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.in

 kubectl create -f dns-horizontal-autoscaler-rbac.yaml  
 kubectl create -f dns-horizontal-autoscaler.yaml 
## If you download it directly, the configuration needs to be adjusted
The autoscaler yaml file:

[root@master calico]# cat dns-horizontal-autoscaler.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

kind: ServiceAccount
apiVersion: v1
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
  - apiGroups: [""]
    resources: ["replicationcontrollers/scale"]
    verbs: ["get", "update"]
  - apiGroups: ["extensions"]
    resources: ["deployments/scale", "replicasets/scale"]
    verbs: ["get", "update"]
# Remove the configmaps rule once below issue is fixed:
# kubernetes-incubator/cluster-proportional-autoscaler#16
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: kube-dns-autoscaler
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-dns-autoscaler
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    k8s-app: kube-dns-autoscaler
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-dns-autoscaler
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      containers:
      - name: autoscaler
        image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2
        resources:
            requests:
                cpu: "20m"
                memory: "10Mi"
        command:
          - /cluster-proportional-autoscaler
          - --namespace=kube-system
          - --configmap=kube-dns-autoscaler
          # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base
          - --target=Deployment/kube-dns
          # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
          # If using small nodes, "nodesPerReplica" should dominate.
          - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
          - --logtostderr=true
          - --v=2
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      serviceAccountName: kube-dns-autoscaler
[root@master calico]#
