[kubernetes/k8s deployment] Manual binary deployment of Kubernetes

This article covers a manual binary deployment of Kubernetes 1.12. All executables live in /opt/k8s/bin.

  etcd: https://github.com/etcd-io/etcd/releases/download/v3.2.25/etcd-v3.2.25-linux-amd64.tar.gz 

          Into /opt/k8s/bin: etcd etcdctl

  kubectl: https://dl.k8s.io/v1.12.1/kubernetes-client-linux-amd64.tar.gz

  flanneld: https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

         Into /opt/k8s/bin: flanneld mk-docker-opts.sh

  kubernetes-server:  https://dl.k8s.io/v1.12.1/kubernetes-server-linux-amd64.tar.gz

         Into /opt/k8s/bin: kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
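For reference, a fetch-and-install sketch for the binaries (a minimal example; it assumes wget and tar are available and shows only the etcd tarball, the others follow the same pattern):

# Download and unpack etcd into /opt/k8s/bin
mkdir -p /opt/k8s/bin
wget https://github.com/etcd-io/etcd/releases/download/v3.2.25/etcd-v3.2.25-linux-amd64.tar.gz
tar xzf etcd-v3.2.25-linux-amd64.tar.gz
cp etcd-v3.2.25-linux-amd64/{etcd,etcdctl} /opt/k8s/bin/
chmod +x /opt/k8s/bin/*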

 

Tune kernel parameters for k8s (adjust to your environment)

cat > /etc/sysctl.d/kubernetes.conf <<EOF

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0    # avoid swap; allow it only when the system is out of memory

vm.overcommit_memory=1   # do not check whether enough physical memory is available

vm.panic_on_oom=0          # do not panic on OOM; let the OOM killer handle it

fs.inotify.max_user_instances=8192

fs.inotify.max_user_watches=1048576

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF

sysctl -p /etc/sysctl.d/kubernetes.conf
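Note: the net.bridge.* keys only exist after the br_netfilter kernel module is loaded, so load it before running sysctl -p; a minimal sketch:

# Load br_netfilter so the bridge-nf-call-* sysctls are available
modprobe br_netfilter
# Persist across reboots (systemd convention)
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf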

 

Preparation

   Configure /etc/hosts on all nodes. The benefit is that the master can be re-pointed at will without regenerating anything, since the kubectl config, kube-proxy config, and bootstrap config all reference the hostname. A slightly sneaky setup, but it works.

 10.10.15.70 master.node.local

 1.1 Token used for TLS Bootstrapping

Generate one with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="2ab38fcb2b77d7f15ce65db2dd612ab8"

 1.2 Create the CA certificate and key

       The Kubernetes components encrypt their communication with TLS certificates

  Install CFSSL

$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 

$ chmod +x cfssl_linux-amd64 

$ sudo mv cfssl_linux-amd64 /usr/bin/cfssl 

$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 

$ chmod +x cfssljson_linux-amd64 

$ sudo mv cfssljson_linux-amd64 /usr/bin/cfssljson 

$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 

$ chmod +x cfssl-certinfo_linux-amd64 

$ sudo mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo 

     Create the CA config ca-config.json

  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE
  • server auth: clients can use this CA to verify certificates presented by servers
  • client auth: servers can use this CA to verify certificates presented by clients
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

     Certificate signing request ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

    Generate the CA certificate and private key:

           $ cfssl gencert -initca ca-csr.json | cfssljson -bare ca

              cp ca* /etc/kubernetes/ssl, then copy them to every node
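The result can be sanity-checked with cfssl-certinfo (installed above); the decoded subject should show CN=kubernetes and O=k8s as defined in ca-csr.json:

# Decode and inspect the generated CA certificate
cfssl-certinfo -cert /etc/kubernetes/ssl/ca.pem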

 

  1.3 Configure the kubectl command line

      By default kubectl reads the kube-apiserver address, certificate, and user name from ~/.kube/config

    Create the admin certificate

      kubectl talks to kube-apiserver over the secure port, so a TLS certificate and key are needed for the connection

cat > admin-csr.json <<EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

(The JSON body was truncated in the original; the above follows the same pattern as the other CSRs in this article. O is system:masters, which the apiserver's RBAC maps to cluster-admin.)

     Generate the admin certificate and private key

        cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \

            -ca-key=/etc/kubernetes/ssl/ca-key.pem \

           -config=/etc/kubernetes/ssl/ca-config.json \

          -profile=kubernetes admin-csr.json | cfssljson -bare admin

       Copy admin*.pem to /etc/kubernetes/ssl on all nodes

     Create the kubectl kubeconfig file

Set the cluster parameters

    kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://master.node.local:6443

Set the client credentials

    kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem

Set the context

    kubectl config set-context kubernetes --cluster=kubernetes --user=admin

Set the default context

    kubectl config use-context kubernetes    
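A quick way to confirm the kubeconfig is wired up correctly once kube-apiserver is running (section 1.2 below):

# Show the generated config (keys and certs are redacted)
kubectl config view
# Confirm connectivity to the apiserver
kubectl cluster-info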

 

1. Master deployment

   The master consists of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler

1.1 etcd deployment

      Download: wget https://github.com/etcd-io/etcd/releases/download/v3.2.25/etcd-v3.2.25-linux-amd64.tar.gz

  1.1.1 Create the etcd TLS key and certificate

  • The hosts field lists the etcd nodes authorized to use this certificate, normally the node's own IP plus 127.0.0.1; it must cover --listen-client-urls
  • This node is 10.10.15.70; on the other two nodes change the IP accordingly
# cat etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.15.70"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

  1.1.2 Generate the etcd certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \

-ca-key=/etc/kubernetes/ssl/ca-key.pem \

-config=/etc/kubernetes/ssl/ca-config.json \

-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

       Copy the generated etcd*.pem to /etc/etcd/ssl; on the other two nodes change the IP and rerun cfssl to generate their certificates and keys

  1.1.3 /etc/systemd/system/etcd.service

        --name is etcd1, etcd2, or etcd3 on the three nodes respectively; use the right one on each node

        10.10.15.70 is this node's IP (etcd1); etcd2 and etcd3 are the peer nodes. Change the IPs accordingly on the other two nodes

    etcd's working directory is /var/lib/etcd; create it before starting the service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
  --name=etcd1 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://10.10.15.70:2380 \
  --listen-peer-urls=https://10.10.15.70:2380 \
  --listen-client-urls=https://10.10.15.70:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.10.15.70:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd1=https://10.10.15.70:2380,etcd2=https://10.10.15.71:2380,etcd3=https://10.10.15.81:2380 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  1.1.4 Start the etcd service

systemctl daemon-reload

systemctl enable etcd

systemctl start etcd

 

Verify: /opt/k8s/bin/etcdctl --endpoints=https://10.10.15.70:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
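To check every member rather than a single endpoint, the same command can be looped over all three nodes:

# Query cluster health through each client endpoint in turn
for ep in https://10.10.15.70:2379 https://10.10.15.71:2379 https://10.10.15.81:2379; do
  /opt/k8s/bin/etcdctl --endpoints=$ep \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/etcd/ssl/etcd.pem \
    --key-file=/etc/etcd/ssl/etcd-key.pem \
    cluster-health
done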

1.2 kube-master deployment

  1.2.1 Create the kubernetes certificate

      hosts must include the master VIP/address, the kubernetes cluster IP, and this node's address; other masters only need to change the node IP

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.15.70",
    "k8s-master-url",
    "10.200.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

  1.2.2 Generate the kubernetes certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

    Copy the generated kubernetes*.pem to /etc/kubernetes/ssl on this master node

  1.2.3 Create the kube-apiserver client token

# cat /etc/kubernetes/token.csv 
2ab38fcb2b77d7f15ce65db2dd612ab8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

    Copy token.csv to /etc/kubernetes on all master nodes
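A sketch of generating and distributing the file in one go (the master2/master3 hostnames are placeholders for your other masters):

# Generate a fresh token and write token.csv in the expected format
# NOTE: reuse this exact token in bootstrap.kubeconfig later (section 2.3.1)
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# Copy to the other masters
for h in master2 master3; do scp /etc/kubernetes/token.csv $h:/etc/kubernetes/; done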

  1.2.4 /etc/systemd/system/kube-apiserver.service

    Other masters only need to change the IP addresses

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=10.10.15.70 \
  --bind-address=0.0.0.0 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --kubelet-https=true \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-cluster-ip-range=10.200.0.0/16 \
  --service-node-port-range=40000-52766 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.10.15.70:2379,https://10.10.15.71:2379,https://10.10.15.81:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=2 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  1.2.5 Start the kube-apiserver service

systemctl daemon-reload

systemctl enable kube-apiserver

systemctl start kube-apiserver

 

1.3 kube-controller-manager deployment

  1.3.1 /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.200.0.0/16 \
  --cluster-cidr=192.170.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

  1.3.2 Start the kube-controller-manager service

systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl start kube-controller-manager

 

1.4 kube-scheduler deployment

  1.4.1 /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

  1.4.2 Start the kube-scheduler service

systemctl daemon-reload

systemctl enable kube-scheduler

systemctl start kube-scheduler
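With all three control-plane services running, overall health can be checked from the master (this uses the insecure local port configured above):

# etcd members, scheduler and controller-manager should all report Healthy
kubectl get componentstatuses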


2. Node deployment

    Node deployment covers flanneld, docker, kubelet, and kube-proxy

2.1 flanneld deployment

    flanneld is deployed on every node. It programs the routes to the other nodes (host-gw mode) along with the related iptables rules, and watches etcd for routing changes

    Download: wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

 2.1.1 Create the TLS key and certificate

     etcd has mutual TLS authentication enabled, so a certificate and key must be created for flanneld to communicate with the etcd cluster

cat > flanneld-csr.json <<EOF
{
    "CN": "flanneld",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

(The JSON body was truncated in the original; the above follows the same pattern as the other CSRs in this article.)

  2.1.2 Generate the flanneld certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

    Copy flanneld*.pem to /etc/flanneld/ssl on all node machines

  2.1.3 Register the Pod network in etcd

      flanneld is configured in host-gw mode; the network is 192.170.0.0/16 with a /24 per node, i.e. 250+ usable IPs per node, which is plenty

/opt/k8s/bin/etcdctl --endpoints=https://10.10.15.70:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem set /flannel/network/config '{"Network":"192.170.0.0/16", "SubnetLen": 24, "Backend": {"Type": "host-gw"}}'
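Reading the key back confirms the value landed:

# Should print the JSON document written above
/opt/k8s/bin/etcdctl --endpoints=https://10.10.15.70:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem get /flannel/network/config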

 

  2.1.4 /etc/systemd/system/flanneld.service

    IP masquerading is not needed here, so -ip-masq=false is set; adjust or drop the flag if your requirements differ

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  -etcd-certfile=/etc/flanneld/ssl/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/ssl/flanneld-key.pem \
  -etcd-endpoints=https://10.10.15.70:2379,https://10.10.15.71:2379,https://10.10.15.81:2379 \
  -etcd-prefix=/flannel/network \
  -ip-masq=false
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

   The generated /run/flannel/docker file; docker's ip-masq is disabled here as well

DOCKER_OPT_BIP="--bip=192.170.35.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1500"
DOCKER_NETWORK_OPTIONS=" --bip=192.170.35.1/24 --ip-masq=false --mtu=1500"

 

  2.1.5 Start the flanneld service

systemctl daemon-reload

systemctl enable flanneld

systemctl start flanneld
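Once flanneld is up, the leased subnet and the host-gw routes can be inspected; a quick check:

# Subnet lease flanneld obtained for this node
cat /run/flannel/subnet.env
# host-gw mode installs one route per peer node
ip route | grep 192.170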

 

2.2 docker deployment

  2.2.1 Enable IP forwarding

Add the following to /etc/sysctl.conf, then run sysctl -p

net.ipv4.ip_forward=1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

  2.2.2 /etc/systemd/system/docker.service

    An EnvironmentFile is used: /run/flannel/docker is the config generated by mk-docker-opts.sh. An iptables rule is also added because docker 1.13+ sets the default policy of the iptables FORWARD chain to DROP, which makes Pod IPs on other nodes unreachable (ping fails)

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/usr/sbin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/sbin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

  2.2.3 Start the docker service

systemctl daemon-reload

systemctl enable docker

systemctl start docker
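A quick sanity check that docker picked up the flannel options and the FORWARD workaround:

# docker0 should carry the --bip address from /run/flannel/docker
ip addr show docker0
# The ACCEPT rule inserted by ExecStartPost should be first
iptables -L FORWARD -n --line-numbers | head -5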

 

2.3 kubelet deployment

      When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role. Create the RBAC bindings for node requests with the two commands below

  • --user=kubelet-bootstrap: the user name from /etc/kubernetes/token.csv, also written into /etc/kubernetes/bootstrap.kubeconfig

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-nodes --clusterrole=system:node --group=system:nodes

  2.3.1 Create the /etc/kubernetes/bootstrap.kubeconfig file

Set the cluster parameters

    kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://master.node.local:6443 --kubeconfig=bootstrap.kubeconfig

Set the client credentials

    kubectl config set-credentials kubelet-bootstrap --token=2ab38fcb2b77d7f15ce65db2dd612ab8 --kubeconfig=bootstrap.kubeconfig

Set the context

    kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig

Set the default context

    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
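Before copying bootstrap.kubeconfig to /etc/kubernetes on each node, its contents can be checked:

# Inspect the generated bootstrap kubeconfig (the token is redacted in the output)
kubectl config view --kubeconfig=bootstrap.kubeconfig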

  2.3.2 /etc/systemd/system/kubelet.service

    Other nodes just change the address settings

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/k8s/bin/kubelet \
  --address=10.10.15.70 \
  --hostname-override=10.10.15.70 \
  --pod-infra-container-image=harbor.local.com/images/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/k8s/bin \
  --cluster-dns=10.200.254.254 \
  --cluster-domain=zqdl.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --logtostderr=true \
  --v=2

#kubelet cAdvisor listens on port 4194 on all interfaces by default; the iptables rules below restrict access to internal networks
ExecStartPost=/sbin/iptables -A INPUT -s 10.0.0.0/8 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -s 172.16.0.0/12 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -s 192.168.0.0/16 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -p tcp --dport 4194 -j DROP
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

  2.3.3 Start the kubelet service

systemctl daemon-reload

systemctl enable kubelet

systemctl start kubelet

  2.3.4 Approve the kubelet TLS certificates with kubectl

    On first start kubelet sends a certificate signing request to kube-apiserver; it must be approved before the node can join the cluster

kubectl get csr

kubectl certificate approve node-csr-kWKUc83k2DshGM2jFp2lnt3iWy3qaY0QO1USkbWydNM
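When several nodes bootstrap at once, all pending CSRs can be approved in one pass; a sketch (review the list first in production):

# Approve every outstanding certificate signing request
kubectl get csr -o name | xargs -n 1 kubectl certificate approve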

 

2.4 kube-proxy deployment

  2.4.1 Create the kube-proxy certificate signing request

cat > kube-proxy-csr.json <<EOF
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

(The JSON body was truncated in the original; the above follows the same pattern as the other CSRs. CN is system:kube-proxy, which the built-in RBAC role system:node-proxier recognizes.)

  2.4.2 Generate the kube-proxy client certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

    Copy the generated kube-proxy*.pem to /etc/kubernetes/ssl on all nodes

  2.4.3 Create the kube-proxy.kubeconfig file

Set the cluster parameters

  kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://master.node.local:6443 --kubeconfig=kube-proxy.kubeconfig

Set the client credentials

  kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

Set the context

  kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

Set the default context

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

  Copy the generated kube-proxy.kubeconfig to /etc/kubernetes on all node machines

  2.4.4 /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from outside; with --cluster-cidr or --masquerade-all set,
# kube-proxy SNATs requests to Service IPs. That conflicts with calico's network-policy implementation, so it is left unset here
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --bind-address=10.10.15.70 \
  --hostname-override=10.10.15.70 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  2.4.5 Start the kube-proxy service

systemctl daemon-reload

systemctl enable kube-proxy

systemctl start kube-proxy
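kube-proxy materializes Services as iptables rules in the nat table, so their presence is a quick health signal:

# The KUBE-SERVICES chain appears once kube-proxy is running
iptables -t nat -L KUBE-SERVICES -n | head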

  

3. CoreDNS installation and deployment

     coredns.yaml is shown below

     Three places need to be adapted:

     replace __PILLAR__DNS__DOMAIN__ in the kubernetes plugin with the cluster domain (zqdlk8s.local in the manifest below)

     the DNS Service clusterIP, 10.254.0.2 here; it must lie inside kube-apiserver's --service-cluster-ip-range and match kubelet's --cluster-dns

     check that the image address is pullable from your nodes

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes zqdlk8s.local 10.254.0.0/16
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

  Verification: start a test pod and resolve the kubernetes Service from inside it
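A throwaway pod can be created for the test like this (a minimal sketch; the alpine image must be pullable from your nodes):

# Run a long-sleeping alpine pod to exec into
kubectl run alpine --image=alpine --restart=Never --command -- sleep 3600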

        # kubectl exec -it alpine nslookup kubernetes
          nslookup: can't resolve '(null)': Name does not resolve

          Name:      kubernetes
          Address 1: 10.254.0.1 kubernetes.default.svc.zqdlk8s.local

 

 

-----------------------------------------------------------------------------------------------------------------------------

Deploying the calico network on Kubernetes

 Calico components:

  •      Felix: the Calico agent; runs on every node and programs network state: IPs, routing rules, iptables rules
  •      etcd: Calico's backend store
  •      BIRD: a BGP client; advertises the routes Felix programs on each node to the rest of the Calico network (via the BGP protocol)
  •      BGP Route Reflector: hierarchical route distribution for large clusters
  •      calicoctl: the Calico command-line management tool

Reference: https://docs.projectcalico.org/v3.7/getting-started/kubernetes/installation/calico

 

1. Installing with the Kubernetes API datastore—50 nodes or less

Download

curl https://docs.projectcalico.org/v3.7/manifests/calico.yaml -O

Set the pod CIDR (192.170.0.0/16 here, matching this deployment's pod network)

POD_CIDR="" \
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml

Apply the manifest using the following command.

kubectl apply -f calico.yaml
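The rollout can then be watched until the calico pods are Running:

# calico-node runs as a DaemonSet in kube-system
kubectl get pods -n kube-system | grep calico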

2. Installing with the etcd datastore

 

Recommendations:

    1. docker configuration

cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    }
}
EOF
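Docker must be restarted for daemon.json to take effect, and the cgroup driver can be verified afterwards:

# Apply the new daemon configuration
systemctl restart docker
# Should report: Cgroup Driver: systemd
docker info | grep -i 'cgroup driver'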
