1. Environment planning
2. Install Docker
3. Self-signed TLS certificates
4. Deploy the etcd cluster
5. Deploy the Flannel network
6. Create the Node kubeconfig files
7. Obtain the K8S binary package
8. Run the Master components
9. Run the Node components
10. Check the cluster status
11. Launch a test example
12. Deploy the Web UI (Dashboard)
**1. Environment planning**
Role     IP                Components
master   192.168.200.101   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node01   192.168.200.102   kubelet, kube-proxy, docker, flannel, etcd
node02   192.168.200.103   kubelet, kube-proxy, docker, flannel, etcd
**2. Install Docker**
Run on master/node01/node02:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce
cat <<EOF >/etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.200.101:5000"]
}
EOF
systemctl start docker
systemctl enable docker
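A quick sanity check that the daemon picked up these settings (the exact field names printed by docker info vary slightly across Docker versions):
systemctl status docker --no-pager                 # should report active (running)
docker info | grep -iA1 'registry mirrors'         # should list https://registry.docker-cn.com
docker info | grep -iA1 'insecure registries'      # should include 192.168.200.101:5000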
**3. Self-signed TLS certificates**
Component        Certificates used
etcd             ca.pem, server.pem, server-key.pem
flannel          ca.pem, server.pem, server-key.pem
kube-apiserver   ca.pem, server.pem, server-key.pem
kubelet          ca.pem, ca-key.pem
kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl          ca.pem, admin.pem, admin-key.pem
On the master:
Install the certificate generation tool cfssl:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
Generate the certificates
Generate them with a script: cat certificate.sh
The script writes ca-config.json, ca-csr.json, server-csr.json, admin-csr.json and kube-proxy-csr.json as here-documents and then calls cfssl/cfssljson to sign the certificates (a sketch of such a script follows below). Run the script:
sh certificate.sh
A successful run produces a set of certificates. Create an ssl directory to hold them:
mkdir -p /root/ssl
Move all the generated certificates into /root/ssl.
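For reference, a minimal sketch of what a certificate.sh of this kind typically contains. It is not the author's original script: the expiry, the C/ST/L/O fields and the exact host list (which must cover the three machine IPs and the first service-cluster IP, 10.10.10.1 here) are assumptions to adapt to your environment.
#!/bin/bash
# Sketch: generate CA, server, admin and kube-proxy certificates with cfssl.
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.200.101",
    "192.168.200.102",
    "192.168.200.103",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes server-csr.json | cfssljson -bare server

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy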
**4. Deploy the etcd cluster**
On the master:
Create the kubernetes working directory:
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
Upload the etcd release tarball etcd-v3.2.12-linux-amd64.tar.gz and unpack it:
tar xf etcd-v3.2.12-linux-amd64.tar.gz
cd etcd-v3.2.12-linux-amd64
Copy the etcd binaries into the working directory's bin:
cp etcd etcdctl /opt/kubernetes/bin/
Copy the certificates etcd needs into the working directory's ssl:
cp /root/ssl/ca*pem /root/ssl/server*pem /opt/kubernetes/ssl/
Generate the etcd configuration and systemd unit, and start etcd, with a script: cat etcd.sh
#!/bin/bash

ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=/opt/kubernetes/ssl/server.pem \\
--key-file=/opt/kubernetes/ssl/server-key.pem \\
--peer-cert-file=/opt/kubernetes/ssl/server.pem \\
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \\
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
Start etcd on the master with the script:
./etcd.sh etcd01 192.168.200.101 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380
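A note based on typical etcd behavior rather than the original text: with ETCD_INITIAL_CLUSTER_STATE="new", the first member keeps waiting for its peers, so this command may appear to hang on the master until etcd is started on node01 and node02. Progress can be watched with:
systemctl status etcd --no-pager
journalctl -u etcd -f     # follow the etcd log while the remaining members join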
(Optional) To simplify deployment, set up SSH key trust from the master to the nodes:
ssh-keygen        # press Enter through all prompts
ssh-copy-id [email protected]
ssh-copy-id [email protected]
Copy the master's files to the nodes:
scp -r /opt/kubernetes/ [email protected]:/opt
scp -r /opt/kubernetes/ [email protected]:/opt
scp etcd.sh [email protected]:~
scp etcd.sh [email protected]:~
Start etcd on node01:
./etcd.sh etcd02 192.168.200.102 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380
Start etcd on node02:
./etcd.sh etcd03 192.168.200.103 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380
Change into the /root/ssl directory and run the following on the master to check the cluster status:
/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" \
cluster-health
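When the cluster is healthy, etcdctl prints one line per member and an overall verdict, roughly like the output below (the member IDs here are placeholders and the exact wording depends on the etcd version):
member 1afd65b0f1a6ebd9 is healthy: got healthy result from https://192.168.200.101:2379
member 8f4a21bd7e1ff4d2 is healthy: got healthy result from https://192.168.200.102:2379
member c2fe8d7a9a602a4f is healthy: got healthy result from https://192.168.200.103:2379
cluster is healthy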
**5. Deploy the Flannel network**
Overlay Network: a virtual network layered on top of the underlying physical network, in which hosts are connected by virtual links.
VXLAN: encapsulates the original packet in UDP, wraps it with the underlay network's IP/MAC as the outer header, and sends it over Ethernet; the tunnel endpoint at the destination decapsulates it and delivers the data to the target address.
Flannel: one implementation of an overlay network. It likewise encapsulates the original packet inside another network packet for routing and forwarding, and currently supports UDP, VXLAN, AWS VPC, GCE routes and other forwarding backends.
Other mainstream options for multi-host container networking: tunnel-based solutions (Weave, Open vSwitch) and route-based solutions (Calico), among others.
Run the following on the master and the nodes (deploying flannel on the master is only needed in some special scenarios):
1) Write the allocated subnet range into etcd for flanneld to use (master only):
/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
2) Download the binary package:
wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar xf flannel-v0.9.1-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
3) Configure Flannel and its systemd units
Configure with a script: cat flanneld.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker
4) Start flannel:
./flanneld.sh https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379
5) Verify the network
List the existing subnets:
[root@k8s-master ssl]# /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" ls /coreos.com/network/subnets
The allocated docker subnets are listed:
/coreos.com/network/subnets/172.17.78.0-24
/coreos.com/network/subnets/172.17.84.0-24
/coreos.com/network/subnets/172.17.49.0-24
Show the details of one subnet:
[root@k8s-master ssl]# /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" get /coreos.com/network/subnets/172.17.78.0-24
Subnet details:
{"PublicIP":"192.168.200.101","BackendType":"vxlan","BackendData":{"VtepMAC":"62:5f:9d:cd:51:aa"}}
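The overlay can also be verified directly on each node. A minimal check, assuming the vxlan backend configured above (the container name net-test is only an example):
ip addr show flannel.1          # VXLAN interface, holds an address from 172.17.0.0/16
cat /run/flannel/subnet.env     # per-node subnet handed to dockerd by mk-docker-opts.sh
ip addr show docker0            # docker0 should sit inside that per-node subnet
# Cross-node test: run a container on each node, then ping one from the other
docker run -d --name net-test busybox sleep 3600
docker inspect -f '{{.NetworkSettings.IPAddress}}' net-test
ping -c 3 <container IP obtained on the other node>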
If nodes inside the cluster cannot reach each other, a firewall rule can be added:
iptables -I INPUT -s 192.168.200.0/24 -j ACCEPT
**6. Create the Node kubeconfig files**
On the master, in the /root/ssl directory, use the following script:
cat kubeconfig.sh
The script creates a TLS bootstrapping token with
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
writes it into token.csv, and then uses kubectl config to build bootstrap.kubeconfig and kube-proxy.kubeconfig (a sketch of such a script follows below).
Note: kubectl must be available when running this script (upload the kubectl binary to /usr/bin first).
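A minimal sketch of what a kubeconfig.sh of this kind typically looks like. The apiserver address and certificate paths follow this guide's layout; the rest is an assumption based on the standard kubectl config workflow, not the author's original script.
#!/bin/bash
# Sketch: generate token.csv, bootstrap.kubeconfig and kube-proxy.kubeconfig.
# Run from /root/ssl on the master so ca.pem and the kube-proxy certs are present.

# Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

export KUBE_APISERVER="https://192.168.200.101:6443"

# ---- bootstrap.kubeconfig (used by kubelet for TLS bootstrapping) ----
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# ---- kube-proxy.kubeconfig ----
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig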
./kubeconfig.sh
Running the script produces the following files:
token.csv
bootstrap.kubeconfig
kube-proxy.kubeconfig
Copy the kubeconfig files to the nodes:
scp *kubeconfig [email protected]:/opt/kubernetes/cfg/
scp *kubeconfig [email protected]:/opt/kubernetes/cfg/
**7–8. Obtain the K8S binary package and run the Master components**
Download the Kubernetes server binaries from https://github.com/kubernetes/kubernetes/
Move the binaries into the working directory's bin:
mv kubectl kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
Move the token file into the configuration directory:
mv token.csv /opt/kubernetes/cfg/
Use the following scripts: apiserver.sh, scheduler.sh and controller-manager.sh.
cat apiserver.sh
#!/bin/bash MASTER_ADDRESS=${1:-"192.168.200.101"} ETCD_SERVERS=${2:-"http://192.168.200.101:2379"} cat </opt/kubernetes/cfg/kube-apiserver KUBE_APISERVER_OPTS="--logtostderr=true \\ --v=4 \\ --etcd-servers=${ETCD_SERVERS} \\ --insecure-bind-address=127.0.0.1 \\ --bind-address=${MASTER_ADDRESS} \\ --insecure-port=8080 \\ --secure-port=6443 \\ --advertise-address=${MASTER_ADDRESS} \\ --allow-privileged=true \\ --service-cluster-ip-range=10.10.10.0/24 \\ --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\ --authorization-mode=RBAC,Node \\ --kubelet-https=true \\ --enable-bootstrap-token-auth \\ --token-auth-file=/opt/kubernetes/cfg/token.csv \\ --service-node-port-range=30000-50000 \\ --tls-cert-file=/opt/kubernetes/ssl/server.pem \\ --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\ --client-ca-file=/opt/kubernetes/ssl/ca.pem \\ --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\ --etcd-cafile=/opt/kubernetes/ssl/ca.pem \\ --etcd-certfile=/opt/kubernetes/ssl/server.pem \\ --etcd-keyfile=/opt/kubernetes/ssl/server-key.pem" EOF cat < /usr/lib/systemd/system/kube-apiserver.service [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes [Service] EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS Restart=on-failure [Install] WantedBy=multi-user.target EOF systemctl daemon-reload systemctl enable kube-apiserver systemctl restart kube-apiserver
cat controller-manager.sh
#!/bin/bash MASTER_ADDRESS=${1:-"127.0.0.1"} cat </opt/kubernetes/cfg/kube-controller-manager KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\ --v=4 \\ --master=${MASTER_ADDRESS}:8080 \\ --leader-elect=true \\ --address=127.0.0.1 \\ --service-cluster-ip-range=10.10.10.0/24 \\ --cluster-name=kubernetes \\ --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\ --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\ --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\ --root-ca-file=/opt/kubernetes/ssl/ca.pem" EOF cat < /usr/lib/systemd/system/kube-controller-manager.service [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes [Service] EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS Restart=on-failure [Install] WantedBy=multi-user.target EOF systemctl daemon-reload systemctl enable kube-controller-manager systemctl restart kube-controller-manager
cat scheduler.sh
#!/bin/bash MASTER_ADDRESS=${1:-"127.0.0.1"} cat </opt/kubernetes/cfg/kube-scheduler KUBE_SCHEDULER_OPTS="--logtostderr=true \\ --v=4 \\ --master=${MASTER_ADDRESS}:8080 \\ --leader-elect" EOF cat < /usr/lib/systemd/system/kube-scheduler.service [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes [Service] EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS Restart=on-failure [Install] WantedBy=multi-user.target EOF systemctl daemon-reload systemctl enable kube-scheduler systemctl restart kube-scheduler 启动组件:
./apiserver.sh 192.168.200.101 https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379 ./scheduler.sh ./controller-manager.sh
Check the component status:
kubectl get cs
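Typical output when all components are healthy looks roughly like the following (column layout and etcd member naming vary by version; shown here only as an orientation):
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}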
**9. Run the Node components**
Move the kubelet and kube-proxy binaries into the working directory's bin on each node and make them executable:
mv kubelet kube-proxy /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*
Start kubelet on the nodes with a script:
cat kubelet.sh
#!/bin/bash NODE_ADDRESS=${1:-"192.168.1.196"} DNS_SERVER_IP=${2:-"10.10.10.2"} cat </opt/kubernetes/cfg/kubelet KUBELET_OPTS="--logtostderr=true \\ --v=4 \\ --address=${NODE_ADDRESS} \\ --hostname-override=${NODE_ADDRESS} \\ --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\ --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\ --cert-dir=/opt/kubernetes/ssl \\ --allow-privileged=true \\ --cluster-dns=${DNS_SERVER_IP} \\ --cluster-domain=cluster.local \\ --fail-swap-on=false \\ --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0" EOF cat < /usr/lib/systemd/system/kubelet.service [Unit] Description=Kubernetes Kubelet After=docker.service Requires=docker.service [Service] EnvironmentFile=-/opt/kubernetes/cfg/kubelet ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS Restart=on-failure KillMode=process [Install] WantedBy=multi-user.target EOF systemctl daemon-reload systemctl enable kubelet systemctl restart kubelet 在node01启动:
./kubelet.sh 192.168.200.102在node02启动:
./kubelet.sh 192.168.200.103
If kubelet fails to start with the error: kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
the cause is that the kubelet-bootstrap user has no permission to create certificate signing requests, so the user has to be bound to the appropriate cluster role.
On the master, create the cluster role binding for the kubelet-bootstrap user:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
On the master, check the certificate signing request status:
kubectl get csr
The requests are shown as Pending, waiting for approval:
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s   27s       kubelet-bootstrap   Pending
node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI   5m        kubelet-bootstrap   Pending
Approve them on the master:
kubectl certificate approve node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s
kubectl certificate approve node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI
Check again:
kubectl get csr
The requests are now Approved,Issued:
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s   5m        kubelet-bootstrap   Approved,Issued
node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI   10m       kubelet-bootstrap   Approved,Issued
kubectl get node
The nodes are now registered and Ready:
NAME              STATUS    ROLES     AGE       VERSION
192.168.200.102   Ready     <none>    1m        v1.9.0
192.168.200.103   Ready     <none>    2m        v1.9.0
Start kube-proxy on the nodes with a script:
cat proxy.sh
#!/bin/bash NODE_ADDRESS=${1:-"192.168.1.200"} cat </opt/kubernetes/cfg/kube-proxy KUBE_PROXY_OPTS="--logtostderr=true \ --v=4 \ --hostname-override=${NODE_ADDRESS} \ --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig" EOF cat < /usr/lib/systemd/system/kube-proxy.service [Unit] Description=Kubernetes Proxy After=network.target [Service] EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS Restart=on-failure [Install] WantedBy=multi-user.target EOF systemctl daemon-reload systemctl enable kube-proxy systemctl restart kube-proxy 启动kube-proxy:
在node1启动:
./proxy.sh 192.168.200.102在nodo2启动:
./proxy.sh 192.168.200.103
**11. Launch a test example**
Start an nginx service (reachable only from inside the cluster network):
kubectl run nginx --image=nginx --replicas=3
kubectl get pod
Expose the nginx service externally via a NodePort:
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx
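kubectl get svc nginx shows the NodePort allocated from the 30000-50000 range configured on the apiserver. The service is then reachable from outside on any node IP at that port; the port number below is only an example:
# Suppose kubectl get svc nginx shows PORT(S) 88:38696/TCP -- 38696 is a made-up example
curl http://192.168.200.102:38696
curl http://192.168.200.103:38696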
**12. Deploy the Web UI (Dashboard)**
Use the Kubernetes template files dashboard-rbac.yaml, dashboard-deployment.yaml and dashboard-service.yaml:
# kubectl create -f dashboard-rbac.yaml
# kubectl create -f dashboard-deployment.yaml
# kubectl create -f dashboard-service.yaml
# Check that the dashboard pod started
kubectl get pods -n kube-system        # get the pod id
kubectl describe po/podid -n kube-system
# Check the service information
kubectl get svc -n kube-system
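Assuming dashboard-service.yaml exposes the dashboard as a NodePort service named kubernetes-dashboard (typical for these templates, but confirm in the file itself), the UI can then be opened in a browser on any node IP at the port shown by the command above:
kubectl get svc kubernetes-dashboard -n kube-system   # read the NodePort from the PORT(S) column
# then browse to http://192.168.200.102:<nodeport>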
**Using kubectl to manage the cluster remotely**
On the remote server:
# Set the cluster entry named kubernetes: apiserver address and CA certificate
./kubectl config set-cluster kubernetes --server=https://192.168.200.101:6443 --certificate-authority=ca.pem
# Set the cluster-admin user entry with its client certificate and key
./kubectl config set-credentials cluster-admin --certificate-authority=ca.pem --client-key=admin-key.pem --client-certificate=admin.pem
# Set a context named default that combines the kubernetes cluster and the cluster-admin user
./kubectl config set-context default --cluster=kubernetes --user=cluster-admin
# Make default the current context
./kubectl config use-context default
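With ca.pem, admin.pem and admin-key.pem copied over from the master's /root/ssl, the remote kubectl should now be able to reach the apiserver; a quick check:
./kubectl get nodes
./kubectl get cs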