The node components start in this order: 1. etcd, 2. flannel, 3. docker, 4. kubelet and kube-proxy. In this installation flannel is not deployed under systemd; instead it is wrapped in a script and added to rc.local.
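A minimal sketch of that rc.local script might look like the following; the flanneld flags and the etcd endpoint are assumptions based on this cluster's addresses, so adjust them to your actual deployment (and remember docker must start after flannel so it can pick up the flannel subnet):

```shell
#!/bin/sh -e
# Start flanneld in the background against the etcd cluster.
# The endpoint and prefix below are assumed values -- match them to your etcd setup.
/usr/local/bin/flanneld \
  -etcd-endpoints=http://192.168.15.132:2379 \
  -etcd-prefix=/coreos.com/network &
exit 0
```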
1. Copy the config file generated during the kubectl installation to serve as kubelet's kubeconfig, so CSR approval does not have to be done again.
cp ~/.kube/config /etc/kubernetes/kubelet.kubeconfig
2. The systemd unit files for kubelet and kube-proxy are shown below. This installation uses three nodes, so adjust the node-specific addresses in each file accordingly. Two notes on the kubelet unit: the pod-infrastructure (pause) image must be downloaded in advance and pushed to the private registry, and the --cluster-domain value must match the DNS add-on configuration that comes later.
root@ubuntu133:/etc/systemd/system# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2 \
--allow-privileged=true \
--api-servers=http://192.168.15.132:8080 \
--address=192.168.15.133 \
--hostname-override=192.168.15.133 \
--pod-infra-container-image=docker.xxx.com:5000/pod-infrastructure:v2017 \
--cgroup-driver=cgroupfs \
--cluster-dns=10.254.0.2 \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--require-kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster-domain=cluster.local \
--hairpin-mode promiscuous-bridge \
--serialize-image-pulls=false
Restart=on-failure
[Install]
WantedBy=multi-user.target
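The unit above references a few directories that should exist before the first start (systemd refuses to start the service if WorkingDirectory is missing; kubelet may create some of the others itself, but creating them up front avoids surprises):

```shell
# Create the directories referenced by kubelet.service
mkdir -p /var/lib/kubelet      # WorkingDirectory
mkdir -p /var/log/kubernetes   # --log-dir
mkdir -p /etc/kubernetes/ssl   # --cert-dir
```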

root@ubuntu133:/etc/systemd/system# cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--master=http://192.168.15.132:8080 \
--bind-address=192.168.15.133 \
--hostname-override=192.168.15.133 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--cluster-cidr=10.254.0.0/16
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
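Since the unit files differ between nodes only in the node IP, they can be rendered from a template instead of edited by hand. A small sketch of the idea (the `.tpl` file and the `NODE_IP` placeholder are my own convention, not part of the installation; only the two node-specific lines are shown as the template body):

```shell
# Render a per-node unit fragment by substituting the node IP into a template.
cat > /tmp/kubelet.service.tpl <<'EOF'
--address=NODE_IP \
--hostname-override=NODE_IP \
EOF

NODE_IP=192.168.15.134
sed "s/NODE_IP/${NODE_IP}/g" /tmp/kubelet.service.tpl > /tmp/kubelet.service
cat /tmp/kubelet.service
```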

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

3. Verification
root@ubuntu132:/etc/systemd/system# kubectl get node
NAME             STATUS    AGE       VERSION
192.168.15.132   Ready     7d        v1.6.0
192.168.15.133   Ready     9d        v1.6.0
192.168.15.134   Ready     9d        v1.6.0
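Beyond eyeballing the STATUS column, you can assert that every node is Ready by parsing the output. The sketch below runs over a captured sample of the output above; with a live cluster, pipe `kubectl get node` straight into the same awk filter:

```shell
# Count nodes whose STATUS column is not "Ready".
# Sample captured from the cluster above; pipe live `kubectl get node` output in practice.
sample='NAME             STATUS    AGE       VERSION
192.168.15.132   Ready     7d        v1.6.0
192.168.15.133   Ready     9d        v1.6.0
192.168.15.134   Ready     9d        v1.6.0'

not_ready=$(printf '%s\n' "$sample" | awk 'NR>1 && $2 != "Ready"' | wc -l)
echo "nodes not ready: $not_ready"
```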

4. Test the cluster: run a container and expose it with a Service
root@ubuntu132:~/dnsyaml# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=docker.xxx.com:5000/nginx1.9:v2017 --port=80
deployment "nginx" created
root@ubuntu132:~/dnsyaml# kubectl expose deployment nginx --type=NodePort --name=test
service "test" exposed

root@ubuntu132:~/dnsyaml# kubectl get svc test
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)       AGE
test      10.254.238.72                 80:8455/TCP   17

Test from a node: the Service's ClusterIP is only reachable on the cluster's nodes (the kubelet hosts). To test from a machine that is not a cluster node, use the NodePort instead, e.g. curl http://192.168.15.132:8455
root@ubuntu132:~/dnsyaml# curl http://10.254.238.72:80
The nginx welcome page is returned!