9. Test running an nginx instance in the cluster
1. Run the following command on one of the masters
[root@master1 ~]# kubectl run nginx --replicas=2 --labels="run=nginx-service" --image=172.16.0.2:5000/docker.io/nginx --port=80
deployment.apps/nginx created
nginx — the instance (deployment) name
--replicas=2 — create 2 replicas
--labels — the label to apply
--image — the image address; here it points to the local private registry
--port — expose port 80
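To confirm the label from --labels was actually applied and both replicas are up, one quick check (using the same run=nginx-service label as above) is:
[root@master1 ~]# kubectl get pods -l run=nginx-service --show-labels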
Check the deployment:
[root@master1 ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 2 2 2 2 9s
Check the ReplicaSet:
[root@master1 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-6c9b9fc894 2 2 2 13s
Check the nodes:
[root@master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
172.16.0.8 Ready
172.16.0.9 Ready
Check the running pods:
[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-6c9b9fc894-8ccwr 1/1 Running 0 22s
nginx-6c9b9fc894-wx449 1/1 Running 0 22s
To delete the nginx deployment, run the following:
[root@master1 ~]# kubectl delete deployment nginx
deployment.extensions "nginx" deleted
or: kubectl delete deploy/nginx
[root@master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 169.169.27.9
kubernetes ClusterIP 169.169.0.1
To delete a service: kubectl delete svc example-service, or: kubectl delete svc/example-service
Scale the pods down or up:
[root@master1 ~]# kubectl scale deployment nginx --replicas=3
deployment.extensions/nginx scaled
[root@master1 ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 3 3 3 3 6d
[root@master1 ~]# kubectl get deployment -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 3 3 3 3 6d nginx 172.16.0.2:5000/docker.io/nginx run=wbb
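Instead of re-running kubectl get deployment until all replicas are available, you can also wait on the rollout, e.g.:
[root@master1 ~]# kubectl rollout status deployment/nginx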
Assign a virtual cluster IP (an IP from the 169.169.0.0 range):
[root@master2 ~]#kubectl expose deployment nginx --type=NodePort --name=nginx-service
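If you prefer to state the service port and target port explicitly instead of letting expose infer them from the deployment, a variant of the same command is:
[root@master2 ~]# kubectl expose deployment nginx --type=NodePort --name=nginx-service --port=80 --target-port=80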
Check the assigned service cluster IPs, and delete them if needed:
[root@master2 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 169.169.0.1
nginx-service NodePort 169.169.157.14
[root@master1 ~]#kubectl delete deployment nginx
[root@master1 ~]# kubectl delete svc example-service
service "example-service" deleted
Describe nginx-service to see how the created pods/endpoints are assigned:
[root@master2 ~]# kubectl describe svc nginx-service
Name: nginx-service
Namespace: default
Labels: run=nginx-service
Annotations:
Selector: run=nginx-service
Type: NodePort
IP: 169.169.157.14
Port:
TargetPort: 800/TCP
NodePort:
Endpoints: 10.10.12.2:800,10.10.36.2:800
Session Affinity: None
External Traffic Policy: Cluster
Events:
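Because the service type is NodePort, it can also be reached through any node's IP on the allocated node port. The port value is omitted in the output above, so 30080 below is only a hypothetical placeholder:
[root@node1 ~]# curl http://172.16.0.8:30080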
[root@node1 ~]# curl -L http://10.10.36.2
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
[root@node2 ~]# curl -L http://10.10.12.2
(the same nginx default welcome page is returned)
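If a curl from one of the nodes hangs, it can help to first confirm that the service has both pod IPs registered as endpoints (illustrative check):
[root@master1 ~]# kubectl get endpoints nginx-service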
After the installation you may find that the two nodes cannot reach each other's pods, i.e. running curl http://10.10.12.2 on node1 returns no nginx data. Running the following two commands fixes it:
modprobe ip_tables;
iptables -P FORWARD ACCEPT;
Note: at this point pods on different nodes may fail to reach each other over the network; the fix is as follows.
Configure iptables on all nodes:
yum install iptables-services -y;
systemctl disable iptables;
systemctl stop iptables;
modprobe ip_tables;
iptables -P FORWARD ACCEPT;
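Note that iptables -P FORWARD ACCEPT does not survive a reboot. One way to make it persistent (a sketch, assuming rc.local is present and enabled on these CentOS nodes) is:
cat >> /etc/rc.d/rc.local <<'EOF'
modprobe ip_tables
iptables -P FORWARD ACCEPT
EOF
chmod +x /etc/rc.d/rc.local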
10. CoreDNS installation
Method 1: download from the official repository
mkdir coredns && cd coredns
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
chmod +x deploy.sh
./deploy.sh -i 169.169.0.2 > coredns.yml
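Before applying, you can sanity-check that the generated manifest picked up the intended DNS cluster IP (169.169.0.2), e.g.:
grep clusterIP coredns.yml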
kubectl apply -f coredns.yml
Check:
kubectl get pods --namespace kube-system
kubectl get svc --namespace kube-system
Then, on all node servers, edit the kubelet configuration file:
[root@node2 kubernetes]# cat kubelet.conf
KUBELET_ARGS="--cgroup-driver=systemd
--hostname-override=172.16.0.9
--cert-dir=/etc/kubernetes/pki
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig
--cluster-dns=169.169.0.2
--cluster-domain=cluster.local"    <-- add these two lines (--cluster-dns and --cluster-domain)
Then restart the kubelet service.
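For example, assuming kubelet is managed as a systemd unit on the nodes:
systemctl restart kubelet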
Test whether it takes effect:
1. Create an nginx deployment with two replicas on the master:
kubectl run nginx --replicas=2 --labels="run=wbb" --image=172.16.0.2:5000/docker.io/nginx --port=800
deployment.apps/nginx created
2. On the node, verify that the nginx containers and the coredns container are running:
[root@node2 kubernetes]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4462774c0860 172.16.0.2:5000/docker.io/nginx@sha256:0b5c73966ec996a05672c4aea0a0d1910c6d7495147805ef88205bff51e119f3 "nginx -g 'daemon ..." 32 minutes ago Up 32 minutes k8s_nginx_nginx-66b6fb98fd-gdz97_default_5c2de123-c2ed-11e8-af1a-5254d2b1bb60_0
bfe9625300e7 k8s.gcr.io/pause:3.1 "/pause" 32 minutes ago Up 32 minutes k8s_POD_nginx-66b6fb98fd-gdz97_default_5c2de123-c2ed-11e8-af1a-5254d2b1bb60_0
3ca7f4570d93 docker.io/coredns/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a "/coredns -conf /e..." 3 hours ago Up 3 hours k8s_coredns_coredns-55f86bf584-95xd7_kube-system_5f915bfb-c2d5-11e8-af1a-5254d2b1bb60_0
b204ac0c0a88 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-55f86bf584-95xd7_kube-system_5f915bfb-c2d5-11e8-af1a-5254d2b1bb60_0
Run the following on any master or node server in the cluster; the output below shows nginx is serving correctly:
[root@node2 kubernetes]# curl -L http://10.10.36.2
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.
Thank you for using nginx.
Log into any nginx container and check whether /etc/resolv.conf now points to the DNS virtual IP 169.169.0.2.
[root@master1 coredns]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-66b6fb98fd-g54fk 1/1 Running 0 34m
nginx-66b6fb98fd-gdz97 1/1 Running 0 34m
The resolv.conf below shows that DNS inside the container now points to CoreDNS:
[root@master1 coredns]# kubectl exec -it nginx-66b6fb98fd-gdz97 /bin/bash
root@nginx-66b6fb98fd-gdz97:/# cat /etc/resolv.conf
nameserver 169.169.0.2
search default.svc.cluster.local svc.cluster.local cluster.local hk1.zfcloud.com
options ndots:5
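To confirm that resolution through CoreDNS actually works, a quick check from inside the same pod (assuming the nginx image is Debian-based, so getent is available) is the following; it should return the ClusterIP of the kubernetes service (169.169.0.1 in this cluster):
root@nginx-66b6fb98fd-gdz97:/# getent hosts kubernetes.default.svc.cluster.local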
-------------------------------------------------------------------------------------------------------------------------------------------------------CoreDNS installation complete
Method 2:
The kubernetes source package you downloaded already includes the CoreDNS installation scripts.
They can be installed in the same way as above.
11. Dashboard UI installation
1. Download the image k8s.gcr.io/kubernetes-dashboard-amd64 and import it into the private registry
k8s.gcr.io/kube-apiserver-amd64 v1.11.3 3de571b6587b 2 weeks ago 187 MB
172.16.0.2:5000/k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122 MB
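A sketch of how the image above can be brought into the private registry, assuming the machine can pull from k8s.gcr.io directly (otherwise load it from an offline tarball with docker load first):
docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker tag k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 172.16.0.2:5000/k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker push 172.16.0.2:5000/k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0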
2. Install directly from the kubernetes source package
ls /root/kubernetes/cluster/addons/dashboard
dashboard-controller.yaml dashboard-rbac.yaml dashboard-secret.yaml dashboard-service.yaml
vim dashboard-controller.yaml
[root@master1 dashboard]# cat dashboard-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: dashboard
      containers:
      - name: kubernetes-dashboard
        image: 172.16.0.2:5000/k8s.gcr.io/kubernetes-dashboard-amd64   # change the image to the locally built private registry, then pull the image
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
[root@master1 dashboard]# cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort   # add this line
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
3. Create the resources
[root@master1 dashboard]# kubectl create -f dashboard-controller.yaml -f dashboard-rbac.yaml -f dashboard-secret.yaml -f dashboard-service.yaml    (it is best to apply the files one at a time)
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
4. To delete the resources created from a yaml file, use the following:
kubectl delete -f rc-nginx.yaml
[root@master1 dashboard]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
coredns-55f86bf584-6lg79 1/1 Running 0 2d 10.10.36.3 172.16.0.8
coredns-55f86bf584-95xd7 1/1 Running 0 2d 10.10.12.3 172.16.0.9
kubernetes-dashboard-58c47d9476-gv6x4 1/1 Running 0 1d 10.10.36.4 172.16.0.8
This line shows the dashboard has been installed and started successfully; it is running on node 172.16.0.8 and the pod IP is 10.10.36.4.
[root@master1 dashboard]# kubectl get svc -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-dns               ClusterIP   169.169.0.2       <none>   53/UDP,53/TCP   2d   k8s-app=kube-dns
kubernetes-dashboard   NodePort    169.169.110.197   <none>   80:17189/TCP    1d   k8s-app=kubernetes-dashboard
Because NodePort is enabled, the service is mapped to external port 17189 on the nodes. At this point the dashboard is deployed, but it still cannot be reached from outside.
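Before configuring haproxy, you can verify that the NodePort itself answers from any host that can reach the node network, for example:
curl http://172.16.0.8:17189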
How to access it from the external network and open the page
haproxy + keepalived were installed earlier on the etcd servers.
vim /etc/haproxy/haproxy.conf and add the following at the bottom of the file:
listen dashboard
    bind *:8086                      # external access port: 8086
    mode tcp
    maxconn 65535
    balance roundrobin
    server node1 10.10.36.4:9090 check inter 10000 fall 2 rise 2 weight 1    # the etcd servers can reach the dashboard pod IP 10.10.36.4 on port 9090 directly
Or the following works as well:
    server node1 172.16.0.8:17189 check inter 10000 fall 2 rise 2 weight 1   # node IP : mapped NodePort, reached through the haproxy reverse proxy
Then restart haproxy.
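For example, assuming haproxy runs as a systemd service on the etcd/haproxy hosts:
systemctl restart haproxy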
Open the following URL in a browser:
http://172.16.0.100:8086   (172.16.0.100 is the load-balancer VIP)
-----------------------------------------------------------------------------------------------------------------------------------------------Dashboard installation complete