Notes from a Single-Node Kubernetes Experiment

To get a hands-on feel for k8s, I searched for a tutorial and followed it step by step; the problems I ran into along the way are all recorded here.

The tutorial I used is at https://www.cnblogs.com/neutronman/p/8047547.html

Fixing pods stuck in the ContainerCreating state

Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory

Following what the tutorial says:

yum install *rhsm* -y

and then deleting the pod and recreating it did not solve the problem. The following method, found through further searching, finally did:

# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
# rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
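
With the CA certificate in place, the pause image should now be pullable; a quick manual sanity check (assuming docker is the container runtime, as in the tutorial):

# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest

If the pull succeeds, deleting and recreating the pod should bring it to Running.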

Creating the YAML file for the mysql service

  • The original tutorial does not set a nodePort, presumably because only one mysql pod is created, so it simply exposes 3306, mysql's usual port
  • More generally, given that more nodes may be added later, I recommend adding a nodePort before creating the service
# cat mysql-svc.yaml
apiVersion: v1
kind: Service                              # marks this object as a K8s Service
metadata:
  name: mysql                              # globally unique name of the Service
spec:
  ports:
    - port: 3306                           # port on which the Service serves
      nodePort: 31101
  selector:                                # the Service selects pods carrying this label
    app: mysql
  type: NodePort
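
Note that the nodePort has to fall within the apiserver's --service-node-port-range, which defaults to 30000-32767, so 31101 is a valid choice. Creating the service and checking the result:

# kubectl create -f mysql-svc.yaml
# kubectl get svc mysql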

Analysis of a few command outputs

services

# kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   10.254.0.1       <none>        443/TCP          1h
mysql        10.254.116.110   <nodes>       3306:31101/TCP   32m
myweb        10.254.93.182    <nodes>       8080:30001/TCP   1h
#
  • Besides kubernetes itself, two services are up: mysql and myweb
  • A service is an abstraction over a group of pods, effectively an LB in front of them, responsible for distributing requests to the matching pods
  • Note that every service gets a 10.254.x.y address: this Cluster-IP is a virtual IP
  • Pods reach a service through its Cluster-IP plus the service port, e.g. mysql is reached at 10.254.116.110:3306
  • From outside the K8s cluster, a service is reached through the server's IP plus the nodePort, e.g. opening http://<server-IP>:30001/ in a browser (see the quick check below)
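
A quick way to exercise both paths from the node itself (addresses taken from the listing above; output omitted):

# curl -s http://10.254.93.182:8080/ > /dev/null && echo cluster-IP ok
# curl -s http://127.0.0.1:30001/ > /dev/null && echo nodePort ok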


Service ports

# netstat -apn | grep 31101
tcp6       0      0 :::31101                :::*                    LISTEN      17216/kube-proxy    
# netstat -apn | grep 30001
tcp6       0      0 :::30001                :::*                    LISTEN      17216/kube-proxy    
# 
  • kube-proxy's main job is implementing services: concretely, access to a service from pods on the inside and from node ports on the outside both go through it
  • kube-proxy acts as the service abstraction for all pods, playing the role of transparent proxy and load balancer: a request aimed at a service gets forwarded to a backend pod according to a set of rules and algorithms
  • Whether a service is accessed from inside a pod or from outside, kube-proxy forwards the request to one concrete pod behind that service (the iptables check below makes this visible)
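
Incidentally, kube-proxy holds these listening sockets itself: in the older userspace mode it actually proxies traffic through them, while in iptables mode it merely reserves the ports and lets NAT rules do the redirecting. Either way, the NAT rules generated for a service can be dumped for inspection:

# iptables-save -t nat | grep 10.254.116.110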

pods

# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
mysql-f9f5j   1/1       Running   0          1h
myweb-0ghm7   1/1       Running   0          1h
myweb-5m78d   1/1       Running   0          1h
myweb-6g4x6   1/1       Running   0          1h
myweb-87fvm   1/1       Running   0          1h
myweb-svf8n   1/1       Running   0          1h
# 
  • A single mysql pod was created, while myweb got five in one go, which really is convenient (see the scaling example below)
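
Because myweb is managed by a ReplicationController, resizing it is a one-liner; for example, shrinking to three replicas and growing back:

# kubectl scale rc myweb --replicas=3
# kubectl scale rc myweb --replicas=5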

Internal IPs of the services

# kubectl describe svc mysql
Name:           mysql
Namespace:      default
Labels:         <none>
Selector:       app=mysql
Type:           NodePort
IP:         10.254.116.110
Port:            3306/TCP
NodePort:        31101/TCP
Endpoints:      172.17.0.2:3306
Session Affinity:   None
No events.
# 
# kubectl describe svc myweb
Name:           myweb
Namespace:      default
Labels:         <none>
Selector:       app=myweb
Type:           NodePort
IP:         10.254.93.182
Port:            8080/TCP
NodePort:        30001/TCP
Endpoints:      172.17.0.3:8080,172.17.0.4:8080,172.17.0.5:8080 + 2 more...
Session Affinity:   None
No events.
# 
  • mysql runs a single pod, so a single internal IP was assigned: 172.17.0.2
  • myweb runs five pods behind the LB; the internal IPs 172.17.0.3~7 point at those five pods, the members of the myweb service's load-balancing pool (see the endpoints listing below)
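
The pod addresses behind each service are tracked in Endpoints objects, which can be listed directly as well:

# kubectl get endpoints mysql myweb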

Network interfaces

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno16777984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:a9:7c:18 brd ff:ff:ff:ff:ff:ff
    inet 10.25.130.254/16 brd 10.25.255.255 scope global noprefixroute eno16777984
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea9:7c18/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:1c:ee:af brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:1c:ee:af brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f3:1e:6e:0c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f3ff:fe1e:6e0c/64 scope link 
       valid_lft forever preferred_lft forever
7: veth97cd88d@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether fe:31:43:92:2c:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc31:43ff:fe92:2cf3/64 scope link 
       valid_lft forever preferred_lft forever
9: vethd739ec1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 76:af:fd:c4:d3:f8 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::74af:fdff:fec4:d3f8/64 scope link 
       valid_lft forever preferred_lft forever
11: vethc9fcdfc@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ce:f2:84:06:79:86 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::ccf2:84ff:fe06:7986/64 scope link 
       valid_lft forever preferred_lft forever
13: veth842480c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether e2:20:a3:f5:6d:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::e020:a3ff:fef5:6dc8/64 scope link 
       valid_lft forever preferred_lft forever
15: vethc6943c5@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 62:71:e5:a2:9d:cb brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::6071:e5ff:fea2:9dcb/64 scope link 
       valid_lft forever preferred_lft forever
17: vethe9473c3@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 3e:5c:ca:eb:31:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::3c5c:caff:feeb:31f1/64 scope link 
       valid_lft forever preferred_lft forever
#
  • The docker0 bridge is visible, with IP 172.17.0.1/16
  • On the same node, every pod is hooked onto the docker0 bridge through a veth pair, and docker0 dynamically hands out IP addresses to the pods
  • Pods on the same node talk to each other over this bridge (the listing below shows which veths are attached)
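
To see exactly which veth interfaces are attached to the bridge, list its slaves (standard iproute2, so it should work on the CentOS host):

# ip link show master docker0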

The demo page not showing up at the end

  • Following the tutorial, visiting http://<server-IP>:30001/demo at the end did not show the table; instead the page reported an error:
Error:com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
  • Checking only http://<server-IP>:30001 proves nothing: Tomcat serves that page (the one with the cat) entirely on its own, even with no mysql connection at all
  • After some searching and experimenting, I settled on the myweb-rc.yaml below; my understanding is that the tomcat-app image finds the database through the MYSQL_SERVICE_HOST/MYSQL_SERVICE_PORT environment variables that K8s injects for the already-existing mysql service, so no extra env entries are needed
# cat myweb-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 5                                       # desired number of pod replicas
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: kubeguide/tomcat-app:v1
        ports: 
        - containerPort: 8080
  • Delete the existing service and RC, then recreate them:
# kubectl delete -f  myweb-svc.yaml
# kubectl delete -f  myweb-rc.yaml
# kubectl create -f  myweb-rc.yaml
# kubectl create -f  myweb-svc.yaml
  • And finally the table shows up!
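
To double-check from the command line rather than the browser, fetch the demo page on the node; a response containing table rows instead of the CommunicationsException text means the JDBC connection to mysql is working:

# curl -s http://127.0.0.1:30001/demo/ | head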


Getting a shell inside a pod

First, get the pod names:

# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
mysql-g7bxp   1/1       Running   0          50m
myweb-4640d   1/1       Running   0          38m
myweb-469nq   1/1       Running   0          38m
myweb-g47sb   1/1       Running   0          38m
myweb-kk2sb   1/1       Running   0          38m
myweb-lkwd4   1/1       Running   0          38m
#

For example, to get into myweb-4640d:

# kubectl exec -it myweb-4640d sh
# cat /etc/issue
Debian GNU/Linux 8 \n \l

# 
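
One-off commands also work without opening an interactive shell (the -- separates kubectl's own flags from the command to run in the container):

# kubectl exec myweb-4640d -- cat /etc/issue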

Annoyingly, the image has no vi/vim, and apt-get install fails out of the box; some apt sources have to be added first:

# cat > sources.list <
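
The heredoc above got truncated; for reference, here is a sources.list that should work for this Debian 8 (jessie) image (writing straight to /etc/apt/sources.list, and using the archive.debian.org mirror, which is an assumption on my part since jessie has since moved to the archive):

# cat > /etc/apt/sources.list << EOF
deb http://archive.debian.org/debian jessie main
EOF
# apt-get -o Acquire::Check-Valid-Until=false update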

After that, vim installs fine:

# apt-get install vim -y

That's all for now…
