03 | Kubernetes | Kubernetes in Practice


Pod

Continuous integration: commit code -> build -> deployable package -> build image -> push to the image registry
Deliverable: the image
Ways to implement it:
1. Manually
2. GitLab + Jenkins + Docker + Harbor
Continuous deployment with Kubernetes:
kubectl commands / YAML files -> create resources -> expose the application -> update the image
-> roll back to the previous or a specified image version -> delete the resources

Create resources

kubectl create deployment java-web --image=lizhenliang/java-demo
kubectl get pods


Expose the application

kubectl expose deployment java-web --port=8000 --target-port=8080 --name=java-web-service --type=NodePort
kubectl get service
kubectl get all


Update the image

kubectl get deployment
kubectl edit deployment java-web
kubectl set image deployment java-web java-demo=tomcat
kubectl get pods

kubectl exec -it java-web-85b88dc9f4-s49xh bash
ls
cd webapps
mkdir ROOT
cd ROOT
echo "Hello" > index.html
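To verify from a machine outside the cluster, curl the Service's NodePort; the placeholders below stand for a real node IP and the port shown by kubectl get service (this is an added check, not part of the original notes):

curl http://<NodeIP>:<NodePort>/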

Rollback

# View revision history
kubectl rollout history deployment/java-web


kubectl rollout undo deployment/java-web
kubectl get pods
kubectl rollout history deployment/java-web
# Record the command in the change-cause annotation
kubectl set image deployment java-web java-demo=tomcat --record=true
# Roll back to a specific revision
kubectl rollout undo deployment/java-web --to-revision=3
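It can also help to inspect exactly what a given revision contains; --revision works with rollout history (revision 3 is only an example number):

kubectl rollout history deployment/java-web --revision=3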

Delete

kubectl delete deployment/java-web

Scale out

kubectl create deployment java-web --image=lizhenliang/java-demo
kubectl scale deployment java-web --replicas=10
kubectl get pods
kubectl get pods -o wide

Deployment: stateless deployment

# Features:
Deploys stateless applications
Manages Pods and ReplicaSets (sets of replicas)
Supports rollout, replica management, rolling upgrades, and rollback
Provides declarative updates
# Stateless applications:
Don't depend on much supporting state, such as data storage or a fixed network identity
When a Pod dies and is recreated, its IP changes
No required startup order

Create

kubectl create deployment web --image=nginx --dry-run -o yaml > nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
kubectl apply -f nginx.yaml
kubectl get deployment
kubectl get rs

# View resources
kubectl api-resources
kubectl get po
kubectl get rs
kubectl describe rs web-5dcb957ccc

Expose

kubectl expose deployment web --port=90 --target-port=80 --type=NodePort --name=web --dry-run -o yaml > service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 90
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort
kubectl apply -f service.yaml
kubectl get svc


Upgrade

kubectl set image deployment web nginx=nginx:1.16


web-5dcb957ccc       0         0         0       29m
web-6cfc49fdb5       3         3         3       3m31s

What the ReplicaSet (RS) does:
1. Controls the number of replicas
2. Manages rolling upgrades (implemented using two RSes)
3. Tracks released versions

Rolling update: the Pods are not all replaced at once
up: scaling out  down: scaling in
kubectl describe rs web-5dcb957ccc   # the old RS

Normal  SuccessfulCreate  31m    replicaset-controller  Created pod: web-5dcb957ccc-tddlm
Normal  SuccessfulCreate  31m    replicaset-controller  Created pod: web-5dcb957ccc-d8fnr
Normal  SuccessfulCreate  31m    replicaset-controller  Created pod: web-5dcb957ccc-9pp8f
Normal  SuccessfulDelete  4m51s  replicaset-controller  Deleted pod: web-5dcb957ccc-9pp8f
Normal  SuccessfulDelete  2m46s  replicaset-controller  Deleted pod: web-5dcb957ccc-d8fnr
Normal  SuccessfulDelete  2m27s  replicaset-controller  Deleted pod: web-5dcb957ccc-tddlm
kubectl describe deploy web

Scaled up replica set web-bbcf684cb to 1
Scaled down replica set web-6cfc49fdb5 to 2
Scaled up replica set web-bbcf684cb to 2
Scaled down replica set web-6cfc49fdb5 to 1
Scaled up replica set web-bbcf684cb to 3
Scaled down replica set web-6cfc49fdb5 to 0
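The pace of that scale-up/scale-down is governed by the Deployment's update strategy. A minimal sketch of the relevant fields, with illustrative values rather than the demo cluster's actual settings:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 Pod above the desired replica count during the update
      maxUnavailable: 1   # at most 1 Pod may be unavailable during the update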

Rollback

kubectl rollout undo deployment/web --to-revision=2

kubectl rollout history deployment/web

Scale out

kubectl scale deployment/web --replicas=5
kubectl describe deployment/web

DaemonSet: daemon deployment

Features:
Runs one Pod on every Node
A newly joined Node automatically gets a Pod as well

Use cases:
Agents
Monitoring collectors
Log collectors


kubectl get pods -n kube-system

kube-flannel-ds-9fngr                1/1     Running   0          41h
kube-flannel-ds-d2w9q                1/1     Running   0          41h
kube-flannel-ds-hm6k4                1/1     Running   0          41h
cp nginx.yaml ds.yaml
vim ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: web
  name: monitor
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: monitor
kubectl apply -f ds.yaml
kubectl get pod

monitor-j4k9x              1/1     Running   0          25s
monitor-vrs6h              1/1     Running   0          25s
kubectl describe node k8s-master | grep Taint
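The master is normally tainted, which is why no monitor Pod lands there. If the agent should also run on the master, one option is a toleration in the DaemonSet's Pod template; a sketch assuming the common kubeadm taint key (verify it against the Taints output above):

    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: monitor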


Job: batch processing

Typical ops scheduled tasks:
1. Backups
2. Automatically restarting programs
3. Uploading to FTP
4. Pushing data

Typical developer scheduled tasks:
1. Data processing
2. Sending SMS notifications
3. Offline data processing


Centralized crontab management:
1. Ansible for syncing config files
2. A web management platform

Job

kubectl create job pi --image=perl --dry-run -o yaml


vim job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4


kubectl create -f job.yaml
kubectl get pod
kubectl get job
pi-5fkr7                   0/1     Completed   0          2m13s
kubectl logs pi-5fkr7

CronJob

Schedule fields: minute, hour, day of month, month, day of week
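A few illustrative schedule expressions (not taken from the original demo):

"*/1 * * * *"   # every minute
"0 2 * * *"     # every day at 02:00
"0 0 * * 0"     # every Sunday at midnight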

vim cronjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
kubectl apply -f cronjob.yaml
kubectl get pod
hello-1606445760-xnblw     0/1     Completed   0          37s
kubectl get cronjob


kubectl logs hello-1606445760-xnblw

Service

Purpose:
Keeps track of Pods so they are never "lost" (service discovery)
Defines an access policy for a group of Pods (load balancing)


A Service only does layer-4 load balancing:
Layer 4: the OSI transport layer, TCP/UDP, the 4-tuple; it only forwards IP packets
Layer 7: the OSI application layer, protocols such as HTTP, FTP, SNMP; since the protocol headers are visible, processing can be done at the protocol level
Three types:
ClusterIP: for use inside the cluster; a stable IP (a VIP) is allocated by default and is only reachable from Pods inside the cluster.
NodePort: exposes the application externally. A port is opened on every node so the service can be reached from outside the cluster; a stable internal ClusterIP is still allocated. Access address: <NodeIP>:<NodePort>
LoadBalancer: exposes the application externally, intended for public clouds. Like NodePort, a port is opened on every node; in addition, Kubernetes asks the underlying cloud platform for a load balancer and adds each node ([NodeIP]:[NodePort]) as a backend.
NodePort access flow:
user -> domain name (public IP) -> node ip:port -> iptables/ipvs -> pod

In production the nodes usually sit on a private network, so how do internet users reach a port like 30008?
1. Put Nginx on a server with a public IP and reverse-proxy -> node ip:port (see the Nginx sketch below)
2. Use your existing external load balancer (Nginx, LVS, HAProxy) -> node ip:port
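A minimal Nginx sketch of option 1, assuming two worker nodes on the private network and NodePort 30008 (the IPs are placeholders):

upstream k8s-nodeport {
    server 192.168.31.62:30008;   # placeholder node IP
    server 192.168.31.63:30008;   # placeholder node IP
}
server {
    listen 80;
    server_name example.ctnrs.com;
    location / {
        proxy_pass http://k8s-nodeport;
    }
}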



LoadBalancer access flow:
user -> domain name (public IP) -> cloud provider's load balancer (configured automatically by a controller) -> node ip:port



kubectl delete $(kubectl get deploy -o name)
kubectl delete ds monitor
kubectl delete cronjob hello



kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --target-port=80 --name=web --dry-run -o yaml > service.yaml
vim service.yaml


apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
kubectl apply -f service.yaml
kubectl get svc web -o yaml
cp service.yaml nodeport.yaml

vim nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort
kubectl apply -f nodeport.yaml
kubectl get svc
netstat -anpt | grep 31559
kubectl get ep
vim nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30008
  selector:
    app: web
  type: NodePort
kubectl apply -f nodeport.yaml
kubectl get svc


Proxy modes


userspace: forwarding implemented by kube-proxy itself in user space
iptables: can block IP traffic, do port-mapping NAT, track packet state, and rewrite packets
ipvs: LVS is a layer-4 load balancer built on the ip_vs kernel module; for example Alibaba Cloud SLB uses LVS FULLNAT at layer 4 and Tengine at layer 7
lsmod | grep ip_vs
# load the ip_vs modules
modprobe ip_vs
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4


kube-proxy:
1. Forwards packets to Pods
2. Turns Service definitions into concrete forwarding rules
iptables-save | grep KUBE-SVC-RRHDV4CFXHW7RWV3 > a
iptables rule chains:
NodePort -> KUBE-SVC-RRHDV4CFXHW7RWV3 -> KUBE-SEP-XE5XFBDN7LLHNVMO -> -j DNAT --to-destination 10.244.2.2:80
ClusterIP -> KUBE-SVC-RRHDV4CFXHW7RWV3 -> KUBE-SEP-XE5XFBDN7LLHNVMO -> -j DNAT --to-destination 10.244.2.2:80
kubectl scale deployment/web --replicas=3
kubectl get pods -o wide


Rules are matched from top to bottom
-A KUBE-SVC-RRHDV4CFXHW7RWV3 -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-BGV4R4Y32WAZO4AD

-A KUBE-SVC-RRHDV4CFXHW7RWV3 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XE5XFBDN7LLHNVMO

-A KUBE-SVC-RRHDV4CFXHW7RWV3 -j KUBE-SEP-CYUFO3BNAYR2R5O3

The rules above spread traffic across the 3 Pods (random selection with roughly equal probability).

kubectl get cm -n kube-system
kubectl edit cm kube-proxy -n kube-system

# in the editor, search (/mode) and change the proxy mode to:
mode: "ipvs"


kubectl get pods -n kube-system -o wide
# delete the kube-proxy Pod so it is recreated with the new mode
kubectl delete pod kube-proxy-rrp7z -n kube-system

yum -y install ipvsadm
ipvsadm -L -n

LVS terminology: virtual server and real servers

172.17.0.1 192.168.31.73 127.0.0.1 10.244.2.1 10.244.2.0
iptables:
Flexible and powerful
Rule matching and updates are sequential, so latency grows linearly with the number of rules

IPVS:
Works in kernel space and performs better
Rich set of scheduling algorithms: rr, wrr, lc, wlc, ip hash

DNS

Better options for a program than hard-coding IPs?
1. DNS resolution
2. hosts-file entries
CoreDNS Pod -> watches Services (via the apiserver) -> updates its local records
kubelet runs a Pod -> by default the Pod resolves names through CoreDNS

The DNS service watches the Kubernetes API and creates a DNS record for every Service so it can be resolved by name.
Expose applications with NodePort and put a load balancer in front as the unified entry point
Prefer the IPVS proxy mode
Inside the cluster, have applications talk to each other by DNS name
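For example, the Service named web in the default namespace can be resolved from a throwaway test Pod (busybox 1.28 is used because its nslookup behaves well):

kubectl run dns-test -it --rm --image=busybox:1.28 -- nslookup web.default.svc.cluster.local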

Ingress

Shortcomings of NodePort:
A port can only be used by one service, and ports have to be planned in advance
Only layer-4 load balancing is supported

Ingress, by contrast:
Associates with Pods through a Service
Load-balances the Pods through an Ingress Controller
Supports layer-4 TCP/UDP and layer-7 HTTP


Ingress Controller

Deploy the Ingress Controller
Create Ingress rules


Official docs:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
Notes:
Change the image to a mirror reachable from China: lizhenliang/nginx-ingress-controller:0.20.0
Use the host network: hostNetwork: true
Other mainstream controllers:
Traefik: HTTP reverse proxy and load balancer
Istio: service mesh / traffic management, controls ingress traffic
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml

kubectl delete -f deploy.yaml
user -> domain name -> node ip:80/443 -> ingress controller -> routing by host name -> pod
kubectl apply -f ingress-controller.yaml
kubectl get pods -n ingress-nginx
vim ingress.yml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.ctnrs.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
kubectl apply -f ingress.yml
kubectl get ingress
netstat -antp | grep 80
netstat -antp | grep 443
Official documentation:
https://kubernetes.io/docs/concepts/services-networking/ingress/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Routing to multiple services based on the URL

www.ctnrs.com/a  Beijing region
www.ctnrs.com/b  Shanghai region

How would this be done in plain Nginx?

location /a {
	proxy_pass http://xxx:80;
}

location /b {
	proxy_pass http://xxx:80;
}
kubectl create deployment web1 --image=tomcat
kubectl create deployment web2 --image=lizhenliang/java-demo

kubectl expose deployment web1 --port=80
kubectl expose deployment web2 --port=80

kubectl get pods,svc

kubectl edit svc web1
kubectl edit svc web2
kubectl get pods
kubectl exec -it web1-647ccf6958-ckqts bash


vim ingress.yml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.ctnrs.com
    http:
      paths:
      - path: /a
        backend:
          serviceName: web1
          servicePort: 80
  - host: example.ctnrs.com
    http:
      paths:
      - path: /b
        backend:
          serviceName: web2
          servicePort: 80
kubectl apply -f ingress.yml
ingress controller pod -> watches Services (via the apiserver) -> applies the config to its local nginx

The controller resolves the Pods behind each Service and renders them into the nginx config
nginx provides the layer-7 load balancing

High-availability options:

Pin the ingress controller to two nodes (DaemonSet + nodeSelector; see the labeling sketch below)
    user -> domain name -> VIP (keepalived) HA -> pod

Pin the ingress controller to two nodes (DaemonSet + nodeSelector)
    user -> domain name -> LB (nginx, lvs, haproxy) -> ingress controller -> pod
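A hedged sketch of the pinning itself: label the chosen nodes, then add a nodeSelector (and host networking) to the controller's Pod template. The node names and label key are made up for illustration:

kubectl label node k8s-node1 ingress=true
kubectl label node k8s-node2 ingress=true

# in the ingress controller DaemonSet's Pod template:
    spec:
      hostNetwork: true
      nodeSelector:
        ingress: "true"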

Managing application configuration

Secret

Stores sensitive data in etcd and lets the Pod's containers access it by mounting it as a volume.

Use cases:
HTTPS certificates
Docker registry credentials
File contents or strings, such as usernames and passwords
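A minimal sketch of the username/password case; the Secret name, keys, and values are invented for illustration:

kubectl create secret generic db-user-pass --from-literal=username=admin --from-literal=password='S3cret!'

# consume it in a container spec as environment variables:
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: password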

ConfigMap

Similar to Secret; the difference is that a ConfigMap holds configuration that does not need to be kept secret.
Use case: application configuration files

How can a ConfigMap change take effect in a running application?
Recreate the Pods
Have the application watch its local config file and hot-reload when it changes
Use a sidecar container to watch the config file and notify the application via a socket or HTTP call when it changes
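A small sketch of the config-file case, assuming a local app.properties file; the names and mount path are illustrative:

kubectl create configmap app-config --from-file=app.properties

# mount it so the application reads /etc/config/app.properties:
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config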

Pod data persistence

Volumes in Kubernetes provide the ability to mount external storage into containers
A Pod must declare both the volume source (spec.volumes) and the mount point (spec.containers.volumeMounts) before it can use a volume
Local volumes: hostPath, emptyDir
Network volumes: nfs, ceph (cephfs, rbd), glusterfs
Public cloud: aws, azure
Kubernetes resources: downwardAPI, configMap, secret

emptyDir

Creates an empty volume and mounts it into the Pod's containers. When the Pod is deleted, the volume is deleted with it.
Use case: sharing data between containers in the same Pod
vim emptydir.yaml

apiVersion: v1 
kind: Pod 
metadata: 
  name: my-pod 
spec: 
  containers: 
  - name: write 
    image: centos 
    command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"] 
    volumeMounts: 
      - name: data 
        mountPath: /data 
  
  - name: read 
    image: centos 
    command: ["bash","-c","tail -f /data/hello"] 
    volumeMounts: 
      - name: data 
        mountPath: /data 
  volumes: 
  - name: data 
    emptyDir: {}
kubectl apply -f emptydir.yaml
kubectl get pods
kubectl logs my-pod -c read
kubectl logs my-pod -c read -f


kubectl get pods -o wide
ls /var/lib/kubelet/
ls /var/lib/kubelet/pods/


docker ps 
docker ps |grep f37868fc
ls /var/lib/kubelet/pods/f37868fc-c4b4-45c1-b75c-9cc997165239
ls /var/lib/kubelet/pods/f37868fc-c4b4-45c1-b75c-9cc997165239/containers/
ls /var/lib/kubelet/pods/f37868fc-c4b4-45c1-b75c-9cc997165239/volumes/
ls /var/lib/kubelet/pods/f37868fc-c4b4-45c1-b75c-9cc997165239/volumes/kubernetes.io~empty-dir
ls /var/lib/kubelet/pods/f37868fc-c4b4-45c1-b75c-9cc997165239/volumes/kubernetes.io~empty-dir/data/


Run docker ps to check the container names and find the corresponding Pod ID
/var/lib/kubelet/pods/<POD ID>/volumes/kubernetes.io~empty-dir/data

hostPath

Mounts a file or directory from the Node's filesystem into the Pod's containers.
Use case: containers in the Pod need access to host files
emptyDir == volume
hostPath == bind mount    (log collection agents, monitoring agents, /proc)
vim hostpath.yaml

apiVersion: v1 
kind: Pod 
metadata: 
  name: my-pod-2 
spec: 
  containers: 
  - name: busybox 
    image: busybox 
    args: 
    - /bin/sh 
    - -c 
    - sleep 36000 
    volumeMounts: 
    - name: data 
      mountPath: /data 
  volumes: 
  - name: data 
    hostPath: 
      path: /tmp 
      type: Directory
kubectl apply -f hostpath.yaml
kubectl get pods
kubectl delete pod my-pod
kubectl get pods -o wide


kubectl exec -it my-pod-2 sh
touch /tmp/lzx


NFS

yum -y install nfs-utils
vim /etc/exports

/ifs/kubernetes *(rw,no_root_squash)
mkdir -p /ifs/kubernetes
systemctl start nfs-server
systemctl status nfs-server
systemctl enable nfs-server
yum -y install nfs-utils
mount -t nfs 10.19.151.244:/ifs/kubernetes /mnt/
umount /mnt/
kubectl create deployment web --image=nginx --dry-run -o yaml > web.yaml

vim web.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        nfs:
          server: 10.19.151.244
          path: /ifs/kubernetes
kubectl apply -f web.yaml
kubectl get pods
kubectl exec -it web-8df74d765-vsq5m bash
cd /usr/share/nginx/html
echo "lzx" > index.html
ls /ifs/kubernetes/
cat /ifs/kubernetes/index.html
kubectl scale deployment web --replicas=3
kubectl get pods
kubectl get pods -o wide

PersistentVolume

PersistentVolume (PV): an abstraction over creating and consuming storage, so storage is managed as a cluster resource
Static provisioning
Dynamic provisioning
PersistentVolumeClaim (PVC): lets users request storage without caring about the underlying volume implementation
cp web.yaml pv-pvc.yaml

Static provisioning

vim pv-pvc.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
mv pv-pvc.yaml deployment-pvc.yaml
vim pv.yaml

apiVersion: v1 
kind: PersistentVolume 
metadata: 
  name: pv00001
spec: 
  capacity: 
    storage: 5Gi 
  accessModes: 
    - ReadWriteMany 
  nfs: 
    path: /ifs/kubernetes/pv00001 
    server: 10.19.151.245

---
apiVersion: v1 
kind: PersistentVolume 
metadata: 
  name: pv00002
spec: 
  capacity: 
    storage: 10Gi 
  accessModes: 
    - ReadWriteMany 
  nfs: 
    path: /ifs/kubernetes/pv00002
    server: 10.19.151.245

---
apiVersion: v1 
kind: PersistentVolume 
metadata: 
  name: pv00003
spec: 
  capacity: 
    storage: 50Gi 
  accessModes: 
    - ReadWriteMany 
  nfs: 
    path: /ifs/kubernetes/pv00003
    server: 10.19.151.245 
kubectl apply -f pv.yaml
mkdir -p /ifs/kubernetes/{pv00001,pv00002,pv00003}
kubectl get pv
kubectl apply -f deployment-pvc.yaml
kubectl get pv
kubectl get pvc


kubectl get pods
kubectl exec -it web-8df74d765-cr9mc bash
cd /usr/share/nginx/html/
echo "lzx" > index.html
The capacities do not have to match exactly (PV != PVC); the closest suitable PV is chosen
Expansion: volume expansion is supported (at the Kubernetes level) since 1.11, but the backend storage must also support it
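If the StorageClass has allowVolumeExpansion enabled and the backend supports it, expanding is just a larger request on the PVC; a sketch assuming a claim named my-pvc:

kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'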

Dynamic provisioning

The core of the dynamic provisioning mechanism is the StorageClass API object.
A StorageClass declares a storage plugin (a provisioner) that creates PVs automatically.
https://kubernetes.io/docs/concepts/storage/storage-classes/
https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy
Three files from that repo are used:
class.yaml
deployment.yaml
rbac.yaml
mkdir nfs-client
cd nfs-client
vim class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
vim deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.19.151.245
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.19.151.245
            path: /ifs/kubernetes
vim rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f .
cd ..
cp deployment-pvc.yaml deployment-pvc-auto.yaml
kubectl get sc
vim deployment-pvc-auto.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-lzx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc-lzx
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-lzx
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
kubectl get pv
kubectl apply -f deployment-pvc-auto.yaml
kubectl get pv
kubectl get pvc
ls /ifs/kubernetes/


Deploying stateful applications

Question 1: an Nginx image; Node1 fails and a new Pod is automatically started on Node2. Can it keep serving?
Yes.


Question 2: a MySQL image; Node1 fails and a new Pod is automatically started on Node2. Can it keep serving?
No.

It only can if you persist the data in /var/lib/mysql.

Question 3: MySQL primary/replica, with the two Pods scheduled on Node1 and Node2; Node1 fails and a new Pod is started on Node2. Can it keep serving?
No.
Stateful applications: in Kubernetes this usually means distributed applications such as MySQL primary/replica, ZooKeeper clusters, and etcd clusters. They need:
1. Data persistence
2. Stable IP addresses and names (a fixed DNS name per Pod)
3. An ordered startup sequence


Stateless applications: nginx, APIs, microservice JARs

StatefulSet

StatefulSet:
Deploys stateful applications
Gives each Pod its own lifecycle while keeping Pod startup order and uniqueness
1. Stable, unique network identity and persistent storage
2. Ordered, graceful deployment, scaling, deletion, and termination
3. Ordered rolling updates
Use cases: databases, distributed applications

Stable network identity

Headless Service
mkdir 1225
cd 1225
kubectl expose deployment web --port=80 --target-port=80 --dry-run -o yaml > service.yaml

cat service.yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web-lzx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
status:
  loadBalancer: {}
cp service.yaml headless-service.yaml
vim headless-service.yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  clusterIP: None
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
status:
  loadBalancer: {}
kubectl get svc
kubectl delete svc web
kubectl apply -f service.yaml
kubectl apply -f headless-service.yaml
kubectl get svc


Stable storage

cp ../web.yaml sts.yaml
vim sts.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: web
  name: etcd
spec:
  serviceName: "etcd"
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
vim sts.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: etcd
spec:
  clusterIP: None
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: web
  name: etcd
spec:
  serviceName: "etcd"
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
kubectl apply -f sts.yaml
kubectl get pods


kubectl get svc


vim web.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        nfs:
          server: 10.19.151.244
          path: /ifs/kubernetes
kubectl apply -f web.yaml
kubectl get pods
kubectl get ep


vim bs.yaml

apiVersion: v1
kind: Pod
metadata:
  name: lzx
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
kubectl apply -f bs.yaml
kubectl get pods
kubectl exec -it lzx sh
kubectl delete -f bs.yaml
kubectl apply -f bs.yaml
kubectl get pods
nslookup web
nslookup etcd


vim sts.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: etcd
spec:
  clusterIP: None
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: etcd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: web
  name: etcd
spec:
  serviceName: "etcd"
  replicas: 3
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
kubectl delete -f sts.yaml
kubectl apply -f sts.yaml
kubectl get pods
kubectl exec -it lzx sh
nslookup etcd


ClusterIP A-record format:
<service-name>.<namespace-name>.svc.cluster.local
Headless (ClusterIP=None) A-record format:
<statefulsetName-index>.<service-name>.<namespace-name>.svc.cluster.local
Example: web-0.nginx.default.svc.cluster.local
A StatefulSet's storage volumes are created from a volumeClaimTemplates section,
the volume claim template; when the StatefulSet creates a PersistentVolume from the template,
it also allocates and creates a numbered PVC for each Pod.
kubectl get pv
vim sts.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: etcd
spec:
  clusterIP: None
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: etcd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: web
  name: etcd
spec:
  serviceName: "etcd"
  replicas: 3
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: data
          mountPath: /var/lib/etcd

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
kubectl delete -f sts.yaml
kubectl apply -f sts.yaml
kubectl get pods
kubectl get pv
ls /ifs/kubernetes/
kubectl exec -it etcd-0 bash
ls
cd /usr/share/nginx/html/
ls
cd /var/lib/etcd/
touch 1.data
ls
ls /ifs/kubernetes/default-data-etcd-0-pvc-5bc218d3-4cca-4ae1-84d7-f48f30a2590f
kubectl get pods
kubectl delete pod etcd-0
kubectl exec -it etcd-0 bash
ls /var/lib/etcd/
Members of a distributed application have a topology relationship with one another

pod - pvc - pv 


https://kubernetes.io/zh/docs/tutorials/stateful-application/zookeeper/
https://github.com/lizhenliang/etcd-statefulset

Security framework and users

Security framework

Access to cluster resources has to pass three gates: authentication, authorization, and admission control
For an ordinary user to access the API Server securely, a certificate, a token, or a username+password is usually required; Pods access it through a ServiceAccount
The Kubernetes security framework is controlled in the three stages below; each stage supports plugins, enabled via API Server configuration.

1. Authentication
2. Authorization
3. Admission Control


https://10.19.151.243:6443/


cat /root/.kube/config
kubectl get svc

https://kubernetes.default

Transport

Port: 6443
Communication uses HTTPS

Authentication

Three ways a client can authenticate:
HTTPS certificate authentication: digital certificates signed by the cluster CA
HTTP token authentication: a token identifies the user
HTTP basic authentication: username + password

Authorization

RBAC (Role-Based Access Control) is responsible for authorization.

Requests are allowed or denied based on their attributes:
user: user name
group: user groups
extra: additional user information
API
Request path: e.g. /api, /healthz
API request verbs: get, list, create, update, patch, watch, delete
HTTP request methods: get, post, put, delete
Resource
Subresource
Namespace
API group

Admission control

Admission Control is really a list of admission controller plugins; every request sent to the API Server is checked by each plugin in the list, and if any check fails the request is rejected.
docker ps
docker exec -it 40d83d2ab3a5 sh
kube-apiserver -h | grep admission-plugins
--enable-admission-plugins strings

Per-Pod resource limits: requests/limits
Per-namespace limits: ResourceQuota
Default Pod resource limits: LimitRanger
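A hedged sketch of the latter two, for a hypothetical dev namespace (all values are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-defaults
  namespace: dev
spec:
  limits:
  - type: Container
    default:            # limits applied when a container specifies none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # requests applied when a container specifies none
      cpu: 250m
      memory: 256Mi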

RBAC authorization

RBAC (Role-Based Access Control) allows policies to be configured dynamically through the Kubernetes API.

Roles
Role: grants access within a specific namespace
ClusterRole: grants access across all namespaces

Role bindings
RoleBinding: binds a Role to subjects
ClusterRoleBinding: binds a ClusterRole to subjects

Subjects
User: a user
Group: a user group
ServiceAccount: a service account
The certificate's CN field is used as the user name
tar -zxf cfssl.tar.gz -C /usr/local/bin/
ll /usr/local/bin/
Step 1: generate a client certificate for the user with cfssl
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat > aliang-csr.json <<EOF
{
  "CN": "aliang",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes aliang-csr.json | cfssljson -bare aliang
Step 2: generate a kubeconfig for the user
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.19.151.243:6443 \
  --kubeconfig=aliang.kubeconfig

# Set the client credentials
kubectl config set-credentials aliang \
  --client-key=aliang-key.pem \
  --client-certificate=aliang.pem \
  --embed-certs=true \
  --kubeconfig=aliang.kubeconfig

# Set the default context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=aliang \
  --kubeconfig=aliang.kubeconfig

# Use this context
kubectl config use-context kubernetes --kubeconfig=aliang.kubeconfig
cat aliang.kubeconfig
kubectl --kubeconfig=aliang.kubeconfig get pods
RBAC authorization steps:
1. Create a role (the rules)
2. Create a subject (the user)
3. Bind the role to the subject
vim rbac.yaml

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: aliang
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f rbac.yaml
kubectl --kubeconfig=aliang.kubeconfig get pods
kubectl --kubeconfig=aliang.kubeconfig delete pod etcd-0
kubectl --kubeconfig=aliang.kubeconfig get deploy
Examples of authorization rules:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
cat /home/nfs-client/rbac.yaml
vim rbac.yaml

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: ["extensions","apps",""]
  resources: ["pods","services","deployments"]
  verbs: ["get", "watch", "list","delete"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: aliang
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f rbac.yaml
kubectl --kubeconfig=aliang.kubeconfig get deploy
kubectl --kubeconfig=aliang.kubeconfig get svc
kubectl --kubeconfig=aliang.kubeconfig delete svc java-web-service
kubectl --kubeconfig=aliang.kubeconfig delete svc web1
kubectl --kubeconfig=aliang.kubeconfig delete svc web2
kubectl --kubeconfig=aliang.kubeconfig delete svc web-lzx
kubectl --kubeconfig=aliang.kubeconfig delete svc web-2
