[Cloud-Native Kubernetes in Practice] Deploying the Weave Scope Monitoring Platform on a k8s Cluster

  • I. Introduction to Weave Scope
    • 1. What is Weave Scope
    • 2. Features of Weave Scope
    • 3. Components of Weave Scope
  • II. Check the local Kubernetes cluster status
    • 1. Check the worker node status
    • 2. Check the system pod status
  • III. Install NFS shared storage
    • 1. Install NFS
    • 2. Create the shared directory
    • 3. Configure the shared directory
    • 4. Apply the configuration
    • 5. Restart the NFS-related services
      • ① Enable the NFS services at boot
      • ② Restart the NFS services
    • 6. Check the NFS share from other nodes
  • IV. Configure the StorageClass
    • 1. Write the sc.yaml file
    • 2. Apply the sc.yaml file
    • 3. View the sc resource object
  • V. Install the ingress load balancer
    • 1. Download the ingress-nginx YAML file
    • 2. Create the load balancer
    • 3. Check the ingress status
  • VI. Install the Weave Scope server
    • 1. Create the namespace
    • 2. Edit scope-app.yaml
    • 3. Apply the scope-app.yaml file
    • 4. Check the pod status
  • VII. Install the Weave Scope agents
    • 1. Edit the scope-agent.yaml file
    • 2. Apply the scope-agent.yaml file
    • 3. Check the pod status
  • VIII. Check the ingress-nginx status
    • 1. View the ingress controller details
    • 2. View the scope-app Service
    • 3. View the reverse proxy
  • IX. Access Weave Scope
    • 1. Configure the hosts file on your local machine
    • 2. Access the web UI
    • 3. View the Kubernetes nodes
    • 4. View the system components
    • 5. View the status of a worker node
    • 6. View pod status
    • 7. View the logs of a pod
    • 8. View pod details

I. Introduction to Weave Scope

1. What is Weave Scope

Weave Scope is a visual monitoring tool for Docker and Kubernetes. It gives you a top-down view of your applications as well as of the entire infrastructure, making it easy to monitor distributed, containerized applications in real time and to diagnose problems.

2. Features of Weave Scope

1. Interactive topology view
2. Graph and table modes
3. Filtering
4. Search
5. Real-time metrics
6. Container troubleshooting
7. Plugin extensions

3. Components of Weave Scope

Probe (agent): collects information about containers and their hosts and sends it to the App.
App: processes the collected information, generates the corresponding reports, and presents them through an interactive UI.

II. Check the local Kubernetes cluster status

1. Check the worker node status

[root@k8s-master ~]# kubectl get nodes -owide
NAME         STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   7d15h   v1.23.1   192.168.3.201   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.6
k8s-node01   Ready    <none>                 7d15h   v1.23.1   192.168.3.202   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.6
k8s-node02   Ready    <none>                 7d15h   v1.23.1   192.168.3.203   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.6

2. Check the system pod status

[root@k8s-master ~]# kubectl get pods -n kube-system 
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-7bc6547ffb-2nf66   1/1     Running   1 (23h ago)   7d15h
calico-node-8c4pn                          1/1     Running   1 (27h ago)   7d15h
calico-node-f28qq                          1/1     Running   1 (23h ago)   7d15h
calico-node-wmc2j                          1/1     Running   1 (23h ago)   7d15h
coredns-6d8c4cb4d-6gm4x                    1/1     Running   1 (23h ago)   7d15h
coredns-6d8c4cb4d-7vxlz                    1/1     Running   1 (23h ago)   7d15h
etcd-k8s-master                            1/1     Running   1 (23h ago)   7d15h
kube-apiserver-k8s-master                  1/1     Running   1 (23h ago)   7d15h
kube-controller-manager-k8s-master         1/1     Running   1 (23h ago)   7d15h
kube-proxy-8dfw8                           1/1     Running   1 (23h ago)   7d15h
kube-proxy-ghzrv                           1/1     Running   1 (23h ago)   7d15h
kube-proxy-j867z                           1/1     Running   1 (27h ago)   7d15h
kube-scheduler-k8s-master                  1/1     Running   1 (23h ago)   7d15h

III. Install NFS shared storage

1. Install NFS

Run this on every node; the master (192.168.3.201) will act as the NFS server.

yum install -y nfs-utils

2. Create the shared directory

mkdir -p /nfs/data

3. Configure the shared directory

echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

4. Apply the configuration

exportfs -r

5. Restart the NFS-related services

① Enable the NFS services at boot

systemctl enable --now rpcbind
systemctl enable --now nfs-server

② Restart the NFS services

service rpcbind stop
service nfs stop
service rpcbind start
service nfs start

6. Check the NFS share from other nodes

[root@k8s-node01 ~]#  showmount -e 192.168.3.201
Export list for 192.168.3.201:
/nfs/data *
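
As an optional sanity check, you can mount the share by hand from a worker node and write a test file (the mount point /mnt/nfs-test is just an example name):

mkdir -p /mnt/nfs-test
mount -t nfs 192.168.3.201:/nfs/data /mnt/nfs-test
touch /mnt/nfs-test/testfile && ls -l /mnt/nfs-test
umount /mnt/nfs-test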

IV. Configure the StorageClass

1. Write the sc.yaml file

[root@k8s-master scope]# cat sc.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: nfs-storage
 annotations:
   storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
 archiveOnDelete: "true"  

---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: nfs-client-provisioner
 labels:
   app: nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
spec:
 replicas: 1
 strategy:
   type: Recreate
 selector:
   matchLabels:
     app: nfs-client-provisioner
 template:
   metadata:
     labels:
       app: nfs-client-provisioner
   spec:
     serviceAccountName: nfs-client-provisioner
     containers:
       - name: nfs-client-provisioner
         image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
         # resources:
         #    limits:
         #      cpu: 10m
         #    requests:
         #      cpu: 10m
         volumeMounts:
           - name: nfs-client-root
             mountPath: /persistentvolumes
         env:
           - name: PROVISIONER_NAME
             value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.3.201 ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data ## directory exported by the NFS server
     volumes:
       - name: nfs-client-root
         nfs:
           server: 192.168.3.201
           path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
 name: nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: nfs-client-provisioner-runner
rules:
 - apiGroups: [""]
   resources: ["nodes"]
   verbs: ["get", "list", "watch"]
 - apiGroups: [""]
   resources: ["persistentvolumes"]
   verbs: ["get", "list", "watch", "create", "delete"]
 - apiGroups: [""]
   resources: ["persistentvolumeclaims"]
   verbs: ["get", "list", "watch", "update"]
 - apiGroups: ["storage.k8s.io"]
   resources: ["storageclasses"]
   verbs: ["get", "list", "watch"]
 - apiGroups: [""]
   resources: ["events"]
   verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: run-nfs-client-provisioner
subjects:
 - kind: ServiceAccount
   name: nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
roleRef:
 kind: ClusterRole
 name: nfs-client-provisioner-runner
 apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: leader-locking-nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
rules:
 - apiGroups: [""]
   resources: ["endpoints"]
   verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: leader-locking-nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
subjects:
 - kind: ServiceAccount
   name: nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
roleRef:
 kind: Role
 name: leader-locking-nfs-client-provisioner
 apiGroup: rbac.authorization.k8s.io

2. Apply the sc.yaml file

[root@k8s-master scope]# kubectl apply -f sc.yaml 
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

3. View the sc resource object

[root@k8s-master scope]# kubectl get sc
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  6m34s
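
Before moving on, you can confirm that dynamic provisioning works by creating a throwaway PVC; a minimal sketch (the name test-pvc is arbitrary), which should reach STATUS Bound within a few seconds:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc test-pvc
kubectl delete pvc test-pvc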

V. Install the ingress load balancer

1. Download the ingress-nginx YAML file

The link below is a pre-signed object-storage URL and may have expired by the time you read this; any ingress-nginx deployment manifest matching your cluster version works just as well.

wget 'https://oss-public.obs.cn-south-1.myhuaweicloud.com:443/ingress-nginx/ingress-nginx.yml?AccessKeyId=8QZQXILP1SCWCCLMSGIH&Expires=1660039750&Signature=2QsNqXejoifFVJjaJl7XSa88AgY%3D'

2. Create the load balancer

[root@k8s-master scope]# kubectl apply -f ingress-nginx.yml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

3. Check the ingress status

[root@k8s-master scope]# kubectl get pods -n ingress-nginx -owide
NAME                                        READY   STATUS      RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-79cm5        0/1     Completed   0          34m   10.244.85.193   k8s-node01   <none>           <none>
ingress-nginx-admission-patch-jbz68         0/1     Completed   0          34m   10.244.85.194   k8s-node01   <none>           <none>
ingress-nginx-controller-7bcfbb6786-tdv6n   1/1     Running     0          34m   192.168.3.203   k8s-node02   <none>           <none>
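
Note that the controller pod's IP (192.168.3.203) is the address of k8s-node02 itself, which suggests this manifest runs the controller on the host network. You can check how the controller is exposed with (assuming the usual ingress-nginx object names):

kubectl get svc -n ingress-nginx
kubectl -n ingress-nginx get deploy ingress-nginx-controller -o jsonpath='{.spec.template.spec.hostNetwork}'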

VI. Install the Weave Scope server

1. Create the namespace

[root@k8s-master scope]# kubectl create namespace weave
namespace/weave created

2. Edit scope-app.yaml

[root@k8s-master scope]# cat scope-app.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: weave

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: weave-scope
  namespace: weave
  labels:
    name: weave-scope

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: weave-scope
  labels:
    name: weave-scope
rules:
  - apiGroups:
      - ''
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - delete
  - apiGroups:
      - ''
    resources:
      - pods/log
      - services
      - nodes
      - namespaces
      - persistentvolumes
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - deployments
      - daemonsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - deployments/scale
    verbs:
      - get
      - update
  - apiGroups:
      - extensions
    resources:
      - deployments/scale
    verbs:
      - get
      - update
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - volumesnapshot.external-storage.k8s.io
    resources:
      - volumesnapshots
      - volumesnapshotdatas
    verbs:
      - list
      - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: weave-scope
  labels:
    name: weave-scope
roleRef:
  kind: ClusterRole
  name: weave-scope
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: weave-scope
    namespace: weave

---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: scope-app
  namespace: weave
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: abc.scope.com
    http:
      paths:
      - backend:
          service:
            name: weave-scope-app
            port: 
              number: 80
        path: /
        pathType: Prefix
---
apiVersion: v1
kind: Service
metadata:
  name: weave-scope-app
  namespace: weave
  labels:
    name: weave-scope-app
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: app
spec:
  ports:
    - name: app
      port: 80
      protocol: TCP
      targetPort: 4040
#      nodePort: 31232
  selector:
    name: weave-scope-app
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: app
  # type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weave-scope-app
  namespace: weave
  labels:
    name: weave-scope-app
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: weave-scope-app
      app: weave-scope
      weave-cloud-component: scope
      weave-scope-component: app
  template:
    metadata:
      labels:
        name: weave-scope-app
        app: weave-scope
        weave-cloud-component: scope
        weave-scope-component: app
    spec:
      containers:
        - name: app
          image: docker.io/weaveworks/scope:1.13.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4040
              protocol: TCP
          args:
            - '--mode=app'
          command:
            - /home/weave/scope
          env: []


3. Apply the scope-app.yaml file

kubectl apply -f scope-app.yaml

4. Check the pod status

[root@k8s-master scope]# kubectl get pod -n weave 
NAME                               READY   STATUS    RESTARTS   AGE
weave-scope-app-75df8f8754-kr9mv   1/1     Running   0          11m
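
If you want to peek at the UI before the Ingress is wired up, a port-forward works as a quick check (run it wherever you have kubectl access, then open http://localhost:4040 from that machine):

kubectl port-forward -n weave svc/weave-scope-app 4040:80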

VII. Install the Weave Scope agents

1. Edit the scope-agent.yaml file

[root@k8s-master scope]# cat scope-agent.yaml 
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: weave-scope-cluster-agent
  namespace: weave
  labels:
    name: weave-scope-cluster-agent
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: cluster-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      name: weave-scope-cluster-agent
      app: weave-scope
      weave-cloud-component: scope
      weave-scope-component: cluster-agent
  template:
    metadata:
      labels:
        name: weave-scope-cluster-agent
        app: weave-scope
        weave-cloud-component: scope
        weave-scope-component: cluster-agent
    spec:
      serviceAccountName: weave-scope
      containers:
        - name: scope-cluster-agent
          image: docker.io/weaveworks/scope:1.13.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4041
              protocol: TCP
          args:
            - '--mode=probe'
            - '--probe-only'
            - '--probe.kubernetes.role=cluster'
            - '--probe.http.listen=:4041'
            - '--probe.publish.interval=4500ms'
            - '--probe.spy.interval=2s'
            - 'weave-scope-app.weave.svc.cluster.local:80'
          command:
            - /home/weave/scope
          env: []
          resources:
            limits:
              memory: 2000Mi
            requests:
              cpu: 25m
              memory: 80Mi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-scope-agent
  namespace: weave
  labels:
    name: weave-scope-agent
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: agent
spec:
  updateStrategy:
    type: RollingUpdate
  minReadySeconds: 5
  selector:
    matchLabels:
      name: weave-scope-agent
      app: weave-scope
      weave-cloud-component: scope
      weave-scope-component: agent
  template:
    metadata:
      labels:
        name: weave-scope-agent
        app: weave-scope
        weave-cloud-component: scope
        weave-scope-component: agent
    spec:
      containers:
        - name: scope-agent
          image: docker.io/weaveworks/scope:1.13.1
          imagePullPolicy: IfNotPresent
          args:
            - '--mode=probe'
            - '--probe-only'
            - '--probe.kubernetes.role=host'
            - '--probe.publish.interval=4500ms'
            - '--probe.spy.interval=2s'
            - '--probe.docker.bridge=docker0'
            - '--probe.docker=true'
            - 'weave-scope-app.weave.svc.cluster.local:80'
          command:
            - /home/weave/scope
          env: []
          resources:
            limits:
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 100Mi
          securityContext:
            privileged: true
          volumeMounts:
            - name: scope-plugins
              mountPath: /var/run/scope/plugins
            - name: sys-kernel-debug
              mountPath: /sys/kernel/debug
            - name: docker-socket
              mountPath: /var/run/docker.sock
      volumes:
        - name: scope-plugins
          hostPath:
            path: /var/run/scope/plugins
        - name: sys-kernel-debug
          hostPath:
            path: /sys/kernel/debug
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      hostPID: true
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists


2. Apply the scope-agent.yaml file

kubectl apply -f scope-agent.yaml

3. Check the pod status

[root@k8s-master scope]# kubectl get pod -n weave 
NAME                                        READY   STATUS    RESTARTS   AGE
weave-scope-agent-pbzkz                     1/1     Running   0          123m
weave-scope-agent-xj76q                     1/1     Running   0          123m
weave-scope-agent-zbp75                     1/1     Running   0          123m
weave-scope-app-75df8f8754-kr9mv            1/1     Running   0          136m
weave-scope-cluster-agent-86f4db4c7-9fhnm   1/1     Running   0          40s
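
The DaemonSet should place exactly one agent per node (three here, matching the three-node cluster). You can confirm the placement using the name label from the manifest:

kubectl get pods -n weave -o wide -l name=weave-scope-agent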

VIII. Check the ingress-nginx status

1. View the ingress controller details

[root@k8s-master scope]# kubectl get pod -n ingress-nginx -owide
NAME                                        READY   STATUS      RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-79cm5        0/1     Completed   0          3h    10.244.85.193   k8s-node01   <none>           <none>
ingress-nginx-admission-patch-jbz68         0/1     Completed   0          3h    10.244.85.194   k8s-node01   <none>           <none>
ingress-nginx-controller-7bcfbb6786-tdv6n   1/1     Running     0          3h    192.168.3.203   k8s-node02   <none>           <none>

2. View the scope-app Service

[root@k8s-master scope]# kubectl get svc -n weave 
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
weave-scope-app   ClusterIP   10.105.150.140   <none>        80/TCP    119m

3. View the reverse proxy

[root@k8s-master scope]# kubectl get ingress -n weave 
NAME        CLASS    HOSTS           ADDRESS         PORTS   AGE
scope-app   <none>   abc.scope.com   192.168.3.203   80      21m
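
You can verify the proxying from any Linux host before touching the Windows hosts file by supplying the Host header by hand; an HTTP 200 with the Scope UI's HTML means the whole chain works:

curl -I -H "Host: abc.scope.com" http://192.168.3.203/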

IX. Access Weave Scope

1. Configure the hosts file on your local machine

C:\Windows\System32\drivers\etc\hosts
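
Add an entry mapping the Ingress host to the address reported by kubectl get ingress (192.168.3.203 in this cluster):

192.168.3.203 abc.scope.com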

2. Access the web UI

Browse to http://abc.scope.com/ and the Scope topology view should load.

(screenshot: the Weave Scope web UI)

3. View the Kubernetes nodes

(screenshot: the Kubernetes nodes view)

4. View the system components

(screenshot: the system components view)

5. View the status of a worker node

(screenshot: status details for a worker node)

6. View pod status

(screenshot: the pod status view)

7. View the logs of a pod

(screenshot: the log view for a pod)

8. View pod details

(screenshot: the details panel for a pod)
