Using Kubernetes via Rancher

  • Ubuntu; macOS is even better, and Windows is fine too if you know your way around it.

Summary

Kubernetes is sprawling. A newcomer who works hard enough to glimpse the whole picture will still use less than a tenth of it in practice. With Rancher you skip days of grinding study and can use the mature, commonly needed k8s features right away. A quick look at the basic k8s architecture and commands is all it takes to get started.

Configuring rancher-cli

Reference

1. Download / configure / log in

  1. Download link
  2. On Linux, add the rancher binary to your PATH so the command can be run from any directory (see the sketch after this list).
  3. Create an API token in the Rancher web console, then log in with the rancher login command:
rancher login https://xx.xx.xx.xx/v3  --token xxxxxxxxxx
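
For step 2, a minimal sketch of getting rancher onto the PATH on Linux (the paths are only an example; adjust to wherever you unpacked the download):

# example only: make the rancher binary executable and reachable from any directory
sudo mv ./rancher /usr/local/bin/rancher
sudo chmod +x /usr/local/bin/rancher
rancher --version   # verify it now runs from any directory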

2. Cluster monitoring commands

  • Switch the project (context)
jia@jia:~$ rancher context switch
NUMBER    CLUSTER NAME          PROJECT ID        PROJECT NAME   PROJECT DESCRIPTION
1         xqkj-kubernetes       c-6mxlt:p-d5h6f   系统空间           System project created for the cluster
2         xqkj-kubernetes       c-6mxlt:p-qr9kv   zy             
3         xqkj-kubernetes       c-6mxlt:p-xgcdx   默认空间           Default project created for the cluster
4         xqkj-kubernetes-pro   c-xmqxg:p-62jtx   System         System project created for the cluster
5         xqkj-kubernetes-pro   c-xmqxg:p-m8tjm   Default        Default project created for the cluster
Select a Project:3
INFO[0008] Setting new context to project 默认空间          
INFO[0008] Saving config to /home/jia/.rancher/cli2.json 

  • Check cluster status
jia@jia:~$ rancher kubectl cluster-info
Kubernetes master is running at https://47.102.41.70/k8s/clusters/c-6mxlt
metrics-server is running at https://47.102.41.70/k8s/clusters/c-6mxlt/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://47.102.41.70/k8s/clusters/c-6mxlt/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  • Check node status
jia@jia:~$ rancher kubectl top nodes
NAME                                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
cn-shanghai.i-uf62lelgjwuyiwuxfp6j   121m         6%     5163Mi          75%       
cn-shanghai.i-uf68nqijeqfoc81wkqvb   160m         8%     5923Mi          86%       
cn-shanghai.i-uf68nqijeqfoc81wkqvc   141m         7%     4379Mi          64%  
  • Check pod status
jia@jia:~$ rancher kubectl top pods
NAME                             CPU(cores)   MEMORY(bytes)   
apollo-admin-694b45b7bc-86sm9    3m           383Mi           
apollo-config-7496bf6d7f-xq5fr   6m           455Mi           
bell-5489bcdc56-qfgrg            8m           766Mi           
confucius-697c5c5fd5-fhj4b       14m          1049Mi          
kotler-59c7cd7c48-fqw9s          13m          745Mi           
platon-95c5c8cfc-dh2zl           9m           1054Mi          
socrates-546bcd7f5b-5vbhp        11m          944Mi           
zipkin-db66577bc-k44sx           4m           254Mi 
  • Inspect pod details
jia@jia:~$ rancher kubectl describe  pod  apollo-admin-694b45b7bc-86sm9
Name:               apollo-admin-694b45b7bc-86sm9
Namespace:          default
Priority:           0
PriorityClassName:  
Node:               cn-shanghai.i-uf68nqijeqfoc81wkqvb/172.19.100.137
Start Time:         Mon, 22 Apr 2019 13:34:34 +0800
Labels:             pod-template-hash=694b45b7bc
                    workload.user.cattle.io/workloadselector=deployment-default-apollo-admin
Annotations:        cattle.io/timestamp: 2019-04-22T05:34:34Z
                    field.cattle.io/ports:
                      [[{"containerPort":8090,"dnsName":"apollo-admin-loadbalancer","kind":"LoadBalancer","name":"8090tcp80873","protocol":"TCP","sourcePort":80...
Status:             Running
IP:                 10.0.1.185
Controlled By:      ReplicaSet/apollo-admin-694b45b7bc
Containers:
  apollo-admin:
    Container ID:   docker://d8b9bcf44b6df9a713e3f23ac756d3cd73e6ed091744f3c9d8aae964d9dcaafe
    Image:          registry-vpc.cn-shanghai.aliyuncs.com/quanwai_base/base:apollo-adminserviceV1
    Image ID:       docker-pullable://registry-vpc.cn-shanghai.aliyuncs.com/quanwai_base/base@sha256:d6ba0ee2152fac329eaf34442f533cd45e55ec3004a7fcb1bd67aaaee786ce3d
                                        ...............

3. Common deployment commands

Every deployment command must be given a resource manifest (the deployment file).

  • kubectl apply -f
Creates the resource if it does not exist, otherwise updates it:
jia@jia:~/IdeaProjects/kotler$ rancher kubectl apply -f deployment.yaml 
configmap/filebeat-kotler unchanged
deployment.apps/kotler configured
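
Besides apply, a few other commands come up constantly when working with deployments (an illustrative, non-exhaustive list; the kotler names are just the examples from this cluster):

rancher kubectl get deployments                    # list deployments in the current project
rancher kubectl get pods -o wide                   # see which node each pod landed on
rancher kubectl rollout status deployment/kotler   # wait for a rolling update to finish
rancher kubectl rollout undo deployment/kotler     # roll back to the previous revision
rancher kubectl logs -f kotler-59c7cd7c48-fqw9s    # tail a pod's logs
rancher kubectl delete -f deployment.yaml          # delete everything declared in the file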

Managing the k8s cluster with the Rancher console

Once the following points are clear, the Rancher console is enough for straightforward management of a k8s cluster.

  1. Image registry
    Public registries work by default; a private registry has to be configured, otherwise your own images cannot be pulled. The private registry is configured as shown below:


    [screenshot: private registry configuration]
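
    Under the hood this boils down to an ordinary Kubernetes docker-registry secret that the workload references through imagePullSecrets. Roughly equivalent to the following (the secret name and credentials are placeholders):

rancher kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry-vpc.cn-shanghai.aliyuncs.com \
  --docker-username=<user> --docker-password=<password>

    and, in the pod spec of the deployment:

spec:
  imagePullSecrets:
  - name: my-registry-secret
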
  2. Container ports
    Most containers need to expose a port so they can be reached from outside. Following the k8s port-exposure rules, Rancher offers the following options (screenshot and a plain-Kubernetes sketch below):

    [screenshot: port mapping options]
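
    In plain Kubernetes terms these options come down to a containerPort on the pod plus either a HostPort or a Service (ClusterIP / NodePort / load balancer). A minimal NodePort sketch, with illustrative names and ports:

apiVersion: v1
kind: Service
metadata:
  name: kotler-nodeport            # illustrative name
  namespace: default
spec:
  type: NodePort
  selector:
    workload.user.cattle.io/workloadselector: deployment-default-kotler
  ports:
  - port: 8081                     # service port inside the cluster
    targetPort: 8081               # the pod's containerPort
    nodePort: 30081                # port opened on every node (30000-32767)
    protocol: TCP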

  3. Container volume mounts

    1. Mounting a ConfigMap volume (a YAML sketch follows this sub-list)
      1. Declare the ConfigMap volume


        [screenshot: declaring the ConfigMap volume]
      2. Mount the ConfigMap volume


        [screenshot: mounting the ConfigMap volume]
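
      In the deployment file the two screenshots map to a configMap volume on the pod plus a volumeMount in the container. A minimal sketch (the cat-config ConfigMap and the mount path are the ones used in the full deployment at the end of this post):

      containers:
      - name: kotler
        volumeMounts:
        - name: cat                  # mount the ConfigMap into the container
          mountPath: /data/appdatas/cat
      volumes:
      - name: cat                    # declare the ConfigMap as a pod volume
        configMap:
          name: cat-config
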
    2. Mounting a storage volume (PV/PVC)
      1. A storage volume is for persisting files. Every container in the cluster can reach it, much like a distributed file store. The Sensors Analytics tracking-data collection problem was solved exactly this way, with the NAS storage option.
      2. First request a NAS storage service on Alibaba Cloud, then use Rancher to create a PV on the cluster, and finally create a PVC in the namespace. A PV is a persistent volume shared across the cluster, so it has to be created at the cluster level; a PVC is declared inside a namespace and states the desired PV properties (access mode, size, and so on).

[screenshot: creating the PV]

[screenshot: creating the PVC]

[screenshot: using the volume]
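
A rough YAML equivalent of the three screenshots above, assuming the Alibaba Cloud NAS instance is mounted as an NFS endpoint (the server address is a placeholder; the PVC name nas-for-kubernetes is the one referenced by the deployment at the end of this post):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas-pv                              # cluster-scoped; illustrative name
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteMany                           # many pods may read and write
  nfs:
    server: xxxx.cn-shanghai.nas.aliyuncs.com   # placeholder NAS mount target
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-for-kubernetes                  # claimName used by the deployment
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi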

    3. Shared volumes between containers
      A shared volume lets several containers in the same pod share one directory. When Filebeat collects a service container's logs, the Filebeat container has to share the service container's log directory. (Within the pod, the service container is the main container and the log collector is the sidecar.)

      Create an emptyDir volume in the main container and the same one in the sidecar; both must mount the same directory (see the sketch below).


[screenshot: creating the emptyDir volume]
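
Stripped down to just the shared-log part, the pod spec looks roughly like this (the full deployment at the end of this post shows it in context):

      containers:
      - name: kotler                     # main container writes its logs here
        volumeMounts:
        - name: applogs
          mountPath: /data/applogs/kotler
      - name: filebeat                   # sidecar reads the same directory
        volumeMounts:
        - name: applogs
          mountPath: /data/applogs/kotler
      volumes:
      - name: applogs
        emptyDir: {}                     # pod-scoped shared empty directory
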
  4. Container health checks
    The service should expose a heartbeat endpoint so that k8s can probe it, judge the container's state, and reschedule the container in time.
    Configured in Rancher as follows:
    [screenshot: health check configuration]

    A few things to watch: give the service plenty of initialization time, and don't probe too often; an interval of roughly 30 seconds is enough, with a timeout of 3 to 5 seconds.
livenessProbe: # liveness probe
  failureThreshold: 3 # allowed number of consecutive failures
  httpGet:
    path: /promotion/heartbeat
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 5
  successThreshold: 1
  timeoutSeconds: 3
readinessProbe:  # readiness probe
  failureThreshold: 3
  httpGet:
    path: /promotion/heartbeat
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 5
  successThreshold: 2
  timeoutSeconds: 3
  5. Injecting environment variables
    Environment variables mainly serve as the key that lets a service pick the right (production vs. test) configuration at startup. However the variable is referenced, it ends up as an ordinary key-value Linux environment variable.

    [screenshot: environment variable configuration]
- env: # injected environment variables
    - name: APOLLO_META
      value: http://139.224.169.234:8086
    - name: env
      value: dev
    - name: ZK_ADDRESS
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: ZK_ADDRESS
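
The configMapKeyRef above only resolves if a ConfigMap with that name and key exists in the same namespace; a minimal sketch (the ZooKeeper address is a made-up placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap                  # the name referenced by configMapKeyRef
  namespace: default
data:
  ZK_ADDRESS: "172.19.0.10:2181"   # placeholder value
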
  6. Resource limits
    Resource limits come in two parts: the resources requested at startup (requests) and the maximum usable at runtime (limits). Resources here means CPU and memory (cpu: 200m is 0.2 of a core; Mi is mebibytes).
    [screenshot: resource limit configuration]
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 20m
    memory: 128Mi

The Kubernetes deployment file

# A ConfigMap holding the log-collection configuration
apiVersion: v1
kind: ConfigMap # the resource type
metadata:
  name: filebeat-kotler # name of this config
  namespace: default # the namespace
data:  # the actual configuration goes under data; it can be copied in verbatim.
  filebeat.yml: |
    filebeat.prospectors:
    - input_type: log
      paths:
        - "/data/applogs/kotler/kotler.log"
      fields:
          project: kotler-test
      multiline:
            pattern: ^\d{4}
            negate: true
            match: after
            max_lines: 1000
            timeout: 10s
    output.logstash:
      hosts: ["172.19.1.12:5044"]


---
apiVersion: apps/v1beta2
kind: Deployment # the resource type
metadata: # resource metadata: labels and annotations (used by selectors for scheduling), all key-value pairs
  annotations:
    deployment.kubernetes.io/revision: "4"
    field.cattle.io/creatorId: u-nbbnv
    field.cattle.io/publicEndpoints: '[{"port":8093,"protocol":"TCP","serviceName":"default:kotler-loadbalancer","allNodes":false}]'
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: deployment-default-kotler
  name: kotler
  namespace: default
spec: # the controller's spec
  progressDeadlineSeconds: 600
  replicas: 1 # how many pod replicas to run
  revisionHistoryLimit: 10
  selector:  # selector
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-default-kotler
  strategy: # update strategy
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template: # pod template used for creating and upgrading containers
    metadata:
      annotations:
        cattle.io/timestamp: 2019-03-15T08:52:38Z
        field.cattle.io/ports: '[[{"containerPort":8081,"dnsName":"kotler-loadbalancer","kind":"LoadBalancer","name":"8081tcp80913","protocol":"TCP","sourcePort":8093}]]'
      labels:
        workload.user.cattle.io/workloadselector: deployment-default-kotler
    spec: # the pod spec: what the containers actually look like
      containers:
      - env: # injected environment variables
        - name: APOLLO_META
          value: http://139.224.169.234:8086
        - name: env
          value: dev
        - name: ZK_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: configmap
              key: ZK_ADDRESS
        image: registry-vpc.cn-shanghai.aliyuncs.com/quanwai_pre/pre:kotlerV37
        imagePullPolicy: Always  # image pull policy
        livenessProbe: # liveness probe
          failureThreshold: 3 # allowed number of consecutive failures
          httpGet:
            path: /promotion/heartbeat
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        name: kotler
        volumeMounts: # mounted volumes
        - name: applogs
          mountPath: /data/applogs/kotler
        - name: kotler-pv
          mountPath: /data/appdatas/sa
        - name: cat
          mountPath: /data/appdatas/cat
        ports: # declared ports
        - containerPort: 8081
          name: 8081tcp80913
          protocol: TCP
        readinessProbe:  # readiness probe
          failureThreshold: 3
          httpGet:
            path: /promotion/heartbeat
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 5
          successThreshold: 2
          timeoutSeconds: 3
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 20m
            memory: 128Mi
      - image: registry-vpc.cn-shanghai.aliyuncs.com/quanwai_base/base:filebeat  # the sidecar container, configured much like the main one
        imagePullPolicy: Always
        name: filebeat
        args: [
          "-c", "/etc/filebeat/filebeat.yml",
          "-e",
        ]
        volumeMounts:
        - name: applogs
          mountPath: /data/applogs/kotler
        - name: filebeat
          mountPath: /etc/filebeat
      volumes: # volumes declared on the pod
      - name: applogs
        emptyDir: {}
      - name: filebeat
        configMap:
          name: filebeat-kotler
      - name: cat
        configMap:
          name: cat-config
      - name: kotler-pv  # pvc
        persistentVolumeClaim:
          claimName: nas-for-kubernetes
      dnsPolicy: ClusterFirst
      restartPolicy: Always # pod restart policy on failure

Postscript

The k8s sidecar pattern for log collection: start a companion container named filebeat that shares the main container's log directory, collects the main container's logs and ships them to Logstash. Every main container gets its own sidecar.

The storage solution is built on Alibaba Cloud NAS. Choosing a cloud storage product has pitfalls: the first attempt used OSS, and multiple containers writing the same file lost data, in other words not every storage product suits every read/write pattern. Roughly speaking, OSS makes it easy to upload files to a shared space and suits read-heavy scenarios, while NAS is the better fit when files are written frequently.
