Chapter 6 Case Studies: Kubernetes Supporting Cloud-Native Application Development

Table of Contents

  • Chapter 6 Case Studies: Kubernetes Supporting Cloud-Native Application Development
    • 6.1 Kubernetes and Cloud-Native Applications
      • 6.1.1 Cloud Native
      • 6.1.2 Kubernetes and Cloud-Native Applications
      • 6.1.3 Kubernetes in Practice
    • 6.2 Building a Highly Available Private Image Registry
      • 6.2.1 Building a Highly Available Private Container Image Registry
        • 6.2.1.1 Steps to Build a Highly Available Harbor
          • 1) Deploy the external systems Harbor depends on
            • a. Deploy MinIO
            • b. Deploy Redis
            • c. Deploy MySQL
          • 2) Deploy Harbor
            • a. Install the docker-compose tool
            • b. Download the Harbor offline installer and import the registry schema into the database
            • c. Configure harbor.cfg and docker-compose.yaml
          • 3) Start Harbor node1
          • 4) Start Harbor node2
          • 5) Verify the installation
    • 6.3 Setting Up Logging Infrastructure for the Kubernetes Cluster
      • Traditional application instances vs. cloud-native application instances
      • 6.3.1 Building Cluster Logging for Kubernetes on the Elasticsearch Stack
        • 6.3.1.1 Deploy Elasticsearch
        • 6.3.1.2 Deploy Fluentd
        • 6.3.1.3 Deploy Kibana
          • 6.3.1.3.1 Download kibana-deployment.yaml and kibana-service.yaml
          • 6.3.1.3.2 Modify kibana-deployment.yaml
          • 6.3.1.3.3 Start the services
        • 6.3.1.4 Verify the installation
        • 6.3.1.5 Create an index pattern and view the logs
    • 6.4 Introduction to Service Mesh
      • 6.4.1 The "pain point" of microservices: service governance
      • 6.4.2 Service mesh projects
    • 6.5 Istio Architecture and Installation
      • 6.5.1 Installing Istio v1.0.2
    • 6.6 Sidecar Injection
      • 6.6.1 Deploying pods managed by Istio v1.0.2
      • 6.6.2 Adding the istio tools to PATH
      • 6.6.3 Creating an application with the Istio CLI
      • 6.6.4 Creating an application with the Istio CLI: automatic injection

Chapter 6 Case Studies: Kubernetes Supporting Cloud-Native Application Development

6.1 Kubernetes and Cloud-Native Applications

6.1.1 Cloud Native

  • The current mainstream model for building and deploying applications
  • The Cloud Native Computing Foundation (CNCF) definition of cloud native
  • Key points: the cloud as the operating system, containers as the application vehicle, services as the basic unit, and automation as the means of operations

6.1.2 Kubernetes and Cloud-Native Applications

  • The role Kubernetes plays in the cloud-native landscape
    • Kubernetes is the "operating system" of cloud-native applications
    • Kubernetes is also a portable "operating system" that spans public, private, and hybrid clouds
    • Kubernetes empowers cloud-native applications

6.1.3 Kubernetes in Practice

  • From the angle of supporting cloud-native application development and operations, this chapter works through three hands-on cases:
    • Building a highly available private image registry
    • Building cluster logging for Kubernetes on the Elasticsearch stack
    • Implementing service governance with the Istio service mesh

6.2 Building a Highly Available Private Image Registry

6.2.1 Building a Highly Available Private Container Image Registry

  • Criteria for evaluating open-source options
    • Level of attention (number of GitHub stars)
    • Community feedback (number of issues raised and answered)
    • Maintainer activity (a published roadmap, frequent commits, responsiveness to issues)

Based on these criteria, VMware's Harbor is used here.

  • About Harbor
    • An enterprise-grade image registry project open-sourced by VMware's China team
    • Focused on the enterprise-level requirements of an image registry
    • Became a CNCF Sandbox project in 2018
  • Harbor features
    • Image replication
    • RBAC access control and LDAP/AD authentication
    • Automatic garbage collection and deletion of unused image data
    • Chinese-language support, a web UI, and audit logging
    • A RESTful API, making it easy to extend and to integrate with in-house DevOps pipelines
    • Vulnerability scanning of images
  • Why an enterprise needs a highly available registry
    • The image registry has become a core component of the DevOps pipeline
    • A single registry instance may not keep up with the push and pull load generated by a large number of nodes

6.2.1.1 Steps to Build a Highly Available Harbor

1) Deploy the external systems Harbor depends on
a. Deploy MinIO

MinIO is an object storage server released under Apache License v2.0

ref: https://min.io/download#/linux

$ wget https://dl.min.io/server/minio/release/linux-amd64/minio
$ chmod +x minio
$ cp minio /usr/local/bin
$ MINIO_DATA=${HOME}/.minio/data; mkdir -p ${MINIO_DATA}; minio server ${MINIO_DATA}
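
Once the single-node server is up, a quick liveness check can be run against MinIO's health endpoint (a sketch, assuming a MinIO build recent enough to expose the standard health-check API):

# returns HTTP 200 when the server is healthy
$ curl -i http://127.0.0.1:9000/minio/health/live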

The latest MinIO releases require at least four machines for a distributed deployment. The node IPs here are 172.16.81.161 through 172.16.81.164, and the command below has to be run on every node.

# run this on every node
MINIO_ACCESS_KEY=admin MINIO_SECRET_KEY=mino_admin_123 nohup minio server \
http://172.16.81.161/root/.minio/data/ \
http://172.16.81.162/root/.minio/data/ \
http://172.16.81.163/root/.minio/data/ \
http://172.16.81.164/root/.minio/data/ &>> /var/log/minio.log &

# startup log
Waiting for a minimum of 2 disks to come online (elapsed 0s)

Waiting for a minimum of 2 disks to come online (elapsed 2s)

Waiting for all other servers to be online to format the disks.
Waiting for all other servers to be online to format the disks.
Status:         4 Online, 0 Offline. 
Endpoint:  http://172.16.81.161:9000  http://192.168.50.128:9000  http://10.0.0.1:9000  http://127.0.0.1:9000              
AccessKey: admin 
SecretKey: mino_admin_123 

Browser Access:
   http://172.16.81.161:9000  http://192.168.50.128:9000  http://10.0.0.1:9000  http://127.0.0.1:9000              

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://172.16.81.161:9000 admin mino_admin_123

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Once the cluster is up, you can log in on port 9000 of any node with the AccessKey and SecretKey. Upload a file on one node and check that the other nodes can see it too; this can also be verified from the command line with the mc client, as sketched below.
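
A minimal verification sketch with the mc client mentioned in the startup banner; the bucket name harbor matches the registry_storage_provider_config used later in harbor.cfg:

# register two nodes and create the bucket Harbor will use
$ mc config host add myminio http://172.16.81.161:9000 admin mino_admin_123
$ mc config host add myminio2 http://172.16.81.162:9000 admin mino_admin_123
$ mc mb myminio/harbor
# upload through one node and list through another; the object should appear on both
$ echo hello > /tmp/hello.txt
$ mc cp /tmp/hello.txt myminio/harbor/
$ mc ls myminio2/harbor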

b. Deploy Redis
$ apt install redis-server
# change `bind 127.0.0.1` in the Redis config to `bind 192.168.x.x` so that it is reachable from other hosts
$ vim /etc/redis/redis.conf
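
After changing the bind address, restart Redis and confirm it answers from another host (a sketch; substitute the actual address):

$ systemctl restart redis-server
# run from another node; a PONG reply means the bind change took effect
$ redis-cli -h 192.168.x.x ping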
c. Deploy MySQL (MariaDB)

What is actually installed is mariadb-server. One pitfall: by default port 3306 is bound only to 127.0.0.1, so other hosts cannot reach the database; the configuration file below has to be changed.

$ apt install mariadb-server
# change bind-address to the server's externally reachable address, then restart the service
$ grep '^bind' /etc/mysql/mariadb.conf.d/50-server.cnf
bind-address		= 172.16.81.162

MariaDB [(none)]> create user harbor identified by '17arBor1@3';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> show create database mysql;
+----------+-------------------------------------------------------------------+
| Database | Create Database                                                   |
+----------+-------------------------------------------------------------------+
| mysql    | CREATE DATABASE `mysql` /*!40100 DEFAULT CHARACTER SET utf8mb4 */ |
+----------+-------------------------------------------------------------------+
1 row in set (0.00 sec)

MariaDB [(none)]> create database registry;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> select user,host from mysql.user;
+--------+-----------+
| user   | host      |
+--------+-----------+
| harbor | %         |
| root   | localhost |
+--------+-----------+
2 rows in set (0.00 sec)

MariaDB [(none)]> grant all on registry.* to 'harbor';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> show grants for 'harbor';
+-------------------------------------------------------------------------------------------------------+
| Grants for harbor@%                                                                                   |
+-------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'harbor'@'%' IDENTIFIED BY PASSWORD '*43ECB3C9353A949CE36173D3613955766003A2B1' |
| GRANT ALL PRIVILEGES ON `registry`.* TO 'harbor'@'%'                                                  |
+-------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

2) Deploy Harbor

Harbor has to be installed on every node; only one node is shown here.

a. Install the docker-compose tool

ref: https://docs.docker.com/compose/install/

$ curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$  docker-compose version
docker-compose version 1.24.0, build 0aa59064
docker-py version: 3.7.2
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018
b. Download the Harbor offline installer and import the registry schema into the database

Version 1.5.3 is used as the example; download the offline installer.

ref: https://github.com/goharbor/harbor/blob/master/docs/installation_guide.md

$ wget https://storage.googleapis.com/harbor-releases/harbor-offline-installer-v1.5.3.tgz

# extract it under /opt, then change into that directory, e.g.
$ cd /opt/harbor/ha

# the SQL in registry.sql needs a tweak first: replace every 256 with 255 (a sed sketch follows), then import it into the registry database
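# for example, a sketch of the replacement with sed:
$ sed -i 's/256/255/g' registry.sql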
$ mysql -u harbor -h 172.16.81.162 -p registry < registry.sql
Enter password: 
c. Configure harbor.cfg and docker-compose.yaml
# the changes made
diff harbor.cfg harbor.cfg.ori
7c7
< hostname = hub.hienha.org:8070
---
> hostname = reg.mydomain.com
130c130
< db_host = 172.16.81.162
---
> db_host = mysql
133c133
< db_password = 123456
---
> db_password = root123
139c139
< db_user = harbor
---
> db_user = root
145c145
< redis_url = 172.16.81.163:6379
---
> redis_url = redis:6379
177c177
# MinIO is used here, so the storage type is s3
< registry_storage_provider_name = s3
---
> registry_storage_provider_name = filesystem
180c180
# accesskey and secretkey below are the MinIO access key and secret; note that these settings are rendered into a YAML file, so the space after each ':' is required; the generated files end up under `~path/harbor/common/config/`
< registry_storage_provider_config = accesskey: admin,secretkey: mino_admin_123,region: us-east-1,regionendpoint: http://172.16.81.161:9000, bucket: harbor,encrypt:false,secure: false,chunksize: 5242880,rootdirectory: /
---
> registry_storage_provider_config =

$ diff docker-compose.yml docker-compose.yml.ori
138c138
<       - 8070:80    # 8070 here matches the hostname port in harbor.cfg above
---
>       - 80:80
3) Start Harbor node1
$ cd /opt/harbor

# external services (MySQL, Redis) are used, so the --ha flag is required
$ ./install.sh --ha

Check the containers that were started

$ docker-compose ps
       Name                     Command                       State                                        Ports                              
----------------------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver   /harbor/start.sh                 Up (health: starting)                                                                   
harbor-jobservice    /harbor/start.sh                 Up                                                                                      
harbor-log           /bin/sh -c /usr/local/bin/ ...   Up (healthy)            127.0.0.1:1514->10514/tcp                                       
harbor-ui            /harbor/start.sh                 Up (health: starting)                                                                   
nginx                nginx -g daemon off;             Up (unhealthy)          0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:80->80/tcp

After adding a hosts entry for the registry hostname (a sketch follows), open http://hub.hienha.org:8070 to test; the default username is admin and the password is Harbor12345.
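
A hosts-entry sketch for a client machine; the IP is assumed to be the Harbor node's address and should be replaced with the real one:

# point the registry hostname at the Harbor node
$ echo "172.16.81.161 hub.hienha.org" >> /etc/hosts
$ curl -I http://hub.hienha.org:8070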

If anything goes wrong during installation, check the logs under /var/log/harbor; the command below also helps identify which container is failing.

docker-compose ps
       Name                     Command                  State                                     Ports
---------------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver   /harbor/start.sh                 Up (healthy)
harbor-jobservice    /harbor/start.sh                 Up
harbor-log           /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp
harbor-ui            /harbor/start.sh                 Up (healthy)
nginx                nginx -g daemon off;             Up (healthy)   0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:8070->80/tcp
registry             /entrypoint.sh serve /etc/ ...   Up (healthy)   5000/tcp
4) Start Harbor node2

The installation is the same as on node1: scp the files over to node2 and run ./install.sh --ha; docker-compose still has to be installed first.

5) Verify the installation
  • a. Log in through the web UI
  • b. Log in from the command line and push an image to Harbor
$ docker login hub.hienha.org:8070
$ docker image tag kube-apiserver-amd64:v1.10.3 hub.hienha.org:8070/foo/kube-apiserver-amd64:v1.10.3
$ docker push hub.hienha.org:8070/foo/kube-apiserver-amd64:v1.10.3

6.3 Setting Up Logging Infrastructure for the Kubernetes Cluster

Traditional application instances vs. cloud-native application instances

  • Lifetime: long-lived vs. ephemeral
  • Behavior changed by reconfiguration vs. behavior changed by regeneration
  • Persistent storage vs. ephemeral storage

6.3.1 Building Cluster Logging for Kubernetes on the Elasticsearch Stack

  • The architecture chosen is EFK (Elasticsearch, Fluentd, Kibana)
  • Deployment steps

6.3.1.1 Deploy Elasticsearch

Elasticsearch is deployed as a container
ref: * https://www.elastic.co/guide/en/elasticsearch/reference/6.2/docker.html

  • https://www.docker.elastic.co/
  • Pull the Elasticsearch image
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:6.2.4
  • Create the Elasticsearch data directory
$ mkdir -p ~/.data/es_data
$ chmod g+rwx ~/.data/es_data/
$ chown 1000:1000 ~/.data/es_data/
  • Start the Elasticsearch container and verify it
$ docker run -d --restart=unless-stopped -p 9200:9200 -p 9300:9300 -v /root/.data/es_data/:/usr/share/elasticsearch/data --ulimit nofile=65536:65536 -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4

# allocate memory sensibly, otherwise ES will fail to start
$ curl http://127.0.0.1:9200/_cat/health
1559030293 07:58:13 docker-cluster green 1 1 1 1 0 0 0 0 - 100.0%

6.3.1.2 Deploy Fluentd

Fluentd is managed by a DaemonSet controller

  • Download fluentd-daemonset-elasticsearch-rbac.yaml
$ wget https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml
  • Change the Elasticsearch address in the Fluentd YAML
66c66
<             value: "elasticsearch-logging"
---
>             value: "172.16.81.164"    # the Elasticsearch address
  • Start Fluentd and verify
$ kubectl apply -f fluentd-daemonset-elasticsearch-rbac.yaml -n kube-system 
serviceaccount "fluentd" created
clusterrole.rbac.authorization.k8s.io "fluentd" created
clusterrolebinding.rbac.authorization.k8s.io "fluentd" created
daemonset.extensions "fluentd" created

# as shown below, the fluentd pods are already running
$ kubectl get pod -o wide -n kube-system | grep fluent
fluentd-df9gx                           1/1       Running            1          2m        192.168.50.97    dbk8s-node-02
fluentd-rtjch                           1/1       Running            0          2m        192.168.50.17    dbk8s-node-01
fluentd-ss86h                           1/1       Running            1          2m        192.168.50.129   dbk8s-master

# startup log
2019-05-29 08:37:36 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:38:06 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:38:36 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:39:06 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:39:36 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:40:07 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
  • Errors encountered

2019-05-29 06:31:57 +0000 [error]: unexpected error error_class=Errno::EACCES error=#

This is a permission problem; the fix is to run Fluentd as a specific user (here UID 0), as shown below.

diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
65,66d64
<           - name: FLUENT_UID
<             value: "0"
68c66
<             value: "172.16.81.164"
---
>             value: "elasticsearch-logging"

ref: https://github.com/fluent/fluentd-kubernetes-daemonset/commit/694ff3a79f7c09b6bb7f740da07e1a75ad1f3aa7

Could not push logs to Elasticsearch, resetting connection and trying again

$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
65,72d64
<           - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
<             value: "false"
<           - name: FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR
<             value: "true"
<           - name: FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE
<             value: "true"
<           - name: FLUENT_UID
<             value: "0"
74c66
<             value: "172.16.81.164"
---
>             value: "elasticsearch-logging"

ref:

  • https://github.com/uken/fluent-plugin-elasticsearch#stopped-to-send-events-on-k8s-why
  • https://github.com/uken/fluent-plugin-elasticsearch/issues/525
  • https://github.com/uken/fluent-plugin-elasticsearch/issues/525#issuecomment-452975273

[warn]: temporarily failed to flush the buffer. next_retry=2019-05-29 07:58:22 +0000 error_class=“MultiJson::AdapterError” error=“Did not recognize your adapter specification (cannot load such file – bigdecimal).” plugin_id=“out_es”

The main cause of this error is an incomplete Ruby dependency set in the image, so try a different image: delete the old pods and redeploy.

$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
63,64c63
<         # image: fluent/fluentd-kubernetes-daemonset:elasticsearch
<         image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch  # use the v1.4-debian-elasticsearch image instead
---
>         image: fluent/fluentd-kubernetes-daemonset:elasticsearch
66,75d64
<           - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
<             value: "false"
<           - name: FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR
<             value: "true"
<           - name: FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE
<             value: "true"
<           - name: FLUENT_UID
<             value: "0"
<           - name: FLUENTD_SYSTEMD_CONF
<             value: "disable"
77c66
<             value: "172.16.81.164"
---
>             value: "elasticsearch-logging"

$ kubectl -n kube-system delete -f fluentd-daemonset-elasticsearch-rbac.yaml
serviceaccount "fluentd" deleted
clusterrole.rbac.authorization.k8s.io "fluentd" deleted
clusterrolebinding.rbac.authorization.k8s.io "fluentd" deleted
daemonset.extensions "fluentd" deleted

$ kubectl -n kube-system apply -f fluentd-daemonset-elasticsearch-rbac.yaml
serviceaccount "fluentd" created
clusterrole.rbac.authorization.k8s.io "fluentd" created
clusterrolebinding.rbac.authorization.k8s.io "fluentd" created
daemonset.extensions "fluentd" created

ref: https://github.com/fluent/fluentd-kubernetes-daemonset/issues/230

in_systemd_kubelet] Systemd::JournalError: No such file or directory retrying in

The fix is another change to fluentd-daemonset-elasticsearch-rbac.yaml; the final set of changes is shown below.

$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
63,64c63
<         # image: fluent/fluentd-kubernetes-daemonset:elasticsearch
<         image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch
---
>         image: fluent/fluentd-kubernetes-daemonset:elasticsearch
66,75d64
<           - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
<             value: "false"
<           - name: FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR
<             value: "true"
<           - name: FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE
<             value: "true"
<           - name: FLUENT_UID
<             value: "0"
<           - name: FLUENTD_SYSTEMD_CONF    # disable collection of systemd logs here
<             value: "disable"
77c66
<             value: "172.16.81.164"
---
>             value: "elasticsearch-logging"

ref: * https://github.com/fluent/fluentd-kubernetes-daemonset#disable-systemd-input

  • https://github.com/fluent/fluentd-kubernetes-daemonset/issues/203

6.3.1.3 Deploy Kibana

6.3.1.3.1 Download kibana-deployment.yaml and kibana-service.yaml
$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml
$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml
6.3.1.3.2 Modify kibana-deployment.yaml

The required changes are as follows

$ diff kibana-deployment.yaml kibana-deployment.yaml.ori
23c23
<         image: docker.elastic.co/kibana/kibana:6.2.4    # keep the version in line with Elasticsearch
---
>         image: docker.elastic.co/kibana/kibana-oss:6.6.1
32c32
<             value: http://172.16.81.164:9200    # the Elasticsearch address
---
>             value: http://elasticsearch-logging:9200
6.3.1.3.3 Start the services
$ kubectl apply -f kibana-deployment.yaml
deployment.apps "kibana-logging" created

$ kubectl apply -f kibana-service.yaml
service "kibana-logging" created

The logs show that Kibana has started

{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}

6.3.1.4 Verify the installation

Set up a proxy on the master node, otherwise Kibana cannot be reached

# the command below sets the proxy address and the hosts allowed to access it
$ kubectl proxy --address=172.16.81.161 --accept-hosts='^.*$'

Then try opening a URL like the following in a browser to reach Kibana

http://172.16.81.161:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/home?_g=()

Another option is kubectl port-forward (this approach did not work in my tests)

$ kubectl port-forward kibana-logging-597b75c4f7-r4xl6 5601:5601 -n kube-system &

$ netstat -lnp | grep 5601
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      11780/kubectl
tcp6       0      0 ::1:5601                :::*                    LISTEN      11780/kubectl

As shown, the forward binds only to the loopback address, so you still need port forwarding on the VM host or another proxy in front of it, such as an SSH tunnel (sketched below) or nginx.
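
A sketch using an SSH tunnel from a workstation, assuming SSH access to the master node at 172.16.81.161; Kibana is then reachable locally at http://127.0.0.1:5601:

# forward local port 5601 to the kubectl port-forward listening on the master's loopback
$ ssh -N -L 5601:127.0.0.1:5601 root@172.16.81.161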

6.3.1.5 Create an index pattern and view the logs

Once you are in (Kibana has no authentication by default), follow the steps below to see the cluster logs; a quick index check is sketched after the list.

    1. "Management" --> "Index Patterns" --> "Create Index Pattern" --> "Index pattern: log*" --> "Next step" --> "Time Filter field name: @timestamp" --> "Create index pattern"
    2. "Discover": select the matching index to see the logs
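
To confirm that Fluentd is actually shipping logs into Elasticsearch (and that the log* pattern will match something), a quick check against the ES indices API (a sketch; the fluentd-elasticsearch images default to a logstash-* index prefix):

$ curl 'http://172.16.81.164:9200/_cat/indices?v' | grep logstash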

6.4 Introduction to Service Mesh

Service governance with the Istio service mesh

6.4.1 The "pain point" of microservices: service governance

  • Splitting into microservices is easy; governing the services is hard
  • Existing solutions are mostly frameworks or libraries
  • They are intrusive to the business logic

6.4.2 Service mesh projects

  • Linkerd
  • Linkerd2
  • istio

6.5 Istio Architecture and Installation

6.5.1 Installing Istio v1.0.2

  • Download the release package
$ wget https://github.com/istio/istio/releases/download/1.0.2/istio-1.0.2-linux.tar.gz
  • Install the Istio CRDs
$ cd ~/soft/istio-1.0.2
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "destinationrules.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "serviceentries.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "gateways.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "envoyfilters.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "policies.authentication.istio.io" created
customresourcedefinition.apiextensions.k8s.io "meshpolicies.authentication.istio.io" created
customresourcedefinition.apiextensions.k8s.io "httpapispecbindings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "httpapispecs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotaspecbindings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotaspecs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rules.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "attributemanifests.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "bypasses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "circonuses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "deniers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "fluentds.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "kubernetesenvs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "listcheckers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "memquotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "noops.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "opas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "prometheuses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rbacs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "redisquotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicecontrols.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "signalfxs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "solarwindses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "stackdrivers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "statsds.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "stdios.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "apikeys.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "authorizations.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "checknothings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "kuberneteses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "listentries.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "logentries.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "edges.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "metrics.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "reportnothings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicecontrolreports.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "tracespans.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rbacconfigs.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "serviceroles.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicerolebindings.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "adapters.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "instances.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "templates.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "handlers.config.istio.io" created

# list the CRD resources
$ kubectl get crd -n istio-system
NAME                                    AGE
adapters.config.istio.io                1m
apikeys.config.istio.io                 1m
attributemanifests.config.istio.io      1m
authorizations.config.istio.io          1m
bypasses.config.istio.io                1m
checknothings.config.istio.io           1m
circonuses.config.istio.io              1m
deniers.config.istio.io                 1m
destinationrules.networking.istio.io    1m
edges.config.istio.io                   1m
envoyfilters.networking.istio.io        1m
fluentds.config.istio.io                1m
gateways.networking.istio.io            1m
handlers.config.istio.io                1m
httpapispecbindings.config.istio.io     1m
httpapispecs.config.istio.io            1m
instances.config.istio.io               1m
kubernetesenvs.config.istio.io          1m
kuberneteses.config.istio.io            1m
listcheckers.config.istio.io            1m
listentries.config.istio.io             1m
logentries.config.istio.io              1m
memquotas.config.istio.io               1m
meshpolicies.authentication.istio.io    1m
metrics.config.istio.io                 1m
noops.config.istio.io                   1m
opas.config.istio.io                    1m
policies.authentication.istio.io        1m
prometheuses.config.istio.io            1m
quotas.config.istio.io                  1m
quotaspecbindings.config.istio.io       1m
quotaspecs.config.istio.io              1m
rbacconfigs.rbac.istio.io               1m
rbacs.config.istio.io                   1m
redisquotas.config.istio.io             1m
reportnothings.config.istio.io          1m
rules.config.istio.io                   1m
servicecontrolreports.config.istio.io   1m
servicecontrols.config.istio.io         1m
serviceentries.networking.istio.io      1m
servicerolebindings.rbac.istio.io       1m
serviceroles.rbac.istio.io              1m
signalfxs.config.istio.io               1m
solarwindses.config.istio.io            1m
stackdrivers.config.istio.io            1m
statsds.config.istio.io                 1m
stdios.config.istio.io                  1m
templates.config.istio.io               1m
tracespans.config.istio.io              1m
virtualservices.networking.istio.io     1m
  • Install the Istio core components and verify the installation

Pre-pull the core images (this must be done on every worker node)

istio_ver=1.0.2
istio_images=(citadel \
              pilot \
              proxy_debug \
              proxy_init \
              proxyv2 \
              grafana \
              galley \
              sidecar_injector \
              mixer \
              servicegraph \
              )

for img in ${istio_images[@]}; 
do
	image="istio/$img:${istio_ver}"
	docker image pull ${image}
	docker image tag ${image} gcr.io/istio-release/${img}:${istio_ver}   # keep the version tag so it matches the istio-demo.yaml manifests
	docker image rm ${image}
done
  • Install Istio
$ kubectl apply -f install/kubernetes/istio-demo.yaml 
namespace "istio-system" created
configmap "istio-galley-configuration" created
configmap "istio-grafana-custom-resources" created
configmap "istio-statsd-prom-bridge" created
configmap "prometheus" created
configmap "istio-security-custom-resources" created
configmap "istio" created
configmap "istio-sidecar-injector" created
serviceaccount "istio-galley-service-account" created
serviceaccount "istio-egressgateway-service-account" created
serviceaccount "istio-ingressgateway-service-account" created
serviceaccount "istio-grafana-post-install-account" created
clusterrole.rbac.authorization.k8s.io "istio-grafana-post-install-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-grafana-post-install-role-binding-istio-system" created
job.batch "istio-grafana-post-install" created
serviceaccount "istio-mixer-service-account" created
serviceaccount "istio-pilot-service-account" created
serviceaccount "prometheus" created
serviceaccount "istio-cleanup-secrets-service-account" created
clusterrole.rbac.authorization.k8s.io "istio-cleanup-secrets-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-cleanup-secrets-istio-system" created
job.batch "istio-cleanup-secrets" created
serviceaccount "istio-citadel-service-account" created
serviceaccount "istio-sidecar-injector-service-account" created
customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "destinationrules.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "serviceentries.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "gateways.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "envoyfilters.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "httpapispecbindings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "httpapispecs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "quotaspecbindings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "quotaspecs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "rules.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "attributemanifests.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "bypasses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "circonuses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "deniers.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "fluentds.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "kubernetesenvs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "listcheckers.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "memquotas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "noops.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "opas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "prometheuses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "rbacs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "redisquotas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "servicecontrols.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "signalfxs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "solarwindses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "stackdrivers.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "statsds.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "stdios.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "apikeys.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "authorizations.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "checknothings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "kuberneteses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "listentries.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "logentries.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "edges.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "metrics.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "quotas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "reportnothings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "servicecontrolreports.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "tracespans.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "rbacconfigs.rbac.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "serviceroles.rbac.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "servicerolebindings.rbac.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "adapters.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "instances.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "templates.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "handlers.config.istio.io" configured
clusterrole.rbac.authorization.k8s.io "istio-galley-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-egressgateway-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-ingressgateway-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-mixer-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-pilot-istio-system" created
clusterrole.rbac.authorization.k8s.io "prometheus-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-citadel-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-sidecar-injector-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-galley-admin-role-binding-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-egressgateway-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-ingressgateway-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-mixer-admin-role-binding-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-pilot-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "prometheus-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-citadel-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-sidecar-injector-admin-role-binding-istio-system" created
service "istio-galley" created
service "istio-egressgateway" created
service "istio-ingressgateway" created
service "grafana" created
service "istio-policy" created
service "istio-telemetry" created
service "istio-statsd-prom-bridge" created
deployment.extensions "istio-statsd-prom-bridge" created
service "istio-pilot" created
service "prometheus" created
service "istio-citadel" created
service "servicegraph" created
service "istio-sidecar-injector" created
deployment.extensions "istio-galley" created
deployment.extensions "istio-egressgateway" created
deployment.extensions "istio-ingressgateway" created
deployment.extensions "grafana" created
deployment.extensions "istio-policy" created
deployment.extensions "istio-telemetry" created
deployment.extensions "istio-pilot" created
deployment.extensions "prometheus" created
deployment.extensions "istio-citadel" created
deployment.extensions "servicegraph" created
deployment.extensions "istio-sidecar-injector" created
deployment.extensions "istio-tracing" created
gateway.networking.istio.io "istio-autogenerated-k8s-ingress" created
horizontalpodautoscaler.autoscaling "istio-egressgateway" created
horizontalpodautoscaler.autoscaling "istio-ingressgateway" created
horizontalpodautoscaler.autoscaling "istio-policy" created
horizontalpodautoscaler.autoscaling "istio-telemetry" created
horizontalpodautoscaler.autoscaling "istio-pilot" created
service "jaeger-query" created
service "jaeger-collector" created
service "jaeger-agent" created
service "zipkin" created
service "tracing" created
mutatingwebhookconfiguration.admissionregistration.k8s.io "istio-sidecar-injector" created
attributemanifest.config.istio.io "istioproxy" created
attributemanifest.config.istio.io "kubernetes" created
stdio.config.istio.io "handler" created
logentry.config.istio.io "accesslog" created
logentry.config.istio.io "tcpaccesslog" created
rule.config.istio.io "stdio" created
rule.config.istio.io "stdiotcp" created
metric.config.istio.io "requestcount" created
metric.config.istio.io "requestduration" created
metric.config.istio.io "requestsize" created
metric.config.istio.io "responsesize" created
metric.config.istio.io "tcpbytesent" created
metric.config.istio.io "tcpbytereceived" created
prometheus.config.istio.io "handler" created
rule.config.istio.io "promhttp" created
rule.config.istio.io "promtcp" created
kubernetesenv.config.istio.io "handler" created
rule.config.istio.io "kubeattrgenrulerule" created
rule.config.istio.io "tcpkubeattrgenrulerule" created
kubernetes.config.istio.io "attributes" created
destinationrule.networking.istio.io "istio-policy" created
destinationrule.networking.istio.io "istio-telemetry" created
  • Check that the pods are running
# the pods shown as Completed below are one-off job containers; they report Completed once the job finishes
$ kubectl get pods -n istio-system -o wide
NAME                                        READY     STATUS      RESTARTS   AGE       IP               NODE
grafana-6cbdcfb45-dwqf6                     1/1       Running     0          31m       172.100.192.2    dbk8s-node-01
istio-citadel-6b6fdfdd6f-6h2x2              1/1       Running     0          31m       172.100.192.6    dbk8s-node-01
istio-cleanup-secrets-vdnfd                 0/1       Completed   0          31m       172.100.240.3    dbk8s-node-02
istio-egressgateway-56bdd5fcfb-6ck5j        1/1       Running     0          31m       172.100.192.4    dbk8s-node-01
istio-galley-96464ff6-cd4n7                 1/1       Running     0          31m       172.100.240.5    dbk8s-node-02
istio-grafana-post-install-tspwc            0/1       Completed   0          31m       172.100.192.2    dbk8s-node-01
istio-ingressgateway-7f4dd7d699-x2rjp       1/1       Running     0          31m       172.100.192.5    dbk8s-node-01
istio-pilot-6f8d49d4c4-2rl6d                2/2       Running     0          31m       172.100.240.4    dbk8s-node-02
istio-policy-67f4d49564-c92kq               2/2       Running     0          31m       172.100.240.3    dbk8s-node-02
istio-sidecar-injector-69c4bc7974-fzthd     1/1       Running     0          31m       172.100.192.9    dbk8s-node-01
istio-statsd-prom-bridge-7f44bb5ddb-mtrms   1/1       Running     0          31m       172.100.192.3    dbk8s-node-01
istio-telemetry-76869cd64f-2qls7            2/2       Running     0          31m       172.100.192.7    dbk8s-node-01
istio-tracing-ff94688bb-rwfw2               1/1       Running     0          31m       172.100.192.11   dbk8s-node-01
prometheus-84bd4b9796-zqqcm                 1/1       Running     0          31m       172.100.192.8    dbk8s-node-01
servicegraph-c6456d6f5-phv7j                1/1       Running     0          31m       172.100.192.10   dbk8s-node-01
  • Check the related services
$ kubectl get service -n istio-system 
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                   AGE
grafana                    ClusterIP      10.106.69.220            3000/TCP                                                                                                                  3m
istio-citadel              ClusterIP      10.103.36.221            8060/TCP,9093/TCP                                                                                                         3m
istio-egressgateway        ClusterIP      10.107.41.248            80/TCP,443/TCP                                                                                                            3m
istio-galley               ClusterIP      10.101.181.251           443/TCP,9093/TCP                                                                                                          3m
istio-ingressgateway       LoadBalancer   10.108.31.91          80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30164/TCP,8060:31908/TCP,853:30900/TCP,15030:31201/TCP,15031:30619/TCP   3m
istio-pilot                ClusterIP      10.101.199.175           15010/TCP,15011/TCP,8080/TCP,9093/TCP                                                                                     3m
istio-policy               ClusterIP      10.102.208.233           9091/TCP,15004/TCP,9093/TCP                                                                                               3m
istio-sidecar-injector     ClusterIP      10.100.197.136           443/TCP                                                                                                                   3m
istio-statsd-prom-bridge   ClusterIP      10.110.224.158           9102/TCP,9125/UDP                                                                                                         3m
istio-telemetry            ClusterIP      10.110.30.160            9091/TCP,15004/TCP,9093/TCP,42422/TCP                                                                                     3m
jaeger-agent               ClusterIP      None                     5775/UDP,6831/UDP,6832/UDP                                                                                                3m
jaeger-collector           ClusterIP      10.99.110.81             14267/TCP,14268/TCP                                                                                                       3m
jaeger-query               ClusterIP      10.109.57.170            16686/TCP                                                                                                                 3m
prometheus                 ClusterIP      10.100.13.165            9090/TCP                                                                                                                  3m
servicegraph               ClusterIP      10.96.61.245             8088/TCP                                                                                                                  3m
tracing                    ClusterIP      10.104.129.0             80/TCP                                                                                                                    3m
zipkin                     ClusterIP      10.108.134.206           9411/TCP

Since Istio v1.0.0 the common add-ons are bundled by default, so they do not need to be installed separately.

The two services below are worth a closer look; both can be viewed through a web UI.

$ kubectl -n istio-system get service | egrep 'prometheus|grafana'
grafana                    ClusterIP      10.106.69.220            3000/TCP                                                                                                                  33m
prometheus                 ClusterIP      10.100.13.165            9090/TCP                                                                    
# look up the prometheus and grafana pod names
$ kubectl   get pods -o wide -n istio-system  | egrep '^(prometheus|grafana)'
grafana-6cbdcfb45-dwqf6                     1/1       Running     0          44m       172.100.192.2    dbk8s-node-01
prometheus-84bd4b9796-zqqcm                 1/1       Running     0          44m       172.100.192.8    dbk8s-node-01

# set up two port-forward rules on the master node
$ kubectl port-forward prometheus-84bd4b9796-zqqcm 9090:9090 -n istio-system &
$ kubectl port-forward grafana-6cbdcfb45-dwqf6 3000:3000 -n istio-system &

Both UIs are then reachable locally at the URLs below

  • http://127.0.0.1:9090/targets
  • http://127.0.0.1:3000/d/3/pilot-dashboard?refresh=5s&orgId=1

6.6 Sidecar Injection

6.6.1 Deploying pods managed by Istio v1.0.2

  • Create the istio-example namespace
$ kubectl create namespace istio-example
namespace "istio-example" created

6.6.2 Adding the istio tools to PATH

$ cd ~/istio-1.0.2; cp bin/istioctl /usr/local/bin/
$ istioctl version
Version: 1.0.2
GitRevision: d639408fded355fb906ef2a1f9e8ffddc24c3d64
User: root@66ce69d4a51e
Hub: gcr.io/istio-release
GolangVersion: go1.10.1
BuildStatus: Clean

6.6.3 Creating an application with the Istio CLI

The nginx deployment used earlier serves as the example.

$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
            - name: web-root
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
        - name: web-root
          hostPath:
            path: /var/www/html

# running the first half of the pipeline by itself shows that the tool renders the existing manifest into a standard YAML file suitable for kubectl apply; the second half of the pipeline is the same as before (a sketch follows the output below)
$ istioctl kube-inject -f nginx-deployment.yaml | kubectl -n istio-example apply -f -
deployment.apps "nginx-deployment-example" created
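# a sketch: run only the injection step to inspect what gets added (the output file name is arbitrary)
$ istioctl kube-inject -f nginx-deployment.yaml > nginx-deployment-injected.yaml
# the injected manifest contains an extra istio-proxy sidecar container and an istio-init init container
$ grep 'image:' nginx-deployment-injected.yaml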
# check the pod status; below it is still in the Init phase
$ kubectl get pods -n istio-example -o wide
NAME                                        READY     STATUS     RESTARTS   AGE       IP        NODE
nginx-deployment-example-787df9b456-g4jsk   0/2       Init:0/1   0          6m            dbk8s-node-01

$ kubectl -n istio-example describe pods/nginx-deployment-example-787df9b456-g4jsk

... ...

Containers:
  nginx:
    Container ID:
    Image:          nginx:alpine
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    
    Mounts:
      /etc/nginx/conf.d from nginx-config (rw)
      /usr/share/nginx/html from web-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-k8xzf (ro)
  # note the extra istio-proxy container here
  istio-proxy:
    Container ID:
    Image:         gcr.io/istio-release/proxyv2:1.0.2
    Image ID:
    Port:          
    Host Port:     
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy

... ...

Delete the resources just created in the istio-example namespace

# just change apply to delete in the same pipeline
$ istioctl kube-inject -f nginx-deployment.yaml | kubectl -n istio-example delete -f -
deployment.apps "nginx-deployment-example" deleted
$ kubectl get pods -n istio-example -o wide
No resources found.

6.6.4 Creating an application with the Istio CLI: automatic injection

$ kubectl label namespace istio-example istio-injection=enabled
namespace "istio-example" labeled

$ kubectl -n istio-example create -f yamls/nginx-deployment.yaml
deployment.apps "nginx-deployment-example" created
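
To confirm that automatic injection worked, check the pod; as in the manual-injection case above, READY should report 2/2 once the nginx container and the injected istio-proxy sidecar are both up:

$ kubectl -n istio-example get pods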

This Istio example only covers the two ways of injecting the sidecar; a complete end-to-end walkthrough is left for a follow-up post.
