Based on the above, VMware's Harbor is used here as the registry.
MinIO is an object storage server released under Apache License v2.0
ref: https://min.io/download#/linux
$ wget https://dl.min.io/server/minio/release/linux-amd64/minio
$ chmod +x minio
$ cp minio /usr/local/bin
$ MINIO_DATA=${HOME}/.minio/data; mkdir -p ${MINIO_DATA}; minio server ${MINIO_DATA}
The latest MinIO release requires at least 4 machines for a distributed deployment; the IPs here are 172.16.81.161 - 172.16.81.164. The command below has to be run on every node.
# run this on every node
MINIO_ACCESS_KEY=admin MINIO_SECRET_KEY=mino_admin_123 nohup minio server \
http://172.16.81.161/root/.minio/data/ \
http://172.16.81.162/root/.minio/data/ \
http://172.16.81.163/root/.minio/data/ \
http://172.16.81.164/root/.minio/data/ &>> /var/log/minio.log &
# startup log
Waiting for a minimum of 2 disks to come online (elapsed 0s)
Waiting for a minimum of 2 disks to come online (elapsed 2s)
Waiting for all other servers to be online to format the disks.
Waiting for all other servers to be online to format the disks.
Status: 4 Online, 0 Offline.
Endpoint: http://172.16.81.161:9000 http://192.168.50.128:9000 http://10.0.0.1:9000 http://127.0.0.1:9000
AccessKey: admin
SecretKey: mino_admin_123
Browser Access:
http://172.16.81.161:9000 http://192.168.50.128:9000 http://10.0.0.1:9000 http://127.0.0.1:9000
Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://172.16.81.161:9000 admin mino_admin_123
Object API (Amazon S3 compatible):
Go: https://docs.min.io/docs/golang-client-quickstart-guide
Java: https://docs.min.io/docs/java-client-quickstart-guide
Python: https://docs.min.io/docs/python-client-quickstart-guide
JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
.NET: https://docs.min.io/docs/dotnet-client-quickstart-guide
Once the cluster has started, log in on port 9000 of any node using the AccessKey and SecretKey. Upload a file through one node and confirm that it is also visible from the other nodes.
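A quick way to verify this from the command line is the mc client; the bucket name and test file below are arbitrary:
$ mc config host add node1 http://172.16.81.161:9000 admin mino_admin_123
$ mc config host add node4 http://172.16.81.164:9000 admin mino_admin_123
$ mc mb node1/test-bucket
$ mc cp /etc/hostname node1/test-bucket/
# the object uploaded through node1 should also be listed via node4
$ mc ls node4/test-bucket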
$ apt install redis-server
# change `bind 127.0.0.1` in the Redis config to the server's own address (e.g. `bind 192.168.x.x`) so it is reachable from outside
$ vim /etc/redis/redis.conf
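A minimal sketch of the change (the address used here is the Redis host referenced later in the Harbor config; substitute this server's own IP):
$ sed -i 's/^bind 127.0.0.1.*/bind 172.16.81.163/' /etc/redis/redis.conf
$ systemctl restart redis-server
# from another host, this should answer PONG
$ redis-cli -h 172.16.81.163 ping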
MariaDB (mariadb-server) is used here. One gotcha: port 3306 is bound to 127.0.0.1 by default, so other hosts cannot connect; the configuration file below has to be changed.
$ apt install mariadb-server
# change this address to the server's externally reachable address, then restart the service
$ grep '^bind' /etc/mysql/mariadb.conf.d/50-server.cnf
bind-address = 172.16.81.162
MariaDB [(none)]> create user harbor identified by '17arBor1@3';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show create database mysql;
+----------+-------------------------------------------------------------------+
| Database | Create Database |
+----------+-------------------------------------------------------------------+
| mysql | CREATE DATABASE `mysql` /*!40100 DEFAULT CHARACTER SET utf8mb4 */ |
+----------+-------------------------------------------------------------------+
1 row in set (0.00 sec)
MariaDB [(none)]> create database registry;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> select user,host from mysql.user;
+--------+-----------+
| user | host |
+--------+-----------+
| harbor | % |
| root | localhost |
+--------+-----------+
2 rows in set (0.00 sec)
MariaDB [(none)]> grant all on registry.* to 'harbor';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show grants for 'harbor';
+-------------------------------------------------------------------------------------------------------+
| Grants for harbor@% |
+-------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'harbor'@'%' IDENTIFIED BY PASSWORD '*43ECB3C9353A949CE36173D3613955766003A2B1' |
| GRANT ALL PRIVILEGES ON `registry`.* TO 'harbor'@'%' |
+-------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
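A quick sanity check from another machine, using the harbor account created above, confirms that both the bind-address change and the grant took effect (the registry database should appear in the output):
$ mysql -h 172.16.81.162 -u harbor -p -e 'show databases;'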
Harbor needs to be installed on every node; only one node is shown here.
ref: https://docs.docker.com/compose/install/
$ curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ docker-compose version
docker-compose version 1.24.0, build 0aa59064
docker-py version: 3.7.2
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
Version 1.5.3 is used as the example; download the offline installer.
ref: https://github.com/goharbor/harbor/blob/master/docs/installation_guide.md
$ wget https://storage.googleapis.com/harbor-releases/harbor-offline-installer-v1.5.3.tgz
# extract to /opt and cd into the directory, e.g.
$ cd /opt/harbor/ha
# edit the SQL definitions in registry.sql, replacing every 256 with 255 (a sed one-liner is shown below), then import it into the registry database
$ mysql -u harbor -h 172.16.81.162 -p registry < registry.sql
Enter password:
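For reference, the blanket 256→255 replacement can be done with one sed pass before running the import above (it literally rewrites every occurrence of 256 in the file, as described):
$ sed -i 's/256/255/g' registry.sql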
# changes made to harbor.cfg (diff against the original)
$ diff harbor.cfg harbor.cfg.ori
7c7
< hostname = hub.hienha.org:8070
---
> hostname = reg.mydomain.com
130c130
< db_host = 172.16.81.162
---
> db_host = mysql
133c133
< db_password = 123456
---
> db_password = root123
139c139
< db_user = habor
---
> db_user = root
145c145
< redis_url = 172.16.81.163:6379
---
> redis_url = redis:6379
177c177
# MinIO is used for storage, so the provider is s3
< registry_storage_provider_name = s3
---
> registry_storage_provider_name = filesystem
180c180
# accesskey and secretkey below are the MinIO key and secret. These settings are rendered into a YAML file, so the space after each ':' is required; the generated files live under `~path/harbor/common/config/`
< registry_storage_provider_config = accesskey: admin,secretkey: mino_admin_123,region: us-east-1,regionendpoint: http://172.16.81.161:9000, bucket: harbor,encrypt: false,secure: false,chunksize: 5242880,rootdirectory: /
---
> registry_storage_provider_config =
$ diff docker-compose.yml docker-compose.yml.ori
138c138
< - 8070:80 # 8070 here matches the port used in harbor.cfg above
---
> - 80:80
$ cd /opt/harbor
# external MySQL and Redis are used, so the --ha flag is required
$ ./install.sh --ha
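Once install.sh has run, it is worth checking that the s3 settings from harbor.cfg were rendered into the registry's generated configuration (the path follows the note above; the exact surrounding keys may differ slightly):
# the accesskey/secretkey/regionendpoint/bucket values should appear as proper "key: value" pairs
$ grep -A 10 's3:' /opt/harbor/common/config/registry/config.yml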
Check the running containers:
$ docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver /harbor/start.sh Up (health: starting)
harbor-jobservice /harbor/start.sh Up
harbor-log /bin/sh -c /usr/local/bin/ ... Up (healthy) 127.0.0.1:1514->10514/tcp
harbor-ui /harbor/start.sh Up (health: starting)
nginx nginx -g daemon off; Up (unhealthy) 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:80->80/tcp
After adding the hosts entry, Harbor can be tested at http://hub.hienha.org:8070; the default username is admin and the password is Harbor12345.
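The hosts entry itself is just a mapping from the hostname in harbor.cfg to whichever node Harbor runs on (the IP below is an assumption; adjust to your node):
$ echo '172.16.81.161 hub.hienha.org' >> /etc/hosts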
If errors occur during installation, check the logs under /var/log/harbor; the command below also helps determine which container is failing:
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver /harbor/start.sh Up (healthy)
harbor-jobservice /harbor/start.sh Up
harbor-log /bin/sh -c /usr/local/bin/ ... Up (healthy) 127.0.0.1:1514->10514/tcp
harbor-ui /harbor/start.sh Up (healthy)
nginx nginx -g daemon off; Up (healthy) 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:8070->80/tcp
registry /entrypoint.sh serve /etc/ ... Up (healthy) 5000/tcp
Installation on the remaining nodes is the same as on node1: scp the files over and run ./install.sh --ha, after installing docker-compose first.
$ docker login hub.hienha.org:8070
$ docker image tag kube-apiserver-amd64:v1.10.3 hub.hienha.org:8070/foo/kube-apiserver-amd64:v1.10.3
$ docker push hub.hienha.org:8070/foo/kube-apiserver-amd64:v1.10.3
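For the docker login/push above to work against a registry served over plain HTTP (port 8070 here), the Docker daemon on each client typically has to trust it as an insecure registry; a minimal /etc/docker/daemon.json sketch:
$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["hub.hienha.org:8070"]
}
$ systemctl restart docker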
Deploy Elasticsearch as a container
ref: https://www.elastic.co/guide/en/elasticsearch/reference/6.2/docker.html
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:6.2.4
$ mkdir -p ~/.data/es_data
$ chmod g+rwx ~/.data/es_data/
$ chown 1000:1000 ~/.data/es_data/
$ docker run -d --restart=unless-stopped -p 9200:9200 -p 9300:9300 -v /root/.data/es_data/:/usr/share/elasticsearch/data --ulimit nofile=65536:65536 -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4
# allocate memory sensibly, otherwise ES will not start properly
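If the VM is short on memory, one common approach is to cap the JVM heap through ES_JAVA_OPTS; a variant of the command above with illustrative 512m values:
$ docker run -d --restart=unless-stopped -p 9200:9200 -p 9300:9300 \
    -v /root/.data/es_data/:/usr/share/elasticsearch/data \
    --ulimit nofile=65536:65536 --ulimit memlock=-1:-1 \
    -e "bootstrap.memory_lock=true" -e "discovery.type=single-node" \
    -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
    docker.elastic.co/elasticsearch/elasticsearch:6.2.4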
$ curl http://127.0.0.1:9200/_cat/health
1559030293 07:58:13 docker-cluster green 1 1 1 1 0 0 0 0 - 100.0%
Fluentd is deployed as a DaemonSet.
$ wget https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml
$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
66c66
< value: "172.16.81.164" # the Elasticsearch address
---
> value: "elasticsearch-logging"
$ kubectl apply -f fluentd-daemonset-elasticsearch-rbac.yaml -n kube-system
serviceaccount "fluentd" created
clusterrole.rbac.authorization.k8s.io "fluentd" created
clusterrolebinding.rbac.authorization.k8s.io "fluentd" created
daemonset.extensions "fluentd" created
# as shown below, fluentd is now running
$ kubectl get pod -o wide -n kube-system | grep fluent
fluentd-df9gx 1/1 Running 1 2m 192.168.50.97 dbk8s-node-02
fluentd-rtjch 1/1 Running 0 2m 192.168.50.17 dbk8s-node-01
fluentd-ss86h 1/1 Running 1 2m 192.168.50.129 dbk8s-master
# startup log
2019-05-29 08:37:36 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:38:06 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:38:36 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:39:06 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:39:36 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 08:40:07 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5
2019-05-29 06:31:57 +0000 [error]: unexpected error error_class=Errno::EACCES error=#
This is a permissions problem; the fix is to run fluentd as a specific user via FLUENT_UID, as follows:
$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
65,66d64
< - name: FLUENT_UID
< value: "0"
68c66
< value: "172.16.81.164"
---
> value: "elasticsearch-logging"
ref: https://github.com/fluent/fluentd-kubernetes-daemonset/commit/694ff3a79f7c09b6bb7f740da07e1a75ad1f3aa7
Could not push logs to Elasticsearch, resetting connection and trying again
$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
65,72d64
< - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
< value: "false"
< - name: FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR
< value: "true"
< - name: FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE
< value: "true"
< - name: FLUENT_UID
< value: "0"
74c66
< value: "172.16.81.164"
---
> value: "elasticsearch-logging"
ref:
[warn]: temporarily failed to flush the buffer. next_retry=2019-05-29 07:58:22 +0000 error_class="MultiJson::AdapterError" error="Did not recognize your adapter specification (cannot load such file -- bigdecimal)." plugin_id="out_es"
The root cause is that the image's Ruby environment is missing dependencies, so switch to a different image: delete the original pods and redeploy.
$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
63,64c63
< # image: fluent/fluentd-kubernetes-daemonset:elasticsearch
< image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch # use the v1.4-debian-elasticsearch image instead
---
> image: fluent/fluentd-kubernetes-daemonset:elasticsearch
66,75d64
< - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
< value: "false"
< - name: FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR
< value: "true"
< - name: FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE
< value: "true"
< - name: FLUENT_UID
< value: "0"
< - name: FLUENTD_SYSTEMD_CONF
< value: "disable"
77c66
< value: "172.16.81.164"
---
> value: "elasticsearch-logging"
$ kubectl -n kube-system delete -f fluentd-daemonset-elasticsearch-rbac.yaml
serviceaccount "fluentd" deleted
clusterrole.rbac.authorization.k8s.io "fluentd" deleted
clusterrolebinding.rbac.authorization.k8s.io "fluentd" deleted
daemonset.extensions "fluentd" deleted
$ kubectl -n kube-system apply -f fluentd-daemonset-elasticsearch-rbac.yaml
serviceaccount "fluentd" created
clusterrole.rbac.authorization.k8s.io "fluentd" created
clusterrolebinding.rbac.authorization.k8s.io "fluentd" created
daemonset.extensions "fluentd" created
ref: https://github.com/fluent/fluentd-kubernetes-daemonset/issues/230
in_systemd_kubelet] Systemd::JournalError: No such file or directory retrying in
Again the fix is in fluentd-daemonset-elasticsearch-rbac.yaml; the final set of changes is shown below.
$ diff fluentd-daemonset-elasticsearch-rbac.yaml fluentd-daemonset-elasticsearch-rbac.yaml.ori
63,64c63
< # image: fluent/fluentd-kubernetes-daemonset:elasticsearch
< image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch
---
> image: fluent/fluentd-kubernetes-daemonset:elasticsearch
66,75d64
< - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
< value: "false"
< - name: FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR
< value: "true"
< - name: FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE
< value: "true"
< - name: FLUENT_UID
< value: "0"
< - name: FLUENTD_SYSTEMD_CONF # disable collection of systemd messages
< value: "disable"
77c66
< value: "172.16.81.164"
---
> value: "elasticsearch-logging"
ref: https://github.com/fluent/fluentd-kubernetes-daemonset#disable-systemd-input
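Before wiring up Kibana, it can be worth confirming that fluentd is actually creating indices in Elasticsearch (the logstash-* name assumes the daemonset's default logstash_format settings):
$ curl -s 'http://172.16.81.164:9200/_cat/indices?v' | grep logstash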
$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml
$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml
The changes needed are as follows:
$ diff kibana-deployment.yaml kibana-deployment.yaml.ori
23c23
< image: docker.elastic.co/kibana/kibana:6.2.4 # keep the same version as Elasticsearch
---
> image: docker.elastic.co/kibana/kibana-oss:6.6.1
32c32
< value: http://172.16.81.164:9200 # the Elasticsearch address
---
> value: http://elasticsearch-logging:9200
$ kubectl apply -f kibana-deployment.yaml
deployment.apps "kibana-logging" created
$ kubectl apply -f kibana-service.yaml
service "kibana-logging" created
The logs show that Kibana has started:
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-29T09:07:26Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Add a proxy on the master node, otherwise Kibana cannot be reached from outside:
# the command below sets the proxy listen address and which hosts are allowed to connect
$ kubectl proxy --address=172.16.81.161 --accept-hosts='^.*$'
Then try opening Kibana in a browser with a URL like the following:
http://172.16.81.161:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/home?_g=()
An alternative is kubectl port-forward (this approach did not work in my tests):
$ kubectl port-forward kibana-logging-597b75c4f7-r4xl6 5601:5601 -n kube-system &
$ netstat -lnp | grep 5601
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 11780/kubectl
tcp6 0 0 ::1:5601 :::* LISTEN 11780/kubectl
As shown, it only binds to the loopback address, so extra port forwarding (for a VM) or a proxy such as nginx is still needed; one simple option is sketched below.
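An SSH tunnel from the workstation to the master needs no extra software; user and host below are placeholders for your environment:
$ ssh -L 5601:127.0.0.1:5601 root@172.16.81.161
# then open http://127.0.0.1:5601 locally while the tunnel is up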
After logging in (Kibana has no authentication by default), the cluster logs can be viewed once an index pattern has been created in the Kibana UI.
Service governance with the Istio service mesh
$ wget https://github.com/istio/istio/releases/download/1.0.2/istio-1.0.2-linux.tar.gz
$ tar xzf istio-1.0.2-linux.tar.gz
$ cd ~/soft/istio-1.0.2
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "destinationrules.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "serviceentries.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "gateways.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "envoyfilters.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "policies.authentication.istio.io" created
customresourcedefinition.apiextensions.k8s.io "meshpolicies.authentication.istio.io" created
customresourcedefinition.apiextensions.k8s.io "httpapispecbindings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "httpapispecs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotaspecbindings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotaspecs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rules.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "attributemanifests.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "bypasses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "circonuses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "deniers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "fluentds.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "kubernetesenvs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "listcheckers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "memquotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "noops.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "opas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "prometheuses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rbacs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "redisquotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicecontrols.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "signalfxs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "solarwindses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "stackdrivers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "statsds.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "stdios.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "apikeys.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "authorizations.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "checknothings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "kuberneteses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "listentries.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "logentries.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "edges.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "metrics.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "reportnothings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicecontrolreports.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "tracespans.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rbacconfigs.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "serviceroles.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicerolebindings.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "adapters.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "instances.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "templates.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "handlers.config.istio.io" created
# list the CRDs
$ kubectl get crd -n istio-system
NAME AGE
adapters.config.istio.io 1m
apikeys.config.istio.io 1m
attributemanifests.config.istio.io 1m
authorizations.config.istio.io 1m
bypasses.config.istio.io 1m
checknothings.config.istio.io 1m
circonuses.config.istio.io 1m
deniers.config.istio.io 1m
destinationrules.networking.istio.io 1m
edges.config.istio.io 1m
envoyfilters.networking.istio.io 1m
fluentds.config.istio.io 1m
gateways.networking.istio.io 1m
handlers.config.istio.io 1m
httpapispecbindings.config.istio.io 1m
httpapispecs.config.istio.io 1m
instances.config.istio.io 1m
kubernetesenvs.config.istio.io 1m
kuberneteses.config.istio.io 1m
listcheckers.config.istio.io 1m
listentries.config.istio.io 1m
logentries.config.istio.io 1m
memquotas.config.istio.io 1m
meshpolicies.authentication.istio.io 1m
metrics.config.istio.io 1m
noops.config.istio.io 1m
opas.config.istio.io 1m
policies.authentication.istio.io 1m
prometheuses.config.istio.io 1m
quotas.config.istio.io 1m
quotaspecbindings.config.istio.io 1m
quotaspecs.config.istio.io 1m
rbacconfigs.rbac.istio.io 1m
rbacs.config.istio.io 1m
redisquotas.config.istio.io 1m
reportnothings.config.istio.io 1m
rules.config.istio.io 1m
servicecontrolreports.config.istio.io 1m
servicecontrols.config.istio.io 1m
serviceentries.networking.istio.io 1m
servicerolebindings.rbac.istio.io 1m
serviceroles.rbac.istio.io 1m
signalfxs.config.istio.io 1m
solarwindses.config.istio.io 1m
stackdrivers.config.istio.io 1m
statsds.config.istio.io 1m
stdios.config.istio.io 1m
templates.config.istio.io 1m
tracespans.config.istio.io 1m
virtualservices.networking.istio.io 1m
Pull the core images (this has to be done on every worker node); the loop below retags them under the gcr.io/istio-release name that the manifests reference.
istio_ver=1.0.2
istio_images=(citadel \
pilot \
proxy_debug \
proxy_init \
proxyv2 \
grafana \
galley \
sidecar_injector \
mixer \
servicegraph \
)
for img in "${istio_images[@]}"; do
    image="istio/${img}:${istio_ver}"
    docker image pull ${image}
    # retag under the gcr.io/istio-release name (with the same version tag) that the manifests use
    docker image tag ${image} gcr.io/istio-release/${img}:${istio_ver}
    docker image rm ${image}
done
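After the loop finishes on a node, a quick check that every image is present under the expected name and tag:
$ docker images | grep 'gcr.io/istio-release'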
$ kubectl apply -f install/kubernetes/istio-demo.yaml
namespace "istio-system" created
configmap "istio-galley-configuration" created
configmap "istio-grafana-custom-resources" created
configmap "istio-statsd-prom-bridge" created
configmap "prometheus" created
configmap "istio-security-custom-resources" created
configmap "istio" created
configmap "istio-sidecar-injector" created
serviceaccount "istio-galley-service-account" created
serviceaccount "istio-egressgateway-service-account" created
serviceaccount "istio-ingressgateway-service-account" created
serviceaccount "istio-grafana-post-install-account" created
clusterrole.rbac.authorization.k8s.io "istio-grafana-post-install-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-grafana-post-install-role-binding-istio-system" created
job.batch "istio-grafana-post-install" created
serviceaccount "istio-mixer-service-account" created
serviceaccount "istio-pilot-service-account" created
serviceaccount "prometheus" created
serviceaccount "istio-cleanup-secrets-service-account" created
clusterrole.rbac.authorization.k8s.io "istio-cleanup-secrets-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-cleanup-secrets-istio-system" created
job.batch "istio-cleanup-secrets" created
serviceaccount "istio-citadel-service-account" created
serviceaccount "istio-sidecar-injector-service-account" created
customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "destinationrules.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "serviceentries.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "gateways.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "envoyfilters.networking.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "httpapispecbindings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "httpapispecs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "quotaspecbindings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "quotaspecs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "rules.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "attributemanifests.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "bypasses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "circonuses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "deniers.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "fluentds.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "kubernetesenvs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "listcheckers.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "memquotas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "noops.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "opas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "prometheuses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "rbacs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "redisquotas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "servicecontrols.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "signalfxs.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "solarwindses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "stackdrivers.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "statsds.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "stdios.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "apikeys.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "authorizations.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "checknothings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "kuberneteses.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "listentries.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "logentries.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "edges.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "metrics.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "quotas.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "reportnothings.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "servicecontrolreports.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "tracespans.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "rbacconfigs.rbac.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "serviceroles.rbac.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "servicerolebindings.rbac.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "adapters.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "instances.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "templates.config.istio.io" configured
customresourcedefinition.apiextensions.k8s.io "handlers.config.istio.io" configured
clusterrole.rbac.authorization.k8s.io "istio-galley-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-egressgateway-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-ingressgateway-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-mixer-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-pilot-istio-system" created
clusterrole.rbac.authorization.k8s.io "prometheus-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-citadel-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-sidecar-injector-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-galley-admin-role-binding-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-egressgateway-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-ingressgateway-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-mixer-admin-role-binding-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-pilot-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "prometheus-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-citadel-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-sidecar-injector-admin-role-binding-istio-system" created
service "istio-galley" created
service "istio-egressgateway" created
service "istio-ingressgateway" created
service "grafana" created
service "istio-policy" created
service "istio-telemetry" created
service "istio-statsd-prom-bridge" created
deployment.extensions "istio-statsd-prom-bridge" created
service "istio-pilot" created
service "prometheus" created
service "istio-citadel" created
service "servicegraph" created
service "istio-sidecar-injector" created
deployment.extensions "istio-galley" created
deployment.extensions "istio-egressgateway" created
deployment.extensions "istio-ingressgateway" created
deployment.extensions "grafana" created
deployment.extensions "istio-policy" created
deployment.extensions "istio-telemetry" created
deployment.extensions "istio-pilot" created
deployment.extensions "prometheus" created
deployment.extensions "istio-citadel" created
deployment.extensions "servicegraph" created
deployment.extensions "istio-sidecar-injector" created
deployment.extensions "istio-tracing" created
gateway.networking.istio.io "istio-autogenerated-k8s-ingress" created
horizontalpodautoscaler.autoscaling "istio-egressgateway" created
horizontalpodautoscaler.autoscaling "istio-ingressgateway" created
horizontalpodautoscaler.autoscaling "istio-policy" created
horizontalpodautoscaler.autoscaling "istio-telemetry" created
horizontalpodautoscaler.autoscaling "istio-pilot" created
service "jaeger-query" created
service "jaeger-collector" created
service "jaeger-agent" created
service "zipkin" created
service "tracing" created
mutatingwebhookconfiguration.admissionregistration.k8s.io "istio-sidecar-injector" created
attributemanifest.config.istio.io "istioproxy" created
attributemanifest.config.istio.io "kubernetes" created
stdio.config.istio.io "handler" created
logentry.config.istio.io "accesslog" created
logentry.config.istio.io "tcpaccesslog" created
rule.config.istio.io "stdio" created
rule.config.istio.io "stdiotcp" created
metric.config.istio.io "requestcount" created
metric.config.istio.io "requestduration" created
metric.config.istio.io "requestsize" created
metric.config.istio.io "responsesize" created
metric.config.istio.io "tcpbytesent" created
metric.config.istio.io "tcpbytereceived" created
prometheus.config.istio.io "handler" created
rule.config.istio.io "promhttp" created
rule.config.istio.io "promtcp" created
kubernetesenv.config.istio.io "handler" created
rule.config.istio.io "kubeattrgenrulerule" created
rule.config.istio.io "tcpkubeattrgenrulerule" created
kubernetes.config.istio.io "attributes" created
destinationrule.networking.istio.io "istio-policy" created
destinationrule.networking.istio.io "istio-telemetry" created
# some entries below show Completed: these are Job pods that have already finished
$ kubectl get pods -n istio-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
grafana-6cbdcfb45-dwqf6 1/1 Running 0 31m 172.100.192.2 dbk8s-node-01
istio-citadel-6b6fdfdd6f-6h2x2 1/1 Running 0 31m 172.100.192.6 dbk8s-node-01
istio-cleanup-secrets-vdnfd 0/1 Completed 0 31m 172.100.240.3 dbk8s-node-02
istio-egressgateway-56bdd5fcfb-6ck5j 1/1 Running 0 31m 172.100.192.4 dbk8s-node-01
istio-galley-96464ff6-cd4n7 1/1 Running 0 31m 172.100.240.5 dbk8s-node-02
istio-grafana-post-install-tspwc 0/1 Completed 0 31m 172.100.192.2 dbk8s-node-01
istio-ingressgateway-7f4dd7d699-x2rjp 1/1 Running 0 31m 172.100.192.5 dbk8s-node-01
istio-pilot-6f8d49d4c4-2rl6d 2/2 Running 0 31m 172.100.240.4 dbk8s-node-02
istio-policy-67f4d49564-c92kq 2/2 Running 0 31m 172.100.240.3 dbk8s-node-02
istio-sidecar-injector-69c4bc7974-fzthd 1/1 Running 0 31m 172.100.192.9 dbk8s-node-01
istio-statsd-prom-bridge-7f44bb5ddb-mtrms 1/1 Running 0 31m 172.100.192.3 dbk8s-node-01
istio-telemetry-76869cd64f-2qls7 2/2 Running 0 31m 172.100.192.7 dbk8s-node-01
istio-tracing-ff94688bb-rwfw2 1/1 Running 0 31m 172.100.192.11 dbk8s-node-01
prometheus-84bd4b9796-zqqcm 1/1 Running 0 31m 172.100.192.8 dbk8s-node-01
servicegraph-c6456d6f5-phv7j 1/1 Running 0 31m 172.100.192.10 dbk8s-node-01
$ kubectl get service -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.106.69.220 3000/TCP 3m
istio-citadel ClusterIP 10.103.36.221 8060/TCP,9093/TCP 3m
istio-egressgateway ClusterIP 10.107.41.248 80/TCP,443/TCP 3m
istio-galley ClusterIP 10.101.181.251 443/TCP,9093/TCP 3m
istio-ingressgateway LoadBalancer 10.108.31.91 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30164/TCP,8060:31908/TCP,853:30900/TCP,15030:31201/TCP,15031:30619/TCP 3m
istio-pilot ClusterIP 10.101.199.175 15010/TCP,15011/TCP,8080/TCP,9093/TCP 3m
istio-policy ClusterIP 10.102.208.233 9091/TCP,15004/TCP,9093/TCP 3m
istio-sidecar-injector ClusterIP 10.100.197.136 443/TCP 3m
istio-statsd-prom-bridge ClusterIP 10.110.224.158 9102/TCP,9125/UDP 3m
istio-telemetry ClusterIP 10.110.30.160 9091/TCP,15004/TCP,9093/TCP,42422/TCP 3m
jaeger-agent ClusterIP None 5775/UDP,6831/UDP,6832/UDP 3m
jaeger-collector ClusterIP 10.99.110.81 14267/TCP,14268/TCP 3m
jaeger-query ClusterIP 10.109.57.170 16686/TCP 3m
prometheus ClusterIP 10.100.13.165 9090/TCP 3m
servicegraph ClusterIP 10.96.61.245 8088/TCP 3m
tracing ClusterIP 10.104.129.0 80/TCP 3m
zipkin ClusterIP 10.108.134.206 9411/TCP
Since Istio v1.0.0 the common addons are bundled by default, so no extra installation is needed.
The two services below are the ones to pay attention to; both can be viewed through a web UI.
$ kubectl -n istio-system get service | egrep 'prometheus|grafana'
grafana ClusterIP 10.106.69.220 3000/TCP 33m
prometheus ClusterIP 10.100.13.165 9090/TCP
# find the Prometheus and Grafana pod names
$ kubectl get pods -o wide -n istio-system | egrep '^(prometheus|grafana)'
grafana-6cbdcfb45-dwqf6 1/1 Running 0 44m 172.100.192.2 dbk8s-node-01
prometheus-84bd4b9796-zqqcm 1/1 Running 0 44m 172.100.192.8 dbk8s-node-01
# set up two port-forward rules on the master node
$ kubectl port-forward prometheus-84bd4b9796-zqqcm 9090:9090 -n istio-system &
$ kubectl port-forward grafana-6cbdcfb45-dwqf6 3000:3000 -n istio-system &
They can then be reached locally at http://127.0.0.1:9090 (Prometheus) and http://127.0.0.1:3000 (Grafana).
$ kubectl create namespace istio-example
namespace "istio-example" created
$ cd ~/soft/istio-1.0.2; cp bin/istioctl /usr/local/bin/
$ istioctl version
Version: 1.0.2
GitRevision: d639408fded355fb906ef2a1f9e8ffddc24c3d64
User: root@66ce69d4a51e
Hub: gcr.io/istio-release
GolangVersion: go1.10.1
BuildStatus: Clean
The nginx Deployment used earlier serves as the example:
$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
        - name: web-root
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
      - name: web-root
        hostPath:
          path: /var/www/html
# running just the first half of the pipeline shows that istioctl renders the existing manifest into standard YAML (with the sidecar injected) for kubectl apply; the rest of the pipeline is the same as before
$ istioctl kube-inject -f nginx-deployment.yaml | kubectl -n istio-example apply -f -
deployment.apps "nginx-deployment-example" created
# check the pod status; it is still initializing below
$ kubectl get pods -n istio-example -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-example-787df9b456-g4jsk 0/2 Init:0/1 0 6m dbk8s-node-01
$ kubectl -n istio-example describe pods/nginx-deployment-example-787df9b456-g4jsk
... ...
Containers:
nginx:
Container ID:
Image: nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/etc/nginx/conf.d from nginx-config (rw)
/usr/share/nginx/html from web-root (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k8xzf (ro)
# note the extra istio-proxy container
istio-proxy:
Container ID:
Image: gcr.io/istio-release/proxyv2:1.0.2
Image ID:
Port:
Host Port:
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
... ...
Delete the Deployment just created in the istio-example namespace.
# simply replace the earlier apply with delete
$ istioctl kube-inject -f nginx-deployment.yaml | kubectl -n istio-example delete -f -
deployment.apps "nginx-deployment-example" deleted
$ kubectl get pods -n istio-example -o wide
No resources found.
The second injection method is automatic: label the namespace and the sidecar is injected into any newly created pods.
$ kubectl label namespace istio-example istio-injection=enabled
namespace "istio-example" labeled
$ kubectl -n istio-example create -f yamls/nginx-deployment.yaml
deployment.apps "nginx-deployment-example" created
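To confirm that automatic injection actually happened, check the namespace label and the container count of the new pod (2/2 READY means the istio-proxy sidecar is present):
$ kubectl get namespace istio-example -L istio-injection
$ kubectl -n istio-example get pods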
These Istio examples only cover the two ways of injecting the sidecar; a complete end-to-end example will be added later, or written up separately.