This post documents installing Helm, Istio, and Kiali on a Kubernetes cluster, as a hands-on way to learn the related concepts.
Prerequisite: a running Kubernetes cluster.
For cluster setup you can refer to the earlier post: Arm64架构(MacBookPro M1)虚拟机安装k8s1.27.3版本记录及问题总结 (installing Kubernetes 1.27.3 in arm64 VMs on a MacBook Pro M1, with the issues encountered).
Helm is the package manager for Kubernetes clusters; with Helm we can install applications onto the cluster from charts.
Istio is a powerful service mesh platform that gives a microservice architecture a rich set of tools for simplifying and strengthening communication between services, security, and observability.
The Kiali dashboard shows an overview of the mesh and the relationships between the services of the Bookinfo sample application. It also provides filters for visualizing the flow of traffic.
Helm version support policy: https://helm.sh/zh/docs/topics/version_skew/
Istio version support policy: https://istio.io/latest/zh/docs/releases/supported-releases/
Helm official docs: https://helm.sh/zh/docs/intro/quickstart/
Installing Helm is simple: a single command fetches and runs the official install script. If you need a specific version, every Helm release also provides prebuilt binaries for the common operating systems at https://github.com/helm/helm/releases; download the archive manually, unpack it, and move the helm binary to the desired directory (e.g. mv linux-amd64/helm /usr/local/bin/helm).
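As an illustration, a manual install of the v3.12.1 arm64 build (the same release the script picks up below) might look like this; adjust the version and architecture to match your machine:
# Download a specific release archive and put the binary on the PATH
curl -LO https://get.helm.sh/helm-v3.12.1-linux-arm64.tar.gz
tar -zxvf helm-v3.12.1-linux-arm64.tar.gz
mv linux-arm64/helm /usr/local/bin/helm
helm version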
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
[root@k8s-master ~]# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 11345 100 11345 0 0 8068 0 0:00:01 0:00:01 --:--:-- 8063
[WARNING] Could not find git. It is required for plugin installation.
Downloading https://get.helm.sh/helm-v3.12.1-linux-arm64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
[root@k8s-master ~]# helm version
version.BuildInfo{Version:"v3.12.1", GitCommit:"f32a527a060157990e2aa86bf45010dfb3cc8b8d", GitTreeState:"clean", GoVersion:"go1.20.4"}
[root@k8s-master ~]# helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
[root@k8s-master ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
[root@k8s-master ~]#
[root@k8s-master ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
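The bitnami repo is not used further in this post, but as a quick sketch of what installing an application through Helm looks like (the release name and chart below are only an example):
# Install the bitnami nginx chart as a release named my-nginx
helm install my-nginx bitnami/nginx
# List releases and clean up when done
helm ls
helm uninstall my-nginx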
Istio official docs: https://istio.io/latest/zh/docs/setup/getting-started/
Istio can also be installed by piping the download script into a shell, but in my case the network could not reach the download host, so this route failed:
[root@k8s-master ~]# curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 102 100 102 0 0 359 0 --:--:-- --:--:-- --:--:-- 359
0 0 0 0 0 0 0 0 --:--:-- 0:01:36 --:--:-- 0
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
[root@k8s-master ~]# cat >> /etc/hosts << EOF
> 75.2.60.5 istio.io
> EOF
[root@k8s-master ~]# curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 102 100 102 0 0 337 0 --:--:-- --:--:-- --:--:-- 337
0 0 0 0 0 0 0 0 --:--:-- 0:01:32 --:--:-- 0
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
The only option left was to download the release archive and upload it to the VM for deployment.
Release downloads: https://github.com/istio/istio/releases/tag/1.18.0
Download the build matching your platform, unpack it with tar -zxvf, change into the extracted directory, and add its bin/ directory (which contains istioctl) to the PATH.
The installation below uses the demo configuration profile, a feature set put together specifically for testing; other profiles exist for production or performance testing.
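If you want to see which profiles ship with istioctl and what the demo profile expands to before installing, istioctl has subcommands for that (run them after putting bin/ on the PATH as shown below; output omitted here):
# List the built-in configuration profiles (default, demo, minimal, ...)
istioctl profile list
# Dump the full IstioOperator configuration behind the demo profile
istioctl profile dump demo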
cd istio-1.18.0
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
Then label the namespace to tell Istio to automatically inject the Envoy sidecar proxy when applications are deployed into it:
kubectl label namespace default istio-injection=enabled
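To confirm the label took effect (and, later, that freshly created Pods come up with two containers, the application plus the sidecar), a label-column listing is enough:
# Show the istio-injection label for every namespace
kubectl get namespace -L istio-injection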
Then deploy the official Bookinfo demo:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
[root@k8s-master ~]# ll | grep istio
-rw-r--r--. 1 root root 25307383 7月 5 23:16 istio-1.18.0-linux-arm64.tar.gz
[root@k8s-master ~]# tar -zxvf istio-1.18.0-linux-arm64.tar.gz
[root@k8s-master ~]# ll | grep istio
drwxr-x---. 6 root root 115 6月 7 16:01 istio-1.18.0
-rw-r--r--. 1 root root 25307383 7月 5 23:16 istio-1.18.0-linux-arm64.tar.gz
[root@k8s-master ~]# cd istio-1.18.0
[root@k8s-master istio-1.18.0]# ls
bin LICENSE manifests manifest.yaml README.md samples tools
[root@k8s-master istio-1.18.0]# export PATH=$PWD/bin:$PATH
[root@k8s-master istio-1.18.0]# istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete Making this installation the default for injection and validation.
[root@k8s-master istio-1.18.0]# kubectl label namespace default istio-injection=enabled
namespace/default labeled
[root@k8s-master istio-1.18.0]# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
[root@k8s-master istio-1.18.0]#
The application starts up quickly. As each Pod becomes ready, the Istio sidecar is deployed alongside it.
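If you would rather block until everything is up instead of polling kubectl get pods, a kubectl wait along these lines works (the timeout value is arbitrary):
# Wait until every Pod in the default namespace reports Ready
kubectl wait --for=condition=Ready pods --all --timeout=300s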
[root@k8s-master istio-1.18.0]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.111.93.162 <none> 9080/TCP 12m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
productpage ClusterIP 10.97.94.189 <none> 9080/TCP 12m
ratings ClusterIP 10.106.155.115 <none> 9080/TCP 12m
reviews ClusterIP 10.106.49.5 <none> 9080/TCP 12m
[root@k8s-master istio-1.18.0]# kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-7c7dbcb4b5-jx866 2/2 Running 0 12m
productpage-v1-664d44d68d-v722l 2/2 Running 0 12m
ratings-v1-844796bf85-kktgq 2/2 Running 0 12m
reviews-v1-5cf854487-gn6xv 2/2 Running 0 12m
reviews-v2-955b74755-rp9b5 2/2 Running 0 12m
reviews-v3-797fc48bc9-wspwt 2/2 Running 0 12m
[root@k8s-master istio-1.18.0]#
Once the steps above have completed successfully, run the following command to verify that the application is running inside the cluster and serving web pages, by checking the title of the returned page:
[root@k8s-master istio-1.18.0]# kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
[root@k8s-master istio-1.18.0]#
At this point the Bookinfo application is deployed but cannot yet be reached from outside the cluster. To open it up, create an Istio ingress gateway, which maps a path to a route at the edge of the mesh.
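For reference, samples/bookinfo/networking/bookinfo-gateway.yaml defines a Gateway bound to the default istio-ingressgateway plus a VirtualService that routes /productpage (and a few static, login, and API paths) to the productpage service. Abridged, it looks roughly like the sketch below; the exact ports and the full match list can differ between releases, so consult the file itself:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080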
[root@k8s-master istio-1.18.0]# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
# Make sure the configuration has no validation issues:
[root@k8s-master istio-1.18.0]# istioctl analyze
✔ No validation issues found when analyzing namespace: default.
[root@k8s-master istio-1.18.0]#
Run the command below to determine whether your Kubernetes environment supports an external load balancer:
[root@k8s-master ~]# kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.103.45.216 <pending> 15021:31564/TCP,80:30704/TCP,443:30854/TCP,31400:30301/TCP,15443:30563/TCP 20h
[root@k8s-master ~]#
If EXTERNAL-IP has a value, the environment has an external load balancer that can be used as the ingress gateway. If EXTERNAL-IP is <none> (or stays in the <pending> state), the environment does not provide an external load balancer usable for ingress traffic, and you can access the gateway through the Service's NodePort instead. Since this cluster has no external load balancer, I pick a NodePort here.
Set the ingress host and port:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
Set the GATEWAY_URL environment variable:
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "$GATEWAY_URL"
echo "http://$GATEWAY_URL/productpage"
Copy the address printed by the last command into a browser and confirm that the Bookinfo product page opens.
[root@k8s-master ~]# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
[root@k8s-master ~]# export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
[root@k8s-master ~]# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
[root@k8s-master ~]# echo "$GATEWAY_URL"
192.168.153.102:30704
[root@k8s-master ~]# echo "http://$GATEWAY_URL/productpage"
http://192.168.153.102:30704/productpage
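If no browser is available on a machine that can reach the node, the same check can be run with curl from the shell where GATEWAY_URL was set:
# Fetch the product page through the ingress gateway and check its title
curl -s "http://$GATEWAY_URL/productpage" | grep -o "<title>.*</title>"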
Istio integrates with several telemetry applications. Telemetry helps us understand the structure of the service mesh, display its topology, and analyze its health.
Follow the steps below to deploy the Kiali dashboard, along with Prometheus, Grafana, and Jaeger.
kubectl apply -f samples/addons
# Watch the status of the kiali deployment while it rolls out
kubectl rollout status deployment/kiali -n istio-system
To reach the Kiali web UI from outside the cluster, we also need to create a NodePort Service:
kubectl -n istio-system expose service kiali --type=NodePort --name=kiali-external
kubectl get svc -n istio-system
kubectl -n istio-system get service kiali-external -o=jsonpath='{.spec.ports[0].nodePort}'
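Alternatively, if a browser can run on (or be port-forwarded to) the machine where istioctl is installed, istioctl can forward the dashboard for you without creating an extra Service:
# Port-forward the Kiali service and open the dashboard locally
istioctl dashboard kiali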
[root@k8s-master ~]# cd istio-1.18.0
[root@k8s-master istio-1.18.0]# kubectl apply -f samples/addons
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/loki created
configmap/loki created
configmap/loki-runtime created
service/loki-memberlist created
service/loki-headless created
service/loki created
statefulset.apps/loki created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
[root@k8s-master istio-1.18.0]# kubectl rollout status deployment/kiali -n istio-system
deployment "kiali" successfully rolled out
[root@k8s-master istio-kiali]# kubectl -n istio-system expose service kiali --type=NodePort --name=kiali-external
service/kiali-external exposed
[root@k8s-master istio-kiali]# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 10.111.39.235 <none> 80/TCP,443/TCP 21h
istio-ingressgateway LoadBalancer 10.103.45.216 <pending> 15021:31564/TCP,80:30704/TCP,443:30854/TCP,31400:30301/TCP,15443:30563/TCP 21h
istiod ClusterIP 10.109.218.54 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 21h
kiali ClusterIP 10.105.43.99 <none> 20001/TCP,9090/TCP 21h
kiali-external NodePort 10.110.49.251 <none> 20001:31430/TCP,9090:30588/TCP 9s
[root@k8s-master istio-kiali]# kubectl -n istio-system get service kiali-external -o=jsonpath='{.spec.ports[0].nodePort}'
31430[root@k8s-master istio-kiali]#
Kiali can then be reached via a node IP plus the NodePort: http://192.168.153.102:31430/
To see trace data you must send requests to the services. The number of requests needed depends on Istio's sampling rate, which is configured at install time and defaults to 1%, so at least 100 requests are needed before the first trace becomes visible. Send 100 requests to the productpage service with the following command:
for i in `seq 1 100`; do curl -s -o /dev/null http://$GATEWAY_URL/productpage; done
The Kiali dashboard shows an overview of the mesh and the relationships between the services of the Bookinfo sample application, and provides filters for visualizing the flow of traffic.
One problem I ran into: when deploying the demo with kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml, the Pods never managed to start.
[root@k8s-master istio-1.18.0]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-v1-7c7dbcb4b5-lw8hr 0/2 Init:CrashLoopBackOff 5 (82s ago) 4m13s 172.16.85.212 k8s-node01 <none> <none>
productpage-v1-664d44d68d-lgc4k 0/2 Init:CrashLoopBackOff 5 (69s ago) 4m12s 172.16.58.203 k8s-node02 <none> <none>
ratings-v1-844796bf85-7s4zp 0/2 Init:CrashLoopBackOff 5 (87s ago) 4m13s 172.16.85.213 k8s-node01 <none> <none>
reviews-v1-5cf854487-ztl9l 0/2 Init:CrashLoopBackOff 5 (73s ago) 4m13s 172.16.58.202 k8s-node02 <none> <none>
reviews-v2-955b74755-tm6cj 0/2 Init:CrashLoopBackOff 5 (74s ago) 4m13s 172.16.85.214 k8s-node01 <none> <none>
reviews-v3-797fc48bc9-s29zm 0/2 Init:CrashLoopBackOff 5 (78s ago) 4m13s 172.16.85.215 k8s-node01 <none> <none>
At first I assumed the images simply could not be pulled, so I pulled them manually on each node:
crictl pull docker.io/istio/examples-bookinfo-details-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-productpage-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-ratings-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-reviews-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-reviews-v2:1.17.0
crictl pull docker.io/istio/examples-bookinfo-reviews-v3:1.17.0
crictl pull docker.io/istio/proxyv2:1.18.0
[root@k8s-node02 istio-1.18.0]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/calico/cni v3.25.0 0bb8d6f033a05 81.1MB
docker.io/calico/kube-controllers v3.25.0 2a83e28de3677 27.1MB
docker.io/calico/node v3.25.0 8a2dff14388de 82.2MB
docker.io/istio/examples-bookinfo-details-v1 1.17.0 8c7b34204cae9 59.8MB
docker.io/istio/examples-bookinfo-productpage-v1 1.17.0 348980125f0b0 64.7MB
docker.io/istio/examples-bookinfo-ratings-v1 1.17.0 18290de2e4a28 54.2MB
docker.io/istio/examples-bookinfo-reviews-v1 1.17.0 9dc1566776c17 412MB
docker.io/istio/examples-bookinfo-reviews-v2 1.17.0 5233615dc9972 412MB
docker.io/istio/examples-bookinfo-reviews-v3 1.17.0 fbb7b7ceabf34 412MB
docker.io/istio/proxyv2 1.18.0 c901fe029266e 90.4MB
docker.io/kubernetesui/dashboard v2.3.1 5bb89698273d8 65.4MB
registry.aliyuncs.com/google_containers/pause 3.8 4e42fb3c9d90e 268kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns v1.10.1 97e04611ad434 14.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.27.3 fb73e92641fd5 21.4MB
[root@k8s-node02 istio-1.18.0]#
With the images present on all three nodes, the containers still failed to start.
Looking at the Pod logs, I then found this error: error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat'
The fix that finally worked was to make sure the required netfilter/iptables kernel modules are loaded (and loaded again on boot):
cat <<EOT >> /etc/modules-load.d/k8s.conf
overlay
br_netfilter
nf_nat
xt_REDIRECT
xt_owner
iptable_nat
iptable_mangle
iptable_filter
EOT
modprobe br_netfilter ; modprobe nf_nat ; modprobe xt_REDIRECT ; modprobe xt_owner; modprobe iptable_nat; modprobe iptable_mangle; modprobe iptable_filter
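A quick lsmod check confirms the modules are actually loaded (the modules-load.d file above takes care of reloading them after a reboot):
# Confirm the netfilter/iptables modules needed by the sidecar's init rules are present
lsmod | grep -E 'br_netfilter|nf_nat|xt_REDIRECT|iptable_nat'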
https://stackoverflow.com/questions/73473680/service-deployed-with-istio-doesnt-start-minikube-docker-mac-m1
https://github.com/istio/istio/issues/36762
With the modules loaded, the containers came up successfully:
[root@k8s-master istio-1.18.0]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-v1-7c7dbcb4b5-jx866 2/2 Running 0 12m 172.16.85.220 k8s-node01 <none> <none>
productpage-v1-664d44d68d-v722l 2/2 Running 0 12m 172.16.58.207 k8s-node02 <none> <none>
ratings-v1-844796bf85-kktgq 2/2 Running 0 12m 172.16.85.221 k8s-node01 <none> <none>
reviews-v1-5cf854487-gn6xv 2/2 Running 0 12m 172.16.58.206 k8s-node02 <none> <none>
reviews-v2-955b74755-rp9b5 2/2 Running 0 12m 172.16.85.222 k8s-node01 <none> <none>
reviews-v3-797fc48bc9-wspwt 2/2 Running 0 12m 172.16.85.223 k8s-node01 <none> <none>
[root@k8s-master istio-1.18.0]#