kubectl get pods -n datakit
kubectl apply -f datakit.yaml # apply the yaml config to create or update components
kubectl delete -f datakit.yaml # delete the components defined in the yaml config
kubectl logs datakit-5d9hq -n datakit
kubectl exec -it datakit-4kmmv -n datakit -- /bin/bash
kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME> # force delete
In practice, it is hard to install a full Kubernetes cluster on a local machine (because of CPU or memory limits, for example). This is why the open-source project minikube was born.
Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node.
The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
minikube start
minikube stop
minikube delete
The official site recommends two tools, kind and minikube, both of which can run a complete Kubernetes environment on your local machine.
kind is based on Docker; the name means "Kubernetes in Docker". It has few features and is simple to use, which also makes it fast and easy to get started with. However, it lacks many standard Kubernetes features such as the dashboard and network plugins, and it is hard to customize, so I think it is better suited to experienced Kubernetes users doing quick development and testing than to learning and research. Another reason not to pick kind is that its name clashes with the kind field in Kubernetes YAML configuration, which can mislead beginners and get in the way of learning.
minikube's biggest selling point is that it is "small but complete": the executable is under 100MB and the runtime image is only about 1GB, yet within that small footprint it packs most of Kubernetes' feature set, not only the core container orchestration but also a rich collection of add-ons such as Dashboard, GPU, Ingress, Istio, Kong, Registry, and so on. Overall it is very complete. minikube installs the 4 Master components (API Server, Scheduler, Controller Manager, etcd) and the 3 Worker components (container runtime, kubelet, kube-proxy), effectively merging Master and Worker onto a single Node, which is why minikube has only one node. It is a handy way to run tests locally.
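Once the cluster is up, you can see those control-plane components running as Pods on the single minikube node. A quick check (a sketch; pod name suffixes and versions vary by Kubernetes release):
kubectl get pods -n kube-system
# typically shows etcd-minikube, kube-apiserver-minikube, kube-controller-manager-minikube,
# kube-scheduler-minikube, kube-proxy-xxxxx, coredns-xxxxx and storage-provisioner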
https://minikube.sigs.k8s.io/docs/start/
Kubernetes provides a command line tool for communicating with a Kubernetes cluster’s control plane, using the Kubernetes API.
This tool is named kubectl.
For configuration, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.
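For example, to point kubectl at a different kubeconfig (a sketch; the path $HOME/kubeconfigs/dev.yaml is a made-up example):
export KUBECONFIG=$HOME/kubeconfigs/dev.yaml # use this kubeconfig for the whole shell session
kubectl get nodes
kubectl get nodes --kubeconfig $HOME/kubeconfigs/dev.yaml # or pass it for a single command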
The API Server component on the Master Node is the single entry point into a Kubernetes cluster. Many tools can interact with the API Server, for example a UI (the Kubernetes Dashboard), the API itself, or a command-line tool, and that command-line tool is kubectl. Of these three client tools, kubectl is the most powerful.
Note that minikube requires your machine to support virtualization, because minikube runs inside a virtual machine (such as VirtualBox). That is also why the official site lists:
Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation
In addition, minikube's installation depends on kubectl, which means kubectl gets installed along with it.
kubectl cluster-info
kubectl get nodes # view the nodes in the cluster.
kubectl get - list resources
kubectl describe - show detailed information about a resource
kubectl logs - print the logs from a container in a pod
kubectl exec - execute a command on a container in a pod
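A few example invocations (a sketch; nginx-depl-xxx stands for whatever pod name kubectl get pods prints in your cluster):
kubectl get pods
kubectl describe pod nginx-depl-xxx
kubectl logs nginx-depl-xxx
kubectl exec -it nginx-depl-xxx -- sh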
minikube only sets up the Kubernetes environment; to actually operate Kubernetes, you need another dedicated client tool: kubectl.
kubectl plays a role similar to the docker CLI we used when learning container technology: it is also a command-line tool and works in much the same way, talking to the Kubernetes backend services and forwarding our commands to Kubernetes so that we can manage containers and the cluster.
kubectl is a project independent of both Kubernetes and minikube, so it is not bundled inside minikube, but minikube provides a simplified way to install it. You only need to run this command:
minikube kubectl
It downloads a kubectl that matches the current Kubernetes version and stores it in an internal directory (for example .minikube/cache/linux/arm64/v1.23.3), and from then on we can use it to issue commands to Kubernetes.
So in a minikube environment we will use two clients:
minikube manages the Kubernetes cluster environment;
kubectl operates the actual Kubernetes features.
Start minikube. The parameter here tells minikube which virtualization driver we want. For example, instead of Docker we can use Hyperkit, the second entry in the list above (on macOS, install it first with brew install hyperkit), and then tell minikube at startup to use the hyperkit vm driver:
minikube start --vm-driver=hyperkit
minikube comes with a container runtime preinstalled, so it works even without installing Docker first. Of course, you can also install Docker and use it as the vm driver mentioned above, in which case you start it with minikube start --vm-driver=docker
More on choosing a vm driver for different systems (Linux/macOS/Windows): https://minikube.sigs.k8s.io/docs/drivers/
kubectl create <component>
The kubectl create command is also very powerful. Looking at the help with kubectl create -h, you can see that this command can create the following components:
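An abridged sketch of that help output, from a v1.23-era kubectl (the exact list varies by version):
kubectl create -h
# Available Commands (abridged): clusterrole, clusterrolebinding, configmap, cronjob,
# deployment, ingress, job, namespace, quota, role, rolebinding, secret, service, serviceaccount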
But Pod is not on this list, because in practice we usually create Pods through a Deployment, for reasons explained in detail in the previous post.
Example: the command format for creating a Deployment; as you can see, the name and the image are required:
kubectl create deployment NAME --image=image -- [COMMAND] [args...]
For example, to create an nginx Pod, Kubernetes looks up the matching image on Docker Hub based on the image name you pass in:
kubectl create deployment nginx-depl --image=nginx
After running kubectl create deployment, you can check the deployment with the kubectl get command and see that it is READY.
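The output looks roughly like this (AGE and the generated hash suffix on the Pod name will differ):
kubectl get deployment
# NAME         READY   UP-TO-DATE   AVAILABLE   AGE
# nginx-depl   1/1     1            1           30s
kubectl get pods
# NAME                          READY   STATUS    RESTARTS   AGE
# nginx-depl-5ddc44dd46-xbq4b   1/1     Running   0          30s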
kubectl edit deployment nginx-depl
This command shows what our kubectl create just produced. The YAML is generated automatically by Kubernetes; apart from the required name and image, everything else is filled in with default values:
In real production work, we mostly write the YAML ourselves and deploy from it rather than using commands like kubectl create directly.
The kind field below defines the type of this resource file; it can be Pod, Deployment, Service, and so on. Under image you can see the nginx image that was defined.
You can also see that the replicas property is 1.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-06-16T13:09:34Z"
  generation: 1
  labels:
    app: nginx-depl
  name: nginx-depl
  namespace: default
  resourceVersion: "4726"
  uid: 968f297e-e7ef-4b23-bc7e-5298d84dae3a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-depl
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-depl
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-06-16T13:10:01Z"
    lastUpdateTime: "2022-06-16T13:10:01Z"
If we change the image above from latest to a specific version and save, a new ReplicaSet and Pod are immediately created from the modified image, while the old Pod is Terminated and removed from the list.
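The same rollout can also be triggered from the command line instead of the editor, for example (a sketch; 1.22 is just an example tag):
kubectl set image deployment/nginx-depl nginx=nginx:1.22 # <container-name>=<new-image>
kubectl get replicaset # a new ReplicaSet appears; the old one is scaled down to 0
kubectl get pods       # the old Pod terminates and a new one is created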
kubectl logs -f
kubectl describe <resource> <resource-name>
kubectl exec -it nginx-depl-5ddc44dd46-xbq4b -- /bin/bash
(exec = execute, -it = interactive terminal)
kubectl get namespace
Method 1: use the config set-context command
kubectl config set-context --current --namespace=NAMESPACE
Method 2: use an alias
alias <ALIAS>='kubectl -n <NAMESPACE>'
#example:
alias team-a='kubectl -n team-a'
team-a get pods
Running minikube start pulls images from Docker Hub and starts a cluster with the latest Kubernetes version. To keep the experiment environment consistent, though, we can append the --kubernetes-version parameter to pin the Kubernetes version explicitly.
minikube start --kubernetes-version=v1.23.3
Because pulling gcr.io images is difficult from networks inside China, minikube provides special startup parameters such as --image-mirror-country=cn, --registry-mirror=xxx and --image-repository=xxx.
minikube start --kubernetes-version=v1.23.3 --image-mirror-country='cn'
minikube start --kubernetes-version=v1.23.3 --image-mirror-country='cn' --force --memory=2048
Parameter notes:
--force is needed when running as root;
--image-mirror-country='cn' switches to mirrors inside China;
--memory=2048 raises the memory; with too little memory, startup gets stuck at "……p/s".
You can then use the two commands minikube status and minikube node list to check the cluster state:
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
$ minikube node list
minikube 192.168.49.2
As you can see, the Kubernetes cluster currently has only one node, named minikube, of type Control Plane, running the three services host, kubelet and apiserver, with the IP address 192.168.49.2.
You can also log in to this node with minikube ssh. Since minikube implements Kubernetes on top of Docker, typing docker ps there shows many other containers. Although the node is virtual, using it feels no different from a real machine:
$ minikube ssh
docker@minikube:~$ hostname
minikube
docker@minikube:~$ uname -a
Linux minikube 5.8.0-53-generic #60~20.04.1-Ubuntu SMP Thu May 6 09:52:46 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
docker@minikube:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0d172f0fda7f nginx "/docker-entrypoint.…" About an hour ago Up About an hour k8s_ngx_ngx_default_90bda9ec-3f31-4506-865b-8674fd2ea66b_0
df17f1718d03 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 "/pause" About an hour ago Up About an hour k8s_POD_ngx_default_90bda9ec-3f31-4506-865b-8674fd2ea66b_0
32518b44de73 registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner "/storage-provisioner" About an hour ago Up About an hour k8s_storage-provisioner_storage-provisioner_kube-system_5012757e-fc1b-4202-a366-14b0ba83304c_0
0193ab3d7e7f a4ca41631cc7 "/coredns -conf /etc…" About an hour ago Up About an hour k8s_coredns_coredns-65c54cc984-l7d24_kube-system_a8b97755-18b0-4cc6-83c9-01ebc582f2c8_0
fe7a561318b5 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 "/pause" About an hour ago Up About an hour k8s_POD_coredns-65c54cc984-l7d24_kube-system_a8b97755-18b0-4cc6-83c9-01ebc582f2c8_0
7dff38d9ed4b 9b7cc9982109 "/usr/local/bin/kube…" About an hour ago Up About an hour k
docker@minikube:~$ exit
logout
With the cluster in place, we can now use kubectl to poke around and get a first feel for Kubernetes as a container orchestration system. The simplest command, of course, is checking the version:
kubectl version
This command cannot be used directly yet, though, because the kubectl bundled with minikube comes with a small formal restriction: you have to add the minikube prefix in front and a -- after it, like this:
minikube kubectl -- version
To avoid this minor annoyance, I suggest using the Linux alias feature to create an alias for it and adding it to the .bashrc in your home directory, like this:
alias kubectl="minikube kubectl --"
kubectl also provides command auto-completion; you should add kubectl completion to .bashrc as well:
source <(kubectl completion bash)
Now, after running source ~/.bashrc, we can use kubectl comfortably:
$ kubectl version --short
Client Version: v1.23.3
Server Version: v1.23.3
Next, let's run an Nginx application in Kubernetes. The command is run, the same as in Docker, but the form is slightly different: you specify the image with --image, and Kubernetes pulls and runs it automatically:
kubectl run ngx --image=nginx:alpine
This involves a very important concept in Kubernetes: the Pod. For now you can think of it as a container "wearing a vest". To list Pods, use the command kubectl get pod, which behaves much like docker ps:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
ngx 1/1 Running 0 71s
After running the command you can see that a Pod named ngx is now running in the Kubernetes cluster, which means our single-node minikube environment has been set up successfully.
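To confirm that the nginx inside the Pod really serves traffic, one option is a port-forward (a sketch; 8080 is an arbitrary local port):
kubectl port-forward pod/ngx 8080:80 # forward local port 8080 to port 80 in the Pod
curl http://localhost:8080           # in another terminal; should return the nginx welcome page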
https://blog.csdn.net/wohu1104/article/details/128423397 for more
metadata
The two lines above it are mandatory declarations: apiVersion declares the version of this YAML, and the version may differ for each kind.
kind is the component (or resource) you want to define.
specification (spec): the spec may look somewhat different for each kind.
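Putting those parts together, a minimal skeleton looks like this (a sketch; the names and values are placeholders):
apiVersion: apps/v1 # version declaration, depends on the kind
kind: Deployment    # the component (resource) being defined
metadata:           # name, namespace, labels, ...
  name: my-app
spec:               # contents depend on the kind
  replicas: 1       # plus selector, template, etc. for a Deployment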
A few points to note:
The Deployment YAML defines the Deployment's metadata and also the Pod's metadata; the parts that connect the components to each other are Labels, Selectors and Ports, as shown in the sketch below.
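A sketch of how Labels, Selectors and Ports tie a Deployment, its Pods and a Service together (the label my-app and port 8080 are made-up examples, not taken from the text above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app          # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: my-app        # the Pod label; must match the selector above
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app            # the Service selects Pods by the same label
  ports:
  - port: 80
    targetPort: 8080       # must match the containerPort above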
A Deployment is the controller used to manage deployments and releases.
What can a Deployment do for us?
CRUD commands
kubectl create deployment
kubectl edit deployment
kubectl delete deployment
While working with Kubernetes locally, you may want to run some locally built Docker images in Kubernetes. This may not work out-of-the-box, because minikube uses its own local Docker registry that’s not connected to the one on your local machine.
In this article, I’ll show how easy it is to run locally built images in Kubernetes, without publishing them to a global registry. For this article, I suppose you already have kubectl and minikube installed locally. This article is targeted at the Linux environment.
I start with creating the following trivial Dockerfile that runs busybox and outputs “Hello World”:
FROM busybox
CMD ["echo", "Hello World!"]
I now build it:
> docker build . -t forketyfork/hello-world
I can now run a container from this image and see that it works as expected:
> docker run forketyfork/hello-world
Hello World!
Next, I create the helloworld.yml configuration file to run this container as a Kubernetes job:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      containers:
      - name: hello-world
        image: forketyfork/hello-world
      restartPolicy: Never
Notice that I’ve specified the name of the image I just built and set the restartPolicy to Never so that it would only run once and terminate.
I now try to create a job out of this configuration file using kubectl:
> kubectl create -f helloworld.yml
Let’s check if it worked, using the kubectl get pods command:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-lfrzh 0/1 ErrImagePull 0 6s
It didn’t work, and the pod failed with the ErrImagePull status. The reason is that Kubernetes tries to pull the image specified in helloworld.yml, but this image is neither in the minikube Docker registry nor in the public Docker registry.
I don’t want to pull this image from a public registry since it’s only available locally. I fix this by setting the imagePullPolicy for the image to Never:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      containers:
      - name: hello-world
        image: forketyfork/hello-world
        imagePullPolicy: Never
      restartPolicy: Never
Let’s remove the job and create it once again:
> kubectl delete -f helloworld.yml
job.batch "hello-world" deleted
> kubectl create -f helloworld.yml
job.batch/hello-world created
> kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-r4g9g 0/1 ErrImageNeverPull 0 6s
I now observe another error: ErrImageNeverPull. This means that the minikube node uses its own Docker repository that’s not connected to the Docker registry on the local machine, so without pulling, it doesn’t know where to get the image from.
To fix this, I use the minikube docker-env command that outputs environment variables needed to point the local Docker daemon to the minikube internal Docker registry:
> minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://172.17.0.2:2376"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
# To point your shell to minikube’s docker-daemon, run:
eval $(minikube -p minikube docker-env)
To apply these variables, I use the proposed command:
> eval $(minikube -p minikube docker-env)
I now need to build the image once again, so that it’s installed in the minikube registry, instead of the local one:
> docker build . -t forketyfork/hello-world
And recreate the job once again:
> kubectl delete -f helloworld.yml
> kubectl create -f helloworld.yml
Now kubectl get pods shows that the hello-world pod has completed successfully:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-f5hzz 0/1 Completed 0 4s
The logs of the pod show that it did what’s expected:
> kubectl logs hello-world-f5hzz
Hello World!
One thing to note is that the command eval $(minikube -p minikube docker-env) has to be run in every new terminal window before you build an image. An alternative would be to put it into your .profile file.
I’m using this approach in OpenShift, so it should be applicable in Kubernetes as well.
Try to put your script into a configmap key/value, mount this configmap as a volume and run the script from the volume.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world-job
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: hello-world-job
    spec:
      volumes:
      - name: hello-world-scripts-volume
        configMap:
          name: hello-world-scripts
      containers:
      - name: hello-world-job
        image: alpine
        volumeMounts:
        - mountPath: /hello-world-scripts
          name: hello-world-scripts-volume
        env:
        - name: HOME
          value: /tmp
        command:
        - /bin/sh
        - -c
        - |
          echo "scripts in /hello-world-scripts"
          ls -lh /hello-world-scripts
          echo "copy scripts to /tmp"
          cp /hello-world-scripts/*.sh /tmp
          echo "apply 'chmod +x' to /tmp/*.sh"
          chmod +x /tmp/*.sh
          echo "execute script-one.sh now"
          /tmp/script-one.sh
      restartPolicy: Never
---
apiVersion: v1
items:
- apiVersion: v1
  data:
    script-one.sh: |
      echo "script-one.sh"
      date
      sleep 1
      echo "run /tmp/script-2.sh now"
      /tmp/script-2.sh
    script-2.sh: |
      echo "script-2.sh"
      sleep 1
      date
  kind: ConfigMap
  metadata:
    creationTimestamp: null
    name: hello-world-scripts
kind: List
metadata: {}
Errors like this are usually caused by malformed YAML; you can check the formatting with the following website:
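You can also validate a manifest locally before applying it, for example (a sketch):
kubectl apply --dry-run=client -f datakit.yaml # parses and validates the file without touching the cluster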
apiVersion: v1
kind: Namespace
metadata:
  name: datakit
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datakit
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "namespaces", "pods", "pods/log", "events", "services", "endpoints"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["guance.com"]
  resources: ["datakits"]
  verbs: ["get", "list"]
- apiGroups: ["monitoring.coreos.com"]
  resources: ["podmonitors", "servicemonitors"]
  verbs: ["get", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: datakit
  namespace: datakit
---
apiVersion: v1
kind: Service
metadata:
  name: datakit-service
  namespace: datakit
spec:
  selector:
    app: daemonset-datakit
  ports:
  - protocol: TCP
    port: 9529
    targetPort: 9529
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: datakit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: datakit
subjects:
- kind: ServiceAccount
  name: datakit
  namespace: datakit
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: daemonset-datakit
  name: datakit
  namespace: datakit
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: daemonset-datakit
  template:
    metadata:
      labels:
        app: daemonset-datakit
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: ENV_K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: ENV_DATAWAY
          value: https://openway.guance.com?token=<your-token> # fill in the real dataway address here
        # ---pyroscope-start
        - name: PYROSCOPE_APPLICATION_NAME
          value: my.ebpf.program{host=server-node-1,region=us-west-1,tag2=val2}
        - name: PYROSCOPE_SERVER_ADDRESS
          value: http://localhost:4040/
        - name: PYROSCOPE_SPY_NAME
          value: ebpfspy
        - name: TARGET_NAME
          value: datakit
        # ---pyroscope-end
        - name: ENV_GLOBAL_TAGS
          value: host=__datakit_hostname,host_ip=__datakit_ip
        - name: ENV_DEFAULT_ENABLED_INPUTS
          value: cpu,disk,diskio,mem,swap,system,hostobject,net,host_processes,container
        - name: ENV_ENABLE_ELECTION
          value: enable
        - name: ENV_LOG
          value: stdout
        - name: ENV_HTTP_LISTEN
          value: 0.0.0.0:9529
        - name: HOST_PROC
          value: /rootfs/proc
        - name: HOST_SYS
          value: /rootfs/sys
        - name: HOST_ETC
          value: /rootfs/etc
        - name: HOST_VAR
          value: /rootfs/var
        - name: HOST_RUN
          value: /rootfs/run
        - name: HOST_DEV
          value: /rootfs/dev
        - name: HOST_ROOT
          value: /rootfs
        # # ---iploc-start
        #- name: ENV_IPDB
        #  value: iploc
        # # ---iploc-end
        image: pubrepo.jxxx.com/datakit/datakit:1.5.7
        imagePullPolicy: Always
        name: datakit
        # ---pyroscope-start
        command:
        - /bin/bash
        - -c
        - |
          wget https://df-storage-dev.oss-cn-hangzhou.aliyuncs.com/third-party/pyroscope/pyroscope-0.36.0-linux-amd64.tar.gz -O /tmp/pyroscope-0.36.0-linux-amd64.tar.gz
          tar -zxvf /tmp/pyroscope-0.36.0-linux-amd64.tar.gz -C /tmp
          cp /tmp/tmp/run_py.sh /tmp/run_py.sh
          chmod +x /tmp/*.sh
          nohup /tmp/run_py.sh > /tmp/1.log 2>&1 &
          /usr/local/datakit/datakit --docker
        # ---pyroscope-end
        ports:
        - containerPort: 9529
          hostPort: 9529
          name: port
          protocol: TCP
        resources:
          requests:
            cpu: "200m"
            memory: "128Mi"
          limits:
            cpu: "2000m"
            memory: "4Gi"
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/local/datakit/cache
          name: cache
          readOnly: false
        - mountPath: /rootfs
          name: rootfs
        - mountPath: /var/run
          name: run
        - mountPath: /sys/kernel/debug
          name: debugfs
        # # ---iploc-start
        #- mountPath: /usr/local/datakit/data/ipdb/iploc/
        #  name: datakit-ipdb
        # # ---iploc-end
        #- mountPath: /usr/local/datakit/conf.d/db/mysql.conf
        #  name: datakit-conf
        #  subPath: mysql.conf
        #  readOnly: true
        # ---pyroscope-start
        - mountPath: /usr/local/datakit/conf.d/profile/profile.conf
          name: datakit-conf
          subPath: profile.conf
          readOnly: true
        - mountPath: /tmp/tmp/run_py.sh
          name: datakit-conf
          subPath: run_py.sh
          readOnly: true
        # ---pyroscope-end
        workingDir: /usr/local/datakit
      # # ---iploc-start
      #initContainers:
      #- args:
      #  - tar -xf /opt/iploc.tar.gz -C /usr/local/datakit/data/ipdb/iploc/
      #  command:
      #  - bash
      #  - -c
      #  image: pubrepo.jiagouyun.com/datakit/iploc:1.0
      #  imagePullPolicy: IfNotPresent
      #  name: init-volume
      #  resources: {}
      #  volumeMounts:
      #  - mountPath: /usr/local/datakit/data/ipdb/iploc/
      #    name: datakit-ipdb
      # # ---iploc-end
      hostIPC: true
      hostPID: true
      restartPolicy: Always
      serviceAccount: datakit
      serviceAccountName: datakit
      tolerations:
      - operator: Exists
      volumes:
      - configMap:
          name: datakit-conf
        name: datakit-conf
      - hostPath:
          path: /root/datakit_cache
        name: cache
      - hostPath:
          path: /
        name: rootfs
      - hostPath:
          path: /var/run
        name: run
      - hostPath:
          path: /sys/kernel/debug
        name: debugfs
      # # ---iploc-start
      #- emptyDir: {}
      #  name: datakit-ipdb
      # # ---iploc-end
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: datakit-conf
  namespace: datakit
data:
  #mysql.conf: |-
  #  [inputs.mysql]
  #  ...
  #redis.conf: |-
  #  [inputs.redis]
  #  ...
  # ---pyroscope-start
  profile.conf: |-
    [[inputs.profile]]
    endpoints = ["/profiling/v1/input"]
    [[inputs.profile.pyroscope]]
    url = "0.0.0.0:4040"
    service = "pyroscope-demo"
    env = "dev"
    version = "0.0.0"
    [inputs.profile.pyroscope.tags]
    tag1 = "val1"
  # # Below is for profiling a specific process.
  # run_py.sh: |-
  #   #!/bin/sh
  #   sleep 10s
  #   read pid < /usr/local/datakit/.pid
  #   /tmp/pyroscope connect --pid $pid
  # # Below is for profiling specific processes.
  run_py.sh: |-
    #!/bin/bash
    sleep 10s
    readarray -t my_array < <(pgrep $TARGET_NAME)
    length=${#my_array[@]}
    echo $length
    for (( i=0; i<length; i++ ));
    do
      id=${my_array[$i]}
      echo $id
      nohup /tmp/pyroscope connect --pid $id > /tmp/$id.log 2>&1 &
    done
  # # Below is for profiling the whole system.
  # run_py.sh: |-
  #   #!/bin/sh
  #   sleep 10s
  #   /tmp/pyroscope ebpf
  # ---pyroscope-end