Access goes through the unified entry point of a Service, and a Controller then creates the Pods that make up the deployment.
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# In production, do not disable the firewall; instead open only the required ports, for example:
## Master node
### firewall-cmd --zone=public --permanent --add-rich-rule='rule protocol value="vrrp" accept'
### firewall-cmd --permanent --add-port=6443/tcp --add-port=16443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
### firewall-cmd --reload
## Worker nodes
### firewall-cmd --permanent --add-port=10251-10252/tcp --add-port=30000-32767/tcp
### firewall-cmd --reload
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings
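On some systems the br_netfilter kernel module is not loaded by default, in which case the two bridge settings above silently have no effect. A minimal pre-check, assuming a stock CentOS 7 setup:
# load the bridge netfilter module now and on boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# verify the setting actually took effect (should print "= 1")
sysctl net.bridge.bridge-nf-call-iptables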
# Synchronize time
yum install ntpdate -y
ntpdate time.windows.com
Node name | Role | NAT | Host-Only |
---|---|---|---|
cloud-mn01 | master | 10.0.2.101 | 192.168.1.101 |
cloud-dn01 | node | 10.0.2.201 | 192.168.1.201 |
cloud-dn02 | node | 10.0.2.202 | 192.168.1.202 |
# Add hosts entries
cat >> /etc/hosts << EOF
192.168.1.101 cloud-mn01
192.168.1.201 cloud-dn01
192.168.1.202 cloud-dn02
EOF
# Set the hostname (use the matching name on each node)
hostnamectl set-hostname cloud-mn01
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker.service
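kubeadm expects the kubelet and the container runtime to use the same cgroup driver. The defaults used here (cgroupfs on both sides) work, but if you prefer the commonly recommended systemd driver, a hedged variant of daemon.json looks like this (the kubelet's cgroup driver must then be set to systemd as well):
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker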
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
kubeadm init \
--apiserver-advertise-address=192.168.1.101 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.101:6443 --token 76yhkd.edbqca3hk81hyjgs \
--discovery-token-ca-cert-hash sha256:5717e5ded7a5984a6bb2731e1e4235c646b0f25597d9f8ff62c6b291709e6faf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm join 192.168.1.101:6443 --token 76yhkd.edbqca3hk81hyjgs \
--discovery-token-ca-cert-hash sha256:5717e5ded7a5984a6bb2731e1e4235c646b0f25597d9f8ff62c6b291709e6faf
# The default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created, as follows:
# kubeadm token create --print-join-command
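If the token is still known but the CA hash has been lost, the hash can be recomputed from the cluster CA on the master; this command follows the kubeadm documentation:
# derive the sha256 hash used by --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'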
# Deploy the network add-on (Flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Check the status of the system Pods
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-grwjc 1/1 Running 1 28m
coredns-7ff77c879f-zpjdj 1/1 Running 1 28m
etcd-cloud-mn01 1/1 Running 1 28m
kube-apiserver-cloud-mn01 1/1 Running 1 28m
kube-controller-manager-cloud-mn01 1/1 Running 1 28m
kube-flannel-ds-2zwl6 1/1 Running 2 24m
kube-flannel-ds-59gzc 1/1 Running 1 24m
kube-flannel-ds-6f9mk 1/1 Running 1 24m
kube-proxy-2gz4j 1/1 Running 1 27m
kube-proxy-pscqj 1/1 Running 1 28m
kube-proxy-scbqz 1/1 Running 1 28m
kube-scheduler-cloud-mn01 1/1 Running 1 28m
# Check node status
kubectl get nodes
NAME STATUS ROLES AGE VERSION
cloud-dn01 Ready <none> 17m v1.18.0
cloud-dn02 Ready <none> 17m v1.18.0
cloud-mn01 Ready master 18m v1.18.0
# Create a Pod
kubectl create deployment nginx --image=nginx
# Expose the Pod's port
kubectl expose deployment nginx --port=80 --type=NodePort
# Look up the assigned NodePort (30531 here)
kubectl get pod,svc
[root@cloud-mn01 ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-f89759699-g2qd2 1/1 Running 0 14m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20m
service/nginx NodePort 10.109.97.101 <none> 80:30531/TCP 13m
[root@cloud-mn01 ~]#
In a browser, use a node IP plus the NodePort, i.e. 192.168.1.201:30531 or 192.168.1.202:30531.
kubectl [command] [TYPE] [NAME] [flags]
- command: the operation to perform on the resource, e.g. create, get, describe, delete.
- TYPE: the resource type. Resource types are case-sensitive and may be written in singular, plural, or abbreviated form, e.g. kubectl get pod [pod-name], kubectl get pods [pod-name], and kubectl get po [pod-name].
- NAME: the resource name, also case-sensitive. If the name is omitted, all resources of that type are listed.
- flags: optional flags, e.g. -s or --server to specify the address and port of the Kubernetes API server.
YAML is a data-centric, highly readable markup language for serializing data.
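Before moving on to YAML manifests, a few concrete invocations of the kubectl syntax above (the resource names are only examples):
kubectl get pods                              # list all Pods in the current namespace
kubectl describe pod nginx-f89759699-g2qd2    # details of a single Pod
kubectl delete deployment nginx               # delete a Deployment by name
kubectl get nodes -o wide                     # optional flags change the output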
# [root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx \
# > -o yaml --dry-run=client
## -o yaml : output a YAML resource manifest
## --dry-run=client : do not actually perform the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: webapp
name: webapp
spec:
replicas: 1
selector:
matchLabels:
app: webapp
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: webapp
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
[root@cloud-mn01 ~]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 2d3h
[root@cloud-mn01 ~]# kubectl get deploy nginx -o=yaml
Field | Description |
---|---|
apiVersion | API version |
kind | Resource type |
metadata | Resource metadata |
spec | Resource specification |
replicas | Number of replicas |
selector | Label selector |
template | Pod template |
metadata | Pod metadata |
spec | Pod specification |
containers | Container configuration |
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx:1.14
imagePullPolicy: IfNotPresent
apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: db
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "password"
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
apiVersion: v1
kind: Pod
metadata:
name: dns-test
spec:
containers:
- name: busybox
image: busybox:1.28.4
args:
- /bin/sh
- -c
- sleep 3600
restartPolicy: Never
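Once the dns-test Pod above is Running, it can be used to verify in-cluster DNS; a hedged usage example, assuming the manifest was saved as dns-test.yaml:
kubectl apply -f dns-test.yaml
kubectl exec -it dns-test -- nslookup kubernetes.default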
livenessProbe (liveness check): if the check fails, the container is killed and handled according to the Pod's restartPolicy;
readinessProbe (readiness check): if the check fails, Kubernetes removes the Pod from the Service endpoints.
apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: liveness
image: busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
Probes support the following three check methods:
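The three methods are exec (run a command inside the container, as above), httpGet (probe an HTTP endpoint), and tcpSocket (try to open a TCP connection). A minimal sketch of the other two, assuming the container serves HTTP on port 80:
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
readinessProbe:
  tcpSocket:
    port: 80
  periodSeconds: 10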
resources:
requests:
memory: "64Mi"
cpu: "250m"
spec:
nodeSelector:
env_role: dev
containers:
- name: nginx
image: nginx:1.15
[root@cloud-mn01 ~]# kubectl label node cloud-dn01 env_role=dev
node/cloud-dn01 labeled
[root@cloud-mn01 ~]# kubectl get node cloud-dn01 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
cloud-dn01 Ready <none> 8d v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env_role=dev,kubernetes.io/arch=amd64,kubernetes.io/hostname=cloud-dn01,kubernetes.io/os=linux
[root@cloud-mn01 ~]#
Similar to nodeSelector above, node affinity uses label constraints on nodes to decide which nodes a Pod may be scheduled onto.
apiVersion: v1
kind: Pod
metadata:
name: with-node-affinity
spec:
affinity:
nodeAffinity:
# hard requirement: must be satisfied
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: env_role
# common operators: In NotIn Exists Gt Lt DoesNotExist
operator: In
values:
- dev
- test
# soft preference, not mandatory
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: group
operator: In
values:
- highCompute
nodeSelector and nodeAffinity are Pod-level properties; a taint is a node-level property.
# (none) : no taint is set
# NoSchedule : Pods will never be scheduled onto the node
# PreferNoSchedule : the scheduler tries to avoid the node
# NoExecute : new Pods are not scheduled, and Pods already on the node are evicted to other nodes
[root@cloud-mn01 ~]# kubectl describe node cloud-mn01 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl taint node cloud-dn01 node-health=red:NoSchedule
node/cloud-dn01 tainted
[root@cloud-mn01 ~]# kubectl describe node cloud-dn01 | grep Taint
Taints: node-health=red:NoSchedule
[root@cloud-mn01 ~]# kubectl create deployment webapp --image nginx
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl scale deployment webapp --replicas=3
deployment.apps/webapp scaled
# All Pods were scheduled onto cloud-dn02 (cloud-dn01 is tainted)
[root@cloud-mn01 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
webapp-59d9889648-8pd5k 0/1 ContainerCreating 0 29s <none> cloud-dn02 <none> <none>
webapp-59d9889648-bjtrn 0/1 ContainerCreating 0 49s <none> cloud-dn02 <none> <none>
webapp-59d9889648-qk662 0/1 ContainerCreating 0 29s <none> cloud-dn02 <none> <none>
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl taint node cloud-dn01 node-health=red:NoSchedule-
node/cloud-dn01 untainted
[root@cloud-mn01 ~]# kubectl describe node cloud-dn01 | grep Taint
Taints: <none>
[root@cloud-mn01 ~]# kubectl delete deployment webapp
deployment.apps "webapp" deleted
[root@cloud-mn01 ~]# kubectl create deployment webapp --image nginx
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl scale deployment webapp --replicas=3
deployment.apps/webapp scaled
# Pods are now spread evenly across the nodes
[root@cloud-mn01 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
webapp-59d9889648-25v4c 0/1 ContainerCreating 0 6s <none> cloud-dn02 <none> <none>
webapp-59d9889648-6mkgv 0/1 ContainerCreating 0 12s <none> cloud-dn02 <none> <none>
webapp-59d9889648-sv5gd 0/1 ContainerCreating 0 6s <none> cloud-dn01 <none> <none>
[root@cloud-mn01 ~]#
The following means the Pod tolerates nodes carrying the node-health=red:PreferNoSchedule taint:
spec:
tolerations:
- key: "node-health"
operator: "Equal"
value: "red"
effect: "PreferNoSchedule"
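A related, hedged sketch: if a Pod really must be allowed onto the control-plane node (usually avoided), it can tolerate the master taint shown earlier:
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"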
# Pod
labels:
app: nginx
# Controller
selector:
matchLabels:
app: nginx
Deployments are for stateless applications: they manage Pods and ReplicaSets and support deployment and rolling upgrades.
[root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx --dry-run=client -o yaml > webapp.yaml
[root@cloud-mn01 ~]# kubectl apply -f webapp.yaml
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
webapp-59d9889648-8lw5z 0/1 ContainerCreating 0 5s
[root@cloud-mn01 ~]#
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: webapp
name: webapp
spec:
replicas: 1
selector:
matchLabels:
app: webapp
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: webapp
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
[root@cloud-mn01 ~]# kubectl expose deployment webapp --port=80 --type=NodePort --dry-run=client -o yaml > webapp-svc.yaml
[root@cloud-mn01 ~]# kubectl apply -f webapp-svc.yaml
service/webapp created
[root@cloud-mn01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
webapp NodePort 10.96.140.221 <none> 80:30935/TCP 2s
[root@cloud-mn01 ~]#
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: webapp
name: webapp
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webapp
type: NodePort
status:
loadBalancer: {}
## Pin the container image version
[root@cloud-mn01 ~]# grep image webapp.yaml
- image: nginx:1.14
[root@cloud-mn01 ~]# kubectl apply -f webapp.yaml
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl describe pod webapp | grep Image
Image: nginx:1.14
Image ID: docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
[root@cloud-mn01 ~]#
## Upgrade
[root@cloud-mn01 ~]# kubectl set image deployment webapp nginx=nginx:1.15
deployment.apps/webapp image updated
[root@cloud-mn01 ~]# kubectl describe pod webapp | grep Image
Image: nginx:1.15
Image ID: docker-pullable://nginx@sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
[root@cloud-mn01 ~]#
## Check rollout status
[root@cloud-mn01 ~]# kubectl rollout status deployment webapp
deployment "webapp" successfully rolled out
## Check rollout history
[root@cloud-mn01 ~]# kubectl rollout history deployment webapp
deployment.apps/webapp
REVISION CHANGE-CAUSE
1 <none>
2 <none>
## Roll back to the previous revision
[root@cloud-mn01 ~]# kubectl rollout undo deployment webapp
deployment.apps/webapp rolled back
## Roll back to a specific revision
[root@cloud-mn01 ~]# kubectl rollout undo deployment webapp --to-revision=2
deployment.apps/webapp rolled back
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
webapp-74d879b68f-ltcrv 1/1 Running 0 107s
[root@cloud-mn01 ~]# kubectl scale deployment webapp --replicas=5
deployment.apps/webapp scaled
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
webapp-74d879b68f-4pvm7 0/1 ContainerCreating 0 2s
webapp-74d879b68f-727zj 1/1 Running 0 2s
webapp-74d879b68f-brnrh 0/1 ContainerCreating 0 2s
webapp-74d879b68f-lf8ch 0/1 ContainerCreating 0 2s
webapp-74d879b68f-ltcrv 1/1 Running 0 2m11s
[root@cloud-mn01 ~]#
StatefulSets are used to deploy stateful applications.
# A Deployment treats all Pods as identical
# no ordering requirements
# no need to care which node a Pod runs on
# Pods can be scaled up and down freely
# A StatefulSet keeps each Pod independent
# unique network identity and persistent storage per Pod
# guaranteed startup order, e.g. a MySQL master/slave setup
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
# a headless Service has its ClusterIP set to None
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
# kind: a set of stateful Pods
kind: StatefulSet
metadata:
name: nginx-statefulset
namespace: default
spec:
serviceName: nginx
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
[root@cloud-mn01 ~]# kubectl apply -f sts.yaml
service/nginx created
statefulset.apps/nginx-statefulset created
[root@cloud-mn01 ~]# kubectl get statefulset
NAME READY AGE
nginx-statefulset 3/3 2m7s
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-statefulset-0 1/1 Running 0 73s
nginx-statefulset-1 1/1 Running 0 53s
nginx-statefulset-2 1/1 Running 0 27s
[root@cloud-mn01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
nginx ClusterIP None <none> 80/TCP 75s
[root@cloud-mn01 ~]# kubectl delete statefulset --all
statefulset.apps "nginx-statefulset" deleted
[root@cloud-mn01 ~]# kubectl delete svc nginx
service "nginx" deleted
[root@cloud-mn01 ~]#
The difference between a Deployment and a StatefulSet: every Pod deployed by a StatefulSet has a unique, stable identity.
# A DNS name is generated from the Pod's hostname following a fixed pattern
# Format: <hostname>.<service-name>.<namespace>.svc.cluster.local
# Example: nginx-statefulset-0.nginx.default.svc.cluster.local
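While the StatefulSet above is running, those stable DNS names can be checked from inside the cluster; a hedged example using the same busybox image as before (dns-check is just a throwaway Pod name):
kubectl run dns-check --image=busybox:1.28.4 --restart=Never -- sleep 3600
kubectl exec -it dns-check -- nslookup nginx-statefulset-0.nginx.default.svc.cluster.local
kubectl delete pod dns-check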
A DaemonSet runs a daemon, ensuring that every node runs a copy of the same Pod. Typical use case: installing a data-collection agent on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ds-test
labels:
app: filebeat
spec:
selector:
matchLabels:
app: filebeat
template:
metadata:
labels:
app: filebeat
spec:
containers:
- name: logs
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: varlog
mountPath: /tmp/log
volumes:
- name: varlog
hostPath:
path: /var/log
[root@cloud-mn01 ~]# kubectl apply -f ds.yaml
daemonset.apps/ds-test created
[root@cloud-mn01 ~]# kubectl get daemonset
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds-test 2 2 2 2 2 <none> 7m22s
[root@cloud-mn01 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ds-test-dxd5b 1/1 Running 0 7m33s 10.244.2.18 cloud-dn02 <none> <none>
ds-test-lp9js 1/1 Running 0 7m33s 10.244.1.16 cloud-dn01 <none> <none>
[root@cloud-mn01 ~]#
# Log into the container
[root@cloud-mn01 ~]# kubectl exec -it ds-test-lp9js bash
root@ds-test-lp9js:/# ls /tmp/log
anaconda boot.log-20210727 cron firewalld maillog pods secure-20210727 tuned
audit boot.log-20210728 cron-20210727 grubby maillog-20210727 qemu-ga spooler wtmp
boot.log btmp dmesg grubby_prune_debug messages rhsm spooler-20210727 yum.log
boot.log-20210720 containers dmesg.old lastlog messages-20210727 secure tallylog
root@ds-test-lp9js:/#
# Simulate a log file being written on the node
[root@cloud-dn01 ~]# echo "this is a test" > /var/log/test.log
[root@cloud-dn01 ~]#
# Check inside the Pod (file changes on the node are reflected in the Pod)
root@ds-test-lp9js:/# cat /tmp/log/test.log
this is a test
root@ds-test-lp9js:/#
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
backoffLimit: 4
[root@cloud-mn01 ~]# kubectl create -f job.yaml
job.batch/pi created
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ds-test-dxd5b 1/1 Running 0 44m
ds-test-lp9js 1/1 Running 0 44m
pi-lmzjq 0/1 ContainerCreating 0 19s
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ds-test-dxd5b 1/1 Running 0 45m
ds-test-lp9js 1/1 Running 0 45m
pi-lmzjq 0/1 Completed 0 2m1s
[root@cloud-mn01 ~]# kubectl get jobs
NAME COMPLETIONS DURATION AGE
pi 1/1 81s 2m3s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl logs pi-lmzjq
3.1415926535897932...
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl delete -f job.yaml
job.batch "pi" deleted
[root@cloud-mn01 ~]#
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
[root@cloud-mn01 ~]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ds-test-dxd5b 1/1 Running 0 51m
ds-test-lp9js 1/1 Running 0 51m
hello-1627443540-4rjl5 0/1 Completed 0 25s
[root@cloud-mn01 ~]# kubectl get cronjobs
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 0 47s 3m10s
[root@cloud-mn01 ~]# kubectl logs hello-1627443540-4rjl5
Wed Jul 28 03:39:23 UTC 2021
Hello from the Kubernetes cluster
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ds-test-dxd5b 1/1 Running 0 53m
ds-test-lp9js 1/1 Running 0 53m
hello-1627443540-4rjl5 0/1 Completed 0 2m12s
hello-1627443600-bw4kw 0/1 Completed 0 71s
hello-1627443660-nwql2 0/1 ContainerCreating 0 11s
[root@cloud-mn01 ~]# kubectl delete -f cronjob.yaml
cronjob.batch "hello" deleted
[root@cloud-mn01 ~]#
A Secret stores encoded data in etcd and lets Pod containers access it, for example by mounting it as a Volume.
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: YWJjZDEyMzQuLg==
[root@cloud-mn01 ~]# kubectl create -f secret.yaml
secret/mysecret created
[root@cloud-mn01 ~]# kubectl get secrets
NAME TYPE DATA AGE
default-token-29kgd kubernetes.io/service-account-token 3 10d
mysecret Opaque 2 13s
[root@cloud-mn01 ~]#
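The values under data are base64-encoded rather than encrypted; they can be produced and verified like this (matching the username/password used above):
echo -n 'admin' | base64        # YWRtaW4=
echo -n 'abcd1234..' | base64   # YWJjZDEyMzQuLg==
echo 'YWRtaW4=' | base64 -d     # admin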
apiVersion: v1
kind: Pod
metadata:
name: secret-var
spec:
containers:
- name: nginx
image: nginx
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
[root@cloud-mn01 ~]# kubectl apply -f secret_var.yaml
pod/secret-var created
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
secret-var 0/1 ContainerCreating 0 10s
[root@cloud-mn01 ~]# kubectl exec -it secret-var bash
root@secret-var:/# echo $SECRET_USERNAME
admin
root@secret-var:/# echo $SECRET_PASSWORD
abcd1234..
root@secret-var:/#
apiVersion: v1
kind: Pod
metadata:
name: secret-vol
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
[root@cloud-mn01 ~]# kubectl apply -f secret-vol.yaml
pod/secret-vol created
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
secret-var 1/1 Running 0 2m44s
secret-vol 0/1 ContainerCreating 0 5s
[root@cloud-mn01 ~]# kubectl exec -it secret-vol bash
root@secret-vol:/# cat /etc/foo/username
adminroot@secret-vol:/# cat /etc/foo/password
abcd1234..root@secret-vol:/#
[root@cloud-mn01 ~]# cat redis.properties
redis.host=simwor.com
redis.port=6379
redis.password=abcd1234..
[root@cloud-mn01 ~]# kubectl create configmap redis-config --from-file=redis.properties
configmap/redis-config created
[root@cloud-mn01 ~]# kubectl get configmap
NAME DATA AGE
redis-config 1 9s
[root@cloud-mn01 ~]# kubectl describe configmap redis-config
Name: redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis.properties:
----
redis.host=simwor.com
redis.port=6379
redis.password=abcd1234..
Events: <none>
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat configmap-vol.yaml
apiVersion: v1
kind: Pod
metadata:
name: configmap-vol
spec:
containers:
- name: busybox
image: busybox
command: [ "/bin/sh","-c","cat /etc/config/redis.properties" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: redis-config
restartPolicy: Never
[root@cloud-mn01 ~]# kubectl apply -f configmap-vol.yaml
pod/configmap-vol created
[root@cloud-mn01 ~]# kubectl logs configmap-vol
redis.host=simwor.com
redis.port=6379
redis.password=abcd1234..
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat myconfig.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: myconfig
namespace: default
data:
special.level: info
special.type: hello
[root@cloud-mn01 ~]# kubectl apply -f myconfig.yaml
configmap/myconfig created
[root@cloud-mn01 ~]# kubectl describe configmap myconfig
Name: myconfig
Namespace: default
Labels: <none>
Annotations:
Data
====
special.level:
----
info
special.type:
----
hello
Events: <none>
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat configmap-val.yaml
apiVersion: v1
kind: Pod
metadata:
name: configmap-val
spec:
containers:
- name: busybox
image: busybox
command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
env:
- name: LEVEL
valueFrom:
configMapKeyRef:
name: myconfig
key: special.level
- name: TYPE
valueFrom:
configMapKeyRef:
name: myconfig
key: special.type
restartPolicy: Never
[root@cloud-mn01 ~]# kubectl apply -f configmap-val.yaml
pod/configmap-val created
[root@cloud-mn01 ~]# kubectl logs configmap-val
info hello
[root@cloud-mn01 ~]#
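Besides referencing keys one by one, an entire ConfigMap can be injected with envFrom. A minimal sketch, assuming a hypothetical ConfigMap named app-env whose keys are valid environment-variable names (dotted keys such as special.level would be skipped by envFrom):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  LOG_LEVEL: info
  APP_MODE: hello
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-envfrom
spec:
  containers:
  - name: busybox
    image: busybox
    # prints the two variables injected from the ConfigMap, then exits
    command: [ "/bin/sh", "-c", "env | grep -E 'LOG_LEVEL|APP_MODE'" ]
    envFrom:
    - configMapRef:
        name: app-env
  restartPolicy: Never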
A Service provides service discovery and load balancing for Pods.
# Pod
labels:
app: nginx
# Service
selector:
app: nginx
All access to Kubernetes resources goes through the api-server and passes three steps: authentication, authorization, and admission control.
- Role: grants access within a specific namespace; ClusterRole: grants access across all namespaces.
- RoleBinding: binds a Role to subjects; ClusterRoleBinding: binds a ClusterRole to subjects.
[root@cloud-mn01 ~]# kubectl create namespace roledemo
namespace/roledemo created
[root@cloud-mn01 ~]# kubectl get ns
NAME STATUS AGE
default Active 11d
kube-node-lease Active 11d
kube-public Active 11d
kube-system Active 11d
roledemo Active 13s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx -n roledemo
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl get pod -n roledemo
NAME READY STATUS RESTARTS AGE
webapp-59d9889648-b8mwl 1/1 Running 0 61s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: roledemo
name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
[root@cloud-mn01 ~]# kubectl create -f role.yaml
role.rbac.authorization.k8s.io/pod-reader created
[root@cloud-mn01 ~]# kubectl get role -n roledemo
NAME CREATED AT
pod-reader 2021-07-30T01:36:03Z
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat role-binding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: roledemo
subjects:
- kind: User
name: lucy # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role #this must be Role or ClusterRole
name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
[root@cloud-mn01 ~]# kubectl create -f role-binding.yaml
rolebinding.rbac.authorization.k8s.io/read-pods created
[root@cloud-mn01 ~]# kubectl get rolebinding -n roledemo
NAME ROLE AGE
read-pods Role/pod-reader 19s
[root@cloud-mn01 ~]#
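Whether the binding grants what was intended can be checked without issuing client certificates for lucy, by impersonating the user (a quick, hedged check):
kubectl auth can-i list pods -n roledemo --as lucy      # expected: yes
kubectl auth can-i delete pods -n roledemo --as lucy    # expected: no
kubectl auth can-i list pods -n default --as lucy       # expected: no (the Role is namespace-scoped)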
[root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl expose deployment webapp --port=80 --target-port=80 --type=NodePort
service/webapp exposed
[root@cloud-mn01 ~]#
ingress controller manifest (saved as ingress-controller.yaml):
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: lizhenliang/nginx-ingress-controller:0.30.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 101
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
limits:
- min:
memory: 90Mi
cpu: 100m
type: Container
[root@cloud-mn01 ~]# kubectl apply -f ingress-controller.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
[root@cloud-mn01 ~]# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-766fb9f77-l2jbh 0/1 Running 0 47s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat ingress-rule.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- host: example.ingressdemo.com
http:
paths:
- path: /
backend:
serviceName: webapp
servicePort: 80
[root@cloud-mn01 ~]# kubectl apply -f ingress-rule.yaml
ingress.networking.k8s.io/example-ingress created
[root@cloud-mn01 ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> example.ingressdemo.com 80 11s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl get pods -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-766fb9f77-l2jbh 1/1 Running 0 14m 192.168.1.202 cloud-dn02 <none> <none>
[root@cloud-mn01 ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> example.ingressdemo.com 80 8m57s
[root@cloud-mn01 ~]#
The ingress controller is running on node cloud-dn02 (192.168.1.202), so adding
192.168.1.202 example.ingressdemo.com
to the Windows hosts file is enough to access the application by domain name.
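From the master (or any Linux host) the rule can also be exercised without editing a hosts file, by supplying the Host header directly; a hedged example against the node running the controller:
curl -H 'Host: example.ingressdemo.com' http://192.168.1.202/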
Helm manages an application's resources as a whole, enables efficient reuse, and provides application-level version management. Helm is a package manager for Kubernetes, much like yum/apt on Linux, and makes it easy to deploy previously packaged YAML files onto Kubernetes.
[root@cloud-mn01 ~]# tar -zxf helm-v3.0.0-linux-amd64.tar.gz
[root@cloud-mn01 ~]# ll
total 11800
-rw-r--r-- 1 root root 12082866 Sep 7 2020 helm-v3.0.0-linux-amd64.tar.gz
drwxr-xr-x 2 3434 3434 50 Nov 13 2019 linux-amd64
[root@cloud-mn01 ~]# ls linux-amd64/
helm LICENSE README.md
[root@cloud-mn01 ~]# mv linux-amd64/helm /usr/bin
[root@cloud-mn01 ~]# helm help
[root@cloud-mn01 ~]# helm repo add stable https://charts.helm.sh/stable
"stable" has been added to your repositories
[root@cloud-mn01 ~]# helm repo add kubelog https://charts.kubelog.com/stable
"kubelog" has been added to your repositories
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "kubelog" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@cloud-mn01 ~]# helm repo list
NAME URL
stable https://charts.helm.sh/stable
kubelog https://charts.kubelog.com/stable
[root@cloud-mn01 ~]#
Command | Description |
---|---|
create | Create a chart with the given name |
dependency | Manage chart dependencies |
get | Download a release. Subcommands: all, hooks, manifest, notes, values |
history | Fetch release history |
install | Install a chart |
list | List releases |
package | Package a chart directory into a chart archive |
pull | Download a chart from a remote repository and unpack it locally, e.g. helm pull stable/mysql --untar |
repo | Add, list, remove, update, and index chart repositories. Subcommands: add, index, list, remove, update |
rollback | Roll back a release to a previous revision |
search | Search for charts by keyword. Subcommands: hub, repo |
show | Show detailed chart information. Subcommands: all, chart, readme, values |
status | Show the status of a named release |
template | Render templates locally |
uninstall | Uninstall a release |
upgrade | Upgrade a release |
version | Show the helm client version |
[root@cloud-mn01 ~]# helm search repo weave
NAME CHART VERSION APP VERSION DESCRIPTION
kubelog/weave-cloud 0.3.9 1.4.0 DEPRECATED - Weave Cloud is a add-on to Kuberne...
kubelog/weave-scope 1.1.12 1.12.0 DEPRECATED - A Helm chart for the Weave Scope c...
stable/weave-cloud 0.3.9 1.4.0 DEPRECATED - Weave Cloud is a add-on to Kuberne...
stable/weave-scope 1.1.12 1.12.0 DEPRECATED - A Helm chart for the Weave Scope c...
[root@cloud-mn01 ~]# helm install ui stable/weave-scope
NAME: ui
LAST DEPLOYED: Fri Jul 30 12:57:02 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:
kubectl -n default port-forward $(kubectl -n default get endpoints \
ui-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040
then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:
https://www.weave.works/docs/scope/latest/introducing/
[root@cloud-mn01 ~]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ui default 1 2021-07-30 12:57:02.102575712 +0800 CST deployed weave-scope-1.1.12 1.12.0
[root@cloud-mn01 ~]# helm status ui
NAME: ui
LAST DEPLOYED: Fri Jul 30 12:57:02 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:
kubectl -n default port-forward $(kubectl -n default get endpoints \
ui-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040
then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:
https://www.weave.works/docs/scope/latest/introducing/
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
ui-weave-scope ClusterIP 10.105.66.201 <none> 80/TCP 5m36s
[root@cloud-mn01 ~]# kubectl edit svc ui-weave-scope
# spec: type: ClusterIP -> NodePort
spec:
clusterIP: 10.105.66.201
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: weave-scope
component: frontend
release: ui
sessionAffinity: None
type: NodePort
service/ui-weave-scope edited
[root@cloud-mn01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
ui-weave-scope NodePort 10.105.66.201 <none> 80:30657/TCP 6m24s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# helm create mychart
Creating mychart
[root@cloud-mn01 ~]# ll mychart/
total 8
drwxr-xr-x 2 root root 6 Jul 30 13:10 charts
-rw-r--r-- 1 root root 905 Jul 30 13:10 Chart.yaml
drwxr-xr-x 3 root root 146 Jul 30 13:10 templates
-rw-r--r-- 1 root root 1490 Jul 30 13:10 values.yaml
[root@cloud-mn01 ~]#
# Chart.yaml - configuration/properties of this chart
# templates - template YAML files
# values.yaml - global variables available to the template YAML files
[root@cloud-mn01 mychart]# kubectl create deployment webapp --image=nginx --dry-run=client -o yaml > templates/deployment.yaml
[root@cloud-mn01 mychart]# kubectl expose deployment webapp --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml > templates/service.yaml
Error from server (NotFound): deployments.apps "webapp" not found
[root@cloud-mn01 mychart]# kubectl create deployment webapp --image=nginx
[root@cloud-mn01 mychart]# kubectl expose deployment webapp --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml > templates/service.yaml
[root@cloud-mn01 mychart]# kubectl delete deployment webapp
deployment.apps "webapp" deleted
[root@cloud-mn01 mychart]#
[root@cloud-mn01 ~]# helm install webapp mychart
NAME: webapp
LAST DEPLOYED: Fri Jul 30 13:25:03 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=webapp" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:80
[root@cloud-mn01 ~]# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
weave-scope-cluster-agent-ui 1/1 1 1 28m
weave-scope-frontend-ui 1/1 1 1 28m
webapp 0/1 1 0 13s
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
weave-scope-agent-ui-4mdsb 1/1 Running 0 28m
weave-scope-agent-ui-cps6j 1/1 Running 0 28m
weave-scope-agent-ui-lh4mh 1/1 Running 0 28m
weave-scope-cluster-agent-ui-7498b8d4f4-7gj92 1/1 Running 0 28m
weave-scope-frontend-ui-649c7dcd5d-kwbtn 1/1 Running 0 28m
webapp-59d9889648-m78pb 0/1 ContainerCreating 0 17s
[root@cloud-mn01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
ui-weave-scope NodePort 10.105.66.201 <none> 80:30657/TCP 28m
webapp NodePort 10.97.72.40 <none> 80:32391/TCP 21s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# vi mychart/templates/deployment.yaml
[root@cloud-mn01 ~]# grep replicas mychart/templates/deployment.yaml
replicas: 3
[root@cloud-mn01 ~]# helm upgrade webapp mychart
Release "webapp" has been upgraded. Happy Helming!
NAME: webapp
LAST DEPLOYED: Fri Jul 30 13:26:31 2021
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=webapp" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:80
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
weave-scope-agent-ui-4mdsb 1/1 Running 0 29m
weave-scope-agent-ui-cps6j 1/1 Running 0 29m
weave-scope-agent-ui-lh4mh 1/1 Running 0 29m
weave-scope-cluster-agent-ui-7498b8d4f4-7gj92 1/1 Running 0 29m
weave-scope-frontend-ui-649c7dcd5d-kwbtn 1/1 Running 0 29m
webapp-59d9889648-5zwxr 0/1 ContainerCreating 0 5s
webapp-59d9889648-m78pb 1/1 Running 0 93s
webapp-59d9889648-wp946 0/1 ContainerCreating 0 5s
[root@cloud-mn01 ~]#
Variables are defined in values.yaml; fields that vary in the templates read their values from there, so templates are rendered dynamically and can be reused efficiently.
[root@cloud-mn01 ~]# cat mychart/values.yaml
image: nginx
label: nginx
port: 80
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: {{ .Values.label}}
name: {{ .Release.Name}}
spec:
replicas: 3
selector:
matchLabels:
app: {{ .Values.label}}
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: {{ .Values.label}}
spec:
containers:
- image: {{ .Values.image}}
name: nginx
resources: {}
status: {}
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: {{ .Values.label}}
name: {{ .Release.Name}}
spec:
ports:
- port: {{ .Values.port}}
protocol: TCP
targetPort: 80
selector:
app: {{ .Values.label}}
type: NodePort
status:
loadBalancer: {}
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# helm install webapp1 --dry-run mychart
NAME: webapp1
LAST DEPLOYED: Fri Jul 30 13:47:25 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: nginx
name: webapp1
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
type: NodePort
status:
loadBalancer: {}
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: nginx
name: webapp1
spec:
replicas: 3
selector:
matchLabels:
app: nginx
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
[root@cloud-mn01 ~]# helm install webapp1 mychart
NAME: webapp1
LAST DEPLOYED: Fri Jul 30 13:47:58 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@cloud-mn01 ~]# kubectl get deployment webapp1
NAME READY UP-TO-DATE AVAILABLE AGE
webapp1 0/3 3 0 12s
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
weave-scope-agent-ui-4mdsb 1/1 Running 0 52m
weave-scope-agent-ui-cps6j 1/1 Running 0 52m
weave-scope-agent-ui-lh4mh 1/1 Running 0 52m
weave-scope-cluster-agent-ui-7498b8d4f4-7gj92 1/1 Running 0 52m
weave-scope-frontend-ui-649c7dcd5d-kwbtn 1/1 Running 0 52m
webapp-59d9889648-5zwxr 1/1 Running 0 22m
webapp-59d9889648-m78pb 1/1 Running 0 24m
webapp-59d9889648-wp946 1/1 Running 0 22m
webapp1-f89759699-6js79 1/1 Running 0 85s
webapp1-f89759699-pwn5s 1/1 Running 0 85s
webapp1-f89759699-wdrwd 1/1 Running 0 85s
[root@cloud-mn01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
ui-weave-scope NodePort 10.105.66.201 <none> 80:30657/TCP 52m
webapp NodePort 10.97.72.40 <none> 80:32391/TCP 24m
webapp1 NodePort 10.97.85.235 <none> 80:31916/TCP 88s
[root@cloud-mn01 ~]#
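Values from values.yaml can also be overridden at install time, which is a common way to reuse one chart for several releases; a hedged example (webapp2 is just an example release name):
helm install webapp2 mychart --set image=nginx:1.15
helm get values webapp2    # shows the user-supplied overrides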
Here the NFS server is set up on the master node; nfs-utils must also be installed on every worker node.
[root@cloud-mn01 ~]# yum install nfs-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.ustc.edu.cn
* updates: mirrors.163.com
Package 1:nfs-utils-1.3.0-0.68.el7.1.x86_64 already installed and latest version
Nothing to do
[root@cloud-mn01 ~]# cat /etc/exports
/data/nfs *(rw,no_root_squash)
[root@cloud-mn01 ~]# mkdir -p /data/nfs
[root@cloud-mn01 ~]# systemctl start nfs
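The worker nodes only need the NFS client, and the export can be verified from any node before a Pod mounts it; a hedged check:
# on every worker node
yum install -y nfs-utils
# confirm the export is visible from the node
showmount -e 192.168.1.101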
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-test
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: wwwroot
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: wwwroot
nfs:
server: 192.168.1.101
path: /data/nfs
[root@cloud-mn01 ~]# kubectl apply -f nfs-test.yaml
deployment.apps/nfs-test created
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-test-5c77cd745-qhqk7 1/1 Running 0 50s
[root@cloud-mn01 ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-test 1/1 1 1 58s
[root@cloud-mn01 ~]# kubectl expose deployment nfs-test --port=80 --target-port=80 --type=NodePort
service/nfs-test exposed
[root@cloud-mn01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
nfs-test NodePort 10.106.16.61 <none> 80:30716/TCP 7s
[root@cloud-mn01 ~]# echo "Hello NFS" > /data/nfs/index.html
[root@cloud-mn01 ~]# curl cloud-dn01:30716
Hello NFS
[root@cloud-mn01 ~]#
Mounting NFS directly exposes the server address and directory. It is preferable to request storage through a PV and consume it through a PVC for persistence (a PVC does not care where the storage server is).
[root@cloud-mn01 ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: /data/nfs
server: 192.168.1.101
[root@cloud-mn01 ~]# cat pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pvc-test
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: wwwroot
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: wwwroot
persistentVolumeClaim:
claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl apply -f pv.yaml
persistentvolume/my-pv created
[root@cloud-mn01 ~]# kubectl apply -f pvc.yaml
deployment.apps/pvc-test created
persistentvolumeclaim/my-pvc created
[root@cloud-mn01 ~]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/my-pv 5Gi RWX Retain Bound default/my-pvc 14s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/my-pvc Bound my-pv 5Gi RWX 11s
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pvc-test-58b7bf955f-blpvb 1/1 Running 0 100s
[root@cloud-mn01 ~]# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
pvc-test 1/1 1 1 109s
[root@cloud-mn01 ~]# echo "Hello PV" > /data/nfs/index.html
[root@cloud-mn01 ~]# kubectl exec -it pvc-test-58b7bf955f-blpvb cat /usr/share/nginx/html/index.html
Hello PV
[root@cloud-mn01 ~]#
Prometheus + Grafana: the former periodically scrapes monitoring data, the latter presents it in user-friendly dashboards.
[root@cloud-mn01 ~]# kubectl apply -f node-exportor.yaml
daemonset.apps/node-exporter created
service/node-exporter created
[root@cloud-mn01 prometheus]# kubectl apply -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[root@cloud-mn01 prometheus]# kubectl apply -f configmap.yaml
configmap/prometheus-config created
[root@cloud-mn01 prometheus]# kubectl apply -f prometheus.deploy.yml
deployment.apps/prometheus created
[root@cloud-mn01 prometheus]# kubectl apply -f prometheus.svc.yml
service/prometheus created
[root@cloud-mn01 ~]# kubectl get deployments -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 12d
prometheus 1/1 1 1 60s
[root@cloud-mn01 ~]#
[root@cloud-mn01 grafana]# kubectl apply -f grafana-deploy.yaml
deployment.apps/grafana-core created
[root@cloud-mn01 grafana]# kubectl apply -f grafana-svc.yaml
service/grafana created
[root@cloud-mn01 grafana]# kubectl apply -f grafana-ing.yaml
ingress.extensions/grafana created
[root@cloud-mn01 grafana]# kubectl get deployments -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 12d
grafana-core 0/1 1 0 21s
prometheus 1/1 1 1 4m3s
[root@cloud-mn01 grafana]#
[root@cloud-mn01 ~]# cat node-exportor.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
namespace: kube-system
labels:
k8s-app: node-exporter
spec:
selector:
matchLabels:
k8s-app: node-exporter
template:
metadata:
labels:
k8s-app: node-exporter
spec:
containers:
- image: prom/node-exporter
name: node-exporter
ports:
- containerPort: 9100
protocol: TCP
name: http
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: node-exporter
name: node-exporter
namespace: kube-system
spec:
ports:
- name: http
port: 9100
nodePort: 31672
protocol: TCP
type: NodePort
selector:
k8s-app: node-exporter
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat prometheus/rbac-setup.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups:
- extensions
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: kube-system
[root@cloud-mn01 ~]# cat prometheus/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: kube-system
data:
prometheus.yml: |
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
- job_name: 'kubernetes-nodes'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-services'
kubernetes_sd_configs:
- role: service
metrics_path: /probe
params:
module: [http_2xx]
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'kubernetes-ingresses'
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
[root@cloud-mn01 ~]# cat prometheus/prometheus.deploy.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
name: prometheus-deployment
name: prometheus
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- image: prom/prometheus:v2.0.0
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=24h"
ports:
- containerPort: 9090
protocol: TCP
volumeMounts:
- mountPath: "/prometheus"
name: data
- mountPath: "/etc/prometheus"
name: config-volume
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 2500Mi
serviceAccountName: prometheus
volumes:
- name: data
emptyDir: {}
- name: config-volume
configMap:
name: prometheus-config
[root@cloud-mn01 ~]# cat prometheus/prometheus.svc.yml
---
kind: Service
apiVersion: v1
metadata:
labels:
app: prometheus
name: prometheus
namespace: kube-system
spec:
type: NodePort
ports:
- port: 9090
targetPort: 9090
nodePort: 30003
selector:
app: prometheus
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# cat grafana/grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana-core
namespace: kube-system
labels:
app: grafana
component: core
spec:
replicas: 1
selector:
matchLabels:
app: grafana
component: core
template:
metadata:
labels:
app: grafana
component: core
spec:
containers:
- image: grafana/grafana:4.2.0
name: grafana-core
imagePullPolicy: IfNotPresent
# env:
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
env:
# The following env variables set up basic auth with the default admin user and admin password.
- name: GF_AUTH_BASIC_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "false"
# - name: GF_AUTH_ANONYMOUS_ORG_ROLE
# value: Admin
# does not really work, because of template variables in exported dashboards:
# - name: GF_DASHBOARDS_JSON_ENABLED
# value: "true"
readinessProbe:
httpGet:
path: /login
port: 3000
# initialDelaySeconds: 30
# timeoutSeconds: 1
volumeMounts:
- name: grafana-persistent-storage
mountPath: /var
volumes:
- name: grafana-persistent-storage
emptyDir: {}
[root@cloud-mn01 ~]# cat grafana/grafana-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: kube-system
labels:
app: grafana
component: core
spec:
type: NodePort
ports:
- port: 3000
selector:
app: grafana
component: core
[root@cloud-mn01 ~]# cat grafana/grafana-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana
namespace: kube-system
spec:
rules:
- host: k8s.grafana
http:
paths:
- path: /
backend:
serviceName: grafana
servicePort: 3000
[root@cloud-mn01 ~]#
[root@cloud-mn01 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.99.175.201 <none> 3000:32531/TCP 107s
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 12d
node-exporter NodePort 10.106.35.20 <none> 9100:31672/TCP 9m53s
prometheus NodePort 10.96.200.111 <none> 9090:30003/TCP 5m27s
[root@cloud-mn01 ~]#
http://192.168.1.201:32531/login
Default username: admin, password: admin
Dashboards -> Import -> Grafana.net Dashboard -> 315
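Before importing dashboard 315, Grafana needs the Prometheus service added as a data source; a hedged sketch of the settings that match the Services created above (the in-cluster DNS name is an assumption):
# Grafana -> Data Sources -> Add data source -> Type: Prometheus
# URL: http://prometheus.kube-system.svc:9090   (or via the NodePort: http://192.168.1.201:30003)
# Access: proxy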