Pod Management
Basic management:
# Create a pod resource
kubectl create -f pod.yaml
# List pods
kubectl get pods nginx-pod
# Describe a pod
kubectl describe pod nginx-pod
# Update a resource
kubectl apply -f pod.yaml
# Delete a resource
kubectl delete pod nginx-pod
Pod.spec.nodeName forces the Pod to be scheduled onto the specified Node, bypassing the scheduler.
Pod.spec.nodeSelector selects nodes through the label-selector mechanism (see the sketch below).
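A minimal sketch of both fields (hypothetical Pod name and label; the label would first be applied with kubectl label node <node> disktype=ssd):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sched-demo
spec:
  # nodeName: 192.168.1.16     # uncomment to pin the Pod to this node, bypassing the scheduler
  nodeSelector:
    disktype: ssd              # the scheduler only considers nodes carrying this label
  containers:
  - name: nginx
    image: nginx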
Restart policy (restartPolicy); see the example after this list:
Always: always recreate the container when it stops; this is the default policy.
OnFailure: restart the container only when it exits abnormally (non-zero exit code).
Never: never restart the container after it terminates.
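For instance, a one-shot Pod is usually given OnFailure so a clean exit is left alone while a crash is retried (a sketch with hypothetical names):
apiVersion: v1
kind: Pod
metadata:
  name: one-shot-task
spec:
  restartPolicy: OnFailure     # the default would be Always
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "echo done"]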
livenessProbe
If the check fails, the container is killed and then handled according to the Pod's restartPolicy.
readinessProbe
If the check fails, Kubernetes removes the Pod from the Service endpoints.
httpGet
Sends an HTTP GET request; a response status code of 200-399 counts as success.
exec
Runs a shell command inside the container; an exit code of 0 counts as success.
tcpSocket
Succeeds if a TCP socket connection can be established (probe sketch below).
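A sketch of a plain nginx Pod using two of the probe types; ports, paths, and delays are illustrative only:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-probe-demo
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:             # failure: container is killed, restartPolicy decides what happens next
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      timeoutSeconds: 3
    readinessProbe:            # failure: Pod is removed from the Service endpoints
      tcpSocket:
        port: 80
      initialDelaySeconds: 5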
Troubleshooting:
kubectl describe TYPE NAME_PREFIX
kubectl logs nginx-xxx
kubectl exec -it nginx-xxx -- bash
Service
Environment variables
When a Pod is scheduled onto a Node, the kubelet adds a set of environment variables to each of its containers, and programs inside the Pod can use them to discover Services.
The variable names have the following format:
{SVCNAME}_SERVICE_HOST
{SVCNAME}_SERVICE_PORT
The service name (and port name) is converted to uppercase, with dashes turned into underscores.
Limitations:
1) Creation order matters: the Service must be created before the Pod, otherwise its environment variables are not set in the Pod.
2) A Pod only gets environment variables for Services in its own Namespace (example below).
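For example, assuming a Service named nginx-service on port 80 existed before the Pod started, the injected variables can be listed from inside the busybox test Pod defined later in this section (values are illustrative):
kubectl exec -it busybox -- env | grep NGINX_SERVICE
# NGINX_SERVICE_SERVICE_HOST=10.10.10.100    <- the Service ClusterIP
# NGINX_SERVICE_SERVICE_PORT=80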
DNS
The DNS add-on watches the Kubernetes API and creates a DNS record for every Service, so Pods can look up a Service's address by its DNS name.
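A Service resolves as <service>.<namespace>.svc.<cluster-domain> (cluster.local below); within the same namespace the short name is enough. Using the busybox test Pod defined later in this section:
kubectl exec -it busybox -- nslookup nginx-service
kubectl exec -it busybox -- nslookup nginx-service.default.svc.cluster.local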
vim kube-dns.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.
# Warning: This is a file generated from the base underscore template file: kube-dns.yaml.base
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.10.10.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-dns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.7
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.7
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
serviceAccountName: kube-dns
vim busybox.yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- image: busybox
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
Test DNS resolution:
kubectl exec -ti busybox -- nslookup kubernetes.default
kubectl exec -ti busybox -- nslookup nginx-service
kubectl describe svc nginx-service
Service types:
- ClusterIP
Allocates a cluster-internal IP address that is reachable only from inside the cluster; this is the default ServiceType.
- NodePort
Allocates a cluster-internal IP address and opens the same port on every Node to expose the Service, so it can be reached from outside the cluster (see the sketch after this list).
Access address: [NodeIP]:[NodePort]
- LoadBalancer
Allocates a cluster-internal IP address and opens a port on every Node to expose the Service.
In addition, Kubernetes asks the underlying cloud platform for a load balancer and adds each Node ([NodeIP]:[NodePort]) as a backend.
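A minimal NodePort sketch (hypothetical Service name; assumes Pods labelled app: nginx already exist):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80            # ClusterIP port
    targetPort: 80      # container port
    nodePort: 30080     # optional; if omitted, Kubernetes picks one from 30000-32767
# Reachable from outside the cluster at http://[NodeIP]:30080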
Ingress
Deployment
namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
tcp-services-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
udp-services-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app: ingress-nginx
template:
metadata:
labels:
app: ingress-nginx
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
serviceAccountName: nginx-ingress-serviceaccount
hostNetwork: true
containers:
- name: nginx-ingress-controller
image: lizhenliang/nginx-ingress-controller:0.9.0
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
# - --annotations-prefix=nginx.ingress.kubernetes.io
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
kubectl create -f namespace.yaml
kubectl create -f default-backend.yaml
kubectl create -f tcp-services-configmap.yaml
kubectl create -f udp-services-configmap.yaml
kubectl create -f rbac.yaml
kubectl create -f deployment.yaml
Official documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
kubectl get pods -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
default-http-backend-757754bc6b-f7g8d 1/1 Running 0 4m 172.17.90.5 192.168.1.16
nginx-ingress-controller-7f44d6dccb-6fcrd 1/1 Running 0 4m 192.168.1.111 192.168.1.111
Create test applications:
kubectl run --image=nginx nginx --replicas=2
kubectl expose deployment nginx --port=80
kubectl run --image=httpd httpd --replicas=2
kubectl expose deployment httpd --port=80
vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: http-test
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: nginx
servicePort: 80
- host: bar.baz.com
http:
paths:
- backend:
serviceName: httpd
servicePort: 80
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
http-test foo.bar.com,bar.baz.com 80 59s
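Since there are no DNS records for the test hosts, the rules can be exercised by sending the Host header directly to the node running the ingress controller (192.168.1.111 in the output above, reachable on port 80 because of hostNetwork: true):
curl -H "Host: foo.bar.com" http://192.168.1.111/    # should hit the nginx Service
curl -H "Host: bar.baz.com" http://192.168.1.111/    # should hit the httpd Service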
TLS test
cat > hequan-csr.json <
Volume
Mounting an emptyDir volume:
apiVersion: v1
kind: Pod
metadata:
  name: redis-pod
spec:
  containers:
  - image: redis
    name: redis
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
kubectl describe pods redis-pod
Mounting a host directory (hostPath):
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /tmp-test
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: Directory
NFS:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 192.168.0.200
          path: /opt/wwwroot
glusterfs
GlusterFS deployment guide: http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
https://github.com/kubernetes/kubernetes/tree/8fd414537b5143ab039cb910590237cabf4af783/examples/volumes/glusterfs
yum install centos-release-gluster -y
yum install glusterfs-server -y
systemctl start glusterd.service
systemctl enable glusterd.service
iptables -I INPUT -p all -s 192.168.1.0/24 -j ACCEPT    # run on both GlusterFS nodes
On 192.168.1.111:
gluster peer probe 192.168.1.16
peer probe: success
gluster peer status
On 192.168.1.16:
gluster peer probe 192.168.1.111
peer probe: success
mkdir -p /opt/brick1/gv0    # create the brick directory on both nodes
gluster volume create gv0 replica 2 192.168.1.111:/opt/brick1/gv0 192.168.1.16:/opt/brick1/gv0 force
volume create: gv0: success: please start the volume to access data
gluster volume start gv0
gluster volume info
Mount test
yum install glusterfs-fuse -y
mount -t glusterfs 192.168.1.16:/gv0 /mnt
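A quick read/write check against the mounted volume (paths follow the commands above):
echo "hello gv0" > /mnt/test.txt
cat /mnt/test.txt
ls /opt/brick1/gv0/        # with replica 2 the file should appear in the brick on both nodes
umount /mnt                # optional: unmount once the test is done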
Mounting in Kubernetes:
vim glusterfs-endpoints.json
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs-cluster"
},
"subsets": [
{
"addresses": [
{
"ip": "192.168.1.111"
}
],
"ports": [
{
"port": 1
}
]
},
{
"addresses": [
{
"ip": "192.168.1.16"
}
],
"ports": [
{
"port": 1
}
]
}
]
}
vim glusterfs-service.json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs-cluster"
},
"spec": {
"ports": [
{"port": 1}
]
}
}
kubectl create -f glusterfs-endpoints.json
kubectl create -f glusterfs-service.json
kubectl get ep
NAME ENDPOINTS AGE
glusterfs-cluster 192.168.1.111:1,192.168.1.16:1 12s
kubectl describe svc glusterfs-cluster
Test
vim nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: glusterfsvol
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: glusterfsvol
glusterfs:
endpoints: glusterfs-cluster
path: gv0
readOnly: false
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
type: NodePort
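After applying the manifest, the shared content can be checked end to end (a sketch; the NodePort value comes from kubectl get svc):
kubectl create -f nginx-deployment.yaml
kubectl get pods -l app=nginx
kubectl get svc nginx-service                    # note the assigned NodePort
echo "hello from gv0" > /mnt/index.html          # write through the /mnt mount used in the earlier test
curl http://192.168.1.16:<NodePort>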
PersistentVolumes
1. PersistentVolume (PV): an abstraction over the underlying storage that turns storage into a cluster resource.
2. PersistentVolumeClaim (PVC): a claim that consumes PV resources.
3. A Pod requests a PVC and uses it as a volume; the cluster finds the PV bound to that PVC and mounts it into the Pod.
vim nfs-example.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: "/opt/nfs/data"
server: 192.168.0.200
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: "/usr/share/nginx/html/"
name: wwwroot
volumes:
- name: wwwroot
persistentVolumeClaim:
claimName: nfs-pvc
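After creating the resources, the binding can be confirmed before relying on the Pod (a sketch; STATUS should show Bound):
kubectl create -f nfs-example.yaml
kubectl get pv nfs-pv
kubectl get pvc nfs-pvc
kubectl describe pod mypod       # the wwwroot volume should reference nfs-pvc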
vim glusterfs-example.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: gluster-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
glusterfs:
endpoints: "glusterfs-cluster"
path: "gv0"
readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: glusterfs-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: "/usr/share/nginx/html/"
name: wwwroot
volumes:
- name: wwwroot
persistentVolumeClaim:
claimName: pvc001