(1) Automatic bin packing
(2) Self-healing
(3) Horizontal scaling
(4) Service discovery
(5) Rolling updates
(6) Version rollback
(7) Secret and configuration management
(8) Storage orchestration
(9) Batch processing
The control node of the k8s cluster; it schedules and manages the cluster and accepts operation requests from users outside the cluster. A Master Node consists of:
API Server: the unified entry point of the cluster; hands data to etcd for storage via a RESTful interface
Scheduler: node scheduling; selects which node an application is deployed to
ClusterState Store (etcd database): the storage system that holds cluster data
Controller Manager: handles routine background tasks in the cluster; one controller per resource type
A cluster work node that runs the user's application containers. A Worker Node contains:
kubelet: the Master's agent on the node; manages the local containers (lifecycle, container creation, and other container operations).
kube-proxy: provides a network proxy and implements load balancing
and a Container Runtime.
ReplicaSet: ensures the expected number of Pod replicas
Deployment: stateless applications (e.g. web, nginx, apache, tomcat)
StatefulSet: stateful application deployment (e.g. mysql)
Stateful: the application is unique and cannot simply be recreated as a perfect replacement, e.g. MySQL or Oracle databases
DaemonSet: ensures every Node runs a copy of the same Pod
Job: one-off tasks (like Linux `at`)
CronJob: scheduled tasks (like Linux `crontab`)
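As a quick illustration, the main controller types can also be created imperatively; a sketch, with placeholder resource names that are not from these notes:

```bash
# Illustrative imperative equivalents for the controller types above
kubectl create deployment web --image=nginx                # stateless workload
kubectl create job pi-once --image=perl -- perl -Mbignum=bpi -wle 'print bpi(100)'
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c date
```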
kubectl get deploy                                   # list deployed resources
kubectl get nodes                                    # list all nodes
kubectl get pod -n kube-system                       # check that everything is Running
kubectl get pods -o wide
kubectl create deployment nginx --image=nginx        # create a deployment (and its pod)
kubectl get pod                                      # check for Running status
kubectl expose deployment nginx --port=80 --type=NodePort                     # expose a port
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort    # expose a port
kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web1 -o yaml > web1.yaml
kubectl get pod,svc                                  # list pods and services
kubectl get cs                                       # check component status
kubectl apply -f a.yaml                              # deploy from a yaml file
kubectl get csr                                      # list certificate signing requests
kubectl logs --tail=200 --all-containers=true -f -n sip -lapp=contractmg      # tail logs
kubectl logs
```bash
kubectl get deploy                           # list deployed resources
# generate a yaml file
kubectl create deployment web --image=nginx -o yaml --dry-run > my1.yaml
kubectl get deploy nginx -o=yaml --export > my2.yaml
# labels are a pod attribute; taints are a node attribute
kubectl label node node1 env_role=prod       # add a label
kubectl label node node1 env_role-           # remove a label
kubectl taint node [node] key=value:effect   # add a taint (effect: NoSchedule | PreferNoSchedule | NoExecute)
kubectl taint node k8snode1 key:NoSchedule-  # remove a taint
# pod upgrade
kubectl set image deployment web nginx=nginx:1.15
# check rollout status
kubectl rollout status deployment web
# view revision history
kubectl rollout history deployment web
# roll back
kubectl rollout undo deployment web
# roll back to a specific revision
kubectl rollout undo deployment web --to-revision=2
# elastic scaling: scale out
kubectl scale deployment web --replicas=5
# delete
kubectl delete statefulset --all    # delete the controller
kubectl delete svc nginx            # delete the service
kubectl delete pod --all
kubectl delete node --all
kubectl delete -f job.yaml
kubectl delete -f cronJob.yaml
# enter a specific pod
kubectl exec -it aaa bash
# one-off task
kubectl create -f job.yaml
# check
kubectl get pods
kubectl get jobs
echo -n 'admin' | base64
kubectl get ns                               # list namespaces
kubectl create ns roletest                   # create a namespace
kubectl run nginx --image=nginx -n roletest  # run a pod
kubectl get pods -n roletest
kubectl get role -n roletest                 # list the created roles
kubectl get role,rolebinding -n roletest     # list role bindings
# generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# generate the client certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes lucy-csr.json | cfssljson -bare lucy
helm repo list
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
helm repo remove stable
helm repo update
helm search repo weave
helm install ui stable/weave-scope
helm list
helm status ui
kubectl edit svc ui-weave-scope
kubectl create deployment nginxchart --image=nginx -o yaml --dry-run
```
There are currently two main ways to deploy a Kubernetes cluster in production.
kubeadm is a tool released by the official community for quickly deploying a kubernetes cluster. It can stand up a cluster with two commands: first, create a Master node with `kubeadm init`; second, join the Node machines to the cluster with `$ kubeadm join`.
Before starting, the machines for deploying the Kubernetes cluster must meet the following requirements:

Role | IP |
---|---|
k8smaster | 192.168.163.133 |
k8snode1 | 192.168.163.134 |
k8snode2 | 192.168.163.135 |
$ systemctl stop firewalld
$ systemctl disable firewalld
$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
$ setenforce 0 # temporary
$ swapoff -a # temporary
$ vim /etc/fstab # permanent (comment out the swap line)
$ hostnamectl set-hostname <hostname>
$ cat >> /etc/hosts << EOF
192.168.163.133 k8smaster
192.168.163.134 k8snode1
192.168.163.135 k8snode2
EOF
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system # take effect
$ yum install ntpdate -y
$ ntpdate time.windows.com
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Set the registry mirror address:
# cat > /etc/docker/daemon.json << EOF
{ "registry-mirrors": ["https://xxx.mirror.aliyuncs.com"] }
EOF
Add the yum repository:
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Since versions change frequently, pin the version number here:
$ yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
$ systemctl enable kubelet
kubeadm init offers a way to specify the image registry when pulling images:
kubeadm init --apiserver-advertise-address=192.168.163.133 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
Because the default image registry k8s.gcr.io is unreachable from mainland China, the Aliyun registry address is specified here.
If you ran kubeadm reset, delete this directory before repeating the following steps:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Make sure you can reach the quay.io registry. If the Pod image download fails, change the image address in the manifest.
Run on 192.168.163.134/135 (the Nodes).
To add new nodes to the cluster, run the kubeadm join command printed by kubeadm init:
$ kubeadm join 192.168.31.61:6443 --token esce21.q6hetwm8si29qxwn \
--discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5
Create a pod in the Kubernetes cluster and verify that it runs normally:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
Access URL: http://NodeIP:Port, e.g. http://192.168.163.134:30521 here.
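To find the randomly assigned NodePort (30521 in this example) without scanning the svc table by eye, a jsonpath query works:

```bash
# Look up the NodePort that was assigned to the Service above
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
# Then hit it through any node IP
curl http://192.168.163.134:30521
```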
kubectl is the command-line tool for a Kubernetes cluster. With kubectl you can manage the cluster itself and install and deploy containerized applications on it.
In a k8s cluster, resource management and resource-object orchestration can both be handled through declarative (YAML) files: the operations on resource objects are written into a YAML file, called a resource manifest, and applying that manifest with kubectl is enough to orchestrate and deploy large numbers of resource objects.
(1) About YAML. YAML ("YAML Ain't Markup Language") is a markup language that emphasizes being data-centric rather than markup-centric. It is a highly readable format for expressing serialized data.
(2) YAML basic syntax: indentation expresses hierarchy; indent with spaces only, never tabs; keys are case-sensitive; a `#` starts a comment.
(3) YAML data structures: mappings (key/value objects), sequences (lists, items prefixed with `-`), and scalars (strings, numbers, booleans).
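A tiny illustration of those three structures (the file name is arbitrary):

```bash
# Illustration of YAML structures: mapping, sequence, scalars
cat > demo.yaml << 'EOF'
person:           # mapping (key/value object)
  name: tom       # scalar string
  age: 18         # scalar number
fruits:           # sequence (list)
  - apple
  - banana
EOF
```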
For resources that have not been deployed yet:
write the output of the create command into my1.yaml.
Pitfall: if you misspell deployment, the command fails with "unknown flag: --image".
kubectl create deployment web --image=nginx -o yaml --dry-run > my1.yaml
For resources already deployed: export the existing object as a yaml manifest.
kubectl get deploy nginx -o=yaml --export > my2.yaml
A Pod is the smallest unit that can be created and managed in k8s: the smallest resource object that a user creates or deploys in the resource model, and the resource object on which containerized applications run. All other resource objects exist to support or extend Pods: controller objects manage Pods, Service and Ingress objects expose Pod references, PersistentVolume objects provide storage for Pods, and so on. k8s never handles containers directly, only Pods, and a Pod is composed of one or more containers. A Pod is the most important concept in Kubernetes: every Pod has a special "root container" called the Pause container, whose image is part of the Kubernetes platform itself; besides the Pause container, each Pod contains one or more closely related user business containers.
The containers in a Pod can ***share storage and networking*** and can be viewed as one logical host, sharing things like namespaces, cgroups, and other isolation resources.
Multiple containers share the same network namespace, so the containers in one Pod share the Pod's IP and port namespace, and they can communicate with each other via localhost; just make sure the containers do not have port conflicts. Different Pods have different IPs. Containers in different Pods cannot communicate via IPC (unless specially configured); they normally communicate using the Pod IP.
The containers in a Pod can **share storage volumes**; such a volume is defined as part of the Pod and can be mounted into the filesystem of every container in that Pod.
Pods are relatively short-lived components. For example, when the node hosting a Pod fails, the Pod is rescheduled onto another node, but note that the rescheduled Pod is a brand-new Pod with no relationship whatsoever to the old one.
All Pods in a k8s cluster live in a single shared network address space, which means every Pod can reach every other Pod via its IP address.
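A hedged sketch of the shared-network behavior: a hypothetical two-container Pod in which a busybox container fetches the nginx welcome page over localhost:

```bash
# Hypothetical manifest: busybox reaches nginx on localhost inside one Pod
cat > two-containers.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: nginx
    image: nginx
  - name: curl
    image: busybox
    command: ["/bin/sh", "-c", "sleep 5; wget -qO- http://localhost:80; sleep 3600"]
EOF
kubectl apply -f two-containers.yaml
kubectl logs two-containers -c curl   # shows the nginx welcome page
```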
imagePullPolicy: [ Always | Never | IfNotPresent ]
Image pull policy; the default is IfNotPresent: pull only when the image is absent on the host.
resources:
  limits:       # resource limits: the maximum resources the container may use
    cpu: string
    memory: string
  requests:     # resource requests: the initial resources available when the container starts
    cpu: string
    memory: string
restartPolicy: [ Always | Never | OnFailure ]
# restart policy; the default is Always
livenessProbe: if the check fails, the container is killed and handled according to the Pod's restartPolicy.
readinessProbe: if the check fails, Kubernetes removes the Pod from the Service endpoints.
Probe check methods:
httpGet: send an HTTP request; a status code in the 200-400 range means success.
exec: run a shell command; exit code 0 means success.
tcpSocket: attempt to establish a TCP socket connection.
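A hedged sketch combining the two probe types on an nginx container (names, ports, and thresholds are illustrative):

```bash
cat > probe-demo.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:            # failure -> container killed, restartPolicy applies
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:           # failure -> Pod removed from Service endpoints
      tcpSocket:
        port: 80
      periodSeconds: 5
EOF
kubectl apply -f probe-demo.yaml
```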
spec:
  nodeSelector:
    env_role: dev
  containers:
  - name: nginx
    image: nginx:1.15
Add a label to the node (run on the master):
kubectl label node node1 env_role=prod
Remove a node label (the trailing - deletes the label):
kubectl label node node1 env_role-
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: env_role
            operator: In
            values:
            - dev
            - test
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: group
            operator: In
            values:
            - otherprod
  containers:
  - name: webdemo
    image: nginx
Node affinity (nodeAffinity) is essentially the same idea as the earlier nodeSelector: labels on the nodes constrain which node a pod is scheduled onto.
Hard affinity (required...): the constraints must be satisfied.
Soft affinity (preferred...): satisfied on a best-effort basis, with no guarantee.
Commonly supported operators:
In, NotIn, Exists, Gt, Lt, DoesNotExist
(1) Add a taint to a node:
kubectl taint node [node] key=value:effect   # effect is one of NoSchedule, PreferNoSchedule, NoExecute
(2) Remove a taint, for example:
kubectl taint node k8snode1 key:NoSchedule-
Taint toleration is configured on the Pod side:
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
A Controller is an object that manages and runs containers on the cluster.
Operational concerns for Pods, such as elastic scaling and rolling upgrades, are implemented through Controllers.
kubectl create deployment web --image=nginx --dry-run -o yaml > web.yaml
kubectl apply -f web.yaml
kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web1 -o yaml > web1.yaml
kubectl get pod,svc
kubectl set image deployment web nginx=nginx:1.15
kubectl rollout status deployment web
kubectl rollout history deployment web
kubectl rollout undo deployment web
kubectl rollout undo deployment web --to-revision=2
kubectl scale deployment web --replicas=5
(2) LoadBalancer: used on public clouds, where the provider's load-balancer controller exposes the Service externally.
(1) Stateless:
All Pods are considered identical. With kubectl get pods -o wide you can see that each Pod has its own single IP, and the IPs differ.
No ordering requirements.
No need to care which node it runs on.
Can be scaled up and down at will.
(2) Stateful:
All of the above factors must be considered.
Each Pod is kept independent; Pod start order and uniqueness must be maintained
(unique network identifier, persistent storage;
ordered, e.g. a mysql primary/replica setup).
Difference between Deployment and StatefulSet: a StatefulSet Pod has an identity (unique identifier), and a domain name is generated from the hostname following a fixed rule.
Domain name rule:
<hostname>.<service-name>.<namespace>.svc.cluster.local
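Assuming the headless Service and StatefulSet defined just below (Service "nginx", StatefulSet "nginx-statefulset" in the default namespace), each replica's stable DNS name can be resolved from a throwaway pod:

```bash
# Resolve the first replica's stable name, following the rule above
kubectl run -it dns-test --image=busybox:1.28 --rm --restart=Never -- \
  nslookup nginx-statefulset-0.nginx.default.svc.cluster.local
```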
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx-statefulset
namespace: default
spec:
serviceName: nginx
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
A DaemonSet runs one Pod on every node; newly joined nodes automatically run a copy of the Pod as well.
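A minimal DaemonSet sketch (the names are illustrative, and nginx stands in for a real per-node agent):

```bash
# Hypothetical DaemonSet: one agent Pod per node
cat > daemonset-demo.yaml << 'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
spec:
  selector:
    matchLabels:
      app: ds-demo
  template:
    metadata:
      labels:
        app: ds-demo
    spec:
      containers:
      - name: agent
        image: nginx
EOF
kubectl apply -f daemonset-demo.yaml
kubectl get pods -l app=ds-demo -o wide   # expect one Pod per node
```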
Job (one-off task) example:
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
backoffLimit: 4
kubectl apply -f job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
kubectl apply -f cronjob.yaml
kubectl get pods -o wide
kubectl logs <pod-name>
kubectl delete -f cronjob.yaml
Encode: echo -n 'admin' | base64
Decode: echo -n 'fewfewf' | base64 -d
Purpose: store base64-encoded sensitive data in etcd and let Pod containers access it as environment variables or a mounted volume.
Scenario: credentials.
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
secret-var.yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
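To confirm the injection worked (assuming the manifest above is saved as secret-var.yaml):

```bash
kubectl apply -f secret-var.yaml
kubectl exec mypod -- printenv SECRET_USERNAME SECRET_PASSWORD
# admin
# 1f2d1e2e67df
```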
secret-vol.yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
Deploy the application. Remember to delete the earlier env-variable version first: both Pods are named mypod, so the deployment will otherwise fail.
kubectl apply -f secret-vol.yaml
Purpose: store unencrypted data in etcd and expose it to Pods as environment variables or a mounted volume.
Scenario: configuration files.
redis.properties
redis.host=127.0.0.1
redis.port=6379
redis.password=123456
kubectl create configmap redis-config --from-file=redis.properties
Create a Pod and mount the volume:
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: busybox
image: busybox
command: [ "/bin/sh","-c","cat /etc/config/redis.properties" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: redis-config
restartPolicy: Never
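Because the busybox container just cats the mounted file and exits, the result shows up in the Pod log; a quick check (the manifest file name here is hypothetical):

```bash
kubectl apply -f cm-pod.yaml   # hypothetical file name for the Pod above
kubectl logs mypod             # prints the redis.properties contents
```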
(1) Create a yaml declaring the variable data and create the ConfigMap from it.
(2) Mount it into the Pod as environment variables.
vi mycm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: myconfig
namespace: default
data:
special.level: info
special.type: hello
Create the Pod (config-var.yaml):
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: busybox
image: busybox
command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
env:
- name: LEVEL
valueFrom:
configMapKeyRef:
name: myconfig
key: special.level
- name: TYPE
valueFrom:
configMapKeyRef:
name: myconfig
key: special.type
restartPolicy: Never
Deploy the Pod with the variables mounted:
kubectl apply -f config-var.yaml
Accessing the k8s cluster goes through three steps: authentication, authorization, and admission control.
Every access passes through the apiserver, which acts as the central coordinator, like a gatekeeper.
Access requires a certificate, a token, or a username/password; accessing a Pod additionally requires a ServiceAccount.
Transport security: port 8080 is not exposed externally (internal access only); externally, port 6443 is used.
Authorization is based on RBAC
(role-based access control).
Admission control is a list of admission controllers: if the list covers the request's content it passes, otherwise it is rejected.
Roles:
Role: access within a specific namespace
ClusterRole: access across all namespaces
Role bindings:
RoleBinding: binds a role to a subject
ClusterRoleBinding: binds a cluster role to a subject
Subjects:
user: a user
group: a user group
serviceAccount: a service account
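Besides the YAML manifests shown below, the same role and binding can be created imperatively; an illustrative sketch:

```bash
# Imperative equivalents (illustrative) of the Role/RoleBinding YAML below
kubectl create role pod-reader --verb=get,watch,list --resource=pods -n roletest
kubectl create rolebinding read-pods --role=pod-reader --user=lucy -n roletest
```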
kubectl create ns roletest
kubectl get ns
# run a pod
kubectl run nginx --image=nginx -n roletest
# view the created pod
kubectl get pods -n roletest
rbac-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: roletest
name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
Deploy the yaml to create the role:
kubectl apply -f rbac-role.yaml
View the created role:
kubectl get role -n roletest
rbac-rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: roletest
subjects:
- kind: User
name: lucy # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role #this must be Role or ClusterRole
name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
kubectl apply -f rbac-rolebinding.yaml
kubectl get role,rolebinding -n roletest
rbac-user.sh
cat > lucy-csr.json <<EOF
{
"CN": "lucy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes lucy-csr.json | cfssljson -bare lucy
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.163.133:6443 \
--kubeconfig=lucy-kubeconfig
kubectl config set-credentials lucy \
--client-key=lucy-key.pem \
--client-certificate=lucy.pem \
--embed-certs=true \
--kubeconfig=lucy-kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=lucy \
--kubeconfig=lucy-kubeconfig
kubectl config use-context default --kubeconfig=lucy-kubeconfig
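With the lucy kubeconfig in effect, only the verbs granted to pod-reader should succeed; a quick check:

```bash
# Allowed by pod-reader:
kubectl get pods -n roletest --kubeconfig=lucy-kubeconfig
# Not granted, so it should be rejected with Forbidden:
kubectl run test --image=nginx -n roletest --kubeconfig=lucy-kubeconfig
```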
Copy in the CA files (the ones beginning with ca): cp /root/TLS/k8s/ca* ./
If that certificate is not usable, generate a CA certificate yourself:
ca-config.json
cat>ca-config.json<< EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": ["signing", "key encipherment", "server auth", "client auth"]
}
}
}
}
EOF
ca-csr.json
cat > ca-csr.json<< EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This generates three files: ca.csr, ca.pem, and ca-key.pem.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes lucy-csr.json | cfssljson -bare lucy
That is, the first two commands from rbac-user.sh run on their own.
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.31.63:6443 \
--kubeconfig=mary-kubeconfig
kubectl config set-credentials mary \
--client-key=mary-key.pem \
--client-certificate=mary.pem \
--embed-certs=true \
--kubeconfig=mary-kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=mary \
--kubeconfig=mary-kubeconfig
kubectl config use-context default --kubeconfig=mary-kubeconfig
kubectl get pods -n roletest
Check which node the Pod is deployed on, then add the domain mapping to the Windows hosts file.
This uses the Service's NodePort:
a port is opened on every node, and the service is reachable through any node via node IP + port.
Pods and the Ingress are tied together through a Service: the Ingress is the unified entry point, and the Service fronts a group of Pods.
Step 1: deploy the Ingress Controller.
Step 2: create Ingress rules.
Here we deploy the officially maintained nginx controller.
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "-"
# Here: "-"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: lizhenliang/nginx-ingress-controller:0.30.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 101
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
limits:
- min:
memory: 90Mi
cpu: 100m
type: Container
kubectl apply -f ingress-controller.yaml
kubectl get pods -n ingress-nginx
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- host: example.ingredemo.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 80
ingress01.yaml
---
# http
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- host: example.ctnrs.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 80
ingress02.yaml
---
# https
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tls-example-ingress
spec:
tls:
- hosts:
- sslexample.ctnrs.com
secretName: secret-tls
rules:
- host: sslexample.ctnrs.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 80
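The tls section above references secret-tls, which must exist before the Ingress works; assuming you already have a certificate/key pair (the paths here are placeholders):

```bash
# Create the TLS secret referenced by secretName
kubectl create secret tls secret-tls --cert=tls.crt --key=tls.key
```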
Compared with the basic application-deployment process used so far:
Helm is a Kubernetes package manager, much like a Linux package manager (yum, apt); it makes it easy to deploy previously packaged yaml files onto Kubernetes.
Installation: download the helm release archive, upload it to Linux, unpack it, and copy the helm binary from the unpacked directory into /usr/bin.
helm repo add <repo-name> <repo-url>
For example:
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
helm repo list
helm repo remove aliyun
helm repo update
helm search repo weave
helm install ui stable/weave-scope
helm list
helm status ui
nginxchart.yaml
kubectl create deployment nginxchart --image=nginx -o yaml --dry-run >nginxchartweb.yaml
nginxchartsvc.yaml
kubectl expose deployment nginxchart --port=80 --target-port=80 --name=nginxchart --type=NodePort -o yaml --dry-run >nginxchartsvc.yaml
helm install nginxchart mychart/
helm upgrade nginxchart mychart/
Parameters are passed in to render the template dynamically: values are injected into the yaml at install time.
The chart's values.yaml file defines global variables for the yaml files.
The deployment.yaml template references those variables.
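A hedged sketch of what that might look like, assuming the chart directory was scaffolded beforehand with `helm create mychart` (field names are illustrative, not the chart generated above):

```bash
cat > mychart/values.yaml << 'EOF'
replicas: 2
image: nginx
label: web
EOF
cat > mychart/templates/deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deploy
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
      - name: web
        image: {{ .Values.image }}
EOF
helm template mychart/   # render locally to check the substitution
```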
Step 1: pick a server to act as the NFS server.
Install NFS:
yum install -y nfs-utils
Configure the export path:
vi /etc/exports
add: /data/nfs *(rw,no_root_squash)
The exported path must exist: mkdir -p /data/nfs
Step 2: install NFS on the k8s cluster nodes: yum install -y nfs-utils
Step 3: start the NFS service on the NFS server:
systemctl start nfs
Check:
ps -ef | grep nfs
Step 4: deploy an application in the k8s cluster that uses NFS persistent network storage:
kubectl apply -f nfs-nginx.yaml
kubectl exec -it nginx-dep1-xxx bash
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-dep1
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: wwwroot
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: wwwroot
nfs:
server: 192.168.63.133
path: /data/nfs
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: /k8s/nfs
server: 192.168.63.163
pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-dep1
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: wwwroot
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: wwwroot
persistentVolumeClaim:
claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
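After applying both manifests, the claim should bind to the PV, since the capacity and access mode match (the file names here are hypothetical):

```bash
kubectl apply -f pv.yaml -f pvc.yaml   # hypothetical file names for the manifests above
kubectl get pv,pvc                     # STATUS should show Bound on both
```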
Cluster monitoring: resource utilization, node count, running pods.
Pod monitoring: container metrics, application metrics.
Prometheus: open source; monitoring and alerting plus a time-series database; periodically scrapes the state of monitored components over HTTP; no complex integration needed, an HTTP endpoint is enough.
Grafana: an open-source data analysis and visualization tool that supports many data sources.
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
namespace: kube-system
labels:
k8s-app: node-exporter
spec:
selector:
matchLabels:
k8s-app: node-exporter
template:
metadata:
labels:
k8s-app: node-exporter
spec:
containers:
- image: prom/node-exporter
name: node-exporter
ports:
- containerPort: 9100
protocol: TCP
name: http
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: node-exporter
name: node-exporter
namespace: kube-system
spec:
ports:
- name: http
port: 9100
nodePort: 31672
protocol: TCP
type: NodePort
selector:
k8s-app: node-exporter
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups:
- extensions
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: kube-system
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: kube-system
data:
prometheus.yml: |
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
- job_name: 'kubernetes-nodes'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-services'
kubernetes_sd_configs:
- role: service
metrics_path: /probe
params:
module: [http_2xx]
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'kubernetes-ingresses'
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
name: prometheus-deployment
name: prometheus
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- image: prom/prometheus:v2.0.0
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=24h"
ports:
- containerPort: 9090
protocol: TCP
volumeMounts:
- mountPath: "/prometheus"
name: data
- mountPath: "/etc/prometheus"
name: config-volume
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 2500Mi
serviceAccountName: prometheus
volumes:
- name: data
emptyDir: {}
- name: config-volume
configMap:
name: prometheus-config
---
kind: Service
apiVersion: v1
metadata:
labels:
app: prometheus
name: prometheus
namespace: kube-system
spec:
type: NodePort
ports:
- port: 9090
targetPort: 9090
nodePort: 30003
selector:
app: prometheus
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana-core
namespace: kube-system
labels:
app: grafana
component: core
spec:
selector:
matchLabels:
app: grafana
component: core
replicas: 1
template:
metadata:
labels:
app: grafana
component: core
spec:
containers:
- image: grafana/grafana:4.2.0
name: grafana-core
imagePullPolicy: IfNotPresent
# env:
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
env:
# The following env variables set up basic auth with the default admin user and admin password.
- name: GF_AUTH_BASIC_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "false"
# - name: GF_AUTH_ANONYMOUS_ORG_ROLE
# value: Admin
# does not really work, because of template variables in exported dashboards:
# - name: GF_DASHBOARDS_JSON_ENABLED
# value: "true"
readinessProbe:
httpGet:
path: /login
port: 3000
# initialDelaySeconds: 30
# timeoutSeconds: 1
volumeMounts:
- name: grafana-persistent-storage
mountPath: /var
volumes:
- name: grafana-persistent-storage
emptyDir: {}
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: kube-system
labels:
app: grafana
component: core
spec:
type: NodePort
ports:
- port: 3000
selector:
app: grafana
component: core
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana
namespace: kube-system
spec:
rules:
- host: k8s.grafana
http:
paths:
- path: /
backend:
serviceName: grafana
servicePort: 3000
Deploy the DaemonSet first:
kubectl create -f xxx.yaml
kubectl apply -f xxx.yaml
kubectl apply -f node-exporter.yaml
kubectl apply -f rbac-setup.yaml
kubectl apply -f configmap.yaml
kubectl apply -f prometheus.deploy.yml
kubectl apply -f prometheus.svc.yml
kubectl apply -f grafana-deploy.yaml
kubectl apply -f grafana-svc.yaml
kubectl apply -f grafana-ing.yaml
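Once everything is up, check the Pods and Services; Prometheus is pinned to NodePort 30003 in the Service above, while Grafana's NodePort is assigned automatically:

```bash
kubectl get pods -n kube-system | grep -E 'prometheus|grafana|node-exporter'
kubectl get svc -n kube-system | grep -E 'prometheus|grafana'
# Prometheus UI: http://<NodeIP>:30003   Grafana: http://<NodeIP>:<assigned NodePort>
```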
kubeadm is a tool released by the official community for quickly deploying a kubernetes cluster.
It can stand up a kubernetes cluster with two commands:
# Create a Master node
$ kubeadm init
# Join a Node to the current cluster
$ kubeadm join
Before starting, the machines for deploying the Kubernetes cluster must meet the following requirements:
Role | IP |
---|---|
master1 | 192.168.44.155 |
master2 | 192.168.44.156 |
node1 | 192.168.44.157 |
VIP (virtual IP) | 192.168.44.158 |
# Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# Set the hostname as planned
hostnamectl set-hostname <hostname>
# Add hosts entries on the masters
cat >> /etc/hosts << EOF
192.168.44.158 master.k8s.io k8s-vip
192.168.44.155 master01.k8s.io master1
192.168.44.156 master02.k8s.io master2
192.168.44.157 node01.k8s.io node1
EOF
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # take effect
# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
yum install -y conntrack-tools libseccomp libtool-ltdl
yum install -y keepalived
Configuration on the master1 node:
cat > /etc/keepalived/keepalived.conf << EOF   # (keepalived configuration body missing from these notes)
Configuration on the master2 node:
cat > /etc/keepalived/keepalived.conf << EOF   # (likewise missing)
Run on both master nodes:
# Start keepalived
$ systemctl start keepalived.service
# Enable at boot
$ systemctl enable keepalived.service
# Check status
$ systemctl status keepalived.service
After startup, check master1's NIC info (the VIP should appear there):
ip a s ens33
yum install -y haproxy
The configuration is identical on both masters. It declares the two backend master servers and sets haproxy to listen on port 16443, which therefore becomes the cluster entry point.
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp
bind *:16443
option tcplog
default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server master01.k8s.io 192.168.44.155:6443 check
server master02.k8s.io 192.168.44.156:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
EOF
Start haproxy on both masters:
# Enable at boot
$ systemctl enable haproxy
# Start haproxy
$ systemctl start haproxy
# Check status
$ systemctl status haproxy
Check the port:
netstat -lntup|grep haproxy
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
$ cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://xxx.mirror.aliyuncs.com"]
}
EOF
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Since versions change frequently, pin the version number here:
$ yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
$ systemctl enable kubelet
Operate on the master that holds the VIP, here master1:
$ mkdir /usr/local/kubernetes/manifests -p
$ cd /usr/local/kubernetes/manifests/
$ vi kubeadm-config.yaml
apiServer:
certSANs:
- master1
- master2
- master.k8s.io
- 192.168.44.158
- 192.168.44.155
- 192.168.44.156
- 127.0.0.1
extraArgs:
authorization-mode: Node,RBAC
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.1.0.0/16
scheduler: {}
$ kubeadm init --config kubeadm-config.yaml
Configure the environment variables as prompted so kubectl can be used:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
$ kubectl get pods -n kube-system
Save the following output as prompted; it will be needed shortly:
kubeadm join master.k8s.io:16443 --token jv5z7n.3y1zi95p952y9p65 \
--discovery-token-ca-cert-hash sha256:403bca185c2f3a4791685013499e7ce58f9848e2213e27194b75a2e3293d8812 \
--control-plane
Check the cluster status:
kubectl get cs
kubectl get pods -n kube-system
Fetch the flannel yaml from the official address and run on master1:
mkdir flannel
cd flannel
wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Install the flannel network:
kubectl apply -f kube-flannel.yml
Check:
kubectl get pods -n kube-system
Copy the keys and related files from master1 to master2:
# ssh [email protected] mkdir -p /etc/kubernetes/pki/etcd
# scp /etc/kubernetes/admin.conf [email protected]:/etc/kubernetes
# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} [email protected]:/etc/kubernetes/pki
# scp /etc/kubernetes/pki/etcd/ca.* [email protected]:/etc/kubernetes/pki/etcd
Run the join command that kubeadm init printed on master1, adding the --control-plane flag,
which joins the node to the cluster as a master control-plane node:
kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba --control-plane
Check status:
kubectl get node
kubectl get pods --all-namespaces
Run on node1.
To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:
kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba
Reinstall the cluster network, since a new node was added.
Check status:
kubectl get node
kubectl get pods --all-namespaces
Create a pod in the Kubernetes cluster and verify that it runs normally:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
Access URL: http://NodeIP:Port