First, download docker-compose. After unpacking the archive, copy the docker-compose binary to the /usr/local/bin directory and make it executable.
Link: https://pan.baidu.com/s/1TtM-wtOtps6ISz4WwKKIXQ
Extraction code: ywf1
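A minimal installation sketch, assuming the download unpacks to a single binary named docker-compose:
cp docker-compose /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# Verify the installation
docker-compose --version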
Step 1: Download and install the goharbor-prepare image.
Step 2: Download and extract harbor-offline-installer-v2.3.2.tgz.
Link: https://pan.baidu.com/s/1ZFFsQHjK3DJ4pWCOdzg7yA
Extraction code: wyzb
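A sketch of the extraction step, assuming the archive unpacks into a harbor directory (as the offline installer does):
tar -zxvf harbor-offline-installer-v2.3.2.tgz
cd harbor
ls    # should contain harbor.yml.tmpl, prepare, install.sh, and the bundled image archive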
Step 3: Modify the configuration parameters and comment out the https-related settings.
cp harbor.yml.tmpl harbor.yml
vi harbor.yml
# Comment out the https configuration:
#https:
#https port for harbor, default is 443
#port: 443
# The path of cert and key files for nginx
#certificate: /data/cert/server.crt
#private_key: /data/cert/server.key
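For reference, a sketch of the http-related entries that remain active in harbor.yml after the change; the hostname value here is an assumption, matching the registry address used later in this article:
hostname: 10.79.4.122
http:
  # port for http, default is 80
  port: 80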
Step 4: Configure the logging service.
vi /etc/rsyslog.conf
$ModLoad imtcp
$InputTCPServerRun 1514
By default, Harbor manages the logs of all of its services through rsyslog. After updating the configuration, restart the rsyslog service.
systemctl restart rsyslog
View the harbor-db logs:
cat /var/log/messages | grep harbor-db
Alternatively, rsyslog can be skipped entirely: comment out the logging entries in the docker-compose.yml file. In that case, logs can be inspected directly with docker logs -f [container ID].
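For reference, a hedged sketch of the kind of logging block in Harbor's generated docker-compose.yml that would be commented out (the exact option values in your file may differ):
#    logging:
#      driver: "syslog"
#      options:
#        syslog-address: "tcp://localhost:1514"
#        tag: "harbor-db"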
Step 5: Install Harbor.
./prepare
./install.sh
Startup has succeeded only once every service reports a healthy status.
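To check, run the following from the harbor directory:
docker-compose ps    # every service in the output should show a "healthy" state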
Note that if you need to restart the Harbor services, you must first empty the /data/harbor/database directory; otherwise harbor-db fails to start, which in turn causes harbor-core and harbor-jobservice to fail as well.
In addition, a restart may fail with the error cannot access '/var/lib/postgresql/data': Operation not permitted. In that case, edit the harbor.yml file, find the data_volume entry, and change the default /data mount directory to a different directory.
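A sketch of that change, assuming /data1 as the replacement directory:
# harbor.yml
data_volume: /data1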
First, enter the address of the host running Harbor in a browser to open the Harbor admin console (Firefox is not supported).
Then enter the username and password to log in (the password can be changed in the harbor.yml configuration file). Note that if you change the contents of harbor.yml, you must delete all containers as well as the /data folder, then rerun the ./prepare and ./install.sh commands to start the containers again.
The screen after a successful login:
Click New Project, enter the project name "k8s-test", set the access level to "Public" and the quota to 50 GB, then save.
First, edit the insecure-registries parameter in the /etc/docker/daemon.json file, setting its value to the address of the host running Harbor:
After making the change, restart the Docker service.
systemctl daemon-reload
systemctl restart docker
Then log in to Docker on the node:
docker login [registry IP]
Enter the username and password:
admin
Harbor123
After logging in, push the previously built web image to the Harbor registry:
docker push [registry IP]/k8s-test/web:v1
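If the local image is not yet tagged with the registry prefix, tag it first; a sketch assuming the registry host 10.79.4.122 and a local image web:v1:
docker tag web:v1 10.79.4.122/k8s-test/web:v1
docker push 10.79.4.122/k8s-test/web:v1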
Once the push succeeds, the uploaded image can be seen in Harbor.
Link: https://pan.baidu.com/s/12GjmRQWyAq_29UYE9ZIt4A
Extraction code: 1tw0
Step 1: Download and extract the easy-rsa toolkit.
Step 2: Enter the easy-rsa-master/easyrsa3 directory and run the init-pki command to initialize the PKI.
cd easy-rsa-master/easyrsa3
./easyrsa init-pki
Step 3: Generate the certificate files.
./easyrsa --batch "--req-cn=k8s-master@$(date +%s)" build-ca nopass
./easyrsa --subject-alt-name="IP:k8s-master,IP:169.169.0.1,DNS:kubernetes.default" build-server-full server nopass
./easyrsa --dn-mode=org --req-cn=kubecfg --req-org=system:master --req-c= --req-st= --req-city= --req-email= --req-ou= build-client-full kubecfg nopass
Replace k8s-master above with the corresponding host IP.
Step 4: Copy the certificate files to the /etc/kubernetes/pki directory.
mkdir /etc/kubernetes/pki
cp pki/ca.crt pki/issued/server.crt pki/private/server.key /etc/kubernetes/pki/
chmod 644 /etc/kubernetes/pki/*
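Optionally, verify the generated server certificate before pointing kube-apiserver at it; a quick check with openssl:
openssl x509 -in /etc/kubernetes/pki/server.crt -noout -subject -dates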
Step 5: Copy the kubecfg files to the /srv/kubernetes directory.
mkdir /srv/kubernetes/
cp -fr pki/issued/kubecfg.crt /srv/kubernetes/
cp -fr pki/private/kubecfg.key /srv/kubernetes
echo 123456,admin,admin > /srv/kubernetes/basic_auth.csv
Step 6: Modify kube-apiserver and add the authentication parameters.
vi /etc/kubernetes/apiserver
--client-ca-file=/etc/kubernetes/pki/ca.crt
--tls-cert-file=/etc/kubernetes/pki/server.crt
--tls-private-key-file=/etc/kubernetes/pki/server.key
--basic-auth-file=/srv/kubernetes/basic_auth.csv
Step 7: Modify the kube-controller-manager configuration and add the authentication parameters.
vi /etc/kubernetes/controller-manager
--service-account-private-key-file=/etc/kubernetes/pki/server.key
--root-ca-file=/etc/kubernetes/pki/ca.crt
Step 8: Add the ServiceAccount admission-control parameter to the kube-apiserver configuration, as sketched below.
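A hedged sketch of the change, assuming the RPM-style /etc/kubernetes/apiserver file used elsewhere in this article; the plugin list here is only an example, and newer Kubernetes versions use --enable-admission-plugins instead of --admission-control:
vi /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"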
Once configured, restart the kube-apiserver and kube-controller-manager services. Finally, delete the old secrets.
kubectl get secrets --all-namespaces
kubectl delete secret default-token-50p5c
First, prepare the dashboard and metrics-scraper image files:
Link: https://pan.baidu.com/s/1O8S2xuylbxL7gG-txvCs2w
Extraction code: qpk4
Link: https://pan.baidu.com/s/1myxiBdlWaTi7csP1ofB6jA
Extraction code: z622
Download the officially provided configuration file:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-settings
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-admin
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-admin
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-minimal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: 10.79.4.122/k8s-test/kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kube-system
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: 10.79.4.122/k8s-test/kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30003
  selector:
    k8s-app: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
Run the deployment:
kubectl apply -f dashboard.yaml
Deployment result:
Once the deployment succeeds, open Firefox and go to https://k8s-node:30003/#/login to reach the dashboard login page.
View the token:
kubectl get secret -n kube-system
kubectl describe secret [token_secret_name] -n kube-system
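As a convenience, the two commands can be combined into one line; a sketch assuming the dashboard's token secret name contains kubernetes-dashboard-token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')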
Then paste the token into the dashboard page and click the Sign in button. The screen after a successful login looks like this:
Click the plus sign in the upper-right corner, paste the web deployment manifest into the input box, and click the Upload button.
Click Pods in the left-hand menu; the deployed pod appears with status Running, indicating a successful deployment.
Deploy the svc in the same way.
Click Service in the left-hand menu to see the deployed service.
First, build the new project image and push it to the Docker registry, then open the dashboard and click the "Replication Controllers" menu on the left.
Then click the edit button on the Replication Controller on the right:
Change the image tag and click the Update button.
Click the rc's name to open its detail page.
Then delete the old pod; Kubernetes will automatically start a new pod using the new image tag.
Step 1: Download the ingress-controller image and upload it to the Harbor registry.
Link: https://pan.baidu.com/s/1_n_IPRo2bojl2EIoaqqcUg
Extraction code: tjyt
Step 2: Create the namespace.
kubectl create namespace ingress-nginx
Step 3: Label the node, so that the ingress-controller is scheduled onto the nodes carrying that label.
kubectl label node k8s-node-1 isIngress="true"
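To confirm the label took effect:
kubectl get nodes --show-labels | grep isIngress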
The deployment file has two parts: the ingress-controller and the ingress. The ingress-controller manifest is slightly modified from the official example, changing the kind from Deployment to DaemonSet; this keeps the ingress-controller bound to specific nodes.
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: nginx-ingress-serviceaccount
    addonmanager.kubernetes.io/mode: Reconcile
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # Schedule onto the nodes carrying the matching label
      nodeSelector:
        isIngress: "true"
      # Expose the service via hostNetwork
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: 10.79.4.122/k8s-test/nginx-ingress-controller:0.25.0
          args:
            - /nginx-ingress-controller
            # Load the ingress-nginx/nginx-configuration ConfigMap, which allows the http-section
            # nginx settings to be overridden (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/)
            #- --configmap=$(POD_NAMESPACE)/nginx-configuration
            # These are ConfigMaps as well: Ingress itself only covers http/https reverse proxying,
            # while nginx can additionally proxy layer-4 TCP/UDP traffic. That is configured through
            # these separate ConfigMaps rather than Ingress objects; see
            # https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/.
            #- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            #- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            # Name of the Service through which the controller is published; must match the Service below
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30081
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-web
  namespace: book-process  # must be the same namespace as the Service referenced by serviceName below
  annotations:
    # Select the Ingress controller type
    kubernetes.io/ingress.class: "nginx"
    # Allow regular expressions in the rules' path fields
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Connect timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    # Timeout for sending data to the backend, default 60s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    # Timeout for reading the backend response, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    # Maximum size of the client request body
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    # URL rewriting
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # Routing rules
  rules:
    # Host name; must be a domain name. Configure it in the local hosts file,
    # pointing to the node where the ingress-controller is deployed
    - host: test.xxx.com
      http:
        paths:
          - path:
            backend:
              # Name of the backend Service
              serviceName: web
              # Port of the backend Service
              servicePort: 8081
The domain configured above should be added to the local hosts file, pointing to the IP address of the node where the ingress-controller runs.
Deployment command:
kubectl apply -f ingress.yml
Once deployed, request test.xxx.com:30081 from Postman to verify the result.
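curl works just as well; a sketch assuming the hosts entry from above is in place:
curl http://test.xxx.com:30081/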
kustomize is Kubernetes-native configuration management that customizes application configuration in a template-free way. It uses native Kubernetes concepts to help create and reuse resource configuration: a set of application description files serves as a Base, and the final deployment manifests are generated through Overlays.
Search for and download the kustomize binary, place it in the /usr/bin directory on host 10.79.4.122, and make it executable.
Put all the resource files in the same directory, then create a kustomization.yml file.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
  - open-server-deploy.yml
  - open-server-svc.yml
configMapGenerator:
  - name: filebeat-config
    files:
      - filebeat.yml=filebeat.yml
generatorOptions:
  disableNameSuffixHash: true
images:
  - name: web
    newTag: v2
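To inspect the rendered manifests before applying them:
kustomize build .    # prints the final YAML, including the generated filebeat-config ConfigMap, to stdout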
Edit the controller deployment file and update the filebeat-config entry.
Suppose the running web application needs to be upgraded from v1 to v2. The steps are as follows:
Step 1: Prepare the new image file.
Step 2: Edit the kustomization.yml configuration file and change the newTag entry to v2.
Step 3: Run the kustomize and kubectl commands to redeploy the project.
kustomize build . | kubectl apply -f -
Step 4: Delete the old pod.
Kubernetes will then automatically create a new pod based on the v2 image.
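A sketch of this final step; the pod name is hypothetical and should be taken from the kubectl get pods output:
kubectl get pods
kubectl delete pod web-xxxxx    # hypothetical name; the controller recreates the pod from the v2 image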