[root@k8s-master ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/06/12 11:08:53 [INFO] generating a new CA key and certificate from CSR
2019/06/12 11:08:53 [INFO] generate received request
2019/06/12 11:08:53 [INFO] received CSR
2019/06/12 11:08:53 [INFO] generating key: rsa-2048
2019/06/12 11:08:53 [INFO] encoded CSR
2019/06/12 11:08:53 [INFO] signed certificate with serial number 708489059891717538616716772053407287945320812263
# At this point the following five files should exist in the ssl working directory.
[root@k8s-master ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
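For reference, the ca-config.json consumed below via -config/-profile was created earlier and is not reproduced here. A minimal sketch of a ca-config.json defining the kubernetes signing profile follows; the 87600h expiry is an assumption, so adjust it to match your actual file:
$ cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}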
$ cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.4.12",
    "172.16.4.13",
    "172.16.4.14",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the Kubernetes server certificate and private key
[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/06/12 12:00:45 [INFO] generate received request
2019/06/12 12:00:45 [INFO] received CSR
2019/06/12 12:00:45 [INFO] generating key: rsa-2048
2019/06/12 12:00:45 [INFO] encoded CSR
2019/06/12 12:00:45 [INFO] signed certificate with serial number 276381852717263457656057670704331293435930586226
2019/06/12 12:00:45 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# Check the generated server.pem and server-key.pem
[root@k8s-master ssl]# ls server*
server.csr server-csr.json server-key.pem server.pem
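The admin-csr.json consumed by the next command is not shown above. A sketch of the usual contents follows (an assumption — the key point is O set to system:masters, which maps to the Group discussed later):
$ cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF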
[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2019/06/12 14:52:32 [INFO] generate received request
2019/06/12 14:52:32 [INFO] received CSR
2019/06/12 14:52:32 [INFO] generating key: rsa-2048
2019/06/12 14:52:33 [INFO] encoded CSR
2019/06/12 14:52:33 [INFO] signed certificate with serial number 491769057064087302830652582150890184354925110925
2019/06/12 14:52:33 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# Check the generated certificate and private key
[root@k8s-master ssl]# ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
kube-apiserver's predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to ClusterRole system:node-proxier, which grants the permissions needed to call kube-apiserver's proxy-related APIs.
Generate the kube-proxy client certificate and private key
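The kube-proxy-csr.json used below is not reproduced in this section; a sketch of the usual contents (the names block is an assumption — mirror your other CSRs; the CN must be system:kube-proxy to match the User in the binding above):
$ cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF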
[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy && ls kube-proxy*
2019/06/12 14:58:09 [INFO] generate received request
2019/06/12 14:58:09 [INFO] received CSR
2019/06/12 14:58:09 [INFO] generating key: rsa-2048
2019/06/12 14:58:09 [INFO] encoded CSR
2019/06/12 14:58:09 [INFO] signed certificate with serial number 175491367066700423717230199623384101585104107636
2019/06/12 14:58:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
Verifying the certificates
Take the server certificate as an example.
Using the openssl command:
[root@k8s-master ssl]# openssl x509 -noout -text -in server.pem
......
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=Beijing, L=Beijing, O=k8s, OU=System, CN=kubernetes
        Validity
            Not Before: Jun 12 03:56:00 2019 GMT
            Not After : Jun  9 03:56:00 2029 GMT
        Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
......
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                E9:99:37:41:CC:E9:BA:9A:9F:E6:DE:4A:3E:9F:8B:26:F7:4E:8F:4F
            X509v3 Authority Key Identifier:
                keyid:CB:97:D5:C3:5F:8A:EB:B5:A8:9D:39:DE:5F:4F:E0:10:8E:4C:DE:A2
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:172.16.4.12, IP Address:172.16.4.13, IP Address:172.16.4.14, IP Address:10.10.10.1
......
Confirm that the Issuer fields match the contents of ca-csr.json;
Confirm that the Subject fields match the contents of server-csr.json;
Confirm that the X509v3 Subject Alternative Name entries match the hosts list in server-csr.json;
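Alternatively, the cfssl toolchain's own inspector prints the same fields as JSON, which is convenient for scripted checks:
[root@k8s-master ssl]# cfssl-certinfo -cert server.pem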
The admin.pem certificate's O (Organization) field is system:masters; kube-apiserver's predefined ClusterRoleBinding cluster-admin binds Group system:masters to ClusterRole cluster-admin, which grants full access to the kube-apiserver APIs.
[root@k8s-master ~]# etcdctl \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/server.pem \
> --key-file=/etc/kubernetes/ssl/server-key.pem \
> cluster-health
member 287080ba42f94faf is healthy: got healthy result from https://172.16.4.13:2379
member 47e558f4adb3f7b4 is healthy: got healthy result from https://172.16.4.12:2379
member e531bd3c75e44025 is healthy: got healthy result from https://172.16.4.14:2379
cluster is healthy
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.16.4.12:8080"
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=172.16.4.12 --bind-address=172.16.4.12 --insecure-bind-address=172.16.4.12"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.4.12:2379,https://172.16.4.13:2379,https://172.16.4.14:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.10.10.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC \
--runtime-config=rbac.authorization.k8s.io/v1beta1 \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/server.pem \
--etcd-keyfile=/etc/kubernetes/ssl/server-key.pem \
--enable-swagger-ui=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--event-ttl=1h"
If you change the --service-cluster-ip-range address later, you must delete the kubernetes Service in the default namespace with kubectl delete service kubernetes; the system then recreates it automatically with an IP from the new range. Otherwise the apiserver log keeps reporting: the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/24; please recreate
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--leader-elect=true"
--service-cluster-ip-range specifies the CIDR range used for Services in the cluster; this network must not be routable between the nodes, and the value must match the one passed to kube-apiserver;
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.16.4.12:2379,https://172.16.4.13:2379,https://172.16.4.14:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/server.pem -etcd-keyfile=/etc/kubernetes/ssl/server-key.pem"
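flanneld does not create the configuration key itself; it must exist in etcd before flanneld starts. A sketch of seeding it with etcdctl, reusing the TLS flags from the health check earlier (the 172.30.0.0/16 pod network and vxlan backend are assumptions — the actual values are not shown above):

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/server.pem \
  --key-file=/etc/kubernetes/ssl/server-key.pem \
  --endpoints=https://172.16.4.12:2379,https://172.16.4.13:2379,https://172.16.4.14:2379 \
  set /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'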
# Modify the ExecStart line of /usr/lib/systemd/system/docker.service to reference the $DOCKER_NETWORK_OPTIONS variable above; the lines to add and the resulting full docker.service are shown below.
# Lines to add:
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
# The final, complete docker.service file is as follows
[root@k8s-master ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# add by gzr
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
(2) Start docker
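A sketch of the usual sequence — reload systemd so the edited unit is picked up, enable docker on boot, then restart it:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl restart docker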
Installing and configuring kubelet
(1) Check that swap is disabled
[root@k8s-master ~]# free
total used free shared buff/cache available
Mem: 32753848 730892 27176072 377880 4846884 31116660
Swap: 0 0 0
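If the Swap row shows non-zero values, disable swap first, since by default the kubelet refuses to start with swap enabled. The usual commands (a sketch — check your /etc/fstab before editing it):

[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/ swap / s/^/#/' /etc/fstab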
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=172.16.4.12"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.4.12"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
KUBELET_API_SERVER="--api-servers=http://172.16.4.12:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=jimmysong/pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd \
--cluster-dns=10.10.10.2 \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--require-kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster-domain=cluster.local \
--hairpin-mode promiscuous-bridge \
--serialize-image-pulls=false"
--cluster-dns specifies the kubedns Service IP (you can allocate it in advance and use this IP when creating the kubedns service later); --cluster-domain specifies the domain suffix. Both parameters must be set together to take effect;
--cluster-domain sets the search domain in a pod's /etc/resolv.conf. We initially configured it as cluster.local. (with a trailing dot); resolving Service DNS names worked, but resolving the FQDN pod names of a headless service failed. Changing it to cluster.local, dropping the trailing dot, fixes the problem. For more on name and service resolution in Kubernetes, see my other article.
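Because the kubelet uses --bootstrap-kubeconfig, each node's first certificate request must be approved on the master before the node registers. A sketch, where <csr-name> is a placeholder taken from the get output:

[root@k8s-master ~]# kubectl get csr
[root@k8s-master ~]# kubectl certificate approve <csr-name>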
[root@k8s-master ~]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@k8s-master ~]# kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-5fc7b65789-rqk6f 1/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.10.10.2 <none> 53/UDP,53/TCP 20s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 1/1 1 1 20s
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-5fc7b65789 1 1 1 20s
[root@k8s-master ~]# kubectl apply -f busybox.yaml
pod/busybox created
# kubectl describe shows that busybox was created successfully
[root@k8s-master ~]# kubectl describe po/busybox
.......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4s default-scheduler Successfully assigned default/busybox to 172.16.4.13
Normal Pulling 4s kubelet, 172.16.4.13 Pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/busybox"
Normal Pulled 1s kubelet, 172.16.4.13 Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/google_containers/busybox"
Normal Created 1s kubelet, 172.16.4.13 Created container busybox
Normal Started 1s kubelet, 172.16.4.13 Started container busybox
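With busybox running, you can also confirm that CoreDNS resolves Service names; the reported Server should be the 10.10.10.2 cluster DNS IP configured above (this assumes the busybox image ships nslookup):

[root@k8s-master ~]# kubectl exec busybox -- nslookup kubernetes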
Because kube-apiserver has RBAC authorization enabled and the dashboard-controller.yaml in the official source tree does not define an authorized ServiceAccount, subsequent calls to the API server's APIs would be rejected. Since Kubernetes v1.8.3, however, the official documentation provides a dashboard.rbac.yaml file.
(1) Create the deployment file kubernetes-dashboard.yaml with the following content:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
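To roll it out and discover the NodePort that the Service above was assigned, a sketch:

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master ~]# kubectl get svc kubernetes-dashboard -n kube-system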
wget https://github.com/kubernetes-retired/heapster/archive/v1.5.4.tar.gz
tar zxvf v1.5.4.tar.gz
[root@k8s-master ~]# cd heapster-1.5.4/deploy/kube-config/influxdb/ && ls
grafana.yaml heapster.yaml influxdb.yaml
(1) After our modifications, heapster.yaml reads as follows:
# ------------------- Heapster Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system

---
# ------------------- Heapster Role & Role Binding ------------------- #
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

---
# ------------------- Heapster Deployment ------------------- #
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.3
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

---
# ------------------- Heapster Service ------------------- #
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
[root@k8s-master influxdb]# cat grafana.yaml
# ------------------- Grafana Deployment ------------------- #
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        #- mountPath: /etc/ssl/certs
        #  name: ca-certificates
        #  readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        #- name: GF_SERVER_HTTP_PORT
        - name: GRAFANA_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          # value: /
      volumes:
      # - name: ca-certificates
      #   hostPath:
      #     path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}

---
# ------------------- Grafana Service ------------------- #
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
Apply all the definition files
[root@k8s-master influxdb]# pwd
/root/heapster-1.5.4/deploy/kube-config/influxdb
[root@k8s-master influxdb]# ls
grafana.yaml heapster.yaml influxdb.yaml
[root@k8s-master influxdb]# kubectl create -f .
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
serviceaccount/heapster created
clusterrolebinding.rbac.authorization.k8s.io/heapster created
service/heapster created
deployment.extensions/heapster created
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created
configmap/influxdb-config created
Error from server (AlreadyExists): error when creating "heapster.yaml": serviceaccounts "heapster" already exists
Error from server (AlreadyExists): error when creating "heapster.yaml": clusterrolebindings.rbac.authorization.k8s.io "heapster" already exists
Error from server (AlreadyExists): error when creating "heapster.yaml": services "heapster" already exists
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"services \"heapster\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"name":"heapster","kind":"services"},"code":403}
Analysis: the Kubernetes API Server added the --anonymous-auth option, which allows anonymous requests on the secure port. Requests not rejected by any other authentication method are treated as anonymous, with username system:anonymous and group system:unauthenticated, and the option defaults to true. As a result, when you open the dashboard UI in Chrome the username/password dialog may never appear, so the subsequent authorization fails. To make sure the dialog is shown, set --anonymous-auth to false.
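In practice that means appending the flag to KUBE_API_ARGS in the apiserver configuration shown earlier and restarting the service; a sketch (the unit name is an assumption matching the earlier unit-file sketch):

KUBE_API_ARGS="... \
  --anonymous-auth=false"

[root@k8s-master ~]# systemctl restart kube-apiserver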
Visiting the dashboard again, CPU-usage and memory-usage charts now appear.
(2) Access the grafana page
Access through kube-apiserver:
Get the monitoring-grafana service URL
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://172.16.4.12:6443
Heapster is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master EFK]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
172.16.4.12 Ready 18h v1.14.3
172.16.4.13 Ready 2d15h v1.14.3
172.16.4.14 Ready 2d15h v1.14.3
[root@k8s-master EFK]# kubectl label nodes 172.16.4.14 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.4.14" labeled
[root@k8s-master EFK]# kubectl label nodes 172.16.4.13 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.4.13" labeled
[root@k8s-master EFK]# kubectl label nodes 172.16.4.12 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.4.12" labeled
Apply the definition files
[root@k8s-master EFK]# kubectl create -f .
serviceaccount/efk created
clusterrolebinding.rbac.authorization.k8s.io/efk created
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
daemonset.extensions/fluentd-es-v1.22 created
deployment.apps/kibana-logging created
service/kibana-logging created
The first time the kibana Pod starts, it spends a long time (10-20 minutes) optimizing and caching the status page; you can tail -f the Pod's log to watch the progress.
[root@k8s-master EFK]# kubectl logs kibana-logging-7db9f954ff-mkbhr -n kube-system
{"type":"log","@timestamp":"2019-06-18T09:23:33Z","tags":["plugin","warning"],"pid":1,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
{"type":"log","@timestamp":"2019-06-18T09:23:34Z","tags":["warning","elasticsearch","config","deprecation"],"pid":1,"message":"Config key \"url\" is deprecated. It has been replaced with \"hosts\""}
{"type":"log","@timestamp":"2019-06-18T09:23:34Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-06-18T09:23:34Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-06-18T09:23:34Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-06-18T09:23:34Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-06-18T09:23:34Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-06-18T09:23:35Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-06-18T09:23:35Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-18T09:23:35Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
......
Accessing kibana
Access through kube-apiserver:
Get the kibana service URL
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://172.16.4.12:6443
Elasticsearch is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Heapster is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/heapster/proxy
Kibana is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://172.16.4.12:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy