Kubernetes 1.5.2 Installation Guide


This article is written in Markdown; paste it into a Markdown viewer if you want it rendered.

# Kubernetes 1.5.2 pseudo-cluster setup
Environment: Ubuntu 16.04. Upgrade the system ahead of time, otherwise you will run into a pile of errors.
## master
	curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
	cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
	deb http://apt.kubernetes.io/ kubernetes-xenial main
	EOF
	apt-get update

Install docker if you don't have it already.

	apt-get install -y docker.io
	apt-get install -y kubelet kubeadm kubectl kubernetes-cni

### Initialize the master

	kubeadm init --pod-network-cidr 10.244.0.0/16 --token=jastme.88888888
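
The token passed here has the form `<token-id>.<token-secret>`; as the join log later in this guide shows, only the id half appears in the discovery URL. An illustrative split (values taken from this guide, not Kubernetes code):

```python
# Illustrative only: split the kubeadm token used in this guide into its
# id and secret halves. The id is what shows up as token-id= in the
# discovery request during `kubeadm join`.
token = "jastme.88888888"
token_id, token_secret = token.split(".", 1)

assert token_id == "jastme" and token_secret == "88888888"
discovery_url = (
    "http://172.16.126.141:9898/cluster-info/v1/?token-id=" + token_id
)
print(discovery_url)
```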

### flannel.yml

	---
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: flannel
	---
	kind: ConfigMap
	apiVersion: v1
	metadata:
	  name: kube-flannel-cfg
	  labels:
	    tier: node
	    app: flannel
	data:
	  cni-conf.json: |
	    {
	      "name": "cbr0",
	      "type": "flannel",
	      "delegate": {
	        "isDefaultGateway": true
	      }
	    }
	  net-conf.json: |
	    {
	      "Network": "10.244.0.0/16",
	      "Backend": {
	        "Type": "vxlan"
	      }
	    }
	---
	apiVersion: extensions/v1beta1
	kind: DaemonSet
	metadata:
	  name: kube-flannel-ds
	  labels:
	    tier: node
	    app: flannel
	spec:
	  template:
	    metadata:
	      labels:
	        tier: node
	        app: flannel
	    spec:
	      hostNetwork: true
	      nodeSelector:
	        beta.kubernetes.io/arch: amd64
	      serviceAccountName: flannel
	      containers:
	      - name: kube-flannel
	        image: quay.io/coreos/flannel:v0.7.0
	        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
	        securityContext:
	          privileged: true
	        env:
	        - name: POD_NAME
	          valueFrom:
	            fieldRef:
	              fieldPath: metadata.name
	        - name: POD_NAMESPACE
	          valueFrom:
	            fieldRef:
	              fieldPath: metadata.namespace
	        volumeMounts:
	        - name: run
	          mountPath: /run
	        - name: flannel-cfg
	          mountPath: /etc/kube-flannel/
	      - name: install-cni
	        image: quay.io/coreos/flannel:v0.7.0-amd64
	        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
	        volumeMounts:
	        - name: cni
	          mountPath: /etc/cni/net.d
	        - name: flannel-cfg
	          mountPath: /etc/kube-flannel/
	      volumes:
	        - name: run
	          hostPath:
	            path: /run
	        - name: cni
	          hostPath:
	            path: /etc/cni/net.d
	        - name: flannel-cfg
	          configMap:
	            name: kube-flannel-cfg

### Install flannel

	kubectl apply -f flannel.yml
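
The `Network` field in flannel's net-conf.json must match the `--pod-network-cidr` given to `kubeadm init`, or pod traffic will not route. A quick sanity-check sketch using Python's stdlib (not part of the install itself):

```python
# Sanity-check sketch: flannel's "Network" must equal the CIDR passed to
# `kubeadm init --pod-network-cidr`.
import ipaddress
import json

# net-conf.json as embedded in the ConfigMap above
net_conf = json.loads(
    '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
)

kubeadm_cidr = ipaddress.ip_network("10.244.0.0/16")  # from the init command
flannel_net = ipaddress.ip_network(net_conf["Network"])

assert flannel_net == kubeadm_cidr, "flannel Network != --pod-network-cidr"
print("CIDRs match:", flannel_net)
```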

### Check the result

	root@k8s-master:~# kubectl get pods --all-namespaces
	NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
	default       kube-flannel-ds-tvtp4                2/2       Running   0          25m
	kube-system   dummy-2088944543-0vbcf               1/1       Running   0          3h
	kube-system   etcd-k8s-master                      1/1       Running   0          3h
	kube-system   kube-apiserver-k8s-master            1/1       Running   7          3h
	kube-system   kube-controller-manager-k8s-master   1/1       Running   0          3h
	kube-system   kube-discovery-1769846148-c5v2w      1/1       Running   0          3h
	kube-system   kube-dns-2924299975-4m6g1            4/4       Running   0          3h
	kube-system   kube-proxy-2ndd1                     1/1       Running   0          3h
	kube-system   kube-scheduler-k8s-master            1/1       Running   0          3h


## The master runs three components:

### apiserver:
	The entry point of the Kubernetes system. It wraps the create/read/update/delete operations on the core objects and exposes them as a RESTful API to external clients and internal components. The REST objects it maintains are persisted to etcd (a distributed, strongly consistent key/value store).

### scheduler:
	Responsible for cluster resource scheduling: it assigns a machine to each newly created pod. Splitting this work out into a separate component means the scheduler can easily be swapped for an alternative implementation.
### controller-manager:
	Runs the various controllers. There are currently two kinds:
	endpoint-controller: periodically associates services with pods (the association is maintained by endpoint objects), keeping the service-to-pod mapping up to date.
	replication-controller: periodically associates replicationControllers with pods, ensuring that the number of running pods always matches the replica count the replicationController defines.
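
As a toy illustration (not the actual controller-manager code), the replication-controller's job can be pictured as a reconcile step comparing the desired replica count against the pods actually running:

```python
# Toy sketch of a reconcile step: given the desired replica count and the
# pods currently running, decide how many to start and which to stop.
def reconcile(desired: int, running: list) -> tuple:
    """Return (pods_to_start, pods_to_stop) to converge on `desired`."""
    if len(running) < desired:
        return desired - len(running), []
    return 0, running[desired:]          # surplus pods are stopped

start, stop = reconcile(3, ["pod-a"])
assert (start, stop) == (2, [])          # two pods short: start two
start, stop = reconcile(1, ["pod-a", "pod-b", "pod-c"])
assert (start, stop) == (0, ["pod-b", "pod-c"])  # two surplus: stop them
```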

## minion

	curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
	cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
	deb http://apt.kubernetes.io/ kubernetes-xenial main
	EOF
	apt-get update

Install docker if you don't have it already.

	apt-get install -y docker.io
	apt-get install -y kubelet kubeadm kubectl kubernetes-cni


Run on the node:

	kubeadm join --token=jastme.88888888 172.16.126.141

Joining the cluster:

	root@k8s-node-1:~# kubeadm join --token=jastme.88888888 172.16.126.141
	[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
	[preflight] Running pre-flight checks
	[tokens] Validating provided token
	[discovery] Created cluster info discovery client, requesting info from "http://172.16.126.141:9898/cluster-info/v1/?token-id=jastme"
	[discovery] Cluster info object received, verifying signature using given token
	[discovery] Cluster info signature and contents are valid, will use API endpoints [https://172.16.126.141:6443]
	[bootstrap] Trying to connect to endpoint https://172.16.126.141:6443
	[bootstrap] Detected server version: v1.5.2
	[bootstrap] Successfully established connection with endpoint "https://172.16.126.141:6443"
	[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
	[csr] Received signed certificate from the API server:
	Issuer: CN=kubernetes | Subject: CN=system:node:k8s-node-1 | CA: false
	Not before: 2017-02-08 02:15:00 +0000 UTC Not After: 2018-02-08 02:15:00 +0000 UTC
	[csr] Generating kubelet configuration
	[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
	
	Node join complete:
	* Certificate signing request sent to master and response
	  received.
	* Kubelet informed of new secure connection details.
	
	Run 'kubectl get nodes' on the master to see this machine join.

在master上执行

	root@k8s-master:~# kubectl get nodes
	NAME         STATUS         AGE
	k8s-master   Ready,master   23h
	k8s-node-1   Ready          39m

The node has joined.

	root@k8s-master:~# kubectl get pod --all-namespaces
	NAMESPACE     NAME                                 READY     STATUS              RESTARTS   AGE
	default       kube-flannel-ds-jb7fr                0/2       ContainerCreating   0          6m
	default       kube-flannel-ds-tvtp4                2/2       Running             0          19h
	kube-system   dummy-2088944543-0vbcf               1/1       Running             0          22h
	kube-system   etcd-k8s-master                      1/1       Running             0          22h
	kube-system   kube-apiserver-k8s-master            1/1       Running             7          22h
	kube-system   kube-controller-manager-k8s-master   1/1       Running             0          22h
	kube-system   kube-discovery-1769846148-c5v2w      1/1       Running             0          22h
	kube-system   kube-dns-2924299975-4m6g1            4/4       Running             0          22h
	kube-system   kube-proxy-2ndd1                     1/1       Running             0          22h
	kube-system   kube-proxy-311d1                     0/1       ContainerCreating   0          6m
	kube-system   kube-scheduler-k8s-master            1/1       Running             0          22h

You can see the flannel containers on the node have not been created successfully yet. After waiting a few minutes:

	root@k8s-master:~# kubectl get pod --all-namespaces
	NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
	default       kube-flannel-ds-jb7fr                2/2       Running   1          13m
	default       kube-flannel-ds-tvtp4                2/2       Running   0          19h
	kube-system   dummy-2088944543-0vbcf               1/1       Running   0          22h
	kube-system   etcd-k8s-master                      1/1       Running   0          22h
	kube-system   kube-apiserver-k8s-master            1/1       Running   7          22h
	kube-system   kube-controller-manager-k8s-master   1/1       Running   0          22h
	kube-system   kube-discovery-1769846148-c5v2w      1/1       Running   0          22h
	kube-system   kube-dns-2924299975-4m6g1            4/4       Running   0          22h
	kube-system   kube-proxy-2ndd1                     1/1       Running   0          22h
	kube-system   kube-proxy-311d1                     1/1       Running   0          13m
	kube-system   kube-scheduler-k8s-master            1/1       Running   0          22h

## The slave (called a minion) runs two components:

### kubelet:
	Manages the Docker containers on the node, e.g. starting/stopping them and monitoring their running state. It periodically fetches the pods assigned to this machine from etcd and starts or stops the corresponding containers. It also accepts HTTP requests from the apiserver and reports pod status.

### proxy:
	Provides a proxy for pods. It periodically fetches all services from etcd and creates proxies according to the service definitions. When a client pod accesses another pod, the request is forwarded through the local proxy.
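
As a toy illustration (not kube-proxy's real implementation), the proxy's forwarding role amounts to picking one of the service's endpoint pods for each incoming request:

```python
# Toy sketch: forward a service request to one of its endpoint pods,
# round-robin. The endpoint list is what the endpoint objects maintain.
import itertools

endpoints = {"jenkins-service": ["10.244.1.4:8080"]}  # pod IPs from this guide
counters = {name: itertools.count() for name in endpoints}

def pick_backend(service: str) -> str:
    """Round-robin over the service's endpoint pods."""
    pods = endpoints[service]
    return pods[next(counters[service]) % len(pods)]

assert pick_backend("jenkins-service") == "10.244.1.4:8080"
```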

OK, everything was created successfully.
On the node, check the containers with docker ps -a:

	root@k8s-node-1:~# docker ps -a
	CONTAINER ID        IMAGE                                              COMMAND                  CREATED             STATUS                      PORTS               NAMES
	99e233a49bda        quay.io/coreos/flannel:v0.7.0                      "/opt/bin/flanneld --"   8 minutes ago       Up 8 minutes                                    k8s_kube-flannel.a2a489d6_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_6f2fb08a
	765f24fb0d1e        quay.io/coreos/flannel:v0.7.0-amd64                "/bin/sh -c 'set -e -"   8 minutes ago       Up 8 minutes                                    k8s_install-cni.878787d4_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_1b5eb192
	d1e6b843897a        gcr.io/google_containers/kube-proxy-amd64:v1.5.2   "kube-proxy --kubecon"   8 minutes ago       Up 8 minutes                                    k8s_kube-proxy.3353b476_kube-proxy-311d1_kube-system_29ebb2dc-eda5-11e6-ac81-000c29d4195a_64d43522
	6b8196f306d4        quay.io/coreos/flannel:v0.7.0                      "/opt/bin/flanneld --"   16 minutes ago      Exited (1) 16 minutes ago                       k8s_kube-flannel.a2a489d6_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_47192d91
	6e2752d9cf0c        gcr.io/google_containers/pause-amd64:3.0           "/pause"                 18 minutes ago      Up 18 minutes                                   k8s_POD.d8dbe16c_kube-proxy-311d1_kube-system_29ebb2dc-eda5-11e6-ac81-000c29d4195a_12d05f57
	7ebe09f3bf60        gcr.io/google_containers/pause-amd64:3.0           "/pause"                 18 minutes ago      Up 18 minutes                                   k8s_POD.d8dbe16c_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_04db9689

You can see that the three Exited containers at the bottom are leftovers from the earlier creation failures; these three containers are recreated once they expire.

Check the failure logs.

	The successful one:

	root@k8s-node-1:~# docker logs 99e233a49bda
	I0208 02:30:58.138424       1 kube.go:109] Waiting 10m0s for node controller to sync
	I0208 02:30:58.152427       1 kube.go:289] starting kube subnet manager
	I0208 02:30:59.155273       1 kube.go:116] Node controller sync successful
	I0208 02:30:59.155329       1 main.go:132] Installing signal handlers
	I0208 02:30:59.156344       1 manager.go:136] Determining IP address of default interface
	I0208 02:30:59.273177       1 manager.go:149] Using interface with name ens32 and address 172.16.126.142
	I0208 02:30:59.273250       1 manager.go:166] Defaulting external address to interface address (172.16.126.142)
	I0208 02:30:59.668695       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
	I0208 02:30:59.679855       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
	I0208 02:30:59.684336       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
	I0208 02:30:59.691237       1 manager.go:250] Lease acquired: 10.244.1.0/24
	I0208 02:30:59.692142       1 network.go:58] Watching for L3 misses
	I0208 02:30:59.692253       1 network.go:66] Watching for new subnet leases

	The failed one. It went to 10.96.0.1, which looks like a baffling address unrelated to the subnet we configured. A bug? More likely not: 10.96.0.1 is the cluster-internal `kubernetes` service IP (visible in `kubectl get svc`), and the connection was refused because that service IP was not yet reachable from the node.

	root@k8s-node-1:~# docker logs 6b8196f306d4
	E0208 02:23:00.033126       1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'default/kube-flannel-ds-jb7fr': Get https://10.96.0.1:443/api/v1/namespaces/default/pods/kube-flannel-ds-jb7fr: dial tcp 10.96.0.1:443: getsockopt: connection refused

On the node, check our subnet to see whether it matches what was configured on the master:

	root@k8s-node-1:~# more /run/flannel/subnet.env
	FLANNEL_NETWORK=10.244.0.0/16
	FLANNEL_SUBNET=10.244.1.1/24
	FLANNEL_MTU=1450
	FLANNEL_IPMASQ=true
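
The same consistency can be checked mechanically: FLANNEL_SUBNET is this node's per-node lease and must fall inside FLANNEL_NETWORK, the cluster-wide CIDR set on the master. A sketch parsing the file contents shown above:

```python
# Sketch: parse the env-file format of /run/flannel/subnet.env and confirm
# the node's lease sits inside the cluster-wide flannel network.
import ipaddress

subnet_env = """\
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
"""

cfg = dict(line.split("=", 1) for line in subnet_env.splitlines())
network = ipaddress.ip_network(cfg["FLANNEL_NETWORK"])
# strict=False because the file records the gateway address 10.244.1.1/24,
# not the network address 10.244.1.0/24
lease = ipaddress.ip_network(cfg["FLANNEL_SUBNET"], strict=False)

assert lease.subnet_of(network)
print("node lease", lease, "is inside", network)
```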

OK, now that everything is working, let's use Kubernetes to create a simple Jenkins container and have it scheduled onto node-1.

A new journey.

First, check our private registry:

	root@k8s-node-1:~# curl -s http://172.16.126.129/v1/search
	{"num_results": 4, "query": "", "results": [{"description": "", "name": "library/first-test-docker-images"}, {"description": "", "name": "library/jenkins"}, {"description": "", "name": "library/ubuntu16.04"}, {"description": "", "name": "library/tomcat"}]}

There are four images; let's pick the jenkins image for the test.

jenkins.yml

	apiVersion: v1
	kind: Pod
	metadata:
	  name: jenkins
	  labels:
	    name: jenkins
	spec:
	  containers:
	    - name: jenkins
	      image: 172.16.126.129/jenkins
	      ports: 
	      - containerPort: 8080
	#        hostPort: 8888
	#        protocol: TCP
	      volumeMounts: 
	      - mountPath: /ywkj/tomcat/logs
	        name: test
	  volumes:
	    - name: test
	      hostPath:
	        path: /tmp


jenkins_service.yml

	apiVersion: v1
	kind: Service
	metadata: 
	  name: jenkins-service
	  labels:
	    name: jenkins
	spec:
	  selector:
	    name: jenkins
	#  externalIPs: [192.168.1.10]
	  type: NodePort
	  ports:
	    - port: 8080            # the port for access between containers inside the cluster
	      targetPort: 8080      # the port the container exposes; matches containerPort above
	      nodePort: 32000       # the externally exposed port, proxied by the service

If this is your first contact with K8s, you may not understand why creating one pod takes two files.
The first file, roughly speaking, creates the application itself.
The second file is equivalent to a reverse proxy in nginx, or an iptables port redirect.
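
To make the three ports concrete, here is a toy mapping (not Kubernetes code) using the values from the two files above, plus node-1's IP 172.16.126.142 and the pod IP 10.244.1.4 that appear later in this guide:

```python
# Toy illustration of the Service's three ports, using values from this guide.
service = {
    "clusterIP": "10.96.22.215",  # assigned by Kubernetes
    "port": 8080,                 # in-cluster port on the service IP
    "targetPort": 8080,           # the container's port (containerPort)
    "nodePort": 32000,            # exposed on every node's IP
}
pod_ip, node_ip = "10.244.1.4", "172.16.126.142"

in_cluster = f"http://{service['clusterIP']}:{service['port']}/"
backend = f"{pod_ip}:{service['targetPort']}"
external = f"http://{node_ip}:{service['nodePort']}/"

assert external == "http://172.16.126.142:32000/"
print(in_cluster, "->", backend, "| external:", external)
```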

	kubectl create -f jenkins_service.yml
	kubectl create -f jenkins.yml

Check the result: jenkins has been deployed onto node-1.

	root@k8s-master:~# kubectl get pod -o wide
	NAME                    READY     STATUS    RESTARTS   AGE       IP               NODE
	jenkins                 1/1       Running   0          2h        10.244.1.4       k8s-node-1
	kube-flannel-ds-jb7fr   2/2       Running   3          1d        172.16.126.142   k8s-node-1
	kube-flannel-ds-tvtp4   2/2       Running   0          1d        172.16.126.141   k8s-master

	root@k8s-master:~# kubectl get svc
	NAME              CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
	jenkins-service   10.96.22.215   <nodes>       8080:32000/TCP   6s
	kubernetes        10.96.0.1      <none>        443/TCP          2d

Let's see where jenkins got deployed:

	root@k8s-node-1:~# docker ps -a
	CONTAINER ID        IMAGE                                              COMMAND                  CREATED             STATUS                    PORTS               NAMES
	9dcc8fa11926        172.16.126.129/jenkins                             "/bin/sh -c '/ywkj/to"   31 minutes ago      Up 31 minutes                                 k8s_jenkins.4b7c55ea_jenkins_default_3405c95b-ee75-11e6-ac81-000c29d4195a_672cf41a
	8906c916df8b        gcr.io/google_containers/pause-amd64:3.0           "/pause"                 31 minutes ago      Up 31 minutes                                 k8s_POD.d8dbe16c_jenkins_default_3405c95b-ee75-11e6-ac81-000c29d4195a_779dbaeb
	9d5d72806c52        quay.io/coreos/flannel:v0.7.0-amd64                "/bin/sh -c 'set -e -"   21 hours ago        Up 21 hours                                   k8s_install-cni.878787d4_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_1e0a4b16
	15888db70b33        gcr.io/google_containers/kube-proxy-amd64:v1.5.2   "kube-proxy --kubecon"   21 hours ago        Up 21 hours                                   k8s_kube-proxy.3353b476_kube-proxy-311d1_kube-system_29ebb2dc-eda5-11e6-ac81-000c29d4195a_fcf2c195
	67eff3ad6b2c        quay.io/coreos/flannel:v0.7.0                      "/opt/bin/flanneld --"   21 hours ago        Up 21 hours                                   k8s_kube-flannel.a2a489d6_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_63b72aef
	e8327210e737        gcr.io/google_containers/pause-amd64:3.0           "/pause"                 21 hours ago        Up 21 hours                                   k8s_POD.d8dbe16c_kube-proxy-311d1_kube-system_29ebb2dc-eda5-11e6-ac81-000c29d4195a_858a3d20
	d6bb0197ad18        gcr.io/google_containers/pause-amd64:3.0           "/pause"                 21 hours ago        Up 21 hours                                   k8s_POD.d8dbe16c_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_9bd29be9
	0568662625bb        gcr.io/google_containers/pause-amd64:3.0           "/pause"                 21 hours ago        Created                                       k8s_POD.d8dbe16c_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_339ced0a
	86316fbdb88a        gcr.io/google_containers/kube-proxy-amd64:v1.5.2   "kube-proxy --kubecon"   21 hours ago        Exited (2) 21 hours ago                       k8s_kube-proxy.3353b476_kube-proxy-311d1_kube-system_29ebb2dc-eda5-11e6-ac81-000c29d4195a_87c07c72
	26c55cc063cf        gcr.io/google_containers/pause-amd64:3.0           "/pause"                 21 hours ago        Exited (0) 21 hours ago                       k8s_POD.d8dbe16c_kube-proxy-311d1_kube-system_29ebb2dc-eda5-11e6-ac81-000c29d4195a_bf152c74
	99e233a49bda        quay.io/coreos/flannel:v0.7.0                      "/opt/bin/flanneld --"   25 hours ago        Exited (0) 21 hours ago                       k8s_kube-flannel.a2a489d6_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_6f2fb08a
	765f24fb0d1e        quay.io/coreos/flannel:v0.7.0-amd64                "/bin/sh -c 'set -e -"   25 hours ago        Exited (0) 21 hours ago                       k8s_install-cni.878787d4_kube-flannel-ds-jb7fr_default_29eae0c5-eda5-11e6-ac81-000c29d4195a_1b5eb192

Found it on the node-1 node.

How to access jenkins:

	http://172.16.126.142:32000/jenkins/

# Dashboard installation

kubernetes-dashboard.yaml

	kind: Deployment
	apiVersion: extensions/v1beta1
	metadata:
	  labels:
	    app: kubernetes-dashboard
	  name: kubernetes-dashboard
	  namespace: kube-system
	spec:
	  replicas: 1
	  selector:
	    matchLabels:
	      app: kubernetes-dashboard
	  template:
	    metadata:
	      labels:
	        app: kubernetes-dashboard
	      # Comment the following annotation if Dashboard must not be deployed on master
	      annotations:
	        scheduler.alpha.kubernetes.io/tolerations: |
	          [
	            {
	              "key": "dedicated",
	              "operator": "Equal",
	              "value": "master",
	              "effect": "NoSchedule"
	            }
	          ]
	    spec:
	      containers:
	      - name: kubernetes-dashboard
	        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
	        imagePullPolicy: Always
	        ports:
	        - containerPort: 9090
	          protocol: TCP
	        args:
	          # Uncomment the following line to manually specify Kubernetes API server Host
	          # If not specified, Dashboard will attempt to auto discover the API server and connect
	          # to it. Uncomment only if the default does not work.
	          # - --apiserver-host=http://my-address:port
	        livenessProbe:
	          httpGet:
	            path: /
	            port: 9090
	          initialDelaySeconds: 30
	          timeoutSeconds: 30
	---
	kind: Service
	apiVersion: v1
	metadata:
	  labels:
	    app: kubernetes-dashboard
	  name: kubernetes-dashboard
	  namespace: kube-system
	spec:
	  type: NodePort
	  ports:
	  - port: 80
	    targetPort: 9090
	    nodePort: 32001
	  selector:
	    app: kubernetes-dashboard


Execute:

	root@k8s-master:~# kubectl create -f kubernetes-dashboard.yaml
	deployment "kubernetes-dashboard" created
	service "kubernetes-dashboard" created


	root@k8s-master:~# kubectl get pod --all-namespaces
	NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
	default       jenkins                                 1/1       Running   0          4h
	default       kube-flannel-ds-jb7fr                   2/2       Running   3          1d
	default       kube-flannel-ds-tvtp4                   2/2       Running   0          2d
	kube-system   dummy-2088944543-0vbcf                  1/1       Running   0          2d
	kube-system   etcd-k8s-master                         1/1       Running   0          2d
	kube-system   kube-apiserver-k8s-master               1/1       Running   7          2d
	kube-system   kube-controller-manager-k8s-master      1/1       Running   0          2d
	kube-system   kube-discovery-1769846148-c5v2w         1/1       Running   0          2d
	kube-system   kube-dns-2924299975-4m6g1               4/4       Running   0          2d
	kube-system   kube-proxy-2ndd1                        1/1       Running   0          2d
	kube-system   kube-proxy-311d1                        1/1       Running   2          1d
	kube-system   kube-scheduler-k8s-master               1/1       Running   0          2d
	kube-system   kubernetes-dashboard-3203831700-h873m   1/1       Running   0          38m
	root@k8s-master:~# kubectl get svc --all-namespaces
	NAMESPACE     NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
	default       jenkins-service        10.96.22.215     <nodes>       8080:32000/TCP   2h
	default       kubernetes             10.96.0.1        <none>        443/TCP          2d
	kube-system   kube-dns               10.96.0.10       <none>        53/UDP,53/TCP    2d
	kube-system   kubernetes-dashboard   10.111.202.110   <nodes>       80:32001/TCP     39m

Access URL:

	http://172.16.126.141:32001/

# Installing heapster for dashboard graphs


	kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana-deployment.yaml
	kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana-service.yaml
	kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster-deployment.yaml
	kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster-service.yaml
	kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb-deployment.yaml
	kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb-service.yaml


	root@k8s-master:~# kubectl create -f grafana-deployment.yaml
	deployment "monitoring-grafana" created    
	root@k8s-master:~# kubectl create -f grafana-service.yaml 
	service "monitoring-grafana" created              
	root@k8s-master:~# kubectl create -f influxdb-deployment.yaml 
	deployment "monitoring-influxdb" created
	root@k8s-master:~# kubectl create -f influxdb-service.yaml 
	service "monitoring-influxdb" created
	root@k8s-master:~# kubectl create -f heapster-deployment.yaml 
	deployment "heapster" created
	root@k8s-master:~# kubectl create -f heapster-service.yaml 
	service "heapster" created


	root@k8s-master:~# kubectl get pod --all-namespaces
	NAMESPACE     NAME                                    READY     STATUS              RESTARTS   AGE
	default       jenkins                                 1/1       Running             0          6h
	default       kube-flannel-ds-jb7fr                   2/2       Running             3          1d
	default       kube-flannel-ds-tvtp4                   2/2       Running             0          2d
	kube-system   dummy-2088944543-0vbcf                  1/1       Running             0          2d
	kube-system   etcd-k8s-master                         1/1       Running             0          2d
	kube-system   heapster-482663051-4gg30                0/1       ContainerCreating   0          4m
	kube-system   kube-apiserver-k8s-master               1/1       Running             7          2d
	kube-system   kube-controller-manager-k8s-master      1/1       Running             0          2d
	kube-system   kube-discovery-1769846148-c5v2w         1/1       Running             0          2d
	kube-system   kube-dns-2924299975-4m6g1               4/4       Running             0          2d
	kube-system   kube-proxy-2ndd1                        1/1       Running             0          2d
	kube-system   kube-proxy-311d1                        1/1       Running             2          1d
	kube-system   kube-scheduler-k8s-master               1/1       Running             0          2d
	kube-system   kubernetes-dashboard-3203831700-h873m   1/1       Running             0          2h
	kube-system   monitoring-grafana-3730655072-bcqzl     0/1       ContainerCreating   0          9m
	kube-system   monitoring-influxdb-957705310-svjj6     0/1       ContainerCreating   0          5m


	root@k8s-master:~# kubectl get svc --all-namespaces
	NAMESPACE     NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
	default       jenkins-service        10.96.22.215     <nodes>       8080:32000/TCP   3h
	default       kubernetes             10.96.0.1        <none>        443/TCP          2d
	kube-system   heapster               10.107.38.82     <none>        80/TCP           1m
	kube-system   kube-dns               10.96.0.10       <none>        53/UDP,53/TCP    2d
	kube-system   kubernetes-dashboard   10.111.202.110   <nodes>       80:30492/TCP     2h
	kube-system   monitoring-grafana     10.102.225.82    <none>        80/TCP           4m
	kube-system   monitoring-influxdb    10.102.35.126    <none>        8086/TCP         3m


## Possible problems

The heapster image may fail to pull, with an authentication-failed error. No idea why the other images downloaded just fine. Ugh.


Solution:

	more heapster-deployment.yaml
	apiVersion: extensions/v1beta1
	kind: Deployment
	metadata:
	  name: heapster
	  namespace: kube-system
	spec:
	  replicas: 1
	  template:
	    metadata:
	      labels:
	        task: monitoring
	        k8s-app: heapster
	    spec:
	      containers:
	      - name: heapster
	        image: gcr.io/google_containers/heapster-amd64:v1.3.0-beta.0
	        imagePullPolicy: IfNotPresent
	        command:
	        - /heapster
	        - --source=kubernetes:https://kubernetes.default
	        - --sink=influxdb:http://monitoring-influxdb:8086

Find the image address:

	gcr.io/google_containers/heapster-amd64
	
Open it in a browser. Ah, how important unobstructed access to gcr.io is here.

	You will see a page; select the image you want and click to pull it, and the pull command appears:

	gcloud docker pull gcr.io/google-containers/heapster-amd64:v1.3.0-beta.1

Stuck again: this needs the gcloud SDK, so we have to install it first.

	curl https://sdk.cloud.google.com | bash    # installs into root's home directory by default
	dpkg-reconfigure dash                       # choose "no"
	apt-get install python                      # Ubuntu 16.04: install Python 2
	source google-cloud-sdk/completion.bash.inc
	source google-cloud-sdk/path.bash.inc
	sh google-cloud-sdk/install.sh
	# After installation, run `gcloud auth login`.
	# Open the URL it prints in a browser, obtain the verification code,
	# and enter it on the machine; the image can then be pulled successfully.

Result:

	root@k8s-master:~# kubectl get pod --all-namespaces
	NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
	default       jenkins                                 1/1       Running   0          1d
	default       kube-flannel-ds-jb7fr                   2/2       Running   3          2d
	default       kube-flannel-ds-tvtp4                   2/2       Running   0          3d
	kube-system   dummy-2088944543-0vbcf                  1/1       Running   0          3d
	kube-system   etcd-k8s-master                         1/1       Running   0          3d
	kube-system   heapster-654039642-qzll1                1/1       Running   0          1h
	kube-system   kube-apiserver-k8s-master               1/1       Running   7          3d
	kube-system   kube-controller-manager-k8s-master      1/1       Running   0          3d
	kube-system   kube-discovery-1769846148-c5v2w         1/1       Running   0          3d
	kube-system   kube-dns-2924299975-4m6g1               4/4       Running   0          3d
	kube-system   kube-proxy-2ndd1                        1/1       Running   0          3d
	kube-system   kube-proxy-311d1                        1/1       Running   2          2d
	kube-system   kube-scheduler-k8s-master               1/1       Running   0          3d
	kube-system   kubernetes-dashboard-3203831700-h873m   1/1       Running   0          23h
	kube-system   monitoring-grafana-3730655072-bcqzl     1/1       Running   0          21h
	kube-system   monitoring-influxdb-957705310-svjj6     1/1       Running   0          21h

Everything is fully installed; you can now see detailed graphs on the monitoring page.

(Screenshot 1: monitoring graphs)

(Screenshot 2: monitoring graphs)

Reposted from: https://my.oschina.net/jastme/blog/834653
