Kubeadm installation

Author: 书书曾
Link: https://www.jianshu.com/p/e3e6a66fcb97

Deploying Kubernetes locally with kubeadm

There are many ways to deploy Kubernetes; kubeadm is one of the more widely recommended approaches. It can be used for a single machine or for a multi-node cluster.

Kubeadm

The official description:
If you already have a way to configure hosting resources, use kubeadm to easily bring up a cluster with a single command per machine.
Having installed it myself, I found it quite easy to use:

  • Components such as etcd are deployed as Docker containers.
  • It is simple to use, and the instructions are clear.
  • It supports a single-machine setup as well as joining additional machines into a cluster.
  • It supports multiple operating systems (Ubuntu, CentOS); if you run it in VMs, the host OS hardly matters.

Installation

The installation itself is straightforward; just follow the official guide, linked here:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
I installed the master on an Ubuntu host and then started a worker in a VM to try out a multi-node cluster. If you only need a local development setup, you can deploy just the master and run pods on it.
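As a rough sketch of that guide's Ubuntu steps at the time (package sources and versions may well have changed since), the tools were installed on every machine roughly like this, as root:

#Install Docker and the Kubernetes tools (kubelet, kubeadm, kubectl)
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y docker.io kubelet kubeadm kubectl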

Commands used

#reset kubeadm
kubeadm reset

#init kubeadm
kubeadm init
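#Alternatively, init with a pod-network CIDR (flannel expects 10.244.0.0/16; this is the form used in the install log below)
kubeadm init --pod-network-cidr=10.244.0.0/16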

#Install network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml

#get all pods in k8s
kubectl get pods --all-namespaces

#get all nodes in k8s
kubectl get nodes

#Join a worker to k8s 
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
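#If the bootstrap token has expired (24h by default), a new one can be created on the master
kubeadm token create
kubeadm token list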

#Allow master to create pods
kubectl taint nodes --all node-role.kubernetes.io/master-
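#Verify the taint was removed (the Taints field should no longer list node-role.kubernetes.io/master)
kubectl describe node <master node name> | grep Taints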

#Install a sample application (demo)
kubectl create namespace sock-shop
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"

#Check application
kubectl -n sock-shop get svc front-end
kubectl describe svc front-end -n sock-shop
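#The front-end service is exposed as a NodePort; it can then be opened with the node IP and the port shown above
curl http://<node-ip>:<node-port>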

# Tear down
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

kubeadm reset

Problems encountered during installation & tips

  • The first kubeadm init run failed because some directories were not empty; kubeadm reset can be used to clean them out (see the sketch after this list).
  • For other problems you may run into during installation, this article is helpful: http://www.cnblogs.com/pinganzi/p/7239328.html
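A minimal recovery sequence after a failed init looks roughly like this (a sketch based on the commands in the install log below; the admin.conf copy has to be repeated because a fresh init generates new certificates, so the old $HOME/.kube/config no longer works):

kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config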

Drawbacks & limitations

  • It needs access to https://cloud.google.com/container-registry/ to pull the control-plane images (a quick check is sketched below).
  • Be sure to read the limitations listed in the official guide.
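For the registry requirement, one quick check is to try pulling one of the control-plane images by hand before running kubeadm init. This is only a sketch: the image name and tag below are my assumption for the v1.8.1 release used in the install log and will differ for newer releases.

docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.8.1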

Directions for further study after installation

The installation points to several directions for further study:

  • Kubernetes network plugins: which plugins suit which scenarios.
  • Pods and Services: deploy a pod and a service and understand how they relate to each other (see the sketch after this list).
  • Building your own Docker image and deploying it as a pod and a service.
  • Microservices.
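For the Pod/Service direction, a minimal sketch (the hello name and the nginx image are just placeholders) is to apply a Pod and a Service whose selector matches it, then inspect the Service to see how the two are linked:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 80
EOF

#The Endpoints field lists the pod's IP, which is how the Service selector finds the Pod
kubectl describe svc hello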

PS: the kubeadm guide includes deploying a small microservices demo, which is a useful reference.

Installation log

root@hostname:/home/username/.kube# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
root@hostname:/home/username/.kube# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml", assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
root@hostname:/home/username/.kube# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [hostname kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 146.11.23.8]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 27.502157 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node hostname as master by adding a label and a taint
[markmaster] Master hostname tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 331134.8601be46f05da602
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 3311xxxxxxxxx2 146.11.23.8:6443 --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

root@hostname:/home/username/.kube# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
root@hostname:/home/username/.kube# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@hostname:/home/username/.kube# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
root@hostname:/home/username/.kube# 
root@hostname:/home/username/.kube# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   etcd-hostname                      1/1       Running   0          37s
kube-system   kube-apiserver-hostname            1/1       Running   0          1m
kube-system   kube-controller-manager-hostname   1/1       Running   0          42s
kube-system   kube-dns-545bc4bfd4-7t4cr             0/3       Pending   0          1m
kube-system   kube-flannel-ds-jbx9g                 1/1       Running   0          18s
kube-system   kube-proxy-kdfxj                      1/1       Running   0          1m
kube-system   kube-scheduler-hostname            1/1       Running   0          52s
root@hostname:/home/username/.kube# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   etcd-hostname                      1/1       Running   0          20m
kube-system   kube-apiserver-hostname            1/1       Running   0          20m
kube-system   kube-controller-manager-hostname   1/1       Running   0          20m
kube-system   kube-dns-545bc4bfd4-7t4cr             3/3       Running   0          21m
kube-system   kube-flannel-ds-jbx9g                 1/1       Running   0          20m
kube-system   kube-proxy-kdfxj                      1/1       Running   0          21m
kube-system   kube-scheduler-hostname            1/1       Running   0          20m
root@hostname:/home/username/.kube# 


 
