2019-07-18 K8S Cluster Creation

K8S is a management tool for services (containers), and it supports a cluster architecture. The cluster's nodes must first be stood up on the underlying infrastructure.


Create a K8S cluster on the infrastructure nodes with kubeadm

step 1. Install the cluster management tool kubeadm on all nodes

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

A. Add the Google Cloud repo
[root@afdev2 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

B. Configure SELinux

[root@afdev2 ~]# setenforce 0
setenforce: SELinux is disabled
[root@afdev2 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

C. Resolve dependencies: 1 and 2 are dependencies of the kube packages; 3 and 4 are dependencies of conntrack

1. install  socat    -->  TCP port forwarder      https://centos.pkgs.org/7/centos-x86_64/socat-1.7.3.2-2.el7.x86_64.rpm.html

    [root@afdev2 socat]# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm

2. install conntrack    --> Netfilter's connection tracking userspace tools      https://centos.pkgs.org/7/centos-x86_64/conntrack-tools-1.4.4-4.el7.x86_64.rpm.html 

3. install libnetfilter_cthelper    -->        http://rpm.pbone.net/index.php3/stat/4/idpl/27123498/dir/redhat_el_7/com/libnetfilter_cthelper-1.0.0-4.el7.x86_64.rpm.html

4. install libnetfilter_cttimeout and libnetfilter_queue    -->   [root@afdev2 conntrack]# yum install libnetfilter_cttimeout  libnetfilter_queue

D. Install kubeadm *

[root@afdev2 k8s_sw]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Installing:
 kubeadm             x86_64    1.15.0-0    kubernetes    8.9 M
 kubectl             x86_64    1.15.0-0    kubernetes    9.5 M
 kubelet             x86_64    1.15.0-0    kubernetes     22 M
Installing for dependencies:
 cri-tools           x86_64    1.13.0-0    kubernetes    5.1 M
 kubernetes-cni      x86_64    0.7.5-0     kubernetes     10 M

E. Load br_netfilter

# modprobe br_netfilter
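
modprobe only loads the module until the next reboot. A minimal sketch to keep it loaded across reboots (CentOS 7's systemd reads /etc/modules-load.d/ at boot):

# cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# lsmod | grep br_netfilter        --> confirm the module is currently loaded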

F. Make sure K8S traffic is not bypassed by iptables

[root@afdev2 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF

[root@afdev2 ~]# sysctl --system

G. Install Docker, since the kube components (kube-apiserver and so on) will run as containers

# yum install docker

H. Make sure the server hostnames resolve (/etc/hosts), each node has at least 2 CPUs, and swap is disabled (swapoff -a)
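
A sketch of how these checks might be scripted (the sed pattern assumes the swap entry in /etc/fstab contains the word swap):

# getent hosts afdev1 afdev2            --> hostname resolution via /etc/hosts or DNS
# nproc                                 --> must print 2 or more
# swapoff -a                            --> turn swap off now
# sed -i '/swap/ s/^/#/' /etc/fstab     --> comment out the swap line so it stays off after reboot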

I. Start/enable the docker and kubelet services

[root@afdev2 ~]# systemctl enable --now docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@afdev2 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@afdev2 ~]#
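
Note that until kubeadm init (or join) has generated its configuration, kubelet keeps restarting in a crash loop; that is expected at this stage and can be watched with:

[root@afdev2 ~]# systemctl status kubelet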

step 2. Initialize the cluster on the node chosen as master

# kubeadm init        --> the last lines of the output contain the token, the credential every worker node uses to authenticate when joining the cluster

# mkdir -p $HOME/.kube

# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# chown $(id -u):$(id -g) $HOME/.kube/config
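
The join token printed by kubeadm init expires after 24 hours by default; if it is lost or has expired, a fresh join command can be printed on the master at any time:

# kubeadm token create --print-join-command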

step 3. Have the nodes chosen as workers (just "nodes" from here on) join the cluster

# kubeadm join :6443  --token   --discovery-token-ca-cert-hash        --> from the output of step 2
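
For illustration only, a filled-in join command looks like this (the IP and token below are made-up placeholders, not this cluster's real values):

# kubeadm join 192.168.0.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash-from-the-init-output>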

Hehe, the log is in <2019-07-18 kubeadm init/join output>

step 4. To provide the network required between master and nodes, and later by the pods, install a network add-on on the master

Right now every node's status is NotReady, exactly because this piece is missing:

[root@afdev2 ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
afdev1   NotReady   <none>   16h   v1.15.1
afdev2   NotReady   master   17h   v1.15.0
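
Why a node is NotReady is recorded in its conditions; on this cluster the Ready condition should point at the missing network plugin. A quick check:

[root@afdev2 ~]# kubectl describe node afdev1        --> read the message on the Ready condition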

Install the network add-on Weave Net

[root@afdev2 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

[root@afdev2 ~]#  export kubever=$(kubectl version | base64 | tr -d '\n')

[root@afdev2 ~]# kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=$kubever
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created

Now all the nodes are Ready :)

[root@afdev2 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
afdev1   Ready    <none>   17h   v1.15.1
afdev2   Ready    master   17h   v1.15.0
[root@afdev2 ~]#

* If you prefer a different network add-on, downloads and installation instructions are here: https://kubernetes.io/docs/concepts/cluster-administration/addons/


The K8S components look like this:

On the master:

        etcd                --> port 2379. One daemon, many sockets, all from the api.    (Openshift: same port; one daemon; besides the many sockets from the api, one more from the controller-manager)

                                * etcd is a distributed key-value store. etcdctl is its management tool.

etcd's data

[root@afdev2 hack]# docker exec -it k8s_etcd_etcd-afdev2_kube-system_243c0cac85ba38f13f7517899628e9a1_0 /bin/sh
/ # export ETCDCTL_API=3
/ # etcdctl member list
Error: context deadline exceeded

Identify the TLS certificate/key files a secure client should use:

[root@afdev2 ~]# docker inspect k8s_etcd_etcd-afdev2_kube-system_243c0cac85ba38f13f7517899628e9a1_0 | egrep -i "(peer-cert|peer-key)"
            "--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt",
            "--peer-key-file=/etc/kubernetes/pki/etcd/peer.key",
                "--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt",
                "--peer-key-file=/etc/kubernetes/pki/etcd/peer.key",
[root@afdev2 ~]#

/ # ls -l /etc/kubernetes/pki/etcd/
total 32
-rw-r--r--    1 root    root          1017 Jul 18 18:46 ca.crt
-rw-------    1 root    root          1679 Jul 18 18:46 ca.key
-rw-r--r--    1 root    root          1094 Jul 18 18:46 healthcheck-client.crt
-rw-------    1 root    root          1679 Jul 18 18:46 healthcheck-client.key
-rw-r--r--    1 root    root          1127 Jul 18 18:46 peer.crt
-rw-------    1 root    root          1679 Jul 18 18:46 peer.key
-rw-r--r--    1 root    root          1127 Jul 18 18:46 server.crt
-rw-------    1 root    root          1679 Jul 18 18:46 server.key
/ #

/ # ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key get --prefix / --keys-only
Error: context deadline exceeded
/ # ETCDCTL_API=3 etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/peer.crt" --key="/etc/kubernetes/pki/etcd/peer.key" get --prefix / --keys-only            --> :)  this lists all the key names

/registry/apiregistration.k8s.io/apiservices/v1.
/registry/apiregistration.k8s.io/apiservices/v1.apps
/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.autoscaling
/registry/apiregistration.k8s.io/apiservices/v1.batch

Let's look at one key's value:

/ # ETCDCTL_API=3 etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/peer.crt" --key="/etc/kubernetes/pki/etcd/peer.key" get /registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io
{"kind":"APIService","apiVersion":"apiregistration.k8s.io/v1beta1","metadata":{"name":"v1.storage.k8s.io","uid":"d1fed909-48ef-4654-91c7-533d58ceaf10","creationTimestamp":"2019-07-18T18:46:38Z","labels":{"kube-aggregator.kubernetes.io/automanaged":"onstart"}},"spec":{"service":null,"group":"storage.k8s.io","version":"v1","groupPriorityMinimum":16800,"versionPriority":15},"status":{"conditions":[{"type":"Available","status":"True","lastTransitionTime":"2019-07-18T18:46:38Z","reason":"Local","message":"Local APIServices are always available"}]}}


        scheduler      --> port N/A. Not a daemon, no listening port. One process, one socket.

        api                 --> port 6443/sun-sr-https. One daemon; its sockets = the client connection sockets of the other components (except etcd) + the kubelet (n) and proxy (1) on the nodes. As etcd's client it holds many sockets to the etcd daemon.        (Openshift: port 8443/pcsync-https.)

        controller-manager    --> port N/A. Not a daemon, no listening port. One process, n sockets.

        kubelet            --> port N/A. Not a daemon, no listening port. One process, n sockets.

        proxy               --> port N/A. Not a daemon, no listening port. One process, 1 socket.

On the nodes:

        proxy        --> port N/A. Not a daemon, no listening port. One process, 1 socket, connecting to the master.

        kubelet    --> port N/A. Not a daemon, no listening port. One process, n sockets, connecting to the master. (A way to spot-check these port/socket observations is shown right below.)
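
These port/socket observations can be re-checked on any node with ss (from iproute, present on CentOS 7):

[root@afdev2 ~]# ss -tlnp | egrep 'etcd|kube'        --> which etcd/kube processes hold listening TCP ports
[root@afdev2 ~]# ss -tnp  | egrep 'etcd|kube'        --> their established client connections and the peers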


And the K8S tools look like this:

On the master:

        kubeadm    --> bootstrap

        kubectl        --> talk to the cluster (through the api)

On the nodes:

        kubelet        --> starting pods and containers


So although the way K8S is built and run (even including how the cluster nodes are stood up) can look very different from one infrastructure to another, all of those methods are designed around installing and starting these same components.
