Adding a node to a k8s cluster

vim /etc/sysconfig/network-scripts/ifcfg-ens33

Change ONBOOT=no to ONBOOT=yes
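For reference, after the edit the relevant part of ifcfg-ens33 might look like the sketch below (the device name and DHCP addressing are assumptions for this host), followed by a network restart to apply the change without a reboot:

TYPE=Ethernet
BOOTPROTO=dhcp
NAME=ens33
DEVICE=ens33
ONBOOT=yes

systemctl restart network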

systemctl disable firewalld

systemctl stop firewalld

yum install -y docker

Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/7/base/mirrorlist.txt


 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: base/7/x86_64

Solution

https://blog.csdn.net/zhuchuangang/article/details/76572157#2%E4%B8%8B%E8%BD%BDkubernetes%E9%95%9C%E5%83%8F

# Docker yum repo
cat <<EOF > /etc/yum.repos.d/docker.repo
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF
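After dropping in a new repo file, refreshing the yum metadata before retrying the install avoids stale-cache surprises; a minimal sketch:

yum clean all
yum makecache fast
yum install -y docker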
[root@node1 ~]# docker --version
Docker version 1.13.1, build 07f3374/1.13.1

[root@node1 ~]# yum install -y kubeadm
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
No package kubeadm available.
Error: Nothing to do

[root@node1 ~]# yum install -y kubelet
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
No package kubelet available.
Error: Nothing to do

Solution

# Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
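With the Kubernetes repo in place, refresh the cache and retry the packages that failed above (kubectl is added here as an assumption, since kubectl is used on this node later):

yum makecache
yum install -y kubelet kubeadm kubectl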

systemctl start docker

systemctl enable docker

systemctl enable kubelet

systemctl start kubelet

kubeadm init --kubernetes-version=v1.13.1 will definitely fail to pull the images (they are hosted on k8s.gcr.io).

[root@node1 yum.repos.d]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:36:44Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

kubeadm config images list --kubernetes-version v1.13.1   # passing the version explicitly skips the online lookup that times out below

[root@node1 yum.repos.d]# kubeadm config images list 
I1226 11:18:16.677369    4716 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I1226 11:18:16.677758    4716 version.go:95] falling back to the local client version: v1.13.1
k8s.gcr.io/kube-apiserver:v1.13.1
k8s.gcr.io/kube-controller-manager:v1.13.1
k8s.gcr.io/kube-scheduler:v1.13.1
k8s.gcr.io/kube-proxy:v1.13.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Finding the images

# Pull the images from a reachable mirror
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6

# Re-tag the images to the names kubeadm expects
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

# Remove the old mirror tags
docker rmi docker.io/mirrorgooglecontainers/kube-proxy:v1.13.1 
docker rmi docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.1 
docker rmi docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.1 
docker rmi docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi docker.io/mirrorgooglecontainers/etcd:3.2.24  
docker rmi docker.io/mirrorgooglecontainers/pause:3.1 
docker rmi docker.io/coredns/coredns:1.2.6
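The pull/tag/rmi sequence above can also be collapsed into a small loop; a sketch that assumes exactly the same image list and mirror namespaces:

#!/bin/bash
# For each "source target" pair: pull from the mirror, re-tag to the
# k8s.gcr.io name kubeadm expects, then drop the mirror tag.
images=(
  "mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1"
  "mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1"
  "mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1"
  "mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1"
  "mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1"
  "mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24"
  "coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
)
for pair in "${images[@]}"; do
  src=${pair%% *}   # mirror image
  dst=${pair##* }   # k8s.gcr.io name
  docker pull "$src"
  docker tag "$src" "$dst"
  docker rmi "$src"
done

Afterwards, docker images | grep k8s.gcr.io should list all seven images.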

kubectl delete deploy curl -n default

[root@node1 yum.repos.d]# kubeadm join 192.168.41.137:6443 --token ycd1dl.xza4hi7b4prr0387 --discovery-token-ca-cert-hash sha256:15299a96ced577a2a865216b9240511d47e940e961ac461970a55f12e2b564be
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

[root@node1 sysconfig]# free -m
              total        used        free      shared  buff/cache   available
Mem:            974         565          73          10         335         187
Swap:          2047         194        1853

# Note: swap must be disabled, or add the following extra kubelet startup flag in the config file below
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false" 

systemctl daemon-reload

This had no effect.

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"  

systemctl daemon-reload

This had no effect either.
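A likely reason the two attempts above changed nothing: --fail-swap-on=false only alters kubelet's behaviour, while the fatal error comes from kubeadm's own preflight check, which has to be skipped explicitly (the error message itself hints at --ignore-preflight-errors). Disabling swap is the cleaner fix, but for completeness the join would look roughly like:

kubeadm join 192.168.41.137:6443 --token ycd1dl.xza4hi7b4prr0387 \
    --discovery-token-ca-cert-hash sha256:15299a96ced577a2a865216b9240511d47e940e961ac461970a55f12e2b564be \
    --ignore-preflight-errors=Swap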

vi /etc/fstab

#/dev/mapper/centos-swap swap                    swap    defaults        0 0

vi /etc/sysctl.d/k8s.conf

vm.swappiness=0

sysctl -p /etc/sysctl.d/k8s.conf

Not verified.

Recommended: run swapoff -a to turn the swap partition off.
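To make this survive a reboot, swapoff -a is usually paired with commenting the swap entry out of /etc/fstab, as was done above; a sketch of both in one go (the sed pattern assumes the /dev/mapper/centos-swap line shown earlier):

swapoff -a
sed -i '/centos-swap/ s/^\([^#]\)/#\1/' /etc/fstab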

[root@node1 sysconfig]# free -m
              total        used        free      shared  buff/cache   available
Mem:            974         683          68          14         222          72
Swap:             0           0           0

[root@node1 sysconfig]# kubeadm join 192.168.41.137:6443 --token ycd1dl.xza4hi7b4prr0387 --discovery-token-ca-cert-hash sha256:15299a96ced577a2a865216b9240511d47e940e961ac461970a55f12e2b564be
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.41.137:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.41.137:6443"
[discovery] Requesting info from "https://192.168.41.137:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.41.137:6443"
[discovery] Successfully established connection with API Server "192.168.41.137:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@master k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS                  RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-26w77         1/1     Running                 0          4h37m   172.100.1.2      master   <none>           <none>
kube-system   coredns-86c58d9df4-hw85q         1/1     Running                 0          4h37m   172.100.1.3      master   <none>           <none>
kube-system   etcd-master                      1/1     Running                 0          4h41m   192.168.41.137   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running                 0          4h42m   192.168.41.137   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running                 0          4h42m   192.168.41.137   master   <none>           <none>
kube-system   kube-flannel-ds-amd64-8f5tq      0/1     Init:CrashLoopBackOff   11         52m     192.168.41.138   node1    <none>           <none>
kube-system   kube-flannel-ds-amd64-d6mz5      1/1     Running                 0          4h9m    192.168.41.137   master   <none>           <none>
kube-system   kube-proxy-dbn5m                 0/1     ImagePullBackOff        0          52m     192.168.41.138   node1    <none>           <none>
kube-system   kube-proxy-lgkqk                 1/1     Running                 0          4h42m   192.168.41.137   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running                 0          4h41m   192.168.41.137   master   <none>           <none>
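The ImagePullBackOff for kube-proxy-dbn5m suggests node1 cannot reach k8s.gcr.io either, so the pull-and-retag workaround from earlier presumably has to be repeated on node1 as well, at least for kube-proxy and pause; a sketch:

# On node1:
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1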

[root@master k8s]# kubectl logs kube-flannel-ds-amd64-8f5tq -n kube-system                        
Error from server: Get https://192.168.41.138:10250/containerLogs/kube-system/kube-flannel-ds-amd64-8f5tq/kube-flannel: net/http: TLS handshake timeout
[root@master k8s]# kubectl logs kube-proxy-dbn5m -n kube-system                           
Error from server: Get https://192.168.41.138:10250/containerLogs/kube-system/kube-proxy-dbn5m/kube-proxy: net/http: TLS handshake timeout

https://blog.csdn.net/zzq900503/article/details/81710319

Deleting and re-adding the node

kubectl delete node <node-name>

kubeadm reset

kubeadm join xxx

If the worker node stays NotReady, the following commands can be used to check for errors:

systemctl status kubelet.service

journalctl -u kubelet -f

mkdir -p /etc/cni/net.d/
cat <<EOF > /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF > /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

systemctl restart kubelet
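Note: /etc/sysconfig/kubelet is loaded as a single environment file, so a later KUBELET_EXTRA_ARGS line silently overrides an earlier one. If the swap flag from above is still wanted, all the extra arguments have to share one line; an assumed combined form:

KUBELET_EXTRA_ARGS="--fail-swap-on=false --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"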

systemctl status kubelet.service -l

systemctl restart kubelet  

systemctl daemon-reload 


Dec 26 17:53:13 node1 kubelet[28440]: E1226 17:53:13.061698   28440 pod_workers.go:190] Error syncing pod 4914f13e-08f3-11e9-b845-000c29c22ce9 ("kube-flannel-ds-amd64-krqz7_kube-system(4914f13e-08f3-11e9-b845-000c29c22ce9)"), skipping: failed to "StartContainer" for "install-cni" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=install-cni pod=kube-flannel-ds-amd64-krqz7_kube-system(4914f13e-08f3-11e9-b845-000c29c22ce9)"

export KUBECONFIG=/etc/kubernetes/kubelet.conf — this is probably what finally did the trick. Ugh, this last one had me stuck for a long time.
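export only affects the current shell, so if kubectl will keep being run from this node it may be worth persisting the variable (a sketch; pointing kubectl at kubelet.conf is specific to this setup, and a copy of the master's admin.conf is the more usual choice):

echo 'export KUBECONFIG=/etc/kubernetes/kubelet.conf' >> ~/.bash_profile
source ~/.bash_profile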

setenforce 0

kubectl get nodes

How to remove a Node from the cluster

To remove the node k8s2 from the cluster, run the following commands:

Run on the master node:

kubectl drain k8s2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s2

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

kubeadm token create
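Instead of pairing kubeadm token create with the openssl hash command above, kubeadm can also print a complete, ready-to-run join command on the master:

kubeadm token create --print-join-command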

kubeadm join 192.168.41.137:6443 --token 2ejteu.jhemrex75qwxsklh --discovery-token-ca-cert-hash sha256:e79985aca97b07f42a045425abf263fda4cdaf3f6591b1de41797a060b4e5e44
