Setting Up a Kubernetes Cluster on Virtual Machines

K8s cluster setup

Role        IP               Components
k8s-master  192.168.217.100  kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd
k8s-node1   192.168.217.101  kubelet, kube-proxy, docker, etcd
k8s-node2   192.168.217.102  kubelet, kube-proxy, docker, etcd
k8s-node3   192.168.217.103  kubelet, kube-proxy, docker, etcd
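
So the machines can reach each other by hostname, it helps to add the planned names to /etc/hosts on every node. A minimal sketch, using the hostnames and addresses from the table above:

192.168.217.100 k8s-master
192.168.217.101 k8s-node1
192.168.217.102 k8s-node2
192.168.217.103 k8s-node3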

Configuration

Edit /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
# SELINUX=enforcing
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
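
The change in /etc/selinux/config only takes effect after a reboot; to switch SELinux to permissive mode immediately you can also run:

setenforce 0
getenforce   # should now print "Permissive"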

Add /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
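
Optionally refresh the yum metadata so the new repository is picked up:

sudo yum clean all
sudo yum makecache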

Install and start kubelet

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
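
kubeadm's preflight checks also expect swap to be off and the bridge-netfilter sysctls to be set (see Error 1 below). A typical preparation sketch:

# disable swap now and comment it out for the next boot
sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab

# load br_netfilter and let bridged traffic pass through iptables
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system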

Static network configuration

/etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static" # 修改为静态boot
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="21499615-88aa-45ed-811e-b0ba39dfc6c6"
DEVICE="ens33"
ONBOOT="yes"
# fixed IP settings added below
IPADDR=192.168.112.103
NETMASK=255.255.255.0
GATEWAY=192.168.112.2
DNS1=192.168.112.2
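
Assuming CentOS 7's legacy network service, restart networking after editing the file and verify the address:

sudo systemctl restart network
ip addr show ens33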

Assigning a fixed IP in VMware

kubeadm

Master node

[root@kubelet-node-master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6


[root@kubelet-node-master ~]# kubeadm init --token=102952.1a7dd4cc8d1f4cc5  --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.1
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubelet-node-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.82.144]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubelet-node-master localhost] and IPs [192.168.82.144 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubelet-node-master localhost] and IPs [192.168.82.144 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.503696 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubelet-node-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubelet-node-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 102952.1a7dd4cc8d1f4cc5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.82.144:6443 --token 102952.1a7dd4cc8d1f4cc5 \
	--discovery-token-ca-cert-hash sha256:e0454d2ea113af7673a0ea63a6142f2ce8da9b136658a4ff6bc12b5e1da29567
	

Worker nodes


[root@kubelet-node-salve-1 ~]# kubeadm join 192.168.112.100:6443 --token 102952.1a7dd4cc8d1f4cc5 --discovery-token-ca-cert-hash sha256:e0454d2ea113af7673a0ea63a6142f2ce8da9b136658a4ff6bc12b5e1da29567
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


Error 1: bridge-nf-call-iptables not set

[root@kubelet-node-salve-3 ~]# kubeadm join 192.168.82.144:6443 --token 102952.1a7dd4cc8d1f4cc5 --discovery-token-ca-cert-hash sha256:e0454d2ea113af7673a0ea63a6142f2ce8da9b136658a4ff6bc12b5e1da29567
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@kubelet-node-salve-3 ~]# cat  /proc/sys/net/bridge/bridge-nf-call-iptables
0
[root@kubelet-node-salve-3 ~]# echo 1 >  /proc/sys/net/bridge/bridge-nf-call-iptables
[root@kubelet-node-salve-3 ~]# kubeadm join 192.168.82.144:6443 --token 102952.1a7dd4cc8d1f4cc5 --discovery-token-ca-cert-hash sha256:e0454d2ea113af7673a0ea63a6142f2ce8da9b136658a4ff6bc12b5e1da29567
[preflight] Running pre-flight checks

Error 2: expired token

[root@kubelet-node-salve-3 ~]# kubeadm join 192.168.112.100:6443 --token 102952.1a7dd4cc8d1f4cc5 --discovery-token-ca-cert-hash sha256:e0454d2ea113af7673a0ea63a6142f2ce8da9b136658a4ff6bc12b5e1da29567
[preflight] Running pre-flight checks

error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "102952"
To see the stack trace of this error execute with --v=5 or higher

Generate a new token

[root@kubelet-node-master ~]# kubeadm token generate
yjd7fr.2k1f7yee29ny0e6q

Generate the join command

[root@kubelet-node-master ~]# kubeadm token create yjd7fr.2k1f7yee29ny0e6q  --print-join-command --ttl=0
kubeadm join 192.168.112.100:6443 --token yjd7fr.2k1f7yee29ny0e6q --discovery-token-ca-cert-hash sha256:3960a6fe5dc9729fb2e22ce75ad812c8cc18f35dcf3e7c0ee24cb875611def0d
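
With --ttl=0 the token never expires. You can list the tokens known to the cluster to confirm:

kubeadm token list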

Check node status

[root@kubelet-node-master ~]#  kubectl get nodes
NAME                   STATUS     ROLES                  AGE    VERSION
kubelet-node-master    NotReady   control-plane,master   6m4s   v1.23.1
kubelet-node-salve-1   NotReady   <none>                 107s   v1.23.1
kubelet-node-salve-2   NotReady   <none>                 72s    v1.23.1

Check all pods (the nodes stay NotReady, and the CoreDNS pods stay in ContainerCreating, until a pod network add-on is up and running):

[root@kubelet-node-master test]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS              RESTARTS        AGE
kube-system   coredns-6d8c4cb4d-v5jlx                       0/1     ContainerCreating   0               39m
kube-system   coredns-6d8c4cb4d-wj7ch                       0/1     ContainerCreating   0               39m
kube-system   etcd-kubelet-node-master                      1/1     Running             0               39m
kube-system   kube-apiserver-kubelet-node-master            1/1     Running             0               39m
kube-system   kube-controller-manager-kubelet-node-master   1/1     Running             0               39m
kube-system   kube-flannel-ds-4dt6s                         0/1     CrashLoopBackOff    6 (4m33s ago)   13m
kube-system   kube-flannel-ds-5wp5v                         0/1     CrashLoopBackOff    7 (39s ago)     13m
kube-system   kube-flannel-ds-hnzd7                         0/1     CrashLoopBackOff    6 (5m2s ago)    13m
kube-system   kube-proxy-4fk2b                              1/1     Running             0               35m
kube-system   kube-proxy-fqw2v                              1/1     Running             0               34m
kube-system   kube-proxy-h25x9                              1/1     Running             0               39m
kube-system   kube-scheduler-kubelet-node-master            1/1     Running             0               39m

Check Pod status (istio-system)

[root@kubelet-node-master ~]#  kubectl get pod -n istio-system
NAME                                   READY   STATUS              RESTARTS   AGE
istio-egressgateway-7dbd687f44-t622q   0/1     ContainerCreating   0          6m30s
istio-ingressgateway-594847699-xgzc6   0/1     ContainerCreating   0          6m30s
istio-ingressgateway-d5db996bb-jhcd4   0/1     ContainerCreating   0          6m30s
istiod-75c5dfcb64-clk6w                0/1     Running             0          6m30s
istiod-7854d7b487-qbllx                0/1     Running             0          6m30s


[root@kubelet-node-master ~]# kubectl describe pod istiod-7854d7b487-qbllx  -n istio-system

Install kube-flannel


wget -c  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

kubectl apply -f kube-flannel.yml

CrashLoopBackOff

After deploying the flannel network plugin, the flannel pods stayed in CrashLoopBackOff; the logs showed that no pod CIDR had been allocated to the nodes.

[root@kubelet-node-master ~]# kubectl get pods --all-namespaces  -o wide
NAMESPACE     NAME                                          READY   STATUS              RESTARTS         AGE   IP               NODE                   NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-v5jlx                       0/1     ContainerCreating   0                58m   <none>           kubelet-node-salve-1   <none>           <none>
kube-system   coredns-6d8c4cb4d-wj7ch                       0/1     ContainerCreating   0                58m   <none>           kubelet-node-salve-1   <none>           <none>
kube-system   etcd-kubelet-node-master                      0/1     Running             0                58m   192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-apiserver-kubelet-node-master            0/1     Running             0                58m   192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-controller-manager-kubelet-node-master   0/1     Running             0                68s   192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-flannel-ds-4dt6s                         1/1     Running             14 (43s ago)     32m   192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-flannel-ds-5wp5v                         0/1     CrashLoopBackOff    10 (3m39s ago)   32m   192.168.82.140   kubelet-node-salve-1   <none>           <none>
kube-system   kube-flannel-ds-hnzd7                         0/1     CrashLoopBackOff    10 (2m56s ago)   32m   192.168.82.142   kubelet-node-salve-2   <none>           <none>
kube-system   kube-proxy-4fk2b                              1/1     Running             0                53m   192.168.82.140   kubelet-node-salve-1   <none>           <none>
kube-system   kube-proxy-fqw2v                              1/1     Running             0                53m   192.168.82.142   kubelet-node-salve-2   <none>           <none>
kube-system   kube-proxy-h25x9                              1/1     Running             0                58m   192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-scheduler-kubelet-node-master            1/1     Running             0                58m   192.168.82.144   kubelet-node-master    <none>           <none>
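
To confirm the cause, check the logs of one of the failing flannel pods (pod name taken from the listing above):

kubectl logs -n kube-system kube-flannel-ds-5wp5v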

Fixing flannel stuck in CrashLoopBackOff

vim /etc/kubernetes/manifests/kube-controller-manager.yaml

--allocate-node-cidrs=true
--cluster-cidr=192.168.0.0/16
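
These two flags go into the kube-controller-manager container's command list in the static Pod manifest; the kubelet restarts the Pod automatically once the file is saved. A sketch of the relevant part of the manifest (all other arguments stay unchanged):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=192.168.0.0/16
    # ... existing arguments unchanged ...
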
After the fix:

[root@kubelet-node-master ~]# kubectl get pods --all-namespaces  -o wide
NAMESPACE     NAME                                          READY   STATUS    RESTARTS         AGE     IP               NODE                   NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-v5jlx                       0/1     Running   0                61m     192.168.1.3      kubelet-node-salve-1   <none>           <none>
kube-system   coredns-6d8c4cb4d-wj7ch                       0/1     Running   0                61m     192.168.1.2      kubelet-node-salve-1   <none>           <none>
kube-system   etcd-kubelet-node-master                      1/1     Running   0                62m     192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-apiserver-kubelet-node-master            1/1     Running   0                62m     192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-controller-manager-kubelet-node-master   1/1     Running   0                4m59s   192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-flannel-ds-4dt6s                         1/1     Running   14 (4m34s ago)   36m     192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-flannel-ds-5wp5v                         1/1     Running   11 (7m30s ago)   36m     192.168.82.140   kubelet-node-salve-1   <none>           <none>
kube-system   kube-flannel-ds-hnzd7                         1/1     Running   11 (6m47s ago)   36m     192.168.82.142   kubelet-node-salve-2   <none>           <none>
kube-system   kube-proxy-4fk2b                              1/1     Running   0                57m     192.168.82.140   kubelet-node-salve-1   <none>           <none>
kube-system   kube-proxy-fqw2v                              1/1     Running   0                57m     192.168.82.142   kubelet-node-salve-2   <none>           <none>
kube-system   kube-proxy-h25x9                              1/1     Running   0                61m     192.168.82.144   kubelet-node-master    <none>           <none>
kube-system   kube-scheduler-kubelet-node-master            1/1     Running   0                61m     192.168.82.144   kubelet-node-master    <none>           <none>

Resetting with kubeadm

kubeadm reset
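
kubeadm reset does not clean up kubeconfig files, CNI configuration, or iptables rules; a typical manual cleanup sketch (paths are the standard defaults):

rm -rf $HOME/.kube/config
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X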
