Deploying a Kubernetes Cluster

A quick Kubernetes deployment with kubeadm

Table of Contents

  • Quick Kubernetes deployment
    • I. Preparation
      • master node configuration:
        • 1. Set the hostname
        • 2. Disable the firewall and SELinux
        • 3. Disable swap and reboot
        • 4. Map hostnames
        • 5. Pass bridged IPv4 traffic to iptables chains:
        • 6. Time synchronization
        • 7. Passwordless SSH authentication
      • node1 node configuration:
        • 1. Set the hostname
        • 2. Disable the firewall and SELinux
        • 3. Disable swap and reboot
        • 4. Map hostnames
        • 5. Pass bridged IPv4 traffic to iptables chains:
        • 6. Time synchronization
      • node2 node configuration:
        • 1. Set the hostname
        • 2. Disable the firewall and SELinux
        • 3. Disable swap and reboot
        • 4. Map hostnames
        • 5. Pass bridged IPv4 traffic to iptables chains:
        • 6. Time synchronization
    • II. Install docker, kubeadm, kubelet, and kubectl
      • Install docker on all nodes:
      • Add the Alibaba Cloud Kubernetes YUM repository on all nodes:
      • Install kubeadm, kubelet, and kubectl on all nodes:
      • Configure containerd on all nodes:
    • III. Deploy the Kubernetes Master
    • IV. Install a Pod network add-on (CNI)
    • V. Join the Kubernetes Nodes
    • VI. Test the Kubernetes cluster
      • Port test:

I. Preparation

Deploying a Kubernetes cluster requires:

  • at least 3 machines
  • hardware: 2 GB RAM or more, 2 CPUs or more, 20 GB of disk or more
  • full network connectivity between all machines in the cluster
  • internet access, for pulling images
  • swap disabled

Tasks:

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes Master
  3. Deploy a container network add-on
  4. Deploy the Kubernetes Nodes and join them to the cluster
  5. Deploy the Dashboard web UI to visualize cluster resources

Environment

Host     IP
master   192.168.183.139
node1    192.168.183.136
node2    192.168.183.137

master node configuration:

1. Set the hostname

[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# bash

2. Disable the firewall and SELinux

[root@master ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

3. Disable swap and reboot

[root@master ~]# vim /etc/fstab
#/dev/mapper/cs-swap     none                    swap    defaults        0 0		// delete or comment out this line
[root@master ~]# reboot
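The vim edit above can also be scripted. The snippet below is a sketch that demonstrates the substitution on a scratch copy (the file name fstab.demo is only for illustration); on a real node, point the same sed at /etc/fstab and then run `swapoff -a` or reboot. This applies to every node, not just the master.

```shell
# Demonstrate commenting out the swap entry on a scratch copy of fstab;
# on a real node, run the same sed against /etc/fstab, then `swapoff -a`.
fstab=./fstab.demo
printf '%s\n' \
  '/dev/mapper/cs-root /    xfs  defaults 0 0' \
  '/dev/mapper/cs-swap none swap defaults 0 0' > "$fstab"
# comment out any uncommented line whose filesystem type field is "swap"
sed -Ei 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' "$fstab"
grep swap "$fstab"
```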

4. Map hostnames

[root@master ~]# cat >> /etc/hosts << EOF
192.168.183.139 master
192.168.183.136 node1
192.168.183.137 node2
EOF

5. Pass bridged IPv4 traffic to iptables chains:

[root@master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

// apply the settings
[root@master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
kernel.core_pipe_limit = 16
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
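One companion step not shown in the original post: the two bridge-nf sysctls only exist once the br_netfilter kernel module is loaded (and the overlay module is needed by containerd). Whether this is strictly required depends on the distribution; a commonly used configuration, run as root on every node, is:

```shell
# Make the modules persistent across reboots, then load them now
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
```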

6. Time synchronization

[root@master ~]# dnf -y install chrony
[root@master ~]# systemctl enable --now chronyd
[root@master ~]# vim /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool  time1.aliyun.com iburst
[root@master ~]# systemctl restart chronyd

// verify
[root@master ~]# for i in master node1 node2 ; do ssh root@$i 'date';done
Mon Nov 14 20:00:36 CST 2022
Mon Nov 14 20:00:36 CST 2022
Mon Nov 14 20:00:36 CST 2022

7. Passwordless SSH authentication

[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:VWWZ2MLWrPgut98a+yoVGdhJPGUQyOH77yXVPgY+sNQ root@master
The key's randomart image is:
+---[RSA 3072]----+
|           oo&*Bo|
|           oO.%. |
|          .o.o + |
|         .. o.o .|
|        S  +.E .o|
|          . =.oo |
|           o ++oo|
|          . + o*o|
|           o.+===|
+----[SHA256]-----+
[root@master ~]# ssh-copy-id master
[root@master ~]# ssh-copy-id node1
[root@master ~]# ssh-copy-id node2

node1 node configuration:

1. Set the hostname

[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash

2. Disable the firewall and SELinux

[root@node1 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node1 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

3. Disable swap and reboot

[root@node1 ~]# vim /etc/fstab
#/dev/mapper/cs-swap     none                    swap    defaults        0 0		// delete or comment out this line
[root@node1 ~]# reboot

4. Map hostnames

[root@node1 ~]# cat >> /etc/hosts << EOF
192.168.183.139 master
192.168.183.136 node1
192.168.183.137 node2
EOF

5. Pass bridged IPv4 traffic to iptables chains:

[root@node1 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

// apply the settings
[root@node1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
kernel.core_pipe_limit = 16
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...

6. Time synchronization

[root@node1 ~]# dnf -y install chrony
[root@node1 ~]# systemctl enable --now chronyd
[root@node1 ~]# vim /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool  time1.aliyun.com iburst
[root@node1 ~]# systemctl restart chronyd

node2 node configuration:

1. Set the hostname

[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash

2. Disable the firewall and SELinux

[root@node2 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node2 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

3. Disable swap and reboot

[root@node2 ~]# vim /etc/fstab
#/dev/mapper/cs-swap     none                    swap    defaults        0 0		// delete or comment out this line
[root@node2 ~]# reboot

4. Map hostnames

[root@node2 ~]# cat >> /etc/hosts << EOF
192.168.183.139 master
192.168.183.136 node1
192.168.183.137 node2
EOF

5. Pass bridged IPv4 traffic to iptables chains:

[root@node2 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

// apply the settings
[root@node2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
kernel.core_pipe_limit = 16
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...

6. Time synchronization

[root@node2 ~]# dnf -y install chrony
[root@node2 ~]# systemctl enable --now chronyd
[root@node2 ~]# vim /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool  time1.aliyun.com iburst
[root@node2 ~]# systemctl restart chronyd

II. Install docker, kubeadm, kubelet, and kubectl

Install docker on all nodes:

# curl -o docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
# sed -i 's@https://download.docker.com@https://mirrors.tuna.tsinghua.edu.cn/docker-ce@g' docker-ce.repo
# mv docker-ce.repo /etc/yum.repos.d/
# dnf -y install docker-ce
# systemctl enable --now docker
# docker --version

# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
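A step implied but not shown above: dockerd only reads daemon.json at startup, so the service has to be restarted for the systemd cgroup driver and registry mirror to take effect:

```shell
# Restart docker so the new daemon.json is picked up
systemctl restart docker
# Confirm the cgroup driver (should print: systemd)
docker info --format '{{.CgroupDriver}}'
```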

Add the Alibaba Cloud Kubernetes YUM repository on all nodes:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl on all nodes:

# dnf install -y kubelet kubeadm kubectl
# systemctl enable kubelet    // note: enable only, do not start it yet

Configure containerd on all nodes:

To make sure the later cluster init and join steps succeed, containerd's configuration file /etc/containerd/config.toml needs to be adjusted. Do this on every node.

# containerd config default > /etc/containerd/config.toml
# sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#g' /etc/containerd/config.toml
# systemctl restart containerd
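Since kubelet is set up for the systemd cgroup driver here, containerd's runc runtime should usually match: the generated config.toml ships with `SystemdCgroup = false`. This flip is not in the original post, so treat it as an assumption to verify against your containerd version. The sketch below demonstrates the substitution on a scratch file; on a real node, run the same sed against /etc/containerd/config.toml before restarting containerd.

```shell
# Demonstrate enabling the systemd cgroup driver in containerd's config;
# on a real node, run the sed against /etc/containerd/config.toml.
cfg=./config.toml.demo
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = false' > "$cfg"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
grep SystemdCgroup "$cfg"    # now: SystemdCgroup = true
```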

The default image registry registry.k8s.io (formerly k8s.gcr.io) is unreachable from mainland China, so the Alibaba Cloud mirror registry is substituted here.

III. Deploy the Kubernetes Master

Run on 192.168.183.139 (the master).

[root@master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.183.139 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.25.4 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.25.4
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.183.139]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.183.139 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.183.139 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.503346 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ou4kp0.ml5vgtrtya17cole
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.183.139:6443 --token ou4kp0.ml5vgtrtya17cole \
	--discovery-token-ca-cert-hash sha256:3c150ded043eb0a14de67254a9e2ec6548ac697df04e19b594660767af1ffc9b 

// set the kubeconfig environment variable
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > /etc/profile.d/kube.sh
[root@master ~]# source /etc/profile.d/kube.sh 
[root@master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf

IV. Install a Pod network add-on (CNI)

# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

The manifest can also be downloaded first and applied from a local copy:

[root@master ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Make sure the quay.io registry is reachable, since the flannel images are pulled from there.

V. Join the Kubernetes Nodes

Run on 192.168.183.136 and 192.168.183.137 (the nodes).

To add new nodes to the cluster, run the kubeadm join command printed by kubeadm init:

kubeadm join 192.168.183.139:6443 --token ou4kp0.ml5vgtrtya17cole \
	--discovery-token-ca-cert-hash sha256:3c150ded043eb0a14de67254a9e2ec6548ac697df04e19b594660767af1ffc9b
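The bootstrap token printed by kubeadm init expires after 24 hours by default. If a node is joined later, a fresh join command can be generated on the master:

```shell
# Run on the master: prints a new, currently valid kubeadm join command
kubeadm token create --print-join-command
```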

VI. Test the Kubernetes cluster

Create a pod in the cluster and verify that it runs:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   7m32s   v1.25.4
node1    Ready    <none>          4m10s   v1.25.4
node2    Ready    <none>          4m10s   v1.25.4
[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-76d6c9b8c-nrg5t   1/1     Running   0          86s
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        9m39s
nginx        NodePort    10.98.218.62   <none>        80:30527/TCP   16s

// test
[root@master ~]# curl 10.98.218.62
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Port test:

The IP is the master node's; the port is the NodePort assigned to the Service (30527 in the kubectl get svc output above).
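The same service is reachable from outside the cluster through any node's IP on the NodePort. Note that the NodePort is assigned randomly per cluster, so substitute the one shown by kubectl get svc:

```shell
# Access nginx through the master's IP and the service's NodePort
curl http://192.168.183.139:30527
```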

[Figure 1: accessing the nginx NodePort from a browser]
