System: CentOS 7.7
Kernel: 5.3.7-1.el7.elrepo.x86_64
uname -a
Output:
Linux foxdev 5.3.7-1.el7.elrepo.x86_64 #1 SMP Thu Oct 17 18:17:07 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
cat /proc/version
Output:
Linux version 5.3.7-1.el7.elrepo.x86_64 (mockbuild@Build64R7) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)) #1 SMP Thu Oct 17 18:17:07 EDT 2019
cat /etc/redhat-release
Output:
CentOS Linux release 7.7.1908 (Core)
If your kernel is not on a 5.x release like the one above, upgrade it first by following this guide:
https://blog.csdn.net/fenglailea/article/details/88740961
yum update -y
yum install -y wget curl vim
hostnamectl set-hostname foxk8s
cat <<EOF >>/etc/hosts
10.10.10.10 foxk8s
EOF
systemctl disable firewalld --now
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
swapoff -a
echo "vm.swappiness = 0">> /etc/sysctl.conf
sed -i 's/.*swap.*/#&/' /etc/fstab
sysctl -p
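To verify that swap is really off, a quick sanity check (not strictly required by this guide):
swapon -s    # prints nothing when no swap device is active
free -h      # the Swap line should read 0B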
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
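If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a minimal sketch to load it now and at every boot:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system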
Reference for installing kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
mv epel.repo epel.repo.bak
curl https://mirrors.aliyun.com/repo/Centos-7.repo -o CentOS-Base.repo
sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo
curl https://mirrors.aliyun.com/repo/epel-7.repo -o epel.repo
gpgcheck=0 means that RPM packages downloaded from this repo are not signature-checked.
cd /etc/yum.repos.d
curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
gpgcheck=0 means that RPM packages downloaded from this repo are not signature-checked.
repo_gpgcheck: some security profiles enable repo_gpgcheck globally in /etc/yum.conf so that the cryptographic signature on the repository metadata can be verified.
yum clean all
yum makecache
yum repolist
Check the Docker version to see whether it is already installed:
docker version
yum list docker-ce --showduplicates | sort -r
Output:
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 @docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
Here we pick version 19.03.1-3.el7 (note that the plain command below actually installs the latest available build):
yum install -y docker-ce
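If you want to pin the 19.03.1-3.el7 build chosen above rather than take the latest, the version string from the list can be appended to the package names; a sketch, assuming the docker-ce repo still carries these builds:
yum install -y docker-ce-19.03.1-3.el7 docker-ce-cli-19.03.1-3.el7 containerd.io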
Start Docker:
systemctl enable docker --now
or
systemctl enable docker && systemctl start docker
Check the Docker version:
docker -v
or
docker version
Output:
Docker version 19.03.3, build a872fc2f86
Go to the Aliyun website: https://www.aliyun.com
Find the registry mirror (accelerator) configuration page (use the site search if you cannot locate it):
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
The page has a detailed configuration guide; just follow it.
If you would rather not log in to Aliyun, you can use my configuration directly:
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-file": "3",
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://7fsmy198.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
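To confirm that Docker picked up the systemd cgroup driver and the registry mirror, a quick check:
docker info | grep -i -E 'cgroup driver|registry mirrors'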
Note:
native.cgroupdriver=systemd is the setting officially recommended for Kubernetes; see https://kubernetes.io/docs/setup/production-environment/container-runtimes/
kubeadm does not manage kubelet or kubectl, so we need to install them by hand:
yum install -y kubeadm kubelet kubectl --disableexcludes=kubernetes
kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on this node.
kubeadm is Kubernetes' automated deployment tool; it lowers the difficulty of deployment and speeds it up.
kubectl is the Kubernetes cluster management CLI.
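If you want to reproduce this guide exactly later (it was written against 1.16.2), the versions can be pinned; a sketch, assuming the 1.16.2-0 builds are still present in the Aliyun mirror:
yum install -y kubelet-1.16.2-0 kubeadm-1.16.2-0 kubectl-1.16.2-0 --disableexcludes=kubernetes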
Finally, start kubelet:
systemctl enable kubelet --now
Note: perform the following operations on the master node.
During installation we can see that version 1.16.2 was installed:
kubeadm version
Output:
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:15:39Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Running kubeadm config images list prints the required image versions:
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
kubeadm init --kubernetes-version=1.16.2 \
--apiserver-advertise-address=10.10.10.10 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address is this machine's IP address.
--pod-network-cidr=10.244.0.0/16 is the Pod network range (flannel's default).
The --image-repository flag is the critical step: by default kubeadm pulls its images from k8s.gcr.io, which is unreachable from mainland China, so we point it at the Aliyun mirror registry instead.
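Optionally, the images can be pre-pulled before running kubeadm init, so that init runs faster and download failures surface early (same mirror as above):
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.2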
On success, cluster initialization returns the information below. Save the final part; it is what the other nodes run to join the Kubernetes cluster.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.10.10:6443 --token kehvmq.e33d33lgkrm8h0rn \
--discovery-token-ca-cert-hash sha256:6150e7960c44890d5dd6b160bbbb4bfa256023db22f004b54d27e1cca72b0afc
Based on the output above, a few tasks remain.
The NodePort range can only be changed after cluster initialization completes; the default range is 30000-32767, and if that is enough for you, nothing needs to change. See:
https://blog.csdn.net/fenglailea/article/details/91869648
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
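Alternatively, when working as root, exporting the admin kubeconfig is enough (kubeadm's own output suggests the same):
export KUBECONFIG=/etc/kubernetes/admin.conf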
Source:
https://github.com/coreos/flannel
cd ~
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yml
or
wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml -O kube-flannel.yml
kubectl apply -f kube-flannel.yml
Note: if you customized the Pod IP range above (--pod-network-cidr=10.244.0.0/16), edit the net-conf.json section of kube-flannel.yml and change 10.244.0.0 to the range you chose.
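For example, if you had initialized with a hypothetical --pod-network-cidr=10.245.0.0/16, the manifest could be patched before applying it (the CIDR here is made up for illustration):
sed -i 's#10.244.0.0/16#10.245.0.0/16#g' kube-flannel.yml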
Check the nodes:
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
foxk8s Ready master 12m v1.15.3
If the node stays NotReady for a long time, run kubectl get pod -A to inspect pod status; the problem is usually obvious there, e.g. the flannel image failed to download.
Once the node is Ready, all the pods are Ready too:
Check component status:
kubectl get cs
Output (the <unknown> values are a known cosmetic quirk of kubectl get cs in 1.16):
NAME AGE
controller-manager <unknown>
scheduler <unknown>
etcd-0 <unknown>
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
foxk8s Ready master 21m v1.15.3
The key field is STATUS: when it reads Ready, the cluster is healthy.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
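kubectl get svc shows which NodePort was assigned to nginx; assuming it came out as 31234 (yours will differ), the service can be smoke-tested with:
curl http://10.10.10.10:31234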
If this is a single-node cluster, see the single-node k8s configuration near the end.
kubectl delete pod nginx
kubectl delete svc nginx
# enable at boot
systemctl enable kubelet
# start
systemctl start kubelet
Perform the following on the master node.
Pull the image:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml
sed -i "160a \ \ \ \ \ \ nodePort: 30001" kubernetes-dashboard.yaml
sed -i "161a \ \ type:\ NodePort" kubernetes-dashboard.yaml
Notes:
s/k8s.gcr.io/loveone/g swaps the registry for a reachable one, needed because of the firewall (here we pre-pulled and re-tagged the image above instead).
160a \ \ \ \ \ \ nodePort: 30001 adds an externally reachable port.
161a \ \ type:\ NodePort makes the service reachable from outside.
Lines 150 to 164 of the file then read:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
kubectl create -f kubernetes-dashboard.yaml
If the deployment went wrong, you can delete it and start over: kubectl delete -f kubernetes-dashboard.yaml
kubectl get deployment kubernetes-dashboard -n kube-system
kubectl get pods -n kube-system -o wide
kubectl get services -n kube-system
ss -ntlp|grep 30001
Open the Dashboard address in a browser:
https://10.10.10.10:30001
Once the page loads you must choose the Token option and paste in a token to get in. Here is how to obtain it.
Create a serviceaccount:
kubectl create serviceaccount dashboard-admin -n kube-system
Bind cluster-admin permissions:
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
Fetch the token:
kubectl describe secrets \
-n kube-system $(kubectl -n kube-system get secret | awk '/admin/{print $1}')
Output (excerpt):
====
priv: 1679 bytes
pub: 459 bytes
Name: kubernetes-dashboard-token-lhs57
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 0c9e6220-8d8f-11e9-8c09-4cedfbc99721
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1saHM1NyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBjOWU2MjIwLThkOGYtMTFlOS04YzA5LTRjZWRmYmM5OTcyMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.aQbqiVwqZvAIXuHSD72SNSqVSp55nkhy3YP_x_zV3ZMYQPW5geg_uH6OzCI11D5Iu_WJdFTl0rl9t12NfUkZWDiv9ghzoP-pDpJtKeEWZAq3pb_cLFyUmVcUsjuw7BNf0RUowBM3ukfYLHhwhNROjf-W6RAPj1Kp0O9xsMghDjMHZyASutz3XnmZvTrkDKvs-vTg-aSk9Jv6jt3Kat35_ufGVf80CJbhbPzd7CvaLS_03olv0veueup95Qm6mo5Mai1lYbaKeYGpC0hwi8aEpqZafni6MsxJWZt0sXZJiiclqJ7GoN9FRv1EXXGQ1Vcea6Ks7VQpDuz4woNhJdPppQ
Locate the kubernetes-dashboard token: the string after token: is the value you need.
In the browser, select Token, paste the token in, and click Sign in; once authenticated you land on the Dashboard home page.
By default the master node does not take workloads. To let the master schedule pods, perform the following two steps.
kubectl describe node foxk8s | grep Taints
or
kubectl describe node -A | grep Taints
Result:
Taints: node-role.kubernetes.io/master:NoSchedule
kubectl taint nodes --all node-role.kubernetes.io/master-
or
kubectl taint nodes foxk8s node-role.kubernetes.io/master-
Check:
kubectl describe node foxk8s | grep Taints
or
kubectl describe node -A | grep Taints
Result:
Taints: <none>
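Should you later want the master to stop accepting ordinary workloads again, the taint can be restored:
kubectl taint nodes foxk8s node-role.kubernetes.io/master=:NoSchedule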
Reference: https://blog.csdn.net/fenglailea/article/details/91873346
If you discover a configuration mistake after running kubeadm init, reset everything with the kubeadm reset command and run init again.
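A typical do-over looks like this (the iptables flush is what kubeadm reset itself suggests; adapt as needed):
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf $HOME/.kube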
kubectl get pod --namespace=kube-system
or
kubectl get pod -A
Output:
NAME READY STATUS RESTARTS AGE
coredns-8686dcc4fd-7mhvr 1/1 Running 0 88m
coredns-8686dcc4fd-xwgft 1/1 Running 0 88m
etcd-afmserver 1/1 Running 0 88m
kube-apiserver-afmserver 1/1 Running 0 87m
kube-controller-manager-afmserver 1/1 Running 0 87m
kube-flannel-ds-amd64-nkj9m 1/1 Running 0 87m
kube-proxy-cskfx 1/1 Running 0 88m
kube-scheduler-afmserver 1/1 Running 0 88m
kubernetes-dashboard-76f6bf8c57-dqfxm 1/1 Running 0 13m
Suppose kubernetes-dashboard-76f6bf8c57-dqfx were stuck with STATUS Pending; you would inspect it like this:
kubectl describe pod kubernetes-dashboard-76f6bf8c57-dqfx --namespace=kube-system
This prints the error events and log information.
kubectl get pod
Output:
NAME READY STATUS RESTARTS AGE
nginx-65f88748fd-x4ppv 0/1 Pending 0 50m
kubectl describe pod nginx-65f88748fd-x4ppv
This prints the error events and log information:
....
....
....
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6s (x39 over 50m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
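Besides kubectl describe, two other views often get to the root cause fastest. For pods that have at least started, kubectl logs shows container output; for scheduling problems like the Pending pod above, the event stream is the place to look (reusing the pod name from above):
kubectl logs nginx-65f88748fd-x4ppv        # container output, once the pod has started
kubectl get events --sort-by=.metadata.creationTimestamp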
If you try to expose a NodePort outside the default range, you get an error like:
The Service "redis" is invalid: spec.ports[0].nodePort: Invalid value: 6379: provided port is not in the valid range. The range of valid ports is 30000-32767
To change the NodePort range of Kubernetes services, edit the kube-apiserver.yaml manifest:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
Find the --service-cluster-ip-range line and insert the following directly below it:
- --service-node-port-range=1-65535
In practice the manifest then looks like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.10.10.10
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.1.0.0/16
    - --service-node-port-range=1-65535
Finally, reload and restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
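Once the kube-apiserver static pod has restarted, the new flag should be visible on its process line (a quick check):
ps -ef | grep kube-apiserver | grep service-node-port-range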
Removing --network-plugin=cni from the kubelet configuration is all that is needed.
vim /lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Editing /lib/systemd/system/kubelet.service.d/10-kubeadm.conf shows that it does not contain this flag, but it does reference EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env. Edit that file, /var/lib/kubelet/kubeadm-flags.env, where --network-plugin=cni actually lives, and comment the line out.
Before:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1"
After:
#KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1"
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1"
Reload the configuration and restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
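If kubelet does not come back cleanly, its journal usually says why:
systemctl status kubelet
journalctl -u kubelet -f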
To add a slave machine to the cluster, first collect a few things from the master: the token, the --discovery-token-ca-cert-hash value, and the master's IP and port.
Here the slave's IP is 192.168.0.252 and its hostname is kub-slave; all machines have the same configuration and the same prepared environment.
Run on the master:
kubeadm token list
Output:
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
Run on the master:
kubeadm token create
Output:
x1k2fe.h9nhgblav0qpjw63
Run on the master:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Output:
286ce2032d6edad0b164fb36c1ab70d82c67a141171e4e931955831655925ffc
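As a shortcut, kubeadm can also assemble the whole join command (token plus CA hash) in one step:
kubeadm token create --print-join-command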
Host IP and port: here that is 10.10.10.10:6443.
First set up hosts on the slave:
cat <<EOF >>/etc/hosts
10.10.10.10 foxk8s
EOF
On the slave machine, run a command of this form:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
Host IP and port: here that is 192.168.0.254:6443.
The token expires after roughly one day.
--discovery-token-ca-cert-hash is the sha256 hash of the CA certificate, obtained above.
With the key pieces gathered above, the assembled command is:
kubeadm join 192.168.0.254:6443 \
--token x1k2fe.h9nhgblav0qpjw63 \
--discovery-token-ca-cert-hash sha256:286ce2032d6edad0b164fb36c1ab70d82c67a141171e4e931955831655925ffc
The slave node only turns Ready after it has downloaded the pods it needs.
Check the node status; run on the master:
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
fox8s Ready master 41m v1.15.3
kub-slave NotReady <none> 15m v1.15.3
Run on the master:
kubectl drain foxk8s --delete-local-data --force --ignore-daemonsets
kubectl delete node foxk8s
Run on the slave:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
rm -rf /var/lib/etcd/*
Sources:
https://www.kubernetes.org.cn/5462.html
http://hutao.tech/k8s-source-code-analysis/prepare/debug-environment.html
https://blog.csdn.net/qq1083062043/article/details/84949924
https://cloud.tencent.com/developer/article/1487532
https://www.kubernetes.org.cn/5551.html
https://blog.csdn.net/mailjoin/article/details/79686934