【KubeEdge Beginner Installation Tutorial】CentOS 7.9 + K8s v1.22.17 (kubeadm) + KubeEdge v1.13.1 Deployment Walkthrough (NodePort) -- Personally Tested

Downloads of the materials used during installation

Part 1: Deploy Kubernetes 1.22

This guide deploys k8s 1.22 (1.23 also works). Based on the versions KubeEdge currently supports, 1.22 is chosen here; newer k8s versions will be supported later.

Deployment environment and components:

Hostname     IP address      Node type              OS version
k8s-master   192.168.0.61    master, etcd, worker   CentOS 7.9
edge-01      192.168.0.232   edge                   CentOS 7.9

Environment preparation

The preparation steps must be performed on all nodes and cover the following:

  • Set the hostnames
  • Add /etc/hosts entries
  • Disable the firewall
  • Configure the yum repositories
  • Configure time synchronization
  • Disable swap
  • Configure kernel parameters
  • Load the ip_vs kernel modules
  • Install Docker
  • Install kubelet, kubectl and kubeadm

Set the hostnames

hostnamectl set-hostname k8s-master && bash   # on the master node
hostnamectl set-hostname edge-01 && bash      # on the edge node

Add /etc/hosts entries

cat > /etc/hosts  << EOF
192.168.0.61 k8s-master
192.168.0.232 edge-01
EOF

Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
iptables -F
setenforce 0 
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Configure the yum repositories

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo

Configure time synchronization

yum install -y chrony
cat > /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF

systemctl enable --now chronyd 
chronyc sources 

Disable swap

By default, kubernetes refuses to install on nodes with swap enabled; if swap is on, disable it:

swapoff -a # disable swap temporarily
sed -i 's/.*swap.*/#&/' /etc/fstab # disable permanently

# Commenting out the swap entry in /etc/fstab keeps swap disabled after a reboot

# Verify that swap is disabled:
free -m  # the Swap row should show 0
              total        used        free      shared  buff/cache   available
Mem:           7822         514         184         431        7123        6461
Swap:             0           0           0
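
The manual check above can also be scripted, e.g. when preparing many nodes. A sketch; the `swap_disabled` helper is invented for illustration and is not part of the original tutorial:

```shell
#!/bin/sh
# swap_disabled FILE - succeed if a /proc/swaps-style FILE lists no active
# swap devices (after swapoff -a only the header line remains).
swap_disabled() {
    [ "$(tail -n +2 "$1" | wc -l)" -eq 0 ]
}

# Usage: swap_disabled /proc/swaps && echo "swap is off"
```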

Load the ip_vs kernel modules

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

These kernel modules are needed later to switch kube-proxy's proxy mode from iptables to ipvs.

Linux kernel 4.19 renamed nf_conntrack_ipv4 to nf_conntrack. If loading the modules fails with "modprobe: FATAL: Module nf_conntrack_ipv4 not found.", change nf_conntrack_ipv4 to nf_conntrack.
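
The version check described above can be scripted so the same preparation script works on both CentOS 7's 3.10 kernel and newer ones. A small sketch; the `conntrack_module` helper is invented for illustration:

```shell
#!/bin/sh
# conntrack_module MAJOR MINOR - print the conntrack module to load:
# kernels >= 4.19 renamed nf_conntrack_ipv4 to nf_conntrack.
conntrack_module() {
    if [ "$1" -gt 4 ] || { [ "$1" -eq 4 ] && [ "$2" -ge 19 ]; }; then
        echo nf_conntrack
    else
        echo nf_conntrack_ipv4
    fi
}

# On a live system, split "uname -r" (e.g. 3.10.0-1160) into major/minor:
# modprobe -- "$(conntrack_module $(uname -r | awk -F'[.-]' '{print $1, $2}'))"
```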

Configure kernel parameters

cat > /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward=1
net.ipv4.ip_forward_use_pmtu = 0
EOF

sysctl --system
sysctl -a | grep ip_forward

If you see "sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory", the br_netfilter module is not loaded yet; run modprobe br_netfilter and re-run sysctl --system.

bridge-nf lets netfilter filter the IPv4/ARP/IPv6 packets that traverse a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1 set, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. The common options are:

  • net.bridge.bridge-nf-call-arptables: whether to filter bridged ARP packets in the arptables FORWARD chain
  • net.bridge.bridge-nf-call-ip6tables: whether to filter bridged IPv6 packets in the ip6tables chains
  • net.bridge.bridge-nf-call-iptables: whether to filter bridged IPv4 packets in the iptables chains
  • net.bridge.bridge-nf-filter-vlan-tagged: whether to filter VLAN-tagged packets in iptables/arptables
  • fs.may_detach_mounts: a kernel parameter introduced in CentOS 7.4 to prevent mount-point leaks in container scenarios
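
After sysctl --system, you can verify that the keys kubeadm's preflight checks care about are actually set. A sketch; the `check_bridge_nf` helper is invented for illustration:

```shell
#!/bin/sh
# check_bridge_nf FILE - succeed only if FILE sets the three keys kubeadm
# requires to 1 (whitespace around '=' is tolerated).
check_bridge_nf() {
    for key in net.bridge.bridge-nf-call-iptables \
               net.bridge.bridge-nf-call-ip6tables \
               net.ipv4.ip_forward; do
        grep -Eq "^${key}[[:space:]]*=[[:space:]]*1$" "$1" || return 1
    done
}

# Usage: check_bridge_nf /etc/sysctl.conf && echo OK
```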

Install Docker

# install docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum -y install docker-ce-20.10.0-3.el7   #这里指定版本

# generate the docker configuration file
mkdir -p /etc/docker/
touch /etc/docker/daemon.json
cat > /etc/docker/daemon.json << EOF
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 
EOF

# start docker
systemctl enable docker --now
systemctl start docker
systemctl status docker
docker --version
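
kubelet on these nodes will use the systemd cgroup driver, and docker must match it or kubeadm init fails. A quick daemon.json sanity check; the `docker_cgroup_driver` helper is invented for illustration:

```shell
#!/bin/sh
# docker_cgroup_driver FILE - print the cgroup driver configured in a
# daemon.json-style FILE; docker defaults to cgroupfs when none is set.
docker_cgroup_driver() {
    if grep -q 'native.cgroupdriver=systemd' "$1"; then
        echo systemd
    else
        echo cgroupfs
    fi
}

# Usage: [ "$(docker_cgroup_driver /etc/docker/daemon.json)" = systemd ] || echo "driver mismatch"
```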

Install kubelet, kubectl and kubeadm

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum list kubeadm --showduplicates

yum install -y kubelet-1.22.17-0 kubeadm-1.22.17-0 kubectl-1.22.17-0
systemctl enable kubelet  # note: kubelet cannot start successfully yet; this only enables it at boot (it starts once kubeadm init generates its configuration)

Deploy the master

The master deployment is done on the master node only and covers:

  • generating a kubeadm.yaml file
  • editing kubeadm.yaml
  • deploying the master from the kubeadm.yaml configuration

Initialize the master node (this walkthrough passes the same settings directly as kubeadm init flags instead of a config file):

kubeadm init --apiserver-advertise-address=192.168.0.61  --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.22.17 --service-cidr=10.96.0.0/12  --pod-network-cidr=10.244.0.0/16
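
For reference, the same settings can also be written as a kubeadm configuration file and passed with --config, which is what the kubeadm.yaml route mentioned above looks like. This is a sketch assembled from the flag values used here, not a file from the original post:

```shell
# Write the init settings above into kubeadm.yaml (sketch; values mirror the flags).
cat > kubeadm.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.61
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.17
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
EOF
# Then: kubeadm init --config kubeadm.yaml
```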

If the cluster gets into a bad state, reset it and deploy again:

# reset the cluster
kubeadm reset
# stop kubelet
systemctl stop kubelet
# remove the containers that were already created (containerd runtime shown; with docker, use: docker ps -aq | xargs docker rm -f)
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -aq |xargs crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock rm 
# clean up all state directories
rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd /var/lib/cni/

Output similar to the following means the master installed successfully:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.61:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d3e83e7c3907dc42039bbf845022d1fa95bba9a4f5af018c17809e269ec6175d

Configure kubectl access to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Configure the network

After the master is deployed, you will notice two problems:

  1. the master node stays NotReady
  2. the coredns pods stay Pending

Both are caused by the missing network plugin. Kubernetes supports many network plugins; see https://kubernetes.io/docs/concepts/cluster-administration/addons/

The Calico documentation is at https://docs.tigera.io/calico/latest/getting-started/kubernetes/

We use the Calico network plugin here; install it as follows:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml -O
kubectl apply -f calico.yaml

Check the node status:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   15m   v1.22.17

Check that the master components are healthy:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7b99746d5-cb9g9   1/1     Running   0          6m31s
calico-node-dprtz                         1/1     Running   0          6m31s
coredns-7f6cbbb7b8-h7dj2                  1/1     Running   0          16m
coredns-7f6cbbb7b8-mwr8n                  1/1     Running   0          16m
etcd-k8s-master                           1/1     Running   1          16m
kube-apiserver-k8s-master                 1/1     Running   1          16m
kube-controller-manager-k8s-master        1/1     Running   1          16m
kube-proxy-hdwsw                          1/1     Running   0          16m
kube-scheduler-k8s-master                 1/1     Running   1          16m

Let the master node double as a worker

Because hardware resources are limited, a single-node master is used here. Remove the master taint so the master can also run workloads.

# check the taints
[root@k8s-master ~]# kubectl describe node k8s-master |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
# remove the taint
[root@k8s-master ~]# kubectl taint node k8s-master node-role.kubernetes.io/master-
# verify
[root@k8s-master ~]# kubectl describe node k8s-master |grep Taint
Taints:             

Part 2: Deploy KubeEdge

Deploy cloudcore

Get the keadm tool

wget https://github.com/kubeedge/kubeedge/releases/download/v1.13.1/keadm-v1.13.1-linux-amd64.tar.gz  # download
tar -zxvf keadm-v1.13.1-linux-amd64.tar.gz  # extract
cp keadm-v1.13.1-linux-amd64/keadm/keadm /usr/local/bin/  # install keadm
keadm version  # verify

cloudcore deployment

[root@k8s-master ~]# keadm init --advertise-address=192.168.0.61 --set iptablesManager.mode="external" --profile version=v1.13.1
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
=========CHART DETAILS=======
NAME: cloudcore
LAST DEPLOYED: Thu Aug  3 16:34:37 2023
NAMESPACE: kubeedge
STATUS: deployed
REVISION: 1
[root@k8s-master ~]# ps -ef|grep cloudcore  # the cloudcore process is running
root      2447  8609  0 09:54 pts/0    00:00:00 grep --color=auto cloudcore
root      9990  9943  0 09:09 ?        00:00:01 cloudcore
[root@k8s-master ~]# netstat -anltp  # check the listening ports

[root@k8s-master ~]# keadm gettoken  # the generated token
13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ


[root@k8s-master ~]# kubectl get pod -n kubeedge
NAME                           READY   STATUS    RESTARTS   AGE
cloud-iptables-manager-tk9nk   1/1     Running   0          4m37s
cloudcore-5475cc4b46-hclrt     1/1     Running   0          4m36s


[root@k8s-master ~]# kubectl get deploy -n kubeedge
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
cloudcore   1/1     1            1           6m47s


[root@k8s-master ~]# kubectl get svc -n kubeedge
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                             AGE
cloudcore   ClusterIP   10.102.82.46   <none>        10000/TCP,10001/TCP,10002/TCP,10003/TCP,10004/TCP   7m16s

Change the cloudcore service type to NodePort


# edit the service
kubectl edit svc cloudcore -n kubeedge
  type: NodePort   # change this field


[root@k8s-master ~]# kubectl get svc -n kubeedge   # note the exposed NodePorts; the one mapped to 10000 is needed later
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                           AGE
cloudcore   NodePort   10.108.150.105   <none>        10000:32615/TCP,10001:31967/TCP,10002:32761/TCP,10003:32230/TCP,10004:30681/TCP   7m53s
[root@k8s-master ~]# 

Edge hardware is usually constrained, so patch the cluster DaemonSets with a node-affinity rule that keeps them from being scheduled onto edge nodes:

[root@k8s-master ~]# kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
# if MetalLB is installed, patch its DaemonSets the same way:
kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'

With this in place, no DaemonSet pod will occupy hardware resources on the edge nodes.
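
For reference, the JSON patch above injects the following block into each DaemonSet's pod template (spec.template.spec), which is what keeps those pods off any node labeled node-role.kubernetes.io/edge:

```yaml
# Effect of the patch on each DaemonSet's pod template:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/edge
              operator: DoesNotExist
```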

Deploy metrics-server

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl apply -f components.yaml
kubectl get pods -n kube-system

Because metrics-server cannot verify the kubelet serving certificates, it keeps sitting in a not-ready state.

Patch the deployment so metrics-server skips kubelet certificate verification:

[root@k8s-master ~]# kubectl patch deploy metrics-server -n kube-system --type='json' -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]' 

metrics-server now runs normally.

Part 3: Deploy edgecore

Get the keadm tool (on the edge node)

wget https://github.com/kubeedge/kubeedge/releases/download/v1.13.1/keadm-v1.13.1-linux-amd64.tar.gz  # download
tar -zxvf keadm-v1.13.1-linux-amd64.tar.gz  # extract
cp keadm-v1.13.1-linux-amd64/keadm/keadm /usr/local/bin/  # install keadm
keadm version  # verify

Install docker-ce

Install docker-ce on the edge node with the same yum steps as in Part 1, then write its configuration:

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"]
}
EOF

# start docker
systemctl enable docker --now
systemctl start docker
systemctl status docker
docker --version

Note cgroupdriver=cgroupfs here: edgecore expects the cgroupfs driver on the edge node, so switching docker to systemd causes a cgroup-driver mismatch.

Deploy edgecore

Get the token on the master node:

[root@k8s-master ~]# keadm gettoken
13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ

On the edge node, store the token:

[root@edge ~]# TOKEN=13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ

Set the cloudcore address and port:

[root@edge ~]# SERVER=192.168.0.61:32615  # the NodePort mapped to 10000, shown by "kubectl get svc -n kubeedge" on the master
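
Instead of copying the port by hand, the NodePort mapped to 10000 can be parsed out of the kubectl output. A sketch; the `cloudcore_nodeport` helper is invented for illustration:

```shell
#!/bin/sh
# cloudcore_nodeport LINE - given the cloudcore line of
# "kubectl get svc -n kubeedge", print the NodePort mapped to port 10000.
cloudcore_nodeport() {
    echo "$1" | grep -o '10000:[0-9]*' | cut -d: -f2
}

# Usage (illustrative):
# LINE=$(kubectl get svc cloudcore -n kubeedge --no-headers)   # run on the master
# SERVER=192.168.0.61:$(cloudcore_nodeport "$LINE")
```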

Join KubeEdge:

[root@edge ~]# keadm join --token=$TOKEN --cloudcore-ipport=$SERVER --kubeedge-version=v1.13.1

Error 1:

I0803 20:40:53.928138   10458 join.go:184] 4. Pull Images
Pulling kubeedge/installation-package:v1.13.1 ...
E0803 20:40:53.929276   10458 remote_image.go:160] "Get ImageStatus from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" image="kubeedge/installation-package:v1.13.1"
Error: edge node join failed: pull Images failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService
execute keadm command failed:  edge node join failed: pull Images failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService

Fix:

rm -rf /etc/containerd/config.toml
containerd config default > /etc/containerd/config.toml
systemctl restart containerd

Error 2:

I0803 20:47:32.605796   11139 join.go:184] 5. Copy resources from the image to the management directory
E0803 20:47:52.606330   11139 remote_runtime.go:198] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Error: edge node join failed: copy resources failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
execute keadm command failed:  edge node join failed: copy resources failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Fix: rejoin, explicitly specifying the docker runtime:

keadm join --token=$TOKEN --cloudcore-ipport=$SERVER --kubeedge-version=1.13.1 --runtimetype=docker

keadm join --token=13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ --cloudcore-ipport=192.168.0.61:32615  --kubeedge-version=v1.13.1  --runtimetype=docker
[root@edge-01 kubeedge]# ll
total 12
drwxr-xr-x 2 root root 4096 Oct 16 09:49 ca
drwxr-xr-x 2 root root 4096 Oct 16 09:49 certs
drwxr-xr-x 2 root root 4096 Oct 16 09:49 config
srwxr-xr-x 1 root root    0 Oct 16 09:49 dmi.sock
[root@edge-01 kubeedge]# pwd
/etc/kubeedge
[root@edge-01 kubeedge]# 


[root@edge-01 kubeedge]# journalctl -u edgecore.service -xe  # the logs look normal

[root@edge-01 kubeedge]# systemctl status edgecore  # started successfully
● edgecore.service
   Loaded: loaded (/etc/systemd/system/edgecore.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2023-10-16 09:49:41 CST; 2min 37s ago
 Main PID: 27574 (edgecore)
    Tasks: 14
   Memory: 34.9M
   CGroup: /system.slice/edgecore.service
           └─27574 /usr/local/bin/edgecore

Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390116   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/upload/#
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390211   27574 client.go:153] finish hub-client pub
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390221   27574 eventbus.go:71] Init Sub And Pub Client for external mqtt broker tcp...essfully
Oct 16 09:49:56 edge-01 edgecore[27574]: W1016 09:49:56.390251   27574 eventbus.go:168] Action not found
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390458   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/device/+/state/update
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390905   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/device/+/twin/+
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.391326   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/node/+/membership/get
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.391804   27574 client.go:89] edge-hub-cli subscribe topic to SYS/dis/upload_records
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.392005   27574 client.go:89] edge-hub-cli subscribe topic to +/user/#
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.392182   27574 client.go:97] list edge-hub-cli-topics status, no record, skip sync
Hint: Some lines were ellipsized, use -l to show in full.
[root@edge-01 kubeedge]# 

CSDN article with the fix: https://blog.csdn.net/MacWx/article/details/129527231

KubeEdge v1.13 uses containerd by default. To use docker, both the runtimetype and the remote runtime endpoint must be specified on keadm join.

If none of this works and edgecore still cannot join the k8s cluster, see:

https://blog.csdn.net/MacWx/article/details/130200209

Check whether the installation succeeded

Run containers on the edge node

[root@k8s-master ~]# cat > nginx.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3  # adjust the count as needed
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeName: edge-01  # schedule the pods onto this node
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
EOF
 
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
edge-01      Ready    agent,edge             22m   v1.23.15-kubeedge-v1.13.1
k8s-master   Ready    control-plane,master   89m   v1.22.17


[root@k8s-master ~]# kubectl get pods,svc
NAME                                   READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-c4c8d6d8d-cr8xf   1/1     Running   0          66s
pod/nginx-deployment-c4c8d6d8d-jwlr4   1/1     Running   0          66s
pod/nginx-deployment-c4c8d6d8d-mnxf6   1/1     Running   0          66s

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        89m
service/nginx-service   NodePort    10.103.238.37   <none>        80:31207/TCP   66s
[root@k8s-master ~]# 

Part 4: Enable kubectl logs

Run the following on the master node.

Reference: "Enable Kubectl logs/exec/attach" in the KubeEdge documentation

[root@k8s-master ~]# iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to 192.168.0.61:10003   # note: this must be the cloudcore (master) IP




[root@edge-01 ~]# vim /etc/kubeedge/config/edgecore.yaml  # set enable: true under edgeStream

  edgeStream:
    enable: true   # change only this field
    handshakeTimeout: 30
    readDeadline: 15

[root@edge-01 config]# systemctl restart edgecore
[root@edge-01 config]# systemctl status  edgecore



ls /etc/kubernetes/pki/
export CLOUDCOREIPS="192.168.0.61"
echo $CLOUDCOREIPS
sudo su  # run the following steps as root

mkdir -p /etc/kubeedge
cd /etc/kubeedge/
wget https://raw.githubusercontent.com/kubeedge/kubeedge/v1.13.1/build/tools/certgen.sh  # download the version matching your release (the raw file, not the GitHub blob page)
chmod +x certgen.sh
./certgen.sh stream  # generate the stream certificates (reads CLOUDCOREIPS)

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
edge-01      Ready    agent,edge             91m    v1.23.15-kubeedge-v1.13.1
k8s-master   Ready    control-plane,master   158m   v1.22.17
[root@k8s-master ~]# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-c4c8d6d8d-cr8xf   1/1     Running   0          70m
nginx-deployment-c4c8d6d8d-jwlr4   1/1     Running   0          70m
nginx-deployment-c4c8d6d8d-mnxf6   1/1     Running   0          70m


[root@k8s-master ~]# kubectl logs nginx-deployment-c4c8d6d8d-mnxf6  # the logs as seen from the master
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/10/16 02:11:46 [notice] 1#1: using the "epoll" event method
2023/10/16 02:11:46 [notice] 1#1: nginx/1.21.5
2023/10/16 02:11:46 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/10/16 02:11:46 [notice] 1#1: OS: Linux 3.10.0-1160.92.1.el7.x86_64
2023/10/16 02:11:46 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/10/16 02:11:46 [notice] 1#1: start worker processes
2023/10/16 02:11:46 [notice] 1#1: start worker process 31
2023/10/16 02:11:46 [notice] 1#1: start worker process 32
[root@k8s-master ~]# 
[root@edge-01 config]# docker ps -a
CONTAINER ID   IMAGE                COMMAND                  CREATED             STATUS             PORTS                                            NAMES
2f4bf337a7ce   nginx                "/docker-entrypoint.…"   About an hour ago   Up About an hour                                                    k8s_nginx_nginx-deployment-c4c8d6d8d-cr8xf_default_6059be8e-d186-40ed-bc8f-22ec4c091067_0
2653c891ba2e   nginx                "/docker-entrypoint.…"   About an hour ago   Up About an hour                                                    k8s_nginx_nginx-deployment-c4c8d6d8d-jwlr4_default_790b1623-6d0c-4849-a3d3-f37bb547804a_0
6b5df42b50e0   nginx                "/docker-entrypoint.…"   About an hour ago   Up About an hour                                                    k8s_nginx_nginx-deployment-c4c8d6d8d-mnxf6_default_bcce0320-a106-4321-949e-febf50323bb9_0
74525d0ebaa8   kubeedge/pause:3.6   "/pause"                 About an hour ago   Up About an hour                                                    k8s_POD_nginx-deployment-c4c8d6d8d-jwlr4_default_790b1623-6d0c-4849-a3d3-f37bb547804a_0
9a487feef1f8   kubeedge/pause:3.6   "/pause"                 About an hour ago   Up About an hour                                                    k8s_POD_nginx-deployment-c4c8d6d8d-cr8xf_default_6059be8e-d186-40ed-bc8f-22ec4c091067_0
a22916f46182   kubeedge/pause:3.6   "/pause"                 About an hour ago   Up About an hour                                                    k8s_POD_nginx-deployment-c4c8d6d8d-mnxf6_default_bcce0320-a106-4321-949e-febf50323bb9_0
fbe5a7a44d57   3a05ba674344         "/docker-entrypoint.…"   2 hours ago         Up 2 hours                                                          k8s_mqtt_mqtt-kubeedge_default_ea17e8b2-889d-4f8e-b45c-5793f96218c1_0
2c16459e7d7b   kubeedge/pause:3.6   "/pause"                 2 hours ago         Up 2 hours         0.0.0.0:1883->1883/tcp, 0.0.0.0:9001->9001/tcp   k8s_POD_mqtt-kubeedge_default_ea17e8b2-889d-4f8e-b45c-5793f96218c1_0


[root@edge-01 config]# docker logs 2f4bf337a7ce  # the same logs as seen on the edge node
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/10/16 02:12:01 [notice] 1#1: using the "epoll" event method
2023/10/16 02:12:01 [notice] 1#1: nginx/1.21.5
2023/10/16 02:12:01 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/10/16 02:12:01 [notice] 1#1: OS: Linux 3.10.0-1160.92.1.el7.x86_64
2023/10/16 02:12:01 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/10/16 02:12:01 [notice] 1#1: start worker processes
2023/10/16 02:12:01 [notice] 1#1: start worker process 31
2023/10/16 02:12:01 [notice] 1#1: start worker process 32
[root@edge-01 config]# 
