Apple M1 chip
kubelet, kubeadm, kubectl
Official documentation
First, add the Kubernetes package signing key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Then add the Kubernetes package repository:
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Update the package list:
sudo apt-get update
Install the packages:
sudo apt-get install -y kubelet kubeadm kubectl
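Optionally, pin the versions so a routine apt upgrade does not move them unexpectedly (this step comes from the official install docs):
sudo apt-mark hold kubelet kubeadm kubectl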
Note: if you run into network problems here, you will need to resolve them yourself.
Disable swap (by default, kubelet refuses to start while swap is enabled):
sudo swapoff -a
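Note that swapoff -a only lasts until the next reboot; to keep swap off permanently, also comment out the swap entry in /etc/fstab. A minimal sketch (the sed pattern assumes a standard fstab layout):
# Comment out every fstab line that mounts a swap device
sudo sed -ri '/\sswap\s/s/^/#/' /etc/fstab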
Allow iptables to see bridged traffic:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
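You can verify that the module is loaded and the sysctls took effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables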
sudo kubeadm init \
--apiserver-advertise-address=172.16.190.135 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
--kubernetes-version selects a specific Kubernetes version for the control plane.
--pod-network-cidr specifies the IP address range the Pod network may use; if set, the control plane automatically allocates a CIDR to every node.
--apiserver-advertise-address is the IP address the API server advertises it is listening on; set it to the master node's IP. If unset, the default network interface is used.
Once init finishes, set up the kubeconfig for your user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The ~/.kube/config file is the cluster's kubeconfig file. If you are root, you can instead use:
export KUBECONFIG=/etc/kubernetes/admin.conf
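A quick check that kubectl can now reach the new control plane (at this point the node will still show NotReady, since no network plugin is installed yet):
kubectl cluster-info
kubectl get nodes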
Run the following command on the master node:
sudo kubeadm token create --print-join-command
Copy the output and run it on the worker node:
sudo kubeadm join 172.16.190.132:6443 --token wih5po.h3gmincq4q7ew12f --discovery-token-ca-cert-hash sha256:dfa8a033bce67d8db3e9111e09ecf82df08bc1248a681aa176644c52e63a3119
This joins the worker node to the cluster whose control plane is at 172.16.190.132:6443.
Install the Flannel network plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
After that, the node status changes to Ready.
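You can watch the transition (sample output; your node name and version will differ):
kubectl get nodes
# NAME     STATUS   ROLES           AGE   VERSION
# master   Ready    control-plane   5m    v1.28.2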
A related problem: running
kubectl config view
may produce the following output, which means the normal user has no permission to read the kubeconfig file, or it is not configured in the environment:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Running
sudo chmod 666 /etc/kubernetes/admin.conf
fixes it (note this makes the admin kubeconfig world-readable and writable; copying it to ~/.kube/config as shown above is the safer fix). Reference: https://www.cnblogs.com/ztxd/articles/16505585.html
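If the problem is only that the variable isn't set for your shell, persisting it is enough (a sketch, assuming bash and that your user can read admin.conf):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc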
Component logs can be viewed under the /var/log/pods/ directory: the system-level logs of the Kubernetes node components are stored there, and these files usually contain more low-level system information and component-level errors. Note that the logs shown by kubectl come from the containerized Kubernetes components running inside containers; such logs are normally not written directly under /var/log but are collected and managed by the container runtime (e.g. Docker or containerd). In this case, the etcd log under /var/log/pods/ showed:
2023-11-29T06:41:05.691880439Z stderr F {"level":"info","ts":"2023-11-29T06:41:05.69144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
2023-11-29T06:41:17.617799496Z stderr F {"level":"info","ts":"2023-11-29T06:41:17.617594Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
2023-11-29T06:41:17.617860196Z stderr F {"level":"info","ts":"2023-11-29T06:41:17.617684Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"master","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://172.16.190.135:2380"],"advertise-client-urls":["https://172.16.190.135:2379"]}
2023-11-29T06:41:17.618113328Z stderr F {"level":"warn","ts":"2023-11-29T06:41:17.617968Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.16.190.135:2379: use of closed network connection"}
2023-11-29T06:41:17.618121536Z stderr F {"level":"warn","ts":"2023-11-29T06:41:17.61803Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.16.190.135:2379: use of closed network connection"}
2023-11-29T06:41:17.618223064Z stderr F {"level":"warn","ts":"2023-11-29T06:41:17.618176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
2023-11-29T06:41:17.618367669Z stderr F {"level":"warn","ts":"2023-11-29T06:41:17.61832Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
2023-11-29T06:41:17.638942763Z stderr F {"level":"info","ts":"2023-11-29T06:41:17.638737Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aff1bc4be5440e1","current-leader-member-id":"aff1bc4be5440e1"}
2023-11-29T06:41:17.640764851Z stderr F {"level":"info","ts":"2023-11-29T06:41:17.640624Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.16.190.135:2380"}
2023-11-29T06:41:17.640807803Z stderr F {"level":"info","ts":"2023-11-29T06:41:17.640736Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.16.190.135:2380"}
2023-11-29T06:41:17.640883668Z stderr F {"level":"info","ts":"2023-11-29T06:41:17.640778Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"master","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://172.16.190.135:2380"],"advertise-client-urls":["https://172.16.190.135:2379"]}
{"level":"info","ts":"2023-11-29T06:41:17.617594Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
,说明系统中断导致etcd关闭,那么究竟是什么原因导致的?经过排查发现容器运行时没配置cgroup驱动,我们在/etc/containerd/config.toml
文件中写入如下内容version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
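Alternatively, you can dump containerd's complete default configuration and then just flip SystemdCgroup to true in the generated file:
containerd config default | sudo tee /etc/containerd/config.toml
# then edit /etc/containerd/config.toml and set SystemdCgroup = true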
After running
sudo systemctl restart containerd
the cluster returned to normal. References:
https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#containerd
https://github.com/etcd-io/etcd/issues/13670
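After the restart, you can double-check that containerd and kubelet agree on the systemd cgroup driver (on a kubeadm-provisioned node, kubelet's config defaults to systemd):
sudo containerd config dump | grep SystemdCgroup
grep cgroupDriver /var/lib/kubelet/config.yaml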
To redeploy, reset with kubeadm reset, then delete the kubeconfig directory (i.e. $HOME/.kube) and deploy again.
In theory, kubeadm init prints the join command for adding nodes. If the token has expired, you can regenerate it with:
kubeadm token create --print-join-command
Then run the resulting command on the worker node.
Clone the Kubernetes source code (master branch):
git clone -b master https://github.com/kubernetes/kubernetes.git
Generate a server certificate signed by the cluster CA:
openssl genrsa -out server.key 2048
openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=*.wsc.com" -out server.csr
openssl x509 -req -extfile <(printf "subjectAltName=DNS:wsc.com,DNS:www.wsc.com") -days 365 -in server.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out server.crt
Then generate a client key:
openssl genrsa -out client.key 2048
The exact location of the openssl.cnf file can be found with openssl version -a | grep OPENSSLDIR; on my macOS machine it is /opt/homebrew/etc/openssl@3/openssl.cnf.
openssl req -new -sha256 -key client.key -subj "/CN=*.wsc.com" -reqexts SAN -extensions SAN -config <(cat /opt/homebrew/etc/openssl@3/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:wsc.com,DNS:www.wsc.com")) -out client.csr
openssl x509 -req -in client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out client.crt -days 5000
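Before wiring the certificates into kubectl, it's worth confirming what was actually embedded (the server cert carries the SANs; the client cert above was signed without an SAN extension, so only its subject matters):
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'
openssl x509 -in client.crt -noout -subject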
On the local machine, point KUBECONFIG at your kubeconfig and configure the cluster, user, and context:
export KUBECONFIG=/Users/xxx/.kube/config
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://wsc.com:6443
kubectl config set-credentials wsc --client-certificate=client.crt --client-key=client.key --embed-certs=true
kubectl config set-context wsc --cluster=kubernetes --user=wsc
kubectl config use-context wsc
kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=*.wsc.com
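Note that creating the clusterrolebinding itself requires admin rights, so run it while a context that already has them is active (e.g. the kubeadm-generated kubernetes-admin context); until the binding exists, the wsc user has no permissions. Afterwards the new context should work:
kubectl config use-context wsc
kubectl get nodes   # succeeds once the binding exists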
This referenced https://www.cnblogs.com/wushc/p/15478800.html.
For reference, the kube-apiserver startup flags (with --tls-cert-file and --tls-private-key-file pointing at the newly generated server certificate):
--advertise-address=172.16.190.132
--allow-privileged=true
--authorization-mode=Node,RBAC
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://172.16.190.132:2379
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-issuer=https://kubernetes.default.svc.cluster.local
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/tmp/server.crt
--tls-private-key-file=/etc/kubernetes/tmp/server.key
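These flags live in the static Pod manifest; on a kubeadm cluster you edit them there, and kubelet restarts the apiserver automatically:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# kubelet watches this directory and recreates the kube-apiserver Pod on change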