| Hostname | IP Address | Components |
| --- | --- | --- |
| k8s-master01 | 192.168.124.251 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, keepalived + haproxy |
| k8s-master02 | 192.168.124.252 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, keepalived + haproxy |
| k8s-node01 | 192.168.124.253 | kubelet, kube-proxy, containerd, etcd |
| k8s-node02 | 192.168.124.254 | kubelet, kube-proxy, containerd |
| LB-VIP | 192.168.124.250 | Cluster load-balancer VIP |
Pod CIDR: 172.16.0.0/16
Service CIDR: 10.96.0.0/16
The operating system used here is the open-source Rocky Linux, a distribution started by the founder of CentOS to carry CentOS 8 forward after CentOS 8 was discontinued. Day-to-day commands are identical to CentOS 8, but remote root login over SSH is disabled by default and has to be enabled in /etc/ssh/sshd_config. Community site: Rocky Linux
| Software | Version |
| --- | --- |
| Rocky Linux 9.0 | 5.14.0-70.17.1.el9_0.x86_64 (stock kernel) |
| kubernetes | v1.25.0 |
| etcd | 3.5.4 |
| cfssl | 1.6.1 |
| containerd | 1.6.6 |
Download the Kubernetes 1.25.x binary package
GitHub download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md
wget https://dl.k8s.io/v1.25.0/kubernetes-server-linux-amd64.tar.gz
Download the etcd binary package
GitHub releases: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
Download the containerd binary package
wget https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz
Download the CNI network plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
Download the crictl client tool
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
Download the cfssl binaries
GitHub releases: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
# Install required tools
yum -y install vim telnet wget curl lrzsz bash-completion net-tools ntpdate device-mapper-persistent-data lvm2
# Set the hostname (run the matching command on each host)
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
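The hosts entries and limit values below are a sketch: the host lines follow the node table at the top of this article, while the nofile/nproc limits are typical choices rather than values taken from this guide.
# Add hosts entries for every node
cat >> /etc/hosts << EOF
192.168.124.251 k8s-master01
192.168.124.252 k8s-master02
192.168.124.253 k8s-node01
192.168.124.254 k8s-node02
EOF
# Raise the open-file and process limits
cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 65535
* soft memlock unlimited
* hard memlock unlimited
EOF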
# Kernel parameters required by Kubernetes
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 0
EOF
# Apply the settings
sysctl --system
# Install IPVS tooling
yum -y install ipvsadm ipset sysstat conntrack libseccomp
# Load the IPVS kernel modules at boot via /etc/modules-load.d/ipvs.conf, as sketched below
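The module list is a sketch of the IPVS modules commonly loaded for kube-proxy in ipvs mode; adjust it to your environment.
cat >> /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the modules now and verify
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack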
Configure passwordless SSH from master01 to the other machines
ssh-keygen -t rsa # press Enter at every prompt
ssh-copy-id 192.168.124.252
ssh-copy-id 192.168.124.253
ssh-copy-id 192.168.124.254
# Verify passwordless login
ssh [email protected]
Last login: Fri Oct 28 10:28:42 2022 from 192.168.124.5
[root@k8s-node02 ~]#
Install on both worker nodes
# Extract the containerd package
tar -zxf containerd-1.6.6-linux-amd64.tar.gz -C /opt/
cp /opt/bin/* /usr/bin/
# Create the CNI directories
mkdir -p /etc/cni/net.d /opt/cni/bin
# Extract the CNI plugin binaries
tar -zxf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
# Extract the crictl tool
tar -zxf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/
cat > /usr/lib/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
mkdir /etc/containerd
containerd config default | tee /etc/containerd/config.toml
# Configure containerd to use systemd as the cgroup driver
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
# Change the registry used for containerd's sandbox (pause) image; registry.cn-hangzhou.aliyuncs.com/chenby is used here
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
# One more note
Near the top of config.toml (around line 4) the root setting controls containerd's data directory; by default it is /var/lib/containerd, the same idea as Docker's data directory. Change it there if needed, for example as shown below.
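A sketch of relocating that data directory; /data/containerd is a hypothetical target path, and /var/lib/containerd is the stock default being replaced.
sed -i 's#root = "/var/lib/containerd"#root = "/data/containerd"#' /etc/containerd/config.toml
grep '^root' /etc/containerd/config.toml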
# Point crictl at the containerd socket via /etc/crictl.yaml, as sketched below
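A sketch of a typical crictl.yaml, assuming the default containerd socket path used throughout this guide:
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF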
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64 -O /usr/bin/runc
chmod +x /usr/bin/runc
# Reload systemd, then start containerd and enable it at boot
systemctl daemon-reload
systemctl enable --now containerd
Run on master01
tar -zxf kubernetes-server-linux-amd64.tar.gz
tar -zxf etcd-v3.5.4-linux-amd64.tar.gz
cd kubernetes/server/bin
cp -r kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin
scp -r kube-apiserver kube-controller-manager kube-scheduler kubectl [email protected]:/usr/local/bin
scp -r kubelet kube-proxy [email protected]:/usr/local/bin
scp -r kubelet kube-proxy [email protected]:/usr/local/bin
cd etcd-v3.5.4-linux-amd64
cp -r etcd etcdctl /usr/local/bin
scp -r etcd etcdctl [email protected]:/usr/local/bin
scp -r etcd etcdctl [email protected]:/usr/local/bin
Run on master01
# Create this directory on every machine that runs etcd
mkdir -p /etc/etcd/ssl
# Make the downloaded certificate tools executable and rename them
chmod +x cfssl*
mv cfssl_1.6.1_linux_amd64 /usr/local/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64 /usr/local/bin/cfssl-certinfo
# Enter the prepared certificate directory
cd /root/k8s-pki-yaml/pki
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-etcd01,k8s-etcd02,k8s-etcd03,192.168.124.251,192.168.124.252,192.168.124.253 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
scp -r /etc/etcd/ssl/* [email protected]:/etc/etcd/ssl
scp -r /etc/etcd/ssl/* [email protected]:/etc/etcd/ssl
# Create this directory on all nodes
mkdir -p /etc/kubernetes/pki
# Enter the certificate directory
cd /root/k8s-pki-yaml/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,192.168.124.250,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.124.251,192.168.124.252 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.124.250:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.124.250:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.124.250:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.124.250:8443 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
scp -r /etc/kubernetes/* [email protected]:/etc/kubernetes
scp -r /etc/kubernetes/* [email protected]:/etc/kubernetes
scp -r /etc/kubernetes/* [email protected]:/etc/kubernetes
etcd configuration on k8s-master01 (k8s-etcd01)
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-etcd01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.124.251:2380'
listen-client-urls: 'https://192.168.124.251:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.124.251:2380'
advertise-client-urls: 'https://192.168.124.251:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-etcd01=https://192.168.124.251:2380,k8s-etcd02=https://192.168.124.252:2380,k8s-etcd03=https://192.168.124.253:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
etcd configuration on k8s-master02 (k8s-etcd02)
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-etcd02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.124.252:2380'
listen-client-urls: 'https://192.168.124.252:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.124.252:2380'
advertise-client-urls: 'https://192.168.124.252:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-etcd01=https://192.168.124.251:2380,k8s-etcd02=https://192.168.124.252:2380,k8s-etcd03=https://192.168.124.253:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
etcd configuration on k8s-node01 (k8s-etcd03)
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-etcd03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.124.253:2380'
listen-client-urls: 'https://192.168.124.253:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.124.253:2380'
advertise-client-urls: 'https://192.168.124.253:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-etcd01=https://192.168.124.251:2380,k8s-etcd02=https://192.168.124.252:2380,k8s-etcd03=https://192.168.124.253:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload && systemctl start etcd # start etcd on all three nodes at the same time
systemctl enable etcd
export ETCDCTL_API=3
etcdctl --endpoints="192.168.124.251:2379,192.168.124.252:2379,192.168.124.253:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
# Output like the following: one leader and two followers
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.124.251:2379 | 1026c1ebe076125d | 3.5.4 | 20 kB | true | false | 2 | 9 | 9 | |
| 192.168.124.252:2379 | 8c7539173eea4911 | 3.5.4 | 20 kB | false | false | 2 | 9 | 9 | |
| 192.168.124.253:2379 | 1b8b842ad30ff6a0 | 3.5.4 | 20 kB | false | false | 2 | 9 | 9 | |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Install on both master nodes
yum -y install keepalived haproxy
The HAProxy configuration is the same on both master nodes
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.124.251:6443 check
server k8s-master02 192.168.124.252:6443 check
EOF
keepalived on master01
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
# note: use your actual NIC name
interface ens160
mcast_src_ip 192.168.124.251
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.124.250
}
track_script {
chk_apiserver
}
}
EOF
keepalived on master02
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
# note: use your actual NIC name
interface ens160
mcast_src_ip 192.168.124.252
virtual_router_id 51
priority 50
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.124.250
}
track_script {
chk_apiserver
}
}
EOF
Both master nodes need the health-check script
cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash
err=0
for k in \$(seq 1 3)
do
check_code=\$(pgrep haproxy)
if [[ \$check_code == "" ]]; then
err=\$(expr \$err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ \$err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
EOF
# Add execute permission
chmod +x /etc/keepalived/check_apiserver.sh
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
# Test the VIP
ping 192.168.124.250
telnet 192.168.124.250 8443
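The monitor frontend defined in haproxy.cfg above listens on 33305, so the proxy itself can also be checked directly; master01's address is used here as an example, either master works.
curl http://192.168.124.251:33305/monitor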
# Create directories on the master nodes
mkdir -p /etc/kubernetes/manifests /var/log/kubernetes
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--logtostderr=true \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.124.251 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.124.251:2379,https://192.168.124.252:2379,https://192.168.124.253:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
# --feature-gates=IPv6DualStack=true
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
kube-apiserver on master02 (only --advertise-address differs)
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--logtostderr=true \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.124.252 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.124.251:2379,https://192.168.124.252:2379,https://192.168.124.253:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
# --feature-gates=IPv6DualStack=true
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl start kube-apiserver
systemctl status kube-apiserver
The configuration is the same on both master nodes
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--v=2 \\
--logtostderr=true \\
--bind-address=127.0.0.1 \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--pod-eviction-timeout=2m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-cidr=172.16.0.0/16 \\
--node-cidr-mask-size-ipv4=24 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
# --feature-gates=IPv6DualStack=true
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl start kube-controller-manager
systemctl status kube-controller-manager
The configuration is the same on both master nodes
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--v=2 \\
--logtostderr=true \\
--bind-address=127.0.0.1 \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl start kube-scheduler
systemctl status kube-scheduler
Run on master01; change into the prepared directory
cd /root/k8s-pki-yaml/bootstrap
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://192.168.124.250:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# Run this step on master02 as well
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
# Create the bootstrap resources
kubectl create -f bootstrap.secret.yaml
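bootstrap.secret.yaml ships in the prepared directory and is not reproduced here in full; a minimal sketch of the bootstrap-token Secret it contains, matching the token c8ad9c.2e4d610cf3e7426e used earlier (the RBAC bindings that grant the system:bootstrappers group bootstrap permissions are omitted):
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/bootstrap-token
stringData:
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token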
# Copy bootstrap-kubelet.kubeconfig to master02 and the worker nodes
scp -r /etc/kubernetes/bootstrap-kubelet.kubeconfig [email protected]:/etc/kubernetes
scp -r /etc/kubernetes/bootstrap-kubelet.kubeconfig [email protected]:/etc/kubernetes
scp -r /etc/kubernetes/bootstrap-kubelet.kubeconfig [email protected]:/etc/kubernetes
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
etcd-2 Healthy {"health":"true","reason":""}
etcd-1 Healthy {"health":"true","reason":""}
Both worker nodes use the same configuration
mkdir -p /var/lib/kubelet /etc/kubernetes/manifests/ /etc/cni/net.d/
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-conf.yml \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--node-labels=node.kubernetes.io/node=
# --feature-gates=IPv6DualStack=true
# --container-runtime=remote
# --runtime-request-timeout=15m
# --cgroup-driver=systemd
[Install]
WantedBy=multi-user.target
EOF
Both worker nodes use the same kubelet configuration; write it to /etc/kubernetes/kubelet-conf.yml as sketched below
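A trimmed KubeletConfiguration sketch: the cluster DNS address 10.96.0.10, cluster.local domain, ca.pem path and systemd cgroup driver match the rest of this guide, while the remaining values are common defaults rather than the author's exact file.
cat > /etc/kubernetes/kubelet-conf.yml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
staticPodPath: /etc/kubernetes/manifests
rotateCertificates: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
EOF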
systemctl daemon-reload && systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
The kube-proxy systemd unit is the same on both worker nodes
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
Both worker nodes use the same configuration
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: 172.16.0.0/16
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
systemctl daemon-reload && systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node01   NotReady   <none>   15s   v1.25.0
k8s-node02   NotReady   <none>   70m   v1.25.0
cd /root/k8s-pki-yaml/calico/
# Change the Pod CIDR
vim calico.yaml
Around line 4588, change it to the following:
4588 - name: CALICO_IPV4POOL_CIDR
4589 value: "172.16.0.0/16"
# Create the Calico resources
kubectl create -f calico.yaml
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-86d8c4fb68-wrm7s 1/1 Running 0 2m14s
kube-system calico-node-cg2cl 1/1 Running 0 2m14s
kube-system calico-node-fb52n 1/1 Running 0 2m14s
kube-system calico-typha-768795f74d-c7fmf 1/1 Running 0 2m14s
# Check node status
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node01   Ready    <none>   29m   v1.25.0
k8s-node02   Ready    <none>   99m   v1.25.0
cd /root/k8s-pki-yaml/CoreDNS/
vim coredns.yaml
At line 186, change the IP to the following:
clusterIP: 10.96.0.10
# Create the CoreDNS resources
kubectl create -f coredns.yaml
# Check status
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-86d8c4fb68-ldhx2 1/1 Running 0 13m
calico-node-5jx6t 1/1 Running 0 13m
calico-node-j29c8 1/1 Running 0 13m
calico-typha-768795f74d-6dc59 1/1 Running 0 13m
coredns-5bc764d4f4-m7xwh 1/1 Running 0 3m7s
Command completion: once enabled, kubectl and related commands can be completed with Tab
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
# Deploy a test Pod; the busybox manifest is sketched below
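A minimal busybox Pod manifest of the kind typically used for this DNS test; the busybox:1.28 tag and sleep command are assumptions, not the author's exact file.
cat > busybox.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
kubectl apply -f busybox.yaml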
# Use the Pod to resolve the kubernetes service in the default namespace
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes   ClusterIP   10.96.0.1   <none>   443/TCP   17h
kubectl exec busybox -n default -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
curl 10.96.0.10:53
curl: (52) Empty reply from server
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox   1/1   Running   0   4m12s   10.244.58.199   k8s-node02   <none>   <none>
kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-86d8c4fb68-mbr6b   1/1   Running   0   35m   172.16.85.193     k8s-node01   <none>   <none>
calico-node-pz4jw                          1/1   Running   0   35m   192.168.124.253   k8s-node01   <none>   <none>
calico-node-x726v                          1/1   Running   0   35m   192.168.124.254   k8s-node02   <none>   <none>
calico-typha-768795f74d-gc22c              1/1   Running   0   35m   192.168.124.253   k8s-node01   <none>   <none>
coredns-5bc764d4f4-9rxxf                   1/1   Running   0   31m   172.16.58.193     k8s-node02   <none>   <none>
# Exec into busybox and ping the other nodes and a Pod on another host
kubectl exec -ti busybox -- sh
/ # ping 192.168.124.254
PING 192.168.124.254 (192.168.124.254): 56 data bytes
64 bytes from 192.168.124.254: seq=0 ttl=63 time=2.039 ms
64 bytes from 192.168.124.254: seq=1 ttl=63 time=0.648 ms
^C
--- 192.168.124.254 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.648/1.343/2.039 ms
/ #
/ # ping 192.168.124.253
PING 192.168.124.253 (192.168.124.253): 56 data bytes
64 bytes from 192.168.124.253: seq=0 ttl=64 time=0.545 ms
64 bytes from 192.168.124.253: seq=1 ttl=64 time=0.071 ms
^C
--- 192.168.124.253 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.071/0.308/0.545 ms
/ #
/ # ping 172.16.85.193
PING 172.16.85.193 (172.16.85.193): 56 data bytes
64 bytes from 172.16.85.193: seq=0 ttl=63 time=0.528 ms
64 bytes from 172.16.85.193: seq=1 ttl=63 time=0.162 ms
64 bytes from 172.16.85.193: seq=2 ttl=63 time=0.165 ms
^C
--- 172.16.85.193 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.162/0.285/0.528 ms
# Everything is reachable so far, which shows this Pod can communicate across namespaces and across hosts
# From any worker node, curl the VIP plus the port; getting the version information back means cluster load balancing works. The request flow is curl > VIP (haproxy) > apiserver
[root@k8s-node02 ~]# curl -k https://192.168.124.250:8443/version
{
"major": "1",
"minor": "25",
"gitVersion": "v1.25.0",
"gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
"gitTreeState": "clean",
"buildDate": "2022-08-23T17:38:15Z",
"goVersion": "go1.19",
"compiler": "gc",
"platform": "linux/amd64"
}
[root@k8s-node02 ~]# curl -k https://192.168.124.250:6443/version
{
"major": "1",
"minor": "25",
"gitVersion": "v1.25.0",
"gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
"gitTreeState": "clean",
"buildDate": "2022-08-23T17:38:15Z",
"goVersion": "go1.19",
"compiler": "gc",
"platform": "linux/amd64"
}
# On a machine with internet access, clone this repository, then package it and upload it to the offline machine or your own VM
git clone -b release-0.10 https://github.com/prometheus-operator/kube-prometheus.git
# The images below need to be pulled in advance and imported onto the nodes
docker pull registry.cn-wulanchabu.aliyuncs.com/moge1/kube-state-metrics:v2.3.0
docker pull quay.io/prometheus/node-exporter:v1.3.1
docker pull quay.io/prometheus/prometheus:v2.32.1
docker pull grafana/grafana:8.3.3
docker pull quay.io/prometheus/blackbox-exporter:v0.19.0
docker pull selina5288/prometheus-adapter:v0.9.1
On the internet-connected machine, pull the images and re-tag them in advance. In production, tag them for your own registry; here they are tagged as follows (see the save/import sketch after this list):
registry.cn-beijing.aliyuncs.com/dotbalo/prometheus-adapter:v0.9.1
registry.cn-beijing.aliyuncs.com/dotbalo/node-exporter:v1.3.1
registry.cn-beijing.aliyuncs.com/dotbalo/prometheus:v2.32.1
registry.cn-beijing.aliyuncs.com/dotbalo/kube-state-metrics:v2.3.0
registry.cn-beijing.aliyuncs.com/dotbalo/grafana:8.3.3
registry.cn-beijing.aliyuncs.com/dotbalo/blackbox-exporter:v0.19.0
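A sketch of moving one of these images onto a node, assuming the re-tagged name above; the tar file name is arbitrary, and the step is repeated for each image.
docker tag quay.io/prometheus/node-exporter:v1.3.1 registry.cn-beijing.aliyuncs.com/dotbalo/node-exporter:v1.3.1
docker save -o node-exporter-v1.3.1.tar registry.cn-beijing.aliyuncs.com/dotbalo/node-exporter:v1.3.1
scp node-exporter-v1.3.1.tar [email protected]:/root/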
# Import the images into containerd on each node
# ctr image import reads a tar archive produced by docker save; the tar file names below are examples matching the sketch above
ctr -n=k8s.io image import prometheus-adapter-v0.9.1.tar
ctr -n=k8s.io image import node-exporter-v1.3.1.tar
ctr -n=k8s.io image import prometheus-v2.32.1.tar
ctr -n=k8s.io image import kube-state-metrics-v2.3.0.tar
ctr -n=k8s.io image import grafana-8.3.3.tar
ctr -n=k8s.io image import blackbox-exporter-v0.19.0.tar
# Enter the manifests directory
cd /root/kube-prometheus/manifests
# Create the setup resources first
kubectl create -f setup/
# Check the namespace
kubectl get namespaces |grep monitoring
NAME STATUS AGE
monitoring Active 13m
cd /root/kube-prometheus/manifests
# For production, adjust the alertmanager replica count in this file
vim alertmanager-alertmanager.yaml
podMetadata:
labels:
app.kubernetes.io/component: alert-router
app.kubernetes.io/instance: main
app.kubernetes.io/name: alertmanager
app.kubernetes.io/part-of: kube-prometheus
app.kubernetes.io/version: 0.23.0
replicas: 3 # change the replica count here to 3 to form an HA set automatically
resources:
limits:
cpu: 100m
memory: 100Mi
# Adjust the prometheus-prometheus file
vim prometheus-prometheus.yaml
probeNamespaceSelector: {}
probeSelector: {}
replicas: 3 # also set 3 replicas here
resources:
requests:
memory: 400Mi
ruleNamespaceSelector: {}
# Six files reference image names that need to be changed:
prometheusAdapter-deployment.yaml
prometheus-prometheus.yaml
grafana-deployment.yaml
blackboxExporter-deployment.yaml
nodeExporter-daemonset.yaml
kubeStateMetrics-deployment.yaml
# Replace the image names in those YAML files with the names of the images imported onto the nodes, otherwise the images cannot be pulled; a substitution sketch follows
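A sketch of the substitution for one of the six files, assuming the node-exporter image was re-tagged as above; repeat with the matching image name for each of the other files and verify afterwards.
sed -i 's#quay.io/prometheus/node-exporter#registry.cn-beijing.aliyuncs.com/dotbalo/node-exporter#g' nodeExporter-daemonset.yaml
grep -n 'image:' nodeExporter-daemonset.yaml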
# From the current directory /root/kube-prometheus/manifests, create everything
kubectl create -f .
# Check pod status
kubectl get pods -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 34m
alertmanager-main-1 2/2 Running 0 34m
alertmanager-main-2 2/2 Running 0 34m
blackbox-exporter-57c5ddc9f8-jpkm7 3/3 Running 0 35m
grafana-54db89877f-55hwc 1/1 Running 0 35m
kube-state-metrics-847d98796f-5kpss 3/3 Running 0 35m
node-exporter-8ds8k 2/2 Running 0 35m
node-exporter-dg25g 2/2 Running 0 35m
prometheus-adapter-58f76d4596-j2szt 1/1 Running 0 12m
prometheus-adapter-58f76d4596-wcdxq 1/1 Running 0 12m
prometheus-k8s-0 2/2 Running 0 34m
prometheus-k8s-1 2/2 Running 0 34m
prometheus-operator-5967d47854-82cx8 2/2 Running 0 35m
# Check the grafana service port
kubectl get svc grafana -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana   ClusterIP   10.96.250.180   <none>   3000/TCP   39m
# Expose grafana outside the cluster
kubectl edit svc grafana -n monitoring
sessionAffinity: None
type: NodePort #near the bottom, change the type from ClusterIP to NodePort, then save and exit
status:
loadBalancer: {}
# Expose the prometheus port the same way
kubectl edit svc prometheus-k8s -n monitoring
clientIP:
timeoutSeconds: 10800
type: NodePort #near the bottom, change the type from ClusterIP to NodePort, then save and exit
status:
# Check the NodePorts exposed externally and use them to access the services in a browser
kubectl get svc -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-main       ClusterIP   10.96.247.73    <none>   9093/TCP,8080/TCP               67m
alertmanager-operated   ClusterIP   None            <none>   9093/TCP,9094/TCP,9094/UDP      66m
blackbox-exporter       ClusterIP   10.96.76.121    <none>   9115/TCP,19115/TCP              67m
grafana                 NodePort    10.96.250.180   <none>   3000:30763/TCP                  67m
kube-state-metrics      ClusterIP   None            <none>   8443/TCP,9443/TCP               67m
node-exporter           ClusterIP   None            <none>   9100/TCP                        67m
prometheus-adapter      ClusterIP   10.96.104.31    <none>   443/TCP                         67m
prometheus-k8s          NodePort    10.96.67.42     <none>   9090:32313/TCP,8080:30392/TCP   67m
prometheus-operated     ClusterIP   None            <none>   9090/TCP                        66m
prometheus-operator     ClusterIP   None            <none>   8443/TCP                        67m
In a browser, use any worker node's IP plus the NodePort to access Grafana:
http://192.168.124.253:30763/login
In the UI, go to General → Default to see the built-in dashboards; selecting the NodeExporter/Nodes dashboard shows the node metrics.
(This was written as a Typora document and the published article has no screenshots, so explore Grafana yourself once the deployment succeeds.)
Access Prometheus's own web UI through its exposed NodePort:
http://192.168.124.254:32313/
The built-in dashboards are limited, so additional ones can be downloaded from the site below; dashboard ID 8919 is used here:
https://grafana.com/grafana/dashboards
On that page, choose prometheus as the Data source and search for "node" to see many dashboards.
The exact link is: https://grafana.com/grafana/dashboards/8919-1-node-exporter-for-prometheus-dashboard-cn-0413-consulmanager/
Click Download JSON, then import the JSON file into Grafana; the import steps are not covered here.
That completes the deployment. Read the whole article carefully before you start: many people new to Kubernetes jump straight in and then hit errors even while following the document, so mind the details.
If you do not want to download the packages and images yourself, everything used in this deployment (about 1.3 GB in total) is available from the Baidu Netdisk link below; the link is valid for 30 days.
Link: https://pan.baidu.com/s/1xV4V_eGj3-KJN48gnySa8g
Extraction code: 8s2v