Node | IP | Hostname | OS Version |
---|---|---|---|
Worker node 1 | 192.168.163.132 | k8s-node1 | CentOS 7.9 (GUI) |
Worker node 2 | 192.168.163.133 | k8s-node2 | CentOS 7.9 (GUI) |
Master node | 192.168.163.134 | k8s-master | CentOS 7.9 (GUI) |
Virtual IP | 192.168.163.135 | - | - |
Software | Version |
---|---|
docker | 19.03.5 |
Set the hostname on 192.168.163.132:
[root@localhost ~]# hostnamectl set-hostname k8s-node1
Set the hostname on 192.168.163.133:
[root@localhost ~]# hostnamectl set-hostname k8s-node2
Set the hostname on 192.168.163.134:
[root@localhost ~]# hostnamectl set-hostname k8s-master
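The three hostnamectl calls above can also be driven from a single IP/hostname map. A small sketch (it only prints the ssh command lines rather than executing them; the /tmp output path is a demo choice):

```shell
# Build the per-node hostnamectl commands from one IP/hostname map.
# Demo only prints the ssh command lines; a real run would execute them.
while read ip name; do
  echo "ssh root@$ip hostnamectl set-hostname $name"
done << 'EOF' > /tmp/set-hostnames.txt
192.168.163.132 k8s-node1
192.168.163.133 k8s-node2
192.168.163.134 k8s-master
EOF
cat /tmp/set-hostnames.txt
```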
Edit the hosts file on the master node:
[root@k8s-master ~]# cat >> /etc/hosts << EOF
192.168.163.132 node1
192.168.163.133 node2
192.168.163.134 master
# The following 3 entries are not required; I add them for convenient passwordless SSH
192.168.163.132 n1
192.168.163.133 n2
192.168.163.134 m1
EOF
Copy the hosts file to each of the other hosts:
[root@k8s-master ~]# scp /etc/hosts n1:/etc/hosts
[root@k8s-master ~]# scp /etc/hosts n2:/etc/hosts
k8s-master node:
[root@k8s-master ~]# yum install -y conntrack-tools ntpdate ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
k8s-node1 node:
[root@k8s-node1 ~]# yum install -y conntrack-tools ntpdate ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
k8s-node2 node:
[root@k8s-node2 ~]# yum install -y conntrack-tools ntpdate ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
k8s-master node:
[root@k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master ~]# yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
k8s-node1 node:
[root@k8s-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-node1 ~]# yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
k8s-node2 node:
[root@k8s-node2 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-node2 ~]# yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Perform the following on the k8s-master, k8s-node1, and k8s-node2 nodes:
[root@k8s-master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # permanent, takes effect after reboot
[root@k8s-master ~]# setenforce 0 # temporary, reverts after reboot
[root@k8s-master ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#UUID=211cc6f8-a7d6-438a-8fbb-c7c60d040174 swap swap defaults 0 0
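The sed expression above comments out every fstab line containing " swap ". Its effect can be checked safely on a throwaway copy first (the /tmp path is a demo choice; the sample lines mirror the fstab shown above):

```shell
# Demo: apply the swap-commenting sed to a sample fstab copy, not the real file.
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root / xfs defaults 0 0
UUID=211cc6f8-a7d6-438a-8fbb-c7c60d040174 swap swap defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
# The swap line is now commented out; the root filesystem line is untouched.
cat /tmp/fstab.demo
```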
Perform the following on the k8s-master, k8s-node1, and k8s-node2 nodes:
[root@localhost ~]# yum install ntpdate -y
[root@localhost ~]# ntpdate time.windows.com
[root@k8s-node2 ~]# systemctl stop postfix && systemctl disable postfix # disable unneeded services (mail)
Perform the following on the k8s-master, k8s-node1, and k8s-node2 nodes:
[root@k8s-master ~]# modprobe br_netfilter # load the bridge netfilter module first, otherwise the net.bridge.* keys below do not exist
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv6.conf.all.disable_ipv6=1
EOF
[root@k8s-master ~]# sysctl --system # apply
Original kernel version: 3.10.0-693.el7.x86_64
[root@k8s-master ~]# yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
Or upload the RPM for offline install: yum install -y elrepo-release-7.el7.elrepo.noarch.rpm
[root@k8s-master ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
[root@k8s-master ~]# grub2-set-default "CentOS Linux (5.4.130-1.el7.elrepo.x86_64) 7 (Core)"
[root@k8s-master ~]# reboot # the new kernel takes effect only after a reboot
[root@k8s-master ~]# uname -r
5.4.130-1.el7.elrepo.x86_64
Perform the following on the k8s-master, k8s-node1, and k8s-node2 nodes:
[root@k8s-master ~]# vi /etc/ssh/sshd_config
PermitRootLogin yes (whether to allow remote root login; disabling is recommended, enable only if needed)
StrictModes no (whether to check ownership and permissions of the user's files before accepting a login)
RSAAuthentication yes (enable)
PubkeyAuthentication yes (enable)
AuthorizedKeysFile .ssh/authorized_keys (key location, the .ssh directory under the home directory; not present in the default config file)
Manually create the .ssh directory under root's home directory on each server and give it 700 permissions.
Go to the /root directory.
Run ssh-keygen -t rsa to generate a key pair.
The keys are placed in the .ssh directory under /root:
id_rsa is the private key
id_rsa.pub is the public key
Append each server's public key to authorized_keys in turn, then distribute the final authorized_keys file to every machine:
cat id_rsa.pub >> /root/.ssh/authorized_keys
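Instead of appending authorized_keys by hand, ssh-copy-id (shipped with openssh-clients) performs the remote append for you. A sketch, assuming the n1/n2 host aliases defined in /etc/hosts earlier; the /tmp key path is only for the local demo:

```shell
# Generate a key pair non-interactively (demo writes to /tmp rather than ~/.ssh).
rm -f /tmp/demo_rsa /tmp/demo_rsa.pub
ssh-keygen -q -t rsa -N '' -f /tmp/demo_rsa
ls -l /tmp/demo_rsa.pub
# On a real node, generate into /root/.ssh and push the public key to each peer:
#   ssh-keygen -t rsa
#   ssh-copy-id -i /root/.ssh/id_rsa.pub root@n1
#   ssh-copy-id -i /root/.ssh/id_rsa.pub root@n2
```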
1. YUM install
yum -y install keepalived
# check the installed file paths
rpm -ql keepalived
2. Source install
1) Install dependencies
yum -y install gcc openssl-devel libnfnetlink-devel
2) Download the source
wget https://www.keepalived.org/software/keepalived-1.4.5.tar.gz
3) Extract
tar -zxvf keepalived-1.4.5.tar.gz -C /usr/src
4) Build and install
cd /usr/src/keepalived-1.4.5/
./configure && make -j 4 && make install
Edit the configuration file /etc/keepalived/keepalived.conf
Start the service with: service keepalived restart
Master node configuration:
! Configuration File for keepalived
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    unicast_src_ip 192.168.163.134
    unicast_peer {
        192.168.163.132
        192.168.163.133
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.163.135
    }
}
Node1 configuration:
! Configuration File for keepalived
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    advert_int 1
    unicast_src_ip 192.168.163.132
    unicast_peer {
        192.168.163.133
        192.168.163.134
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.163.135
    }
}
Node2 configuration:
! Configuration File for keepalived
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    advert_int 1
    unicast_src_ip 192.168.163.133
    unicast_peer {
        192.168.163.132
        192.168.163.134
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.163.135
    }
}
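The three configurations differ only in state, priority, and the unicast addresses, so they can be generated from a single template to avoid copy-paste drift. A sketch (the gen_conf helper is hypothetical; it writes to /tmp, not /etc/keepalived, and assumes the intended MASTER carries the highest priority):

```shell
# gen_conf: emit one keepalived.conf from a template.
# args: state priority src_ip peer1 peer2 outfile
gen_conf() {
  cat > "$6" << EOF
! Configuration File for keepalived
vrrp_instance VI_1 {
    state $1
    interface ens33
    virtual_router_id 51
    priority $2
    advert_int 1
    unicast_src_ip $3
    unicast_peer {
        $4
        $5
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.163.135
    }
}
EOF
}
# MASTER must have the highest priority or a BACKUP will win the election.
gen_conf MASTER 100 192.168.163.134 192.168.163.132 192.168.163.133 /tmp/ka-master.conf
gen_conf BACKUP 50  192.168.163.132 192.168.163.133 192.168.163.134 /tmp/ka-node1.conf
gen_conf BACKUP 50  192.168.163.133 192.168.163.132 192.168.163.134 /tmp/ka-node2.conf
grep -H 'state' /tmp/ka-*.conf
```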
Online install (the version obtained this way is not guaranteed; look for other methods if a specific version is required):
[root@k8s-node2 ~]# yum -y install docker
Or install locally from offline packages:
[root@k8s-node2 ~]# yum -y localinstall docker/*.rpm
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-node2 ~]# yum install -y kubelet kubeadm kubectl
[root@k8s-node2 ~]# systemctl enable kubelet
[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm-config.yaml
First pull the CoreDNS image manually: docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0
Then run kubeadm init again:
kubeadm init \
--apiserver-advertise-address=192.168.163.134 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.21.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
[root@k8s-master ~]# mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get nodes
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
· Join Kubernetes nodes
(The command comes from the kubeadm init output; adjust the token to your actual values.)
To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:
[root@k8s-node1 ~]# kubeadm join 192.168.163.134:6443 --token j2xrkw.4799l8kbyxybggn2 \
--discovery-token-ca-cert-hash sha256:fd8cd85dfe5a7f2d290fd7e7326f396e1a949500329964eae3e96ab1e4c31ca7
[root@k8s-node1 ~]# kubeadm token create --ttl 0 --print-join-command
`kubeadm join 192.168.233.3:6443 --token rpi151.qx3660ytx2ixq8jk --discovery-token-ca-cert-hash sha256:5cf4e801c903257b50523af245f2af16a88e78dc00be3f2acc154491ad4f32a4`
# This is the generated token used when nodes join; the backticks above are only markers and serve no other purpose.
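If only the saved join-command string is at hand, the token and CA cert hash can be recovered from it with awk. A small demo on the sample string above (the values are the sample ones, not real secrets; /tmp output path is a demo choice):

```shell
JOIN='kubeadm join 192.168.233.3:6443 --token rpi151.qx3660ytx2ixq8jk --discovery-token-ca-cert-hash sha256:5cf4e801c903257b50523af245f2af16a88e78dc00be3f2acc154491ad4f32a4'
# Print the value that follows each flag.
TOKEN=$(echo "$JOIN" | awk '{for(i=1;i<NF;i++) if($i=="--token") print $(i+1)}')
HASH=$(echo "$JOIN" | awk '{for(i=1;i<NF;i++) if($i=="--discovery-token-ca-cert-hash") print $(i+1)}')
echo "token=$TOKEN hash=$HASH" > /tmp/join_parts.txt
cat /tmp/join_parts.txt
```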
Create a pod in the Kubernetes cluster to verify everything runs correctly:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
Access it at: http://NodeIP:Port
1. Pull the registry image
docker pull registry
2. Create the image storage path
mkdir -p /data/registry
3. Create the container
docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --name=registry registry
4. Edit the daemon.json file
vi /etc/docker/daemon.json
Contents:
{
  "debug": true,
  "default-address-pools": [
    {
      "base": "172.17.0.0/16",
      "size": 24
    }
  ],
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "insecure-registries": ["localhost:5000"],
  "graph": "/data/docker/lib",
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"}
}
The "insecure-registries" entry adds the local registry address. JSON allows no comments, so keep any notes outside the file.
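Because daemon.json must be strict JSON (no comments, no trailing commas), a syntax check before restarting docker avoids a failed daemon start. A sketch using python3's json.tool against a hypothetical /tmp copy:

```shell
# Write a minimal daemon.json sample and syntax-check it; on a real host
# run the same check against /etc/docker/daemon.json before restarting docker.
cat > /tmp/daemon.json.demo << 'EOF'
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "insecure-registries": ["localhost:5000"]
}
EOF
python3 -m json.tool /tmp/daemon.json.demo
```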
5. Restart docker, then test
1) Retag the image
docker tag <original-image>:<tag> <new-image>:<tag>
2) Push the image
docker push localhost:5000/registry:latest
3) List the repository catalog
curl http://localhost:5000/v2/_catalog
4) Pull the image
docker pull localhost:5000/registry
Command help:
kubectl explain pods
For example: kubectl explain pods.metadata
[root@k8s-master ~]# vim pod-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: default
  labels:
    web1: tomcat
spec:
  containers:
  - name: tomcat1
    image: tomcat:latest
    imagePullPolicy: IfNotPresent
  - name: nginx1
    image: nginx:latest
    imagePullPolicy: IfNotPresent
A few basic commands
kubectl apply -f pod-test.yaml
kubectl delete -f pod-test.yaml
kubectl get pods
kubectl describe pods web
kubectl get pods -o wide
kubectl logs web
kubectl logs -c tomcat1 web
kubectl exec -it web -c tomcat1 -- /bin/bash