Last time we covered how kubeadm can deploy Kubernetes in one shot; this time we actually do it. I hit quite a few pitfalls while building the cluster, and after climbing out I reorganized the steps into a sensible order, which should make things go more smoothly. For convenience, everything in this walkthrough is run as root.
192.168.1.10  master   CentOS 7  2 CPU  7.5G RAM
192.168.1.20  slave01  CentOS 7  2 CPU  7.5G RAM
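Optionally, you can make these hostnames resolve on every machine (a convenience sketch; this walkthrough otherwise uses raw IPs, so it is not strictly required):
cat >> /etc/hosts << EOF
192.168.1.10 master
192.168.1.20 slave01
EOF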
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Note: the Docker version must be recent enough; here it is Docker 18.x
vim /etc/yum.repos.d/docker.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=0
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
yum -y install docker-ce kubeadm
systemctl restart docker
systemctl enable docker
systemctl enable kubelet
Do not start kubelet yet: if you start it here, kubeadm init will complain that port 10250 is already in use. Enabling it at boot is enough, because kubeadm init will start it during initialization.
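To double-check that nothing is already listening on the kubelet port before running kubeadm init (a quick sanity check, not part of the original steps):
ss -tlnp | grep 10250 || echo "port 10250 is free"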
I am using Kubernetes 1.14.3 here, which is fairly new, so the matching Docker version must also be recent; otherwise kubeadm init will fail.
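You can confirm the installed versions before proceeding (the values in the comments are examples):
docker version --format '{{.Server.Version}}'   # e.g. 18.09.x
kubeadm version -o short                        # e.g. v1.14.3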
On the cloud server (one that can reach k8s.gcr.io):
docker pull k8s.gcr.io/kube-apiserver:v1.14.3
docker pull k8s.gcr.io/kube-controller-manager:v1.14.3
docker pull k8s.gcr.io/kube-scheduler:v1.14.3
docker pull k8s.gcr.io/kube-proxy:v1.14.3
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/etcd:3.3.10
docker pull k8s.gcr.io/coredns:1.3.1
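The same pulls can be scripted as a loop, if you prefer (equivalent to the seven commands above):
for img in kube-apiserver:v1.14.3 kube-controller-manager:v1.14.3 \
           kube-scheduler:v1.14.3 kube-proxy:v1.14.3 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1
do
    docker pull k8s.gcr.io/$img
done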
mkdir k8s
cd k8s
docker save k8s.gcr.io/pause:3.1 > pause:3.1
docker save k8s.gcr.io/etcd:3.3.10 > etcd:3.3.10
docker save k8s.gcr.io/coredns:1.3.1 > coredns:1.3.1
docker save k8s.gcr.io/kube-scheduler:v1.14.3 > kube-scheduler:v1.14.3
docker save k8s.gcr.io/kube-apiserver:v1.14.3 > kube-apiserver:v1.14.3
docker save k8s.gcr.io/kube-controller-manager:v1.14.3 > kube-controller-manager:v1.14.3
docker save k8s.gcr.io/kube-proxy:v1.14.3 > kube-proxy:v1.14.3
cd ..
tar -zcf k8s.tar.gz k8s
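Optionally, record a checksum before transferring, so a corrupted copy is caught early (compare the value after each scp):
sha256sum k8s.tar.gz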
Local operations (the images must be imported on both master and slave01):
scp <cloud-server-IP>:/root/k8s.tar.gz .
tar -xf k8s.tar.gz
cd k8s
for i in *
do
    docker load < $i
done
docker images # the 7 required images are now in place
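A quick way to confirm that all seven loaded (should print 7):
docker images | grep '^k8s.gcr.io' | wc -l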
kubeadm init --pod-network-cidr 10.244.0.0/16
1. This step may fail with a cgroup driver mismatch error. Newer Docker versions default to the cgroupfs driver; change Docker's driver to systemd so that it matches what kubeadm/kubelet expects:
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker
systemctl enable docker
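Confirm the driver change took effect before re-running init (it should report systemd):
docker info 2>/dev/null | grep -i 'cgroup driver'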
kubeadm init --pod-network-cidr 10.244.0.0/16
2. On success, kubeadm prints follow-up instructions (creating directories, etc.). After the init command succeeds, run the following commands to start using the cluster, and copy the kubeadm join command from the output into a file for later use.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
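If you lose that join command, it can be regenerated on the master at any time (tokens expire after 24 hours by default):
kubeadm token create --print-join-command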
3. Check the health of the control-plane components:
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
4. View the node status:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 11m v1.14.3
5. The status is NotReady because no network plugin, such as flannel, is installed yet. See the flannel project on GitHub for details; the following command installs flannel automatically:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
6. The command above pulls the flannel image, which can take a while. Once the network plugin is up, the node status changes:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 14m v1.14.3
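While waiting, you can watch the kube-system pods come up in real time (press Ctrl-C to stop watching):
kubectl get pods -n kube-system -w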
7. List all running pods in the kube-system namespace, i.e. the system-level pods:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-qkbvl 1/1 Running 0 105s
coredns-fb8b8dccf-znz92 1/1 Running 0 105s
etcd-master 1/1 Running 0 66s
kube-apiserver-master 1/1 Running 0 44s
kube-controller-manager-master 1/1 Running 0 48s
kube-flannel-ds-amd64-xs5kf 1/1 Running 0 62s
kube-proxy-lg5gl 1/1 Running 0 105s
kube-scheduler-master 1/1 Running 0 62s
8. List the current namespaces:
kubectl get ns
NAME STATUS AGE
default Active 16m
kube-node-lease Active 16m
kube-public Active 16m
kube-system Active 16m
Next, set up the node. 1. Install the docker and kubeadm packages and start the services (on slave01):
yum -y install docker-ce kubeadm
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
systemctl enable kubelet
systemctl start docker
systemctl enable docker
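Note that the echo commands above do not survive a reboot. A persistent variant via sysctl (the file name k8s.conf is arbitrary):
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system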
2. Copy the config files from the master to the node that is about to join, so the configuration stays consistent (run on the master):
scp /etc/sysconfig/kubelet slave01:/etc/sysconfig/kubelet
scp /usr/lib/systemd/system/docker.service slave01:/usr/lib/systemd/system/
3. Add slave01 to the cluster (on slave01). First apply the same cgroup driver fix as on the master:
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker
systemctl enable docker
kubeadm join <master-IP>:6443 --token adiqlr.xxxx --discovery-token-ca-cert-hash sha256:xxx
Once you see "This node has joined the cluster", slave01 is configured.
4. Now wait a while, because slave01 also has to pull the flannel image and start it. Check from the master:
kubectl get pods -n kube-system -o wide # you should see two pods running on slave01
kubectl get nodes # both nodes should be Ready
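As a final smoke test that scheduling works (a hedged example; the deployment name my-nginx is arbitrary, and the pods land on slave01 because the master is tainted by default):
kubectl create deployment my-nginx --image=nginx
kubectl scale deployment my-nginx --replicas=2
kubectl get pods -o wide   # both replicas should reach Running on slave01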
With the steps above, the Kubernetes cluster is up. Next time we will extend it with more features and look at how pods are used.