Advantages of Kubernetes
Automatic bin packing, horizontal scaling, self-healing
Service discovery and load balancing
Automated rollouts (rolling update by default) and rollbacks
Centralized configuration and secret management
Storage orchestration
Batch execution
Environment:

Hostname   IP address     Components
master     192.168.1.22   kubeadm kubelet kubectl
node1      192.168.1.23   kubeadm kubelet kubectl
node2      192.168.1.24   kubeadm kubelet kubectl

Cluster hardware environment

Environment preparation
Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld
Disable SELinux:
[root@master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@master ~]# setenforce 0
Disable swap:
swapoff -a        # temporary
vim /etc/fstab    # permanent (comment out the swap line)
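One way to make the permanent change without opening an editor is to comment out the swap entry in /etc/fstab, for example (a sketch; check /etc/fstab afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab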
1. Install Docker
(1) If Docker has been installed before, remove it first and then install the dependencies:

sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
(2) Download the repo file matching your distribution version
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
Replace the software repository address:

sudo sed -i 's+download.docker.com+mirrors.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
(3) Rebuild the yum cache and install

sudo yum makecache fast
sudo yum install docker-ce
Configure a registry mirror (image accelerator):
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://c15671a72e23484e8ad2e8bb0b9b4f00.mirror.swr.myhuaweicloud.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
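It is also worth enabling Docker at boot and confirming that the mirror configuration took effect (the grep pattern assumes the English-language output of docker info):
sudo systemctl enable docker
sudo docker info | grep -A1 "Registry Mirrors"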
1.2 Set the hostname
[root@master ~]# hostnamectl set-hostname master
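The other two hosts should be renamed the same way so that they match the environment table above:
[root@node1 ~]# hostnamectl set-hostname node1
[root@node2 ~]# hostnamectl set-hostname node2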
2. Kubernetes network settings
(1) Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains
Create the file /etc/sysctl.d/k8s.conf
and add the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
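For reference, the file can also be created in one step with a heredoc:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF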
Then apply the settings:
[root@master ~]# sysctl --system
3. Install Kubernetes

Add the Huawei Cloud yum repository
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://repo.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://repo.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://repo.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
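Before installing, it is worth confirming that the 1.14.0 packages are visible from the new repository:
[root@master ~]# yum list kubelet --showduplicates | grep 1.14.0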
Install the Kubernetes components with yum
[root@master ~]# yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0
[root@master ~]# systemctl enable kubelet
[root@master ~]# systemctl start kubelet
# Installing kubelet creates the directory /etc/kubernetes/manifests/ under /etc
kubeadm: the command used to bootstrap the cluster
kubelet: the agent that runs workloads on every node of the cluster
kubectl: the command-line management tool
Verify the Kubernetes installation
1. Run the following command
[root@master ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This error is expected at this stage, because the cluster has not been initialized yet.
2. Check the current version
[root@master ~]# kubectl version
3.1 Configure the master
3.1.1 Create the working directory
[root@master ~]# ansible all -a "mkdir -p /work/k8s"
[root@master ~]# cd /work/k8s/
3.1.2 Create the kubeadmin.conf configuration file
1. Generate the configuration file
[root@master k8s]# kubeadm config print init-defaults ClusterConfiguration > kubeadmin.conf
2. Edit kubeadmin.conf
Change the image repository used for downloading:
imageRepository: registry.aliyuncs.com/google_containers
Change the API server advertise address:
localAPIEndpoint:
  advertiseAddress: 192.168.1.22
Note: 192.168.1.22 is the master's IP address
Configure the subnets:
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
Note: 10.244.0.0/16 and 10.96.0.0/12 are the internal pod and service networks of the cluster; the flannel network set up later relies on the pod subnet.
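Putting the three edits together, the relevant parts of kubeadmin.conf should end up looking roughly like this (a sketch based on the defaults generated by kubeadm 1.14; all other generated fields stay untouched):
localAPIEndpoint:
  advertiseAddress: 192.168.1.22
  bindPort: 6443
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12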
3.1.3 Pull the required Kubernetes images
1. List the image files that need to be pulled
[root@master k8s]# kubeadm config images list --config kubeadmin.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.14.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.14.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.14.0
registry.aliyuncs.com/google_containers/pause:3.1
registry.aliyuncs.com/google_containers/etcd:3.3.10
registry.aliyuncs.com/google_containers/coredns:1.3.1
kube-apiserver: the single entry point to the cluster and the coordinator of all components; it exposes a RESTful API, and every create/read/update/delete and watch operation on resource objects goes through the API server before being persisted to etcd.
kube-controller-manager: handles the routine background tasks of the cluster; each resource type has its own controller, and the controller manager is responsible for managing these controllers.
kube-scheduler: selects a node for newly created pods according to its scheduling algorithm; it can be deployed anywhere, on the same node as other components or on separate nodes.
kube-proxy: runs on every node and implements the pod network proxy, maintaining network rules and layer-4 load balancing.
pause: the infrastructure container that holds the network namespace shared by the containers of a pod.
etcd: a distributed key-value store used to persist cluster state, such as Pod and Service objects.
coredns: provides DNS-based service discovery inside the cluster.
2. Pull the images
[root@master k8s]# kubeadm config images pull --config kubeadmin.conf
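Once the pull finishes, the images should show up in the local Docker image list:
[root@master k8s]# docker images | grep registry.aliyuncs.com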
3.1.4 Initialize the Kubernetes environment
Initialize and start the cluster
[root@master k8s]# kubeadm init --config kubeadmin.conf
If Kubernetes starts successfully, make a note of the output at the end:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.22:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:bce41e8a7028dccdd878f1be0504be10e0d1d8dd3ab8d7b4eca509b699a8d016

1. Run the commands from the prompt above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
To join a worker node to the master's cluster, run the following command on that node:
kubeadm join 192.168.1.22:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:bce41e8a7028dccdd878f1be0504be10e0d1d8dd3ab8d7b4eca509b699a8d016
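Note that the token printed by kubeadm init is only valid for 24 hours by default; if it has expired by the time a node joins, a fresh join command can be generated on the master:
[root@master k8s]# kubeadm token create --print-join-command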
2. Enable and start the kubelet system service
[root@master k8s]# systemctl enable kubelet
[root@master k8s]# systemctl start kubelet
3.1.5 Verify the Kubernetes startup result
1. Check the nodes; the master shows up as NotReady, which confirms the initialization succeeded (it becomes Ready once the pod network is deployed)
[root@master k8s]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   10m   v1.14.0
2. Check the current cluster component status
[root@master k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
At the moment there is only one master, no nodes have joined yet, and the master is still NotReady, so the nodes need to be joined to the cluster. Before that, the cluster's internal network has to be set up; flannel is used here.
3.1.6 Deploy the flannel network for in-cluster communication
[root@master k8s]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Edit this file and make sure the flannel network matches the pod subnet configured earlier:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
Apply the flannel configuration file
[root@master k8s]# kubectl apply -f kube-flannel.yml
Check the result
[root@master k8s]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   27m   v1.14.0
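The flannel and coredns pods in the kube-system namespace should also be in Running state:
[root@master k8s]# kubectl get pods -n kube-system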
3.2.1 Configure the worker nodes
1. Enable and start the kubelet service
[root@node1 ~]# systemctl enable kubelet
[root@node1 ~]# systemctl start kubelet
2. Copy /etc/kubernetes/admin.conf from the master to /work/k8s on the nodes
Log in to a terminal on the master
and push admin.conf to node1 and node2:
[root@master k8s]# ansible all -i hosts -m copy -a "src=/etc/kubernetes/admin.conf dest=/work/k8s"
3. Create the base kubeconfig on node1 and node2
[root@master k8s]# ansible all -i hosts -a "mkdir -p $HOME/.kube"
[root@master k8s]# ansible all -i hosts -a "cp -i /work/k8s/admin.conf $HOME/.kube/config"
[root@master k8s]# ansible all -i hosts -a "chown $(id -u):$(id -g) $HOME/.kube/config"
4. Join node1 and node2 to the master's cluster
[root@master k8s]# ansible all -i hosts -a "kubeadm join 192.168.1.22:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:bce41e8a7028dccdd878f1be0504be10e0d1d8dd3ab8d7b4eca509b699a8d016"
5. Apply the flannel network on node1 and node2
Copy kube-flannel.yml from the master to /work/k8s on each node:
[root@master k8s]# ansible all -i hosts -m copy -a "src=kube-flannel.yml dest=/work/k8s"
Apply the flannel network on node1 and node2:
[root@master k8s]# ansible all -i hosts -a "kubectl apply -f /work/k8s/kube-flannel.yml"
6. Check whether the nodes have joined the cluster (this takes a little while)
[root@master k8s]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   97m   v1.14.0
node1    Ready    <none>   15m   v1.14.0
node2    Ready    <none>   15m   v1.14.0
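To double-check that the system pods (flannel, kube-proxy, coredns) are running across all three machines:
[root@master k8s]# kubectl get pods -n kube-system -o wide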
Setup complete
The setup breaks down into four main steps:
1. On the master: kubeadm init --config kubeadmin.conf to initialize the cluster and create a master
2. Configure the network protocol and mode for master/node communication: the flannel network
3. Configure the worker nodes and apply the flannel network on each of them
4. Join the nodes to the master's cluster with kubeadm join

Deploy an nginx application as a deployment test
Create a pod in the Kubernetes cluster and verify that it runs correctly:
[root@master k8s]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master k8s]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master k8s]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-65f88748fd-754g4   1/1     Running   0          105s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        4h1m
service/nginx        NodePort    10.97.134.248   <none>        80:30983/TCP   23s
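The nginx service can now be reached on the NodePort from any node IP; 30983 is the port assigned in this particular run, so substitute whatever port kubectl get svc printed:
[root@master k8s]# curl http://192.168.1.23:30983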