Kubernetes, commonly abbreviated as k8s, is an automated deployment and container orchestration tool with several major characteristics.
Its main features are:
1. Automatic bin packing: containers are placed and deployed automatically without sacrificing availability
2. Self-healing: e.g. when a container crashes, a new one is quickly started to replace it
3. Automatic horizontal scaling
4. Automatic service discovery and load balancing
5. Automated rollouts and rollbacks
6. Secret and configuration management: application configuration is loaded from the cluster rather than from local files, giving unified configuration (see the sketch after this list)
7. Storage orchestration
8. Batch execution of jobs
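As a minimal sketch of point 6 (the names below are made up purely for illustration), configuration and secrets are created in the cluster and consumed by Pods instead of being read from local config files:
# create a ConfigMap from literal key/value pairs (hypothetical name and keys)
kubectl create configmap app-config --from-literal=log.level=info
# create a Secret for sensitive values (hypothetical)
kubectl create secret generic app-secret --from-literal=db.password=changeme
# inspect them
kubectl get configmap app-config -o yaml
kubectl get secret app-secret -o yaml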
The system is split into master and node roles.
The master is responsible for monitoring and scheduling.
The nodes run the actual Pods (containers) and carry out the work.
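A quick way to see this split on a running cluster (a sketch; it assumes kubectl is already configured, as set up later in this note):
# list the master and worker nodes
kubectl get nodes -o wide
# the control-plane components (apiserver, scheduler, controller-manager, etcd) run as Pods in the kube-system namespace on the master
kubectl get pods -n kube-system -o wide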
systemctl start kubelet
--Run an image and set the container port
kubectl run echo-text --image=echo-text:1.0.0 --replicas=1 --port=8080
--Expose the port: the Service listens on 8888 and forwards to port 8080 inside the container (if kubectl run already set the port, these two flags can be omitted)
kubectl expose deployment echo-text --port=8888 --target-port=8080 --type=NodePort
kubectl expose deployment echo-text --type=NodePort    # commonly used
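To find out which NodePort was assigned and test the service from outside the cluster (a sketch; echo-text follows the example above, and <node-ip>/<node-port> are placeholders to fill in):
kubectl get svc echo-text -o jsonpath='{.spec.ports[0].nodePort}'
curl http://<node-ip>:<node-port>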
--Edit the Service named echo-text (e.g. to pin a specific TCP node port)
kubectl edit svc/echo-text
--Show details of the Service named echo-text
kubectl describe service echo-text
--List running Services
kubectl get service
kubectl get svc
--List Pods
kubectl get pods
--Show port mapping details
kubectl get pod,svc -o wide
--Delete the application (Deployment)
kubectl delete deployment echo-text
--Delete the Service (port mapping)
kubectl delete svc echo-text
--Delete a Pod
kubectl delete pod zyb-79568bf7dd-b4lwd
--Scale the Deployment to 1 replica
kubectl scale deployment zyb --replicas=1
--View logs
Last 100 lines
kubectl logs --tail=100 zyb-79568bf7dd-b4lwd
Follow in real time
kubectl logs -f zyb-79568bf7dd-b4lwd
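Related to the scaling and rollout/rollback features listed at the top, a few more commands that often come in handy (a sketch reusing the zyb Deployment name from above):
# scale out to 3 replicas
kubectl scale deployment zyb --replicas=3
# watch a rolling update until it completes
kubectl rollout status deployment/zyb
# roll back to the previous revision if the new version misbehaves
kubectl rollout undo deployment/zyb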
There are several ways to install k8s:
kubeadm, yum, compiling binaries from source, and so on.
This section uses kubeadm.
The operating system used here is CentOS.
Three virtual machines are needed, and the master node must be given at least two CPU cores.
Install the VMs and CentOS, and assign IP addresses.
Example:
192.168.56.51
192.168.56.52
192.168.56.53
#k8s-master
hostname k8s-master
hostnamectl set-hostname k8s-master
hostname
k8s-master
#k8s-node1
hostname k8s-node1
hostnamectl set-hostname k8s-node1
#k8s-node2
hostname k8s-node2
hostnamectl set-hostname k8s-node2
Configure host name resolution; every server needs this
vi /etc/hosts
192.168.56.51 k8s-master
192.168.56.52 k8s-node1
192.168.56.53 k8s-node2
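A quick check that the names resolve (run from any of the three machines):
ping -c 1 k8s-node1
ping -c 1 k8s-node2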
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable swap
swapoff -a          # temporary
free                # the Swap row should show 0 everywhere
vi /etc/fstab       # permanent (disabling it permanently is recommended)
#/dev/mapper/centos_template-swap swap swap defaults 0 0   # comment out this line
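Instead of editing /etc/fstab by hand, the swap entry can also be commented out with sed (a sketch; it comments every line containing "swap"):
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m             # the Swap row should show 0 everywhere after swapoff -a or a reboot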
Install ntp and set up time synchronization
yum install ntp wget -y
ntpdate ntp.api.bz
Pass bridged IPv4 traffic to iptables chains
cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system     # apply the settings
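If sysctl --system reports the two net.bridge keys as unknown, the br_netfilter kernel module is probably not loaded yet (a hedged note; whether this step is needed depends on the CentOS kernel):
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1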
Install Docker from the Aliyun mirror repository
$ cd /etc/yum.repos.d/
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
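The kubeadm preflight output quoted near the end of this note warns that Docker is using the cgroupfs cgroup driver and recommends systemd. One way to switch it, sketched here as an optional step (the daemon.json key below is a standard Docker option):
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup    # should now report systemd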
Add the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Import the GPG key files
$ wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
$ rpm --import yum-key.gpg
$ wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
$ rpm --import rpm-package-key.gpg
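Before installing, it can be worth confirming that the repository works and the pinned 1.14.0 packages are visible (a sketch):
yum clean all && yum makecache
yum list kubelet kubeadm kubectl --showduplicates | grep 1.14.0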
Install kubeadm, kubelet, and kubectl (note: run on all three servers).
Because versions are updated frequently, a specific version is pinned here:
yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0
# There is no need to start kubelet at this point; just enable it to start on boot
systemctl enable kubelet
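A quick sanity check that all three tools were installed at the pinned version (run on every server):
kubelet --version
kubeadm version
kubectl version --client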
Initialize the Kubernetes master (note: run only on the master node; keep another terminal window open, the token generated here will be needed shortly)
kubeadm init \
  --apiserver-advertise-address=192.168.56.51 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.14.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Note: remember to change the address to the IP of your own master host.
When initialization completes it prints a token; copy it, it will be needed later.
kubeadm join 192.168.56.51:6443 --token fwcgkb.1g65pag18m86w71e \
  --discovery-token-ca-cert-hash sha256:980bd9b4dafd518650e7ccf69c76c81b8c8e55084b470e3daa3ed5dafdc56ba0
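If the token is lost or has expired (by default a bootstrap token expires after 24 hours), a fresh join command can be printed on the master at any time:
kubeadm token create --print-join-command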
Check the images on the master node.
To use the kubectl tool, run the following (note: only on the master):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This creates a config file under $HOME/.kube (note: only on the master):
[root@k8s-master ~]# ls .kube/
cache config http-cache
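With the config file in place, kubectl should be able to talk to the cluster; a quick check (only on the master):
kubectl cluster-info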
View the nodes (the Pod network is not installed yet, so the NotReady status is normal). Note: run only on the master.
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 4m v1.13.3
Install the Pod network add-on (CNI)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Make sure the quay.io registry is reachable.
If the download fails, this mirrored image can be pulled instead:
docker pull lizhenliang/flannel:v0.11.0-amd64
Because the image is hosted abroad and may fail to download, an alternative is to save it to a tar file on another machine, upload that file to the server, and load the image from it.
# Pull the image on another server, then save it as a tar archive
docker pull lizhenliang/flannel:v0.11.0-amd64
docker image save lizhenliang/flannel:v0.11.0-amd64 >flannelv0.11.0-amd64.tar
docker load < flannelv0.11.0-amd64.tar
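The saved tar file also has to reach every machine that will run flannel; copy it over before loading (a sketch; hostnames follow this note's examples):
scp flannelv0.11.0-amd64.tar root@k8s-node1:/root/
scp flannelv0.11.0-amd64.tar root@k8s-node2:/root/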
# Check the newly loaded flannel image
[root@k8s-master ~]# docker images
REPOSITORY                                                         TAG             IMAGE ID
registry.aliyuncs.com/google_containers/kube-proxy                 v1.13.3         98db19758ad4
registry.aliyuncs.com/google_containers/kube-apiserver             v1.13.3         fe242e556a99
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.13.3         0482f6400933
registry.aliyuncs.com/google_containers/kube-scheduler             v1.13.3         3a6f709e97a0
lizhenliang/flannel                                                v0.11.0-amd64   ff281650a721   # this is the image
registry.aliyuncs.com/google_containers/coredns                    1.2.6           f59dcacceff4
registry.aliyuncs.com/google_containers/etcd                       3.2.24          3cab8e1b9802
registry.aliyuncs.com/google_containers/pause                      3.1             da86e6ba6ca1
# Then run this command again and the installation should succeed
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
# If a previous install attempt did not succeed, run the command below to delete it first
# Delete the resources defined in the yaml
[root@k8s-master ~]# kubectl delete -f kube-flannel.yml
podsecuritypolicy.extensions "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
# Check the status: everything is running
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78d4cf999f-pwxc2 1/1 Running 0 51m
coredns-78d4cf999f-qgc94 1/1 Running 0 51m
etcd-k8s-master 1/1 Running 3 50m
kube-apiserver-k8s-master 1/1 Running 5 50m
kube-controller-manager-k8s-master 1/1 Running 4 50m
kube-flannel-ds-amd64-fvgfs 1/1 Running 0 93s
kube-proxy-6sb9v 1/1 Running 3 51m
kube-scheduler-k8s-master 1/1 Running 3 50m
# Check the nodes again: the status is now Ready
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 54m v1.13.3
Join the Kubernetes worker nodes
First run docker images to see which images the node already has:
registry.aliyuncs.com/google_containers/kube-proxy v1.14.0 5cd54e388aba 11 months ago 82.1MB
lizhenliang/flannel v0.11.0-amd64 ff281650a721 13 months ago 52.6MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 13 months ago 52.6MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
lizhenliang/flannel and quay.io/coreos/flannel are the same image, since they share the same image ID.
Some nodes may be missing some of these images; that is fine, just pull them manually so every node has the images it needs.
To add a new node to the cluster, run the kubeadm join command printed by kubeadm init.
Join the nodes to the cluster (note: run only on the two worker nodes; the procedure is the same for both, and the address and token should come from your own kubeadm init output):
kubeadm join 192.168.88.149:6443 --token qlzhpw.fy30lar1jiz11xbw --discovery-token-ca-cert-hash sha256:ed2f22c8a4727f7be52a1495b49e52638e1f79107677daf6722dfa009218f2e8
This step may fail with errors such as being unable to connect to port 8080; re-run the command a few times. If kubectl get node does not show the node afterwards, or it stays in NotReady status, reboot the worker node.
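If a node repeatedly fails to join or never leaves NotReady, one blunt but common recovery is to reset the node and join again (a sketch; kubeadm reset wipes the node's local cluster state):
# on the problematic worker node
kubeadm reset
systemctl restart docker kubelet
# then re-run the kubeadm join command printed by your kubeadm init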
# Check whether the nodes joined successfully (note: run on the master)
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h11m v1.14.0
k8s-node1    Ready    <none>   3h5m   v1.14.0   # joined successfully
k8s-node2    Ready    <none>   3h     v1.14.0   # joined successfully
Test:
# Create a container and expose its port
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
# Verify the Deployment was created and the port is exposed
# View the Pods and Services
[root@k8s-master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-65f88748fd-6859x 1/1 Running 0 133m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.1.0.1 443/TCP 3h31m
service/nginx NodePort 10.1.255.203 80:31118/TCP 3h16m
# Show detailed Pod and Service information
[root@k8s-master ~]# kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-65f88748fd-6859x 1/1 Running 0 133m 10.244.2.2 k8s-node2
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.1.0.1 443/TCP 3h31m
service/nginx NodePort 10.1.255.203 80:31118/TCP 3h16m app=nginx
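Based on the output above, the nginx Service was assigned NodePort 31118 (the port on your cluster will differ); a quick external test against any node's IP, using the addresses from this note's examples:
curl http://192.168.56.52:31118
curl http://192.168.56.53:31118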
Install the Dashboard
If kubeadm init reports that a port is already in use, the following can be done:
[root@localhost etcd]# netstat -lnp|grep 10250
tcp6 0 0 :::10250 :::* LISTEN 2188/kubelet
[root@localhost etcd]# kill -9 2188
[root@localhost kubernetes]# kubeadm init --apiserver-advertise-address=192.168.1.107 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.14.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
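Both preflight errors above (port 10250 in use, /var/lib/etcd not empty) usually mean a previous kubeadm run left state behind. Rather than killing the kubelet by hand, a cleaner option (a sketch; kubeadm reset removes the node's local cluster state, including /var/lib/etcd) is:
kubeadm reset
# then re-run the kubeadm init command shown above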
Reference:
https://www.cnblogs.com/nulige/articles/10941218.html