K8S Part 1: Cluster Setup

Preface: K8S Component Concepts

etcd - A highly available key-value store for shared configuration and service discovery.
flannel - An etcd backed network fabric for containers.
kube-apiserver - Provides the API for Kubernetes orchestration.
kube-controller-manager - Runs the controllers (replication, endpoints, etc.) that enforce the cluster's desired state.
kube-scheduler - Schedules containers on hosts.
kubelet - Processes a container manifest so the containers are launched according to how they are described.
kube-proxy - Provides network proxy services.
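In this setup, etcd, kube-apiserver, kube-controller-manager, and kube-scheduler run on the master, while flannel, kubelet, kube-proxy, and docker run on each node.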

Part 1: Node Planning and Preparation

  1. Provision four servers with a minimal installation of CentOS 7 x64. The architecture is shown below; the corresponding hostnames and addresses are:
d181.contoso.com/172.16.3.181
d182.contoso.com/172.16.3.182
d183.contoso.com/172.16.3.183
d184.contoso.com/172.16.3.184
The gateway and DNS server is 172.16.3.1.
(Architecture diagram: k8s.png)
  2. Disable the firewall
On all nodes:
$ systemctl stop firewalld
$ systemctl disable firewalld
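If any host cannot resolve the d18x names through the DNS server at 172.16.3.1, a minimal /etc/hosts fallback (an assumption about your environment; adjust as needed) on every node is:
$ cat >> /etc/hosts << 'EOF'
172.16.3.181 d181.contoso.com d181
172.16.3.182 d182.contoso.com d182
172.16.3.183 d183.contoso.com d183
172.16.3.184 d184.contoso.com d184
EOF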

Part 2: Installing the Master

Perform the following steps on node d181.

  1. Install the packages (this command also installs docker as a dependency):
$ yum -y install etcd kubernetes
  2. Configure etcd:
$ vim /etc/etcd/etcd.conf
ETCD_NAME="default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
  3. Configure the API server:
$ vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Note: ServiceAccount has been removed from the admission controllers to avoid errors
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
  4. Start the services:
$ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done
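A quick sanity check of the master before continuing (a sketch; /healthz is the standard health endpoint on the insecure apiserver port):
$ etcdctl cluster-health                # should report "cluster is healthy"
$ curl http://127.0.0.1:8080/healthz    # apiserver should answer "ok"
$ kubectl cluster-info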
  5. Define the flannel network:
$ etcdctl mk /atomic.io/network/config '{"Network":"172.27.0.0/16"}'
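Read the key back to confirm the flannel configuration is stored:
$ etcdctl get /atomic.io/network/config
{"Network":"172.27.0.0/16"}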
  6. Test:
$ kubectl get nodes

The result should be empty, because the node services have not been started yet.

Part 3: Installing the Nodes

Perform the following steps on the other nodes (d182, d183, d184).

  1. Install the packages:
$ yum -y install flannel kubernetes
  2. Configure the connection to etcd:
$ vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://d181.contoso.com:2379"
  3. Configure the connection to the apiserver:
$ vim /etc/kubernetes/config
KUBE_MASTER="--master=http://d181.contoso.com:8080"
  4. Configure the kubelet service on d182:
$ vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# override the node name with this host's FQDN
KUBELET_HOSTNAME="--hostname_override=d182.contoso.com"
KUBELET_API_SERVER="--api_servers=http://d181.contoso.com:8080"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause --cluster-domain=contoso.com --cluster-dns=172.16.3.1"
If a local image registry is used instead, the line above becomes:
KUBELET_ARGS="--pod-infra-container-image=docker.contoso.com:5000/pause-amd64:3.0 --cluster-domain=contoso.com --cluster-dns=172.16.3.1"
  5. Configure the kubelet service on d183:
$ vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# override the node name with this host's FQDN
KUBELET_HOSTNAME="--hostname_override=d183.contoso.com"
KUBELET_API_SERVER="--api_servers=http://d181.contoso.com:8080"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause --cluster-domain=contoso.com --cluster-dns=172.16.3.1"
  6. Configure the kubelet service on d184:
$ vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# override the node name with this host's FQDN
KUBELET_HOSTNAME="--hostname_override=d184.contoso.com"
KUBELET_API_SERVER="--api_servers=http://d181.contoso.com:8080"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause --cluster-domain=contoso.com --cluster-dns=172.16.3.1"
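Since the three node configurations differ only in the hostname_override value, the file can also be generated on each node from its own FQDN; a sketch (assumes hostname -f returns the node's d18x name):
$ cat > /etc/kubernetes/kubelet << EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=$(hostname -f)"
KUBELET_API_SERVER="--api_servers=http://d181.contoso.com:8080"
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause --cluster-domain=contoso.com --cluster-dns=172.16.3.1"
EOF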
  7. Start the services:
$ for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done
  8. Check on each node:
$ ip a | grep -E 'flannel|docker' | grep inet
Each node now has two new network interfaces, docker0 and flannel0.

Checking from d181, you should now see that the other nodes are up:
$ kubectl get nodes
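The output should look roughly like this (illustrative; column layout varies by version):
NAME               STATUS    AGE
d182.contoso.com   Ready     1m
d183.contoso.com   Ready     1m
d184.contoso.com   Ready     1m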

Part 4: Creating Pods

Perform the following on node d181.

  1. Write the mysql pod configuration file:
$ mkdir pods
$ cd pods
$ vim mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    name: mysql-pod
spec:
  containers:
    - resources:
        limits:
          cpu: 1
      image: mysql
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: 1a.genius
      ports:
        - containerPort: 3306
          name: mysql
  2. Create the pod:
$ kubectl create -f mysql.yaml
  3. Test
Check which node the pod was created on:
$ kubectl get pods
$ kubectl get pod mysql-pod -o wide
$ kubectl describe pod mysql-pod

Run the following on the node found above:
$ docker ps -a
Note: there are two containers, one for mysql and one called pause. The pause container is the network container: one is launched alongside every Pod, and its only job is to sleep while holding the Pod's network namespace.
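To list only this pod's containers, note that the kubelet embeds the pod name in each container's name (k8s_<container>_<pod>_...):
$ docker ps -a | grep mysql-pod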

Get more detailed information about the pod:
$ kubectl get pod mysql-pod -o yaml
$ kubectl get pod mysql-pod -o json
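A single field can also be extracted with jsonpath output (if your kubectl version supports it; an illustration), e.g. the pod's IP:
$ kubectl get pod mysql-pod -o jsonpath='{.status.podIP}'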

Test stopping the containers manually:
$ docker stop $(docker ps -a -q)
Note: k8s automatically starts replacement containers. However, if you shut down the node hosting the pod and then restart that node, querying from d181 shows that the pod has been destroyed and no new pod is created on another node. This is the problem the Replication Controller in Part 5 solves.

To delete the pod:
$ kubectl delete pod mysql-pod

Part 5: Creating an RC

Note: creating an RC automatically creates pods, so delete the pod from the previous part before running these commands.
Purpose: a Replication Controller ensures that a specified number of pod replicas are running in the Kubernetes cluster at all times.

Run the following on node d181.

  1. Write the mysql RC file:
$ vim mysql-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-controller
  labels:
    name: mysql-controller
spec:
  replicas: 2
  selector:
    name: mysql-pod
  template:
    metadata:
      labels:
        name: mysql-pod
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 1a.genius
        ports:
        - containerPort: 3306
          name: mysql
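Note: spec.selector (name: mysql-pod) must match the labels in the pod template; this label match is how the controller identifies the pods it manages.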
  2. Create the RC:
$ kubectl create -f mysql-controller.yaml
  3. Query the RC:
$ kubectl get rc
$ kubectl get rc mysql-controller
$ kubectl describe rc mysql-controller
  4. Query the pods:
$ kubectl get pods
$ kubectl get pods -o wide
Note: you can see that pods were created on two nodes.
  5. Scale the replica count up:
$ kubectl scale rc mysql-controller --replicas=3
$ kubectl get pods -o wide
  6. Scale the replica count down:
$ kubectl scale rc mysql-controller --replicas=1
$ kubectl get pods -o wide
  7. Deletion tests
Delete a container (the -f flag is needed because the container is running):
$ docker rm -f <container ID>
Note: when a container is deleted, a replacement is started immediately.
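A way to observe this in real time is to watch the pod list; the RESTARTS counter increments as the kubelet replaces the container:
$ kubectl get pods -w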

Delete the RC without affecting its pods:
$ kubectl delete rc mysql-controller

Delete the RC and all of its pods:
$ kubectl delete -f mysql-controller.yaml

Part 6: Creating Services (on the master node)

Note: the pods should already exist before you create the service.
Each Pod is assigned its own IP address, but that IP disappears when the Pod is destroyed. A Service solves this by exposing a group of pods through a single stable IP.

Run the following on node d181.

  1. Write the mysql service file:
$ vim mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    name: mysql-service
spec:
  externalIPs:
    - 172.16.3.182
  ports:
    - port: 3306
  selector:
    name: mysql-pod
    
Note: the externalIP can be any address, as long as it is a real address of a machine in the cluster.
  2. Create the service:
$ kubectl create -f mysql-service.yaml
  3. Query:
$ kubectl get services
$ kubectl get endpoints
Note: mysql-service shows two addresses: a 10.254.* internal cluster IP and the external IP.
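The output of kubectl get services should look roughly like this (illustrative; the cluster IP is assigned from 10.254.0.0/16):
NAME            CLUSTER-IP    EXTERNAL-IP    PORT(S)    AGE
mysql-service   10.254.x.x    172.16.3.182   3306/TCP   1m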
  4. Client test
$ mysql -uroot -p -h172.16.3.182
MySQL [(none)]> show variables like '%version%';
  5. Delete the service
$ kubectl delete -f mysql-service.yaml
