A Quick Guide to Installing k8s on Ubuntu 16, with an Application Example (Django)

Installing Kubernetes is widely regarded as one of the trickiest problems in ops and DevOps. Because Kubernetes runs on a wide range of platforms and operating systems, there are many factors to consider during installation.

In this post I will introduce Rancher Kubernetes Engine (RKE), a new lightweight tool for installing Kubernetes on bare metal, virtual machines, and public or private clouds. RKE is a Kubernetes installer written in Go. It is extremely easy to use: with very little preparation you get a lightning-fast Kubernetes installation. You can download RKE from its official GitHub repository. RKE runs on Linux and macOS machines.

System environment

Vultr VPS hosts
OS: Ubuntu 16.04 x86_64

Node    IP              Memory
master 140.xx.xx.181 1G
node1 140.xx.xx.164 512M
node2 140.xx.xx.96 512M

Preparation

master

  • 1. Generate an SSH key on master and add it to authorized_keys

RKE works by connecting to each server over SSH and setting up a tunnel to the Docker socket on that server, which means the SSH user must have access to the Docker engine on the server.

# ssh-keygen -t rsa

Append master's public key id_rsa.pub to ~/.ssh/authorized_keys on master, node1, and node2, making sure master can SSH into itself as well as into node1 and node2:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
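For the two worker nodes, ssh-copy-id does the same thing in one step; a quick sketch, using the example IPs above:

# copy the master's public key to each node (run on master)
ssh-copy-id root@140.xx.xx.164
ssh-copy-id root@140.xx.xx.96
# verify that passwordless login works
ssh root@140.xx.xx.164 hostname
ssh root@140.xx.xx.96 hostname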
  • 2. Download kubelet and kubectl
$ wget -q --show-progress --https-only --timestamping "https://storage.googleapis.com/kubernetes-release/release/v1.8.8/bin/linux/amd64/kubelet" -O /usr/local/bin/kubelet
$ wget -q --show-progress --https-only --timestamping "https://storage.googleapis.com/kubernetes-release/release/v1.8.8/bin/linux/amd64/kubectl" -O /usr/local/bin/kubectl
$ chmod +x /usr/local/bin/kubelet /usr/local/bin/kubectl
  • 3. Download rke
$ wget -q --show-progress --https-only --timestamping "https://github.com/rancher/rke/releases/download/v0.1.3/rke_linux-amd64" -O ~/rke
$ chmod +x ~/rke

Check that rke was installed successfully:

./rke --version

master, node1, node2

RKE is a container-based installer, which means Docker must be installed on the remote servers; currently it requires Docker 1.12 on each server.
Install Docker on all three nodes (note that on Ubuntu the engine package is docker.io; the package named docker is unrelated):

# apt install docker.io

The official script curl -sSL https://get.docker.com/ | sh installs the latest Docker release, and it is not certain that RKE supports it; see the pinning sketch below.
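A minimal sketch for pinning a version from apt instead: list the versions apt knows about and install one explicitly. The exact version string depends on what the Ubuntu 16.04 repositories currently carry, so the placeholder must be filled in from the list:

# show the docker.io versions available from apt
apt-cache madison docker.io
# install a specific one, e.g. a 1.12.x/1.13.x build if available
apt-get install -y docker.io=<version-from-the-list>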

Getting started with RKE

Cluster configuration file (master)

By default, RKE looks for a file named cluster.yml, which contains information about the remote servers and the services that will run on them.
cluster.yml

---
nodes:
  - address: 140.xx.xx.181
    user: root
    role:
    - controlplane
    - etcd
    ssh_key_path: /root/.ssh/id_rsa
  - address: 140.xx.xx.164
    user: root
    role:
    - worker
  - address: 140.xx.xx.96
    user: root
    role:
    - worker

services:
  etcd:
    image: quay.io/coreos/etcd:latest
  kube-api:
    image: rancher/k8s:v1.8.3-rancher2
  kube-controller:
    image: rancher/k8s:v1.8.3-rancher2
  scheduler:
    image: rancher/k8s:v1.8.3-rancher2
  kubelet:
    image: rancher/k8s:v1.8.3-rancher2
  kubeproxy:
    image: rancher/k8s:v1.8.3-rancher2

# supported plugins are:
# flannel
# calico
# canal
# weave
#
# If you are using calico on AWS or GCE, use the network plugin config option:
# 'calico_cloud_provider: aws'
# or
# 'calico_cloud_provider: gce'
# network:
#   plugin: calico
#   options:
#     calico_cloud_provider: aws
#
# To specify flannel interface, you can use the 'flannel_iface' option:
# network:
#   plugin: flannel
#   options:
#     flannel_iface: eth1

network:
  plugin: flannel
  options:

# At the moment, the only authentication strategy supported is x509.
# You can optionally create additional SANs (hostnames or IPs) to add to
#  the API server PKI certificate. This is useful if you want to use a load balancer
#  for the control plane servers, for example.
# If set to true, rke won't fail when unsupported Docker version is found
ignore_docker_version: false

kubernetes_version: v1.8.9-rancher1-1

# If set, this is the cluster name that will be used in the kube config file
# Default value is "local"
cluster_name: mycluster

The cluster configuration file contains a list of nodes. Each node should contain at least the following values:

  • address – the SSH IP/FQDN of the server
  • user – the SSH user used to connect to the server
  • role – the list of roles for the host: worker, controlplane, or etcd

Three types of roles can be used:

  • etcd – these hosts store the cluster's data.
  • controlplane – these hosts run the Kubernetes API server and the other components required to run k8s.
  • worker – these are the hosts your applications are deployed on.

The other main section is services, which contains information about the Kubernetes components that will be deployed on the remote servers.
The default service images are:

etcd: rancher/etcd:v3.0.17
kubernetes: rancher/k8s:v1.8.9-rancher1-1
alpine: alpine:latest
nginx_proxy: rancher/rke-nginx-proxy:v0.1.1
cert_downloader: rancher/rke-cert-deployer:v0.1.1
kubernetes_services_sidecar: rancher/rke-service-sidekick:v0.1.0
kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.5
dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.5
kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.5
kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
flannel: rancher/coreos-flannel:v0.9.1
flannel_cni: rancher/coreos-flannel-cni:v0.2.0

Running RKE (master)

To run RKE, first make sure the cluster.yml file is in the same directory, then run:

# ./rke up

To point at a different configuration file, run:

# ./rke up --config /tmp/config.yml

The output will look like this:

INFO[0000] Building Kubernetes cluster
INFO[0000] [ssh] Checking private key
...
...
...
INFO[0075] Finished building Kubernetes cluster successfully
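On success, RKE writes a kubeconfig file into the working directory (named .kube_config_cluster.yml in the 0.1.x releases; the exact name has varied across versions). Point kubectl at it before running the commands below:

# tell kubectl to use the kubeconfig RKE just generated
export KUBECONFIG=$PWD/.kube_config_cluster.yml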

Check that the deployment succeeded:

root@master:~# kubectl get nodes
NAME            STATUS    ROLES               AGE       VERSION
140.xx.xx.164   Ready     worker              12d       v1.8.3-rancher1
140.xx.xx.181   Ready     controlplane,etcd   12d       v1.8.3-rancher1
140.xx.xx.96    Ready     worker              12d       v1.8.3-rancher1

root@master:~# kubectl get pods -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
kube-dns-6f7666d48c-htp4p              3/3       Running   0          1d
kube-dns-autoscaler-6bbfff8c44-h4phq   1/1       Running   1          12d
kube-flannel-d6hf4                     2/2       Running   2          12d
kube-flannel-s22w4                     2/2       Running   2          12d
kube-flannel-snwtm                     2/2       Running   2          12d

A k8s usage example

Take a web application as the example:

Django backend + PostgreSQL + Redis

To use k8s you need a private image registry. Basic application images can be pulled directly from Docker Hub, but your own project images, the ones containing your code, must be built and pushed to your own registry in advance; k8s cannot build images from a Dockerfile on the fly the way docker-compose can.

Make sure all three nodes (master, node1, node2) can pull images from your private registry; a minimal sketch follows.
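A sketch of such a setup, assuming the registry runs without TLS on the master (140.xx.xx.181) and each node's Docker daemon is told to trust it as insecure:

# on master: run a plain v2 registry on port 5000
docker run -d --restart=always --name registry -p 5000:5000 registry:2

# on every node: trust the plain-HTTP registry, then restart Docker
cat > /etc/docker/daemon.json <<'EOF'
{ "insecure-registries": ["140.xx.xx.181:5000"] }
EOF
systemctl restart docker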

Project structure

├── build.sh
├── Dockerfile
├── requirements.txt
└── src                  # src is the Django project root directory

Example Dockerfile for building the project image:

FROM python:3.6

# If running in China, switch apt to a mirror
RUN curl -s ifconfig.co/json | grep "China" > /dev/null && \
    curl -s http://mirrors.163.com/.help/sources.list.jessie > /etc/apt/sources.list || true

# Install a few development tools; they also make debugging on the server easier
RUN apt-get update;\
    apt-get install -y vim gettext postgresql-client;\
    true

COPY . /opt/demo
WORKDIR /opt/demo/src

# Check whether we are in China first; if so, install dependencies via a mirror
RUN curl -s ifconfig.co/json | grep "China" > /dev/null && \
    pip install -r /opt/demo/requirements.txt -i https://pypi.doubanio.com/simple --trusted-host pypi.doubanio.com || \
    pip install -r /opt/demo/requirements.txt

RUN mkdir /opt/logging /opt/running

Build and push the image
build.sh

docker build -t 127.0.0.1:5000/backend:v1.0 . && docker push 127.0.0.1:5000/backend:v1.0

(Pushing as 127.0.0.1:5000/backend works because the registry runs on master itself; the nodes later pull the same repository as 140.xx.xx.181:5000/backend:v1.0.)

k8s configuration files

backend.yaml

# backend
# dns: backend-service.demo.svc.cluster.local
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: demo
spec:
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: backend-pod

# ingress load balancing
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend-ingress
  namespace: demo
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: backend-service
              servicePort: 80

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-backend
  namespace: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend-pod
    spec:
      containers:
        - name: demo-backend
          image: 140.xx.xx.181:5000/backend:v1.0   # your backend image address (private registry)
          imagePullPolicy: Always

          ports:
            - containerPort: 8000
          command: ["/bin/sh"]
          args: ["-c", "python manage.py runserver 0.0.0.0:8000"]
          # python manage.py runserver 0.0.0.0:8000 is for testing only; use uwsgi etc. in production (see the sketch after this manifest)

      initContainers:
        - name: wait-redis
          image: busybox
          command: ['sh', '-c', 'until nslookup redis.demo.svc.cluster.local; do echo waiting for redis service; sleep 2; done;']
        - name: wait-postgresql
          image: busybox
          command: ['sh', '-c', 'until nslookup postgresql.demo.svc.cluster.local; do echo waiting for postgresql service; sleep 2; done;']

---
apiVersion: batch/v1
kind: Job
metadata:
  name: sync-db
spec:
  template:
    metadata:
      name: sync-db
      labels:
        app: backend-sync-db-job
    spec:
      containers:
      - name: backend-db-migrate
        image: 140.xx.xx.181:5000/backend:v1.0
        command:
        - "/bin/sh"
        - "-c"
        - "python manage.py makemigrations && python manage.py migrate"
      restartPolicy: Never
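A sketch of a production-style container command for the Deployment above, assuming the Django project module is named demo (so demo/wsgi.py exists under src/) and uwsgi is listed in requirements.txt:

          command: ["/bin/sh"]
          args: ["-c", "uwsgi --http 0.0.0.0:8000 --module demo.wsgi --master --processes 2"]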

postgres.yaml

# postgresql
# dns: postgresql.demo.svc.cluster.local
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: demo
spec:
  ports:
    - port: 5432
      targetPort: postgresql-port
  selector:
    app: postgresql-pod

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-postgresql
  namespace: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgresql-pod
    spec:
      nodeName: 140.xx.xx.164   # pin scheduling to node1 (140.xx.xx.164) so the data persists on one host
      containers:
        - name: demo-postgresql
          image: postgres:9.6.3
          imagePullPolicy: Always

          env:
            - name: POSTGRES_DB
              value: demo
            - name: POSTGRES_USER
              value: root
            - name: POSTGRES_PASSWORD
              value: devpwd

          ports:
            - name: postgresql-port
              containerPort: 5432

          volumeMounts:
            - name: postgresql-storage
              mountPath: /var/lib/postgresql

      volumes:
        - name: postgresql-storage
          hostPath:
            path: /data/postgresql   # use a hostPath mount so the data persists on the node

redis.yaml

# redis
# dns: redis.demo.svc.cluster.local
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: demo
spec:
  ports:
    - port: 6379        # Redis default port; Django connects to REDIS_HOST without specifying one
      targetPort: redis-port
  selector:
    app: redis-pod

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-redis
  namespace: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-pod
    spec:
      containers:
        - name: demo-redis
          image: redis:3.0.7
          imagePullPolicy: Always

          ports:
            - name: redis-port
              containerPort: 6379

Django backend configuration

The PostgreSQL and Redis settings in Django's settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'demo',
        'USER': 'root',
        'PASSWORD': 'devpwd',
        'HOST': 'postgresql.demo.svc.cluster.local',
        'PORT': '',
    }
}

REDIS_HOST = "redis.demo.svc.cluster.local"
# For client settings that do not resolve DNS names themselves, you can resolve manually in the settings file, e.g.:
# import socket
# REDIS_HOST = socket.gethostbyname("redis.demo.svc.cluster.local")

Note: DNS-based service discovery requires kube-dns, which RKE installs by default; a quick check is shown below.
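A throwaway pod can confirm that service names resolve inside the cluster (this assumes the demo namespace and the redis service above already exist):

kubectl -n demo run -it --rm dns-test --image=busybox --restart=Never -- nslookup redis.demo.svc.cluster.local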

Deploy (master)


kubectl create namespace demo  # create the namespace

kubectl -n demo apply -f .  # apply backend.yaml, postgres.yaml, and redis.yaml

Check the result:

kubectl -n demo get pods
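The service and ingress objects can be inspected the same way:

kubectl -n demo get svc,ingress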

Then visit 140.xx.xx.181/admin/

Summary

k8s is best suited to stateless, microservice-style applications: floating pods and dynamic service scaling are a huge advantage for containerized workloads.
For data-centric applications with no clustering of their own, such as MySQL and other databases, data persistence remains the awkward part.

Reference: https://www.kubernetes.org.cn/3280.html
