Using nginx to load balance a cluster-mode minio deployment

Preface

The previous article, Deploying minio in cluster mode on Kubernetes, described how to deploy minio in a Kubernetes cluster, but it left one issue open: no Service was created, so the minio cluster had no load balancing. This article describes how to put nginx in front of minio to provide it.

Deploying nginx

I won't repeat the nginx deployment steps here; see my earlier article:
A full walkthrough of deploying Nginx on Kubernetes
One thing to note: this cluster runs Kubernetes 1.18.1, so apiVersion in deployment.yaml must be set to apps/v1.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-server
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      volumes:
      - name: etc
        hostPath:
          path: /etc/nginx/            # nginx config lives on the host and is mounted in
      - name: data
        hostPath:
          path: /usr/share/nginx/html
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - mountPath: /etc/nginx/
          name: etc
        - mountPath: /usr/share/nginx/html
          name: data
      hostNetwork: true                # expose nginx directly on the node's network (node IP:port)
      nodeSelector:
        nginx-server: "true"           # only schedule onto nodes labeled nginx-server=true
      imagePullSecrets:
        - name: default-secret
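
Before applying this manifest, the node that should run nginx must carry the label the nodeSelector expects. A minimal sketch, assuming the manifest above is saved as nginx-deployment.yaml and dev-learn-77 is the target node:

# Label the node so the nodeSelector matches, then create the Deployment
kubectl label node dev-learn-77 nginx-server=true
kubectl apply -f nginx-deployment.yaml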

Configuring nginx

[root@dev-learn-77 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
minio-2cgbg              1/1     Running   0          60m   172.22.21.79   dev-learn-79   <none>           <none>
minio-5jzql              1/1     Running   0          55m   172.22.21.77   dev-learn-77   <none>           <none>
minio-cxdzl              1/1     Running   0          60m   172.22.21.78   dev-learn-78   <none>           <none>
nginx-564d778c56-2df5v   1/1     Running   0          65s   172.22.21.77   dev-learn-77   <none>           <none>

As shown above, nginx has been deployed successfully on node 77.
Because the Deployment mounts /etc/nginx from the host, edit /etc/nginx/nginx.conf directly on that node:


#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       8088;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;
        
        ignore_invalid_headers off;
        proxy_buffering off;

        location / {
            root   html;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass  http://webhost; 
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.*$ {
        #    proxy_pass   localhost
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    upstream webhost {
        server 172.22.21.77:9000;
        server 172.22.21.78:9000;
        server 172.22.21.79:9000;
    }
    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

Load balancing is done with the upstream block, which spreads requests across the three minio nodes. The proxy_set_header directives must also be added so that minio sees the original request headers (in particular Host); otherwise requests fail signature verification with errors like the following:


<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
  <Key>1557203545.jpeg</Key>
  <BucketName>supportportal</BucketName>
  <Resource>/supportportal/1557203545.jpeg</Resource>
  <RequestId>16045466839CAE3F</RequestId>
  <HostId>93831f66-bad9-4e9b-a1f1-826b691cfd83</HostId>
</Error>
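
Before restarting, you can sanity-check the edited configuration from inside the running container; a quick check, using the pod name from the earlier kubectl output:

# Validate the syntax of the mounted nginx.conf inside the running pod
kubectl exec nginx-564d778c56-2df5v -- nginx -t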

Then restart nginx (just delete the pod; the Deployment will recreate it automatically):

[root@dev-learn-77 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
minio-2cgbg              1/1     Running   0          68m
minio-5jzql              1/1     Running   0          63m
minio-cxdzl              1/1     Running   0          68m
nginx-564d778c56-2df5v   1/1     Running   0          8m47s
[root@dev-learn-77 ~]# kubectl delete pod nginx-564d778c56-2df5v

The minio cluster can now be reached at 172.22.21.77:8088; the username and password are the same as minio's.
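
Besides the web console, the proxy can be exercised from the command line. A quick check, assuming minio's standard /minio/health/live endpoint and, for the mc part, a reasonably recent MinIO client with placeholder credentials:

# Probe minio's liveness endpoint through the nginx proxy; HTTP 200 means a backend answered
curl -I http://172.22.21.77:8088/minio/health/live

# Point mc at the load-balanced endpoint and list buckets through it
mc alias set minio-lb http://172.22.21.77:8088 ACCESS_KEY SECRET_KEY
mc ls minio-lb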

From now on, applications can reach the minio cluster through http://172.22.21.77:8088, so a single minio node going down no longer breaks them. And with nginx in front, load balancing is taken care of as long as nginx itself stays healthy. If a single nginx instance is a concern, scale it out with kubectl scale deployment nginx --replicas 3 and label the other two machines with nginx-server so the extra pods can be scheduled there, as sketched below.
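
A sketch of that scale-out, assuming dev-learn-78 and dev-learn-79 are the two additional nodes:

# Label the remaining nodes so the nodeSelector matches, then run three nginx replicas
kubectl label node dev-learn-78 nginx-server=true
kubectl label node dev-learn-79 nginx-server=true
kubectl scale deployment nginx --replicas=3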
