OpenResty lua-resty-balancer Dynamic Load Balancing



lua-resty-balancer: https://github.com/openresty/lua-resty-balancer.git

ngx.balancer: https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md

lua-resty-balancer Overview

            

This library is still under early development and is still experimental.
* lua-resty-balancer is still at an early stage of development and should be treated as experimental

           

chash and roundrobin

Both resty.chash and resty.roundrobin have the same apis
* resty.chash and resty.roundrobin share the same API

-- require the resty.chash and resty.roundrobin modules
local resty_chash = require "resty.chash"
local resty_roundrobin = require "resty.roundrobin"

           

new: create an object instance

Syntax: obj, err = class.new(nodes)

Instantiates an object of this class. The class value is returned 
by the call require "resty.chash".
* Instantiates an object of this class

The id should be table.concat({host, string.char(0), port}) like the 
nginx chash does, when we need to keep consistency with nginx chash.
* When consistency with the nginx chash is required, the id must be built the same way nginx builds it:
* id ==> table.concat({host, string.char(0), port})

The id can be any string value when we do not need to keep consistency 
with nginx chash. The weight should be a non negative integer
* When consistency with the nginx chash is not needed, the id can be any string value
* The weight must be a non-negative integer
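
A minimal sketch (the ids and weights below are made up) that creates a chash object from arbitrary string ids, i.e. without the nginx-compatible id encoding:

    local resty_chash = require "resty.chash"

    -- arbitrary string ids mapped to non-negative integer weights
    local nodes = {
        ["server-a"] = 2,
        ["server-b"] = 1,
    }

    local chash, err = resty_chash:new(nodes)
    if not chash then
        ngx.log(ngx.ERR, "failed to create chash object: ", err)
    end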

       

reinit: reinitialize the object with new nodes

Syntax: obj:reinit(nodes)

Reinit the chash obj with the new nodes
* Reinitializes the chash object with the new nodes
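
A short sketch, reusing the hypothetical chash object from above, that swaps in a completely new node table:

    -- drop the old topology and rebuild the hash circle from scratch
    chash:reinit({
        ["server-a"] = 1,
        ["server-c"] = 3,
    })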

                 

set: set the weight of an id

Syntax: obj:set(id, weight)

Set weight of the id
* Sets the weight of the given id

             

delete: remove an id

Syntax: obj:delete(id)

Delete the id
* Removes the given id

                 

incr: increase the weight of an id

Syntax: obj:incr(id, weight?)

Increments weight for the id by the step value weight(default to 1)
* Increases the weight of the id by the step value weight (defaults to 1)

         

decr: decrease the weight of an id

Syntax: obj:decr(id, weight?)

Decrease weight for the id by the step value weight(default to 1).
* Decreases the weight of the id by the step value weight (defaults to 1); a combined sketch for set, delete, incr and decr follows below
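
A small sketch (still using the hypothetical ids from above) exercising the four weight-management calls:

    chash:set("server-a", 4)     -- weight of "server-a" is now 4
    chash:incr("server-c")       -- weight + 1 (default step)
    chash:incr("server-c", 2)    -- weight + 2
    chash:decr("server-a", 3)    -- weight - 3
    chash:delete("server-c")     -- remove the node from the hash circle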

                

find: look up an id by key

Syntax: id, index = obj:find(key)

Find an id by the key, same key always return the same id in the same obj.
* Finds an id by the key
* Within the same object, the same key always returns the same id

The second return value index is the index in the chash circle of 
the hash value of the key
* index is the position of the key's hash value on the consistent-hash circle
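
A sketch of a lookup; the key can be anything that should map stably to the same backend (a user id, the client address, and so on):

    -- the same key always maps to the same id for a given object
    local id, index = chash:find("user-42")
    ngx.log(ngx.INFO, "picked id: ", id, ", circle index: ", index)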

              

next: get the next id after the current one

Syntax: id, new_index = obj:next(old_index)

If we have chance to retry when the first id(server) doesn't 
work well, then we can use obj:next to get the next id.
* If retries are possible and the first id (server) is not working, obj:next can be used to get the next id

The new id may be the same as the old one
* The new id may be the same as the old one
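
A hypothetical retry loop built on find/next; try_backend stands in for whatever health/connect check is used, and note that next may return the same id it was given:

    local id, index = chash:find(ngx.var.arg_key)
    for _ = 1, 2 do                     -- allow at most two more candidates
        if try_backend(id) then         -- try_backend is a made-up helper
            break
        end
        id, index = chash:next(index)   -- step to the next id on the circle
    end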

           

Example

    lua_package_path "/path/to/lua-resty-chash/lib/?.lua;;";
    lua_package_cpath "/path/to/lua-resty-chash/?.so;;";

    init_by_lua_block {
        local resty_chash = require "resty.chash"
        local resty_roundrobin = require "resty.roundrobin"

        local server_list = {
            ["127.0.0.1:1985"] = 2,
            ["127.0.0.1:1986"] = 2,
            ["127.0.0.1:1987"] = 1,
        }

        -- XX: we can do the following steps to keep consistency with nginx chash
        local str_null = string.char(0)

        local servers, nodes = {}, {}
        for serv, weight in pairs(server_list) do
            -- XX: we can just use serv as the id when we don't need to keep consistency with nginx chash
            local id = string.gsub(serv, ":", str_null)

            servers[id] = serv
            nodes[id] = weight
        end

        local chash_up = resty_chash:new(nodes)

        package.loaded.my_chash_up = chash_up
        package.loaded.my_servers = servers

        local rr_up = resty_roundrobin:new(server_list)
        package.loaded.my_rr_up = rr_up
    }

    # consistent hashing
    upstream backend_chash {
        server 0.0.0.1;     # just an invalid placeholder address; balancer_by_lua sets the real peer
        balancer_by_lua_block {
            local b = require "ngx.balancer"

            local chash_up = package.loaded.my_chash_up
            local servers = package.loaded.my_servers

            -- we can balancer by any key here
            local id = chash_up:find(ngx.var.arg_key)
            local server = servers[id]

            assert(b.set_current_peer(server))
        }
    }

    # round robin
    upstream backend_rr {
        server 0.0.0.1;
        balancer_by_lua_block {
            local b = require "ngx.balancer"

            local rr_up = package.loaded.my_rr_up

            -- Note that Round Robin picks the first server randomly
            local server = rr_up:find()

            assert(b.set_current_peer(server))
        }
    }

    server {
        location /chash {
            proxy_pass http://backend_chash;
        }

        location /roundrobin {
            proxy_pass http://backend_rr;
        }
    }

ngx.balancer Overview

            

This Lua module is currently considered experimental
* ngx.balancer is currently considered experimental

              

set_current_peer: set the peer for the current attempt

Syntax: ok, err = balancer.set_current_peer(host, port)

Context: balancer_by_lua*

Sets the peer address (host and port) for the current backend 
query (which may be a retry).
* Sets the peer (host and port) used for the current backend request (which may be a retry)

Domain names in host do not make sense. You need to use OpenResty 
libraries like lua-resty-dns to obtain IP address(es) from all the 
domain names before entering the balancer_by_lua* handler (for example, 
you can perform DNS lookups in an earlier phase like access_by_lua* 
and pass the results to the balancer_by_lua* handler via ngx.ctx).
* Domain names in host are not resolved here; resolve them first with an OpenResty library such as lua-resty-dns
* e.g. do the DNS lookup in an earlier phase like access_by_lua* and pass the result to balancer_by_lua* via ngx.ctx (see the sketch below)
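
A sketch of that pattern, assuming a placeholder domain backend.example.com, nameserver 8.8.8.8 and port 8080: the access phase resolves the name with lua-resty-dns and stashes the address in ngx.ctx, and the balancer phase feeds it to set_current_peer:

    http {
        upstream backend_dyn {
            server 0.0.0.1;   # placeholder, never used directly
            balancer_by_lua_block {
                local balancer = require "ngx.balancer"
                -- use the address resolved earlier in the access phase
                assert(balancer.set_current_peer(ngx.ctx.backend_ip, 8080))
            }
        }

        server {
            listen 80;

            location / {
                access_by_lua_block {
                    local resolver = require "resty.dns.resolver"
                    local r, err = resolver:new{ nameservers = {"8.8.8.8"} }
                    if not r then
                        ngx.log(ngx.ERR, "failed to create resolver: ", err)
                        return ngx.exit(500)
                    end
                    local answers, qerr = r:query("backend.example.com",
                                                  { qtype = r.TYPE_A })
                    if not answers or answers.errcode then
                        ngx.log(ngx.ERR, "dns query failed: ",
                                qerr or answers.errstr)
                        return ngx.exit(502)
                    end
                    -- hand the resolved address to the balancer phase
                    ngx.ctx.backend_ip = answers[1].address
                }
                proxy_pass http://backend_dyn;
            }
        }
    }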

            

set_more_tries: set the number of additional retries

Syntax: ok, err = balancer.set_more_tries(count)

Context: balancer_by_lua*

Sets the tries performed when the current attempt (which may be a retry) 
fails (as determined by directives like proxy_next_upstream, depending 
on what particular nginx upstream module you are currently using). Note 
that the current attempt is excluded in the count number set here.
* Sets how many extra tries are allowed after the current attempt fails
* The current attempt itself is not included in the count

Please note that, the total number of tries in a single downstream 
request cannot exceed the hard limit configured by directives like proxy_next_upstream_tries, depending on what concrete nginx upstream 
module you are using. When exceeding this limit, the count value will 
get reduced to meet the limit and the second return value will be the 
string "reduced tries due to limit", which is a warning, while the 
first return value is still a true value
* The number of tries set here cannot exceed the hard limit configured by directives like proxy_next_upstream_tries
* When it does, the count is reduced to that limit and the second return value is the warning string "reduced tries due to limit",
* while the first return value is still true (a combined sketch follows the get_last_failure section below)

           

get_last_failure: get details of the last failed attempt

Syntax: state_name, status_code = balancer.get_last_failure()

Context: balancer_by_lua*

Retrieves the failure details about the previous failed attempt 
(if any) when the next_upstream retrying mechanism is in action. 
When there was indeed a failed previous attempt, it returned a 
string describing that attempt's state name, as well as an integer 
describing the status code of that attempt.
* Retrieves details about the previous failed attempt (if any)

Possible state names are as follows:
   * "next" Failures due to bad status codes sent from the backend 
      server. The origin's response is same though, which means the 
      backend connection can still be reused for future requests.
   * "failed" Fatal errors while communicating to the backend server 
      (like connection timeouts, connection resets, and etc). In this 
       case, the backend connection must be aborted and cannot get reused.
* Possible state names:
 * next: failure caused by a bad status code; the backend connection can still be reused
 * failed: fatal communication error; the backend connection must be aborted and cannot be reused

Possible status codes are those HTTP error status codes like 502 and 504.
For stream module, status_code will always be 0 (ngx.OK) and is provided 
for compatibility reasons.
* Possible status codes are HTTP error codes such as 502 and 504
* In the stream module, status_code is always 0 (ngx.OK) and is only provided for compatibility

When the current attempt is the first attempt for the current downstream 
request (which means there is no previous attempts at all), this method 
always returns a single nil value
* On the first attempt of a downstream request (when there are no previous attempts), this method returns a single nil value
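
A hedged sketch (the peer address and try count are placeholders) that ties set_more_tries and get_last_failure together; on the first attempt get_last_failure returns nil, and on retries it tells us why the previous peer failed:

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        local state, status = balancer.get_last_failure()
        if state == nil then
            -- first attempt: allow up to two extra tries on failure
            local ok, err = balancer.set_more_tries(2)
            if not ok then
                ngx.log(ngx.ERR, "failed to set more tries: ", err)
            elseif err then
                ngx.log(ngx.WARN, err)   -- "reduced tries due to limit"
            end
        else
            ngx.log(ngx.WARN, "previous attempt failed: ", state,
                    ", status: ", status)
        end

        -- placeholder peer; a real policy would avoid the failed one
        assert(balancer.set_current_peer("127.0.0.1", 8080))
    }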

                         

set_timeouts: set the upstream timeouts

Syntax: ok, err = balancer.set_timeouts(connect_timeout, send_timeout, read_timeout)

Context: balancer_by_lua*

Sets the upstream timeout (connect, send and read) in seconds for the 
current and any subsequent backend requests (which might be a retry).
* Sets the upstream connect/send/read timeouts, in seconds, for the current and any subsequent backend requests (which might be retries)

If you want to inherit the timeout value of the global nginx.conf 
configuration (like proxy_connect_timeout), then just specify the nil 
value for the corresponding argument (like the connect_timeout argument).
* Pass nil for an argument to inherit the value of the corresponding nginx.conf directive (e.g. proxy_connect_timeout)

Zero and negative timeout values are not allowed.
* Zero and negative values are not allowed

You can specify millisecond precision in the timeout values by using 
floating point numbers like 0.001 (which means 1ms).
* Millisecond precision is possible by using floating point values, e.g. 0.001 (1ms)

Note: send_timeout and read_timeout are controlled by the same config 
proxy_timeout for ngx_stream_proxy_module. To keep API compatibility, 
this function will use max(send_timeout, read_timeout) as the value 
for setting proxy_timeout.
* For ngx_stream_proxy_module, send_timeout and read_timeout are both controlled by the single proxy_timeout directive
* To keep the API compatible, max(send_timeout, read_timeout) is used as the proxy_timeout value

Returns true when the operation is successful; 
returns nil and a string describing the error otherwise.
* Returns true on success
* Returns nil and an error string on failure

This only affects the current downstream request. It is not 
a global change
* This only affects the current downstream request; it is not a global change
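
A brief sketch (the values are illustrative): nil falls back to the matching nginx.conf directive, and fractional seconds give millisecond precision:

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        -- 100 ms connect timeout, inherit the send timeout, 2 s read timeout
        local ok, err = balancer.set_timeouts(0.1, nil, 2)
        if not ok then
            ngx.log(ngx.ERR, "failed to set timeouts: ", err)
        end

        assert(balancer.set_current_peer("127.0.0.1", 8080))
    }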

             

recreate_request: recreate the upstream request

Syntax: ok, err = balancer.recreate_request()

Context: balancer_by_lua*

Recreates the request buffer for sending to the upstream server. 
This is useful, for example if you want to change a request header 
field to the new upstream server on balancer retries.
* Recreates the request buffer that will be sent to the upstream server
* Useful, for example, for changing a request header when retrying against a new upstream peer

Normally this does not work because the request buffer is created 
once during upstream module initialization and won't be regenerated 
for subsequent retries. However you can use proxy_set_header My-Header 
$my_header and set the ngx.var.my_header variable inside the balancer 
phase. Calling balancer.recreate_request() after updating a header 
field will cause the request buffer to be re-generated and the 
My-Header header will thus contain the new value.
* Configure proxy_set_header My-Header $my_header
* Set the ngx.var.my_header variable inside the balancer phase
* Call balancer.recreate_request() to regenerate the request buffer
* The My-Header header will then carry the new value

Warning: because the request buffer has to be recreated and such 
allocation occurs on the request memory pool, the old buffer has 
to be thrown away and will only be freed after the request finishes. 
Do not call this function too often or memory leaks may be noticeable. 
Even so, a call to this function should be made only if you know the 
request buffer must be regenerated, instead of unconditionally in 
each balancer retries
* The new buffer is allocated from the request memory pool; the old one is only freed when the request finishes
* Do not call this function too often, or the extra memory use may become noticeable
* Call it only when the request buffer really has to be regenerated, not unconditionally on every balancer retry
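
A sketch of the header-rewriting pattern just described; My-Header/$my_header are placeholder names, and the variable has to be declared in the proxying location so it can be written from the balancer phase:

    # in the location that proxies to this upstream:
    #     set $my_header "";
    #     proxy_set_header My-Header $my_header;

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        if balancer.get_last_failure() ~= nil then
            -- only on retries: update the header and rebuild the
            -- request buffer so the new value is actually sent
            ngx.var.my_header = "retried"
            local ok, err = balancer.recreate_request()
            if not ok then
                ngx.log(ngx.ERR, "failed to recreate request: ", err)
            end
        end

        assert(balancer.set_current_peer("127.0.0.1", 8080))
    }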

              

Example

http {
    upstream backend {
        server 0.0.0.1;   # just an invalid address as a place holder

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"

            -- well, usually we calculate the peer's host and port
            -- according to some balancing policies instead of using
            -- hard-coded values like below
            local host = "127.0.0.2"
            local port = 8080

            local ok, err = balancer.set_current_peer(host, port)
            if not ok then
                ngx.log(ngx.ERR, "failed to set the current peer: ", err)
                return ngx.exit(500)
            end
        }

        keepalive 10;  # connection pool
    }

    server {
        # this is the real entry point
        listen 80;

        location / {
            # make use of the upstream named "backend" defined above:
            proxy_pass http://backend/fake;
        }
    }

    server {
        # this server is just for mocking up a backend peer here...
        listen 127.0.0.2:8080;

        location = /fake {
            echo "this is the fake backend peer...";
        }
    }
}

Usage Example

            

Build an image that contains lua-resty-balancer

# create a container
docker run -it -d --name open-balancer2 lihu12344/openresty


# enter the container and install lua-resty-balancer
huli@hudeMacBook-Pro ~ % docker exec -it open-balancer2 bash
[root@3c83c9031939 /]# cd /usr/local/openresty/bin

 * search for lua-resty-balancer
[root@3c83c9031939 bin]# opm search lua-resty-balancer
jojohappy/lua-resty-balancer                      A generic consistent hash implementation for OpenResty/Lua

 * install lua-resty-balancer
[root@3c83c9031939 bin]# opm install jojohappy/lua-resty-balancer
* Fetching jojohappy/lua-resty-balancer  
  Downloading https://opm.openresty.org/api/pkg/tarball/jojohappy/lua-resty-balancer-0.0.1.opm.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6935  100  6935    0     0  18531      0 --:--:-- --:--:-- --:--:-- 18592
Package jojohappy/lua-resty-balancer 0.0.1 installed successfully under /usr/local/openresty/site/ .

 * list the installed packages
[root@3c83c9031939 bin]# opm list
jojohappy/lua-resty-balancer                                 0.0.1


# commit the image: lihu12344/openresty:2
huli@hudeMacBook-Pro ~ % docker commit open-balancer2 lihu12344/openresty:2
sha256:ee9fb9cbac217e3a905a019c51b771c5555f4b13ab7ba4ce41c68d4bba191cbd

              

nginx.conf

#user  nobody;
#worker_processes 1;

pcre_jit on;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    client_body_temp_path /var/run/openresty/nginx-client-body;
    proxy_temp_path       /var/run/openresty/nginx-proxy;
    fastcgi_temp_path     /var/run/openresty/nginx-fastcgi;
    uwsgi_temp_path       /var/run/openresty/nginx-uwsgi;
    scgi_temp_path        /var/run/openresty/nginx-scgi;

    sendfile        on;
    keepalive_timeout  65;
    include /etc/nginx/conf.d/*.conf;

    init_by_lua_block {
        local resty_roundrobin = require "resty.roundrobin"

        local server_list = {
            ["127.0.0.1:8001"] = 2,
            ["127.0.0.1:8002"] = 2,
            ["127.0.0.1:8003"] = 1,
        }

        local rr_up = resty_roundrobin:new(server_list)
        package.loaded.my_rr_up = rr_up
    }

    upstream backend_rr {
        server 0.0.0.1;
        balancer_by_lua_block {
            local b = require "ngx.balancer"

            local rr_up = package.loaded.my_rr_up

            local server = rr_up:find()
            assert(b.set_current_peer(server))
        }
    }

    server {
        location /roundrobin {
            proxy_pass http://backend_rr;
        }
    }
}

        

default.conf

server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/local/openresty/nginx/html;
        index  index.html index.htm;
    }

    location /roundrobin {
        proxy_pass http://backend_rr;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/local/openresty/nginx/html;
    }

}

server {
    # this server is just for mocking up a backend peer here...
    listen 127.0.0.1:8001;

    location /roundrobin {
        echo "127.0.0.1:8001 /roundrobin ==> gtlx";
    }
}

server {
    # this server is just for mocking up a backend peer here...
    listen 127.0.0.1:8002;

    location /roundrobin {
        echo "127.0.0.1:8002 /roundrobin ==> gtlx";
    }

}

server {
    # this server is just for mocking up a backend peer here...
    listen 127.0.0.1:8003;

    location /roundrobin {
        echo "127.0.0.1:8003 /roundrobin ==> gtlx";
    }
}

           

Create the container

docker run -it -d --net fixed --ip 172.18.0.102 -p 8002:80 \
-v /Users/huli/lua/openresty/balancer/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf \
-v /Users/huli/lua/openresty/balancer/default.conf:/etc/nginx/conf.d/default.conf \
--name open-balancer3 lihu12344/openresty:2

                     

Testing

# servers 8001, 8002 and 8003 are picked round-robin according to their weights (2:2:1)
huli@hudeMacBook-Pro ~ % curl localhost:8001/roundrobin
127.0.0.1:8001 /roundrobin ==> gtlx

huli@hudeMacBook-Pro ~ % curl localhost:8001/roundrobin
127.0.0.1:8002 /roundrobin ==> gtlx

huli@hudeMacBook-Pro ~ % curl localhost:8001/roundrobin
127.0.0.1:8001 /roundrobin ==> gtlx

huli@hudeMacBook-Pro ~ % curl localhost:8001/roundrobin
127.0.0.1:8002 /roundrobin ==> gtlx

huli@hudeMacBook-Pro ~ % curl localhost:8001/roundrobin
127.0.0.1:8003 /roundrobin ==> gtlx

huli@hudeMacBook-Pro ~ % curl localhost:8001/roundrobin
127.0.0.1:8001 /roundrobin ==> gtlx
