Today I reworked the data-collection API into nginx + Lua + Redis: the endpoint ingests data into Redis, and a cron script processes it afterwards.
Environment
OpenResty + Redis
2-core / 4 GB, CentOS 6.4
nginx configuration
server {
    listen 80;
    server_name t.com;
    rewrite ^/collect /LUA/ last;
    rewrite ^(.*)$ /index.php?_RW_=$1 last;
    location /LUA/ {
        default_type text/html;
        content_by_lua_file /web/test/lua/collect.lua;
    }
    location / {
        root /web/test/wwwroot;
    }
}
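The two rewrite rules route /collect to the Lua handler and everything else to index.php. As an illustrative sketch (in Python, purely to show the matching order; nginx itself evaluates the rules in sequence and re-runs location matching on `last`):

```python
import re

# Hypothetical re-implementation of the two rewrite rules above,
# only to illustrate which URIs land where.
def route(uri: str) -> str:
    if re.match(r"^/collect", uri):
        return "/LUA/"                      # handled by collect.lua
    m = re.match(r"^(.*)$", uri)
    return "/index.php?_RW_=" + m.group(1)  # everything else goes to PHP

print(route("/collect"))    # /LUA/
print(route("/api/test2"))  # /index.php?_RW_=/api/test2
```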
collect.lua, the request-handling script
-- Debug helper: print a marker and end the response early.
local function breakpoint()
    ngx.say('breakpoint')
    ngx.eof()
end

-- Parse a multipart/form-data body into a key/value table.
local function getPostFormArgv()
    -- Split _str on every occurrence of separator (plain-text match).
    local function explode(_str, separator)
        local pos, arr = 1, {}
        for st, sp in function() return string.find(_str, separator, pos, true) end do
            table.insert(arr, string.sub(_str, pos, st - 1))
            pos = sp + 1
        end
        table.insert(arr, string.sub(_str, pos))
        return arr
    end
    local receive_headers = ngx.req.get_headers()
    ngx.req.read_body()
    local body = ngx.req.get_body_data()
    -- "multipart/form-data; boundary=" is 30 characters long, so the
    -- boundary value starts at offset 31 of the Content-Type header.
    local separator = '--' .. string.sub(receive_headers["content-type"], 31)
    local tmp = explode(body, separator)
    table.remove(tmp, 1)  -- drop the preamble before the first boundary
    table.remove(tmp)     -- drop the trailing "--" part after the last boundary
    local argv = {}
    for _, v in pairs(tmp) do
        local _, _, key, val = string.find(v, 'Content%-Disposition: form%-data; name="([^"]+)"(.*)')
        argv[key] = string.gsub(val, "[\r\n]", '')
    end
    return argv
end
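The multipart parsing above splits the raw body on the boundary string, drops the preamble and the trailing `--` part, then extracts each field's name and value. A minimal Python sketch of the same idea; the boundary value `XYZ` and the field names are made up for the illustration:

```python
import re

def parse_multipart(body: str, content_type: str) -> dict:
    # The boundary follows "multipart/form-data; boundary=" (30 chars),
    # same hardcoded offset the Lua code relies on.
    separator = "--" + content_type[30:]
    parts = body.split(separator)
    parts = parts[1:-1]  # drop the preamble and the trailing "--\r\n" part
    argv = {}
    for part in parts:
        m = re.search(r'Content-Disposition: form-data; name="([^"]+)"(.*)',
                      part, re.S)
        if m:
            argv[m.group(1)] = re.sub(r"[\r\n]", "", m.group(2))
    return argv

ct = "multipart/form-data; boundary=XYZ"
body = ('--XYZ\r\nContent-Disposition: form-data; name="a"\r\n\r\n1\r\n'
        '--XYZ\r\nContent-Disposition: form-data; name="b"\r\n\r\n2\r\n'
        '--XYZ--\r\n')
print(parse_multipart(body, ct))  # {'a': '1', 'b': '2'}
```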
-- Parse a JSON body; return an empty table when the body is missing or
-- is not valid JSON (a bare json.decode would raise and trigger a 500).
local function getPostJsonArgv()
    local json = require('cjson')
    ngx.req.read_body()
    local body = ngx.req.get_body_data()
    if body == nil then
        return {}
    end
    local ok, argv = pcall(json.decode, body)
    if not ok then
        return {}
    end
    return argv
end
-- Dispatch on Content-Type: multipart form data vs. JSON.
local function getPostArgv()
    local receive_headers = ngx.req.get_headers()
    local argv = {}
    if receive_headers["content-type"] and string.sub(receive_headers["content-type"], 1, 20) == "multipart/form-data;" then
        argv = getPostFormArgv()
    else
        argv = getPostJsonArgv()
    end
    return argv
end
local argv

-- Read the POST parameters
argv = getPostArgv()
if argv.running == nil then
    ngx.say('{"code":101}')
    return
end

-- Pull extra data from the request headers
local headers = ngx.req.get_headers()
argv.mac = headers.mac

-- Push the record onto a Redis list
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000) -- 1 sec
local ok, err = red:connect("127.0.0.1", 6380) -- Redis listens on 6380 here
if not ok then
    ngx.say("failed to connect: ", err)
    return
end
local json = require('cjson')
local json_string = json.encode(argv)
red:lpush('ng_running', json_string)
red:close() -- under load, red:set_keepalive() would pool the connection instead
ngx.say('{"code":100}')
ngx.eof()
Nginx API for Lua https://github.com/openresty/lua-nginx-module#nginx-api-for-lua
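On the other side, the cron job drains the ng_running list and processes each JSON record. A minimal Python sketch of that consumer; the record fields are made up for illustration, and a plain in-memory list stands in for Redis so the sketch runs without a server (with redis-py, `pop` would be `lambda: r.rpop('ng_running')` — LPUSH in the Lua script plus RPOP here gives FIFO order):

```python
import json

def decode_record(raw: bytes) -> dict:
    """Decode one JSON record pushed by collect.lua."""
    return json.loads(raw)

def drain(pop):
    """Pop records until the list is empty; `pop` returns bytes or None."""
    records = []
    while True:
        raw = pop()
        if raw is None:
            break
        records.append(decode_record(raw))
    return records

# Stand-in for the Redis list, so the sketch is runnable without a server:
queue = [b'{"running":"1","mac":"aa:bb"}', b'{"running":"2","mac":"cc:dd"}']
print(drain(lambda: queue.pop(0) if queue else None))
# → a list of two dicts, in the order they were pushed
```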
Load testing
First use curl to check that the code above works:
curl http://t.com/test -H "uid:1" -H "from:curl" -X POST -d'{"a":"1","b":"2"}'
Then benchmark with ab:
ab -c 500 -n 1000 -H "uid:1" -H "from:curl" -p post.text http://t.com/test
# Results
Server Software: openresty/1.13.6.1
Server Hostname: 127.0.0.1
Server Port: 80
Document Path: /api/test
Document Length: 13 bytes
Concurrency Level: 500
Time taken for tests: 0.480 seconds
Complete requests: 2000
Failed requests: 0
Total transferred: 352000 bytes
Total body sent: 2094000
HTML transferred: 26000 bytes
Requests per second: 4166.27 [#/sec] (mean)
Time per request: 120.011 [ms] (mean)
Time per request: 0.240 [ms] (mean, across all concurrent requests)
Transfer rate: 716.08 [Kbytes/sec] received
4259.85 kb/s sent
4975.92 kb/s total
For comparison, my PHP collector script, which likewise just receives data and writes it to Redis, reaches a throughput of about 260 req/s, versus about 4200 req/s for the Lua version. The PHP run:
Server Software: openresty/1.13.6.1
Server Hostname: 127.0.0.1
Server Port: 80
Document Path: /api/test2
Document Length: 24 bytes
Concurrency Level: 500
Time taken for tests: 7.617 seconds
Complete requests: 2000
Failed requests: 0
Total transferred: 386000 bytes
Total body sent: 2104000
HTML transferred: 48000 bytes
Requests per second: 262.59 [#/sec] (mean)
Time per request: 1904.128 [ms] (mean)
Time per request: 3.808 [ms] (mean, across all concurrent requests)
Transfer rate: 49.49 [Kbytes/sec] received
269.77 kb/s sent
319.26 kb/s total
ab performance metrics
Reference: https://www.cnblogs.com/yueminghai/p/6412254.html
A few metrics matter most during load testing:
1. Throughput (Requests per second)
A quantitative measure of the server's concurrent processing capacity, in reqs/s: the number of requests handled per unit of time at a given concurrency level. The maximum number of requests handled per unit of time at some concurrency level is called the maximum throughput.
Remember: throughput is defined relative to a concurrency level. That statement carries two implications:
a. throughput depends on the number of concurrent users;
b. at different concurrency levels, throughput is generally different.
Formula: total requests divided by the time taken to complete them, i.e.
Requests per second = Complete requests / Time taken for tests
This number reflects the machine's overall capacity; higher is better.
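Plugging the Lua run's numbers into this formula reproduces ab's own figure:

```python
# Figures taken from the Lua run's ab output above.
complete_requests = 2000
time_taken = 0.480  # seconds

rps = complete_requests / time_taken
print(round(rps, 2))  # ~4166.67; ab prints 4166.27 because it uses the unrounded elapsed time
```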
2. Concurrent connections (the number of concurrent connections)
The number of requests the server has accepted at a given moment; put simply, a session.
3. Concurrent users (Concurrency Level)
Be careful to distinguish this from concurrent connections: one user may hold several sessions, i.e. connections, at the same time. Under HTTP/1.1, IE7 opens 2 concurrent connections, IE8 opens 6, and Firefox 3 opens 4, so the concurrent-user count is the connection count divided by that per-browser base.
4. Mean user request latency (Time per request)
Formula: time taken to complete all requests divided by (total requests / concurrency level), i.e.
Time per request = Time taken for tests / (Complete requests / Concurrency Level)
5. Mean server request latency (Time per request: across all concurrent requests)
Formula: time taken to complete all requests divided by total requests, i.e.
Time per request (across all concurrent requests) = Time taken for tests / Complete requests
As you can see, this is the reciprocal of throughput.
It also equals the mean user request latency divided by the concurrency level:
Time per request / Concurrency Level
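Both latency lines in the Lua run's ab output follow directly from these formulas:

```python
# Figures taken from the Lua run's ab output above.
time_taken = 0.480  # seconds
requests = 2000
concurrency = 500

user_latency = time_taken / (requests / concurrency) * 1000  # ms per user request
server_latency = time_taken / requests * 1000                # ms per request overall

print(round(user_latency, 2))    # 120.0  (ab: 120.011, from the unrounded time)
print(round(server_latency, 2))  # 0.24   (ab: 0.240)
# The two are related by the concurrency level:
print(round(user_latency / concurrency, 2))  # 0.24, same as server_latency
```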
Handling ab errors
apr_socket_recv: Connection reset by peer (104)
Fix: add the -r flag so ab does not exit on socket receive errors.
Reference: https://www.cnblogs.com/pengyusong/p/5737915.html