First install Docker, then create a Grafana container and a Prometheus container inside Docker; the master machine runs the data-collection script prometheus_exporter.py.
Install Docker
https://www.jianshu.com/writer#/notebooks/49095554/notes/82193903
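The linked note walks through the installation; as a rough sketch, Docker's convenience script is one common way to install it on a Linux server (this assumes internet access and root privileges, adjust for your distribution):
curl -fsSL https://get.docker.com | sh
systemctl enable docker
systemctl start docker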
Create the Prometheus container in Docker; pay attention to how the Prometheus configuration file is filled in
https://www.jianshu.com/writer#/notebooks/48908430/notes/82197167/preview
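As a reference for the configuration mentioned above, a minimal prometheus.yml that scrapes the Locust master's exporter endpoint, together with one way of starting the container, might look like this; the host path /opt/prometheus/prometheus.yml and the IP 10.0.50.98 are placeholders for your own setup:
# prometheus.yml: scrape the locust master's /export/prometheus endpoint
global:
  scrape_interval: 5s
scrape_configs:
  - job_name: locust
    metrics_path: /export/prometheus
    static_configs:
      - targets: ['10.0.50.98:8089']
# start the container with the config file mounted
docker run -d --name prometheus -p 9090:9090 -v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus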
Create the Grafana container in Docker
https://www.jianshu.com/writer#/notebooks/48908430/notes/81365865
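A minimal way to start the Grafana container (the linked note has the full walkthrough, including volume mounts if you want dashboards to survive container recreation):
docker run -d --name grafana -p 3000:3000 grafana/grafana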
Install Locust and the libraries required by prometheus_exporter.py on the Linux host
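The exporter script below imports six, flask and prometheus_client on top of locust itself, so an install along these lines should cover it (flask is already pulled in as a Locust dependency):
pip3 install locust prometheus_client six flask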
Why do we need this performance-metrics platform?
1. Locust has its own web UI for performance metrics, but the data is not persisted: refresh the page and it is gone, so you cannot compare metrics across several test runs.
2. Grafana's visualizations simply look better.
How do you use it once it is set up?
Start the Grafana service and the Prometheus service, and run the prometheus_exporter.py data-collection script as the master. The worker machines then only need to point at the master's IP when they start their load scripts, and the master will collect the metrics from all workers.
The Locust web UI is reachable at server-IP:8089
The Prometheus service is reachable at server-IP:9090
The Grafana dashboards are reachable at server-IP:3000
Prometheus scrapes the /export/prometheus endpoint exposed by the prometheus_exporter.py script, collecting the Locust data and storing it in its time-series database; Grafana then uses Prometheus as a data source and plots the data with a dashboard template. Once the load test has started, you can open server-IP:8089/export/prometheus to check whether Locust data is being collected.
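For example, a quick check from the shell (10.0.50.98 stands in for the master server's IP; the metric names come from the exporter script below):
curl -s http://10.0.50.98:8089/export/prometheus | grep locust_user_count
# should print something like: locust_user_count 50.0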
# coding: utf8
import six
from itertools import chain

from flask import request, Response
from locust import stats as locust_stats, runners as locust_runners
from locust import User, task, events
from prometheus_client import Metric, REGISTRY, exposition

# This locustfile adds an external web endpoint to the locust master, and makes it serve as a prometheus exporter.
# Run it as a normal locustfile, then point prometheus to it.
# locust -f prometheus_exporter.py --master
# Lots of code taken from [mbolek's locust_exporter](https://github.com/mbolek/locust_exporter), thx mbolek!


class LocustCollector(object):
    registry = REGISTRY

    def __init__(self, environment, runner):
        self.environment = environment
        self.runner = runner

    def collect(self):
        # collect metrics only when locust runner is spawning or running.
        runner = self.runner

        if runner and runner.state in (locust_runners.STATE_SPAWNING, locust_runners.STATE_RUNNING):
            stats = []
            for s in chain(locust_stats.sort_stats(runner.stats.entries), [runner.stats.total]):
                stats.append({
                    "method": s.method,
                    "name": s.name,
                    "num_requests": s.num_requests,
                    "num_failures": s.num_failures,
                    "avg_response_time": s.avg_response_time,
                    "min_response_time": s.min_response_time or 0,
                    "max_response_time": s.max_response_time,
                    "current_rps": s.current_rps,
                    "median_response_time": s.median_response_time,
                    "ninetieth_response_time": s.get_response_time_percentile(0.9),
                    # only total stats can use current_response_time, so sad.
                    # "current_response_time_percentile_95": s.get_current_response_time_percentile(0.95),
                    "avg_content_length": s.avg_content_length,
                    "current_fail_per_sec": s.current_fail_per_sec
                })

            # perhaps StatsError.parse_error in e.to_dict only works in python slave, take notice!
            errors = [e.to_dict() for e in six.itervalues(runner.stats.errors)]

            metric = Metric('locust_user_count', 'Swarmed users', 'gauge')
            metric.add_sample('locust_user_count', value=runner.user_count, labels={})
            yield metric

            metric = Metric('locust_errors', 'Locust requests errors', 'gauge')
            for err in errors:
                metric.add_sample('locust_errors', value=err['occurrences'],
                                  labels={'path': err['name'], 'method': err['method'],
                                          'error': err['error']})
            yield metric

            is_distributed = isinstance(runner, locust_runners.MasterRunner)
            if is_distributed:
                metric = Metric('locust_slave_count', 'Locust number of slaves', 'gauge')
                metric.add_sample('locust_slave_count', value=len(runner.clients.values()), labels={})
                yield metric

            metric = Metric('locust_fail_ratio', 'Locust failure ratio', 'gauge')
            metric.add_sample('locust_fail_ratio', value=runner.stats.total.fail_ratio, labels={})
            yield metric

            metric = Metric('locust_state', 'State of the locust swarm', 'gauge')
            metric.add_sample('locust_state', value=1, labels={'state': runner.state})
            yield metric

            stats_metrics = ['avg_content_length', 'avg_response_time', 'current_rps', 'current_fail_per_sec',
                             'max_response_time', 'ninetieth_response_time', 'median_response_time',
                             'min_response_time',
                             'num_failures', 'num_requests']

            for mtr in stats_metrics:
                mtype = 'gauge'
                if mtr in ['num_requests', 'num_failures']:
                    mtype = 'counter'
                metric = Metric('locust_stats_' + mtr, 'Locust stats ' + mtr, mtype)
                for stat in stats:
                    # Aggregated stat's method label is None, so name it as Aggregated
                    # locust has changed name Total to Aggregated since 0.12.1
                    if 'Aggregated' != stat['name']:
                        metric.add_sample('locust_stats_' + mtr, value=stat[mtr],
                                          labels={'path': stat['name'], 'method': stat['method']})
                    else:
                        metric.add_sample('locust_stats_' + mtr, value=stat[mtr],
                                          labels={'path': stat['name'], 'method': 'Aggregated'})
                yield metric


@events.init.add_listener
def locust_init(environment, runner, **kwargs):
    print("locust init event received")
    if environment.web_ui and runner:
        @environment.web_ui.app.route("/export/prometheus")
        def prometheus_exporter():
            registry = REGISTRY
            encoder, content_type = exposition.choose_encoder(request.headers.get('Accept'))
            if 'name[]' in request.args:
                # getlist so that several name[] query parameters are honoured
                registry = REGISTRY.restricted_registry(request.args.getlist('name[]'))
            body = encoder(registry)
            return Response(body, content_type=content_type)
        REGISTRY.register(LocustCollector(environment, runner))


class Dummy(User):
    @task(20)
    def hello(self):
        pass
Start the Locust master, which is only used to collect data; this script lives on the server
locust --master -f prometheus_exporter.py
Start the Locust worker, specifying the IP of the server the master is running on. Do not forget to open the default port 5557 in the firewall (a firewalld sketch follows the command below); if master and worker run on the same machine, the host does not need to be specified.
locust -f demo.py --worker --master-host=10.0.50.98
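If the server uses firewalld, opening the master's port 5557 might look like the following (adjust for iptables, ufw or a cloud security group as appropriate):
firewall-cmd --zone=public --add-port=5557/tcp --permanent
firewall-cmd --reload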
The Locust console should now show one connected worker
Start the load test
To check whether the master is collecting data, visit server-IP:8089/export/prometheus
Configure Grafana
Choose Prometheus as the data source
Bind the data source address (see the provisioning sketch after these steps)
Import a dashboard template
Choose template 12081
A successful setup ends up looking like this
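Instead of clicking through the data source form, the same binding can also be done with a Grafana provisioning file; this is only a sketch, and the file path and the IP 10.0.50.98 are placeholders for your own setup:
# /etc/grafana/provisioning/datasources/prometheus.yml (inside the grafana container)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://10.0.50.98:9090
    isDefault: true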