Since OpenTSDB does not ship an official Java SDK, calling OpenTSDB from Java has to go through HTTP requests. The following first introduces some characteristics of the HTTP API:
Write behavior
Body of a write request
{
    "metric": "self.test",
    "timestamp": 1456123787,
    "value": 20,
    "tags": {
        "host": "web1"
    }
}
Response when the summary parameter is set:
{
    "failed": 0,
    "success": 1
}
Response when the details parameter is set:
{
    "errors": [],
    "failed": 0,
    "success": 1
}
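For completeness, the write call above can be issued from plain JDK classes without any client library; a minimal sketch, assuming OpenTSDB listens at http://localhost:4242 (the buildPutBody helper and the base URL are illustrative, not part of OpenTSDB):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PutExample {
    // Build the JSON body for a single data point, matching the /api/put format shown above.
    static String buildPutBody(String metric, long timestamp, long value,
                               String tagKey, String tagValue) {
        return "{\"metric\":\"" + metric + "\",\"timestamp\":" + timestamp
                + ",\"value\":" + value
                + ",\"tags\":{\"" + tagKey + "\":\"" + tagValue + "\"}}";
    }

    // POST the body to /api/put?details and return the HTTP status code.
    static int post(String baseUrl, String body) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(baseUrl + "/api/put?details").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) {
        System.out.println(buildPutBody("self.test", 1456123787L, 20, "host", "web1"));
    }
}
```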
/api/query: either GET or POST can be used; POST is recommended.
{
    "start": 1456123705,       // start time of the query
    "end": 1456124985,         // end time of the query
    "globalAnnotation": false, // whether global annotations are returned in the result
    "noAnnotations": false,    // whether annotations are returned in the result
    "msResolution": false,     // whether points are returned at millisecond resolution; if false,
                               // points within the same second are aggregated into that second's
                               // final value using the specified aggregator
    "showTSUIDs": true,        // whether tsuids are included in the result
    "showQuery": true,         // whether the matching sub-query is echoed in the result
    "showSummary": false,      // whether summary timing information about this query is included
    "showStats": false,        // whether detailed timing information about this query is included
    "delete": false,           // caution: if set to true, every data point matching this query is deleted
    "queries": [
        // sub-queries; an array of independent sub-queries
    ]
}
{
    "metric": "JVM_Heap_Memory_Usage_MB", // metric to query
    "aggregator": "sum",                  // aggregation function to use
    "downsample": "30s-avg",              // downsampling interval and function
    "tags": {                             // tag combination; deprecated since OpenTSDB 2.0,
                                          // the filters field below is recommended instead
        "host": "server01"
    },
    "filters": [],                        // tag filters; Filters are covered in detail below
    "explicitTags": false,                // whether the result contains only tags that appear in the filters
    "rate": false,                        // whether to convert the result into a rate
    "rateOption": {}                      // rate-related parameters, described later
}
{
    "aggregator": "sum", // aggregation function to use
    "tsuids": [          // set of tsuids to query; a tsuid can be thought of
                         // as the id of a time series
        "123",
        "456"
    ]
}
[
    {
        "metric": "self.test",
        "tags": {},
        "aggregateTags": [
            "host"
        ],
        "dps": {
            "1456123785": 10,
            "1456123786": 10
        }
    },
    {
        "metric": "self.test",
        "tags": {
            "host": "web1"
        },
        "aggregateTags": [],
        "dps": {
            "1456123784": 10,
            "1456123786": 15
        }
    }
]
The two kinds of time in an OpenTSDB query:
Since the highest resolution at which time series are stored is milliseconds, the bytes occupied by millisecond- and second-resolution timestamps are:
Note: with millisecond-resolution storage, queries return second-resolution data by default (the points within each second are downsampled and aggregated using the aggregation specified in the query to form the final result); set the msResolution parameter to get millisecond-resolution data back.
Filter details:
The format of a Filter:
{
    "type": "wildcard", // filter type; OpenTSDB's built-in filters can be used directly, and custom
                        // filter types can be added through plugins
    "tagk": "host",     // the tag key to filter on
    "filter": "*",      // filter expression, applied to tag values; different filter types
                        // support different expression forms
    "groupBy": true     // whether to group (group by) the filtered results; defaults to false,
                        // i.e. the results are aggregated into a single time series
}
Details of a few useful built-in filter types:
literal_or and iliteral_or:
Accept a single string, or several strings joined with "|";
the meaning is the same as "WHERE host IN ('server01','server02','server03')" in SQL;
iliteral_or is the case-insensitive variant of literal_or and is used the same way.
Usage:
{
    "type": "literal_or",
    "tagk": "host",
    "filter": "server01|server02|server03",
    "groupBy": false
}
not_literal_or and not_iliteral_or:
{
    "type": "not_literal_or",
    "tagk": "host",
    "filter": "server01|server02",
    "groupBy": false
}
wildcard and iwildcard:
{
    "type": "wildcard",
    "tagk": "host",
    "filter": "server*",
    "groupBy": false
}
regexp:
{
    "type": "regexp",
    "tagk": "host",
    "filter": ".*",
    "groupBy": false
}
not_key:
{
    "type": "not_key",
    "tagk": "host",
    "filter": "",
    "groupBy": false
}
The aggregator combines the query results: data points that share the same Unix timestamp are aggregated and the computed value is returned. For example, if tags is left empty and there are two data points at 1456123705, one with host=web1 and one with host=web2, both with value 10, then the value returned for that timestamp under sum is 20.
"queries": [
    {
        "aggregator": "sum",
        "metric": "self.test",
        "filters": [
            {
                "type": "wildcard",
                "tagk": "host",
                "filter": "web*"
            }
        ]
    }
]
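To illustrate the sum aggregation described above (host=web1 and host=web2 both reporting 10 at 1456123705 yields 20), here is a toy client-side sketch; it mimics the idea, not OpenTSDB's actual implementation:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SumAggregation {
    // Sum the values of several series at each shared timestamp,
    // which is what aggregator=sum does once tags are dropped.
    static Map<Long, Long> sum(List<Map<Long, Long>> series) {
        Map<Long, Long> out = new HashMap<>();
        for (Map<Long, Long> s : series) {
            for (Map.Entry<Long, Long> e : s.entrySet()) {
                out.merge(e.getKey(), e.getValue(), Long::sum);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Long, Long> web1 = new HashMap<>();
        web1.put(1456123705L, 10L);
        Map<Long, Long> web2 = new HashMap<>();
        web2.put(1456123705L, 10L);
        System.out.println(sum(Arrays.asList(web1, web2))); // {1456123705=20}
    }
}
```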
OpenTSDB provides interpolation to fill in missing data during aggregation; the following four types are currently supported:
Some commonly used aggregator functions and the interpolation type each one uses:
Aggregator | Description | Interpolation |
---|---|---|
avg | mean of the points | Linear Interpolation |
count | number of points | ZIM |
dev | standard deviation | Linear Interpolation |
min | minimum value | Linear Interpolation |
max | maximum value | Linear Interpolation |
sum | sum of the points | Linear Interpolation |
zimsum | sum of the points | ZIM |
p99 | 99th percentile | Linear Interpolation |
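As a concrete picture of what Linear Interpolation in the table means: when one series has no point at a timestamp where another does, its value there is estimated on the straight line between its two neighboring points. A minimal sketch (the helper name is ours):

```java
public class Interpolation {
    // Estimate the value at time t on the straight line through (t0, v0) and (t1, v1).
    static double linear(long t0, double v0, long t1, double v1, long t) {
        return v0 + (v1 - v0) * (double) (t - t0) / (t1 - t0);
    }

    public static void main(String[] args) {
        // A series with points 10 (at t=0) and 20 (at t=10) contributes 15 at t=5.
        System.out.println(linear(0, 10.0, 10, 20.0, 5)); // 15.0
    }
}
```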
In short, downsampling aggregates the data inside a specified time window before returning it. For example, to return one averaged point per 5-minute interval, sample with 5m-avg. Breaking down the 5m-avg parameter:
it means every 5 minutes forms one sampling window, and the average of each window is returned as a point. In the result, consecutive points are 5 minutes apart.
"queries": [
    {
        "aggregator": "sum",
        "metric": "self.test",
        "downsample": "5m-avg",
        "tags": {
            "host": "web1"
        }
    }
]
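The 5m-avg behavior can be reproduced client-side by bucketing timestamps into 300-second windows and averaging each bucket; a rough sketch under that reading, not OpenTSDB's server code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class Downsample {
    // Average the points that fall into each fixed-size window
    // (300 seconds for "5m-avg"); buckets are keyed by the window start.
    static Map<Long, Double> average(Map<Long, Double> points, long windowSecs) {
        Map<Long, double[]> acc = new HashMap<>(); // bucket start -> {sum, count}
        for (Map.Entry<Long, Double> e : points.entrySet()) {
            long bucket = e.getKey() - (e.getKey() % windowSecs);
            double[] a = acc.computeIfAbsent(bucket, k -> new double[2]);
            a[0] += e.getValue();
            a[1] += 1;
        }
        Map<Long, Double> out = new TreeMap<>();
        for (Map.Entry<Long, double[]> e : acc.entrySet()) {
            out.put(e.getKey(), e.getValue()[0] / e.getValue()[1]);
        }
        return out;
    }
}
```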
Missing time-series data also affects the outcome of downsampling, so OpenTSDB provides fill policies for downsampling; they are as follows:
In a sub-query, two fields relate to rate conversion:
{
    "type": "metrics", // type of string to look up; one of metrics, tagk, tagv
    "q": "sys",        // string prefix
    "max": 10          // maximum number of strings returned by this request
}
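The same lookup can also be issued as a GET request with query-string parameters; a sketch of building such a URL (the base URL is an assumption):

```java
public class SuggestUrl {
    // Build the GET form of the /api/suggest request shown above.
    static String build(String baseUrl, String type, String q, int max) {
        return baseUrl + "/api/suggest?type=" + type + "&q=" + q + "&max=" + max;
    }

    public static void main(String[] args) {
        System.out.println(build("http://localhost:4242", "metrics", "sys", 10));
    }
}
```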
OpenTSDB supports reads and writes via telnet and HTTP (GET/POST), but no official Java API is provided. The opentsdb-client project on GitHub made it possible to wrap OpenTSDB reads and writes, which is what is shared here: https://github.com/shifeng258/opentsdb-client
You can either build on top of this project or package it as an SDK; the former approach is used below.
A unified entry class, OpentsdbClient, is added:
import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;
import com.ygsoft.opentsdb.client.ExpectResponse;
import com.ygsoft.opentsdb.client.HttpClient;
import com.ygsoft.opentsdb.client.HttpClientImpl;
import com.ygsoft.opentsdb.client.builder.MetricBuilder;
import com.ygsoft.opentsdb.client.request.Query;
import com.ygsoft.opentsdb.client.request.QueryBuilder;
import com.ygsoft.opentsdb.client.request.SubQueries;
import com.ygsoft.opentsdb.client.response.Response;
import com.ygsoft.opentsdb.client.response.SimpleHttpResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.*;
/**
* OpenTSDB read/write utility class.
*/
public class OpentsdbClient {
private static Logger log = LoggerFactory.getLogger(OpentsdbClient.class);
/**
* Aggregator that averages values.
*/
public static String AGGREGATOR_AVG = "avg";
/**
* Aggregator that sums values.
*/
public static String AGGREGATOR_SUM = "sum";
private HttpClient httpClient;
public OpentsdbClient(String opentsdbUrl) {
this.httpClient = new HttpClientImpl(opentsdbUrl);
}
/**
* Write a data point.
* @param metric metric name
* @param timestamp point in time
* @param value value to write
* @param tagMap tags attached to the point
* @return true on success
* @throws Exception
*/
public boolean putData(String metric, Date timestamp, long value, Map tagMap) throws Exception {
long timsSecs = timestamp.getTime() / 1000;
return this.putData(metric, timsSecs, value, tagMap);
}
/**
* Write a data point.
* @param metric metric name
* @param timestamp point in time
* @param value value to write
* @param tagMap tags attached to the point
* @return true on success
* @throws Exception
*/
public boolean putData(String metric, Date timestamp, double value, Map tagMap) throws Exception {
long timsSecs = timestamp.getTime() / 1000;
return this.putData(metric, timsSecs, value, tagMap);
}
/**
* Write a data point.
* @param metric metric name
* @param timestamp point in time, in seconds
* @param value value to write
* @param tagMap tags attached to the point
* @return true on success
* @throws Exception
*/
public boolean putData(String metric, long timestamp, long value, Map tagMap) throws Exception {
MetricBuilder builder = MetricBuilder.getInstance();
builder.addMetric(metric).setDataPoint(timestamp, value).addTags(tagMap);
try {
log.debug("write request:{}", builder.build());
Response response = httpClient.pushMetrics(builder, ExpectResponse.SUMMARY);
log.debug("response.statusCode: {}", response.getStatusCode());
return response.isSuccess();
} catch (Exception e) {
log.error("put data to opentsdb error: ", e);
throw e;
}
}
/**
* Write a data point.
* @param metric metric name
* @param timestamp point in time, in seconds
* @param value value to write
* @param tagMap tags attached to the point
* @return true on success
* @throws Exception
*/
public boolean putData(String metric, long timestamp, double value, Map tagMap) throws Exception {
MetricBuilder builder = MetricBuilder.getInstance();
builder.addMetric(metric).setDataPoint(timestamp, value).addTags(tagMap);
try {
log.debug("write request:{}", builder.build());
Response response = httpClient.pushMetrics(builder, ExpectResponse.SUMMARY);
log.debug("response.statusCode: {}", response.getStatusCode());
return response.isSuccess();
} catch (Exception e) {
log.error("put data to opentsdb error: ", e);
throw e;
}
}
/**
* Write data points in batch.
* @param jsonArr array of points, each carrying metric, timestamp, value and tags
* @return true on success
* @throws Exception
*/
public boolean putData(JSONArray jsonArr) throws Exception {
return this.putDataBatch(jsonArr);
}
/**
* Write data points in batch.
* @param jsonArr array of points, each carrying metric, timestamp, value and tags
* @return true on success
* @throws Exception
*/
public boolean putDataBatch(JSONArray jsonArr) throws Exception {
MetricBuilder builder = MetricBuilder.getInstance();
try {
for(int i = 0; i < jsonArr.size(); i++){
Map tagMap = new HashMap();
for(String key : jsonArr.getJSONObject(i).getJSONObject("tags").keySet()){
tagMap.put(key, jsonArr.getJSONObject(i).getJSONObject("tags").get(key));
}
String metric = jsonArr.getJSONObject(i).getString("metric").toString();
long timestamp = DateTimeUtil.parse(jsonArr.getJSONObject(i).getString("timestamp"), "yyyy/MM/dd HH:mm:ss").getTime() / 1000;
double value = Double.valueOf(jsonArr.getJSONObject(i).getString("value"));
builder.addMetric(metric).setDataPoint(timestamp, value).addTags(tagMap);
}
log.debug("write request:{}", builder.build());
Response response = httpClient.pushMetrics(builder, ExpectResponse.SUMMARY);
log.debug("response.statusCode: {}", response.getStatusCode());
return response.isSuccess();
} catch (Exception e) {
log.error("put data to opentsdb error: ", e);
throw e;
}
}
/**
* Query data; the result is JSON with the following structure:
* "[
* " {
* " metric: mysql.innodb.row_lock_time,
* " tags: {
* " host: web01,
* " dc: beijing
* " },
* " aggregateTags: [],
* " dps: {
* " 1435716527: 1234,
* " 1435716529: 2345
* " }
* " },
* " {
* " metric: mysql.innodb.row_lock_time,
* " tags: {
* " host: web02,
* " dc: beijing
* " },
* " aggregateTags: [],
* " dps: {
* " 1435716627: 3456
* " }
* " }
* "]";
* @param metric metric to query
* @param aggregator aggregation type, e.g. OpentsdbClient.AGGREGATOR_AVG, OpentsdbClient.AGGREGATOR_SUM
* @param tagMap query conditions (tags)
* @param downsample downsampling granularity, e.g. 1s, 2m, 1h, 1d, 2d
* @param startTime query start time, formatted as yyyy/MM/dd HH:mm:ss
* @param endTime query end time, formatted as yyyy/MM/dd HH:mm:ss
*/
public String getData(String metric, Map tagMap, String aggregator, String downsample, String startTime, String endTime) throws IOException {
QueryBuilder queryBuilder = QueryBuilder.getInstance();
Query query = queryBuilder.getQuery();
query.setStart(DateTimeUtil.parse(startTime, "yyyy/MM/dd HH:mm:ss").getTime() / 1000);
query.setEnd(DateTimeUtil.parse(endTime, "yyyy/MM/dd HH:mm:ss").getTime() / 1000);
List sqList = new ArrayList();
SubQueries sq = new SubQueries();
sq.addMetric(metric);
sq.addTag(tagMap);
sq.addAggregator(aggregator);
sq.setDownsample(downsample + "-" + aggregator);
sqList.add(sq);
query.setQueries(sqList);
try {
log.debug("query request:{}", queryBuilder.build()); // this call also serves to validate the query
SimpleHttpResponse spHttpResponse = httpClient.pushQueries(queryBuilder, ExpectResponse.DETAIL);
log.debug("response.content: {}", spHttpResponse.getContent());
if (spHttpResponse.isSuccess()) {
return spHttpResponse.getContent();
}
return null;
} catch (IOException e) {
log.error("get data from opentsdb error: ", e);
throw e;
}
}
/**
* Query data and return a mapping from tags to time-series values: Map<String, Map<String, Object>>
* @param metric metric to query
* @param aggregator aggregation type, e.g. OpentsdbClient.AGGREGATOR_AVG, OpentsdbClient.AGGREGATOR_SUM
* @param tagMap query conditions (tags)
* @param downsample downsampling granularity, e.g. 1s, 2m, 1h, 1d, 2d
* @param startTime query start time, formatted as yyyy/MM/dd HH:mm:ss
* @param endTime query end time, formatted as yyyy/MM/dd HH:mm:ss
* @param retTimeFmt format for the timestamps in the result, e.g. yyyy/MM/dd HH:mm:ss or yyyyMMddHH
* @return Map<String, Map<String, Object>>
*/
public Map getData(String metric, Map tagMap, String aggregator, String downsample, String startTime, String endTime, String retTimeFmt) throws IOException {
String resContent = this.getData(metric, tagMap, aggregator, downsample, startTime, endTime);
return this.convertContentToMap(resContent, retTimeFmt);
}
public Map convertContentToMap(String resContent, String retTimeFmt) {
Map tagsValuesMap = new HashMap();
if (resContent == null || "".equals(resContent.trim())) {
return tagsValuesMap;
}
JSONArray array = (JSONArray) JSONObject.parse(resContent);
if (array != null) {
for (int i = 0; i < array.size(); i++) {
JSONObject obj = (JSONObject) array.get(i);
JSONObject tags = (JSONObject) obj.get("tags");
JSONObject dps = (JSONObject) obj.get("dps");
// convert every timestamp in dps to the requested format
Map timeValueMap = new HashMap();
for (String timestamp : dps.keySet()) {
Date datetime = new Date(Long.parseLong(timestamp) * 1000);
timeValueMap.put(DateTimeUtil.format(datetime, retTimeFmt), dps.get(timestamp));
}
tagsValuesMap.put(tags.toString(), timeValueMap);
}
}
return tagsValuesMap;
}
}
A date/time helper class is added:
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
public class DateTimeUtil {
public static Date parse(String date, String fm) {
Date res = null;
try {
SimpleDateFormat sft = new SimpleDateFormat(fm);
res = sft.parse(date);
} catch (ParseException e) {
e.printStackTrace();
}
return res;
}
// formatting counterpart used by OpentsdbClient.convertContentToMap
public static String format(Date date, String fm) {
return new SimpleDateFormat(fm).format(date);
}
}
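For reference, the parse above (and the formatting step used when converting query results) boils down to a SimpleDateFormat round-trip; a self-contained sketch:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateRoundTrip {
    // Parse a timestamp string with the given pattern and render it back,
    // mirroring DateTimeUtil.parse followed by a format step.
    static String roundTrip(String s, String fm) {
        try {
            SimpleDateFormat sdf = new SimpleDateFormat(fm);
            Date d = sdf.parse(s);
            return sdf.format(d);
        } catch (ParseException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```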
To keep HTTP connections alive, PoolingHttpClient is converted into a singleton (avoiding the resource cost of instantiating it on every call):
public class PoolingHttpClient {
...
/* singleton modification */
private static volatile PoolingHttpClient poolingHttpClient; // volatile is required for safe double-checked locking
private PoolingHttpClient() {
// Increase max total connection
connManager.setMaxTotal(maxTotalConnections);
// Increase default max connection per route
connManager.setDefaultMaxPerRoute(maxConnectionsPerRoute);
// config timeout
RequestConfig config = RequestConfig.custom()
.setConnectTimeout(connectTimeout)
.setConnectionRequestTimeout(waitTimeout)
.setSocketTimeout(readTimeout).build();
httpClient = HttpClients.custom()
.setKeepAliveStrategy(keepAliveStrategy)
.setConnectionManager(connManager)
.setDefaultRequestConfig(config).build();
// detect idle and expired connections and close them
IdleConnectionMonitorThread staleMonitor = new IdleConnectionMonitorThread(
connManager);
staleMonitor.start();
}
public static PoolingHttpClient getInstance() {
if (null == poolingHttpClient) {
// synchronize to keep instantiation thread-safe
synchronized (PoolingHttpClient.class) {
if (null == poolingHttpClient) {
poolingHttpClient = new PoolingHttpClient();
}
}
}
return poolingHttpClient;
}
/* end of singleton modification */
public SimpleHttpResponse doPost(String url, String data)
throws IOException {
StringEntity requestEntity = new StringEntity(data);
HttpPost postMethod = new HttpPost(url);
postMethod.setEntity(requestEntity);
HttpResponse response = execute(postMethod);
int statusCode = response.getStatusLine().getStatusCode();
SimpleHttpResponse simpleResponse = new SimpleHttpResponse();
simpleResponse.setStatusCode(statusCode);
HttpEntity entity = response.getEntity();
if (entity != null) {
// should return: application/json; charset=UTF-8
String ctype = entity.getContentType().getValue();
String charset = getResponseCharset(ctype);
String content = EntityUtils.toString(entity, charset);
simpleResponse.setContent(content);
}
/* added: consume the entity and release the connection back to the pool */
EntityUtils.consume(entity);
postMethod.releaseConnection();
/* end of added release code */
return simpleResponse;
}
...
}
Modify HttpClientImpl to use the singleton PoolingHttpClient:
public class HttpClientImpl implements HttpClient {
...
//private PoolingHttpClient httpClient = new PoolingHttpClient();
private PoolingHttpClient httpClient = PoolingHttpClient.getInstance();
...
}