Bucket aggregations
Global Aggregation
Here is how you can use the Global Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilders
.global("agg")
.subAggregation(AggregationBuilders.terms("genders").field("gender"));
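Executing the request works the same way for every aggregation on this page; a minimal sketch, assuming an already connected Client instance named client (the same pattern is shown in the Significant Terms Aggregation section below):
SearchResponse sr = client.prepareSearch()
    .addAggregation(
        AggregationBuilders
            .global("agg")
            .subAggregation(AggregationBuilders.terms("genders").field("gender")))
    .get();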
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.global.Global;
// sr is here your SearchResponse object
Global agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
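The "genders" sub-aggregation defined in the request above can then be read from the global bucket; a minimal sketch, reusing the Terms class that is imported in the Terms Aggregation section below:
Terms genders = agg.getAggregations().get("genders");
for (Terms.Bucket entry : genders.getBuckets()) {
    entry.getKey();      // Term
    entry.getDocCount(); // Doc count
}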
Filter Aggregation
Here is how you can use the Filter Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilders
.filter("agg")
.filter(QueryBuilders.termQuery("gender", "male"));
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.filter.Filter;
// sr is here your SearchResponse object
Filter agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
Filters Aggregation
Here is how you can use the Filters Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.filters("agg")
.filter("men", QueryBuilders.termQuery("gender", "male"))
.filter("women", QueryBuilders.termQuery("gender", "female"));
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.filters.Filters;
// sr is here your SearchResponse object
Filters agg = sr.getAggregations().get("agg");
// For each entry
for (Filters.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // bucket key
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], doc_count [{}]", key, docCount);
}
This will basically produce:
key [men], doc_count [4982]
key [women], doc_count [5018]
Missing Aggregation
Here is how you can use the Missing Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilders.missing("agg").field("gender");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.missing.Missing;
// sr is here your SearchResponse object
Missing agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
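Like the other single bucket aggregations, the missing aggregation accepts sub-aggregations; a minimal sketch (the "address.country" field is only an assumption used for illustration):
AggregationBuilder aggregation =
    AggregationBuilders
        .missing("agg")
        .field("gender")
        // break down the documents that have no gender by country (field name assumed)
        .subAggregation(AggregationBuilders.terms("countries").field("address.country"));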
Nested Aggregation
Here is how you can use the Nested Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilders
.nested("agg")
.path("resellers");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.nested.Nested;
// sr is here your SearchResponse object
Nested agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
Reverse Nested Aggregation
Here is how you can use the Reverse Nested Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.nested("agg").path("resellers")
.subAggregation(
AggregationBuilders
.terms("name").field("resellers.name")
.subAggregation(
AggregationBuilders
.reverseNested("reseller_to_product")
)
);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.nested.Nested;
import org.elasticsearch.search.aggregations.bucket.nested.ReverseNested;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
// sr is here your SearchResponse object
Nested agg = sr.getAggregations().get("agg");
Terms name = agg.getAggregations().get("name");
for (Terms.Bucket bucket : name.getBuckets()) {
ReverseNested resellerToProduct = bucket.getAggregations().get("reseller_to_product");
resellerToProduct.getDocCount(); // Doc count
}
Children Aggregation
Here is how you can use the Children Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.children("agg")
.childType("reseller");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.children.Children;
// sr is here your SearchResponse object
Children agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
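The children aggregation only applies to indices where reseller is mapped as a child type of the parent documents (via the _parent mapping), and it is typically combined with sub-aggregations on the child documents; a minimal sketch, assuming the child documents have a name field:
AggregationBuilder aggregation =
    AggregationBuilders
        .children("agg")
        .childType("reseller")
        // aggregate the matching child documents by name (field name assumed)
        .subAggregation(AggregationBuilders.terms("resellers").field("name"));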
Terms Aggregation
Here is how you can use the Terms Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilders
.terms("genders")
.field("gender");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
// sr is here your SearchResponse object
Terms genders = sr.getAggregations().get("genders");
// For each entry
for (Terms.Bucket entry : genders.getBuckets()) {
entry.getKey(); // Term
entry.getDocCount(); // Doc count
}
Order
Ordering the buckets by their doc_count in an ascending manner:
AggregationBuilders
.terms("genders")
.field("gender")
.order(Terms.Order.count(true))
Ordering the buckets alphabetically by their terms in an ascending manner:
AggregationBuilders
.terms("genders")
.field("gender")
.order(Terms.Order.term(true))
Ordering the buckets by a single value metrics sub-aggregation (identified by the aggregation name):
AggregationBuilders
.terms("genders")
.field("gender")
.order(Terms.Order.aggregation("avg_height", false))
.subAggregation(
AggregationBuilders.avg("avg_height").field("height")
)
Significant Terms Aggregation
Here is how you can use the Significant Terms Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.significantTerms("significant_countries")
.field("address.country");
// Let's say you search for men only
SearchResponse sr = client.prepareSearch()
.setQuery(QueryBuilders.termQuery("gender", "male"))
.addAggregation(aggregation)
.get();
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;
// sr is here your SearchResponse object
SignificantTerms agg = sr.getAggregations().get("significant_countries");
// For each entry
for (SignificantTerms.Bucket entry : agg.getBuckets()) {
entry.getKey(); // Term
entry.getDocCount(); // Doc count
}
Range Aggregation
Here is how you can use the Range Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.range("agg")
.field("height")
.addUnboundedTo(1.0f) // from -infinity to 1.0 (excluded)
.addRange(1.0f, 1.5f) // from 1.0 to 1.5 (excluded)
.addUnboundedFrom(1.5f); // from 1.5 to +infinity
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // Range as key
Number from = (Number) entry.getFrom(); // Bucket from
Number to = (Number) entry.getTo(); // Bucket to
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount);
}
This will basically produce:
key [*-1.0], from [-Infinity], to [1.0], doc_count [9]
key [1.0-1.5], from [1.0], to [1.5], doc_count [21]
key [1.5-*], from [1.5], to [Infinity], doc_count [20]
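Each range can also be given an explicit key that is then returned as the bucket key instead of the generated one; a minimal sketch of the same request with named ranges (the key names are only an assumption used for illustration):
AggregationBuilder aggregation =
    AggregationBuilders
        .range("agg")
        .field("height")
        .addUnboundedTo("small", 1.0f)   // key "small": from -infinity to 1.0 (excluded)
        .addRange("medium", 1.0f, 1.5f)  // key "medium": from 1.0 to 1.5 (excluded)
        .addUnboundedFrom("tall", 1.5f); // key "tall": from 1.5 to +infinity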
Date Range Aggregation
Here is how you can use the Date Range Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.dateRange("agg")
.field("dateOfBirth")
.format("yyyy")
.addUnboundedTo("1950") // from -infinity to 1950 (excluded)
.addRange("1950", "1960") // from 1950 to 1960 (excluded)
.addUnboundedFrom("1960"); // from 1960 to +infinity
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // Date range as key
DateTime fromAsDate = (DateTime) entry.getFrom(); // Date bucket from as a Date
DateTime toAsDate = (DateTime) entry.getTo(); // Date bucket to as a Date
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsDate, toAsDate, docCount);
}
This will basically produce:
key [*-1950], from [null], to [1950-01-01T00:00:00.000Z], doc_count [8]
key [1950-1960], from [1950-01-01T00:00:00.000Z], to [1960-01-01T00:00:00.000Z], doc_count [5]
key [1960-*], from [1960-01-01T00:00:00.000Z], to [null], doc_count [37]
Ip Range Aggregation
Here is how you can use the Ip Range Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.ipRange("agg")
.field("ip")
.addUnboundedTo("192.168.1.0") // from -infinity to 192.168.1.0 (excluded)
.addRange("192.168.1.0", "192.168.2.0") // from 192.168.1.0 to 192.168.2.0 (excluded)
.addUnboundedFrom("192.168.2.0"); // from 192.168.2.0 to +infinity
Note that you could also use ip masks as ranges:
AggregationBuilder aggregation =
AggregationBuilders
.ipRange("agg")
.field("ip")
.addMaskRange("192.168.0.0/32")
.addMaskRange("192.168.0.0/24")
.addMaskRange("192.168.0.0/16");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // Ip range as key
String fromAsString = entry.getFromAsString(); // Ip bucket from as a String
String toAsString = entry.getToAsString(); // Ip bucket to as a String
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsString, toAsString, docCount);
}
This will basically produce for the first example:
key [*-192.168.1.0], from [null], to [192.168.1.0], doc_count [13]
key [192.168.1.0-192.168.2.0], from [192.168.1.0], to [192.168.2.0], doc_count [14]
key [192.168.2.0-*], from [192.168.2.0], to [null], doc_count [23]
And for the second one (using Ip masks):
key [192.168.0.0/32], from [192.168.0.0], to [192.168.0.1], doc_count [0]
key [192.168.0.0/24], from [192.168.0.0], to [192.168.1.0], doc_count [13]
key [192.168.0.0/16], from [192.168.0.0], to [192.169.0.0], doc_count [50]
Histogram Aggregation
Here is how you can use the Histogram Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.histogram("agg")
.field("height")
.interval(1);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");
// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
Long key = (Long) entry.getKey(); // Key
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], doc_count [{}]", key, docCount);
}
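Whether empty buckets show up in the response can be controlled with minDocCount(); a minimal sketch that keeps buckets even when they contain no documents:
AggregationBuilder aggregation =
    AggregationBuilders
        .histogram("agg")
        .field("height")
        .interval(1)
        .minDocCount(0); // keep buckets that contain no documents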
Date Histogram Aggregation
Here is how you can use the Date Histogram Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.dateHistogram("agg")
.field("dateOfBirth")
.interval(DateHistogramInterval.YEAR);
Or if you want to set an interval of 10 days:
AggregationBuilder aggregation =
AggregationBuilders
.dateHistogram("agg")
.field("dateOfBirth")
.interval(DateHistogramInterval.days(10));
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");
// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
DateTime key = (DateTime) entry.getKey(); // Key
String keyAsString = entry.getKeyAsString(); // Key as String
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], date [{}], doc_count [{}]", keyAsString, key.getYear(), docCount);
}
This will basically produce for the first example:
key [1942-01-01T00:00:00.000Z], date [1942], doc_count [1]
key [1945-01-01T00:00:00.000Z], date [1945], doc_count [1]
key [1946-01-01T00:00:00.000Z], date [1946], doc_count [1]
...
key [2005-01-01T00:00:00.000Z], date [2005], doc_count [1]
key [2007-01-01T00:00:00.000Z], date [2007], doc_count [2]
key [2008-01-01T00:00:00.000Z], date [2008], doc_count [3]
Geo Distance Aggregation
Here is how you can use the Geo Distance Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.geoDistance("agg")
.field("address.location")
.point(new GeoPoint(48.84237171118314,2.33320027692004))
.unit(DistanceUnit.KILOMETERS)
.addUnboundedTo(3.0)
.addRange(3.0, 10.0)
.addRange(10.0, 500.0);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // key as String
Number from = (Number) entry.getFrom(); // bucket from value
Number to = (Number) entry.getTo(); // bucket to value
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount);
}
This will basically produce:
key [*-3.0], from [0.0], to [3.0], doc_count [161]
key [3.0-10.0], from [3.0], to [10.0], doc_count [460]
key [10.0-500.0], from [10.0], to [500.0], doc_count [4925]
Geo Hash Grid Aggregation
Here is how you can use the Geo Hash Grid Aggregation with the Java API.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.geohashGrid("agg")
.field("address.location")
.precision(4);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.geogrid.GeoHashGrid;
// sr is here your SearchResponse object
GeoHashGrid agg = sr.getAggregations().get("agg");
// For each entry
for (GeoHashGrid.Bucket entry : agg.getBuckets()) {
String keyAsString = entry.getKeyAsString(); // key as String
GeoPoint key = (GeoPoint) entry.getKey(); // key as geo point
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], point {}, doc_count [{}]", keyAsString, key, docCount);
}
This will basically produce:
key [gbqu], point [47.197265625, -1.58203125], doc_count [1282]
key [gbvn], point [50.361328125, -4.04296875], doc_count [1248]
key [u1j0], point [50.712890625, 7.20703125], doc_count [1156]
key [u0j2], point [45.087890625, 7.55859375], doc_count [1138]
...