This chapter covers the mapping of an ES index. An ES index is roughly like a MySQL table, and the index mapping is like the table's DDL.
For example, here is a MySQL table definition:
CREATE TABLE `likes_count` (
`id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'auto-increment primary key',
`target_id` bigint(20) NOT NULL DEFAULT '0' COMMENT 'id of the liked target',
`target_type` tinyint(8) NOT NULL DEFAULT '0' COMMENT 'type of the liked target',
`cnt` bigint(20) NOT NULL DEFAULT '0' COMMENT 'like count',
`created_at` bigint(20) NOT NULL DEFAULT '0' COMMENT 'creation time',
`updated_at` bigint(20) NOT NULL DEFAULT '0' COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_id_type` (`target_id`,`target_type`)
) ENGINE=InnoDB AUTO_INCREMENT=19 DEFAULT CHARSET=utf8mb4 COMMENT='like count table'
This MySQL table definition specifies:
1. the table's name,
2. which fields the table contains,
3. each field's data type,
4. each field's NOT NULL constraint,
5. each field's default value and comment,
6. auto-increment fields via AUTO_INCREMENT,
7. index definitions such as PRIMARY KEY and UNIQUE KEY,
8. the table's engine, character encoding, and so on.
An ES index mapping plays a similar role: it mainly constrains the type of each field in the index. Because ES is implemented very differently from MySQL, it also has settings of its own, such as analyzers, and the supported data types differ as well. ES additionally has index settings, which are closer to table-level configuration, while the index mapping is about per-field definitions; we will discuss index settings later.
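To make the analogy concrete, here is a minimal sketch of the likes_count table as an ES index. The field names come from the table above; the type choices are just one reasonable option, and note that ES has no direct equivalent of AUTO_INCREMENT or UNIQUE KEY (the document _id plays the primary-key role):
# sketch: one possible mapping for the table above
PUT likes_count
{
  "mappings": {
    "properties": {
      "target_id": { "type": "long" },
      "target_type": { "type": "byte" },
      "cnt": { "type": "long" },
      "created_at": { "type": "date", "format": "epoch_millis" },
      "updated_at": { "type": "date", "format": "epoch_millis" }
    }
  }
}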
An ES mapping mainly covers the following parts:
Overview of the type system
The keyword type
Composite data types
GEO geographic types
Specialized data types
Array types
Multi-fields
ES is a very flexible storage system: you can insert JSON documents directly, even multi-level JSON whose fields all have different types, so the data types it supports have to be flexible to match. Let's learn about ES's data types, starting with text.
PUT my_index
{
"mappings": {
"properties": {
"full_name": {
"type": "text"
}
}
}
}
The text type is mainly used for full-text search and rarely for aggregations (though aggregations on it are possible).
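A minimal search against this field might look like this (reusing the my_index mapping above; the query words are made up):
GET my_index/_search
{
  "query": {
    "match": {
      "full_name": "john smith"
    }
  }
}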
The settings available for text are:
analyzer
boost
eager_global_ordinals
fielddata
fields
index
index_options
index_prefixes
index_phrases
norms
position_increment_gap
store
search_analyzer
search_quote_analyzer
similarity
term_vector
These settings are called mapping params and will be covered in a later post.
The keyword type is used for filters, sorting, and aggregations.
PUT my_index
{
"mappings": {
"properties": {
"tags": {
"type": "keyword"
}
}
}
}
Note that for numeric data that never needs range queries, e.g. a status field used only for filtering, you can consider storing the values as keyword:
for term queries, the keyword type is more efficient than the numeric types.
If you are not sure, you can use the multi-field feature to store both keyword and numeric variants, as sketched below.
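A sketch of such a multi-field mapping; the status field here is a made-up example. term filters can hit status directly, while range queries still have status.numeric:
PUT my_index
{
  "mappings": {
    "properties": {
      "status": {
        "type": "keyword",
        "fields": {
          "numeric": { "type": "integer" }
        }
      }
    }
  }
}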
The settings available for keyword are:
boost
doc_values
eager_global_ordinals
fields
ignore_above
index
index_options
norms
null_value
store
similarity
normalizer
split_queries_on_whitespace
meta
long: 64-bit integer, -2^63 to 2^63-1
integer: 32-bit integer, -2^31 to 2^31-1
short: 16-bit integer, -32,768 to 32,767
byte: single byte, -128 to 127
double: 64-bit floating point, following the IEEE 754 standard
float: 32-bit floating point, following the IEEE 754 standard
half_float: 16-bit floating point, following the IEEE 754 standard
scaled_float: a floating-point number backed by a long and scaled by a fixed double scaling factor
PUT my_index
{
"mappings": {
"properties": {
"number_of_bytes": {
"type": "integer"
},
"time_in_seconds": {
"type": "float"
},
"price": {
"type": "scaled_float",
"scaling_factor": 100
}
}
}
}
Note that for float, double, and half_float data, -0.0 and +0.0 are not equal, so be careful when querying (see the sketch below).
When choosing a type, pick the smallest range that is sufficient.
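For instance, with the mapping above, a term query for -0.0 should not match a document indexed with +0.0 (a sketch):
# sketch: -0.0 does not match +0.0
PUT my_index/_doc/4
{ "time_in_seconds": 0.0 }

GET my_index/_search
{
  "query": {
    "term": { "time_in_seconds": -0.0 }
  }
}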
The mapping params that can be set:
coerce
boost
doc_values
ignore_malformed
index
null_value
store
meta
Because JSON has no date type, ES recognizes dates in the following forms:
1. strings in specific formats, such as "2015-01-01" or "2015/01/01 12:10:30"
2. a long representing milliseconds-since-the-epoch
3. an integer representing seconds-since-the-epoch
Internally, dates are converted to UTC and stored as a long timestamp.
Queries on date fields are converted to range queries on that long, and values are converted back to the original string pattern on return; the same date field in the same index may hold differently formatted values, and the result returns whatever format you wrote.
The date type accepts a date pattern (set via the format property); by default strict_date_optional_time||epoch_millis is used
as the constraint.
strict_date_optional_time requires the ISO format yyyy-MM-dd'T'HH:mm:ss.SSSZ or yyyy-MM-dd.
I tested yyyy-MM-dd'T'HH:mm:ss.SSS
and it also works: without the Z, a Z is implied by default, i.e. the time is taken to be in UTC (zone 0).
Example:
PUT my_index
{
"mappings": {
"properties": {
"date": {
"type": "date"
}
}
}
}
PUT my_index/_doc/1
{ "date": "2015-01-01" }
PUT my_index/_doc/2
{ "date": "2015-01-01T12:10:30Z" }
PUT my_index/_doc/3
{ "date": 1420070400001 }
Then query, sorting ascending by date:
GET my_index/_search
{
"sort": { "date": "asc"}
}
Returns:
"took" : 209,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 3,
"relation" : "eq"
},
"max_score" : null,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : null,
"_source" : {
"date" : "2015-01-01"
},
"sort" : [
1420070400000
]
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "3",
"_score" : null,
"_source" : {
"date" : 1420070400001
},
"sort" : [
1420070400001
]
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : null,
"_source" : {
"date" : "2015-01-01T12:10:30Z"
},
"sort" : [
1420114230000
]
}
]
}
}
You can see the sort values are all long timestamps.
For the doc with _id=1, the sort value of the date field is 1420070400000; converting it to local time on my server:
date -d @1420070400
Thu Jan 1 08:00:00 CST 2015
gives '2015-01-01 08:00:00' local (CST) time. This shows that a time written without a timezone is interpreted directly as UTC. If the field is later displayed in Kibana this can cause problems, so for Kibana it is better to write time fields in a UTC form that carries a timezone; otherwise the instant stored in ES is not the one you meant.
The mapping params that can be set:
boost
doc_values
format: sets the date format
locale
ignore_malformed
index
null_value
store
meta
You can also set the format property yourself, for example:
PUT my_index
{
"mappings": {
"properties": {
"date": {
"type": "date",
"format": "yyyy-MM-dd"
}
}
}
}
The date_nanos type is stored as a long representing a nanosecond timestamp; because of the limits of a long, it can only cover roughly the years 1970 through 2262.
Values are converted back to the original format on return.
Like date, it accepts an index-time date pattern; by default strict_date_optional_time||epoch_millis is used
as the constraint.
strict_date_optional_time requires the ISO format yyyy-MM-dd'T'HH:mm:ss.SSSZ or yyyy-MM-dd.
I tested yyyy-MM-dd'T'HH:mm:ss.SSS
and it also works: without the Z, a Z is implied, i.e. UTC.
But with that format you actually lose precision, because it only goes down to milliseconds.
It is better to use strict_date_optional_time_nanos,
whose format is yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ or yyyy-MM-dd.
PUT my_index?include_type_name=true
{
"mappings": {
"_doc": {
"properties": {
"date": {
"type": "date_nanos"
}
}
}
}
}
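Writing a value with sub-millisecond precision then looks like this (a sketch; sorting on the field works at nanosecond resolution):
PUT my_index/_doc/1
{ "date": "2015-01-01T12:10:30.123456789Z" }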
The boolean type is relatively simple: true|false.
At index time you can also write the strings "true" and "false",
and the response gives the document back as written, i.e. if you indexed the quoted form you get the quoted form back.
PUT my_index
{
"mappings": {
"properties": {
"is_published": {
"type": "boolean"
}
}
}
}
POST my_index/_doc/1
{
"is_published": "true"
}
POST my_index/_doc/2
{
"is_published": true
}
GET my_index/_search
{
"query": {
"term": {
"is_published": true
}
}
}
The key part of the response:
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 0.18232156,
"_source" : {
"is_published" : "true"
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : 0.18232156,
"_source" : {
"is_published" : true
}
}
]
In a terms aggregation, the returned keys are 0 and 1:
POST my_index/_doc/1
{
"is_published": true
}
POST my_index/_doc/2
{
"is_published": false
}
GET my_index/_search
{
"aggs": {
"publish_state": {
"terms": {
"field": "is_published"
}
}
},
"script_fields": {
"is_published": {
"script": {
"lang": "painless",
"source": "doc['is_published']"
}
}
}
}
Returns:
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 1.0,
"fields" : {
"is_published" : [
true
]
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : 1.0,
"fields" : {
"is_published" : [
false
]
}
}
]
},
"aggregations" : {
"publish_state" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : 0,
"key_as_string" : "false",
"doc_count" : 1
},
{
"key" : 1,
"key_as_string" : "true",
"doc_count" : 1
}
]
}
}
}
In script_fields, the returned values are true and false.
The binary type must be written as a base64-encoded string; the field is not stored separately by default and cannot be searched.
PUT my_index
{
"mappings": {
"properties": {
"name": {
"type": "text"
},
"blob": {
"type": "binary"
}
}
}
}
PUT my_index/_doc/1
{
"name": "Some binary blob",
"blob": "U29tZSBiaW5hcnkgYmxvYg=="
}
The available mapping params:
doc_values
store
The range types are quite interesting. Taobao, for example, constantly runs promotions, and a promotion has a start time and an end time; in MySQL you would normally need two columns, but in ES a single field can store the range and be queried against.
The range types are:
integer_range: same range as integer
float_range: same range as float
long_range: same range as long
double_range: same range as double
date_range: 64-bit millisecond timestamps
ip_range: supports both IPv4 and IPv6
A numeric range example:
PUT range_index
{
"settings": {
"number_of_shards": 2
},
"mappings": {
"properties": {
"expected_attendees": {
"type": "integer_range"
},
"time_frame": {
"type": "date_range",
"format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
}
}
}
}
PUT range_index/_doc/1?refresh
{
"expected_attendees" : {
"gte" : 10,
"lte" : 20
},
"time_frame" : {
"gte" : "2015-10-31 12:00:00",
"lte" : "2015-11-01"
}
}
GET range_index/_search
{
"query" : {
"term" : {
"expected_attendees" : {
"value": 12
}
}
}
}
GET range_index/_search
{
"query" : {
"range": {
"expected_attendees": {
"gte": 11,
"lte": 20
}
}
}
}
GET range_index/_search
{
"query" : {
"range": {
"expected_attendees": {
"gte": 10,
"lte": 21,
"relation" : "within"
}
}
}
}
The relation parameter describes how the stored range in a document must relate to the query range. within means the document's range must be a subset of the query range (full overlap allowed); the other options are intersects (the default) and contains, as sketched below.
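A contains sketch, which matches documents whose stored range fully covers the query range:
GET range_index/_search
{
  "query" : {
    "range": {
      "expected_attendees": {
        "gte": 12,
        "lte": 15,
        "relation" : "contains"
      }
    }
  }
}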
A time range example:
GET range_index/_search
{
"query" : {
"range" : {
"time_frame" : {
"gte" : "2015-10-31",
"lte" : "2015-11-01",
"relation" : "within"
}
}
}
}
An ip range example:
PUT range_index/_mapping
{
"properties": {
"ip_allowlist": {
"type": "ip_range"
}
}
}
PUT range_index/_doc/2
{
"ip_allowlist" : "192.168.0.0/16"
}
GET range_index/_search
{
"query": {
"term": {
"ip_allowlist": {
"value": "192.168.0.1"
}
}
}
}
IP addresses use CIDR notation: IP address/number of network-ID bits.
For example, for
192.168.23.35/21
the subnet's network ID is 192.168.16.0,
the subnet mask is 255.255.248.0,
and the usable addresses run from 192.168.16.1 to 192.168.23.254.
See the reference here.
The object type is a nested structure: an object with sub-properties of its own.
PUT my_index/_doc/1
{
"region": "US",
"manager": {
"age": 30,
"name": {
"first": "John",
"last": "Smith"
}
}
}
In actual storage, the object is flattened out like this:
{
"region": "US",
"manager.age": 30,
"manager.name.first": "John",
"manager.name.last": "Smith"
}
The corresponding index mapping is:
PUT my_index
{
"mappings": {
"properties": {
"region": {
"type": "keyword"
},
"manager": {
"properties": {
"age": { "type": "integer" },
"name": {
"properties": {
"first": { "type": "text" },
"last": { "type": "text" }
}
}
}
}
}
}
}
You don't have to explicitly set the type of manager or manager.name; it defaults to object.
But if what you store is an array of objects, the nested type is recommended.
The mapping params that can be set:
dynamic
enabled: whether the JSON should be parsed and indexed into this field
properties: the fields inside the object
The nested type is similar to object; the difference is that when you store an array of objects, each object in the array can be matched independently.
PUT my_index/_doc/1
{
"group" : "fans",
"user" : [
{
"first" : "John",
"last" : "Smith"
},
{
"first" : "Alice",
"last" : "White"
}
]
}
The write above produces a default mapping in which user is of type object, and its storage becomes:
{
"group" : "fans",
"user.first" : [ "alice", "john" ],
"user.last" : [ "smith", "white" ]
}
So when we run:
GET my_index/_search
{
"query": {
"bool": {
"must": [
{ "match": { "user.first": "Alice" }},
{ "match": { "user.last": "Smith" }}
]
}
}
}
this query finds the doc, which is clearly wrong: no single user combines first name Alice with last name Smith.
When the field is defined as nested, this problem goes away.
Correspondingly, you must use a nested query to search it.
PUT my_index
{
"mappings": {
"properties": {
"user": {
"type": "nested"
}
}
},
"settings": {
"index.mapping.nested_fields.limit":20,
"index.mapping.nested_objects.limit":100
}
}
PUT my_index/_doc/1
{
"group" : "fans",
"user" : [
{
"first" : "John",
"last" : "Smith"
},
{
"first" : "Alice",
"last" : "White"
}
]
}
GET my_index/_search
{
"query": {
"nested": {
"path": "user",
"query": {
"bool": {
"must": [
{ "match": { "user.first": "Alice" }},
{ "match": { "user.last": "Smith" }}
]
}
}
}
}
}
Because nested documents are indexed as separate documents, they can only be accessed inside nested queries, nested/reverse_nested aggregations, or nested inner hits.
For the same reason, an array holding many nested objects occupies many docs, making this a relatively expensive mapping, so ES adds some settings to limit it:
index.mapping.nested_fields.limit: the number of nested fields, default 50
index.mapping.nested_objects.limit: the number of nested objects that all nested fields in a single doc may hold
The mapping params that can be set:
dynamic
properties
include_in_parent
include_in_root
The geo_point type stores point data, which can be queried with a geo bounding box or by distance;
you can aggregate by distance or geographically,
tie the score to distance,
and sort docs by distance (see the distance-sort sketch after the bounding-box example below).
A geo bounding box query example:
PUT my_index
{
"mappings": {
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
PUT my_index/_doc/1
{
"text": "Geo-point as an object",
"location": {
"lat": 41.12,
"lon": -71.34
}
}
PUT my_index/_doc/2
{
"text": "Geo-point as a string",
"location": "41.12,-71.34"
}
PUT my_index/_doc/3
{
"text": "Geo-point as a geohash",
"location": "drm3btev3e86"
}
PUT my_index/_doc/4
{
"text": "Geo-point as an array",
"location": [ -71.34, 41.12 ]
}
PUT my_index/_doc/5
{
"text": "Geo-point as a WKT POINT primitive",
"location" : "POINT (-71.34 41.12)"
}
GET my_index/_search
{
"query": {
"geo_bounding_box": {
"location": {
"top_left": {
"lat": 42,
"lon": -72
},
"bottom_right": {
"lat": 40,
"lon": -74
}
}
}
}
}
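And a distance-sort sketch using the _geo_distance sort (the coordinates are made up):
GET my_index/_search
{
  "query": { "match_all": {} },
  "sort": [
    {
      "_geo_distance": {
        "location": { "lat": 40, "lon": -70 },
        "order": "asc",
        "unit": "km"
      }
    }
  ]
}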
The available mapping params:
ignore_malformed
ignore_z_value: if true, 3-D point data is accepted, but only the two dimensions are indexed
null_value
geo_shape is just what it sounds like: much as a numeric range type describes an interval, it describes a geographic region.
It actually supports storing points, lines, and polygons; the underlying mechanics seem fairly complex, so I'll come back to them when looking at the concrete queries.
Here we just take a quick look at the supported data formats.
The supported shape types:
POST /example/_doc
{
"location" : {
"type" : "point",
"coordinates" : [-77.03653, 38.897676]
}
}
POST /example/_doc
{
"location" : "POINT (-77.03653 38.897676)"
}
POST /example/_doc
{
"location" : {
"type" : "linestring",
"coordinates" : [[-77.03653, 38.897676], [-77.009051, 38.889939]]
}
}
POST /example/_doc
{
"location" : "LINESTRING (-77.03653 38.897676, -77.009051 38.889939)"
}
POST /example/_doc
{
"location" : {
"type" : "polygon",
"coordinates" : [
[ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ]
]
}
}
POST /example/_doc
{
"location" : "POLYGON ((100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0, 100.0 0.0))"
}
A ring-shaped region (a polygon with an inner hole):
POST /example/_doc
{
"location" : {
"type" : "polygon",
"coordinates" : [
[ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ],
[ [100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8], [100.2, 0.2] ]
]
}
}
POST /example/_doc
{
"location" : "POLYGON ((100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0, 100.0 0.0), (100.2 0.2, 100.8 0.2, 100.8 0.8, 100.2 0.8, 100.2 0.2))"
}
POST /example/_doc
{
"location" : {
"type" : "multipoint",
"coordinates" : [
[102.0, 2.0], [103.0, 2.0]
]
}
}
POST /example/_doc
{
"location" : "MULTIPOINT (102.0 2.0, 103.0 2.0)"
}
POST /example/_doc
{
"location" : {
"type" : "multilinestring",
"coordinates" : [
[ [102.0, 2.0], [103.0, 2.0], [103.0, 3.0], [102.0, 3.0] ],
[ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0] ],
[ [100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8] ]
]
}
}
POST /example/_doc
{
"location" : "MULTILINESTRING ((102.0 2.0, 103.0 2.0, 103.0 3.0, 102.0 3.0), (100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0), (100.2 0.2, 100.8 0.2, 100.8 0.8, 100.2 0.8))"
}
POST /example/_doc
{
"location" : {
"type" : "multipolygon",
"coordinates" : [
[ [[102.0, 2.0], [103.0, 2.0], [103.0, 3.0], [102.0, 3.0], [102.0, 2.0]] ],
[ [[100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0]],
[[100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8], [100.2, 0.2]] ]
]
}
}
POST /example/_doc
{
"location" : "MULTIPOLYGON (((102.0 2.0, 103.0 2.0, 103.0 3.0, 102.0 3.0, 102.0 2.0)), ((100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0, 100.0 0.0), (100.2 0.2, 100.8 0.2, 100.8 0.8, 100.2 0.8, 100.2 0.2)))"
}
POST /example/_doc
{
"location" : {
"type": "geometrycollection",
"geometries": [
{
"type": "point",
"coordinates": [100.0, 0.0]
},
{
"type": "linestring",
"coordinates": [ [101.0, 0.0], [102.0, 1.0] ]
}
]
}
}
Mapping params:
orientation: for a polygon's vertices, the winding direction used to connect the points into edges and form the polygon
ignore_malformed
ignore_z_value
coerce
The ip type can store IPv4 and IPv6 addresses.
PUT my_index
{
"mappings": {
"properties": {
"ip_addr": {
"type": "ip"
}
}
}
}
PUT my_index/_doc/1
{
"ip_addr": "192.168.1.1"
}
GET my_index/_search
{
"query": {
"term": {
"ip_addr": "192.168.0.0/16"
}
}
}
Here the /16 means the first 16 bits are the network part and the last 16 bits the host part, so the document indexed above is matched.
The corresponding mapping params:
boost
doc_values
index
null_value
store
The completion type exists for the completion suggester: it powers query autocompletion, without any typo-correction ability.
ES applies many optimizations to make these lookups very fast, at the cost of slower indexing; the structures are held in memory, so it suits relatively small data sets.
PUT music
{
"mappings": {
"properties" : {
"suggest" : {
"type" : "completion"
},
"title" : {
"type": "keyword"
}
}
}
}
Indexing documents:
PUT music/_doc/1?refresh
{
"suggest" : {
"input": [ "Nevermind", "Nirvana" ],
"weight" : 34
}
}
PUT music/_doc/1?refresh
{
"suggest" : [
{
"input": "Nevermind",
"weight" : 10
},
{
"input": "Nirvana",
"weight" : 3
}
]
}
PUT music/_doc/1?refresh
{
"suggest" : [ "Nevermind", "Nirvana" ]
}
PUT music/_doc/2?refresh
{
"suggest" : [ "my house is beautiful" ]
}
Querying:
POST music/_search?pretty
{
"suggest": {
"song-suggest" : {
"prefix" : "nir",
"completion" : {
"field" : "suggest"
}
}
}
}
POST music/_search?pretty
{
  "suggest": {
    "song-suggest" : {
      "prefix" : "nir",
      "completion" : {
        "field" : "suggest",
        "size" : 5,
        "skip_duplicates": true
      }
    }
  }
}
Both of the queries above return:
...
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 0,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"suggest" : {
"song-suggest" : [
{
"text" : "nir",
"offset" : 0,
"length" : 3,
"options" : [
{
"text" : "Nirvana",
"_index" : "music",
"_type" : "_doc",
"_id" : "1",
"_score" : 34.0,
"_source" : {
"suggest" : {
"input" : [
"Nevermind",
"Nirvana"
],
"weight" : 34
}
}
}
]
}
]
}
}
The following query returns nothing, because completion only matches from the beginning of the input; replace house with my and results come back:
POST music/_search?pretty
{
"suggest": {
"song-suggest" : {
"prefix" : "house",
"completion" : {
"field" : "suggest"
}
}
}
}
The mapping params that can be used:
analyzer: the analyzer used at index time, simple by default
search_analyzer: defaults to the same as analyzer
preserve_separators: whether separators are preserved, true by default; if false, a suggest query for foof can match Foo Fighters
preserve_position_increments:
max_input_length
A token_count field is really an integer: it takes a text value, analyzes it, and stores the number of tokens. It is usually used via the fields feature as an auxiliary sub-field of a text field.
PUT my_index
{
"mappings": {
"properties": {
"name": {
"type": "text",
"fields": {
"length": {
"type": "token_count",
"analyzer": "standard"
}
}
}
}
}
}
PUT my_index/_doc/1
{ "name": "John Smith" }
PUT my_index/_doc/2
{ "name": "Rachel Alice Williams" }
GET my_index/_search
{
"query": {
"term": {
"name.length": 3
}
}
}
mapping param
analyzer
enable_position_increments
boost
doc_values
index
null_value
store
The murmur3 type requires installing a plugin first:
sudo bin/elasticsearch-plugin install mapper-murmur3
PUT my_index
{
"mappings": {
"properties": {
"my_field": {
"type": "keyword",
"fields": {
"hash": {
"type": "murmur3"
}
}
}
}
}
}
# Example documents
PUT my_index/_doc/1
{
"my_field": "This is a document"
}
PUT my_index/_doc/2
{
"my_field": "This is another document"
}
GET my_index/_search
{
"aggs": {
"my_field_cardinality": {
"cardinality": {
"field": "my_field.hash"
}
}
}
}
Aggregating on the hash is considerably more efficient.
The annotated_text type provides something like named-entity tagging: you mark up a field's text, adding specific extra terms.
This also requires plugin support:
sudo bin/elasticsearch-plugin install mapper-annotated-text
A usage example:
PUT my_index
{
"mappings": {
"properties": {
"my_field": {
"type": "annotated_text"
}
}
}
}
PUT my_index/_doc/1
{
"my_field": "[Beck](Beck) announced a new tour"
}
PUT my_index/_doc/2
{
"my_field": "[Jeff Beck](Jeff+Beck&Guitarist) plays a strat"
}
# Example search
GET my_index/_search
{
"query": {
"term": {
"my_field": "Beck"
}
}
}
In this case only the first document is returned; the second is not,
because [Jeff Beck](Jeff+Beck&Guitarist) is a fixed format: the part in square brackets is the visible text, and the parenthesized values are indexed as single annotation tokens, behaving like injected synonyms.
For example, the following experiment:
GET my_index/_analyze
{
"field": "my_field",
"text":"Investors in [Apple](Apple+Inc.) rejoiced."
}
Returns:
{
"tokens": [
{
"token": "investors",
"start_offset": 0,
"end_offset": 9,
"type": "",
"position": 0
},
{
"token": "in",
"start_offset": 10,
"end_offset": 12,
"type": "",
"position": 1
},
{
"token": "Apple Inc.",
"start_offset": 13,
"end_offset": 18,
"type": "annotation",
"position": 2
},
{
"token": "apple",
"start_offset": 13,
"end_offset": 18,
"type": "",
"position": 2
},
{
"token": "rejoiced",
"start_offset": 19,
"end_offset": 27,
"type": "",
"position": 3
}
]
}
You can see that apple and Apple Inc. share the same index position, indexed like synonyms of each other.
The percolator type is probably most useful when you need to run queries in reverse against documents; see the reference here.
Example: a platform that stores users' interests, so that whenever new content arrives, the right content (notification alerts) is sent to the right users.
Example: users subscribe to specific topics, and as soon as a new article on a topic appears, interested users are notified.
It is somewhat about classifying docs.
Typical scenarios:
price monitoring
news alerts
...
It is used as follows,
though honestly the usage looks a little unfamiliar to me, probably just from lack of exposure:
PUT index
{
"mappings": {
"properties": {
"query" : {
"type" : "percolator"
},
"body" : {
"type": "text"
}
}
}
}
PUT index/_doc/1?refresh
{
"query" : {
"match" : {
"body" : "quick brown fox"
}
}
}
PUT index/_doc/2?refresh
{
"query" : {
"match" : {
"body" : "fox jumps over"
}
}
}
GET /index/_search
{
"query": {
"percolate" : {
"field" : "query",
"document" : {
"body" : "fox jumps over the lazy dog"
}
}
}
}
Returns:
{
"took" : 6,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.39229372,
"hits" : [
{
"_index" : "index",
"_type" : "_doc",
"_id" : "2",
"_score" : 0.39229372,
"_source" : {
"query" : {
"match" : {
"body" : "fox jumps over"
}
}
},
"fields" : {
"_percolator_document_slot" : [
0
]
}
},
{
"_index" : "index",
"_type" : "_doc",
"_id" : "1",
"_score" : 0.13076457,
"_source" : {
"query" : {
"match" : {
"body" : "quick brown fox"
}
}
},
"fields" : {
"_percolator_document_slot" : [
0
]
}
}
]
}
}
The structure of this response looks slightly different from the others.
Example two:
PUT percolator
{
"mappings": {
"properties": {
"dsl":{
"type": "percolator"
},
"message":{
"type": "text"
}
}
}
}
POST percolator/_doc/2
{
  "dsl": {
    "match": {
      "message": "to be better or bad "
    }
  }
}
Note: the message field referenced inside the stored dsl query must already be defined in the percolator index's mapping, or the write cannot be used normally.
GET percolator/_search
{
"query": {
"percolate": {
"field": "dsl",
"document": {
"message":"bad information"
}
}
}
}
Returns:
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 0.13076457,
"hits" : [
{
"_index" : "percolator",
"_type" : "_doc",
"_id" : "2",
"_score" : 0.13076457,
"_source" : {
"dsl" : {
"match" : {
"message" : "to be better or bad "
}
}
},
"fields" : {
"_percolator_document_slot" : [
0
]
}
}
]
}
How percolator queries work: the queries are stored in an index (call it index01); when you search with a document, the doc is first used to recall candidate queries, then the doc is indexed into an in-memory index (index02), the recalled queries are run against index02, and the matches are scored for relevance and returned.
The join type mainly helps you define parent-child relationships between docs within a single index.
A usage example:
PUT my_index
{
"mappings": {
"properties": {
"my_join_field": {
"type": "join",
"relations": {
"question": "answer"
}
}
}
}
}
PUT my_index/_doc/1?refresh
{
"text": "This is a question",
"my_join_field": {
"name": "question"
}
}
PUT my_index/_doc/2?refresh
{
"text": "This is another question",
"my_join_field": {
"name": "question"
}
}
PUT my_index/_doc/3?routing=1&refresh
{
"text": "This is an answer",
"my_join_field": {
"name": "answer",
"parent": "1"
}
}
PUT my_index/_doc/4?routing=1&refresh
{
"text": "This is another answer",
"my_join_field": {
"name": "answer",
"parent": "1"
}
}
GET my_index/_search
The query returns:
...
"hits" : {
"total" : {
"value" : 4,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 1.0,
"_source" : {
"text" : "This is a question",
"my_join_field" : {
"name" : "question"
}
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : 1.0,
"_source" : {
"text" : "This is another question",
"my_join_field" : {
"name" : "question"
}
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "3",
"_score" : 1.0,
"_routing" : "1",
"_source" : {
"text" : "This is an answer",
"my_join_field" : {
"name" : "answer",
"parent" : "1"
}
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "4",
"_score" : 1.0,
"_routing" : "1",
"_source" : {
"text" : "This is another answer",
"my_join_field" : {
"name" : "answer",
"parent" : "1"
}
}
}
]
}
Now let's try a more complex query:
GET my_index/_search
{
"query": {
"parent_id": {
"type": "answer",
"id": "1"
}
},
"aggs": {
"parents": {
"terms": {
"field": "my_join_field#question",
"size": 10
}
}
},
"script_fields": {
"parent": {
"script": {
"source": "doc['my_join_field#question']"
}
}
}
}
The query returns:
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.35667494,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "3",
"_score" : 0.35667494,
"_routing" : "1",
"fields" : {
"parent" : [
"1"
]
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "4",
"_score" : 0.35667494,
"_routing" : "1",
"fields" : {
"parent" : [
"1"
]
}
}
]
},
"aggregations" : {
"parents" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "1",
"doc_count" : 2
}
]
}
}
To speed up join queries, ES uses global ordinals, which must be rebuilt whenever the docs of the current shard change; the more parent ids there are, the more expensive the rebuild.
For an index with a join field, global ordinals are rebuilt eagerly by default, as part of the refresh.
If eager rebuilding is disabled, the rebuild happens on the first join query or aggregation, which may cause that query to time out; a sketch of disabling it follows below.
There are also some other usage restrictions.
My feeling is that joins may still run into performance problems on large data sets, especially data that is frequently updated.
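A sketch of disabling the eager rebuild on the join field (assuming a fresh my_index):
PUT my_index
{
  "mappings": {
    "properties": {
      "my_join_field": {
        "type": "join",
        "relations": { "question": "answer" },
        "eager_global_ordinals": false
      }
    }
  }
}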
The rank_feature type is said to be part of ES's machine-learning support.
Queries against it must use the rank_feature query, a special query type alongside term, match_all, and the rest; see the example below.
DELETE my_index
PUT my_index
{
"mappings": {
"properties": {
"pagerank": {
"type": "rank_feature"
},
"url_length": {
"type": "rank_feature",
"positive_score_impact": false
}
}
}
}
PUT my_index/_doc/1
{
"pagerank": 8,
"url_length": 22
}
PUT my_index/_doc/2
{
"pagerank": 9,
"url_length": 22
}
GET my_index/_search
{
"query": {
"rank_feature": {
"field": "pagerank"
}
},
"explain": true
}
The query returns:
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.5142857,
"hits" : [
{
"_shard" : "[my_index][0]",
"_node" : "ADi2c-NmTnWhTmb2dDlCeA",
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : 0.5142857,
"_source" : {
"pagerank" : 9,
"url_length" : 22
},
"_explanation" : {
"value" : 0.5142857,
"description" : "Saturation function on the _feature field for the pagerank feature, computed as w * S / (S + k) from:",
"details" : [
{
"value" : 1.0,
"description" : "w, weight of this function",
"details" : [ ]
},
{
"value" : 8.5,
"description" : "k, pivot feature value that would give a score contribution equal to w/2",
"details" : [ ]
},
{
"value" : 9.0,
"description" : "S, feature value",
"details" : [ ]
}
]
}
},
{
"_shard" : "[my_index][0]",
"_node" : "ADi2c-NmTnWhTmb2dDlCeA",
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 0.4848485,
"_source" : {
"pagerank" : 8,
"url_length" : 22
},
"_explanation" : {
"value" : 0.4848485,
"description" : "Saturation function on the _feature field for the pagerank feature, computed as w * S / (S + k) from:",
"details" : [
{
"value" : 1.0,
"description" : "w, weight of this function",
"details" : [ ]
},
{
"value" : 8.5,
"description" : "k, pivot feature value that would give a score contribution equal to w/2",
"details" : [ ]
},
{
"value" : 8.0,
"description" : "S, feature value",
"details" : [ ]
}
]
}
}
]
}
}
I'll skip this formula for now; it should come up again with the rank_feature query.
rank_features is similar to the previous type, except that it can store a whole vector of features.
A usage example:
PUT my_index
{
"mappings": {
"properties": {
"topics": {
"type": "rank_features"
}
}
}
}
PUT my_index/_doc/1
{
"topics": {
"politics": 20,
"economics": 50.8
}
}
PUT my_index/_doc/2
{
"topics": {
"politics": 5.2,
"sports": 80.1
}
}
GET my_index/_search
{
"query": {
"rank_feature": {
"field": "topics.politics"
}
}
}
dense_vector stores a dense vector, which really is just what the name says, haha.
It is generally used when computing document scores.
In principle a vector's dimension should not exceed 1024.
PUT my_index
{
"mappings": {
"properties": {
"my_vector": {
"type": "dense_vector",
"dims": 3
},
"my_text" : {
"type" : "keyword"
}
}
}
}
PUT my_index/_doc/1
{
"my_text" : "text1",
"my_vector" : [0.5, 10, 6]
}
PUT my_index/_doc/2
{
"my_text" : "text2",
"my_vector" : [-0.5, 10, 10]
}
GET my_index/_search
{
"query": {
"script_score": {
"query": {
"match_all": {}
},
"script": {
"source": "cosineSimilarity(params.queryVector, doc['my_vector'])",
"params": {
"queryVector": [4, 3.4, -0.2]
}
}
}
}
}
Returns:
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.5674877,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 0.5674877,
"_source" : {
"my_text" : "text1",
"my_vector" : [
0.5,
10,
6
]
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : 0.4035343,
"_source" : {
"my_text" : "text2",
"my_vector" : [
-0.5,
10,
10
]
}
}
]
}
sparse_vector stores a sparse vector, designed for vectors with many dimensions but few non-zero entries.
DELETE my_index
PUT my_index
{
"mappings": {
"properties": {
"my_vector": {
"type": "sparse_vector"
},
"my_text" : {
"type" : "keyword"
}
}
}
}
PUT my_index/_doc/1
{
"my_text" : "text1",
"my_vector" : {"1": 0.5, "5": -0.5, "100": 1}
}
PUT my_index/_doc/2
{
"my_text" : "text2",
"my_vector" : {"103": 0.5, "4": -0.5, "5": 1, "11" : 1.2}
}
GET my_index/_search
{
"query": {
"script_score": {
"query": {
"match_all": {}
},
"script": {
"source": "cosineSimilaritySparse(params.queryVector, doc['my_vector'])",
"params": {
"queryVector": {"2": 0.5, "10" : 111.3, "50": -1.3, "113": 14.8, "4545": 156.0}
}
}
}
}
}
Returns:
...
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.0,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 0.0,
"_source" : {
"my_text" : "text1",
"my_vector" : {
"1" : 0.5,
"5" : -0.5,
"100" : 1
}
}
},
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : 0.0,
"_source" : {
"my_text" : "text2",
"my_vector" : {
"103" : 0.5,
"4" : -0.5,
"5" : 1,
"11" : 1.2
}
}
}
]
}
The search_as_you_type type feels a bit like a prefix query, with plenty of optimization underneath to make queries much faster,
at the cost of quite a bit of extra storage. Consider the following example:
PUT my_index
{
"mappings": {
"properties": {
"my_field": {
"type": "search_as_you_type",
"analyzer": "standard"
}
}
}
}
The mapping above produces the following fields:
my_field: tokens produced by the configured analyzer
my_field._2gram: tokens of my_field combined into 2-grams by the shingle token filter
my_field._3gram: tokens of my_field combined into 3-grams by the shingle token filter
my_field._index_prefix: tokens of my_field._3gram further processed by the edge ngram token filter
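The intended query pattern, per the search_as_you_type docs, is a multi_match of type bool_prefix across the root field and the shingle subfields; a sketch:
GET my_index/_search
{
  "query": {
    "multi_match": {
      "query": "brown f",
      "type": "bool_prefix",
      "fields": [
        "my_field",
        "my_field._2gram",
        "my_field._3gram"
      ]
    }
  }
}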
Now test each field with the analyze API:
POST my_index/_analyze
{
"field": "my_field",
"text": ["quick brown fox "]
}
Returns:
{
"tokens" : [
{
"token" : "quick",
"start_offset" : 0,
"end_offset" : 5,
"type" : "",
"position" : 0
},
{
"token" : "brown",
"start_offset" : 6,
"end_offset" : 11,
"type" : "",
"position" : 1
},
{
"token" : "fox",
"start_offset" : 12,
"end_offset" : 15,
"type" : "",
"position" : 2
}
]
}
Testing the 2-gram:
POST my_index/_analyze
{
"field": "my_field._2gram",
"text": ["quick brown fox"]
}
Returns:
{
"tokens" : [
{
"token" : "quick brown",
"start_offset" : 0,
"end_offset" : 11,
"type" : "shingle",
"position" : 0
},
{
"token" : "brown fox",
"start_offset" : 6,
"end_offset" : 15,
"type" : "shingle",
"position" : 1
}
]
}
Testing the 3-gram:
POST my_index/_analyze
{
"field": "my_field._3gram",
"text": ["quick brown fox"]
}
Returns:
{
"tokens" : [
{
"token" : "quick brown fox",
"start_offset" : 0,
"end_offset" : 15,
"type" : "shingle",
"position" : 0
}
]
}
Testing _index_prefix:
POST my_index/_analyze
{
"field": "my_field._index_prefix",
"text": ["quick brown fox"]
}
Returns:
{
"tokens" : [
{"token" : "q", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "qu", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "qui", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quic", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick ", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick b", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick br", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick bro", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick brow", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick brown", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick brown ", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick brown f", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick brown fo", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "quick brown fox", "start_offset" : 0, "end_offset" : 15, "type" : "shingle", "position" : 0 },
{"token" : "b", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "br", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "bro", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "brow", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "brown", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "brown ", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "brown f", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "brown fo", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "brown fox", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "brown fox ", "start_offset" : 6, "end_offset" : 15, "type" : "shingle", "position" : 1 },
{"token" : "f", "start_offset" : 12, "end_offset" : 15, "type" : "shingle", "position" : 2 },
{"token" : "fo", "start_offset" : 12, "end_offset" : 15, "type" : "shingle", "position" : 2 },
{"token" : "fox", "start_offset" : 12, "end_offset" : 15, "type" : "shingle", "position" : 2 },
{"token" : "fox ", "start_offset" : 12, "end_offset" : 15, "type" : "shingle", "position" : 2 },
{"token" : "fox ", "start_offset" : 12, "end_offset" : 15, "type" : "shingle", "position" : 2 }
]
}
Because the output here is so long, I compacted each token onto a single line for readability.
Why it looks like this isn't entirely clear to me; it seems a space is appended even to single terms, which looks odd.
For example:
POST my_index/_analyze
{
"field": "my_field._3gram",
"text": ["quick"]
}
Returns:
{
"tokens" : [ ]
}
POST my_index/_analyze
{
"field": "my_field._index_prefix",
"text": ["quick"]
}
Returns:
{
"tokens" : [
{"token" : "q", "start_offset" : 0, "end_offset" : 5, "type" : "shingle", "position" : 0 },
{"token" : "qu", "start_offset" : 0, "end_offset" : 5, "type" : "shingle", "position" : 0 },
{"token" : "qui", "start_offset" : 0, "end_offset" : 5, "type" : "shingle", "position" : 0 },
{"token" : "quic", "start_offset" : 0, "end_offset" : 5, "type" : "shingle", "position" : 0 },
{"token" : "quick", "start_offset" : 0, "end_offset" : 5, "type" : "shingle", "position" : 0 },
{"token" : "quick ", "start_offset" : 0, "end_offset" : 5, "type" : "shingle", "position" : 0 },
{"token" : "quick ", "start_offset" : 0, "end_offset" : 5, "type" : "shingle", "position" : 0 }
]
}
Again two extra space-padded tokens show up; I'm not sure what they are for. (Presumably this is the shingle filter padding the stream when there are not enough tokens to fill the 2-gram and 3-gram windows, but that is a guess.)
The alias type simply defines an alias for another field, which doesn't seem all that useful. Searches against the alias behave exactly like searches against the field it points to, but index and update operations do not work through it.
Example:
PUT trips
{
"mappings": {
"properties": {
"distance": {
"type": "long"
},
"route_length_miles": {
"type": "alias",
"path": "distance"
},
"transit_mode": {
"type": "keyword"
}
}
}
}
PUT trips/_doc/1
{
"distance":12345
}
PUT trips/_doc/2
{
"distance":12
}
GET trips/_search
{
"query": {
"range" : {
"route_length_miles" : {
"lte" : 39
}
}
}
}
The flattened type lets you define a single field mapping for a whole object; internally the values at every level are stored as something keyword-like, so the object can be queried in a simpler way.
Honestly I haven't fully worked out its value either; the clearest benefit is preventing mapping explosion.
A usage example:
PUT bug_reports
{
"mappings": {
"properties": {
"title": {
"type": "text"
},
"labels": {
"type": "flattened"
}
}
}
}
POST bug_reports/_doc/1
{
"title": "Results are not sorted correctly.",
"labels": {
"priority": "urgent",
"release": ["v1.2.5", "v1.3.0"],
"timestamp": {
"created": 1541458026,
"closed": 1541457010
}
}
}
POST bug_reports/_search
{
"query": {
"term": {"labels": "urgent"}
}
}
POST bug_reports/_search
{
"query": {
"term": {"labels.release": "v1.3.0"}
}
}
Both of these queries work fine.
Its usage has a lot in common with keyword.
The supported queries are:
term, terms, and terms_set
prefix
range
match and multi_match
query_string and simple_query_string
exists
The mapping params that can be set:
boost
depth_limit
doc_values
eager_global_ordinals
ignore_above
index
index_options
null_value
similarity
split_queries_on_whitespace: whether full-text queries should split the input on whitespace
ES supports arrays natively. With dynamic mapping, the type of this field in the first doc added determines its type, and the types inside an array must stay consistent;
data like [ 10, "some string" ]
will be rejected.
Example:
PUT my_index/_doc/1
{
"message": "some arrays in this document...",
"tags": [ "elasticsearch", "wow" ],
"lists": [
{
"name": "prog_list",
"description": "programming list"
},
{
"name": "cool_list",
"description": "cool stuff list"
}
]
}
PUT my_index/_doc/2
{
"message": "no arrays in this document...",
"tags": "elasticsearch",
"lists": {
"name": "prog_list",
"description": "programming list"
}
}
GET my_index/_search
{
"query": {
"match": {
"tags": "elasticsearch"
}
}
}
Notice that the second document's values are not arrays, yet it is still indexed fine. ES's array support comes naturally from Lucene's design: Lucene already stores a text field as many tokens, so storing arrays fits right in.
Multi-fields mean that one field can be stored in the index in several different ways, supported mainly through the fields mapping param.
Example:
PUT my_index
{
"mappings": {
"properties": {
"city": {
"type": "text",
"fields": {
"raw": {
"type": "keyword"
}
}
}
}
}
}
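With that mapping, the same value can be matched full-text via city and sorted or aggregated exactly via city.raw; a sketch:
PUT my_index/_doc/1
{ "city": "New York" }

GET my_index/_search
{
  "query": {
    "match": { "city": "new york" }
  },
  "sort": { "city.raw": "asc" },
  "aggs": {
    "cities": {
      "terms": { "field": "city.raw" }
    }
  }
}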