Document-level locking, implemented with a script:
POST /fs/lock/1/_update
{
"upsert": { "process_id": 123 },
"script": "if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';"
"params": {
"process_id": 123
}
}
params contains a process_id: the unique id of the process that wants to perform the create/update/delete. For example, in a Java system you can generate a UUID-based thread id for every thread when it starts, and assign the whole process a UUID at startup as well; process_id + thread_id then uniquely identifies one thread of one process. Any UUID will do as this unique id.
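A minimal sketch of that id generation, assuming a plain Java application (the class and field names here are illustrative):

import java.util.UUID;

public class LockIdentity {
    // generated once when the JVM process starts
    public static final String PROCESS_ID = UUID.randomUUID().toString();

    // generated lazily, once per thread
    private static final ThreadLocal<String> THREAD_ID =
            ThreadLocal.withInitial(() -> UUID.randomUUID().toString());

    // process_id + thread_id identifies one thread of one process
    public static String current() {
        return PROCESS_ID + ":" + THREAD_ID.get();
    }
}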
process_id matters because the lock document records the id of the process that locked the corresponding doc; when another process comes along, it can tell that this piece of data has already been locked by someone else.
assert false: if the caller is not the process that acquired the lock, throw an exception.
ctx.op = 'noop': make no modification at all.
If the document has not been locked before, i.e. /fs/lock/1 does not exist yet (nobody has locked doc id=1), then by upsert semantics an index operation is performed: a /fs/lock/1 document is created with the content of upsert, so process_id is set to 123 and the script is not executed. At this point the process with process_id=123 has locked this doc.
If the document is already locked, i.e. /fs/lock/1 exists (doc id=1 has been locked by some process), then an update is performed and the script runs, comparing process_id. If it matches, the same process that previously locked this doc has simply come back, perhaps to perform another operation, so no error is raised, nothing is modified, and success is returned. If process_id does not match, the doc is locked by a different process, and the script raises an error.
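As a rough client-side sketch of the acquire step (assuming Java 11+'s built-in java.net.http.HttpClient, a node at localhost:9200, and process ids passed as strings; the class name EsDocLock and method name tryLock are illustrative, not part of the course code):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EsDocLock {
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    // Returns true if this process now holds (or already held) the lock on /fs/lock/{docId};
    // a 400 response means the script's "assert false" fired, i.e. another process holds it.
    static boolean tryLock(String docId, String processId) throws Exception {
        String body = "{"
                + "\"upsert\": { \"process_id\": \"" + processId + "\" },"
                + "\"script\": \"if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';\","
                + "\"params\": { \"process_id\": \"" + processId + "\" }"
                + "}";
        HttpRequest req = HttpRequest.newBuilder(
                        URI.create("http://localhost:9200/fs/lock/" + docId + "/_update"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp = HTTP.send(req, HttpResponse.BodyHandlers.ofString());
        // 2xx covers both "created" (new lock) and "noop" (we already held it);
        // a 400 means the lock belongs to someone else.
        return resp.statusCode() >= 200 && resp.statusCode() < 300;
    }
}

The caller only has to distinguish success from the 400 that the failed assert produces; "created" and "noop" both mean the lock is held.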
For example, suppose the lock document already exists:
/fs/lock/1
{
"process_id": 123
}
POST /fs/lock/1/_update
{
"upsert": { "process_id": 123 },
"script": "if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';"
"params": {
"process_id": 123
}
}
In the script, ctx._source.process_id is 123 (the value stored in the lock document).
process_id is the value carried in the params of the locking upsert request.
If the two process_ids are equal, the process that already holds the lock is simply locking again, probably to perform another operation, so it is not blocked; the same process_id never blocks itself. ctx.op = 'noop' does nothing and success is returned.
Now suppose one process (123) already holds the lock and a different process (234) sends the locking request:
/fs/lock/1
{
"process_id": 123
}
POST /fs/lock/1/_update
{
"upsert": { "process_id": 123 },
"script": "if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';"
"params": {
"process_id": 234
}
}
"script": "if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';"
ctx._source.process_id:123
process_id: 234
To avoid repeating the inline script, store it as a file script, scripts/judge-lock.groovy (for Groovy file scripts this usually lives in the node's config/scripts directory):
if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';
The lock is then acquired through the file script:
POST /fs/lock/1/_update
{
"upsert": { "process_id": 123 },
"script": {
"lang": "groovy",
"file": "judge-lock",
"params": {
"process_id": 123
}
}
}
{
"_index": "fs",
"_type": "lock",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
}
}
GET /fs/lock/1
{
"_index": "fs",
"_type": "lock",
"_id": "1",
"_version": 1,
"found": true,
"_source": {
"process_id": 123
}
}
Now a different process, with process_id=234, tries to acquire the same lock:
POST /fs/lock/1/_update
{
"upsert": { "process_id": 234 },
"script": {
"lang": "groovy",
"file": "judge-lock",
"params": {
"process_id": 234
}
}
}
{
"error": {
"root_cause": [
{
"type": "remote_transport_exception",
"reason": "[4onsTYV][127.0.0.1:9300][indices:data/write/update[s]]"
}
],
"type": "illegal_argument_exception",
"reason": "failed to execute script",
"caused_by": {
"type": "script_exception",
"reason": "error evaluating judge-lock",
"caused_by": {
"type": "power_assertion_error",
"reason": "assert false\n"
},
"script_stack": [],
"script": "",
"lang": "groovy"
}
},
"status": 400
}
The original process, process_id=123, can re-acquire the lock without being blocked:
POST /fs/lock/1/_update
{
"upsert": { "process_id": 123 },
"script": {
"lang": "groovy",
"file": "judge-lock",
"params": {
"process_id": 123
}
}
}
{
"_index": "fs",
"_type": "lock",
"_id": "1",
"_version": 1,
"result": "noop",
"_shards": {
"total": 0,
"successful": 0,
"failed": 0
}
}
While holding the lock, process 123 performs its actual change on the file document:
POST /fs/file/1/_update
{
"doc": {
"name": "README1.txt"
}
}
{
"_index": "fs",
"_type": "file",
"_id": "1",
"_version": 4,
"result": "updated",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
}
}
When it is finished, the process releases all of its locks: refresh the lock index, search for every lock document carrying its process_id, and bulk-delete them.
POST /fs/_refresh
GET /fs/lock/_search?scroll=1m
{
"query": {
"term": {
"process_id": 123
}
}
}
{
"_scroll_id": "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAACPkFjRvbnNUWVZaVGpHdklqOV9zcFd6MncAAAAAAAAj5RY0b25zVFlWWlRqR3ZJajlfc3BXejJ3AAAAAAAAI-YWNG9uc1RZVlpUakd2SWo5X3NwV3oydwAAAAAAACPnFjRvbnNUWVZaVGpHdklqOV9zcFd6MncAAAAAAAAj6BY0b25zVFlWWlRqR3ZJajlfc3BXejJ3",
"took": 51,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "fs",
"_type": "lock",
"_id": "1",
"_score": 1,
"_source": {
"process_id": 123
}
}
]
}
}
PUT /fs/lock/_bulk
{ "delete": { "_id": 1}}
{
"took": 20,
"errors": false,
"items": [
{
"delete": {
"found": true,
"_index": "fs",
"_type": "lock",
"_id": "1",
"_version": 2,
"result": "deleted",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 200
}
}
]
}
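A rough client-side sketch of this release flow, under the same assumptions as the tryLock sketch above. For brevity it uses one bounded search instead of paging through the scroll, and pulls the _id values out with a regex instead of a JSON parser; both are simplifications, and the class and method names are again illustrative:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EsLockRelease {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final String ES = "http://localhost:9200";

    // Releases every lock held by processId: refresh the lock index, search for its
    // lock documents, then bulk-delete them (mirrors the _refresh / _search / _bulk calls above).
    static void releaseAll(String processId) throws Exception {
        post("/fs/_refresh", "");

        String hits = post("/fs/lock/_search",
                "{ \"query\": { \"term\": { \"process_id\": \"" + processId + "\" } }, \"size\": 1000 }");

        // crude extraction of the matching _id values; a real client would use a JSON library
        List<String> ids = new ArrayList<>();
        Matcher m = Pattern.compile("\"_id\"\\s*:\\s*\"([^\"]+)\"").matcher(hits);
        while (m.find()) {
            ids.add(m.group(1));
        }
        if (ids.isEmpty()) {
            return;
        }

        // one delete action line per lock document, newline-terminated as _bulk requires
        StringBuilder bulk = new StringBuilder();
        for (String id : ids) {
            bulk.append("{ \"delete\": { \"_id\": \"").append(id).append("\" } }\n");
        }
        post("/fs/lock/_bulk", bulk.toString());
    }

    private static String post(String path, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(ES + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return HTTP.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}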
Now process 234 tries to lock again:
POST /fs/lock/1/_update
{
"upsert": { "process_id": 234 },
"script": {
"lang": "groovy",
"file": "judge-lock",
"params": {
"process_id": 234
}
}
}
This time locking as process_id=234 succeeds.
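Tying the steps together with the hypothetical helpers sketched earlier (tryLock, releaseAll), a process would typically wrap its change like this:

// illustrative flow built on the sketches above; "1" is the id of the /fs/file document being edited
static void editFileDoc() throws Exception {
    String processId = LockIdentity.PROCESS_ID;
    if (!EsDocLock.tryLock("1", processId)) {
        return;                               // another process holds the lock: retry later or give up
    }
    try {
        // ... do the real work here, e.g. POST /fs/file/1/_update ...
    } finally {
        EsLockRelease.releaseAll(processId);  // always release, even if the update fails
    }
}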