[EOS In Depth] Bandwidth Rate Limiting and Storage Usage Limits (Part 4)

DAWN-472 ⁃ Bandwidth Rate Limiting & Storage Usage Limits


bytemaster opened this issue on 6 Sep 2017


This is the fourth installment in a long series.



bytemaster commented on 8 Sep 2017


Some random thoughts on how to implement database memory accounting efficiently:



The Relevant Data

1 memory_used_by_contract

2 memory_available_to_contract




Relevant Actions

1 reduce_available_memory (so that user can claim tokens)

2 adjust_used_memory

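For illustration only, here is a minimal C++ sketch of how this per-contract data and the two actions above might fit together. The field and action names come from the lists above; the record type, memory orderings, and error handling are assumptions, not EOS's actual implementation:

```cpp
#include <atomic>
#include <cstdint>
#include <stdexcept>

// Hypothetical per-contract accounting record; field names follow the list above.
struct contract_memory_record {
   std::atomic<int64_t> memory_used_by_contract{0};      // adjusted very frequently
   int64_t              memory_available_to_contract{0}; // changed rarely, under contract scope
};

// reduce_available_memory: rare, assumed to run with the contract scope locked,
// so a plain read-modify-write on the available field is enough here.
inline void reduce_available_memory(contract_memory_record& rec, int64_t amount) {
   int64_t used = rec.memory_used_by_contract.load(std::memory_order_relaxed);
   if (rec.memory_available_to_contract - amount < used)
      throw std::runtime_error("cannot release stake below current usage");
   rec.memory_available_to_contract -= amount;   // user can now claim the freed tokens
}

// adjust_used_memory: very frequent and potentially concurrent, hence the atomic counter.
inline void adjust_used_memory(contract_memory_record& rec, int64_t delta) {
   rec.memory_used_by_contract.fetch_add(delta, std::memory_order_relaxed);
}
```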



Required Invariants

1 all transactions succeed regardless of which order they are applied

2 all state is deterministic regardless of which order operations are applied




Assumptions

1 withdraw_stake_from_contract is a rare operation requiring only contract scope

2 adjust_used_memory occurs very frequently




Atomic Algorithm

The biggest problem with our parallel algorithm is the UNDO database which must accurately revert the transaction if it fails. We cannot use the traditional approach to "backup data before modifying it" because that would be a sequential process (multiple transactions attempting to do this at the same time would be a mess). Instead, we need atomic update and atomic undo.

1 use atomic operations to increment or decrement memory used

2 if( producing )

3 check that used <= available

4 if not true then use atomic operations to undo step 1 and throw exception, transaction will not be accepted
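A minimal sketch of these four steps, assuming the hypothetical atomic counter from the record above and a producing flag supplied by the caller (names and error handling are illustrative, not the actual EOS code):

```cpp
#include <atomic>
#include <cstdint>
#include <stdexcept>

// Steps 1-4: optimistically reserve, validate only while producing, and
// atomically unwind + throw if the reservation would exceed what is available.
inline void charge_memory(std::atomic<int64_t>& used,
                          int64_t               available,
                          int64_t               delta,
                          bool                  producing) {
   used.fetch_add(delta, std::memory_order_relaxed);            // 1. atomic adjustment
   if (producing) {                                             // 2. only check while producing
      if (used.load(std::memory_order_relaxed) > available) {   // 3. used <= available?
         used.fetch_sub(delta, std::memory_order_relaxed);      // 4. atomic undo of step 1,
         throw std::runtime_error("transaction exceeds contract storage"); // transaction rejected
      }
   }
}
```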




The key to making this work is a viable undo infrastructure that will accurately revert the used_memory without requiring locks on this field.

To do this I imagine a new kind of database table that does not use the current UNDO infrastructure. Instead it depends upon each thread creating a stack of "atomic do" and "atomic undo" operations. Furthermore, it requires that the result of reading data modified by these atomic operations is only possible "while producing" and that no other side effects except "success | failure" are possible based upon the result of reading said memory.

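As a sketch of the per-thread do/undo stack this paragraph imagines (the class name and interface are assumptions; the issue only describes the idea): every atomic "do" is recorded together with its matching atomic "undo", and a failed transaction pops the stack in reverse order, so no locks are needed to revert used_memory.

```cpp
#include <functional>
#include <vector>

// Hypothetical per-thread undo stack: each atomic "do" registers the matching
// atomic "undo" so a failing transaction can be reverted without locks.
class atomic_undo_stack {
public:
   void apply(const std::function<void()>& do_op, std::function<void()> undo_op) {
      do_op();                               // perform the atomic "do" immediately
      _undo.push_back(std::move(undo_op));   // remember how to revert it
   }

   void undo_all() {                         // transaction failed: revert in reverse order
      for (auto it = _undo.rbegin(); it != _undo.rend(); ++it) (*it)();
      _undo.clear();
   }

   void commit() { _undo.clear(); }          // transaction accepted: drop the undo ops

private:
   std::vector<std::function<void()>> _undo;
};

// Usage with the memory charge above (sketch):
//   stack.apply([&]{ used.fetch_add(delta); }, [&]{ used.fetch_sub(delta); });
```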



If we maintain this pattern and we have three threads accessing total_used property such that not all 3 can be included at once then we have the following guarantee:

none, A, B, C, AB, AC or BC will be the only valid acceptance. This works because we always increment first, then check, then decrement and fail. At the time we check we are over-optimistic in used space. When we fail we unwind. Worst case scenario is all 3 add, then all three check, then all 3 unwind. This will happen when storage is near full.
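To make the guarantee concrete, here is a toy simulation (not from the issue) of three threads each trying to reserve 40 units against a 100-unit quota: because every thread increments before it checks and unwinds on failure, at most two can be accepted, and the possible outcomes are exactly none, A, B, C, AB, AC or BC.

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

int main() {
   std::atomic<int64_t> used{0};
   std::atomic<int>     accepted{0};
   const int64_t available = 100;   // quota is nearly full
   const int64_t delta     = 40;    // each of A, B, C asks for 40

   auto worker = [&] {
      used.fetch_add(delta);        // optimistic increment first
      if (used.load() > available)
         used.fetch_sub(delta);     // over quota: unwind and fail
      else
         accepted.fetch_add(1);     // within quota: accept
   };

   std::thread a(worker), b(worker), c(worker);
   a.join(); b.join(); c.join();

   // At most two reservations can succeed; the worst case (all add, all check,
   // all unwind) leaves zero accepted, exactly as described above.
   std::printf("accepted=%d used=%lld\n", accepted.load(), (long long) used.load());
   return 0;
}
```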




The above is only true when contracts attempt to increase storage. If one of the messages decreases storage then when it unwinds total storage use could increase again. This means that after all transactions either succeed or fail the total used might be greater than the amount allowed. In this case the block producers can freeze the account until they fund it with more capacity. Letting it slide could open an attack where they intentionally "fake a delete".

This process will work for deciding to increment or fail the state of this field, but it does not allow other threads to access the data. For example, the currency contract could not reduce the available capacity so that it could claim tokens while this is going on.




Other options:

1 maintain an increment-only usage field and an increment-only free field. While holding a lock on the scope (to increase or decrease user storage, or at some other time) these fields could be normalized. In this case users get delayed credit for freeing memory.
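A sketch of this alternative under assumed names: both counters only ever grow, quota checks look only at the grow-only allocation counter (hence the delayed credit), and a normalization step run while the scope lock is held folds the freed amount back in.

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>

// Hypothetical increment-only counters for one contract scope.
struct scope_usage {
   std::atomic<int64_t> allocated{0};   // grows whenever memory is used
   std::atomic<int64_t> freed{0};       // grows whenever memory is released
   std::mutex           scope_lock;     // held only for rare scope-wide operations
};

// Quota checks see only 'allocated', so releases are not credited until the
// next normalization - the delayed credit mentioned above.
inline bool within_quota(const scope_usage& s, int64_t available) {
   return s.allocated.load() <= available;
}

// Fold 'freed' back into 'allocated'; run while the scope lock is held,
// e.g. while increasing or decreasing the user's staked storage.
inline void normalize(scope_usage& s) {
   std::lock_guard<std::mutex> guard(s.scope_lock);
   s.allocated.fetch_sub(s.freed.exchange(0));
}
```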




(To be continued.)

Original issue: http://github.com/EOSIO/eos/issues/353


Translated by: Lochaiching

Proofread by: Sheldon


