著名的区块链不可能三角由以太坊创始人Vitalik Buterin首次提出,具体为:去中心化(Decentralization)、安全性(Security)与可扩展性(Scalability)三者难以同时兼顾。
以太坊面临的扩容问题即为不可能三角中的可扩展性问题。为此,针对以太坊出现了大量的Layer 2(L2)扩容方案,这些扩容方案致力于在继承L1安全性的前提下,提升交易吞吐量并降低交易成本。
Polygon zkEVM为L2 rollup解决方案,在L1(Layer 1)以太坊链上集成了数据可用性和execution verification,从而可确保L2 state transition的安全性和可靠性。本技术文档描述了Polygon团队为实现该目的所做的架构设计及实现。
为确保交易的固化,以及Polygon zkEVM L2 rollup中state transition的正确性,协议中涉及的主要元素有:
Trusted Sequencer:该角色负责接收用户的L2交易,对这些L2交易进行排序,生成batches,并以sequences的形式将这些batches提交到L1合约的storage slots中。
Trusted Aggregator:该角色负责获取Trusted Sequencer已提交的L2 batches,使用某特殊的链下EVM解析器计算执行这些transactions batches所得到的L2 State,并生成相应的computational integrity(CI)ZKP(Zero-Knowledge Proof)证明。
L1 PolygonZkEVM.sol合约:Sequencer向该L1合约提交a sequence of transaction batches,可将L1 PolygonZkEVM.sol合约看成是sequences的历史仓库。
由此可知,整个数据可用性以及交易执行的验证都仅依赖于L1安全假设,且最终,节点仅需要依赖L1上的数据来保持每个L2 State transition的同步。
L2网络节点会在3个不同的时点更新同步其local state,对应地,L2 State分为3个阶段:Trusted State、Virtual State和Consolidated State。
从batch的角度来看,L2 State各阶段的时间轴以及触发阶段变更的操作见图2。
zkEVM节点为一个软件包,包含了运行zkEVM网络所需的所有元素,节点可以三种模式启动:
1)Sequencer模式:从pending交易池中读取交易,构建batches,并将这些batches以sequences的形式提交到L1合约。
2)Aggregator模式:为已sequenced的batches execution生成Zero-Knowledge CI proof,并提交到L1合约以固化L2 State。
3)RPC模式:对外提供与以太坊兼容的JSON RPC接口,供用户查询状态并提交L2交易。
与L1交易一样,L2交易也是由用户通过钱包创建并使用私钥签名。事实上,Polygon zkEVM的L2 EVM提供了与L1以太坊完全相同的用户体验。
用户与zkEVM的交互通过JSON RPC来实现,该JSON RPC与以太坊RPC完全兼容,使得任何与EVM兼容的应用(如钱包软件)都可原生兼容zkEVM。
一旦交易生成并签名,将通过JSON RPC接口发送给Trusted Sequencer节点。交易将会存储在pending交易池中,等待被sequencer选中执行或者丢弃。
Trusted Sequencer会从pending交易池中获取交易、排序并将其打包到transaction batches中,并通过执行这些batches来更新其本地L2 State。
一旦transactions batches被添加到Trusted Sequencer的L2 State实例中,可立即通过broadcast服务分享广播给其它zkEVM节点,使得其它节点也可获得该trusted state。
注意,通过依赖Trusted Sequencer,可实现交易的快速固化(要比依赖L1更快),但是,相应的L2 State也将处于trusted state,直到该batch被提交到L1合约,才进入Virtual State。
用户通常与trusted L2 State交互,不过,由于特定的协议特性(后续将提及),L2交易的验证流程用时将相对较长,通常约为30分钟,极端情况下为2周。因此,用户应注意其高价值交易所关联的潜在风险,特别是对于不可逆转的交易——具有L2之外影响的交易,如off-ramps、over-the-counter transactions(场外交易)以及alternative bridges。
Trusted Sequencer必须按L1 PolygonZkEVM.sol合约中约定的特殊格式来打包交易,具体见BatchData结构体:
/**
* @notice Struct which will be used to call sequenceBatches
* @param transactions L2 ethereum transactions EIP-155 or pre-EIP-155 with signature:
* EIP-155: rlp(nonce, gasprice, gasLimit, to, value, data, chainid, 0, 0,) || v || r || s
* pre-EIP-155: rlp(nonce, gasprice, gasLimit, to, value, data) || v || r || s
* @param globalExitRoot Global exit root of the batch
* @param timestamp Sequenced timestamp of the batch
* @param minForcedTimestamp Minimum timestamp of the force batch data, empty when non forced batch
*/
struct BatchData {
bytes transactions;
bytes32 globalExitRoot;
uint64 timestamp;
uint64 minForcedTimestamp;
}
transactions参数:为包含了拼接batch transactions的字节数组。每笔交易按以太坊pre-EIP-155或EIP-155格式采用RLP(Recursive-Length Prefix)标准进行编码之后,再拼接签名的v、r、s值。
globalExitRoot参数:为Bridge合约Global Exit Merkle Tree的root,在batch执行之初将同步到L2 State中,使得bridge claiming交易可在L2执行成功。Bridge合约用于在L1和L2之间转移资产,claiming交易用于解锁目标网络的资产。
timestamp参数:为batch timestamp,存在的限制约束为:
1)必须大于等于上一个已sequenced batch的timestamp;
2)必须小于等于执行sequencing L1交易时的L1区块timestamp(block.timestamp)。
batch的这2个限制约束可确保batches是按时间排序且随L1区块同步的。
minForcedTimestamp参数:若batch为forced batch,则该参数不为0。forced batch用作反审查对策(详细见第5章)。
对batches排序意味着成功将a sequence of batches添加到L1 PolygonZkEVM.sol合约的sequencedBatches map中,该map是维护定义virtual state的sequences队列的存储结构体:
// Queue of batches that defines the virtual state
// SequenceBatchNum --> SequencedBatchData
mapping(uint64 => SequencedBatchData) public sequencedBatches;
/**
* @notice Struct which will be stored for every batch sequence
* @param accInputHash Hash chain that contains all the information to process a batch:
* keccak256(bytes32 oldAccInputHash, keccak256(bytes transactions), bytes32 globalExitRoot, uint64 timestamp, address seqAddress)
* @param sequencedTimestamp Sequenced timestamp
* @param previousLastBatchSequenced Previous last batch sequenced before the current one, this is used to properly calculate the fees
*/
struct SequencedBatchData {
bytes32 accInputHash;
uint64 sequencedTimestamp;
uint64 previousLastBatchSequenced;
}
a sequence of batches的逻辑结构见图3:
一个batch中所有交易的总字节数受限于PolygonZkEVM.sol合约中的_MAX_TRANSACTIONS_BYTE_LENGTH(120000)常量参数,而一个sequence中可包含的batches数量受限于合约的_MAX_VERIFY_BATCHES(1000)常量参数。
// Max transactions bytes that can be added in a single batch
// Max keccaks circuit = (2**23 / 155286) * 44 = 2376
// Bytes per keccak = 136
// Minimum Static keccaks batch = 2
// Max bytes allowed = (2376 - 2) * 136 = 322864 bytes - 1 byte padding
// Rounded to 300000 bytes
// In order to process the transaction, the data is approximately hashed twice for ecrecover:
// 300000 bytes / 2 = 150000 bytes
// Since geth pool currently only accepts at maximum 128kb transactions:
// https://github.com/ethereum/go-ethereum/blob/master/core/txpool/txpool.go#L54
// We will limit this length to be compliant with the geth restrictions since our node will use it
// We let 8kb as a sanity margin
uint256 internal constant _MAX_TRANSACTIONS_BYTE_LENGTH = 120000;
// Maximum batches that can be verified in one call. It depends on our current metrics
// This should be a protection against someone that tries to generate huge chunk of invalid batches, and we can't prove otherwise before the pending timeout expires
uint64 internal constant _MAX_VERIFY_BATCHES = 1000;
为对a sequence of batches进行排序,Trusted Sequencer需调用sequenceBatches合约函数,相应的参数为待排序的一组batches:
/**
* @notice Allows a sequencer to send multiple batches
* @param batches Struct array which holds the necessary data to append new batches to the sequence
* @param l2Coinbase Address that will receive the fees from L2
*/
function sequenceBatches(
BatchData[] calldata batches,
address l2Coinbase
) external ifNotEmergencyState onlyTrustedSequencer
batches参数中需包含至少一个batch,至多_MAX_VERIFY_BATCHES(1000)个batch。sequenceBatches合约函数仅可由Trusted Sequencer的以太坊账户调用。若以上条件不满足,则该函数调用将被revert。
sequenceBatches合约函数将遍历sequence中的每个batch,检查其有效性。一个有效的batch需满足如下条件:其transactions字节数组长度不得超过_MAX_TRANSACTIONS_BYTE_LENGTH(120000)常量值,其timestamp满足前述约束,且若为forced batch,还需与forcedBatches mapping中的记录一致。若某batch无效,交易将被revert且整个sequence将被丢弃;否则,将继续sequencing流程。
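为便于理解,下面给出一个简化的Solidity草图,示意sequenceBatches对每个batch所做的主要有效性检查;函数名 _checkBatchesValidity 与参数 lastTimestamp 为本文为说明而假设的,并非合约原文:
function _checkBatchesValidity(
    BatchData[] calldata batches,
    uint64 lastTimestamp // 上一个已sequenced batch的timestamp
) internal view {
    uint64 currentTimestamp = lastTimestamp;
    for (uint256 i = 0; i < batches.length; i++) {
        BatchData calldata currentBatch = batches[i];
        // 1)transactions字节长度不得超过_MAX_TRANSACTIONS_BYTE_LENGTH
        require(
            currentBatch.transactions.length <= _MAX_TRANSACTIONS_BYTE_LENGTH,
            "transactions too long"
        );
        // 2)batch timestamp必须不早于上一个batch,且不晚于当前L1区块时间
        require(
            currentBatch.timestamp >= currentTimestamp &&
                currentBatch.timestamp <= block.timestamp,
            "invalid batch timestamp"
        );
        currentTimestamp = currentBatch.timestamp;
    }
}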
lastBatchSequenced为storage变量,会随着每个batch的排序而递增,用作batch计数器,为每个batch指定一个索引值,该索引值可用作该batch在batch chain中的位置。
为确保batch chain的密码学完整性,采用一种机制将batches链接到其之前的batches:会为每个sequenced batch计算累计哈希值,称其为"累计",是因为其将当前batch与之前已排序batches的累计哈希值进行了绑定。
某特定batch的累加哈希值计算方式为:
// Calculate next accumulated input hash
currentAccInputHash = keccak256(
abi.encodePacked(
currentAccInputHash,
currentTransactionsHash,
currentBatch.globalExitRoot,
currentBatch.timestamp,
l2Coinbase
)
);
currentAccInputHash (bytes32)参数:为前一已排序batch的累计哈希值。
currentTransactionsHash (bytes32)参数:为当前batch transactions字节数组的哈希摘要值,即keccak256(currentBatch.transactions)。
currentBatch.globalExitRoot (bytes32)参数:为当前batch所携带的Bridge合约Global Exit Merkle Tree root(即BatchData中的globalExitRoot)。
currentBatch.timestamp (uint64)参数:为当前batch timestamp。
l2Coinbase (address)参数:为将接收L2手续费的地址。
如图4所示,每个累计哈希值将确保当前batch data(transactions、timestamp、globalExitRoot)的完整性、之前所有batch数据的完整性以及这些batch的排列顺序。注意,无法对该batch chain做任何改动,因为哪怕只改动一个bit,都将导致后续完全不同的累计哈希值。
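基于上述定义,可用如下Solidity草图示意累计哈希链的计算方式;函数名 computeAccInputHash 为本文假设,仅用于说明链式绑定关系,并非合约原文:
function computeAccInputHash(
    bytes32 initAccInputHash,      // 上一sequence最后一个batch的累计哈希
    BatchData[] memory batches,    // 当前sequence中的batches
    address l2Coinbase
) internal pure returns (bytes32 accInputHash) {
    accInputHash = initAccInputHash;
    for (uint256 i = 0; i < batches.length; i++) {
        accInputHash = keccak256(
            abi.encodePacked(
                accInputHash,                        // 前一累计哈希
                keccak256(batches[i].transactions),  // 当前batch transactions的哈希
                batches[i].globalExitRoot,
                batches[i].timestamp,
                l2Coinbase
            )
        );
    }
}
可以看出,任意一个batch中任意一个bit的改动,都会改变其后所有batch的累计哈希值,因此无法在不被发现的情况下篡改或重排已排序的batches。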
一旦验证完sequence中所有batches的有效性,且计算完每个batch的累计哈希值,会以SequencedBatchData结构向sequencedBatches添加该batch sequence:
/**
* @notice Struct which will be stored for every batch sequence
* @param accInputHash Hash chain that contains all the information to process a batch:
* keccak256(bytes32 oldAccInputHash, keccak256(bytes transactions), bytes32 globalExitRoot, uint64 timestamp, address seqAddress)
* @param sequencedTimestamp Sequenced timestamp
* @param previousLastBatchSequenced Previous last batch sequenced before the current one, this is used to properly calculate the fees
*/
struct SequencedBatchData {
bytes32 accInputHash;
uint64 sequencedTimestamp;
uint64 previousLastBatchSequenced;
}
accInputHash参数:为该sequence中最后一个batch的累计哈希值,是对整个sequence的唯一密码学承诺。
sequencedTimestamp参数:为执行该sequencing L1交易时的L1区块timestamp。
previousLastBatchSequenced参数:为当前sequence中第一个batch之前的上一个已sequenced batch的索引值,即前一sequence的最后一个batch的索引值。
由于gas消耗量高,L1上的storage操作是昂贵的,因此应尽可能少地使用L1 storage操作。为此,每个sequence仅使用少量storage slots(一个mapping entry)来存储其commitment值。
该mapping中的每个entry将对如下元素进行commit:该sequence最后一个batch的索引值(作为key),以及包含accInputHash、sequencedTimestamp与previousLastBatchSequenced的SequencedBatchData结构体(作为value);其中accInputHash对sequence中所有batch的数据(transactions、globalExitRoot、timestamp、l2Coinbase)及其排列顺序进行了承诺。
L2交易的数据可用性是有保障的,因为每个batch的data都可根据sequencing交易calldata恢复,这些数据不在合约storage中,而是L1 State的一部分。
完成sequencing交易执行的最后一个要求为:Trusted Sequencer需向L1 PolygonZkEVM.sol合约支付与该sequence中batches数量成比例的MATIC费用(该费用后续将用于奖励Aggregator)。
最后,将发出(emit)SequenceBatches事件:
/**
* @dev Emitted when the trusted sequencer sends a new batch of transactions
*/
event SequenceBatches(uint64 indexed numBatch);
一旦这些batches成功sequenced到L1,所有的L2 zkEVM节点将无需再信任Trusted Sequencer,可直接从L1 PolygonZkEVM.sol合约中获取sequences of batches来同步其本地L2 State状态,此时,即达成了L2 Virtual State。
为避免误解,有必要区分以下名词:
为使L2 State达到最终阶段(consolidated),Trusted Aggregator需要aggregate之前由Trusted Sequencer提交(commit)的sequences of batches。
aggregate a sequence意味着成功将相应的resulting L2 State root添加到L1 PolygonZkEVM.sol合约的batchNumToStateRoot mapping中。batchNumToStateRoot mapping为storage结构:
// State root mapping
// BatchNum --> state root
mapping(uint64 => bytes32) public batchNumToStateRoot;
a sequence of batches的verification意味着:证明该sequence execution的computational integrity,即证明基于旧的L2 State按顺序执行该sequence中的所有batches,确实会得到所声称的新L2 State root。
底层的Zero-Knowledge verification schema为某Succinct Non-interactive Argument of Knowledge(SNARK),其核心属性为:证明(proof)体积小且验证开销低,验证所需的计算资源远小于直接执行被证明的计算本身。
因此,对于某项繁重的计算,只需其直接执行开销的一小部分计算资源,即可验证其integrity。借助SNARK方案,可以gas高效的方式,为繁重的链下计算提供链上的安全性保障。
如图5所示,链下执行a sequence of batches将引起L2 state transition,从而得到新的L2 state root。Aggregator将为该execution生成computational integrity(CI)proof,该proof在L1上的链上验证将确保resulting L2 state root的有效性。
为aggregate a sequence of batches,Trusted Aggregator必须调用verifyBatchesTrustedAggregator合约函数:
/**
* @notice Allows an aggregator to verify multiple batches
* @param pendingStateNum Init pending state, 0 if consolidated state is used
* @param initNumBatch Batch which the aggregator starts the verification
* @param finalNewBatch Last batch aggregator intends to verify
* @param newLocalExitRoot New local exit root once the batch is processed
* @param newStateRoot New State root once the batch is processed
* @param proof fflonk proof
*/
function verifyBatchesTrustedAggregator(
uint64 pendingStateNum,
uint64 initNumBatch,
uint64 finalNewBatch,
bytes32 newLocalExitRoot,
bytes32 newStateRoot,
bytes calldata proof
) external onlyTrustedAggregator
pendingStateNum参数:为用作初始状态的pending state transition编号,若直接使用consolidated state则为0;只要Trusted Aggregator运行正常,该值即为0。pending state是当L2 State由独立的(非Trusted)Aggregator聚合时的一种安全机制(详情见第7章)。
initNumBatch参数:为上一已aggregated sequence中最后一个batch的索引值。
finalNewBatch参数:为当前正在aggregate的sequence中最后一个batch的索引值。
newLocalExitRoot参数:为当前sequence execution结束时Bridge合约的L2 Exit Merkle Tree root,当该sequence被aggregate时,该值用于计算新的Global Exit Root,从而使bridge claiming交易能在L1执行成功。
newStateRoot参数:为基于旧的L2 State执行完当前sequence of batches之后得到的新L2 State root。
proof参数:为当前sequence of batches execution的Zero-Knowledge CI proof。
verifyBatchesTrustedAggregator合约函数仅可由Trusted Aggregator账号调用。在该函数内:
首先调用 _verifyAndRewardBatches内部函数,该函数参数与verifyBatchesTrustedAggregator函数参数完全相同,其实现的逻辑为验证某指定sequence of batches的Zero-Knowledge CI proof。若验证成功,则按激励机制规定给aggregator支付奖励(详情见第5章)。
某sequence of batches验证成功需满足如下条件:
initNumBatch参数必须为某已aggregated batch的索引值,即该值必须在batchNumToStateRoot mapping中对应有某L2 State root;
initNumBatch参数必须小于等于last verified batch的索引值;
finalNewBatch参数必须大于last verified batch的索引值;
initNumBatch和finalNewBatch所指的batch必须为sequenced batches,即必须存在于sequencedBatches mapping中。
Aggregator节点中负责执行与证明的服务(Executor/Prover):
可将这些服务看成是Ethereum Virtual Machine(EVM)“黑盒”解析器,会在当前L2 state基础之上执行a sequence of transaction batches,计算相应的resulting L2 state root,并为该execution生成Zero-Knowledge CI proof。
以这种方式来实现证明/验证系统:若proof验证成功,则从密码学上证明了,基于某特定L2 State执行某指定sequence of batches,会得到newStateRoot所体现的新L2 State。
以下PolygonZkEVM.sol合约代码为Zero-Knowledge proof验证之处:
// Get snark bytes
bytes memory snarkHashBytes = getInputSnarkBytes(
initNumBatch,
finalNewBatch,
newLocalExitRoot,
oldStateRoot,
newStateRoot
);
// Calulate the snark input
uint256 inputSnark = uint256(sha256(snarkHashBytes)) % _RFIELD;
// Verify proof
if (!rollupVerifier.verifyProof(proof, [inputSnark])) {
revert InvalidProof();
}
rollupVerifier为具有verifyProof函数的外部合约,其输入为proof和inputSnark,输出为布尔值:true表示proof验证通过,false表示验证不通过。
proof的成功验证仅确认了计算的完整性,并不代表该计算基于正确的inputs并产生了正确的outputs。Public参数用于公开所证明计算的关键信息,以证明该计算基于正确的inputs,并公开相应的outputs。
这样,在proof验证过程中,L1合约将设置公开参数,以确保所证明的state transition 对应了 Trusted Sequencer所提交的batches execution。
inputSnark为某特定L2 State transition的唯一256 bit表示,其计算方式为uint256(sha256(snarkHashBytes)) % _RFIELD,其中snarkHashBytes字节数组由合约中名为getInputSnarkBytes的函数计算而来:
return
abi.encodePacked(
msg.sender,
oldStateRoot,
oldAccInputHash,
initNumBatch,
chainID,
forkID,
newStateRoot,
newAccInputHash,
newLocalExitRoot,
finalNewBatch
);
inputSnark将代表某特定L2 State transition中的所有L2交易,在某特定L2链(chainID)上按指定顺序执行,并由某特定Trusted Aggregator(msg.sender)进行证明。
verifyBatchesTrustedAggregator合约函数不仅验证Zero-Knowledge proof的有效性,还会检查inputSnark的值对应于某个pending to be aggregated(待聚合)的L2 State transition。
若内部调用的 _verifyAndRewardBatches函数返回true,则意味着该sequence of batches验证成功,随后会将newStateRoot添加到batchNumToStateRoot mapping中,相应的key为该sequence最后一个batch的索引值。
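该步骤的效果大致等价于如下Solidity语句(仅为示意,非合约原文):
batchNumToStateRoot[finalNewBatch] = newStateRoot;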
最终,将发出(emit)VerifyBatchesTrustedAggregator事件:
emit VerifyBatchesTrustedAggregator(
finalNewBatch,
newStateRoot,
msg.sender
);
一旦batches成功在L1 aggregated,所有的zkEVM节点将直接从L1 PolygonZkEVM.sol合约中获取consolidated roots,以检查其本地L2 State的有效性,从而达成了L2 consolidated State。
为了保持系统的可持续性,必须激励参与者正确履行其职责,并使协议具有最终性。
L2使用源自L1所bridge而来的Ether作为原生货币,用于支付L2交易手续费。L1与L2之间的bridge兑换比例为1:1。
当claiming bridged assets from L1时,L2账号默认是没有ether来支付L2交易手续费的,因此,Polygon zkEVM协议方会资助调用bridge claiming函数的L2 claiming交易,不要求用户支付相应的gas费。
Sequencer赚取用户在L2支付的交易手续费,该手续费直接以bridged Ether支付。手续费多少取决于gas price,即用户为其交易执行所愿意支付的价格。
为激励Aggregator,对于每个sequenced batch,Sequencer必须在L1 PolygonZkEVM.sol合约中锁定与sequence中batches数量成比例的一定数量的MATIC token。batchFee storage变量表示每个sequenced batch所需锁定的MATIC token数量。
图6展示了协议中每个角色的收入支出:
注意,Sequencer可优先打包gas price更高的交易,以增加其收入。此外,Sequencer打包交易存在一个盈利阈值:其从L2用户处赚取的手续费,可能会少于其支付的MATIC sequencing手续费与L1 sequencing交易的Ether gas费之和。
为激励Sequencer,用户应设置合适的交易手续费,以超过该盈利阈值,否则,Sequencer将没有动力处理其交易。Sequencer所赚的Ether净利可按如下公式计算:
Sequencer净利 = 用户在L2支付的交易手续费总额(Ether)− 锁定的MATIC sequencing手续费(按MATIC/ETH价格折算为Ether)− L1 sequencing交易支付的gas费(Ether)
其中,MATIC sequencing手续费为batchFee与该sequence中batches数量的乘积。
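举一个纯属假设数字的例子:若某sequence仅含1个batch,其中用户支付的L2手续费合计为0.04 ETH,该batch需锁定的MATIC手续费折算约为0.01 ETH,L1 sequencing交易的gas费为0.02 ETH,则Sequencer净利约为0.04 − 0.01 − 0.02 = 0.01 ETH;若用户支付的手续费总额低于0.03 ETH,则打包该sequence对Sequencer而言无利可图。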
每次Aggregator aggregate某sequence时,根据其所聚合的batch数以及合约的MATIC balance,aggregator将赚得一定数量的MATIC token。在aggregation of a sequence之前,每个batch aggregated所赚MATIC 数量由L1 PolygonZkEVM.sol合约计算:
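其大致逻辑可用如下Solidity草图示意,即每个batch的MATIC奖励约等于合约当前MATIC余额除以尚未被verify的batches总数;函数名为本文假设,matic、lastBatchSequenced等均假设为合约中已有的代币引用与状态变量,并非合约原文:
function rewardPerBatchSketch() internal view returns (uint256) {
    // 合约当前持有的MATIC余额
    uint256 currentBalance = matic.balanceOf(address(this));
    // 已sequenced但尚未被verify(aggregate)的batches总数
    uint256 totalBatchesToVerify = lastBatchSequenced - getLastVerifiedBatch();
    if (totalBatchesToVerify == 0) return 0;
    return currentBalance / totalBatchesToVerify;
}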
因此,对于aggregation of a sequence of batches,Aggregator的Ether净利计算公式为:
Aggregator净利 = 每batch的MATIC奖励 × 所aggregate的batches数量(按MATIC/ETH价格折算为Ether)− L1 aggregation交易支付的gas费(Ether)
每次某独立(非Trusted)Aggregator aggregate某sequence时,batchFee都将自动调整。当Trusted Aggregator工作不正常时(详情见第7章),将进入该模式,并通过修改batchFee的值来激励aggregation。
_updateBatchFee内部函数用于修改batchFee storage变量:
function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
Admin设置了2个storage变量(详情见第9章),以对batchFee 进行调整。
// Time target of the verification of a batch
// Adaptatly the batchFee will be updated to achieve this target
uint64 public verifyBatchTimeTarget;
// Batch fee multiplier with 3 decimals that goes from 1000 - 1023
uint16 public multiplierBatchFee;
_updateBatchFee 首先将计算当前being aggregated的batches中有多少batch是已迟到的,所谓迟到,是指这些batch暂未被aggregated,而已超过了verifyBatchTimeTarget时间。
diffBatches为迟到和未迟到batch数量的差值,限制该差值最大为 _MAX_BATCH_MULTIPLIER。
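费用调整的大致方向可用如下简化的Solidity草图示意:迟到的batches越多,费用按倍数上调以激励aggregation,反之则按相同倍数下调;函数名与参数为本文假设的简化示意,并非合约原文,multiplierBatchFee带3位小数(1000表示1.000):
function adjustBatchFeeSketch(
    uint256 currentBatchFee,
    uint16 multiplierBatchFee,
    uint256 diffBatches,      // 迟到与未迟到batch数量之差,已被限制不超过_MAX_BATCH_MULTIPLIER
    bool moreBatchesLate      // 迟到的batches是否多于未迟到的
) internal pure returns (uint256) {
    uint256 accDivisor = 10 ** (diffBatches * 3);                      // 补偿3位小数
    uint256 multiplyFactor = uint256(multiplierBatchFee) ** diffBatches;
    if (moreBatchesLate) {
        // 聚合速度落后于verifyBatchTimeTarget:提高费用
        return (currentBatchFee * multiplyFactor) / accDivisor;
    } else {
        // 聚合速度快于目标:降低费用
        return (currentBatchFee * accDivisor) / multiplyFactor;
    }
}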
合约中的相关初始化值为:
之前章节所描述的方案中,用户需依赖Trusted Sequencer来打包执行其L2交易。若用户无法通过Trusted Sequencer执行其交易,此时可发起forced batch。所谓forced batch,是指用户直接向L1公开提交a batch of L2 transactions,以表明执行这些交易的意图。
如图9所示,PolygonZkEVM.sol合约中有forcedBatches mapping,供用户提交待force的transaction batches。forcedBatches mapping用作不可篡改的公告板,其中的forced batches会被打上时间戳,等待被包含在某个sequence中。为维持其可信实体的地位,Trusted Sequencer必须将这些forced batches包含在未来的sequence中;否则,用户将可以证明自己被审查(censored),Trusted Sequencer也将失去其trusted地位。
// Queue of forced batches with their associated data
// ForceBatchNum --> hashedForcedBatchData
// hashedForcedBatchData: hash containing the necessary information to force a batch:
// keccak256(keccak256(bytes transactions), bytes32 globalExitRoot, unint64 minForcedTimestamp)
mapping(uint64 => bytes32) public forcedBatches;
尽管协议会激励Trusted Sequencer打包公开提交到forcedBatches mapping中的forced batches,但这并不能保证forced batches内交易执行的最终性。为确保在Trusted Sequencer故障时forced batches内交易仍具有最终性,L1 PolygonZkEVM.sol合约中还有另一个batch sequencing函数,名为sequenceForceBatches:任何人都可调用该函数,打包那些已公开提交到forcedBatches mapping中、尚未被打包且已超过FORCE_BATCH_TIMEOUT(5天)期限的forced batches。
任何用户都可直接调用forceBatch函数来publish a batch to be forced:
/**
* @notice Allows a sequencer/user to force a batch of L2 transactions.
* This should be used only in extreme cases where the trusted sequencer does not work as expected
* Note The sequencer has certain degree of control on how non-forced and forced batches are ordered
* In order to assure that users force transactions will be processed properly, user must not sign any other transaction
* with the same nonce
* @param transactions L2 ethereum transactions EIP-155 or pre-EIP-155 with signature:
* @param maticAmount Max amount of MATIC tokens that the sender is willing to pay
*/
function forceBatch(
bytes calldata transactions,
uint256 maticAmount
) public isForceBatchAllowed ifNotEmergencyState
其中:
transactions参数:为拼接后的batch transactions字节数组,格式与BatchData中的transactions相同;
maticAmount参数:为调用者愿意为force该batch支付的MATIC token数量上限。
为成功将forced batch发布到forcedBatches mapping中,需满足以下条件,否则交易将被revert:
maticAmount参数值必须不低于当前每batch的MATIC手续费;
transactions字节数组长度不得超过_MAX_TRANSACTIONS_BYTE_LENGTH(120000)。
forced batch将存入forcedBatches mapping中,以forced batch索引值为key。lastForceBatch为forced batch计数器,会随着每个forced batch的发布而递增,并提供对应的索引值。forcedBatches mapping中的value为ForcedBatchData结构体ABI编码之后的哈希值。
/**
* @notice Struct which will be used to call sequenceForceBatches
* @param transactions L2 ethereum transactions EIP-155 or pre-EIP-155 with signature:
* EIP-155: rlp(nonce, gasprice, gasLimit, to, value, data, chainid, 0, 0,) || v || r || s
* pre-EIP-155: rlp(nonce, gasprice, gasLimit, to, value, data) || v || r || s
* @param globalExitRoot Global exit root of the batch
* @param minForcedTimestamp Indicates the minimum sequenced timestamp of the batch
*/
struct ForcedBatchData {
bytes transactions;
bytes32 globalExitRoot;
uint64 minForcedTimestamp;
}
forcedBatches[lastForceBatch] = keccak256(
abi.encodePacked(
keccak256(transactions),
lastGlobalExitRoot,
uint64(block.timestamp) //为L1 block timestamp,即为forced batch发布时间
)
);
为优化storage usage,mapping条目的storage slot中仅存储forced batch的承诺值。由于可根据交易calldata来恢复forced batch,因此可保证数据可用性。
极端情况下,Trusted Sequencer出现了故障,任何用户都可调用sequenceForceBatches函数来提交a sequence of forced batches:
/**
* @notice Allows anyone to sequence forced Batches if the trusted sequencer has not done so in the timeout period
* @param batches Struct array which holds the necessary data to append force batches
*/
function sequenceForceBatches(
ForcedBatchData[] calldata batches
) external isForceBatchAllowed ifNotEmergencyState
sequenceForceBatches与sequenceBatches类似,不同之处在于:只要启用了batch forcing,任何人都可调用sequenceForceBatches。对于待打包的每个forced batch,sequenceForceBatches还将检查其已公开发布到forcedBatches mapping中,且自发布起已超过forceBatchTimeout(5天)。此外,由于在发布时已支付过MATIC batch fee,此时无需再次支付。
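对单个forced batch的检查逻辑可用如下Solidity草图示意;函数名与参数为本文假设,并非合约原文,承诺值格式与上文forcedBatches mapping中的存储方式一致:
function checkForcedBatchSketch(
    ForcedBatchData calldata forcedBatch,
    bytes32 storedCommitment,   // 即forcedBatches mapping中存储的哈希
    uint64 forceBatchTimeout    // 5天
) internal view {
    // 1)提交的数据必须与发布时存入forcedBatches mapping的承诺值一致
    bytes32 hashedForcedBatchData = keccak256(
        abi.encodePacked(
            keccak256(forcedBatch.transactions),
            forcedBatch.globalExitRoot,
            forcedBatch.minForcedTimestamp
        )
    );
    require(hashedForcedBatchData == storedCommitment, "forced batch data mismatch");
    // 2)自该forced batch发布起必须已超过forceBatchTimeout
    require(
        forcedBatch.minForcedTimestamp + forceBatchTimeout <= block.timestamp,
        "force batch timeout not expired"
    );
}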
若forced batches sequence满足被sequenced的所有条件,则会与正常的batch sequence一样,被添加到sequencedBatches mapping中。最终将发出SequenceForceBatches事件:
emit SequenceForceBatches(currentBatchSequenced);
注意,使用sequenceForceBatches提交的forced batches sequences,将永远不会进入trusted state,即意味着节点本地的trusted L2 State 与 commit在L1 PolygonZkEVM.sol合约中的virtual L2 State存在差异。节点软件需能探测并处理该情况,并将从L1获取的L2 State作为有效状态,用于对其本地L2 State进行reorg。
当有forced batch sequence提交之后,图10展示了trusted L2 State与virtual L2 State之间的差异:
batches提交到L1之后,若Trusted Aggregator缺席或不作为,则L2 State transitions将永远无法在L1上被consolidated,系统也就无法具备最终性。因此,L1 PolygonZkEVM.sol合约中提供了名为verifyBatches的函数,允许任何人aggregate sequences of batches。
/**
* @notice Allows an aggregator to verify multiple batches
* @param pendingStateNum Init pending state, 0 if consolidated state is used
* @param initNumBatch Batch which the aggregator starts the verification
* @param finalNewBatch Last batch aggregator intends to verify
* @param newLocalExitRoot New local exit root once the batch is processed
* @param newStateRoot New State root once the batch is processed
* @param proof fflonk proof
*/
function verifyBatches(
uint64 pendingStateNum,
uint64 initNumBatch,
uint64 finalNewBatch,
bytes32 newLocalExitRoot,
bytes32 newStateRoot,
bytes calldata proof
) external ifNotEmergencyState {
verifyBatches函数的参数与verifyBatchesTrustedAggregator的相同,但对于待聚合的sequence,verifyBatches还有2个额外限制,由此引入了名为pending state的新L2 State阶段。除需满足verifyBatchesTrustedAggregator中的条件之外,verifyBatches还需满足:1)自该sequence中最后一个batch被sequenced起,必须已超过trustedAggregatorTimeout时间;2)单次调用所verify的batches数量不得超过_MAX_VERIFY_BATCHES(1000)。
若以上条件都满足,该函数将调用 _verifyAndRewardBatches内部函数来验证Zero-Knowledge CI proof。若验证通过,与verifyBatchesTrustedAggregator不同,该sequence并不会被立即consolidated,而是会被添加到pendingStateTransitions mapping中,需等待超过pendingStateTimeout之后才会被固化(consolidated)。
// Pending state mapping
// pendingStateNumber --> PendingState
mapping(uint256 => PendingState) public pendingStateTransitions;
/**
* @notice Struct to store the pending states
* Pending state will be an intermediary state, that after a timeout can be consolidated, which means that will be added
* to the state root mapping, and the global exit root will be updated
* This is a protection mechanism against soundness attacks, that will be turned off in the future
* @param timestamp Timestamp where the pending state is added to the queue
* @param lastVerifiedBatch Last batch verified batch of this pending state
* @param exitRoot Pending exit root
* @param stateRoot Pending state root
*/
struct PendingState {
uint64 timestamp;
uint64 lastVerifiedBatch;
bytes32 exitRoot;
bytes32 stateRoot;
}
已验证通过的sequences of batches将进入名为pending state的中间状态,其state transition暂未被consolidated:既不会向batchNumToStateRoot mapping中添加新的L2 State root,也不会更新bridge的Global Exit Root。lastPendingState storage变量将跟踪pending state transitions的数量,并用作pendingStateTransitions mapping的key。由于Zero-Knowledge proof已验证通过,独立的Aggregator仍将获得aggregation奖励。
图11从batch的角度,展示了当某batch sequence经verifyBatches函数aggregate时,L2 State各阶段的时间轴以及触发进入下一阶段的操作:
pending state中的sequences of batches并不会影响协议的正常运行,因为后续的sequences可以在pending state之上继续验证。lastVerifiedBatch storage变量将跟踪上一个已verified且aggregated的batch的索引值。因此,当验证某sequence of batches时,将调用getLastVerifiedBatch函数来获取上一verified batch的索引值:若存在pending state transitions,则该函数返回pending state中最后一个verified batch的索引值,否则返回lastVerifiedBatch值。
/**
* @notice Get the last verified batch
*/
function getLastVerifiedBatch() public view returns (uint64) {
if (lastPendingState > 0) {
return pendingStateTransitions[lastPendingState].lastVerifiedBatch;
} else {
return lastVerifiedBatch;
}
}
每次调用sequenceBatches函数,都会调用 _tryConsolidatePendingState内部函数来试图固化pending state。 _tryConsolidatePendingState将检查 自pending state sequence of batches验证之后,是否已过期pendingStateTimeout,若已过期,则将固化相应的pending state transitions。由于Zero-Knowledge CI proof已验证通过,此时无需再验证proof的有效性。
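判断某pending state是否已可被固化的逻辑,可用如下Solidity草图示意;函数名为本文假设,并非合约原文,pendingStateTransitions与pendingStateTimeout为合约中的storage变量:
function isPendingStateConsolidableSketch(
    uint64 pendingStateNum
) internal view returns (bool) {
    // 自该pending state加入队列(timestamp)起,是否已超过pendingStateTimeout
    return
        pendingStateTransitions[pendingStateNum].timestamp + pendingStateTimeout <=
        block.timestamp;
}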
此外,任何人都可调用consolidatePendingState外部函数来触发固化某pending state。若调用者为Trusted Aggregator,则即使自pending sequences of batches被验证以来尚未超过pendingStateTimeout,这些pending sequences也会被直接固化;否则,consolidatePendingState函数将检查自pending sequences of batches被验证以来是否已超过pendingStateTimeout,只有超过了,才会固化这些pending state transitions。
/**
* @notice Allows to consolidate any pending state that has already exceed the pendingStateTimeout
* Can be called by the trusted aggregator, which can consolidate any state without the timeout restrictions
* @param pendingStateNum Pending state to consolidate
*/
function consolidatePendingState(uint64 pendingStateNum) external
该机制旨在为Polygon团队提供回旋余地,以防零知识验证系统中的可靠性漏洞被利用,并保护资产不被恶意用户从L2中转走。
为支持未来协议实现升级(包括但不限于增加新特性、解决bug或优化升级),PolygonZkEVM.sol、PolygonZkEVMBridge.sol与PolygonZkEVMGlobalExitRoot.sol等合约均采用Transparent Upgradeable Proxy(TUP)模式部署。
为了继承安全性,同时避免拉长和复杂化审计过程,Polygon团队选择使用OpenZeppelin的openzeppelin-upgrades库来实现这一功能。OpenZeppelin因其对以太坊标准的实现、审计服务和开源库而在业内享有良好声誉,其openzeppelin-upgrades库已经过审计和实战检验。此外,openzeppelin-upgrades不仅包含一组合约,还提供Hardhat和Truffle插件,以支持代理的部署、升级和管理员权限管理。
如图12所示,OpenZeppelin的TUP模式通过delegate call和fallback函数,将合约的storage与协议实现逻辑分离,从而可以在不改变storage state、也不改变合约公开地址的情况下更新代码实现。
遵循OpenZeppelin的建议,部署了openzeppelin-upgrades库中ProxyAdmin.sol合约的实例,并将其地址设置为proxy合约的admin。借助Hardhat和Truffle插件,这些操作既安全又便捷。每个ProxyAdmin.sol实例作为对应proxy的实际管理接口,每个ProxyAdmin.sol实例的owner即为相应的管理员账号。ProxyAdmin.sol的ownership在部署时将转移给协议的Admin角色(详细见第9章)。
Admin为治理整个协议的某以太坊账号,是PolygonZkEVM.sol合约中唯一可调用一系列仅限Admin的配置函数(如修改multiplierBatchFee、verifyBatchTimeTarget等storage变量的函数)的账号。
同时,Admin账号为所有ProxyAdmin.sol实例(即所有proxy)的owner,是唯一可执行协议合约升级操作的账号。
为增强安全性以及用户对本协议的信心,协议实现了timelock controller。timelock controller为可设置延迟(delay)的合约,在执行具有潜在危险的维护操作之前,为用户提供退出的回旋余地。timelock controller允许admin对L1上的maintenance operations交易进行schedule和commit,且只有在超过指定的minDelay时间之后,才能通过timelock执行这些maintenance operations交易。
为了继承安全性,同时避免拉长和复杂化审计过程,Polygon团队选择使用OpenZeppelin久经考验的TimelockController.sol合约,但重载了getMinDelay函数;在OpenZeppelin基础之上定制化的实现见PolygonZkEVMTimelock.sol合约。当zkEVM合约系统的紧急模式被激活时,新的getMinDelay函数会将minDelay时间设置为0(详细见第10章)。在部署阶段,协议的Admin角色被设置为PolygonZkEVMTimelock.sol合约实例的地址。
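按上述描述,getMinDelay的重载逻辑大致如下;此为示意性草图而非合约原文,其中假设polygonZkEVM为指向PolygonZkEVM合约的引用,isEmergencyState为其紧急状态标志:
function getMinDelay() public view override returns (uint256) {
    // 紧急模式下将最小延迟置为0,以便立即执行维护操作
    if (address(polygonZkEVM) != address(0) && polygonZkEVM.isEmergencyState()) {
        return 0;
    }
    return super.getMinDelay();
}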
Admin角色责任重大,不应仅分配给某个单一账号。为此,PolygonZkEVMTimelock.sol合约实例的Admin权限被分配给某个multisig合约,该multisig合约用作协议的治理工具,将管理权限去中心化地分布给多个可信实体。
图13展示了Polygon zkEVM L1合约的治理树形状:
总之,协议维护操作仅能按如下步骤进行:1)由治理multisig合约发起maintenance operation交易,并通过PolygonZkEVMTimelock.sol合约进行schedule;2)等待timelock的minDelay时间过期;3)minDelay过期后,通过timelock执行该交易,该交易经由ProxyAdmin.sol作用于相应的proxy合约,完成维护操作。
注意,受限于合约间的治理流程,任何代表Admin角色的交易,都必须按以上步骤执行。
紧急状态是PolygonZkEVM.sol和PolygonZkEVMBridge.sol这两个L1合约可被激活的一种状态,激活后将停止batches sequencing和bridge operations。设置紧急状态的目的,是为Polygon团队在出现可靠性(soundness)漏洞或合约bug等情况时提供回旋余地,以保障L2用户的资产安全。
当合约处于紧急状态时,以下函数将被锁定:sequenceBatches、verifyBatches、forceBatch、sequenceForceBatches、proveNonDeterministicPendingState等带有ifNotEmergencyState modifier的函数,以及PolygonZkEVMBridge.sol合约中的bridge相关函数。
注意,当合约处于紧急状态时,Sequencer将无法sequence batches,但是,Trusted Aggregator仍然可固化further state transitions或override某经证明是non-deterministic的pending state transition。
当具有2个不同的resulting L2 State root值的同一sequence of batches均验证成功时,则会引起non-deterministic state transition。该情况可源于Zero-Knowledge CI proof验证系统的可靠性漏洞。
仅有2个合约函数可触发紧急状态:
/**
* @notice Function to activate emergency state, which also enables the emergency mode on both PolygonZkEVM and PolygonZkEVMBridge contracts
* If not called by the owner must be provided a batcnNum that does not have been aggregated in a _HALT_AGGREGATION_TIMEOUT period
* @param sequencedBatchNum Sequenced batch number that has not been aggreagated in _HALT_AGGREGATION_TIMEOUT
*/
function activateEmergencyState(uint64 sequencedBatchNum) external {
/**
* @notice Allows to halt the PolygonZkEVM if its possible to prove a different state root given the same batches
* @param initPendingStateNum Init pending state, 0 if consolidated state is used
* @param finalPendingStateNum Final pending state, that will be used to compare with the newStateRoot
* @param initNumBatch Batch which the aggregator starts the verification
* @param finalNewBatch Last batch aggregator intends to verify
* @param newLocalExitRoot New local exit root once the batch is processed
* @param newStateRoot New State root once the batch is processed
* @param proof fflonk proof
*/
function proveNonDeterministicPendingState(
uint64 initPendingStateNum,
uint64 finalPendingStateNum,
uint64 initNumBatch,
uint64 finalNewBatch,
bytes32 newLocalExitRoot,
bytes32 newStateRoot,
bytes calldata proof
) external ifNotEmergencyState {
当发现某可靠性漏洞被利用时,Trusted Aggregator将使用overridePendingState函数来override某个non-deterministic pending state。由于Trusted Aggregator为系统的可信实体,当存在non-deterministic state transition时,仅Trusted Aggregator提供的L2 state root才会被认为有效并可供固化。
/**
* @notice Allows the trusted aggregator to override the pending state
* if it's possible to prove a different state root given the same batches
* @param initPendingStateNum Init pending state, 0 if consolidated state is used
* @param finalPendingStateNum Final pending state, that will be used to compare with the newStateRoot
* @param initNumBatch Batch which the aggregator starts the verification
* @param finalNewBatch Last batch aggregator intends to verify
* @param newLocalExitRoot New local exit root once the batch is processed
* @param newStateRoot New State root once the batch is processed
* @param proof fflonk proof
*/
function overridePendingState(
uint64 initPendingStateNum,
uint64 finalPendingStateNum,
uint64 initNumBatch,
uint64 finalNewBatch,
bytes32 newLocalExitRoot,
bytes32 newStateRoot,
bytes calldata proof
) external onlyTrustedAggregator {
为成功override pending state,Trusted Aggregator必须提交一个proof,该proof需通过与proveNonDeterministicPendingState函数相同的验证逻辑;若验证成功,该pending state transition将被擦除,并直接固化新的state。总之,可触发紧急状态的条件有:1)合约owner直接调用activateEmergencyState函数;2)某sequenced batch在_HALT_AGGREGATION_TIMEOUT期限内一直未被aggregated,此时任何人都可提供该batch编号调用activateEmergencyState;3)成功调用proveNonDeterministicPendingState,证明对于同一sequence of batches存在不同的resulting state root。