1 Introduction
A blockchain's main job is producing blocks, and the rules and mechanism for producing them are called consensus. The content of a block is a tamper-proof record of information, and blocks linked together form the blockchain.
Producing a block is also called mining, and there are several mining schemes, such as POW and DPOS. This article analyzes the DPOS consensus source code.
Ethereum has several consensus engines:
- PoW (ethash), used on the main network
- PoA (clique), used on test networks
- FakePow, used in unit tests
- DPOS, a newly added engine that replaces POW
Since this is a source-code analysis, the intended readers are people reading the code, and they should read this article alongside it. The purpose of this kind of article is to provide an entry point for analysis, string the scattered code together along some internal logic, narrate the gist of the code with text and diagrams, give the reader a systematic overview, and serve as a reference for the parts of the code that are hard to understand on a first read.
2 The DPOS consensus logic
The basic DPOS flow can be summarized as: become a candidate, receive votes from others, be elected as a validator, and take turns producing blocks within an epoch.
As this flow shows, becoming a candidate and voting are actions initiated by users, while receiving votes and being elected are system behaviors. The core functions of DPOS are therefore becoming a candidate, voting (the counterpart receives the vote), and the election the system runs periodically.
2.1 The initial validators
A validator is a block producer. At genesis the system is not running yet, so users cannot vote. The approach taken here is to define the initial batch of validators (Validator) in the genesis configuration file; these validators take turns producing blocks during the first epoch. The default validator count is 21.
{
  "config": {
    "chainId": 8888,
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "dpos": {
      "validators": [
        "0x8807fa0db2c60675a8f833dd010469e408428b83",
        "0xdf5f5a7abc5d0821c50deb4368528d8691f18737",
        "0xe0d64bfb1a30d66ae0f06ce36d5f4edf6835cd7c"
        ……
      ]
    }
  },
  "nonce": "0x0000000000000042",
  "difficulty": "0x020000",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "timestamp": "0x00",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "extraData": "0x11bbe8db4e347b4e8c937c1c8370e4b5ed33adb3db69cbdb7a38e1e50b1b82fa",
  "gasLimit": "0x500000",
  "alloc": {}
}
2.2 Becoming a candidate
Once the system is running, anyone can vote at any time and can also receive votes from others. Because only candidates may receive votes, an account must first become a candidate before it can be voted for.

From an external user's point of view, becoming a candidate only takes sending a transaction to oneself:
eth.sendTransaction({
  from: '0x646ba1fa42eb940aac67103a71e9a908ef484ec3',
  to: '0x646ba1fa42eb940aac67103a71e9a908ef484ec3',
  value: 0,
  type: 1
})
Internally, both becoming a candidate and voting are defined as transactions. In fact DPOS defines four transaction types in total, the forward and reverse operations of these two actions:
type TxType uint8

const (
	Binary TxType = iota
	LoginCandidate  // become a candidate
	LogoutCandidate // cancel candidacy
	Delegate        // vote
	UnDelegate      // cancel a vote
)

type txdata struct {
	Type TxType `json:"type"`
	……
}
The code for becoming a candidate is very simple: it just updates (inserts into) candidateTrie. Both the key and the value of this trie are the candidate's address, and it holds all current candidates.
func (d *DposContext) BecomeCandidate(candidateAddr common.Address) error {
	candidate := candidateAddr.Bytes()
	return d.candidateTrie.TryUpdate(candidate, candidate)
}
When the transaction is actually executed, the address used is from, which means you can only register yourself as a candidate.
case types.LoginCandidate:
	dposContext.BecomeCandidate(msg.From())
Besides the candidateTrie mentioned here, DPOS maintains five tries in total:
type DposContext struct {
	epochTrie     *trie.Trie // the validator list for the current epoch ("validator", []validator)
	delegateTrie  *trie.Trie // (append(candidate, delegator...), delegator)
	voteTrie      *trie.Trie // (delegator, candidate)
	candidateTrie *trie.Trie // (candidate, candidate)
	mintCntTrie   *trie.Trie // blocks minted per validator per epoch (append(epoch, validator.Bytes()...), count), where epoch = header.Time/86400
	db            ethdb.Database
}
Here delegator means voter.
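For reference, the validator list itself lives in epochTrie under the literal key "validator". A minimal sketch of the accessors used later (SetValidators/GetValidators), assuming the list is stored as an RLP-encoded address slice:

// Sketch only: store/read the epoch's validator list under the key "validator".
// The RLP encoding of the address slice is an assumption.
func (d *DposContext) SetValidators(validators []common.Address) error {
	key := []byte("validator")
	validatorsRLP, err := rlp.EncodeToBytes(validators)
	if err != nil {
		return fmt.Errorf("failed to encode validators to rlp bytes: %s", err)
	}
	return d.epochTrie.TryUpdate(key, validatorsRLP)
}

func (d *DposContext) GetValidators() ([]common.Address, error) {
	var validators []common.Address
	key := []byte("validator")
	validatorsRLP := d.epochTrie.Get(key)
	if err := rlp.DecodeBytes(validatorsRLP, &validators); err != nil {
		return nil, fmt.Errorf("failed to decode validators: %s", err)
	}
	return validators, nil
}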
2.3 Voting
From an external user's point of view, voting is also just a transaction:
eth.sendTransaction({
  from: '0x646ba1fa42eb940aac67103a71e9a908ef484ec3',
  to: '0x5b76fff970bf8a351c1c9ebfb5e5a9493e956ddd',
  value: 0,
  type: 3
})
Internally, the voting code mainly updates delegateTrie and voteTrie:
func (d *DposContext) Delegate(delegatorAddr, candidateAddr common.Address) error {
	delegator, candidate := delegatorAddr.Bytes(), candidateAddr.Bytes()

	// the candidate receiving the vote must already be in candidateTrie
	candidateInTrie, err := d.candidateTrie.TryGet(candidate)
	if err != nil {
		return err
	}
	if candidateInTrie == nil {
		return errors.New("invalid candidate to delegate")
	}

	// delete old candidate if exists
	oldCandidate, err := d.voteTrie.TryGet(delegator)
	if err != nil {
		if _, ok := err.(*trie.MissingNodeError); !ok {
			return err
		}
	}
	if oldCandidate != nil {
		d.delegateTrie.Delete(append(oldCandidate, delegator...))
	}

	if err = d.delegateTrie.TryUpdate(append(candidate, delegator...), delegator); err != nil {
		return err
	}
	return d.voteTrie.TryUpdate(delegator, candidate)
}
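For completeness, a sketch of the reverse operation UnDelegate, mirroring Delegate above (the exact error strings are assumptions):

// Sketch: remove both directions of the delegator -> candidate relation.
func (d *DposContext) UnDelegate(delegatorAddr, candidateAddr common.Address) error {
	delegator, candidate := delegatorAddr.Bytes(), candidateAddr.Bytes()

	// the candidate must still exist in candidateTrie
	candidateInTrie, err := d.candidateTrie.TryGet(candidate)
	if err != nil {
		return err
	}
	if candidateInTrie == nil {
		return errors.New("invalid candidate to undelegate")
	}

	// the delegator must currently be voting for exactly this candidate
	oldCandidate, err := d.voteTrie.TryGet(delegator)
	if err != nil {
		return err
	}
	if !bytes.Equal(candidate, oldCandidate) {
		return errors.New("mismatch candidate to undelegate")
	}

	if err = d.delegateTrie.TryDelete(append(candidate, delegator...)); err != nil {
		return err
	}
	return d.voteTrie.TryDelete(delegator)
}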
2.4 The election
Although voting can happen at any time, validators are elected periodically. The election interval defaults to 24 hours: every 24 hours the validators are re-elected.
The election function is called every time a block is finalized (Finalize). It decides whether it is time for a new election by computing, from the timestamps of the current block and its parent, whether the two blocks fall into the same epoch. If they do, no election is triggered; if they do not, the current block is the first block of a new epoch and a re-election is triggered.

The election function:
func (ec *EpochContext) tryElect(genesis, parent *types.Header) error {
	genesisEpoch := genesis.Time.Int64() / epochInterval // 0
	prevEpoch := parent.Time.Int64() / epochInterval
	// ec.TimeStamp is the current block's header.Time, passed in from Finalize
	currentEpoch := ec.TimeStamp / epochInterval

	prevEpochIsGenesis := prevEpoch == genesisEpoch
	if prevEpochIsGenesis && prevEpoch < currentEpoch {
		prevEpoch = currentEpoch - 1
	}

	prevEpochBytes := make([]byte, 8)
	binary.BigEndian.PutUint64(prevEpochBytes, uint64(prevEpoch))
	iter := trie.NewIterator(ec.DposContext.MintCntTrie().PrefixIterator(prevEpochBytes))

	// the loop body only runs when currentEpoch exceeds prevEpoch by at least 1:
	// +1 means the current block's time is more than epochInterval (24 hours) past the start
	// of the previous block's epoch, +2 means more than 48 hours, and so on.
	for i := prevEpoch; i < currentEpoch; i++ {
		// if the previous epoch is not the genesis epoch, apply the kick-out rule to validators
		if !prevEpochIsGenesis && iter.Next() {
			if err := ec.kickoutValidator(prevEpoch); err != nil {
				return err
			}
		}
		// count the votes and rank the candidates from high to low (at least safeSize of them);
		// a candidate's vote count cnt is the sum of the balances of all delegators voting for it
		votes, err := ec.countVotes()
		if err != nil {
			return err
		}
		candidates := sortableAddresses{}
		for candidate, cnt := range votes {
			candidates = append(candidates, &sortableAddress{candidate, cnt})
		}
		if len(candidates) < safeSize {
			return errors.New("too few candidates")
		}
		sort.Sort(candidates)
		if len(candidates) > maxValidatorSize {
			candidates = candidates[:maxValidatorSize]
		}

		// shuffle candidates
		// the seed is derived from the parent block hash plus the epoch number,
		// so every node shuffles the validator list into the same order.
		seed := int64(binary.LittleEndian.Uint32(crypto.Keccak512(parent.Hash().Bytes()))) + i
		r := rand.New(rand.NewSource(seed))
		for i := len(candidates) - 1; i > 0; i-- {
			j := int(r.Int31n(int32(i + 1)))
			candidates[i], candidates[j] = candidates[j], candidates[i]
		}
		sortedValidators := make([]common.Address, 0)
		for _, candidate := range candidates {
			sortedValidators = append(sortedValidators, candidate.address)
		}

		epochTrie, _ := types.NewEpochTrie(common.Hash{}, ec.DposContext.DB())
		ec.DposContext.SetEpoch(epochTrie)
		ec.DposContext.SetValidators(sortedValidators)
		log.Info("Come to new epoch", "prevEpoch", i, "nextEpoch", i+1)
	}
	return nil
}
Once epochContext finally calls dposContext.SetValidators(), the new batch of validators is in place and will start producing blocks in turn.
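The vote counting called inside tryElect is only summarized above. A sketch of what countVotes might look like, assuming EpochContext also holds a StateDB reference so it can read delegator balances: iterate candidateTrie, and for each candidate sum the balances of every delegator found under its prefix in delegateTrie.

// Sketch only; the statedb field on EpochContext is an assumption.
func (ec *EpochContext) countVotes() (votes map[common.Address]*big.Int, err error) {
	votes = map[common.Address]*big.Int{}
	delegateTrie := ec.DposContext.DelegateTrie()
	candidateTrie := ec.DposContext.CandidateTrie()
	statedb := ec.statedb

	iterCandidate := trie.NewIterator(candidateTrie.NodeIterator(nil))
	existCandidate := iterCandidate.Next()
	if !existCandidate {
		return votes, errors.New("no candidates")
	}
	for existCandidate {
		candidate := iterCandidate.Value
		candidateAddr := common.BytesToAddress(candidate)
		// walk every delegator stored under this candidate's prefix
		delegateIterator := trie.NewIterator(delegateTrie.PrefixIterator(candidate))
		existDelegator := delegateIterator.Next()
		if !existDelegator {
			votes[candidateAddr] = new(big.Int)
			existCandidate = iterCandidate.Next()
			continue
		}
		for existDelegator {
			delegator := delegateIterator.Value
			score, ok := votes[candidateAddr]
			if !ok {
				score = new(big.Int)
			}
			// the weight of a vote is the delegator's account balance
			weight := statedb.GetBalance(common.BytesToAddress(delegator))
			votes[candidateAddr] = score.Add(score, weight)
			existDelegator = delegateIterator.Next()
		}
		existCandidate = iterCandidate.Next()
	}
	return votes, nil
}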
2.5 DPOS class diagram
EpochContext is the entity tied to the election epoch (24 hours by default), so its responsibilities are the things that happen only at epoch boundaries: the election, vote counting, and kicking out validators. It operates at a higher level and does not touch the five DPOS tries directly; instead it works on them through the DposContext it aggregates.
DposContext and Trie form a strong composition. The DPOS transaction behaviors (become a candidate, cancel candidacy, vote, cancel a vote, set the validators) are its main functions.
Dpos is an engine: it implements the Engine interface.
func (self *worker) mintBlock(now int64) {
	engine, ok := self.engine.(*dpos.Dpos)
	……
}
3 The DPOS engine implementation
Dpos is a concrete implementation of the consensus engine; the Engine interface defines nine methods.
3.1 Author
func (d *Dpos) Author(header *types.Header) (common.Address, error) {
	return header.Validator, nil
}
This interface returns the block producer. In the POW consensus it returns header.Coinbase.
In DPOS a Validator field was added to Header, deliberately separating the concepts of Coinbase and Validator. Validator defaults to the same address as Coinbase but may be set to a different one.
3.2 VerifyHeader
Verifies that certain header fields follow the DPOS rules.
If any of the following conditions holds, the header is invalid:
header.Time.Cmp(big.NewInt(time.Now().Unix())) > 0
len(header.Extra) < extraVanity+extraSeal //32+65
header.MixDigest != (common.Hash{})
header.Difficulty.Uint64() != 1
header.UncleHash != types.CalcUncleHash(nil)
parent == nil || parent.Number.Uint64() != number-1 || parent.Hash() != header.ParentHash
// the gap from the parent block's time is less than blockInterval (10 seconds)
parent.Time.Uint64()+uint64(blockInterval) > header.Time.Uint64()
3.3 VerifyHeaders
Verifies a batch of headers.
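A minimal sketch of the batch form, following the usual go-ethereum abort/results channel pattern; the internal verifyHeader helper taking the already-verified parents is an assumption:

// Sketch: verify a batch of headers, pushing one error per header onto the results
// channel and stopping early when abort is closed.
func (d *Dpos) VerifyHeaders(chain consensus.ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
	abort := make(chan struct{})
	results := make(chan error, len(headers))
	go func() {
		for i, header := range headers {
			err := d.verifyHeader(chain, header, headers[:i])
			select {
			case <-abort:
				return
			case results <- err:
			}
		}
	}()
	return abort, results
}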
3.4 VerifyUncles
There should be no uncles in DPOS.
func (d *Dpos) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
	if len(block.Uncles()) > 0 {
		return errors.New("uncles not allowed")
	}
	return nil
}
3.5 Prepare
Prepare fills in some of the header fields:
- Nonce is left empty.
- Extra is reserved as 32+65 zero bytes: a 32-byte extraVanity prefix and a 65-byte extraSeal suffix. extraSeal is where the validator's signature is written when the block is sealed.
- Difficulty is set to 1.
- Validator is set to the signer; the signer is set when mining starts and is simply this node's validator (Ethereum.validator).
func (d *Dpos) Prepare(chain consensus.ChainReader, header *types.Header) error {
	header.Nonce = types.BlockNonce{}
	number := header.Number.Uint64()
	// if header.Extra is shorter than 32 bytes, pad it with zeros up to 32 bytes
	if len(header.Extra) < extraVanity {
		header.Extra = append(header.Extra, bytes.Repeat([]byte{0x00}, extraVanity-len(header.Extra))...)
	}
	header.Extra = header.Extra[:extraVanity]
	// then append another 65 bytes to header.Extra
	header.Extra = append(header.Extra, make([]byte, extraSeal)...)
	parent := chain.GetHeader(header.ParentHash, number-1)
	if parent == nil {
		return consensus.ErrUnknownAncestor
	}
	header.Difficulty = d.CalcDifficulty(chain, header.Time.Uint64(), parent)
	// header.Validator is set to the Dpos signer
	header.Validator = d.signer
	return nil
}
About difficulty
In DPOS there is no difficulty to solve; a fixed value is enough.
func (d *Dpos) CalcDifficulty(chain consensus.ChainReader, time uint64, parent *types.Header) *big.Int {
	return big.NewInt(1)
}
In POW, by contrast, difficulty is adjusted dynamically based on the time gap between the parent block and the new block: a gap under 10 seconds raises the difficulty, a gap of 20 seconds or more lowers it, and between 10 and 19 seconds it stays unchanged.
block_diff = parent_diff + difficulty_adjustment + difficulty_bomb
difficulty_adjustment = parent_diff // 2048 * MAX(1 - (block_timestamp - parent_timestamp) // 10, -99)
difficulty_bomb = INT(2^((block_number // 100000) - 2))
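For illustration, the formula above translated directly into Go (a sketch only; the real ethash code also clamps to a minimum difficulty, which is omitted here, and the parameter layout is illustrative):

// Sketch of the Homestead-style difficulty formula shown above.
func calcDifficultyHomestead(blockTime, parentTime uint64, blockNumber, parentDiff *big.Int) *big.Int {
	// difficulty_adjustment = parent_diff // 2048 * MAX(1 - (block_timestamp - parent_timestamp) // 10, -99)
	factor := int64(1) - int64((blockTime-parentTime)/10)
	if factor < -99 {
		factor = -99
	}
	adjust := new(big.Int).Div(parentDiff, big.NewInt(2048))
	diff := new(big.Int).Add(parentDiff, adjust.Mul(adjust, big.NewInt(factor)))

	// difficulty_bomb = INT(2^((block_number // 100000) - 2)), zero while the exponent is negative
	period := new(big.Int).Div(blockNumber, big.NewInt(100000))
	if period.Cmp(big.NewInt(2)) >= 0 {
		bomb := new(big.Int).Exp(big.NewInt(2), period.Sub(period, big.NewInt(2)), nil)
		diff.Add(diff, bomb)
	}
	return diff
}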
About the signer
The node's validator is set manually through the API:
func (api *PrivateMinerAPI) SetValidator(validator common.Address) bool {
	api.e.SetValidator(validator) // e *Ethereum
	return true
}

func (self *Ethereum) SetValidator(validator common.Address) {
	self.lock.Lock() // lock sync.RWMutex
	self.validator = validator
	self.lock.Unlock()
}
When the node starts mining, dpos.Authorize is called to assign the validator to dpos.signer:
func (s *Ethereum) StartMining(local bool) error {
	validator, err := s.Validator()
	……
	if dpos, ok := s.engine.(*dpos.Dpos); ok {
		wallet, err := s.accountManager.Find(accounts.Account{Address: validator})
		if wallet == nil || err != nil {
			log.Error("Coinbase account unavailable locally", "err", err)
			return fmt.Errorf("signer missing: %v", err)
		}
		dpos.Authorize(validator, wallet.SignHash)
	}
	……
}

func (s *Ethereum) Validator() (validator common.Address, err error) {
	s.lock.RLock() // lock sync.RWMutex
	validator = s.validator
	s.lock.RUnlock()
	……
}

func (d *Dpos) Authorize(signer common.Address, signFn SignerFn) {
	d.mu.Lock()
	d.signer = signer
	d.signFn = signFn
	d.mu.Unlock()
}
3.6 Finalize
Finalize builds a new block, though not yet the final one. See the comments for what it does.
func (d *Dpos) Finalize(……) {
	// pay the block reward to the Coinbase: 3 eth from the Byzantium fork on, 5 eth before it
	AccumulateRewards(chain.Config(), state, header, uncles)
	// run the election; the function itself decides whether a new epoch has started
	err := epochContext.tryElect(genesis, parent)
	// for every block produced, add 1 to its validator's mint count, i.e. update DposContext.mintCntTrie
	updateMintCnt(parent.Time.Int64(), header.Time.Int64(), header.Validator, dposContext)
	// set header, transactions, Bloom and uncles on the block;
	// set TxHash, ReceiptHash and UncleHash on the header
	return types.NewBlock(header, txs, uncles, receipts), nil
}
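updateMintCnt is only mentioned in the comment above; a sketch of what it does, keyed by (epoch, validator) in mintCntTrie. The exact key layout follows the description of mintCntTrie in section 2.2 and is otherwise an assumption:

// Sketch: bump the validator's per-epoch mint count by one.
func updateMintCnt(parentBlockTime, currentBlockTime int64, validator common.Address, dposContext *types.DposContext) {
	currentMintCntTrie := dposContext.MintCntTrie()
	currentEpoch := parentBlockTime / epochInterval
	currentEpochBytes := make([]byte, 8)
	binary.BigEndian.PutUint64(currentEpochBytes, uint64(currentEpoch))

	cnt := int64(1)
	newEpoch := currentBlockTime / epochInterval
	// still in the same epoch as the parent: read the old count for this validator, if any, and add one
	if currentEpoch == newEpoch {
		iter := trie.NewIterator(currentMintCntTrie.NodeIterator(currentEpochBytes))
		if iter.Next() {
			cntBytes := currentMintCntTrie.Get(append(currentEpochBytes, validator.Bytes()...))
			if cntBytes != nil {
				cnt = int64(binary.BigEndian.Uint64(cntBytes)) + 1
			}
		}
	}

	newCntBytes := make([]byte, 8)
	newEpochBytes := make([]byte, 8)
	binary.BigEndian.PutUint64(newEpochBytes, uint64(newEpoch))
	binary.BigEndian.PutUint64(newCntBytes, uint64(cnt))
	dposContext.MintCntTrie().TryUpdate(append(newEpochBytes, validator.Bytes()...), newCntBytes)
}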
3.7 Seal
The DPOS Seal mainly signs the new block, i.e. writes the signature into header.Extra, and returns the block in its final state.
d.signFn is a function type. The source first defines a wallet SignHash interface for signing a hash, then passes that function to dpos.Authorize as an argument, so d.signFn ends up holding it. The concrete implementation is keystoreWallet.SignHash, so calling d.signFn really means calling keystoreWallet.SignHash.
func (d *Dpos) Seal(chain consensus.ChainReader, block *types.Block, stop <-chan struct{}) (*types.Block, error) {
	header := block.Header()
	number := header.Number.Uint64()
	// Sealing the genesis block is not supported
	if number == 0 {
		return nil, errUnknownBlock
	}
	now := time.Now().Unix()
	delay := NextSlot(now) - now
	if delay > 0 {
		select {
		case <-stop:
			return nil, nil
		// wait for the next block slot: with one block every 10 seconds, a time within the
		// first 10 seconds waits until second 10, second 11 waits until second 20, and so on
		case <-time.After(time.Duration(delay) * time.Second):
		}
	}
	block.Header().Time.SetInt64(time.Now().Unix())

	// time's up, sign the block
	sighash, err := d.signFn(accounts.Account{Address: d.signer}, sigHash(header).Bytes())
	if err != nil {
		return nil, err
	}
	// copy the signature into the suffix of header.Extra. The slice index cannot be negative here,
	// because Prepare already reserved 32 (prefix) + 65 (suffix) bytes in Extra.
	copy(header.Extra[len(header.Extra)-extraSeal:], sighash)
	return block.WithSeal(header), nil
}
func (b *Block) WithSeal(header *Header) *Block {
	cpy := *header
	return &Block{
		header:       &cpy,
		transactions: b.transactions,
		uncles:       b.uncles,
		// add dposcontext
		DposContext: b.DposContext,
	}
}
3.8 VerifySeal
Seal is the last step of block production and the heart of any consensus algorithm; VerifySeal checks whether that seal is genuine.

1) Get the validator list from epochTrie (the epochTrie key is the literal string "validator"; it is globally unique and is overwritten after every election), then use the header's time to compute the offset of this block's validator within that list (used as an array index) and obtain the validator address.
validator, err := epochContext.lookupValidator(header.Time.Int64())
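A sketch of the lookup just described, assuming blockInterval = 10 and epochInterval = 86400 as used elsewhere in this article: the slot index within the current epoch, modulo the validator count, picks whose turn it is.

// Sketch: map a timestamp to the validator whose slot it is.
func (ec *EpochContext) lookupValidator(now int64) (common.Address, error) {
	// offset of this timestamp inside the current epoch, measured in block slots
	offset := now % epochInterval
	if offset%blockInterval != 0 {
		return common.Address{}, ErrInvalidMintBlockTime
	}
	offset /= blockInterval

	validators, err := ec.DposContext.GetValidators()
	if err != nil {
		return common.Address{}, err
	}
	if len(validators) == 0 {
		return common.Address{}, errors.New("failed to lookup validator")
	}
	// round-robin over the elected validators
	offset %= int64(len(validators))
	return validators[offset], nil
}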
2) Recover the signer's address from the DPOS signature and compare it with that validator; then compare the recovered address with header.Validator as well.
if err := d.verifyBlockSigner(validator, header); err != nil {
	return err
}
func (d *Dpos) verifyBlockSigner(validator common.Address, header *types.Header) error {
	signer, err := ecrecover(header, d.signatures)
	if err != nil {
		return err
	}
	if bytes.Compare(signer.Bytes(), validator.Bytes()) != 0 {
		return ErrInvalidBlockValidator
	}
	if bytes.Compare(signer.Bytes(), header.Validator.Bytes()) != 0 {
		return ErrMismatchSignerAndValidator
	}
	return nil
}
Here:
- header.Validator was assigned in Prepare.
- How does d.signatures get filled? Despite the name it does not store signatures: it is a named cache whose (key, value) pairs are (block header hash, validator address), and it is populated inside ecrecover. ecrecover first looks the validator address up in the cache by header hash; on a miss it recovers the address from the signature portion of header.Extra.
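A sketch of that ecrecover, with the (header hash → validator address) cache described above; the LRU cache type and the errMissingSignature name are assumptions:

// Sketch: recover the validator address from the 65-byte suffix of header.Extra,
// caching the result keyed by the header hash.
func ecrecover(header *types.Header, sigcache *lru.ARCCache) (common.Address, error) {
	// short-circuit if this header's validator was recovered before
	hash := header.Hash()
	if address, known := sigcache.Get(hash); known {
		return address.(common.Address), nil
	}
	// otherwise recover the public key from the signature suffix
	if len(header.Extra) < extraSeal {
		return common.Address{}, errMissingSignature // assumed error value
	}
	signature := header.Extra[len(header.Extra)-extraSeal:]
	pubkey, err := crypto.Ecrecover(sigHash(header).Bytes(), signature)
	if err != nil {
		return common.Address{}, err
	}
	var validator common.Address
	copy(validator[:], crypto.Keccak256(pubkey[1:])[12:])
	sigcache.Add(hash, validator)
	return validator, nil
}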
3) After the two checks above, VerifySeal performs one last operation, which deserves a closer analysis of its own:
return d.updateConfirmedBlockHeader(chain)
3.9 APIs
This hook exposes the engine's APIs.
func (d *Dpos) APIs(chain consensus.ChainReader) []rpc.API {
	return []rpc.API{{
		Namespace: "dpos",
		Version:   "1.0",
		Service:   &API{chain: chain, dpos: d},
		Public:    true,
	}}
}
The concrete APIs are hooked up in the eth package:
apis = append(apis, s.engine.APIs(s.BlockChain())...)
func (s *Ethereum) APIs() []rpc.API {
apis := ethapi.GetAPIs(s.ApiBackend)
// Append any APIs exposed explicitly by the consensus engine
apis = append(apis, s.engine.APIs(s.BlockChain())...)
// Append all the local APIs and return
return append(apis, []rpc.API{
{
Namespace: "eth",
Version: "1.0",
Service: NewPublicEthereumAPI(s),
Public: true,
}, {
Namespace: "eth",
Version: "1.0",
Service: NewPublicMinerAPI(s),
Public: true,
}, {
Namespace: "eth",
Version: "1.0",
Service: downloader.NewPublicDownloaderAPI(s.protocolManager.downloader, s.eventMux),
Public: true,
}, {
Namespace: "miner",
Version: "1.0",
Service: NewPrivateMinerAPI(s),
Public: false,
}, {
Namespace: "eth",
Version: "1.0",
Service: filters.NewPublicFilterAPI(s.ApiBackend, false),
Public: true,
}, {
Namespace: "admin",
Version: "1.0",
Service: NewPrivateAdminAPI(s),
}, {
Namespace: "debug",
Version: "1.0",
Service: NewPublicDebugAPI(s),
Public: true,
}, {
Namespace: "debug",
Version: "1.0",
Service: NewPrivateDebugAPI(s.chainConfig, s),
}, {
Namespace: "net",
Version: "1.0",
Service: s.netRPCService,
Public: true,
},
}...)
}
What gets registered here are really structs; through a struct you can reach its methods. Most of these structs just wrap Ethereum, distinguished by Namespace for different use cases.
type PublicEthereumAPI struct {
	e *Ethereum
}
type PublicMinerAPI struct {
	e *Ethereum
}
type PublicDownloaderAPI struct {
	d                         *Downloader
	mux                       *event.TypeMux
	installSyncSubscription   chan chan interface{}
	uninstallSyncSubscription chan *uninstallSyncSubscriptionRequest
}
type PrivateMinerAPI struct {
	e *Ethereum
}
type PublicDebugAPI struct {
	eth *Ethereum
}
4 The mining flow (worker)
4.1 mintLoop
In the mintLoop method the worker loops forever, blocking on the stopper channel and calling mintBlock once per second.
When the user stops the Ethereum node, the stopper channel is closed and the worker stops.
4.2 Analysis of the mintBlock mining function
This function produces blocks with the engine (POW or DPOS). In the POW version the worker also had to start agents (two implementations, CpuAgent and RemoteAgent), and the agent performed Seal. In DPOS the agent layer is removed and Seal is called directly inside mintBlock.
mintLoop calls mintBlock every second, but a block is not produced every second; the logic is analyzed below.
func (self *worker) mintLoop() {
	ticker := time.NewTicker(time.Second).C
	for {
		select {
		case now := <-ticker:
			self.mintBlock(now.Unix())
		case <-self.stopper:
			close(self.quitCh)
			self.quitCh = make(chan struct{}, 1)
			self.stopper = make(chan struct{}, 1)
			return
		}
	}
}
func (self *worker) mintBlock(now int64) {
	engine, ok := self.engine.(*dpos.Dpos)
	if !ok {
		log.Error("Only the dpos engine was allowed")
		return
	}
	err := engine.CheckValidator(self.chain.CurrentBlock(), now)
	if err != nil {
		switch err {
		case dpos.ErrWaitForPrevBlock,
			dpos.ErrMintFutureBlock,
			dpos.ErrInvalidBlockValidator,
			dpos.ErrInvalidMintBlockTime:
			log.Debug("Failed to mint the block, while ", "err", err)
		default:
			log.Error("Failed to mint the block", "err", err)
		}
		return
	}
	work, err := self.createNewWork()
	if err != nil {
		log.Error("Failed to create the new work", "err", err)
		return
	}
	result, err := self.engine.Seal(self.chain, work.Block, self.quitCh)
	if err != nil {
		log.Error("Failed to seal the block", "err", err)
		return
	}
	self.recv <- &Result{work, result}
}
As the sequence diagram and the source show, mintBlock consists of three main calls:
4.2.1 CheckValidator: validation before minting
This function checks whether the current block producer (validator) matches the validator computed from the DPOS rules, and whether the block-producing time slot has arrived.
func (self *worker) mintBlock(now int64) {
	……
	// check that the block-producing validator is correct.
	// CurrentBlock() is the last block appended to the chain so far;
	// it is assigned in BlockChain.insert.
	err := engine.CheckValidator(self.chain.CurrentBlock(), now)
	……
}
func (d *Dpos) CheckValidator(lastBlock *types.Block, now int64) error {
	// check whether we have reached the last second of the block interval (the slot);
	// the block interval is set to 10 seconds
	if err := d.checkDeadline(lastBlock, now); err != nil {
		return err
	}
	dposContext, err := types.NewDposContextFromProto(d.db, lastBlock.Header().DposContext)
	if err != nil {
		return err
	}
	epochContext := &EpochContext{DposContext: dposContext}
	// apply the dpos rule: first read this epoch's validator list from epochTrie,
	// then compute the offset from the current time to find which validator should mine this block
	validator, err := epochContext.lookupValidator(now)
	if err != nil {
		return err
	}
	// check that the validator derived from the dpos rule matches d.signer, the validator set on this node
	if (validator == common.Address{}) || bytes.Compare(validator.Bytes(), d.signer.Bytes()) != 0 {
		return ErrInvalidBlockValidator
	}
	return nil
}
func (d *Dpos) checkDeadline(lastBlock *types.Block, now int64) error {
	prevSlot := PrevSlot(now)
	nextSlot := NextSlot(now)
	// e.g. if now is 1542117655, then prevSlot = 1542117650 and nextSlot = 1542117660
	if lastBlock.Time().Int64() >= nextSlot {
		return ErrMintFutureBlock
	}
	// nextSlot-now <= 1 requires the minting time to be close to the last second of the block interval
	if lastBlock.Time().Int64() == prevSlot || nextSlot-now <= 1 {
		return nil
	}
	// not time yet: return a wait error
	return ErrWaitForPrevBlock
}
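PrevSlot and NextSlot are simple roundings of the timestamp onto the 10-second slot grid; a sketch consistent with the numeric example in the comment above, assuming blockInterval = 10:

// PrevSlot rounds the timestamp down to the previous slot boundary,
// NextSlot rounds it up to the next one.
func PrevSlot(now int64) int64 {
	return int64((now-1)/blockInterval) * blockInterval
}

func NextSlot(now int64) int64 {
	return int64((now+blockInterval-1)/blockInterval) * blockInterval
}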
If CheckValidator() fails, mintBlock returns and the loop tries again one second later.
If it passes, we move on to createNewWork().
4.2.2 createNewWork: build the new block and finalize it
This function executes the transactions, produces receipts and logs, posts the related events to listeners, calls the dpos engine's Finalize to assemble the block, and adds the not-yet-sealed block to the unconfirmed set.
4.2.2.1 Mining sequence diagram
func (self *worker) createNewWork() (*Work, error) {
self.mu.Lock()
defer self.mu.Unlock()
self.uncleMu.Lock()
defer self.uncleMu.Unlock()
self.currentMu.Lock()
defer self.currentMu.Unlock()
tstart := time.Now()
parent := self.chain.CurrentBlock()
tstamp := tstart.Unix()
if parent.Time().Cmp(new(big.Int).SetInt64(tstamp)) >= 0 {
tstamp = parent.Time().Int64() + 1
}
// this will ensure we're not going off too far in the future
if now := time.Now().Unix(); tstamp > now+1 {
wait := time.Duration(tstamp-now) * time.Second
log.Info("Mining too far in the future", "wait", common.PrettyDuration(wait))
time.Sleep(wait)
}
num := parent.Number()
header := &types.Header{
ParentHash: parent.Hash(),
Number: num.Add(num, common.Big1),
GasLimit: core.CalcGasLimit(parent),
GasUsed: new(big.Int),
Extra: self.extra,
Time: big.NewInt(tstamp),
}
// Only set the coinbase if we are mining (avoid spurious block rewards)
if atomic.LoadInt32(&self.mining) == 1 {
header.Coinbase = self.coinbase
}
if err := self.engine.Prepare(self.chain, header); err != nil {
return nil, fmt.Errorf("got error when preparing header, err: %s", err)
}
// If we are care about TheDAO hard-fork check whether to override the extra-data or not
if daoBlock := self.config.DAOForkBlock; daoBlock != nil {
// Check whether the block is among the fork extra-override range
limit := new(big.Int).Add(daoBlock, params.DAOForkExtraRange)
if header.Number.Cmp(daoBlock) >= 0 && header.Number.Cmp(limit) < 0 {
// Depending whether we support or oppose the fork, override differently
if self.config.DAOForkSupport {
header.Extra = common.CopyBytes(params.DAOForkBlockExtra)
} else if bytes.Equal(header.Extra, params.DAOForkBlockExtra) {
header.Extra = []byte{} // If miner opposes, don't let it use the reserved extra-data
}
}
}
// Could potentially happen if starting to mine in an odd state.
err := self.makeCurrent(parent, header)
if err != nil {
return nil, fmt.Errorf("got error when create mining context, err: %s", err)
}
// Create the current work task and check any fork transitions needed
work := self.current
if self.config.DAOForkSupport && self.config.DAOForkBlock != nil && self.config.DAOForkBlock.Cmp(header.Number) == 0 {
misc.ApplyDAOHardFork(work.state)
}
pending, err := self.eth.TxPool().Pending()
if err != nil {
return nil, fmt.Errorf("got error when fetch pending transactions, err: %s", err)
}
txs := types.NewTransactionsByPriceAndNonce(self.current.signer, pending)
work.commitTransactions(self.mux, txs, self.chain, self.coinbase)
// compute uncles for the new block.
var (
uncles []*types.Header
badUncles []common.Hash
)
for hash, uncle := range self.possibleUncles {
if len(uncles) == 2 {
break
}
if err := self.commitUncle(work, uncle.Header()); err != nil {
log.Trace("Bad uncle found and will be removed", "hash", hash)
log.Trace(fmt.Sprint(uncle))
badUncles = append(badUncles, hash)
} else {
log.Debug("Committing new uncle to block", "hash", hash)
uncles = append(uncles, uncle.Header())
}
}
for _, hash := range badUncles {
delete(self.possibleUncles, hash)
}
// Create the new block to seal with the consensus engine
if work.Block, err = self.engine.Finalize(self.chain, header, work.state, work.txs, uncles, work.receipts, work.dposContext); err != nil {
return nil, fmt.Errorf("got error when finalize block for sealing, err: %s", err)
}
work.Block.DposContext = work.dposContext
// update the count for the miner of new block
// We only care about logging if we're actually mining.
if atomic.LoadInt32(&self.mining) == 1 {
log.Info("Commit new mining work", "number", work.Block.Number(), "txs", work.tcount, "uncles", len(uncles), "elapsed", common.PrettyDuration(time.Since(tstart)))
self.unconfirmed.Shift(work.Block.NumberU64() - 1)
}
return work, nil
}
4.2.2.2 Preparing the block header
First the dpos engine's Prepare is called to fill in the header fields.
……
num := parent.Number()
header := &types.Header{
	ParentHash: parent.Hash(),
	Number:     num.Add(num, common.Big1),
	GasLimit:   core.CalcGasLimit(parent),
	GasUsed:    new(big.Int),
	Extra:      self.extra,
	Time:       big.NewInt(tstamp),
}
// make sure the block time does not drift too far (neither too early nor too late)
if atomic.LoadInt32(&self.mining) == 1 {
	header.Coinbase = self.coinbase
}
self.engine.Prepare(self.chain, header)
……
At this point GasUsed and Extra of the upcoming block header are still empty. From the engine analysis we know Prepare pads Extra with the 32+65 zero-byte prefix and suffix; besides Extra, Prepare also fills in other header fields (see 3.5 Prepare). Once Prepare finishes, most fields are set and only a few remain.
4.2.2.3 Preparing the mining environment
Next the parent block and the new header are passed to makeCurrent.
err := self.makeCurrent(parent, header)
if err != nil {
	return nil, fmt.Errorf("got error when create mining context, err: %s", err)
}
// Create the current work task and check any fork transitions needed
work := self.current
if self.config.DAOForkSupport && self.config.DAOForkBlock != nil && self.config.DAOForkBlock.Cmp(header.Number) == 0 {
	misc.ApplyDAOHardFork(work.state)
}
makeCurrent first creates a new stateDB and dposContext, then assembles a Work struct.
func (self *worker) makeCurrent(parent *types.Block, header *types.Header) error {
state, err := self.chain.StateAt(parent.Root())
if err != nil {
return err
}
dposContext, err := types.NewDposContextFromProto(self.chainDb, parent.Header().DposContext)
if err != nil {
return err
}
work := &Work{
config: self.config,
signer: types.NewEIP155Signer(self.config.ChainId),
state: state,
dposContext: dposContext,
ancestors: set.New(),
family: set.New(),
uncles: set.New(),
header: header,
createdAt: time.Now(),
}
// when 08 is processed ancestors contain 07 (quick block)
for _, ancestor := range self.chain.GetBlocksFromHash(parent.Hash(), 7) {
for _, uncle := range ancestor.Uncles() {
work.family.Add(uncle.Hash())
}
work.family.Add(ancestor.Hash())
work.ancestors.Add(ancestor.Hash())
}
// Keep track of transactions which return errors so they can be removed
work.tcount = 0
self.current = work
return nil
}
In the Work struct, ancestors stores the parent block and up to six blocks before it, while family stores those same blocks plus their uncles; the assembled Work is assigned to *worker.current.
4.2.2.4 Fetching pending transactions from the pool
Then all transactions in the pending state are fetched from the transaction pool. They come grouped by account, with each account's transactions sorted by nonce; call this set S1:
pending, err := self.eth.TxPool().Pending() //S1 = pending
txs := types.NewTransactionsByPriceAndNonce(self.current.signer, pending)
4.2.2.5 Structuring the transaction set
Next, NewTransactionsByPriceAndNonce restructures the set: it splits off the first transaction of every account in S1 into a heads set and returns the following structure:
return &TransactionsByPriceAndNonce{
	txs:    txs,   // S1 with each account's first transaction removed
	heads:  heads, // the first transaction of every account
	signer: signer,
}
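commitTransactions below drives this structure through three accessors: Peek, Shift and Pop. A sketch of their behavior, assuming heads is a price-sorted heap (TxByPrice) and txs maps each account to its remaining nonce-sorted transactions, as in upstream go-ethereum ("heap" is container/heap):

// Peek returns the best transaction to execute next, or nil when done.
func (t *TransactionsByPriceAndNonce) Peek() *types.Transaction {
	if len(t.heads) == 0 {
		return nil
	}
	return t.heads[0]
}

// Shift: the head was executed; replace it with the same account's next transaction.
func (t *TransactionsByPriceAndNonce) Shift() {
	acc, _ := types.Sender(t.signer, t.heads[0])
	if txs, ok := t.txs[acc]; ok && len(txs) > 0 {
		t.heads[0], t.txs[acc] = txs[0], txs[1:]
		heap.Fix(&t.heads, 0)
	} else {
		heap.Pop(&t.heads)
	}
}

// Pop: drop the head and skip the rest of that account's transactions.
func (t *TransactionsByPriceAndNonce) Pop() {
	heap.Pop(&t.heads)
}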
4.2.2.6 Executing the transactions
commitTransactions is then called to execute all transactions included in the new block.
This method executes the restructured set txs. Broadly speaking, executing a transaction means writing the transfer, contract, or dpos-typed data into the corresponding in-memory tries and later flushing them to the local DB.
func (env *Work) commitTransactions(mux *event.TypeMux, txs *types.TransactionsByPriceAndNonce, bc *core.BlockChain, coinbase common.Address) {
gp := new(core.GasPool).AddGas(env.header.GasLimit)
var coalescedLogs []*types.Log
for {
// Retrieve the next transaction and abort if all done
tx := txs.Peek()
if tx == nil {
break
}
// Error may be ignored here. The error has already been checked
// during transaction acceptance is the transaction pool.
//
// We use the eip155 signer regardless of the current hf.
from, _ := types.Sender(env.signer, tx)
// Check whether the tx is replay protected. If we're not in the EIP155 hf
// phase, start ignoring the sender until we do.
if tx.Protected() && !env.config.IsEIP155(env.header.Number) {
log.Trace("Ignoring reply protected transaction", "hash", tx.Hash(), "eip155", env.config.EIP155Block)
txs.Pop()
continue
}
// Start executing the transaction
env.state.Prepare(tx.Hash(), common.Hash{}, env.tcount)
err, logs := env.commitTransaction(tx, bc, coinbase, gp)
switch err {
case core.ErrGasLimitReached:
// Pop the current out-of-gas transaction without shifting in the next from the account
log.Trace("Gas limit exceeded for current block", "sender", from)
txs.Pop()
case core.ErrNonceTooLow:
// New head notification data race between the transaction pool and miner, shift
log.Trace("Skipping transaction with low nonce", "sender", from, "nonce", tx.Nonce())
txs.Shift()
case core.ErrNonceTooHigh:
// Reorg notification data race between the transaction pool and miner, skip account =
log.Trace("Skipping account with hight nonce", "sender", from, "nonce", tx.Nonce())
txs.Pop()
case nil:
// Everything ok, collect the logs and shift in the next transaction from the same account
coalescedLogs = append(coalescedLogs, logs...)
env.tcount++
txs.Shift()
default:
// Strange error, discard the transaction and get the next in line (note, the
// nonce-too-high clause will prevent us from executing in vain).
log.Debug("Transaction failed, account skipped", "hash", tx.Hash(), "err", err)
txs.Shift()
}
}
if len(coalescedLogs) > 0 || env.tcount > 0 {
// make a copy, the state caches the logs and these logs get "upgraded" from pending to mined
// logs by filling in the block hash when the block was mined by the local miner. This can
// cause a race condition if a log was "upgraded" before the PendingLogsEvent is processed.
cpy := make([]*types.Log, len(coalescedLogs))
for i, l := range coalescedLogs {
cpy[i] = new(types.Log)
*cpy[i] = *l
}
go func(logs []*types.Log, tcount int) {
if len(logs) > 0 {
mux.Post(core.PendingLogsEvent{Logs: logs})
}
if tcount > 0 {
mux.Post(core.PendingStateEvent{})
}
}(cpy, env.tcount)
}
}
The method iterates over the structured txs and proceeds in several steps:
- Work.state.Prepare()
  This sets the transaction hash, the block hash (empty at this point), and the transaction index on the StateDB.
  The StateDB operates on the whole account trie, i.e. the world state trie; every executed transaction modifies the world state trie.
  The transaction index is the counter incremented while walking txs.heads; it is unique within the block because it counts across all accounts and all of their transactions included in this block. commitTransactions walks txs like this: start from txs.heads, take the first account's first transaction, then the same account's second transaction, and so on; when that account has no transactions left, move to the next account in txs.heads.
  In other words it first exhausts one account's transactions, then moves on to the next account (a heap-level operation); the restructuring of txs exists precisely to support this loop.
func (self *StateDB) Prepare(thash, bhash common.Hash, ti int) {
	self.thash = thash
	self.bhash = bhash
	self.txIndex = ti
}
- Work.commitTransaction()
  Executes a single transaction. It first takes a versioned snapshot of the stateDB and a backup of the dpos context (the five tries), then calls core.ApplyTransaction(). On error it reverts to the snapshot and the backup. On success the transaction is appended to Work.txs (this copy exists because the original txs structure is consumed while iterating, and Finalize needs the list later as an argument) and its receipt to Work.receipts; finally the receipt logs are returned.

func (env *Work) commitTransaction(tx *types.Transaction, bc *core.BlockChain, coinbase common.Address, gp *core.GasPool) (error, []*types.Log) {
	snap := env.state.Snapshot()
	dposSnap := env.dposContext.Snapshot()
	receipt, _, err := core.ApplyTransaction(env.config, env.dposContext, bc, &coinbase, gp, env.state, env.header, tx, env.header.GasUsed, vm.Config{})
	if err != nil {
		env.state.RevertToSnapshot(snap)
		env.dposContext.RevertToSnapShot(dposSnap)
		return err, nil
	}
	env.txs = append(env.txs, tx)
	env.receipts = append(env.receipts, receipt)
	return nil, receipt.Logs
}
Let's look at how ApplyTransaction() actually executes a transaction:
func ApplyTransaction(config *params.ChainConfig, dposContext *types.DposContext, bc *BlockChain, author *common.Address, gp *GasPool, statedb *state.StateDB, header *types.Header, tx *types.Transaction, usedGas *big.Int, cfg vm.Config) (*types.Receipt, *big.Int, error) {
	msg, err := tx.AsMessage(types.MakeSigner(config, header.Number))
	if err != nil {
		return nil, nil, err
	}
	if msg.To() == nil && msg.Type() != types.Binary {
		return nil, nil, types.ErrInvalidType
	}
	// Create a new context to be used in the EVM environment
	context := NewEVMContext(msg, header, bc, author)
	// Create a new environment which holds all relevant information
	// about the transaction and calling mechanisms.
	vmenv := vm.NewEVM(context, statedb, config, cfg)
	// Apply the transaction to the current state (included in the env)
	_, gas, failed, err := ApplyMessage(vmenv, msg, gp)
	if err != nil {
		return nil, nil, err
	}
	if msg.Type() != types.Binary {
		if err = applyDposMessage(dposContext, msg); err != nil {
			return nil, nil, err
		}
	}
	// Update the state with pending changes
	var root []byte
	if config.IsByzantium(header.Number) {
		statedb.Finalise(true)
	} else {
		root = statedb.IntermediateRoot(config.IsEIP158(header.Number)).Bytes()
	}
	usedGas.Add(usedGas, gas)
	// Create a new receipt for the transaction, storing the intermediate root and gas used by the tx
	// based on the eip phase, we're passing wether the root touch-delete accounts.
	receipt := types.NewReceipt(root, failed, usedGas)
	receipt.TxHash = tx.Hash()
	receipt.GasUsed = new(big.Int).Set(gas)
	// if the transaction created a contract, store the creation address in the receipt.
	if msg.To() == nil {
		receipt.ContractAddress = crypto.CreateAddress(vmenv.Context.Origin, tx.Nonce())
	}
	// Set the receipt logs and create a bloom for filtering
	receipt.Logs = statedb.GetLogs(tx.Hash())
	receipt.Bloom = types.CreateBloom(types.Receipts{receipt})
	return receipt, gas, err
}
NewEVMContext builds the EVM execution context, which looks like this:
var beneficiary common.Address
if author == nil {
	beneficiary, _ = chain.Engine().Author(header) // Ignore error, we're past header validation
} else {
	beneficiary = *author
}
return vm.Context{
	// CanTransfer checks whether the sender's balance covers the transfer amount
	CanTransfer: CanTransfer,
	// Transfer subtracts the amount from the sender and adds it to the recipient
	Transfer: Transfer,
	// block header hash
	GetHash:     GetHashFn(header, chain),
	Origin:      msg.From(),
	Coinbase:    beneficiary,
	BlockNumber: new(big.Int).Set(header.Number),
	Time:        new(big.Int).Set(header.Time),
	Difficulty:  new(big.Int).Set(header.Difficulty),
	GasLimit:    new(big.Int).Set(header.GasLimit),
	GasPrice:    new(big.Int).Set(msg.GasPrice()),
}
beneficiary should be the Coinbase; this looks like a bug. In core/state_processor.go the Process method passes author as nil when calling ApplyTransaction, and when author is nil the code here takes the address from the header via the engine's Author, which in DPOS returns the Validator rather than the Coinbase.
NewEVM creates a virtual machine carrying the EVM context and the interpreter. Then ApplyMessage() is called; its main job is TransitionDb(), the state transition for the current transaction.
TransitionDb in detail
func (st *StateTransition) TransitionDb() (ret []byte, requiredGas, usedGas *big.Int, failed bool, err error) {
if err = st.preCheck(); err != nil {
return
}
msg := st.msg
sender := st.from() // err checked in preCheck
homestead := st.evm.ChainConfig().IsHomestead(st.evm.BlockNumber)
contractCreation := msg.To() == nil
// Pay intrinsic gas
// TODO convert to uint64
intrinsicGas := IntrinsicGas(st.data, contractCreation, homestead)
if intrinsicGas.BitLen() > 64 {
return nil, nil, nil, false, vm.ErrOutOfGas
}
if err = st.useGas(intrinsicGas.Uint64()); err != nil {
return nil, nil, nil, false, err
}
var (
evm = st.evm
// vm errors do not effect consensus and are therefor
// not assigned to err, except for insufficient balance
// error.
vmerr error
)
if contractCreation {
ret, _, st.gas, vmerr = evm.Create(sender, st.data, st.gas, st.value)
} else {
// Increment the nonce for the next transaction
st.state.SetNonce(sender.Address(), st.state.GetNonce(sender.Address())+1)
ret, st.gas, vmerr = evm.Call(sender, st.to().Address(), st.data, st.gas, st.value)
}
if vmerr != nil {
log.Debug("VM returned with error", "err", vmerr)
// The only possible consensus-error would be if there wasn't
// sufficient balance to make the transfer happen. The first
// balance transfer may never fail.
if vmerr == vm.ErrInsufficientBalance {
return nil, nil, nil, false, vmerr
}
}
requiredGas = new(big.Int).Set(st.gasUsed())
st.refundGas()
st.state.AddBalance(st.evm.Coinbase, new(big.Int).Mul(st.gasUsed(), st.gasPrice))
return ret, requiredGas, st.gasUsed(), vmerr != nil, err
}
preCheck verifies that the transaction nonce matches the sender account's current nonce and that the sender's balance can cover the gas cost (gas limit * gas price); if it can, the balance is debited by that amount up front (an intermediate state transition), otherwise the familiar error "insufficient balance to pay for gas" is returned.
IntrinsicGas() computes the transaction's fixed cost: a contract-creation transaction costs 53000 gas and a transfer costs 21000 gas. If the transaction carries data (contract code for a creation, or an attached note for a transfer), that data is charged per byte: 68 gas per non-zero byte and 4 gas per zero byte. The total is the intrinsic gas the transaction must pay.
useGas() checks that the gas provided covers the intrinsic cost computed above and, if so, deducts it from the remaining gas (another state transition).
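A sketch that follows those fee rules literally (constants taken from go-ethereum's params package of that era; the *big.Int return type matches the TransitionDb code above):

// Sketch: fixed cost plus per-byte data cost.
func IntrinsicGas(data []byte, contractCreation, homestead bool) *big.Int {
	igas := new(big.Int)
	if contractCreation && homestead {
		igas.SetUint64(params.TxGasContractCreation) // 53000
	} else {
		igas.SetUint64(params.TxGas) // 21000
	}
	if len(data) > 0 {
		var nz int64
		for _, b := range data {
			if b != 0 {
				nz++
			}
		}
		m := big.NewInt(nz)
		m.Mul(m, new(big.Int).SetUint64(params.TxDataNonZeroGas)) // 68 per non-zero byte
		igas.Add(igas, m)
		m.SetInt64(int64(len(data)) - nz)
		m.Mul(m, new(big.Int).SetUint64(params.TxDataZeroGas)) // 4 per zero byte
		igas.Add(igas, m)
	}
	return igas
}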
Because the msg passed into ApplyTransaction has already rejected dpos-typed transactions with an empty to, when msg.To() == nil here the only remaining possibility is a plain msg.Type == 0 (Binary) transaction. An empty msg.To means the transaction is neither a transfer nor a contract call, so it can only be a contract creation. Based on whether msg.To is empty there are two paths, Create (contract creation) and Call (contract invocation), and both of them also cover value transfers.
1) if contractCreation {…}, i.e. to == nil: this is a contract-creation transaction and evm.Create() is called.
// Create creates a new contract using code as deployment code.
func (evm *EVM) Create(caller ContractRef, code []byte, gas uint64, value *big.Int) (ret []byte, contractAddr common.Address, leftOverGas uint64, err error) {
// Depth check execution. Fail if we're trying to execute above the
// limit.
if evm.depth > int(params.CallCreateDepth) {
return nil, common.Address{}, gas, ErrDepth
}
if !evm.CanTransfer(evm.StateDB, caller.Address(), value) {
return nil, common.Address{}, gas, ErrInsufficientBalance
}
// Ensure there's no existing contract already at the designated address
nonce := evm.StateDB.GetNonce(caller.Address())
evm.StateDB.SetNonce(caller.Address(), nonce+1)
contractAddr = crypto.CreateAddress(caller.Address(), nonce)
contractHash := evm.StateDB.GetCodeHash(contractAddr)
if evm.StateDB.GetNonce(contractAddr) != 0 || (contractHash != (common.Hash{}) && contractHash != emptyCodeHash) {
return nil, common.Address{}, 0, ErrContractAddressCollision
}
// Create a new account on the state
snapshot := evm.StateDB.Snapshot()
evm.StateDB.CreateAccount(contractAddr)
if evm.ChainConfig().IsEIP158(evm.BlockNumber) {
evm.StateDB.SetNonce(contractAddr, 1)
}
evm.Transfer(evm.StateDB, caller.Address(), contractAddr, value)
// initialise a new contract and set the code that is to be used by the
// E The contract is a scoped evmironment for this execution context
// only.
contract := NewContract(caller, AccountRef(contractAddr), value, gas)
contract.SetCallCode(&contractAddr, crypto.Keccak256Hash(code), code)
if evm.vmConfig.NoRecursion && evm.depth > 0 {
return nil, contractAddr, gas, nil
}
ret, err = run(evm, snapshot, contract, nil)
// check whether the max code size has been exceeded
maxCodeSizeExceeded := evm.ChainConfig().IsEIP158(evm.BlockNumber) && len(ret) > params.MaxCodeSize
// if the contract creation ran successfully and no errors were returned
// calculate the gas required to store the code. If the code could not
// be stored due to not enough gas set an error and let it be handled
// by the error checking condition below.
if err == nil && !maxCodeSizeExceeded {
createDataGas := uint64(len(ret)) * params.CreateDataGas
if contract.UseGas(createDataGas) {
evm.StateDB.SetCode(contractAddr, ret)
} else {
err = ErrCodeStoreOutOfGas
}
}
// When an error was returned by the EVM or when setting the creation code
// above we revert to the snapshot and consume any gas remaining. Additionally
// when we're in homestead this also counts for code storage gas errors.
if maxCodeSizeExceeded || (err != nil && (evm.ChainConfig().IsHomestead(evm.BlockNumber) || err != ErrCodeStoreOutOfGas)) {
evm.StateDB.RevertToSnapshot(snapshot)
if err != errExecutionReverted {
contract.UseGas(contract.Gas)
}
}
// Assign err if contract code size exceeds the max while the err is still empty.
if maxCodeSizeExceeded && err == nil {
err = errMaxCodeSizeExceeded
}
return ret, contractAddr, contract.Gas, err
}
Note that the gas passed in here is what remains after the intrinsic cost was deducted. The EVM is a simple stack-based machine with a maximum call depth of 1024; exceeding it is an error.
Then the evmContext's CanTransfer() checks that the sender's balance covers the transfer amount, and if so the sender's nonce is incremented.
Generating the contract address: a contract account's address is derived from the sender's address and that nonce. At this point only the address exists; the nonce read for that address must be 0 and its codeHash must be the empty hash, otherwise there is an address collision and the call fails.
Right after that, evm.StateDB.CreateAccount(contractAddr) creates a new account; this function creates a plain account (the undifferentiated form from which both EOA and contract accounts start).
The new account's address is the one computed above, its Nonce is 0 and its Balance is 0, except that if an account already existed at the same address its balance is carried over; CodeHash is set to the hash of empty code (note: not nil). From EIP158 onward the new account's nonce is set to 1.
evm.Transfer(): if the creation endows the contract with some ether, it is moved from the sender to the new account's address.
Then NewContract() builds a contract execution context named contract.
SetCallCode() sets the input Code and CodeHash on that contract context.
run(): the EVM interprets and executes the contract creation, running the stack operations.
run returns the contract body bytecode (the code to store). If it is longer than 24576 bytes it cannot be stored; otherwise the gas for storing it is computed as length * 200. Finally the code is set on the stateObject and the codeHash on the Account, turning the new account into a contract account.
2) else {…}: if it is not a contract creation (to != nil), evm.Call() is invoked. Call executes the transaction, covering both transfers and contract calls.
func (evm *EVM) Call(caller ContractRef, addr common.Address, input []byte, gas uint64, value *big.Int) (ret []byte, leftOverGas uint64, err error) {
if evm.vmConfig.NoRecursion && evm.depth > 0 {
return nil, gas, nil
}
// Fail if we're trying to execute above the call depth limit
if evm.depth > int(params.CallCreateDepth) {
return nil, gas, ErrDepth
}
// Fail if we're trying to transfer more than the available balance
if !evm.Context.CanTransfer(evm.StateDB, caller.Address(), value) {
return nil, gas, ErrInsufficientBalance
}
var (
to = AccountRef(addr)
snapshot = evm.StateDB.Snapshot()
)
if !evm.StateDB.Exist(addr) {
precompiles := PrecompiledContractsHomestead
if evm.ChainConfig().IsByzantium(evm.BlockNumber) {
precompiles = PrecompiledContractsByzantium
}
if precompiles[addr] == nil && evm.ChainConfig().IsEIP158(evm.BlockNumber) && value.Sign() == 0 {
return nil, gas, nil
}
evm.StateDB.CreateAccount(addr)
}
evm.Transfer(evm.StateDB, caller.Address(), to.Address(), value)
// initialise a new contract and set the code that is to be used by the
// E The contract is a scoped environment for this execution context
// only.
contract := NewContract(caller, to, value, gas)
contract.SetCallCode(&addr, evm.StateDB.GetCodeHash(addr), evm.StateDB.GetCode(addr))
ret, err = run(evm, snapshot, contract, input)
// When an error was returned by the EVM or when setting the creation code
// above we revert to the snapshot and consume any gas remaining. Additionally
// when we're in homestead this also counts for code storage gas errors.
if err != nil {
evm.StateDB.RevertToSnapshot(snapshot)
if err != errExecutionReverted {
contract.UseGas(contract.Gas)
}
}
return ret, contract.Gas, err
}
Call starts with three checks: it fails if the interpreter is disabled (with non-zero depth), if the call depth exceeds 1024, or if the transfer amount exceeds the balance.
Note how the following Call steps differ from Create:
evm.StateDB.Exist(addr) looks the to address up in stateObjects, the map of all stateObjects. If it does not exist, evm.StateDB.CreateAccount(addr) creates a new account; this is the same function Create uses, so again a plain account is created.
evm.Transfer(): moves the ether from the sender to the to address (this covers pure transfers as well as sending ether to a contract address).
NewContract() builds the contract execution context.
SetCallCode(): unlike in Create, the arguments here come from the to address: the code is fetched from to and only then set on the contract context together with its codehash. This implies two possibilities: if code was found, the account is a contract account; if not, it is an externally owned account (EOA).
run(): the EVM interprets the code and runs the stack operations.
Besides transfers and contract calls, Call also executes dpos transactions. A dpos-typed transaction is effectively an empty contract call; it is run this way so that its gas is still charged.
After the execution, TransitionDb() refunds the remaining gas to the sender's address and credits the gas actually consumed to the Coinbase configured by the mining node.
Besides Call(), the EVM offers three more call methods:
- CallCode(), deprecated and superseded by DelegateCall()
- DelegateCall()
- StaticCall(), not used yet
type CallContext interface {
	// Call another contract
	Call(env *EVM, me ContractRef, addr common.Address, data []byte, gas, value *big.Int) ([]byte, error)
	// Take another's contract code and execute within our own context
	CallCode(env *EVM, me ContractRef, addr common.Address, data []byte, gas, value *big.Int) ([]byte, error)
	// Same as CallCode except sender and value is propagated from parent to child scope
	DelegateCall(env *EVM, me ContractRef, addr common.Address, data []byte, gas *big.Int) ([]byte, error)
	// Create a new contract
	Create(env *EVM, me ContractRef, data []byte, gas, value *big.Int) ([]byte, common.Address, error)
}
What we have discussed above are transactions. By the yellow paper's definition there are only two kinds, contract creation and message call, distinguished by whether to is empty. Only something triggered by an external user is a transaction, so a user creating a contract and a user calling a contract are both transactions, corresponding to the Create and Call paths analyzed above.
A transfer executes Call() rather than Create(), because to is not empty.
A user calling contract A is a transaction and runs Call(); if A then calls contract B, that is not a transaction but an internal call, and it runs DelegateCall() rather than Call(). The difference is that Call always modifies the storage of to directly, while DelegateCall modifies the storage of the caller (A) rather than of to. The NewContract constructor is what wires up msg.caller, to and so on.
As for why DelegateCall replaced CallCode: the change is that in DelegateCall msg.sender always points to the original user, whereas in CallCode the sender points to the caller.
After ApplyMessage() finishes, the code checks whether this is a DPOS transaction; if so, applyDposMessage executes the corresponding one of the four dpos transactions (become a candidate, quit candidacy, vote, cancel a vote), which means updating the corresponding trie.
Then statedb.Finalise deletes empty accounts and updates the state trie, yielding the latest world state root hash (the intermediate root).
Then a receipt is generated, containing:
- the transaction hash
- the execution status (success or failure)
- the gas consumed
- for a contract-creation transaction, the contract address in the receipt's ContractAddress field
- the logs
- the Bloom filter
As for the logs: they are recorded while the stack operations run, and a log entry looks like this:
type Log struct {
	// Consensus fields:
	// address of the contract that generated the event
	Address common.Address `json:"address" gencodec:"required"`
	// list of topics provided by the contract.
	Topics []common.Hash `json:"topics" gencodec:"required"`
	// supplied by the contract, usually ABI-encoded
	Data []byte `json:"data" gencodec:"required"`

	// Derived fields. These fields are filled in by the node
	// but not secured by consensus.
	// block in which the transaction was included
	BlockNumber uint64 `json:"blockNumber"`
	// hash of the transaction
	TxHash common.Hash `json:"transactionHash" gencodec:"required"`
	// index of the transaction in the block
	TxIndex uint `json:"transactionIndex" gencodec:"required"`
	// hash of the block in which the transaction was included
	BlockHash common.Hash `json:"blockHash"`
	// index of the log in the receipt
	Index uint `json:"logIndex" gencodec:"required"`

	// The Removed field is true if this log was reverted due to a chain reorganisation.
	// You must pay attention to this field if you receive logs through a filter query.
	Removed bool `json:"removed"`
}
ApplyTransaction() finally returns the receipt.
That completes commitTransaction(), the execution of a single transaction.
The loop repeats until every transaction has been executed.
While looping over the transactions, the logs from every receipt were collected into one set; once all transactions have been executed, this log set is posted asynchronously to all registered event subscribers:
mux.Post(core.PendingLogsEvent{Logs: logs})
mux.Post(core.PendingStateEvent{})
func (mux *TypeMux) Post(ev interface{}) error {
event := &TypeMuxEvent{
Time: time.Now(),
Data: ev,
}
rtyp := reflect.TypeOf(ev)
mux.mutex.RLock()
if mux.stopped {
mux.mutex.RUnlock()
return ErrMuxClosed
}
subs := mux.subm[rtyp]
mux.mutex.RUnlock()
for _, sub := range subs {
sub.deliver(event)
}
return nil
}
The event is delivered into the postC channel of each TypeMuxSubscription.
func (s *TypeMuxSubscription) deliver(event *TypeMuxEvent) {
// Short circuit delivery if stale event
if s.created.After(event.Time) {
return
}
// Otherwise deliver the event
s.postMu.RLock()
defer s.postMu.RUnlock()
select {
case s.postC <- event:
case <-s.closing:
}
}
Event subscription and delivery get their own section later.
With commitTransactions() done we are back in createNewWork. The code goes on to iterate over possible uncles and bad uncles, but in DPOS this part is no longer needed: there are no uncles, the chainSideCh event was removed, and possibleUncles never gets populated.
4.2.2.7 Finalize: shaping the new block
The header, account state, transactions, receipts and so on are handed to the dpos engine to be finalized; see section 3.6.
4.2.2.8 Checking whether earlier blocks made it onto the chain
Note: this checks whether blocks this node mined earlier made it onto the chain, not the block just mined; the current block is still far from being on the chain.
Every Ethereum node maintains an unconfirmed-block set backed by a ring container. The ring only holds blocks mined by this node itself and, in the most optimistic case (this node mines consecutively), holds at most 5 of them. When the sixth consecutive block is mined by this node, unconfirmedBlocks.Shift() is "executed" (here "executed" means its internal condition is satisfied, not merely that the function is called; the same applies below).
In most cases, though, a node does not mine consecutive blocks, so by the time this node mines its second block, the chain height may already be more than 6 above the block it mined earlier, which also triggers Shift(). In other words, it usually amounts to checking whether the node's previous block was added to the chain.
What Shift does is inspect the unconfirmed set. The set does not literally contain only blocks that never made it onto the chain; rather, whenever the "execute" condition described above is met, the node checks whether its earlier blocks have been confirmed (included in the chain). Blocks in the ring that sit more than 6 heights below the current chain height are then dropped from the set.
The point of this function: only after at least 6 heights' worth of time does the node go back and check whether a block was included, and whether it was or not, the function changes nothing; it merely tells this node what became of its earlier blocks. Why this timing? Presumably to leave 6 heights of time for all nodes to confirm; more on that later.
func (set *unconfirmedBlocks) Shift(height uint64) {
	set.lock.Lock()
	defer set.lock.Unlock()

	for set.blocks != nil {
		// Retrieve the next unconfirmed block and abort if too fresh
		next := set.blocks.Value.(*unconfirmedBlock)
		if next.index+uint64(set.depth) > height {
			break
		}
		// Block seems to exceed depth allowance, check for canonical status
		header := set.chain.GetHeaderByNumber(next.index)
		switch {
		case header == nil:
			log.Warn("Failed to retrieve header of mined block", "number", next.index, "hash", next.hash)
		case header.Hash() == next.hash:
			log.Info(" block reached canonical chain", "number", next.index, "hash", next.hash)
		default:
			log.Info("⑂ block became a side fork", "number", next.index, "hash", next.hash)
		}
		// Drop the block out of the ring
		if set.blocks.Value == set.blocks.Next().Value {
			set.blocks = nil
		} else {
			// the code below runs inside the loop and advances set.blocks for the next iteration
			set.blocks = set.blocks.Move(-1) // point at the last ring element
			set.blocks.Unlink(1)             // remove the original first element
			set.blocks = set.blocks.Move(1)  // point at the original second element
		}
	}
}
4.2.3 Seal: finalizing the new block
This simply applies the dpos engine's Seal rule, i.e. signs the block; see section 3.7.
func (self *worker) mintBlock(now int64) {
……
result, err := self.engine.Seal(self.chain, work.Block, self.quitCh)
……
}
The result and the work object are both sent into the self.recv channel.
func (self *worker) mintBlock(now int64) {
……
self.recv <- &Result{work, result}
……
}
4.3 Storing and broadcasting the new block
worker.wait() has been listening on the worker.recv channel since geth started; it does the following:
func (self *worker) wait() {
for {
for result := range self.recv {
atomic.AddInt32(&self.atWork, -1)
if result == nil || result.Block == nil {
continue
}
block := result.Block
work := result.Work
// Update the block hash in all logs since it is now available and not when the
// receipt/log of individual transactions were created.
for _, r := range work.receipts {
for _, l := range r.Logs {
l.BlockHash = block.Hash()
}
}
for _, log := range work.state.Logs() {
log.BlockHash = block.Hash()
}
stat, err := self.chain.WriteBlockAndState(block, work.receipts, work.state)
if err != nil {
log.Error("Failed writing block to chain", "err", err)
continue
}
// check if canon block and write transactions
if stat == core.CanonStatTy {
// implicit by posting ChainHeadEvent
}
// Broadcast the block and announce chain insertion event
self.mux.Post(core.NewMinedBlockEvent{Block: block})
var (
events []interface{}
logs = work.state.Logs()
)
events = append(events, core.ChainEvent{Block: block, Hash: block.Hash(), Logs: logs})
if stat == core.CanonStatTy {
events = append(events, core.ChainHeadEvent{Block: block})
}
self.chain.PostChainEvents(events, logs)
// Insert the block into the set of pending ones to wait for confirmations
self.unconfirmed.Insert(block.NumberU64(), block.Hash())
log.Info("Successfully sealed new block", "number", block.Number(), "hash", block.Hash())
}
}
}
1) Write to the local database (WriteBlockAndState)
When a new block arrives on the channel, wait calls chain.WriteBlockAndState(), whose WriteBlock() writes the block into the database. The body and the header are stored separately: the body key is "b" + blockNumber + blockHash and its value is the RLP of the transactions and uncles; the header key starts with "h" + blockNumber.
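A sketch of that key scheme, modeled on upstream go-ethereum's core/database_util.go, where the header key also ends with the block hash:

// Sketch: prefix + 8-byte big-endian block number + block hash.
var (
	headerPrefix = []byte("h")
	bodyPrefix   = []byte("b")
)

func encodeBlockNumber(number uint64) []byte {
	enc := make([]byte, 8)
	binary.BigEndian.PutUint64(enc, number)
	return enc
}

func headerKey(number uint64, hash common.Hash) []byte {
	return append(append(headerPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
}

func bodyKey(number uint64, hash common.Hash) []byte {
	return append(append(bodyPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
}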
2) Post a NewMinedBlockEvent
3)PostChainEvents
4) Insert the block into the unconfirmedBlocks set
4.3.1 The event subscribe/publish mechanism
The Subscribe function lets one subscriber subscribe to multiple event types.
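A minimal usage sketch of that API (event.TypeMux): one subscription covering several event types; handleLogs and broadcastBlock are hypothetical handlers.

// One subscription, several event types; events arrive on sub.Chan().
sub := mux.Subscribe(core.PendingLogsEvent{}, core.PendingStateEvent{}, core.NewMinedBlockEvent{})
defer sub.Unsubscribe()
for obj := range sub.Chan() {
	switch ev := obj.Data.(type) {
	case core.PendingLogsEvent:
		handleLogs(ev.Logs) // hypothetical handler
	case core.NewMinedBlockEvent:
		broadcastBlock(ev.Block) // hypothetical handler
	}
}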
4.4 Adding the new block to the chain
To be continued……