In the simplest terms, mining is the process of repeatedly hashing the block header, changing one parameter (the nonce field), until the resulting hash matches a specific target.
Before reading the source, it helps to walk through a node's mining flow as described in Mastering Bitcoin:
(1). Build an empty block, called the candidate block
(2). Package transactions from the mempool into the candidate block
(3). Construct the block header, filling in the following fields
1) the version field
2) the prevhash field (the parent block's hash)
3) summarize all transactions in a merkle tree and write the merkle root hash into the merkle root field
4) the timestamp field
5) the difficulty target field
(4). Start mining. Mining means repeatedly hashing the block header while changing the nonce parameter, until a nonce satisfying the target is found. Once the mining node finds a solution, it writes it into the header's nonce field.
(5). A new block has now been mined, and the mining node then does the following:
1) validates the new block against a standard checklist; once it passes, steps 2) and 3) follow
2) immediately relays the new block to all of its peers; each peer validates the block on receipt and, if it is valid, relays it onward to its own peers.
3) connects the new block to the existing block chain, according to these rules:
locate the parent block in the existing chain via the new block's prevhash field,
(Ⅰ) if the parent is the tip of the main chain, simply append the new block;
(Ⅱ) if the parent sits on a secondary chain, append the new block to that chain, then compare the secondary chain's accumulated work against the main chain's. If the secondary chain has accumulated more work, the node adopts it as the new main chain, and the former main chain becomes secondary;
(Ⅲ) if no parent can be found in the existing chain, the block is considered an "orphan". Orphans are kept in an orphan pool until their parent arrives. Once the parent is received and connected to the chain, the node pulls the orphan out of the pool, attaches it to its parent, and makes it part of the block chain.
(Reading the source later will show that the operations of step (5), i.e. what happens after a new block is mined, live in ProcessNewBlock(). The same method is also invoked when a node receives a new block from the network; in other words, ProcessNewBlock() runs both when this node mines a block and when it receives one.)
Now for the mining entry points. Mining can be driven by the RPC commands generate and generatetoaddress, registered in src/wallet/rpcwallet.cpp:
static const CRPCCommand commands[] =
{ // category name actor (function) argNames
// --------------------- ------------------------ ----------------------- ----------
{ "rawtransactions", "fundrawtransaction", &fundrawtransaction, {"hexstring","options","iswitness"} },
.........
{ "wallet", "rescanblockchain", &rescanblockchain, {"start_height", "stop_height"} },
{ "generating", "generate", &generate, {"nblocks","maxtries"} },
};
and in src/rpc/mining.cpp:
static const CRPCCommand commands[] =
{ // category name actor (function) argNames
// --------------------- ------------------------ ----------------------- ----------
{ "mining", "getnetworkhashps", &getnetworkhashps, {"nblocks","height"} },
{ "mining", "getmininginfo", &getmininginfo, {} },
.........
{ "generating", "generatetoaddress", &generatetoaddress, {"nblocks","address","maxtries"} },
.........
};
These two RPC commands map to generate(const JSONRPCRequest& request) and generatetoaddress(const JSONRPCRequest& request) respectively; both ultimately call generateBlocks():
UniValue generateBlocks(std::shared_ptr<CReserveScript> coinbaseScript, int nGenerate, uint64_t nMaxTries, bool keepScript)
{
static const int nInnerLoopCount = 0x10000;//caps how many nNonce values are tried while hashing one candidate block
int nHeightEnd = 0;
int nHeight = 0;
{ // Don't keep cs_main locked
LOCK(cs_main);
nHeight = chainActive.Height();//the current height of the main chain
nHeightEnd = nHeight+nGenerate;//nGenerate is the number of new blocks to mine; nHeightEnd is the chain height once they have all been mined
}
unsigned int nExtraNonce = 0;
UniValue blockHashes(UniValue::VARR);
while (nHeight < nHeightEnd)//mine until the requested number of new blocks has been produced
{
//create a new candidate block
std::unique_ptr<CBlockTemplate> pblocktemplate(BlockAssembler(Params()).CreateNewBlock(coinbaseScript->reserveScript));
if (!pblocktemplate.get())
throw JSONRPCError(RPC_INTERNAL_ERROR, "Couldn't create new block");
CBlock *pblock = &pblocktemplate->block;
//update the block's extranonce
{
LOCK(cs_main);
IncrementExtraNonce(pblock, chainActive.Tip(), nExtraNonce);
}
//the real mining hash computation starts here: keep changing pblock->nNonce, hashing, and checking the result against the difficulty target (this is the proof of work)
while (nMaxTries > 0 && pblock->nNonce < nInnerLoopCount && !CheckProofOfWork(pblock->GetHash(), pblock->nBits, Params().GetConsensus())) {
++pblock->nNonce;
--nMaxTries;
}
if (nMaxTries == 0) {
break;
}
//if nNonce has been changed nInnerLoopCount times without finding a proof-of-work solution, discard this candidate block and go back to create a fresh one
if (pblock->nNonce == nInnerLoopCount) {
continue;
}
//reaching here means the candidate block completed the proof of work above and is now a valid new block
std::shared_ptr<const CBlock> shared_pblock = std::make_shared<const CBlock>(*pblock);
//call ProcessNewBlock() to process the valid new block
if (!ProcessNewBlock(Params(), shared_pblock, true, nullptr))
throw JSONRPCError(RPC_INTERNAL_ERROR, "ProcessNewBlock, block not accepted");
++nHeight;
//collect the hashes of all newly mined blocks
blockHashes.push_back(pblock->GetHash().GetHex());
//mark script as important because it was used at least for one coinbase output if the script came from the wallet
if (keepScript)
{
coinbaseScript->KeepScript();
}
}
return blockHashes;//return the hashes of all new blocks
}
The comments above mark the key logic, which falls into three parts: building the candidate block, grinding hashes to complete the proof of work, and processing the new block. The sections below analyze each part in more detail.
2.1 Building the candidate block
The candidate block is created by this code:
//create a new candidate block
std::unique_ptr<CBlockTemplate> pblocktemplate(BlockAssembler(Params()).CreateNewBlock(coinbaseScript->reserveScript));
if (!pblocktemplate.get())
throw JSONRPCError(RPC_INTERNAL_ERROR, "Couldn't create new block");
CBlock *pblock = &pblocktemplate->block;
//update the block's extranonce and compute the merkle tree root hash
{
LOCK(cs_main);
IncrementExtraNonce(pblock, chainActive.Tip(), nExtraNonce);
}
This involves two parts:
CreateNewBlock() computes and fills in nVersion, nTime, hashPrevBlock, and nBits, and initializes nNonce to 0; IncrementExtraNonce() updates the nExtraNonce value and computes and fills in the hashMerkleRoot field.
Here is BlockAssembler::CreateNewBlock():
std::unique_ptr<CBlockTemplate> BlockAssembler::CreateNewBlock(const CScript& scriptPubKeyIn, bool fMineWitnessTx)
{
int64_t nTimeStart = GetTimeMicros();
resetBlock();
pblocktemplate.reset(new CBlockTemplate());
if(!pblocktemplate.get())
return nullptr;
pblock = &pblocktemplate->block; // pointer for convenience
// Add dummy coinbase tx as first transaction
//add a placeholder coinbase tx as the block's first transaction; every block's first tx must be a coinbase, so this reserves the slot, and the dummy is filled in properly later
pblock->vtx.emplace_back();
pblocktemplate->vTxFees.push_back(-1); // updated at end
pblocktemplate->vTxSigOpsCost.push_back(-1); // updated at end
LOCK2(cs_main, mempool.cs);
CBlockIndex* pindexPrev = chainActive.Tip(); //the current tip of the main chain, which becomes the new block's parent
assert(pindexPrev != nullptr);
nHeight = pindexPrev->nHeight + 1;//the new block's height = current main-chain height + 1
pblock->nVersion = ComputeBlockVersion(pindexPrev, chainparams.GetConsensus());//compute and fill in the block version field
// -regtest only: allow overriding block.nVersion with
// -blockversion=N to test forking scenarios
if (chainparams.MineBlocksOnDemand())
pblock->nVersion = gArgs.GetArg("-blockversion", pblock->nVersion);
pblock->nTime = GetAdjustedTime();//compute and fill in the block timestamp field
const int64_t nMedianTimePast = pindexPrev->GetMedianTimePast();
nLockTimeCutoff = (STANDARD_LOCKTIME_VERIFY_FLAGS & LOCKTIME_MEDIAN_TIME_PAST)
? nMedianTimePast
: pblock->GetBlockTime();
// Decide whether to include witness transactions
// This is only needed in case the witness softfork activation is reverted
// (which would require a very deep reorganization) or when
// -promiscuousmempoolflags is used.
// TODO: replace this with a call to main to assess validity of a mempool
// transaction (which in most cases can be a no-op).
fIncludeWitness = IsWitnessEnabled(pindexPrev, chainparams.GetConsensus()) && fMineWitnessTx;
int nPackagesSelected = 0;
int nDescendantsUpdated = 0;
addPackageTxs(nPackagesSelected, nDescendantsUpdated);//use the transaction selection algorithm to pick transactions from the mempool and package them into the block; this does not remove them from the mempool, which happens later when the new block is processed in ProcessNewBlock()
int64_t nTime1 = GetTimeMicros();
nLastBlockTx = nBlockTx;
nLastBlockWeight = nBlockWeight;
// Create coinbase transaction.
CMutableTransaction coinbaseTx;
coinbaseTx.vin.resize(1);
coinbaseTx.vin[0].prevout.SetNull();
coinbaseTx.vout.resize(1);
coinbaseTx.vout[0].scriptPubKey = scriptPubKeyIn;
coinbaseTx.vout[0].nValue = nFees + GetBlockSubsidy(nHeight, chainparams.GetConsensus());//the coinbase output is the miner's reward: transaction fees + block subsidy
coinbaseTx.vin[0].scriptSig = CScript() << nHeight << OP_0;
pblock->vtx[0] = MakeTransactionRef(std::move(coinbaseTx));//fill in the placeholder coinbase transaction added earlier
pblocktemplate->vchCoinbaseCommitment = GenerateCoinbaseCommitment(*pblock, pindexPrev, chainparams.GetConsensus());
pblocktemplate->vTxFees[0] = -nFees;
LogPrintf("CreateNewBlock(): block weight: %u txs: %u fees: %ld sigops %d\n", GetBlockWeight(*pblock), nBlockTx, nFees, nBlockSigOpsCost);
// Fill in header
pblock->hashPrevBlock = pindexPrev->GetBlockHash();//fill in the parent block hash
UpdateTime(pblock, chainparams.GetConsensus(), pindexPrev);
pblock->nBits = GetNextWorkRequired(pindexPrev, pblock, chainparams.GetConsensus());//compute and fill in the difficulty target, stored here as nBits
pblock->nNonce = 0;//initialize nNonce to 0
pblocktemplate->vTxSigOpsCost[0] = WITNESS_SCALE_FACTOR * GetLegacySigOpCount(*pblock->vtx[0]);
CValidationState state;
if (!TestBlockValidity(state, chainparams, *pblock, pindexPrev, false, false)) {
throw std::runtime_error(strprintf("%s: TestBlockValidity failed: %s", __func__, FormatStateMessage(state)));
}
int64_t nTime2 = GetTimeMicros();
LogPrint(BCLog::BENCH, "CreateNewBlock() packages: %.2fms (%d packages, %d updated descendants), validity: %.2fms (total %.2fms)\n", 0.001 * (nTime1 - nTimeStart), nPackagesSelected, nDescendantsUpdated, 0.001 * (nTime2 - nTime1), 0.001 * (nTime2 - nTimeStart));
return std::move(pblocktemplate);
}
The comments above cover the key steps of candidate block creation; the subsections below analyze them further: packaging transactions, computing the mining reward, computing the difficulty target, and updating the extranonce and computing the merkle root hash.
2.1.1 Packaging transactions
Transactions are selected from the mempool and packaged into the block; see the addPackageTxs() method:
// This transaction selection algorithm orders the mempool based
// on feerate of a transaction including all unconfirmed ancestors.
// Since we don't remove transactions from the mempool as we select them
// for block inclusion, we need an alternate method of updating the feerate
// of a transaction with its not-yet-selected ancestors as we go.
// This is accomplished by walking the in-mempool descendants of selected
// transactions and storing a temporary modified state in mapModifiedTxs.
// Each time through the loop, we compare the best transaction in
// mapModifiedTxs with the next transaction in the mempool to decide what
// transaction package to work on next.
void BlockAssembler::addPackageTxs(int &nPackagesSelected, int &nDescendantsUpdated)
{
// mapModifiedTx will store sorted packages after they are modified
// because some of their txs are already in the block
indexed_modified_transaction_set mapModifiedTx;
// Keep track of entries that failed inclusion, to avoid duplicate work
CTxMemPool::setEntries failedTx;
// Start by adding all descendants of previously added txs to mapModifiedTx
// and modifying them for their already included ancestors
UpdatePackagesForAdded(inBlock, mapModifiedTx);
CTxMemPool::indexed_transaction_set::index<ancestor_score>::type::iterator mi = mempool.mapTx.get<ancestor_score>().begin();
CTxMemPool::txiter iter;
// Limit the number of attempts to add transactions to the block when it is
// close to full; this is just a simple heuristic to finish quickly if the
// mempool has a lot of entries.
const int64_t MAX_CONSECUTIVE_FAILURES = 1000;
int64_t nConsecutiveFailed = 0;
while (mi != mempool.mapTx.get<ancestor_score>().end() || !mapModifiedTx.empty())
{
// First try to find a new transaction in mapTx to evaluate.
if (mi != mempool.mapTx.get<ancestor_score>().end() &&
SkipMapTxEntry(mempool.mapTx.project<0>(mi), mapModifiedTx, failedTx)) {
++mi;
continue;
}
// Now that mi is not stale, determine which transaction to evaluate:
// the next entry from mapTx, or the best from mapModifiedTx?
bool fUsingModified = false;
modtxscoreiter modit = mapModifiedTx.get<ancestor_score>().begin();
if (mi == mempool.mapTx.get<ancestor_score>().end()) {
// We're out of entries in mapTx; use the entry from mapModifiedTx
iter = modit->iter;
fUsingModified = true;
} else {
// Try to compare the mapTx entry to the mapModifiedTx entry
iter = mempool.mapTx.project<0>(mi);
if (modit != mapModifiedTx.get<ancestor_score>().end() &&
CompareTxMemPoolEntryByAncestorFee()(*modit, CTxMemPoolModifiedEntry(iter))) {
// The best entry in mapModifiedTx has higher score
// than the one from mapTx.
// Switch which transaction (package) to consider
iter = modit->iter;
fUsingModified = true;
} else {
// Either no entry in mapModifiedTx, or it's worse than mapTx.
// Increment mi for the next loop iteration.
++mi;
}
}
// We skip mapTx entries that are inBlock, and mapModifiedTx shouldn't
// contain anything that is inBlock.
assert(!inBlock.count(iter));
uint64_t packageSize = iter->GetSizeWithAncestors();
CAmount packageFees = iter->GetModFeesWithAncestors();
int64_t packageSigOpsCost = iter->GetSigOpCostWithAncestors();
if (fUsingModified) {
packageSize = modit->nSizeWithAncestors;
packageFees = modit->nModFeesWithAncestors;
packageSigOpsCost = modit->nSigOpCostWithAncestors;
}
if (packageFees < blockMinFeeRate.GetFee(packageSize)) {
// Everything else we might consider has a lower fee rate
return;
}
if (!TestPackage(packageSize, packageSigOpsCost)) {
if (fUsingModified) {
// Since we always look at the best entry in mapModifiedTx,
// we must erase failed entries so that we can consider the
// next best entry on the next loop iteration
mapModifiedTx.get<ancestor_score>().erase(modit);
failedTx.insert(iter);
}
++nConsecutiveFailed;
if (nConsecutiveFailed > MAX_CONSECUTIVE_FAILURES && nBlockWeight >
nBlockMaxWeight - 4000) {
// Give up if we're close to full and haven't succeeded in a while
break;
}
continue;
}
CTxMemPool::setEntries ancestors;
uint64_t nNoLimit = std::numeric_limits<uint64_t>::max();
std::string dummy;
mempool.CalculateMemPoolAncestors(*iter, ancestors, nNoLimit, nNoLimit, nNoLimit, nNoLimit, dummy, false);
onlyUnconfirmed(ancestors);
ancestors.insert(iter);
// Test if all tx's are Final
if (!TestPackageTransactions(ancestors)) {
if (fUsingModified) {
mapModifiedTx.get<ancestor_score>().erase(modit);
failedTx.insert(iter);
}
continue;
}
// This transaction will make it in; reset the failed counter.
nConsecutiveFailed = 0;
// Package can be added. Sort the entries in a valid order.
std::vector<CTxMemPool::txiter> sortedEntries;
SortForBlock(ancestors, iter, sortedEntries);
for (size_t i=0; i<sortedEntries.size(); ++i) {
AddToBlock(sortedEntries[i]);
// Erase from the modified set, if present
mapModifiedTx.erase(sortedEntries[i]);
}
++nPackagesSelected;
// Update transactions that depend on each of these
nDescendantsUpdated += UpdateForDescendants(iter, mapModifiedTx, failedTx);
}
}
2.1.2 Computing the mining reward
The coinbase transaction's output is the miner's reward:
coinbaseTx.vout[0].nValue = nFees + GetBlockSubsidy(nHeight, chainparams.GetConsensus());//the coinbase output is the miner's reward: transaction fees + block subsidy
Miner's reward = transaction fees + block subsidy.
2.1.2.1 Transaction fees
The fee total nFees is accumulated by the addPackageTxs() method above from the fees of all transactions packaged into the block; see this part of addPackageTxs():
for (size_t i=0; i<sortedEntries.size(); ++i) {
AddToBlock(sortedEntries[i]);
...
}
which calls AddToBlock():
void BlockAssembler::AddToBlock(CTxMemPool::txiter iter)
{
pblock->vtx.emplace_back(iter->GetSharedTx());
pblocktemplate->vTxFees.push_back(iter->GetFee());
pblocktemplate->vTxSigOpsCost.push_back(iter->GetSigOpCost());
nBlockWeight += iter->GetTxWeight();
++nBlockTx;
nBlockSigOpsCost += iter->GetSigOpCost();
nFees += iter->GetFee();
inBlock.insert(iter);
bool fPrintPriority = gArgs.GetBoolArg("-printpriority", DEFAULT_PRINTPRIORITY);
if (fPrintPriority) {
LogPrintf("fee %s txid %s\n",
CFeeRate(iter->GetModifiedFee(), iter->GetTxSize()).ToString(),
iter->GetTx().GetHash().ToString());
}
}
The line nFees += iter->GetFee(); accumulates the fees of all transactions packaged into the block.
2.1.2.2 Block subsidy
Here is how GetBlockSubsidy() computes the subsidy:
CAmount GetBlockSubsidy(int nHeight, const Consensus::Params& consensusParams)
{
int halvings = nHeight / consensusParams.nSubsidyHalvingInterval;
// Force block reward to zero when right shift is undefined.
if (halvings >= 64)
return 0;
CAmount nSubsidy = 50 * COIN;
// Subsidy is cut in half every 210,000 blocks which will occur approximately every 4 years.
nSubsidy >>= halvings;
return nSubsidy;
}
Dividing the current block height by the halving interval nSubsidyHalvingInterval gives the number of halvings so far. nSubsidyHalvingInterval is 210000 on mainnet, defined in the CMainParams class in src/chainparams.cpp. The subsidy halves every 210,000 blocks; at bitcoin's average of one block every 10 minutes, producing 210,000 blocks takes roughly 4 years, so the subsidy halves about once every 4 years.
With the halving count and the initial subsidy of 50 bitcoins, the current subsidy follows directly.
2.1.3 Computing the difficulty target
nBits encodes the proof-of-work difficulty target, a value computed by the system:
pblock->nBits = GetNextWorkRequired(pindexPrev, pblock, chainparams.GetConsensus());
Here is how GetNextWorkRequired() obtains the difficulty target:
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params)
{
assert(pindexLast != nullptr);
unsigned int nProofOfWorkLimit = UintToArith256(params.powLimit).GetCompact();
// Only change once per difficulty adjustment interval
//if the new block does not land on a difficulty adjustment boundary, i.e. its height is not divisible by 2016, reuse the parent block's difficulty target
if ((pindexLast->nHeight+1) % params.DifficultyAdjustmentInterval() != 0)
{
if (params.fPowAllowMinDifficultyBlocks)
{
// Special difficulty rule for testnet:
// If the new block's timestamp is more than 2* 10 minutes
// then allow mining of a min-difficulty block.
if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2)
return nProofOfWorkLimit;
else
{
// Return the last non-special-min-difficulty-rules-block
const CBlockIndex* pindex = pindexLast;
while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit)
pindex = pindex->pprev;
return pindex->nBits;
}
}
return pindexLast->nBits;
}
// Go back by what we want to be 14 days worth of blocks
//reaching here means the new block lands exactly on an adjustment boundary (its height is divisible by 2016), so perform a difficulty adjustment and compute a new target
int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1);
assert(nHeightFirst >= 0);
const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst);
assert(pindexFirst);
return CalculateNextWorkRequired(pindexLast, pindexFirst->GetBlockTime(), params);
}
The key parts of this code are analyzed further below.
2.1.3.1 The difficulty adjustment interval
Bitcoin adjusts difficulty automatically every 2016 blocks. The figure follows from the protocol's targets: one adjustment every two weeks (14 days), at an average of one block every 10 minutes, works out to 2016 blocks per adjustment. See params.DifficultyAdjustmentInterval():
int64_t DifficultyAdjustmentInterval() const { return nPowTargetTimespan / nPowTargetSpacing; }
nPowTargetTimespan and nPowTargetSpacing are initialized in the CMainParams class in src/chainparams.cpp:
consensus.nPowTargetTimespan = 14 * 24 * 60 * 60; // two weeks
consensus.nPowTargetSpacing = 10 * 60;
nPowTargetTimespan is the two-week adjustment period and nPowTargetSpacing the 10-minute block interval; nPowTargetTimespan / nPowTargetSpacing comes out to 2016, so every 2016 blocks the difficulty is adjusted and a new target computed, as shown next.
2.1.3.2 Computing the new difficulty target
Here is CalculateNextWorkRequired():
unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params)
{
if (params.fPowNoRetargeting)
return pindexLast->nBits;
// Limit adjustment step
// measure how long the most recent 2016 blocks actually took to produce
int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
//limit the adjustment step: clamp the actual timespan between half a week and eight weeks
if (nActualTimespan < params.nPowTargetTimespan/4)//params.nPowTargetTimespan is two weeks, i.e. 20160 minutes
nActualTimespan = params.nPowTargetTimespan/4;
if (nActualTimespan > params.nPowTargetTimespan*4)
nActualTimespan = params.nPowTargetTimespan*4;
// Retarget
const arith_uint256 bnPowLimit = UintToArith256(params.powLimit);
arith_uint256 bnNew;
bnNew.SetCompact(pindexLast->nBits);//the old difficulty target
bnNew *= nActualTimespan;
bnNew /= params.nPowTargetTimespan;
if (bnNew > bnPowLimit)
bnNew = bnPowLimit;
return bnNew.GetCompact();
}
The formula: new target = old target * actual time taken to produce the last 2016 blocks / expected time to produce 2016 blocks
In the code, nBits is the old target, nActualTimespan the actual time taken to produce the last 2016 blocks,
and params.nPowTargetTimespan the time the system expects 2016 blocks to take.
2.1.3.3 Representing the difficulty target
The computation of the target was covered above; now consider its representation. The target is stored as nBits, an unsigned 32-bit integer defined in the CBlockIndex class in src/chain.h:
uint32_t nBits;
The most significant byte of this integer is the exponent and the low three bytes are the coefficient; this notation expresses the proof-of-work target in coefficient/exponent form.
The target is computed as: target = coefficient * 2^(8 * (exponent - 3))
For example, in block 277,316 nBits is 0x1903a30c: 0x19 is the exponent and 0x03a30c the coefficient, so the target is:
target = 0x03a30c * 2^(0x08 * (0x19 - 0x03))
=> target = 0x03a30c * 2^(0x08 * 0x16)
=> target = 0x03a30c * 2^0xB0
In decimal:
=> target = 238,348 * 2^176
=> target = 22,829,202,948,393,929,850,749,706,076,701,368,331,072,452,018,388,575,715,328
Converted back to hexadecimal:
=> target = 0x0000000000000003A30C00000000000000000000000000000000000000000000
That is the full derivation from the unsigned 32-bit integer nBits to the target value.
The function that converts the unsigned 32-bit nBits into the target
(e.g. 0x1903a30c into 0x0000000000000003A30C00000000000000000000000000000000000000000000):
// This implementation directly uses shifts instead of going
// through an intermediate MPI representation.
arith_uint256& arith_uint256::SetCompact(uint32_t nCompact, bool* pfNegative, bool* pfOverflow)
{
int nSize = nCompact >> 24;
uint32_t nWord = nCompact & 0x007fffff;
if (nSize <= 3) {
nWord >>= 8 * (3 - nSize);
*this = nWord;
} else {
*this = nWord;
*this <<= 8 * (nSize - 3);
}
if (pfNegative)
*pfNegative = nWord != 0 && (nCompact & 0x00800000) != 0;
if (pfOverflow)
*pfOverflow = nWord != 0 && ((nSize > 34) ||
(nWord > 0xff && nSize > 33) ||
(nWord > 0xffff && nSize > 32));
return *this;
}
The function that converts the target back into the unsigned 32-bit nBits
(e.g. 0x0000000000000003A30C00000000000000000000000000000000000000000000 into 0x1903a30c):
uint32_t arith_uint256::GetCompact(bool fNegative) const
{
int nSize = (bits() + 7) / 8;
uint32_t nCompact = 0;
if (nSize <= 3) {
nCompact = GetLow64() << 8 * (3 - nSize);
} else {
arith_uint256 bn = *this >> 8 * (nSize - 3);
nCompact = bn.GetLow64();
}
// The 0x00800000 bit denotes the sign.
// Thus, if it is already set, divide the mantissa by 256 and increase the exponent.
if (nCompact & 0x00800000) {
nCompact >>= 8;
nSize++;
}
assert((nCompact & ~0x007fffff) == 0);
assert(nSize < 256);
nCompact |= nSize << 24;
nCompact |= (fNegative && (nCompact & 0x007fffff) ? 0x00800000 : 0);
return nCompact;
}
Both methods belong to the arith_uint256 class declared in src/arith_uint256.h.
2.1.4 Updating the extranonce and computing the merkle root
CreateNewBlock() above fills in nVersion, nTime, hashPrevBlock, and nBits and initializes nNonce to 0, but filling in the header's hashMerkleRoot field does not happen in CreateNewBlock(); it happens in IncrementExtraNonce():
void IncrementExtraNonce(CBlock* pblock, const CBlockIndex* pindexPrev, unsigned int& nExtraNonce)
{
// Update nExtraNonce
static uint256 hashPrevBlock;
if (hashPrevBlock != pblock->hashPrevBlock)
{
nExtraNonce = 0;
hashPrevBlock = pblock->hashPrevBlock;
}
++nExtraNonce;
unsigned int nHeight = pindexPrev->nHeight+1; // Height first in coinbase required for block.version=2
CMutableTransaction txCoinbase(*pblock->vtx[0]);
txCoinbase.vin[0].scriptSig = (CScript() << nHeight << CScriptNum(nExtraNonce)) + COINBASE_FLAGS;
assert(txCoinbase.vin[0].scriptSig.size() <= 100);
pblock->vtx[0] = MakeTransactionRef(std::move(txCoinbase));
pblock->hashMerkleRoot = BlockMerkleRoot(*pblock);
}
Further analysis follows.
2.1.4.1 Updating the extranonce
Before reading the code, see *Mastering Bitcoin, Chapter 8: Mining and Consensus* for the purpose of the extranonce:
Since 2012, bitcoin mining has evolved to resolve a fundamental limitation in the structure of the block header. In the early days of bitcoin, a miner could find a block by iterating through the nonce until the resulting hash was below the target. As difficulty increased, miners often cycled through all 4 billion nonce values without finding a block. This was easily resolved by updating the block timestamp to account for the elapsed time: because the timestamp is part of the header, changing it lets the miner iterate through the nonce values again. Once mining hardware exceeded 4 GH/sec, however, this approach became increasingly difficult, because the nonce values were exhausted in under a second. As ASIC mining equipment pushed past the TH/sec hash rate, mining software needed more room for nonce values in order to find valid blocks. The timestamp could be stretched a little, but moving it too far into the future would make the block invalid. A new source of "change" was needed in the block header. The solution was to use the coinbase transaction as a source of extra nonce values: since the coinbase script can store between 2 and 100 bytes of data, miners began using that space as extra nonce space, allowing them to explore a much larger range of block header values in search of valid blocks. The coinbase transaction is included in the merkle tree, which means any change to the coinbase script changes the merkle root. 8 bytes of extra nonce, plus the 4 bytes of "standard" nonce, allow miners to explore a total of 2^96 (8 followed by 28 zeros) possibilities per second without modifying the timestamp. If, in the future, miners exhaust all of these possibilities, they can still fall back to adjusting the timestamp. There is also more room in the coinbase script for future expansion of the extra nonce space.
To summarize: the 32-bit unsigned nonce field in the block header is simply too small, offering at most about 4 billion possibilities (exactly 2^32 - 1, i.e. 4294967295). The solution allocates 8 bytes in the coinbase transaction's input script as extra nonce space; those 8 bytes of extranonce plus the 4-byte "standard" nonce allow miners to try 2^96 (8 followed by 28 zeros) possibilities without touching the timestamp.
With this scheme the searchable space grows from (2^32 - 1) to 2^96 values.
With the rationale covered, back to the code:
txCoinbase.vin[0].scriptSig = (CScript() << nHeight << CScriptNum(nExtraNonce)) + COINBASE_FLAGS;
This writes nExtraNonce into the scriptSig unlocking script of the coinbase transaction.
2.1.4.2 Computing and filling in hashMerkleRoot
Because the coinbase transaction is part of the merkle tree, any change to the coinbase script changes the merkle root, which is why hashMerkleRoot is computed only after the extranonce has been updated. See BlockMerkleRoot():
uint256 BlockMerkleRoot(const CBlock& block, bool* mutated)
{
std::vector<uint256> leaves;
leaves.resize(block.vtx.size());
for (size_t s = 0; s < block.vtx.size(); s++) {
leaves[s] = block.vtx[s]->GetHash();
}
return ComputeMerkleRoot(leaves, mutated);
}
This ultimately calls MerkleComputation() to compute the merkle root hash; the method is defined in src/consensus/merkle.cpp:
/* This implements a constant-space merkle root/path calculator, limited to 2^32 leaves. */
static void MerkleComputation(const std::vector<uint256>& leaves, uint256* proot, bool* pmutated, uint32_t branchpos, std::vector<uint256>* pbranch) {
if (pbranch) pbranch->clear();
if (leaves.size() == 0) {
if (pmutated) *pmutated = false;
if (proot) *proot = uint256();
return;
}
bool mutated = false;
// count is the number of leaves processed so far.
uint32_t count = 0;
// inner is an array of eagerly computed subtree hashes, indexed by tree
// level (0 being the leaves).
// For example, when count is 25 (11001 in binary), inner[4] is the hash of
// the first 16 leaves, inner[3] of the next 8 leaves, and inner[0] equal to
// the last leaf. The other inner entries are undefined.
uint256 inner[32];
// Which position in inner is a hash that depends on the matching leaf.
int matchlevel = -1;
// First process all leaves into 'inner' values.
while (count < leaves.size()) {
uint256 h = leaves[count];
bool matchh = count == branchpos;
count++;
int level;
// For each of the lower bits in count that are 0, do 1 step. Each
// corresponds to an inner value that existed before processing the
// current leaf, and each needs a hash to combine it.
for (level = 0; !(count & (((uint32_t)1) << level)); level++) {
if (pbranch) {
if (matchh) {
pbranch->push_back(inner[level]);
} else if (matchlevel == level) {
pbranch->push_back(h);
matchh = true;
}
}
mutated |= (inner[level] == h);
CHash256().Write(inner[level].begin(), 32).Write(h.begin(), 32).Finalize(h.begin());
}
// Store the resulting hash at inner position level.
inner[level] = h;
if (matchh) {
matchlevel = level;
}
}
// Do a final 'sweep' over the rightmost branch of the tree to process
// odd levels, and reduce everything to a single top value.
// Level is the level (counted from the bottom) up to which we've sweeped.
int level = 0;
// As long as bit number level in count is zero, skip it. It means there
// is nothing left at this level.
while (!(count & (((uint32_t)1) << level))) {
level++;
}
uint256 h = inner[level];
bool matchh = matchlevel == level;
while (count != (((uint32_t)1) << level)) {
// If we reach this point, h is an inner value that is not the top.
// We combine it with itself (Bitcoin's special rule for odd levels in
// the tree) to produce a higher level one.
if (pbranch && matchh) {
pbranch->push_back(h);
}
CHash256().Write(h.begin(), 32).Write(h.begin(), 32).Finalize(h.begin());
// Increment count to the value it would have if two entries at this
// level had existed.
count += (((uint32_t)1) << level);
level++;
// And propagate the result upwards accordingly.
while (!(count & (((uint32_t)1) << level))) {
if (pbranch) {
if (matchh) {
pbranch->push_back(inner[level]);
} else if (matchlevel == level) {
pbranch->push_back(h);
matchh = true;
}
}
CHash256().Write(inner[level].begin(), 32).Write(h.begin(), 32).Finalize(h.begin());
level++;
}
}
// Return result.
if (pmutated) *pmutated = mutated;
if (proot) *proot = h;
}
2.2 Grinding hashes to complete the proof of work
The previous section covered building a new candidate block: nVersion, nTime, hashPrevBlock, and nBits were computed and filled in, nNonce was initialized to 0, and hashMerkleRoot was computed and filled in. The candidate block is now complete, and what remains is the continuous hashing that turns it into a valid new block.
//the real mining hash computation starts here: keep changing pblock->nNonce, hashing, and checking the result against the difficulty target (this is the proof of work)
while (nMaxTries > 0 && pblock->nNonce < nInnerLoopCount && !CheckProofOfWork(pblock->GetHash(), pblock->nBits, Params().GetConsensus())) {
++pblock->nNonce;
--nMaxTries;
}
if (nMaxTries == 0) {
break;
}
//if nNonce has been changed nInnerLoopCount times without finding a proof-of-work solution, discard this candidate block and go back to create a fresh candidate for a new round
if (pblock->nNonce == nInnerLoopCount) {
continue;
}
The proof-of-work search runs in rounds of nInnerLoopCount (0x10000, i.e. 65536) attempts; if a round produces no solution, a fresh candidate block is created for the next round. The computation itself is straightforward: keep incrementing nNonce and hashing the block, using CheckProofOfWork() to decide whether the difficulty target is met:
bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
bool fNegative;
bool fOverflow;
arith_uint256 bnTarget;
bnTarget.SetCompact(nBits, &fNegative, &fOverflow);
// Check range
if (fNegative || bnTarget == 0 || fOverflow || bnTarget > UintToArith256(params.powLimit))
return false;
// Check proof of work matches claimed amount
if (UintToArith256(hash) > bnTarget)
return false;
return true;
}
UintToArith256(hash) first converts the 256-bit binary hash into a 256-bit unsigned big integer, which is then compared with the target bnTarget; if the block hash is smaller than the target, the proof-of-work computation is complete.
2.3 Processing the new block
With the proof of work complete, the candidate block has become a valid new block, and processing begins:
//call ProcessNewBlock() to process the valid new block
if (!ProcessNewBlock(Params(), shared_pblock, true, nullptr))
throw JSONRPCError(RPC_INTERNAL_ERROR, "ProcessNewBlock, block not accepted");
Here is ProcessNewBlock():
bool ProcessNewBlock(const CChainParams& chainparams, const std::shared_ptr<const CBlock> pblock, bool fForceProcessing, bool *fNewBlock)
{
AssertLockNotHeld(cs_main);
{
CBlockIndex *pindex = nullptr;
if (fNewBlock) *fNewBlock = false;
CValidationState state;
// Ensure that CheckBlock() passes before calling AcceptBlock, as
// belt-and-suspenders.
bool ret = CheckBlock(*pblock, state, chainparams.GetConsensus());//validate the new block first, following the belt-and-suspenders principle
LOCK(cs_main);
if (ret) {
// Store to disk
//store the new block to disk
ret = g_chainstate.AcceptBlock(pblock, state, chainparams, &pindex, fForceProcessing, nullptr, fNewBlock);
}
if (!ret) {
GetMainSignals().BlockChecked(*pblock, state);
return error("%s: AcceptBlock FAILED (%s)", __func__, state.GetDebugMessage());
}
}
NotifyHeaderTip();
CValidationState state; // Only used to report errors, not invalidity - ignore it
//connect the new block to the local block chain
if (!g_chainstate.ActivateBestChain(state, chainparams, pblock))
return error("%s: ActivateBestChain failed", __func__);
return true;
}
At the top level the logic is simple: validate the new block -> store it to disk -> connect it to the local chain,
but each step does a great deal of work. In detail:
2.3.1 Validating the new block
Following the belt-and-suspenders principle, the first step in processing a new block is validating it.
Before the code, see the checklist in *Mastering Bitcoin, Chapter 10: Mining and Consensus*:
The block data structure is syntactically valid
The block header hash is less than the target difficulty (enforces the proof of work)
The block timestamp is less than two hours in the future (allowing for time errors)
The block size is within acceptable limits
The first transaction (and only the first) is a coinbase transaction
All transactions within the block are valid, using the transaction checklist discussed earlier in "Independent Verification of Transactions"
Now look at the CheckBlock() source:
bool CheckBlock(const CBlock& block, CValidationState& state, const Consensus::Params& consensusParams, bool fCheckPOW, bool fCheckMerkleRoot)
{
// These are checks that are independent of context.
if (block.fChecked)
return true;
// Check that the header is valid (particularly PoW). This is mostly
// redundant with the call in AcceptBlockHeader.
//check that the header is valid, particularly the proof of work
if (!CheckBlockHeader(block, state, consensusParams, fCheckPOW))
return false;
// Check the merkle root.
//check the merkle root: recompute it from the block's transactions and compare with the stored value, to detect whether the transactions were tampered with
if (fCheckMerkleRoot) {
bool mutated;
uint256 hashMerkleRoot2 = BlockMerkleRoot(block, &mutated);
if (block.hashMerkleRoot != hashMerkleRoot2)
return state.DoS(100, false, REJECT_INVALID, "bad-txnmrklroot", true, "hashMerkleRoot mismatch");
// Check for merkle tree malleability (CVE-2012-2459): repeating sequences
// of transactions in a block without affecting the merkle root of a block,
// while still invalidating it.
if (mutated)
return state.DoS(100, false, REJECT_INVALID, "bad-txns-duplicate", true, "duplicate transaction");
}
// All potential-corruption validation must be done before we do any
// transaction validation, as otherwise we may mark the header as invalid
// because we receive the wrong transactions for it.
// Note that witness malleability is checked in ContextualCheckBlock, so no
// checks that use witness data may be performed here.
// Size limits
//check that the block size is within limits
if (block.vtx.empty() || block.vtx.size() * WITNESS_SCALE_FACTOR > MAX_BLOCK_WEIGHT || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION | SERIALIZE_TRANSACTION_NO_WITNESS) * WITNESS_SCALE_FACTOR > MAX_BLOCK_WEIGHT)
return state.DoS(100, false, REJECT_INVALID, "bad-blk-length", false, "size limits failed");
// First transaction must be coinbase, the rest must not be
//the first transaction must be a coinbase, and no other transaction may be
if (block.vtx.empty() || !block.vtx[0]->IsCoinBase())
return state.DoS(100, false, REJECT_INVALID, "bad-cb-missing", false, "first tx is not coinbase");
for (unsigned int i = 1; i < block.vtx.size(); i++)
if (block.vtx[i]->IsCoinBase())
return state.DoS(100, false, REJECT_INVALID, "bad-cb-multiple", false, "more than one coinbase");
// Check transactions
//check every transaction in the block against the transaction checklist
for (const auto& tx : block.vtx)
if (!CheckTransaction(*tx, state, false))
return state.Invalid(false, state.GetRejectCode(), state.GetRejectReason(),
strprintf("Transaction check failed (tx hash %s) %s", tx->GetHash().ToString(), state.GetDebugMessage()));
unsigned int nSigOps = 0;
for (const auto& tx : block.vtx)
{
nSigOps += GetLegacySigOpCount(*tx);
}
if (nSigOps * WITNESS_SCALE_FACTOR > MAX_BLOCK_SIGOPS_COST)
return state.DoS(100, false, REJECT_INVALID, "bad-blk-sigops", false, "out-of-bounds SigOpCount");
if (fCheckPOW && fCheckMerkleRoot)
block.fChecked = true;
return true;
}
Each check is annotated above; the existing English comments are already quite clear.
2.3.2 Storing the new block to disk
Once the new block passes validation, g_chainstate.AcceptBlock() is called to store it to disk. This method does a lot of work; here is the source:
/** Store block on disk. If dbp is non-nullptr, the file is known to already reside on disk */
bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, CValidationState& state, const CChainParams& chainparams, CBlockIndex** ppindex, bool fRequested, const CDiskBlockPos* dbp, bool* fNewBlock)
{
const CBlock& block = *pblock;
if (fNewBlock) *fNewBlock = false;
AssertLockHeld(cs_main);
CBlockIndex *pindexDummy = nullptr;
CBlockIndex *&pindex = ppindex ? *ppindex : pindexDummy;
//validate the block header
if (!AcceptBlockHeader(block, state, chainparams, &pindex))
return false;
// Try to process all requested blocks that we don't have, but only
// process an unrequested block if it's new and has enough work to
// advance our tip, and isn't too many blocks ahead.
bool fAlreadyHave = pindex->nStatus & BLOCK_HAVE_DATA;
bool fHasMoreOrSameWork = (chainActive.Tip() ? pindex->nChainWork >= chainActive.Tip()->nChainWork : true);
// Blocks that are too out-of-order needlessly limit the effectiveness of
// pruning, because pruning will not delete block files that contain any
// blocks which are too close in height to the tip. Apply this test
// regardless of whether pruning is enabled; it should generally be safe to
// not process unrequested blocks.
bool fTooFarAhead = (pindex->nHeight > int(chainActive.Height() + MIN_BLOCKS_TO_KEEP));
// TODO: Decouple this function from the block download logic by removing fRequested
// This requires some new chain data structure to efficiently look up if a
// block is in a chain leading to a candidate for best tip, despite not
// being such a candidate itself.
// TODO: deal better with return value and error conditions for duplicate
// and unrequested blocks.
if (fAlreadyHave) return true;
if (!fRequested) { // If we didn't ask for it:
if (pindex->nTx != 0) return true; // This is a previously-processed block that was pruned
if (!fHasMoreOrSameWork) return true; // Don't process less-work chains
if (fTooFarAhead) return true; // Block height is too high
// Protect against DoS attacks from low-work chains.
// If our tip is behind, a peer could try to send us
// low-work blocks on a fake chain that we would never
// request; don't process these.
if (pindex->nChainWork < nMinimumChainWork) return true;
}
if (fNewBlock) *fNewBlock = true;
//Validate the block; if validation fails, set the BLOCK_FAILED_VALID bit in the block's nStatus flags and add the block to the dirty block index set
if (!CheckBlock(block, state, chainparams.GetConsensus()) ||
!ContextualCheckBlock(block, state, chainparams.GetConsensus(), pindex->pprev)) {
if (state.IsInvalid() && !state.CorruptionPossible()) {
pindex->nStatus |= BLOCK_FAILED_VALID;
setDirtyBlockIndex.insert(pindex);
}
return error("%s: %s", __func__, FormatStateMessage(state));
}
// Header is valid/has work, merkle tree and segwit merkle tree are good...RELAY NOW
// (but if it does not build on our best tip, let the SendMessages loop relay it)
if (!IsInitialBlockDownload() && chainActive.Tip() == pindex->pprev)
GetMainSignals().NewPoWValidBlock(pindex, pblock);//all checks passed: start relaying the compact block
// Write block to history file
try {
//Write the block to the on-disk history file
CDiskBlockPos blockPos = SaveBlockToDisk(block, pindex->nHeight, chainparams, dbp);
if (blockPos.IsNull()) {
state.Error(strprintf("%s: Failed to find position to write new block to disk", __func__));
return false;
}
if (!ReceivedBlockTransactions(block, state, pindex, blockPos, chainparams.GetConsensus()))
return error("AcceptBlock(): ReceivedBlockTransactions failed");
} catch (const std::runtime_error& e) {
return AbortNode(state, std::string("System error: ") + e.what());
}
if (fCheckForPruning)
FlushStateToDisk(chainparams, state, FLUSH_STATE_NONE); // we just allocated more disk space for block files
CheckBlockIndex(chainparams.GetConsensus());
return true;
}
The key logic above has been annotated; the sections below analyze it further:
2.3.2.1 Validating the block
AcceptBlockHeader() validates the block header, while CheckBlock() and ContextualCheckBlock() validate the block itself.
2.3.2.2 Relaying the compact block
Once every check passes, GetMainSignals().NewPoWValidBlock() is invoked to relay the compact block, which ultimately calls PeerLogicValidation::NewPoWValidBlock():
void PeerLogicValidation::NewPoWValidBlock(const CBlockIndex *pindex, const std::shared_ptr<const CBlock>& pblock) {
std::shared_ptr<const CBlockHeaderAndShortTxIDs> pcmpctblock = std::make_shared<const CBlockHeaderAndShortTxIDs> (*pblock, true);
const CNetMsgMaker msgMaker(PROTOCOL_VERSION);
LOCK(cs_main);
static int nHighestFastAnnounce = 0;
if (pindex->nHeight <= nHighestFastAnnounce)
return;
nHighestFastAnnounce = pindex->nHeight;
bool fWitnessEnabled = IsWitnessEnabled(pindex->pprev, Params().GetConsensus());
uint256 hashBlock(pblock->GetHash());
{
LOCK(cs_most_recent_block);
most_recent_block_hash = hashBlock;
most_recent_block = pblock;
most_recent_compact_block = pcmpctblock;
fWitnessesPresentInMostRecentCompactBlock = fWitnessEnabled;
}
//Iterate over all peers and announce the compact block data with a CMPCTBLOCK message
connman->ForEachNode([this, &pcmpctblock, pindex, &msgMaker, fWitnessEnabled, &hashBlock](CNode* pnode) {
// TODO: Avoid the repeated-serialization here
if (pnode->nVersion < INVALID_CB_NO_BAN_VERSION || pnode->fDisconnect)
return;
ProcessBlockAvailability(pnode->GetId());
CNodeState &state = *State(pnode->GetId());
// If the peer has, or we announced to them the previous block already,
// but we don't think they have this one, go ahead and announce it
if (state.fPreferHeaderAndIDs && (!fWitnessEnabled || state.fWantsCmpctWitness) &&
!PeerHasHeader(&state, pindex) && PeerHasHeader(&state, pindex->pprev)) {
LogPrint(BCLog::NET, "%s sending header-and-ids %s to peer=%d\n", "PeerLogicValidation::NewPoWValidBlock",
hashBlock.ToString(), pnode->GetId());
connman->PushMessage(pnode, msgMaker.Make(NetMsgType::CMPCTBLOCK, *pcmpctblock));
state.pindexBestHeaderSent = pindex;
}
});
}
The CBlockHeaderAndShortTxIDs constructor first builds a compact block from the full block data; a compact block carries the block header plus short transaction ids. The node then iterates over all of its peers and sends the compact block in a CMPCTBLOCK message. A compact block, rather than the full block, is relayed here as specified by BIP-0152 Compact Block Relay, a scheme that reduces the bandwidth p2p nodes need to broadcast blocks. (For more background, see the article 致密区块(Compact block): 比特币全节点用户的福音.)
So what does a node do when it receives a CMPCTBLOCK message? See the branch of ProcessMessage() in net_processing.cpp that handles it:
else if (strCommand == NetMsgType::CMPCTBLOCK && !fImporting && !fReindex) // Ignore blocks received while importing
{
CBlockHeaderAndShortTxIDs cmpctblock;
vRecv >> cmpctblock;
bool received_new_header = false;
This branch is fairly long, so the full source is omitted here. Reading it shows that a node receiving a CMPCTBLOCK message will, depending on what it already has, send a GETBLOCKTXN message to fetch the block's missing transaction data.
2.3.2.3 Writing the block to the on-disk history file
Next, SaveBlockToDisk() writes the block to the history file on disk:
/** Store block on disk. If dbp is non-nullptr, the file is known to already reside on disk */
static CDiskBlockPos SaveBlockToDisk(const CBlock& block, int nHeight, const CChainParams& chainparams, const CDiskBlockPos* dbp) {
unsigned int nBlockSize = ::GetSerializeSize(block, SER_DISK, CLIENT_VERSION);
CDiskBlockPos blockPos;
if (dbp != nullptr)
blockPos = *dbp;
if (!FindBlockPos(blockPos, nBlockSize+8, nHeight, block.GetBlockTime(), dbp != nullptr)) {
error("%s: FindBlockPos failed", __func__);
return CDiskBlockPos();
}
if (dbp == nullptr) {
if (!WriteBlockToDisk(block, blockPos, chainparams.MessageStart())) {
AbortNode("Failed to write block");
return CDiskBlockPos();
}
}
return blockPos;
}
2.3.2.4 Raising the block's validity flag to BLOCK_VALID_TRANSACTIONS
Once the block has been written to disk, ReceivedBlockTransactions() raises the block's validity flag to BLOCK_VALID_TRANSACTIONS:
/** Mark a block as having its data received and checked (up to BLOCK_VALID_TRANSACTIONS). */
bool CChainState::ReceivedBlockTransactions(const CBlock &block, CValidationState& state, CBlockIndex *pindexNew, const CDiskBlockPos& pos, const Consensus::Params& consensusParams)
{
pindexNew->nTx = block.vtx.size();
pindexNew->nChainTx = 0;
pindexNew->nFile = pos.nFile;
pindexNew->nDataPos = pos.nPos;
pindexNew->nUndoPos = 0;
pindexNew->nStatus |= BLOCK_HAVE_DATA;
if (IsWitnessEnabled(pindexNew->pprev, consensusParams)) {
pindexNew->nStatus |= BLOCK_OPT_WITNESS;
}
pindexNew->RaiseValidity(BLOCK_VALID_TRANSACTIONS);
setDirtyBlockIndex.insert(pindexNew);
if (pindexNew->pprev == nullptr || pindexNew->pprev->nChainTx) {
// If pindexNew is the genesis block or all parents are BLOCK_VALID_TRANSACTIONS.
std::deque<CBlockIndex*> queue;
queue.push_back(pindexNew);
// Recursively process any descendant blocks that now may be eligible to be connected.
while (!queue.empty()) {
CBlockIndex *pindex = queue.front();
queue.pop_front();
pindex->nChainTx = (pindex->pprev ? pindex->pprev->nChainTx : 0) + pindex->nTx;
{
LOCK(cs_nBlockSequenceId);
pindex->nSequenceId = nBlockSequenceId++;
}
if (chainActive.Tip() == nullptr || !setBlockIndexCandidates.value_comp()(pindex, chainActive.Tip())) {
setBlockIndexCandidates.insert(pindex);
}
std::pair<std::multimap<CBlockIndex*, CBlockIndex*>::iterator, std::multimap<CBlockIndex*, CBlockIndex*>::iterator> range = mapBlocksUnlinked.equal_range(pindex);
while (range.first != range.second) {
std::multimap<CBlockIndex*, CBlockIndex*>::iterator it = range.first;
queue.push_back(it->second);
range.first++;
mapBlocksUnlinked.erase(it);
}
}
} else {
if (pindexNew->pprev && pindexNew->pprev->IsValid(BLOCK_VALID_TREE)) {
mapBlocksUnlinked.insert(std::make_pair(pindexNew->pprev, pindexNew));
}
}
return true;
}
2.3.3 Adding the new block to the local block chain
Finally, ActivateBestChain() is called to add the new block to the local block chain:
CValidationState state; // Only used to report errors, not invalidity - ignore it
//Add the new block to the local block chain
if (!g_chainstate.ActivateBestChain(state, chainparams, pblock))
return error("%s: ActivateBestChain failed", __func__);
return true;
}
Here is the ActivateBestChain() source:
/**
* Make the best chain active, in multiple steps. The result is either failure
* or an activated best chain. pblock is either nullptr or a pointer to a block
* that is already loaded (to avoid loading it again from disk).
*/
bool CChainState::ActivateBestChain(CValidationState &state, const CChainParams& chainparams, std::shared_ptr<const CBlock> pblock) {
// Note that while we're often called here from ProcessNewBlock, this is
// far from a guarantee. Things in the P2P/RPC will often end up calling
// us in the middle of ProcessNewBlock - do not assume pblock is set
// sanely for performance or correctness!
AssertLockNotHeld(cs_main);
// ABC maintains a fair degree of expensive-to-calculate internal state
// because this function periodically releases cs_main so that it does not lock up other threads for too long
// during large connects - and to allow for e.g. the callback queue to drain
// we use m_cs_chainstate to enforce mutual exclusion so that only one caller may execute this function at a time
LOCK(m_cs_chainstate);
CBlockIndex *pindexMostWork = nullptr;
CBlockIndex *pindexNewTip = nullptr;
int nStopAtHeight = gArgs.GetArg("-stopatheight", DEFAULT_STOPATHEIGHT);
do {
boost::this_thread::interruption_point();
if (GetMainSignals().CallbacksPending() > 10) {
// Block until the validation queue drains. This should largely
// never happen in normal operation, however may happen during
// reindex, causing memory blowup if we run too far ahead.
SyncWithValidationInterfaceQueue();
}
{
LOCK(cs_main);
CBlockIndex* starting_tip = chainActive.Tip();//starting_tip points at the tip of the active chain
bool blocks_connected = false;
do {
// We absolutely may not unlock cs_main until we've made forward progress
// (with the exception of shutdown due to hardware issues, low disk space, etc).
ConnectTrace connectTrace(mempool); // Destructed before cs_main is unlocked
if (pindexMostWork == nullptr) {
pindexMostWork = FindMostWorkChain();//find the tip of the chain with the most accumulated work
}
// Whether we have anything to do at all.
if (pindexMostWork == nullptr || pindexMostWork == chainActive.Tip()) {//the active chain already has the most work
break;
}
bool fInvalidFound = false;
std::shared_ptr<const CBlock> nullBlockPtr;
//Try to connect blocks up to pindexMostWork, the tip of the chain with the most proof of work
if (!ActivateBestChainStep(state, chainparams, pindexMostWork, pblock && pblock->GetHash() == pindexMostWork->GetBlockHash() ? pblock : nullBlockPtr, fInvalidFound, connectTrace))
return false;
blocks_connected = true;//reaching this point means blocks were connected successfully
if (fInvalidFound) {
// Wipe cache, we may need another branch now.
pindexMostWork = nullptr;
}
pindexNewTip = chainActive.Tip();//pindexNewTip points at the tip of the now-active chain
for (const PerBlockConnectTrace& trace : connectTrace.GetBlocksConnected()) {
assert(trace.pblock && trace.pindex);
GetMainSignals().BlockConnected(trace.pblock, trace.pindex, trace.conflictedTxs);//notify listeners that the block was connected
}
} while (!chainActive.Tip() || (starting_tip && CBlockIndexWorkComparator()(chainActive.Tip(), starting_tip)));
if (!blocks_connected) return true;
const CBlockIndex* pindexFork = chainActive.FindFork(starting_tip);
bool fInitialDownload = IsInitialBlockDownload();
// Notify external listeners about the new tip.
// Enqueue while holding cs_main to ensure that UpdatedBlockTip is called in the order in which blocks are connected
if (pindexFork != pindexNewTip) {
// Notify ValidationInterface subscribers
GetMainSignals().UpdatedBlockTip(pindexNewTip, pindexFork, fInitialDownload);
// Always notify the UI if a new block tip was connected
uiInterface.NotifyBlockTip(fInitialDownload, pindexNewTip);
}
}
// When we reach this point, we switched to a new tip (stored in pindexNewTip).
if (nStopAtHeight && pindexNewTip && pindexNewTip->nHeight >= nStopAtHeight) StartShutdown();
// We check shutdown only after giving ActivateBestChainStep a chance to run once so that we
// never shutdown before connecting the genesis block during LoadChainTip(). Previously this
// caused an assert() failure during shutdown in such cases as the UTXO DB flushing checks
// that the best block hash is non-null.
if (ShutdownRequested())
break;
} while (pindexNewTip != pindexMostWork);
CheckBlockIndex(chainparams.GetConsensus());
// Write changes periodically to disk, after relay.
if (!FlushStateToDisk(chainparams, state, FLUSH_STATE_PERIODIC)) {
return false;
}
return true;
}
The key pieces of logic above are analyzed in more detail below:
2.3.3.1 Finding the chain with the most work
FindMostWorkChain() locates the chain with the most accumulated work:
/**
* Return the tip of the chain with the most work in it, that isn't
* known to be invalid (it's however far from certain to be valid).
*/
CBlockIndex* CChainState::FindMostWorkChain() {
do {
CBlockIndex *pindexNew = nullptr;
// Find the best candidate header.
{
std::set<CBlockIndex*, CBlockIndexWorkComparator>::reverse_iterator it = setBlockIndexCandidates.rbegin();
if (it == setBlockIndexCandidates.rend())
return nullptr;
pindexNew = *it;
}
// Check whether all blocks on the path between the currently active chain and the candidate are valid.
// Just going until the active chain is an optimization, as we know all blocks in it are valid already.
CBlockIndex *pindexTest = pindexNew;
bool fInvalidAncestor = false;
while (pindexTest && !chainActive.Contains(pindexTest)) {
assert(pindexTest->nChainTx || pindexTest->nHeight == 0);
// Pruned nodes may have entries in setBlockIndexCandidates for
// which block files have been deleted. Remove those as candidates
// for the most work chain if we come across them; we can't switch
// to a chain unless we have all the non-active-chain parent blocks.
bool fFailedChain = pindexTest->nStatus & BLOCK_FAILED_MASK;
bool fMissingData = !(pindexTest->nStatus & BLOCK_HAVE_DATA);
if (fFailedChain || fMissingData) {
// Candidate chain is not usable (either invalid or missing data)
if (fFailedChain && (pindexBestInvalid == nullptr || pindexNew->nChainWork > pindexBestInvalid->nChainWork))
pindexBestInvalid = pindexNew;
CBlockIndex *pindexFailed = pindexNew;
// Remove the entire chain from the set.
while (pindexTest != pindexFailed) {
if (fFailedChain) {
pindexFailed->nStatus |= BLOCK_FAILED_CHILD;
} else if (fMissingData) {
// If we're missing data, then add back to mapBlocksUnlinked,
// so that if the block arrives in the future we can try adding
// to setBlockIndexCandidates again.
mapBlocksUnlinked.insert(std::make_pair(pindexFailed->pprev, pindexFailed));
}
setBlockIndexCandidates.erase(pindexFailed);
pindexFailed = pindexFailed->pprev;
}
setBlockIndexCandidates.erase(pindexTest);
fInvalidAncestor = true;
break;
}
pindexTest = pindexTest->pprev;
}
if (!fInvalidAncestor)
return pindexNew;
} while(true);
}
2.3.3.2 Connecting the new block to the most-work chain
Having found the chain with the most proof of work, ActivateBestChainStep() is called to connect the new block toward its tip:
/**
* Try to make some progress towards making pindexMostWork the active block.
* pblock is either nullptr or a pointer to a CBlock corresponding to pindexMostWork.
*/
bool CChainState::ActivateBestChainStep(CValidationState& state, const CChainParams& chainparams, CBlockIndex* pindexMostWork, const std::shared_ptr<const CBlock>& pblock, bool& fInvalidFound, ConnectTrace& connectTrace)
{
AssertLockHeld(cs_main);
const CBlockIndex *pindexOldTip = chainActive.Tip();
const CBlockIndex *pindexFork = chainActive.FindFork(pindexMostWork);
// Disconnect active blocks which are no longer in the best chain.
bool fBlocksDisconnected = false;
DisconnectedBlockTransactions disconnectpool;
while (chainActive.Tip() && chainActive.Tip() != pindexFork) {
if (!DisconnectTip(state, chainparams, &disconnectpool)) {
// This is likely a fatal error, but keep the mempool consistent,
// just in case. Only remove from the mempool in this case.
UpdateMempoolForReorg(disconnectpool, false);
return false;
}
fBlocksDisconnected = true;
}
// Build list of new blocks to connect.
std::vector<CBlockIndex*> vpindexToConnect;
bool fContinue = true;
int nHeight = pindexFork ? pindexFork->nHeight : -1;
while (fContinue && nHeight != pindexMostWork->nHeight) {
// Don't iterate the entire list of potential improvements toward the best tip, as we likely only need
// a few blocks along the way.
int nTargetHeight = std::min(nHeight + 32, pindexMostWork->nHeight);
vpindexToConnect.clear();
vpindexToConnect.reserve(nTargetHeight - nHeight);
CBlockIndex *pindexIter = pindexMostWork->GetAncestor(nTargetHeight);
while (pindexIter && pindexIter->nHeight != nHeight) {
vpindexToConnect.push_back(pindexIter);
pindexIter = pindexIter->pprev;
}
nHeight = nTargetHeight;
// Connect new blocks.
for (CBlockIndex *pindexConnect : reverse_iterate(vpindexToConnect)) {
if (!ConnectTip(state, chainparams, pindexConnect, pindexConnect == pindexMostWork ? pblock : std::shared_ptr<const CBlock>(), connectTrace, disconnectpool)) {
if (state.IsInvalid()) {
// The block violates a consensus rule.
if (!state.CorruptionPossible())
InvalidChainFound(vpindexToConnect.back());
state = CValidationState();
fInvalidFound = true;
fContinue = false;
break;
} else {
// A system error occurred (disk space, database error, ...).
// Make the mempool consistent with the current tip, just in case
// any observers try to use it before shutdown.
UpdateMempoolForReorg(disconnectpool, false);
return false;
}
} else {
PruneBlockIndexCandidates();
if (!pindexOldTip || chainActive.Tip()->nChainWork > pindexOldTip->nChainWork) {
// We're in a better position than we were. Return temporarily to release the lock.
fContinue = false;
break;
}
}
}
}
if (fBlocksDisconnected) {
// If any blocks were disconnected, disconnectpool may be non empty. Add
// any disconnected transactions back to the mempool.
UpdateMempoolForReorg(disconnectpool, true);
}
mempool.check(pcoinsTip.get());
// Callbacks/notifications for a new best chain.
if (fInvalidFound)
CheckForkWarningConditionsOnNewFork(vpindexToConnect.back());
else
CheckForkWarningConditions();
return true;
}
2.3.3.3 Notifying listeners that the block was connected
After the new block has been connected to the most-work chain, listeners are notified that it was added successfully:
for (const PerBlockConnectTrace& trace : connectTrace.GetBlocksConnected()) {
assert(trace.pblock && trace.pindex);
GetMainSignals().BlockConnected(trace.pblock, trace.pindex, trace.conflictedTxs);
}
Following the GetMainSignals().BlockConnected() source, it ultimately calls PeerLogicValidation::BlockConnected() and CWallet::BlockConnected().
2.3.3.3.1 PeerLogicValidation::BlockConnected()
This walks every transaction in the block and erases from the orphan pool any orphan transactions that the block includes or conflicts with:
void PeerLogicValidation::BlockConnected(const std::shared_ptr<const CBlock>& pblock, const CBlockIndex* pindex, const std::vector<CTransactionRef>& vtxConflicted) {
LOCK(g_cs_orphans);
std::vector vOrphanErase;
for (const CTransactionRef& ptx : pblock->vtx) {
const CTransaction& tx = *ptx;
// Which orphan pool entries must we evict?
for (const auto& txin : tx.vin) {
auto itByPrev = mapOrphanTransactionsByPrev.find(txin.prevout);
if (itByPrev == mapOrphanTransactionsByPrev.end()) continue;
for (auto mi = itByPrev->second.begin(); mi != itByPrev->second.end(); ++mi) {
const CTransaction& orphanTx = *(*mi)->second.tx;
const uint256& orphanHash = orphanTx.GetHash();
vOrphanErase.push_back(orphanHash);
}
}
}
// Erase orphan transactions include or precluded by this block
if (vOrphanErase.size()) {
int nErased = 0;
for (uint256 &orphanHash : vOrphanErase) {
nErased += EraseOrphanTx(orphanHash);
}
LogPrint(BCLog::MEMPOOL, "Erased %d orphan tx included or conflicted by block\n", nErased);
}
g_last_tip_update = GetTime();
}
2.3.3.3.2 CWallet::BlockConnected()
This walks every transaction in the block, syncing each one to the wallet and removing it from the mempool:
void CWallet::BlockConnected(const std::shared_ptr<const CBlock>& pblock, const CBlockIndex *pindex, const std::vector<CTransactionRef>& vtxConflicted) {
LOCK2(cs_main, cs_wallet);
// TODO: Temporarily ensure that mempool removals are notified before
// connected transactions. This shouldn't matter, but the abandoned
// state of transactions in our wallet is currently cleared when we
// receive another notification and there is a race condition where
// notification of a connected conflict might cause an outside process
// to abandon a transaction and then have it inadvertently cleared by
// the notification that the conflicted transaction was evicted.
for (const CTransactionRef& ptx : vtxConflicted) {
SyncTransaction(ptx);
TransactionRemovedFromMempool(ptx);
}
for (size_t i = 0; i < pblock->vtx.size(); i++) {
SyncTransaction(pblock->vtx[i], pindex, i);
TransactionRemovedFromMempool(pblock->vtx[i]);
}
m_last_block_processed = pindex;
}