Tendermint Source Code Analysis (1): node

I. Tendermint file structure

  • abci-client: Tendermint acts as an ABCI client with respect to the application and maintains three connections: mempool, consensus, and query.
  • blockchain: provides storage, a pool (a set of peers), and a reactor for storing and exchanging blocks among peers.
  • consensus: the heart of Tendermint Core, implementing the consensus algorithm. It contains two "submodules": wal (write-ahead logging), which guarantees data integrity, and replay, which replays blocks and messages when recovering from a crash.
  • mempool: handles all transactions flowing into the system, whether they come from peers (peers can forward transactions) or from the application (the main source of transactions).
  • p2p: provides abstractions around peer-to-peer communication.
  • rpc: Tendermint's RPC service.
  • state: represents the latest state, plus the execution submodule, which executes blocks against the application.
  • types: a set of publicly exposed types and methods.
  • node: the definition of a Tendermint node.

II. Analysis of tendermint/node

  • id.go
    Depends on the time and crypto packages; the node's public/private key pair is generated via crypto. id.go defines four structs: NodeID, PrivNodeID, NodeGreeting, and SignedNodeGreeting.


  • node.go

1. Analysis of the tendermint/node startup flow
(figure: node startup flow diagram)
1. First, packages are initialized; package initialization proceeds in the order in which the packages are imported by the program.
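Go's initialization rules can be seen in a tiny self-contained example: package-level variables are initialized before any init() runs, init() functions run in source order, and main starts only after every imported package has finished initializing:

```go
package main

import "fmt"

// order records the sequence in which initialization steps run.
var order []string

// Package-level variables are initialized first (in dependency order),
// before any init() function runs.
var a = record("var a")

func record(name string) string {
	order = append(order, name)
	return name
}

// Multiple init() functions in one file run in the order they appear.
func init() { order = append(order, "init 1") }
func init() { order = append(order, "init 2") }

func main() {
	// main runs only after this package (and everything it imports)
	// has finished initializing.
	fmt.Println(order) // prints: [var a init 1 init 2]
}
```

This is why the flow diagram shows all package init work completing before cmd.Execute is ever reached.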
2. The core of the main function is cmd.Execute; tracing it down, the real work is done by the (c *Command) execute method.
Below is the core processing logic of (c *Command) execute.
(figure: core logic of (c *Command) execute)
Execution finally lands in the initFiles function; when the command succeeds, output like the following is produced.

node --proxy_app=dummy --home "/Users/zcy/go/src/github.com/tendermint/tendermint"


2. Source analysis of the startup flow

if c.RunE != nil {
    if err := c.RunE(c, argWoFlags); err != nil {
        return err
    }
} else {
    c.Run(c, argWoFlags) 
}

c.RunE(c, argWoFlags) invokes the anonymous function assigned to RunE; that assignment happens in the NewRunNodeCmd function (/cmd/tendermint/commands/run_node.go):

// NewRunNodeCmd returns the command that allows the CLI to start a
// node. It can be used with a custom PrivValidator and in-process ABCI application.
func NewRunNodeCmd(nodeProvider nm.NodeProvider) *cobra.Command {
    cmd := &cobra.Command{
        Use:   "node",
        Short: "Run the tendermint node",
        RunE: func(cmd *cobra.Command, args []string) error { // anonymous function
            // Create & start node
            n, err := nodeProvider(config, logger)
            if err != nil {
                return fmt.Errorf("Failed to create node: %v", err)
            }

            if err := n.Start(); err != nil {
                return fmt.Errorf("Failed to start node: %v", err)
            } else {
                logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo()) // log "Started node" once startup completes
            }

            // Trap signal, run forever.
            n.RunForever()

            return nil
        },
    }

    AddNodeFlags(cmd)
    return cmd
}

This anonymous function does three things:
1. Create the node (nodeProvider(config, logger))
2. Start the node (n.Start())
3. Trap the signals that stop the node (n.RunForever())

  • Creating the node
    nodeProvider(config, logger) actually executes the DefaultNewNode function (/node/node.go#DefaultNewNode), which returns a *Node instance.
// DefaultNewNode returns a Tendermint node with default settings for the
// PrivValidator, ClientCreator, GenesisDoc, and DBProvider.
// It implements NodeProvider.
func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
    return NewNode(config,
        types.LoadOrGenPrivValidatorFS(config.PrivValidatorFile()),
        proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()),
        DefaultGenesisDocProviderFunc(config),
        DefaultDBProvider,
        logger)
}

The first argument to DefaultNewNode is config, defined in /cmd/tendermint/commands/root.go:

var (
    config = cfg.DefaultConfig()
)

Note: the parameters a user passes on the command line are only a small fraction of what a node needs to start; most are loaded from defaults. The config global variable carries all of the node's default parameters (mainly the base config plus the default RPC, P2P, mempool, consensus, and transaction-indexing configs).
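That layering can be sketched as follows (a heavily simplified Config; the real cfg.Config has many more sub-configs, but the default ports shown match the 0.16-era log output later in this article):

```go
package main

import "fmt"

// Config groups the sub-configs the article lists; the real
// cfg.Config in tendermint has the same layered shape.
type Config struct {
	Moniker string
	RPC     RPCConfig
	P2P     P2PConfig
}

type RPCConfig struct{ ListenAddress string }
type P2PConfig struct{ ListenAddress string }

// DefaultConfig fills every field with a sane default; command-line
// flags then overwrite only the handful of values the user sets.
func DefaultConfig() *Config {
	return &Config{
		Moniker: "anonymous",
		RPC:     RPCConfig{ListenAddress: "tcp://0.0.0.0:46657"},
		P2P:     P2PConfig{ListenAddress: "tcp://0.0.0.0:46656"},
	}
}

func main() {
	config := DefaultConfig()
	// simulate the user passing only --moniker on the command line
	config.Moniker = "bootnode-hongkong"
	fmt.Println(config.Moniker, config.RPC.ListenAddress)
}
```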
The second argument to DefaultNewNode is types.LoadOrGenPrivValidatorFS(config.PrivValidatorFile()), which returns a PrivValidatorFS instance.

The third argument is proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()). In DefaultClientCreator (/proxy/client.go#DefaultClientCreator), the first argument is config.ProxyApp = "dummy", the second is config.ABCI = "socket", and the third is config.DBDir().
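A rough sketch of the dispatch inside DefaultClientCreator (simplified and hypothetical in its details: the real function also uses the DB directory, knows more built-in apps, and returns tendermint's own creator types):

```go
package main

import "fmt"

// ClientCreator abstracts how the ABCI client is built.
type ClientCreator interface{ Kind() string }

type localClientCreator struct{ app string }
type remoteClientCreator struct{ addr, transport string }

func (l localClientCreator) Kind() string  { return "local:" + l.app }
func (r remoteClientCreator) Kind() string { return r.transport + ":" + r.addr }

// defaultClientCreator sketches the dispatch: well-known names such as
// "dummy" get a compiled-in, in-process application; anything else is
// treated as the address of an external ABCI server reached over the
// given transport ("socket" here).
func defaultClientCreator(addr, transport string) ClientCreator {
	switch addr {
	case "dummy", "nilapp":
		return localClientCreator{app: addr}
	default:
		return remoteClientCreator{addr: addr, transport: transport}
	}
}

func main() {
	fmt.Println(defaultClientCreator("dummy", "socket").Kind())
	fmt.Println(defaultClientCreator("tcp://127.0.0.1:46658", "socket").Kind())
}
```

So with config.ProxyApp = "dummy", the node talks to an in-process application rather than dialing out over a socket.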

The function that actually constructs the Node is NewNode, in /node/node.go#NewNode.

// NewNode returns a new, ready to go, Tendermint Node.
func NewNode(config *cfg.Config,
    privValidator types.PrivValidator,
    clientCreator proxy.ClientCreator,
    genesisDocProvider GenesisDocProvider,
    dbProvider DBProvider,
    logger log.Logger) (*Node, error) {

    // Get BlockStore
    // initialize the blockstore database
    blockStoreDB, err := dbProvider(&DBContext{"blockstore", config})
    if err != nil {
        return nil, err
    }
    blockStore := bc.NewBlockStore(blockStoreDB)

    // Get State
    // initialize the state database
    stateDB, err := dbProvider(&DBContext{"state", config})
    if err != nil {
        return nil, err
    }

    // Get genesis doc
    // TODO: move to state package?
    // load the genesis doc from disk
    genDoc, err := loadGenesisDoc(stateDB)
    if err != nil {
        genDoc, err = genesisDocProvider()
        if err != nil {
            return nil, err
        }
        // save genesis doc to prevent a certain class of user errors (e.g. when it
        // was changed, accidentally or not). Also good for audit trail.
        saveGenesisDoc(stateDB, genDoc)
    }

    state, err := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
    if err != nil {
        return nil, err
    }

    // Create the proxyApp, which manages connections (consensus, mempool, query)
    // and sync tendermint and the app by performing a handshake
    // and replaying any necessary blocks
    consensusLogger := logger.With("module", "consensus")
    handshaker := cs.NewHandshaker(stateDB, state, blockStore)
    handshaker.SetLogger(consensusLogger)
    proxyApp := proxy.NewAppConns(clientCreator, handshaker)
    proxyApp.SetLogger(logger.With("module", "proxy"))
    //Start()
    if err := proxyApp.Start(); err != nil {
        return nil, fmt.Errorf("Error starting proxy app connections: %v", err)
    }

    // reload the state (it may have been updated by the handshake)
    state = sm.LoadState(stateDB)

    // Decide whether to fast-sync or not
    // We don't fast-sync when the only validator is us.
    fastSync := config.FastSync // fast sync is enabled by default
    if state.Validators.Size() == 1 {
        addr, _ := state.Validators.GetByIndex(0) // the single validator's address
        if bytes.Equal(privValidator.GetAddress(), addr) {
            fastSync = false // we are the only validator, so there is no one to sync from
        }
    }

    // Log whether this node is a validator or an observer
    if state.Validators.HasAddress(privValidator.GetAddress()) {
        consensusLogger.Info("This node is a validator", "addr", privValidator.GetAddress(), "pubKey", privValidator.GetPubKey())
    } else {
        consensusLogger.Info("This node is not a validator", "addr", privValidator.GetAddress(), "pubKey", privValidator.GetPubKey())
    }

    // Make MempoolReactor
    mempoolLogger := logger.With("module", "mempool")
    // create the mempool (transaction pool)
    mempool := mempl.NewMempool(config.Mempool, proxyApp.Mempool(), state.LastBlockHeight)
    mempool.InitWAL() // no need to have the mempool wal during tests
    mempool.SetLogger(mempoolLogger)
    mempoolReactor := mempl.NewMempoolReactor(config.Mempool, mempool)
    mempoolReactor.SetLogger(mempoolLogger)

    if config.Consensus.WaitForTxs() {
        mempool.EnableTxsAvailable()
    }

    // Make Evidence Reactor
    evidenceDB, err := dbProvider(&DBContext{"evidence", config})
    if err != nil {
        return nil, err
    }
    evidenceLogger := logger.With("module", "evidence")
    evidenceStore := evidence.NewEvidenceStore(evidenceDB)
    evidencePool := evidence.NewEvidencePool(stateDB, evidenceStore)
    evidencePool.SetLogger(evidenceLogger)
    evidenceReactor := evidence.NewEvidenceReactor(evidencePool)
    evidenceReactor.SetLogger(evidenceLogger)

    blockExecLogger := logger.With("module", "state")
    // make block executor for consensus and blockchain reactors to execute blocks
    blockExec := sm.NewBlockExecutor(stateDB, blockExecLogger, proxyApp.Consensus(), mempool, evidencePool)

    // Make BlockchainReactor
    bcReactor := bc.NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
    bcReactor.SetLogger(logger.With("module", "blockchain"))

    // Make ConsensusReactor
    consensusState := cs.NewConsensusState(config.Consensus, state.Copy(),
        blockExec, blockStore, mempool, evidencePool)
    consensusState.SetLogger(consensusLogger)
    if privValidator != nil {
        consensusState.SetPrivValidator(privValidator)
    }
    consensusReactor := cs.NewConsensusReactor(consensusState, fastSync)
    consensusReactor.SetLogger(consensusLogger)

    p2pLogger := logger.With("module", "p2p")

    sw := p2p.NewSwitch(config.P2P)
    sw.SetLogger(p2pLogger)
    sw.AddReactor("MEMPOOL", mempoolReactor)
    sw.AddReactor("BLOCKCHAIN", bcReactor)
    sw.AddReactor("CONSENSUS", consensusReactor)
    sw.AddReactor("EVIDENCE", evidenceReactor)

    // Optionally, start the pex reactor
    var addrBook pex.AddrBook
    var trustMetricStore *trust.TrustMetricStore
    if config.P2P.PexReactor {
        addrBook = pex.NewAddrBook(config.P2P.AddrBookFile(), config.P2P.AddrBookStrict)
        addrBook.SetLogger(p2pLogger.With("book", config.P2P.AddrBookFile()))

        // Get the trust metric history data
        trustHistoryDB, err := dbProvider(&DBContext{"trusthistory", config})
        if err != nil {
            return nil, err
        }
        trustMetricStore = trust.NewTrustMetricStore(trustHistoryDB, trust.DefaultConfig())
        trustMetricStore.SetLogger(p2pLogger)

        var seeds []string
        if config.P2P.Seeds != "" {
            seeds = strings.Split(config.P2P.Seeds, ",")
        }
        pexReactor := pex.NewPEXReactor(addrBook,
            &pex.PEXReactorConfig{Seeds: seeds, SeedMode: config.P2P.SeedMode})
        pexReactor.SetLogger(p2pLogger)
        sw.AddReactor("PEX", pexReactor)
    }

    // Filter peers by addr or pubkey with an ABCI query.
    // If the query return code is OK, add peer.
    // XXX: Query format subject to change
    if config.FilterPeers {
        // NOTE: addr is ip:port
        sw.SetAddrFilter(func(addr net.Addr) error {
            resQuery, err := proxyApp.Query().QuerySync(abci.RequestQuery{Path: cmn.Fmt("/p2p/filter/addr/%s", addr.String())})
            if err != nil {
                return err
            }
            if resQuery.IsErr() {
                return fmt.Errorf("Error querying abci app: %v", resQuery)
            }
            return nil
        })
        sw.SetPubKeyFilter(func(pubkey crypto.PubKey) error {
            resQuery, err := proxyApp.Query().QuerySync(abci.RequestQuery{Path: cmn.Fmt("/p2p/filter/pubkey/%X", pubkey.Bytes())})
            if err != nil {
                return err
            }
            if resQuery.IsErr() {
                return fmt.Errorf("Error querying abci app: %v", resQuery)
            }
            return nil
        })
    }

    eventBus := types.NewEventBus()
    eventBus.SetLogger(logger.With("module", "events"))

    // services which will be publishing and/or subscribing for messages (events)
    // consensusReactor will set it on consensusState and blockExecutor
    consensusReactor.SetEventBus(eventBus)

    // Transaction indexing
    var txIndexer txindex.TxIndexer
    switch config.TxIndex.Indexer {
    case "kv":
        store, err := dbProvider(&DBContext{"tx_index", config})
        if err != nil {
            return nil, err
        }
        if config.TxIndex.IndexTags != "" {
            txIndexer = kv.NewTxIndex(store, kv.IndexTags(strings.Split(config.TxIndex.IndexTags, ",")))
        } else if config.TxIndex.IndexAllTags {
            txIndexer = kv.NewTxIndex(store, kv.IndexAllTags())
        } else {
            txIndexer = kv.NewTxIndex(store)
        }
    default:
        txIndexer = &null.TxIndex{}
    }

    indexerService := txindex.NewIndexerService(txIndexer, eventBus)

    // run the profile server
    profileHost := config.ProfListenAddress
    if profileHost != "" {
        go func() {
            logger.Error("Profile server", "err", http.ListenAndServe(profileHost, nil))
        }()
    }
    // create the node and populate its fields
    node := &Node{
        config:        config,
        genesisDoc:    genDoc,
        privValidator: privValidator,

        sw:               sw,
        addrBook:         addrBook,
        trustMetricStore: trustMetricStore,

        stateDB:          stateDB,
        blockStore:       blockStore,
        bcReactor:        bcReactor,
        mempoolReactor:   mempoolReactor,
        consensusState:   consensusState,
        consensusReactor: consensusReactor,
        evidencePool:     evidencePool,
        proxyApp:         proxyApp,
        txIndexer:        txIndexer,
        indexerService:   indexerService,
        eventBus:         eventBus,
    }
    node.BaseService = *cmn.NewBaseService(logger, "Node", node)
    return node, nil
}

Here it is worth looking at the definition of Node:

// Node is the highest level interface to a full Tendermint node.
// It includes all configuration information and running services.
type Node struct {
    cmn.BaseService // embedded base type

    // config
    config        *cfg.Config
    genesisDoc    *types.GenesisDoc   // initial validator set
    privValidator types.PrivValidator // local node's validator key

    // network
    sw               *p2p.Switch             // p2p connections
    addrBook         pex.AddrBook            // known peers
    trustMetricStore *trust.TrustMetricStore // trust metrics for all peers

    // services
    eventBus         *types.EventBus // pub/sub for services
    stateDB          dbm.DB
    blockStore       *bc.BlockStore         // store the blockchain to disk
    bcReactor        *bc.BlockchainReactor  // for fast-syncing
    mempoolReactor   *mempl.MempoolReactor  // for gossipping transactions
    consensusState   *cs.ConsensusState     // latest consensus state
    consensusReactor *cs.ConsensusReactor   // for participating in the consensus
    evidencePool     *evidence.EvidencePool // tracking evidence
    proxyApp         proxy.AppConns         // connection to the application
    rpcListeners     []net.Listener         // rpc servers
    txIndexer        txindex.TxIndexer
    indexerService   *txindex.IndexerService
}
  • Starting the node
    The function responsible for startup is OnStart, defined in /node/node.go#OnStart():
// OnStart starts the Node. It implements cmn.Service.
func (n *Node) OnStart() error {
    err := n.eventBus.Start() // reuses (bs *BaseService) Start() from BaseService
    if err != nil {
        return err
    }

    // Run the RPC server first
    // so we can eg. receive txs for the first block
    if n.config.RPC.ListenAddress != "" {
        listeners, err := n.startRPC() // start the RPC server
        if err != nil {
            return err
        }
        n.rpcListeners = listeners
    }

    // Create & add listener
    protocol, address := cmn.ProtocolAndAddress(n.config.P2P.ListenAddress)
    l := p2p.NewDefaultListener(protocol, address, n.config.P2P.SkipUPNP, n.Logger.With("module", "p2p"))
    n.sw.AddListener(l)

    // Generate node PrivKey
    // TODO: pass in like priv_val
    nodeKey, err := p2p.LoadOrGenNodeKey(n.config.NodeKeyFile())
    if err != nil {
        return err
    }
    n.Logger.Info("P2P Node ID", "ID", nodeKey.ID(), "file", n.config.NodeKeyFile())

    // Start the switch
    n.sw.SetNodeInfo(n.makeNodeInfo(nodeKey.PubKey()))
    n.sw.SetNodeKey(nodeKey)
    err = n.sw.Start()
    if err != nil {
        return err
    }

    // Always connect to persistent peers
    if n.config.P2P.PersistentPeers != "" {
        err = n.sw.DialPeersAsync(n.addrBook, strings.Split(n.config.P2P.PersistentPeers, ","), true)
        if err != nil {
            return err
        }
    }

    // start tx indexer
    return n.indexerService.Start()
}
  • Stopping the node

When the Tendermint node starts, it calls n.RunForever(), which waits for an interrupt signal and then stops the node.

// RunForever waits for an interrupt signal and stops the node.
func (n *Node) RunForever() {
    // Sleep forever and then...
    cmn.TrapSignal(func() {
        n.Stop() // calls (bs *BaseService) Stop from BaseService
    })
}

The function that actually traps the interrupt signal is TrapSignal (vendor/github.com/tendermint/tmlibs/common/os.go):

// TrapSignal catches the SIGTERM and executes cb function. After that it exits
// with code 1.
func TrapSignal(cb func()) {
   c := make(chan os.Signal, 1)
   signal.Notify(c, os.Interrupt, syscall.SIGTERM)
   go func() {
      for sig := range c {
         fmt.Printf("captured %v, exiting...\n", sig)
         if cb != nil {
            cb()
         }
         os.Exit(1)
      }
   }()
   select {}
}

TrapSignal registers for both os.Interrupt (what Ctrl+C sends) and SIGTERM; receiving either signal terminates the node.

The stop itself is handled by the OnStop function (/node/node.go#OnStop()):

// OnStop stops the Node. It implements cmn.Service.
func (n *Node) OnStop() {
    n.BaseService.OnStop()

    n.Logger.Info("Stopping Node")
    // TODO: gracefully disconnect from peers.
    n.sw.Stop()

    for _, l := range n.rpcListeners {
        n.Logger.Info("Closing rpc listener", "listener", l)
        if err := l.Close(); err != nil {
            n.Logger.Error("Error closing listener", "listener", l, "err", err)
        }
    }

    n.eventBus.Stop()

    n.indexerService.Stop()
}
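OnStart and OnStop are hooks called by the embedded BaseService, whose Start/Stop wrappers guarantee each service starts and stops at most once. A minimal sketch of that pattern (the real cmn.BaseService in tmlibs also handles Reset, a Quit channel, and logging):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Service is the hook interface; Node implements OnStart/OnStop and
// BaseService supplies the idempotent Start/Stop wrappers around them.
type Service interface {
	OnStart() error
	OnStop()
}

type BaseService struct {
	name    string
	started int32
	impl    Service
}

// Start flips the started flag exactly once and delegates to OnStart,
// mirroring how n.Start() ends up in (n *Node) OnStart().
func (bs *BaseService) Start() error {
	if !atomic.CompareAndSwapInt32(&bs.started, 0, 1) {
		return fmt.Errorf("%s already started", bs.name)
	}
	return bs.impl.OnStart()
}

// Stop delegates to OnStop only if the service was running.
func (bs *BaseService) Stop() {
	if atomic.CompareAndSwapInt32(&bs.started, 1, 0) {
		bs.impl.OnStop()
	}
}

type Node struct{ BaseService }

func (n *Node) OnStart() error { fmt.Println("node services up"); return nil }
func (n *Node) OnStop()        { fmt.Println("node services down") }

func main() {
	n := &Node{}
	n.BaseService = BaseService{name: "Node", impl: n}
	fmt.Println(n.Start())        // prints: <nil>
	fmt.Println(n.Start() != nil) // prints: true — a second Start is rejected
	n.Stop()
}
```

This explains the `node.BaseService = *cmn.NewBaseService(logger, "Node", node)` line at the end of NewNode: the node registers itself as the implementation behind its own embedded base.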

3. A startup walkthrough:
First, several application connections are created (multiAppConn) — specifically three: query, mempool, and consensus. Note: here the application is the dummy app running locally, in-process.

I[03-25|04:40:48.219] Starting multiAppConn                        module=proxy impl=multiAppConn
I[03-25|04:40:48.219] Starting localClient                         module=abci-client connection=query impl=localClient
I[03-25|04:40:48.219] Starting localClient                         module=abci-client connection=mempool impl=localClient
I[03-25|04:40:48.219] Starting localClient                         module=abci-client connection=consensus impl=localClient

Next, Tendermint Core and the application complete a handshake.

I[03-25|04:40:48.219] ABCI Handshake                               module=consensus appHeight=0 appHash=
I[03-25|04:40:48.219] ABCI Replay Blocks                           module=consensus appHeight=0 storeHeight=0 stateHeight=0
I[03-25|04:40:48.219] Completed ABCI Handshake - Tendermint and App are synced module=consensus appHeight=0 appHash=

After that, a series of services are started in preparation for the node proper — for example the event switch and the reactors — and UPNP discovery is performed to detect the IP address.
The "Started node" message signals that the node is fully up and ready to create blocks.

I[03-25|04:40:51.248] Started node                                module=main nodeInfo="NodeInfo{pk: {PubKeyEd25519{54E665EFA8EF7AD5763C421129F66FBD367B253788543C0E1B9CF047C45771D3}}, moniker: bootnode-hongkong, network: test-chain-MErM3n [listen 172.31.22.212:46656], version: 0.16.0 ([wire_version=0.7.2 p2p_version=0.5.0 consensus_version=v1/0.2.2 rpc_version=0.7.0/3 tx_index=on rpc_addr=tcp://0.0.0.0:46657])}"

What follows is a standard block-creation sequence: enter a new round, propose a block, receive more than 2/3 prevotes, then precommit, and finally commit (finalize) the block.
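The happy path visible in the log lines below can be modeled as a tiny state machine (the step names here only loosely mirror tendermint's internal RoundStepType constants; a failed round would instead loop back to NewRound with round+1):

```go
package main

import "fmt"

// RoundStep enumerates the phases visible in the consensus log.
type RoundStep int

const (
	StepNewHeight RoundStep = iota
	StepNewRound
	StepPropose
	StepPrevote
	StepPrecommit
	StepCommit
)

func (s RoundStep) String() string {
	return [...]string{"NewHeight", "NewRound", "Propose",
		"Prevote", "Precommit", "Commit"}[s]
}

// next walks the happy path of one round: each phase leads to the
// following one, and a commit starts the next height.
func next(s RoundStep) RoundStep {
	if s == StepCommit {
		return StepNewHeight // block committed; move to the next height
	}
	return s + 1
}

func main() {
	for s := StepNewHeight; ; s = next(s) {
		fmt.Println(s)
		if s == StepCommit {
			break
		}
	}
}
```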

I[03-25|04:40:51.250] enterNewRound(1/0). Current: 1/0/RoundStepNewHeight module=consensus
I[03-25|04:40:51.250] enterPropose(1/0). Current: 1/0/RoundStepNewRound module=consensus
I[03-25|04:40:51.250] enterPropose: Our turn to propose            module=consensus proposer=BD8F0160796AEC1E6454C55CCA6AF15EACF94A2D privValidator="PrivValidator{BD8F0160796AEC1E6454C55CCA6AF15EACF94A2D LH:0, LR:0, LS:0}"
I[03-25|04:40:51.253] Signed proposal                              module=consensus height=1 round=0 proposal="Proposal{1/0 1:57D9BC19067F (-1,:0:000000000000) {/3FB2AD239E88.../} @ 2018-03-25T04:40:51.250Z}"
I[03-25|04:40:51.259] Received complete proposal block             module=consensus height=1 hash=2CFA5BC56A2E5D0BB94B84EB35AD05CC8F47CC49
I[03-25|04:40:51.259] enterPrevote(1/0). Current: 1/0/RoundStepPropose module=consensus
I[03-25|04:40:51.259] enterPrevote: ProposalBlock is valid         module=consensus height=1 round=0
I[03-25|04:40:51.261] Signed and pushed vote                       module=consensus height=1 round=0 vote="Vote{0:BD8F0160796A 1/00/1(Prevote) 2CFA5BC56A2E {/347B4DC16274.../} @ 2018-03-25T04:40:51.259Z}" err=null
I[03-25|04:40:51.265] Added to prevote                             module=consensus vote="Vote{0:BD8F0160796A 1/00/1(Prevote) 2CFA5BC56A2E {/347B4DC16274.../} @ 2018-03-25T04:40:51.259Z}" prevotes="VoteSet{H:1 R:0 T:1 +2/3:2CFA5BC56A2E5D0BB94B84EB35AD05CC8F47CC49:1:57D9BC19067F BA{1:X} map[]}"
I[03-25|04:40:51.265] enterPrecommit(1/0). Current: 1/0/RoundStepPrevote module=consensus
I[03-25|04:40:51.265] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=2CFA5BC56A2E5D0BB94B84EB35AD05CC8F47CC49
I[03-25|04:40:51.267] Signed and pushed vote                       module=consensus height=1 round=0 vote="Vote{0:BD8F0160796A 1/00/2(Precommit) 2CFA5BC56A2E {/1AB356C41018.../} @ 2018-03-25T04:40:51.265Z}" err=null
I[03-25|04:40:51.271] Added to precommit                           module=consensus vote="Vote{0:BD8F0160796A 1/00/2(Precommit) 2CFA5BC56A2E {/1AB356C41018.../} @ 2018-03-25T04:40:51.265Z}" precommits="VoteSet{H:1 R:0 T:2 +2/3:2CFA5BC56A2E5D0BB94B84EB35AD05CC8F47CC49:1:57D9BC19067F BA{1:X} map[]}"
I[03-25|04:40:51.271] enterCommit(1/0). Current: 1/0/RoundStepPrecommit module=consensus
I[03-25|04:40:51.274] Finalizing commit of block with 0 txs        module=consensus height=1 hash=2CFA5BC56A2E5D0BB94B84EB35AD05CC8F47CC49 root=
I[03-25|04:40:51.274] Block{
  Header{
    ChainID:        test-chain-MErM3n
    Height:         1
    Time:           2018-03-25 12:40:51.25 +0800 CST
    NumTxs:         0
    TotalTxs:       0
    LastBlockID:    :0:000000000000
    LastCommit:
    Data:
    Validators:     7DBC0AC152D72CA2DA37CAB8D6D5A73B8DA9DC43
    App:
    Conensus:       0B8CEF95EC57AC2D96038FD0AE3901C14FAE8E73
    Results:
    Evidence:
  }#2CFA5BC56A2E5D0BB94B84EB35AD05CC8F47CC49
  Data{
  }#
  Data{
  }#
  Commit{
    BlockID:    :0:000000000000
    Precommits:
  }#
}#2CFA5BC56A2E5D0BB94B84EB35AD05CC8F47CC49 module=consensus
I[03-25|04:40:51.279] Executed block                               module=state height=1 validTxs=0 invalidTxs=0
I[03-25|04:40:51.281] Committed state                              module=state height=1 txs=0 appHash=0000000000000000
I[03-25|04:40:51.281] Recheck txs                                  module=mempool numtxs=0 height=1
