Hadoop did not start the DataNodes

When writing files with Hadoop's built-in benchmark tool, the following error occurred:

There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
    .......
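The post does not say which benchmark was used; the usual built-in write benchmark is TestDFSIO, invoked roughly as below (the file count and size are illustrative, not from the original run):

    # TestDFSIO write benchmark shipped with Hadoop 3.1.3
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar \
        TestDFSIO -write -nrFiles 10 -fileSize 128MB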

Checking with the jps command showed that on the datanode hosts only the NodeManager process had started; there was no DataNode process.
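To check every worker node at once, one option is a small loop over the hosts (worker1..worker3 are placeholder hostnames, not from the original post):

    # List the running Java daemons on each worker node
    for host in worker1 worker2 worker3; do
        echo "== $host =="
        ssh "$host" jps
    done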
After searching, most blog posts said the cause is "formatting the NameNode multiple times, leaving the NameNode and DataNodes inconsistent" (concretely, a clusterID mismatch between their VERSION files).
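That inconsistency can be verified directly by comparing the clusterID recorded in the NameNode's VERSION file with the one in each DataNode's VERSION file; if they differ, the DataNode refuses to register and shuts down. Using the directories configured below as an example:

    # clusterID produced by the last namenode -format (on the namenode host)
    grep clusterID /home/user/bigData/hdfs/name/current/VERSION

    # clusterID the datanode is still carrying (on each datanode host)
    grep clusterID /home/user/bigData/hdfs/data/current/VERSION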

So I deleted the old dfs.datanode.data.dir directories (I had no data in them, so nothing was lost) and updated the relevant data paths in /hadoop-3.1.3/etc/hadoop/hdfs-site.xml on every node:


        
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>your-node-host:9001</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/user/bigData/hdfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/user/bigData/hdfs/data</value>
        </property>
        ......
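Removing the stale storage and recreating the new directories amounts to something like this on each node (the paths match the configuration above; deleting is only safe here because there was no data worth keeping):

    # Clear old storage, then recreate the directories referenced in hdfs-site.xml
    rm -rf /home/user/bigData/hdfs/name /home/user/bigData/hdfs/data
    mkdir -p /home/user/bigData/hdfs/name /home/user/bigData/hdfs/data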

Then I re-formatted with hadoop namenode -format,
started the cluster again, and that really did fix it.
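For completeness, the full sequence is sketched below; this assumes the standard start-dfs.sh / start-yarn.sh scripts are on the PATH (in Hadoop 3.x, hdfs namenode -format is the preferred spelling of the format command):

    # Re-initialize the NameNode metadata (run once, on the namenode host)
    hdfs namenode -format

    # Start HDFS and YARN
    start-dfs.sh
    start-yarn.sh

    # Confirm the DataNodes have registered with the NameNode
    hdfs dfsadmin -report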
