Hadoop Problems and Solutions

  • HBase fails at runtime with: ClusterId read in ZooKeeper is null

    This happens when HBase uses an external ZooKeeper rather than the one it ships with; you need to tell HBase the ZooKeeper address and the znode to use (typically in hbase-site.xml or the job configuration):

    1. hbase.zookeeper.quorum: used to connect to the ZooKeeper cluster
    2. zookeeper.znode.parent: tells which znode keeps the data (and the HMaster address) for the cluster

       <property>
         <name>hbase.zookeeper.quorum</name>
         <value>localhost</value>
       </property>
       <property>
         <name>zookeeper.znode.parent</name>
         <value>/hbase-unsecure</value>
       </property>
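
    To confirm that the external ZooKeeper really holds that znode, you can list it with the ZooKeeper CLI that ships with HBase. A minimal check, assuming the quorum and znode values above:

        # Open the ZooKeeper CLI using HBase's configured quorum
        hbase zkcli
        # Inside the CLI, the parent znode should exist and contain entries such as hbaseid
        # (which stores the cluster ID that the error says is null)
        ls /hbase-unsecure
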
   
  • When using HBase from a MapReduce job, e.g. with a command like hbase some-hbase-mapreduce.jar, a ClassPathNotFound exception is thrown

The HBase dependencies need to be added to the MapReduce classpath.

Add the following configuration to mapred-site.xml:



    <property>
        <name>mapreduce.application.classpath</name>
        <value>${path to the libraries used by your MapReduce framework}:/usr/hdp/2.6.4.0-91/hbase/conf:/usr/hdp/2.6.4.0-91/hbase:/usr/hdp/2.6.4.0-91/hbase/lib/*</value>
    </property>

Then restart the MapReduce framework.
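
Alternatively, the HBase jars can be supplied per job instead of editing mapred-site.xml, using HBase's mapredcp helper, which prints the jars HBase needs for MapReduce. A sketch of that approach; the config directory and the job's main class are placeholders, and it assumes the job uses TableMapReduceUtil, which ships its dependency jars with the job:

    # Put HBase's MapReduce dependencies and its config on the client classpath for this session only
    export HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf
    hadoop jar some-hbase-mapreduce.jar com.example.SomeHBaseJob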

  • Running jps shows "process information unavailable"

Just remove the hsperfdata_<username> directory from /tmp and run jps again.
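
A minimal sketch of that cleanup; the user name in the directory is an assumption, substitute the user whose process shows up as unavailable:

    # Remove the stale JVM performance-data directory that confuses jps, then re-check
    rm -rf /tmp/hsperfdata_hadoop
    jps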

  • -bash: cannot create temp file for here-document: Permission denied

/tmp can be considered a typical directory in most cases. You can recreate it, give it to root (chown root:root /tmp), and set 1777 permissions on it so that everyone can use it (chmod 1777 /tmp). This matters even more if /tmp is on a separate partition (which makes it a mount point).
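
A minimal sketch of that repair, run as root, using exactly the commands above:

    # Recreate /tmp, hand it back to root, and restore the sticky world-writable mode
    mkdir -p /tmp
    chown root:root /tmp
    chmod 1777 /tmp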

  • Unknown host exception

Check that the hosts configured for YARN are correct.
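
A quick way to check, assuming the ResourceManager host is set via yarn.resourcemanager.hostname and the config lives under /etc/hadoop/conf (both are assumptions about your layout); the host name itself is a placeholder:

    # Show the configured host, then verify it actually resolves
    grep -A1 yarn.resourcemanager.hostname /etc/hadoop/conf/yarn-site.xml
    getent hosts resourcemanager.example.com
    # If nothing resolves, fix DNS or add the host to /etc/hosts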

  • Permissions incorrectly set for dir **/tmp/nm-local-dir/filecache, should be rwxr-xr-x, actual value = rwxrwxrwx

The /tmp directory has the wrong permissions and would need to be rwxr-xr-x. On Windows or under WSL the permissions of /tmp cannot simply be changed, so change the Hadoop temporary directory path instead, so that it no longer points to /tmp.
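
A minimal sketch of moving the NodeManager's local dir off /tmp; the new path is an arbitrary example, and the property names are the standard ones that control this location:

    # Create a replacement directory with the expected rwxr-xr-x permissions
    mkdir -p /opt/hadoop/tmp
    chmod 755 /opt/hadoop/tmp
    # Then point hadoop.tmp.dir (core-site.xml) or yarn.nodemanager.local-dirs (yarn-site.xml)
    # at /opt/hadoop/tmp and restart the NodeManager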
