Pitfalls when installing Hadoop 3.0

1. The HDFS web UI now defaults to port 9870; the YARN web UI is still on 8088 (both are checked in the startup sketch at the end of this post).
2. The slaves file is gone from the config directory; it has been replaced by workers, which is where you list the DataNode hosts.
3. While formatting the NameNode you may see a few Fail messages; don't second-guess yourself over them. As long as the line common.Storage: Storage directory /usr/local/hadoop-3.0.2/hdfs/name has been successfully formatted. appears, all is well.
4. Starting the cluster with start-dfs.sh / start-yarn.sh fails with:

Starting namenodes on [namenode]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [datanode1]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.

Solution:

Note: add the lines below in the blank space at the top of each script, right after the header.

In start-dfs.sh:

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root 

In start-yarn.sh:

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

That clears all five errors.
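An alternative that leaves the stock scripts untouched is to export the same variables once in etc/hadoop/hadoop-env.sh, which all of the control scripts source; this also covers stop-dfs.sh and stop-yarn.sh, which perform the same user check. (The HADOOP_SECURE_DN_USER lines above are only relevant for secure DataNodes and are deprecated on 3.x; the five variables the error messages name are the ones that matter.)

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root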

A few basics appended below for reference.

The command for passwordless SSH login:

ssh-keygen -t rsa -P ""

This generates the key pair under ~/.ssh in your home directory.
Append the public key id_rsa.pub to authorized_keys (run: cat id_rsa.pub >> authorized_keys).
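On a real cluster the public key has to end up in authorized_keys on every node, including the local host (the start scripts ssh into it too). A minimal sketch for distributing it, assuming the root user and the three hosts from the workers file below:

for host in namenode datanode1 datanode2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done

# verify: should log in without prompting for a password
ssh datanode1 hostname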

core-site.xml

<configuration>

        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://namenode:9000</value>
        </property>

        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>

        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop-3.0.2/tmp</value>
        </property>

</configuration>
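A quick way to confirm Hadoop is actually reading this file (run once the Hadoop bin directory is on your PATH):

hdfs getconf -confKey fs.defaultFS    # should print hdfs://namenode:9000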

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>datanode1:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-3.0.2/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-3.0.2/hdfs/data</value>
    </property>
</configuration>
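Formatting and daemon startup will usually create these directories on their own, but creating them up front avoids ownership and permission surprises, especially when running as root. Paths as configured above:

mkdir -p /usr/local/hadoop-3.0.2/hdfs/name    # on the namenode host
mkdir -p /usr/local/hadoop-3.0.2/hdfs/data    # on every datanode host
mkdir -p /usr/local/hadoop-3.0.2/tmp          # hadoop.tmp.dir from core-site.xml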

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
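One more 3.x gotcha worth noting here: MapReduce jobs submitted to YARN can fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster" unless the MapReduce home is spelled out. A sketch of the commonly recommended fix in mapred-site.xml (the property names are standard; the path matches this install, adjust if yours differs):

<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.0.2</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.0.2</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.0.2</value>
</property>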

yarn-site.xml

<configuration>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode:8088</value>
    </property>
</configuration>
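For reference, 8032 (RM), 8030 (scheduler), 8031 (resource tracker), 8033 (admin), and 8088 (web UI) are already the YARN defaults, so these entries mainly serve to pin the ResourceManager to the namenode host. Once the cluster is up, a quick way to confirm the NodeManagers registered:

yarn node -list    # should list every host from the workers file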

workers

namenode
datanode1
datanode2
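Note that the namenode host appears here too, so it will also run a DataNode and NodeManager; on a three-machine cluster that's a deliberate choice. The *-site.xml files need to match on every node (workers itself is only read on the host where you run the start scripts, but it's simplest to keep everything identical). A minimal sync sketch from the namenode, assuming the same install path on all hosts:

for host in datanode1 datanode2; do
    scp /usr/local/hadoop-3.0.2/etc/hadoop/* $host:/usr/local/hadoop-3.0.2/etc/hadoop/
done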

hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_171
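JAVA_HOME has to be set here explicitly: the daemons are launched over non-interactive SSH, which doesn't reliably source your shell profile, so an export in ~/.bashrc alone is not enough. Sanity check for the path used above:

/usr/local/jdk1.8.0_171/bin/java -version    # should report a 1.8.0_171 build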

Format the NameNode

hdfs namenode -format
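Once the format succeeds, bring everything up and verify, which also covers the ports from pitfall 1. A minimal sketch, hostnames as configured above:

start-dfs.sh
start-yarn.sh

jps                     # each node should show its expected daemons
hdfs dfsadmin -report   # all three DataNodes should report in

# web UIs from pitfall 1; both should return 200
curl -s -o /dev/null -w "%{http_code}\n" http://namenode:9870
curl -s -o /dev/null -w "%{http_code}\n" http://namenode:8088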
