Hadoop Configuration Files (hadoop-0.20.2)

(1) Configure $HADOOP_HOME/conf/hadoop-env.sh
In the Hadoop installation directory, open conf/hadoop-env.sh under hadoop-0.20.2.
Change: # export JAVA_HOME=/usr/lib/j2sdk1.5-sun
To:     export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
(2) Configure $HADOOP_HOME/conf/core-site.xml
In the Hadoop installation directory, open conf/core-site.xml under hadoop-0.20.2 and set its contents as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.0.118:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
  <!-- Checkpoint directory for the SecondaryNameNode -->
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/hadoop/secondname</value>
  </property>
  <!-- Trash retention time (10080 minutes = 7 days) -->
  <property>
    <name>fs.trash.interval</name>
    <value>10080</value>
    <description>
      Number of minutes between trash checkpoints. If zero, the trash feature is disabled.
    </description>
  </property>
</configuration>
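A Hadoop `*-site.xml` file is just a flat list of name/value properties, so it is easy to sanity-check before starting the cluster. The sketch below (the `parse_site_conf` helper is illustrative, not part of Hadoop; the property values are the ones configured above) reads such content with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Sample content matching the core-site.xml above.
CORE_SITE = """\
<configuration>
  <property><name>fs.default.name</name><value>hdfs://192.168.0.118:9000</value></property>
  <property><name>hadoop.tmp.dir</name><value>/home/hadoop/tmp</value></property>
  <property><name>fs.checkpoint.dir</name><value>/home/hadoop/secondname</value></property>
  <property><name>fs.trash.interval</name><value>10080</value></property>
</configuration>"""

def parse_site_conf(xml_text):
    """Parse Hadoop *-site.xml content into a {name: value} dict."""
    props = {}
    for prop in ET.fromstring(xml_text).iter("property"):
        name = prop.findtext("name")
        if name is not None:
            props[name.strip()] = (prop.findtext("value") or "").strip()
    return props

conf = parse_site_conf(CORE_SITE)
assert conf["fs.default.name"] == "hdfs://192.168.0.118:9000"
assert int(conf["fs.trash.interval"]) == 10080  # 10080 min = 7 days
```

A check like this catches malformed XML (for example, a bare `--` comment, which is not legal XML) before the daemons fail to start.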
(3) Configure $HADOOP_HOME/conf/hdfs-site.xml
In the Hadoop installation directory, open conf/hdfs-site.xml under hadoop-0.20.2 and set its contents as follows:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>192.168.0.118:50070</value>
  </property>
  <!-- Must not clash with dfs.http.address when both daemons run
       on the same host; 50090 is the default SecondaryNameNode port. -->
  <property>
    <name>dfs.secondary.http.address</name>
    <value>192.168.0.118:50090</value>
  </property>
</configuration>
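These files can also be generated programmatically, which helps keep the configuration consistent across nodes. A minimal sketch (the `to_site_xml` helper is a hypothetical utility, not a Hadoop API):

```python
import xml.etree.ElementTree as ET

def to_site_xml(props):
    """Render a {name: value} dict as Hadoop *-site.xml text."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

hdfs_site = to_site_xml({
    "dfs.name.dir": "/home/hadoop/name",
    "dfs.data.dir": "/home/hadoop/data",
    "dfs.replication": 2,
})
assert "<name>dfs.replication</name>" in hdfs_site
```

Note that with dfs.replication set to 2, every block is stored on two DataNodes, so usable HDFS capacity is roughly half of the raw disk capacity.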
(4) Configure $HADOOP_HOME/conf/mapred-site.xml
In the Hadoop installation directory, open conf/mapred-site.xml under hadoop-0.20.2 and set its contents as follows:
<configuration>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/temp</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.0.118:9001</value>
  </property>
  <property>
    <name>mapred.map.tasks</name>
    <value>7</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>3</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
</configuration>
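The last three properties together bound a TaskTracker's worst-case child-JVM memory: with at most 4 map slots and 3 reduce slots, each task JVM launched with -Xmx512m, a fully loaded node can need up to (4 + 3) × 512 MB = 3584 MB of heap, on top of the DataNode and TaskTracker daemons themselves. A quick check of the arithmetic:

```python
# Worst-case child-JVM heap on one TaskTracker, from the values above.
map_slots = 4      # mapred.tasktracker.map.tasks.maximum
reduce_slots = 3   # mapred.tasktracker.reduce.tasks.maximum
heap_mb = 512      # -Xmx512m in mapred.child.java.opts

peak_heap_mb = (map_slots + reduce_slots) * heap_mb
print(peak_heap_mb)  # 3584
```

When sizing these slot counts, make sure this total fits in the node's physical RAM, or tasks will swap and the cluster will slow down dramatically.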
