Simple Configuration for a Three-Machine Hadoop Test Cluster

Simple configuration (n = set on the namenode only; n,d = set on both the namenode and the datanodes; the configuration files live under hadoop/conf):

/etc/profile:(n,d)
export HADOOP_HOME=/app/hadoop-1.1.2
export PATH=$HADOOP_HOME/bin:$PATH
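
To pick up the new environment variables without logging in again, a quick check (assuming the paths above) might look like this:

$ source /etc/profile
$ hadoop version    # should report Hadoop 1.1.2 if PATH is set correctly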

masters:(n)
192.168.110.128

slaves:(n)
192.168.110.129
192.168.110.130
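
Note that in Hadoop 1.x the masters file actually lists the host(s) where the SecondaryNameNode runs; the start/stop scripts ssh into every host in both files, so each address must be reachable from the namenode. A quick connectivity check (assuming the same user account on all three machines):

$ for h in 192.168.110.129 192.168.110.130; do ssh $h hostname; done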

core-site.xml:(n,d)


<configuration>
  <property>
    <!-- set namenode address -->
    <name>fs.default.name</name>
    <value>hdfs://192.168.110.128:9000</value>
  </property>
  <property>
    <!-- set temporary dir -->
    <name>hadoop.tmp.dir</name>
    <value>/hadoop-tmp</value>
  </property>
</configuration>
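
With fs.default.name and hadoop.tmp.dir in place, HDFS must be formatted once on the namenode before the first start (this wipes any existing HDFS data under hadoop.tmp.dir):

$ hadoop namenode -format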

hdfs-site.xml:(n,d)


<configuration>
  <property>
    <!-- set replication factor (number of copies of each block) -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
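
dfs.replication = 2 means every HDFS block is stored on two datanodes, which matches the two slaves here. Once the cluster is up, the actual replication can be inspected with fsck, or changed per path (the path below is just an example):

$ hadoop fsck / -files -blocks        # shows the replication of each block
$ hadoop fs -setrep -w 2 /some/path   # force replication 2 on a given path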

mapred-site.xml:(n,d)


<configuration>
  <property>
    <!-- set job tracker address -->
    <name>mapred.job.tracker</name>
    <value>192.168.110.128:9001</value>
  </property>
</configuration>
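
A simple way to verify that the JobTracker at 192.168.110.128:9001 is reachable is to submit one of the bundled example jobs after the cluster starts (the jar name below assumes Hadoop 1.1.2):

$ hadoop jar $HADOOP_HOME/hadoop-examples-1.1.2.jar pi 2 10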

hadoop-env.sh:(n,d)
export JAVA_HOME=/app/jdk1.6.0_43
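
It is worth confirming that this JDK path actually exists on every node before starting the daemons:

$ /app/jdk1.6.0_43/bin/java -version   # should print Java 1.6.0_43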

Watch out: iptables may block the ports Hadoop needs!
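
For a throwaway test cluster, the simplest fix (on RHEL/CentOS-style systems of this era) is to stop the firewall on all three machines; alternatively, open ports 9000/9001 plus Hadoop's 500xx web/data ports individually:

$ service iptables stop      # stop the firewall immediately (test environments only)
$ chkconfig iptables off     # keep it disabled across reboots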

Passwordless login when starting/stopping Hadoop:
On the master:
$ ssh-keygen -t dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Also copy id_dsa.pub to the current user's home directory on each slave machine and append it to ~/.ssh/authorized_keys there.
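
For example (assuming the same user account on each slave; the key must end up in ~/.ssh/authorized_keys with the right permissions):

$ scp ~/.ssh/id_dsa.pub 192.168.110.129:~/
$ ssh 192.168.110.129 'mkdir -p ~/.ssh && cat ~/id_dsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'

Repeat for 192.168.110.130.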

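Once passwordless SSH works, the whole cluster can be started from the master and checked with jps on each node:

$ start-all.sh   # starts HDFS and MapReduce daemons on all hosts via ssh
$ jps            # master: NameNode, SecondaryNameNode, JobTracker; slaves: DataNode, TaskTracker
$ stop-all.sh    # shuts everything down again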