1. Download Hadoop and extract the tarball:
tar -zxvf hadoop-1.1.2.tar.gz
2. In the hadoop/conf directory:
(1) Edit hadoop-env.sh and point JAVA_HOME at your JDK install:
export JAVA_HOME=/usr/java/jdk1.7.0_25
(2) Edit core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
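One addition worth making here (not in the original steps): by default Hadoop keeps HDFS data under /tmp, which many systems clear on reboot, forcing a NameNode reformat. Setting hadoop.tmp.dir in core-site.xml avoids this; the path below is only an example, so point it at any persistent directory:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp</value>
</property>
```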
(3) Edit hdfs-site.xml (replication factor 1, since this is a single-node setup):
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
(4) Edit mapred-site.xml (mapred.job.tracker names the JobTracker host; for a single-node setup this is localhost):
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
(5) Edit the masters and slaves files (their contents are usually identical; the names are resolved through the addresses configured in the hosts file).
For a single-node setup, set the contents of both files to: localhost
(6) Edit /etc/hosts: vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.193 sb   # this machine's LAN address and hostname (example; use your own)
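One prerequisite the steps above do not mention: start-all.sh launches the daemons over SSH, so passwordless SSH to localhost is normally required before starting Hadoop. A minimal setup (not part of the original guide) looks like this:

```shell
# start-all.sh uses ssh to launch each daemon, so the current user
# must be able to ssh to localhost without a password prompt.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate an RSA key pair only if one does not already exist (empty passphrase)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -P '' -f ~/.ssh/id_rsa
# Authorize the key for logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify: this should log in without asking for a password
# ssh localhost
```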
3. Format the NameNode. Run this before the first start; note that formatting erases any existing HDFS data, so rerun it only when you intend to reinitialize HDFS (for example, after changing its storage settings).
In the hadoop/bin directory:
./hadoop namenode -format
4. Start Hadoop (./start-all.sh starts all daemons; ./start-dfs.sh and ./start-mapred.sh start HDFS and MapReduce separately).
In the hadoop/bin directory:
./start-all.sh
5. Run jps to verify that all the Hadoop daemons started:
[root@localhost bin]# jps
15219 JobTracker
15156 SecondaryNameNode
15495 Jps
15326 TaskTracker
15044 DataNode
14959 NameNode
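If any of the five daemons is absent from the jps listing, Hadoop did not start completely. A small helper (hypothetical, not part of Hadoop) that reads saved jps output and reports which expected daemons are missing:

```shell
# check_daemons: given a file containing `jps` output, print every
# expected Hadoop 1.x daemon that does not appear in it.
check_daemons() {
  missing=0
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    # -w avoids matching "NameNode" inside "SecondaryNameNode"
    grep -qw "$d" "$1" || { echo "$d is not running"; missing=1; }
  done
  return "$missing"
}

# Usage: jps > /tmp/jps.out; check_daemons /tmp/jps.out
```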
6. Check the cluster status:
./hadoop dfsadmin -report
You can also check the NameNode web UI at http://localhost:50070 and the JobTracker web UI at http://localhost:50030.