Download the Hadoop tarball from:
http://www.apache.org/dyn/closer.cgi/hadoop/core
This article uses version 1.1.1.
Log in as the hadoop user:
su - hadoop
Copy the file hadoop-1.1.1.tar.gz into the /home/hadoop directory:
sudo cp <source path> /home/hadoop/
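If, for example, the tarball had been downloaded to ~/Downloads (a hypothetical location; substitute wherever you actually saved it), the copy command would look like:
sudo cp ~/Downloads/hadoop-1.1.1.tar.gz /home/hadoop/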
cd ~
tar -zxf hadoop-1.1.1.tar.gz
sudo chown -R hadoop:hadoop hadoop-1.1.1
Now start configuring Hadoop:
cd hadoop-1.1.1/
Edit the conf/hadoop-env.sh file:
sudo gedit conf/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_38
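If you are not sure of your JDK path, listing the JVM directory is one way to find it (the jdk1.6.0_38 path above is just the one used in this article; yours may differ):
ls /usr/lib/jvm/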
Edit the conf/core-site.xml file:
sudo gedit conf/core-site.xml
After adding the properties, the file looks like this (note: be sure to replace localhost with your machine's IP):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
Edit the conf/hdfs-site.xml file. dfs.replication defines how many copies of each file are kept; in a single-machine environment, one copy is enough.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Edit the conf/mapred-site.xml file (note: be sure to replace localhost with your machine's IP):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
Format the HDFS namenode (do this only once; re-formatting wipes existing HDFS data):
bin/hadoop namenode -format
Modify the hosts file:
sudo vi /etc/hosts
Add a line with your machine's IP followed by its hostname:
<host-ip> <hostname>
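For example, if the machine's IP were 192.168.1.100 and its hostname were hadoopnode (both values are hypothetical; use your own), the line would read:
192.168.1.100 hadoopnode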
Start Hadoop:
bin/start-all.sh
Check Hadoop's status in a browser:
HDFS web page:
http://localhost:50070/dfshealth.jsp
MapReduce (JobTracker) status:
http://localhost:50030/jobtracker.jsp
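To double-check that everything came up, one option is to list the running Java daemons with the JDK's jps tool and try a simple HDFS command; on a healthy single-node setup you should see NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker among the processes:
jps
bin/hadoop fs -ls /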