Environment: three virtual machines, each running 64-bit CentOS 6.4, 64-bit JDK 1.7, and 64-bit Hadoop 2.5.1.
1. Set the hostname and edit /etc/hosts
1) Change the hostname (optional)
vi /etc/sysconfig/network
HOSTNAME=m1
The change takes effect after a reboot.
2) /etc/hosts maps IP addresses to hostnames, so every machine can resolve the other nodes by name.
Edit /etc/hosts and add the following address mappings:
192.168.0.108 m1
192.168.0.109 s1
192.168.0.110 s2
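As an optional sanity check, each hostname should now resolve from every node; a quick test (using the names defined above) is:
ping -c 1 s1
ping -c 1 s2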
2. Configure passwordless SSH
1) Generate a key pair:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Note: the value after -P is two single quotes, i.e. an empty passphrase.
2) Append id_dsa.pub (the public key) to the authorized keys:
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
3) Copy the authorized_keys file to the other nodes:
scp ~/.ssh/authorized_keys 192.168.0.109:~/.ssh/
scp ~/.ssh/authorized_keys 192.168.0.110:~/.ssh/
4) Test:
ssh s1
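If ssh still prompts for a password, the usual cause is overly permissive permissions on the target node's .ssh files (sshd refuses them); a typical fix, run on each node, is:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys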
3. Install the JDK
Upload jdk-7u71-linux-x64.tar.gz and grant it execute permission:
chmod u+x jdk-7u71-linux-x64.tar.gz
4) Extract the archive:
tar -zvxf jdk-7u71-linux-x64.tar.gz -C /usr/local/program/
Rename the extracted JDK directory to jdk1.7 (with the mv command).
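For JDK 7u71 the archive typically extracts to a directory named jdk1.7.0_71, so the rename would look roughly like:
mv /usr/local/program/jdk1.7.0_71 /usr/local/program/jdk1.7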
5) Configure environment variables: run vi /etc/profile and add the following three lines:
#JAVA_HOME
export JAVA_HOME=/usr/local/program/jdk1.7
export PATH=$JAVA_HOME/bin:$PATH
6) Run source /etc/profile so the new environment variables take effect.
7) Run java -version to check the JDK version and verify the installation.
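To confirm the shell is picking up this JDK rather than any system-installed one, you can also check which java binary is on the PATH:
which java
# should point at /usr/local/program/jdk1.7/bin/java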
4. Install and configure Hadoop
Hadoop must be installed on every node. Upload hadoop-2.5.1.tar.gz to the /usr/local/download directory.
1) Extract:
tar -zvxf hadoop-2.5.1.tar.gz -C /usr/local/program/
2) Add environment variables: run vi /etc/profile and append the following:
export JAVA_HOME=/usr/local/program/jdk1.7
export HADOOP_HOME=/usr/local/program/hadoop-2.5.1
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export CLASSPATH=.:$JAVA_HOME/lib:$HADOOP_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Make the settings take effect immediately:
source /etc/profile
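A quick check that the variables took effect (assuming Hadoop was extracted to the path above) is that the hadoop command now resolves from the PATH and reports its version:
hadoop version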
3) Edit the Hadoop configuration files (under $HADOOP_CONF_DIR):
(1) core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://m1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/program/hadoop_tmp</value>
  </property>
</configuration>
(2) hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>m1:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/program/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/program/dfs/data</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
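Optionally, the directories referenced in core-site.xml and hdfs-site.xml can be created in advance on the appropriate nodes (Hadoop also creates them itself during format/startup); the paths below simply mirror the configured values:
mkdir -p /usr/local/program/hadoop_tmp
mkdir -p /usr/local/program/dfs/name
mkdir -p /usr/local/program/dfs/data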
Note: HDFS on the NameNode is accessed over port 50070, while WebHDFS on each DataNode uses port 50075. To avoid dealing with separate ports and issue all WebHDFS operations against the NameNode's IP and port alone, dfs.webhdfs.enabled must be set to true in hdfs-site.xml on every DataNode.
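Once the cluster is running, a simple way to exercise WebHDFS through the NameNode address is its REST API; for example (the wordcount.txt path assumes the test file uploaded in the final section):
# directory listing, served directly by the NameNode
curl -i "http://m1:50070/webhdfs/v1/?op=LISTSTATUS"
# reading a file is redirected to a DataNode on port 50075, which is why dfs.webhdfs.enabled is needed there
curl -i -L "http://m1:50070/webhdfs/v1/data/wordcount/wordcount.txt?op=OPEN"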
(3) mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>m1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>m1:19888</value>
  </property>
</configuration>
JobHistory is a history server that ships with Hadoop and records completed MapReduce jobs. It is not started by default and can be started with:
sbin/mr-jobhistory-daemon.sh start historyserver
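To confirm the history server is up, jps should now list a JobHistoryServer process, and its web UI (as configured above) should respond:
jps | grep JobHistoryServer
# web UI: http://m1:19888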
(4) yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>m1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>m1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>m1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>m1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>m1:8088</value>
  </property>
</configuration>
(5) slaves
List the slave hostnames (matching /etc/hosts), one per line:
s1
s2
(6) Add JAVA_HOME to both hadoop-env.sh and yarn-env.sh:
export JAVA_HOME=/usr/local/program/jdk1.7
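Since every node needs the same Hadoop installation and configuration, one common approach (assuming identical paths on all nodes) is to copy the fully configured directory from m1 to the slaves, for example:
scp -r /usr/local/program/hadoop-2.5.1 s1:/usr/local/program/
scp -r /usr/local/program/hadoop-2.5.1 s2:/usr/local/program/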
5. Start Hadoop
1) Format the NameNode:
hdfs namenode -format
2) Start Hadoop:
start-dfs.sh
start-yarn.sh
Or with a single command (deprecated in Hadoop 2.x, but still works):
start-all.sh
3) Stop Hadoop:
stop-all.sh
4) Check the running processes with jps (on the master node m1):
7692 ResourceManager
8428 JobHistoryServer
7348 NameNode
14874 Jps
7539 SecondaryNameNode
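On the slave nodes s1 and s2, jps is expected to show the worker daemons instead, roughly:
DataNode
NodeManager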
5) Access the web interfaces:
(1) NameNode (HDFS): http://192.168.0.108:50070
(2) YARN ResourceManager: http://192.168.0.108:8088/
(3) JobHistory server: http://192.168.0.108:19888
6. Run the WordCount example
1) Create an input file:
vi wordcount.txt
with the following content:
hello you
hello me
hello everyone
2) Create the HDFS directories:
hadoop fs -mkdir -p /data/wordcount
hadoop fs -mkdir -p /output/
3) Upload the file:
hadoop fs -put wordcount.txt /data/wordcount/
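To verify the upload landed where the job expects it, list the input directory:
hadoop fs -ls /data/wordcount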
4) Run the wordcount program:
hadoop jar /usr/local/program/hadoop-2.5.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount /data/wordcount /output/wordcount/
5) View the results:
hadoop fs -text /output/wordcount/part-r-00000
[root@m1 mydata]# hadoop fs -text /output/wordcount/part-r-00000
everyone 1
hello 3
me 1
you 1