Related article:
http://blog.csdn.net/nisjlvhudy/article/details/49338851
1、Installation environment
Four 64-bit CentOS 7 servers (Hadoop 2.7 requires a 64-bit Linux build).
The corresponding /etc/hosts entries:
10.91.99.101 master
10.91.99.102 slave1
10.91.99.103 slave2
10.91.99.104 slave3
2、Change each hostname (with the hostnamectl command) and set up passwordless SSH
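The original does not spell out the exact commands for this step; a minimal sketch, assuming the four addresses listed above:
# Run the matching line on each node:
hostnamectl set-hostname master    # on 10.91.99.101
hostnamectl set-hostname slave1    # on 10.91.99.102
hostnamectl set-hostname slave2    # on 10.91.99.103
hostnamectl set-hostname slave3    # on 10.91.99.104
# Append the cluster addresses to /etc/hosts on every node:
cat >> /etc/hosts <<'EOF'
10.91.99.101 master
10.91.99.102 slave1
10.91.99.103 slave2
10.91.99.104 slave3
EOF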
Generate a key pair on every server, then merge all the public keys into authorized_keys.
(1) CentOS does not enable passwordless SSH login by default. On every server, uncomment the following two lines in /etc/ssh/sshd_config (then restart sshd with systemctl restart sshd for the change to take effect):
RSAAuthentication yes
PubkeyAuthentication yes
(2) Run ssh-keygen -t rsa, leave the passphrase empty, and press Enter through every prompt; a .ssh directory is created under /root. Do this on every server.
(3) Merge the public keys into the authorized_keys file. On the Master server, go into the .ssh directory (/root/.ssh when working as root, as in the script below) and first append the master's own key:
cat id_rsa.pub >> authorized_keys
(4) Copy the Master server's authorized_keys and known_hosts files to /root/.ssh on each Slave server.
(5) Done: ssh [email protected], ssh [email protected], and so on no longer prompt for a password.
The full command sequence:
scp id_rsa.pub [email protected]:~/.ssh/id_rsa.pub_sl1
scp id_rsa.pub [email protected]:~/.ssh/id_rsa.pub_sl2
scp id_rsa.pub [email protected]:~/.ssh/id_rsa.pub_sl3
cat id_rsa.pub_sl1>> authorized_keys
cat id_rsa.pub_sl2>> authorized_keys
cat id_rsa.pub_sl3>> authorized_keys
scp authorized_keys [email protected]:~/.ssh/
scp authorized_keys [email protected]:~/.ssh/
scp authorized_keys [email protected]:~/.ssh/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh 10.91.99.101
ssh 10.91.99.102
ssh 10.91.99.103
ssh 10.91.99.104
If the .ssh directory does not already exist on a node, create it manually.
For details on this step, see this other article:
http://blog.csdn.net/nisjlvhudy/article/details/49338817
3、Install the JDK (Hadoop 2.7 requires JDK 7)
yum -y install java-1.7.0-openjdk*
java -version
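The exact OpenJDK install directory is needed for JAVA_HOME in steps 6 and 7.5; one way to locate it (a convenience not in the original, assuming the yum package installed above):
ls -d /usr/lib/jvm/java-1.7.0-openjdk*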
4、Download and extract the Hadoop package
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
tar -xzvf hadoop-2.7.2.tar.gz
mv hadoop-2.7.2 ~/opt/
5、Create the working directories
cd ~/opt/hadoop-2.7.2
mkdir tmp
mkdir -p hdfs/data
mkdir -p hdfs/name
6、Set the JAVA_HOME environment variable
vi ~/.bash_profile
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95-2.6.4.0.el7_2.x86_64
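Optionally, HADOOP_HOME and PATH entries can be added in the same file (an addition beyond the article's own steps, assuming hadoop-2.7.2 lives under ~/opt), and the profile reloaded afterwards:
export HADOOP_HOME=$HOME/opt/hadoop-2.7.2
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
source ~/.bash_profile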
7、Go to the Hadoop configuration directory
cd /home/hs/opt/hadoop-2.7.2/etc/hadoop
Four or five main configuration files need to be edited here.
7.1、core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hs/opt/hadoop-2.7.2/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.spark.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.spark.groups</name>
        <value>*</value>
    </property>
</configuration>
7.2、hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hs/opt/hadoop-2.7.2/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hs/opt/hadoop-2.7.2/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
7.3、mapred-site.xml
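Note: the hadoop-2.7.2 tarball normally ships only mapred-site.xml.template; if mapred-site.xml is not present, create it from the template before editing:
cp mapred-site.xml.template mapred-site.xml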
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
7.4、yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8035</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
7.5、Add the JAVA_HOME path to hadoop-env.sh and yarn-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95-2.6.4.0.el7_2.x86_64
(This was not actually configured in this setup, and Hadoop still started normally.)
8、Configure the slaves file
cd /home/hs/opt/hadoop-2.7.2/etc/hadoop
cat slaves
slave1
slave2
slave3
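If the file needs to be created from scratch, a minimal sketch:
cat > slaves <<'EOF'
slave1
slave2
slave3
EOF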
9、Copy the Hadoop directory from the master to the three slaves
[hs@master opt]$ pwd
/home/hs/opt
scp -r hadoop-2.7.2/ hs@slave1:~/opt/
scp -r hadoop-2.7.2/ hs@slave2:~/opt/
scp -r hadoop-2.7.2/ hs@slave3:~/opt/
10、Start Hadoop on the Master server; the daemons on the slave nodes start automatically
Go to the /home/hs/opt/hadoop-2.7.2 directory.
(1) Initialize HDFS:
./bin/hdfs namenode -format
(2) Start everything with sbin/start-all.sh, or start the pieces separately with sbin/start-dfs.sh and sbin/start-yarn.sh.
(3) To stop, run sbin/stop-all.sh.
(4) Run jps to check the running daemons (see the sample output below).
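With the configuration above, jps output should look roughly like this (an illustrative sketch, not captured output; process IDs omitted):
# On master:
NameNode
SecondaryNameNode
ResourceManager
Jps
# On each slave:
DataNode
NodeManager
Jps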
11、Web access: open the required ports first, or simply turn the firewall off
(1) Run systemctl stop firewalld.service (or open only the needed ports; see the sketch after this list).
(2) Open http://master:8088/ in a browser.
(3) Open http://master:50070/ in a browser.
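Instead of disabling the firewall entirely, the ports used above could be opened with firewall-cmd (a sketch only; the ports the daemons use to talk to each other between nodes would also need to be opened):
firewall-cmd --permanent --add-port=9000/tcp     # fs.defaultFS (NameNode RPC)
firewall-cmd --permanent --add-port=50070/tcp    # NameNode web UI
firewall-cmd --permanent --add-port=8088/tcp     # ResourceManager web UI
firewall-cmd --reload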
12、With this, the Hadoop installation and configuration are complete.