Check whether CentOS already has a bundled JDK installed:
yum list installed | grep java
Install or update Java:
yum -y install java-1.7.0-openjdk*
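After installation it helps to verify the JDK and locate its exact directory, since the version suffix in the path used below varies with the package release (a quick check, assuming the usual OpenJDK layout under /usr/lib/jvm):
java -version
ls /usr/lib/jvm/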
Set JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.99-2.6.5.0.el7_2.x86_64
vi ~/.bash_profile
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.99-2.6.5.0.el7_2.x86_64
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_PREFIX=/home/hadoop/hadoop-2.7.2
export HADOOP_HOME=/home/hadoop/hadoop-2.7.2
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
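Changes to ~/.bash_profile only apply to new login shells, so reload it in the current session and confirm Java is visible:
source ~/.bash_profile
java -version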
yum install ssh
yum install rsync
useradd hadoop
passwd hadoop
Change the hostname
Step 1:
Edit the HOSTNAME entry in /etc/sysconfig/network:
vi /etc/sysconfig/network
HOSTNAME=localhost.localdomain   # change localhost.localdomain to hadoop_master
In the HOSTNAME value, the text before the first dot is the host name and the text after it is the domain; if there is no dot, the whole value is the host name.
On CentOS 7, edit /etc/hostname instead: vi /etc/hostname
This change is permanent and takes effect after a reboot.
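On CentOS 7 the same result can also be achieved in one step with hostnamectl, which persists the name and applies it immediately, no reboot needed:
hostnamectl set-hostname hadoop_master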
Step 2:
Edit the /etc/hosts file:
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Do not map hadoop_master onto the loopback lines: if the cluster hostname resolves to 127.0.0.1, the NameNode binds to loopback and the slaves cannot reach it.
10.20.77.172 hadoop_master
10.20.77.173 hadoop_slave1
10.20.77.174 hadoop_slave2
10.20.77.175 hadoop_slave3
shutdown -r now   # finally, reboot the server
Configure passwordless SSH login. This can be done as the dedicated hadoop user created above; cd into its home directory and run the following commands (SSH must already be installed):
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
To explain: the first command generates an SSH key pair. The -t option selects the algorithm (rsa or dsa); -P is the passphrase, here the empty string '' for no passphrase.
The second command appends the generated public key to the authorized_keys file.
Now run ssh localhost; answer the host-key prompt and you can log in without a password. Likewise, copying authorized_keys via scp into the same location on the other hosts enables passwordless login to them.
scp /home/hadoop/.ssh/authorized_keys hadoop@hadoop_slave1:/home/hadoop/
Then on hadoop_slave1:
cd /home/hadoop/
cat authorized_keys >> .ssh/authorized_keys
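sshd is strict about permissions: if passwordless login still prompts for a password, check the modes on each slave (a common fix, assuming the hadoop user's home layout above):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys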
su - hadoop
cd /home/hadoop
mkdir tmp
mkdir hdfs
mkdir hdfs/data
mkdir hdfs/name
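The mkdir calls above can also be done in one line with bash brace expansion:
mkdir -p /home/hadoop/{tmp,hdfs/name,hdfs/data}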
wget http://apache.fayea.com/hadoop/common/stable/hadoop-2.7.2.tar.gz
Extract it to /home/hadoop/hadoop-2.7.2/
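For example, assuming the archive was downloaded into /home/hadoop (the tarball already contains the hadoop-2.7.2 top-level directory):
tar -xzf hadoop-2.7.2.tar.gz -C /home/hadoop/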
Edit /home/hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml.
The complete file is as follows:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop_master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
  </property>
</configuration>
This is Hadoop's core configuration file. fs.defaultFS names the HDFS filesystem, served from port 9000 on the master; hadoop.tmp.dir sets the root of Hadoop's tmp directory, a path that does not exist by default, which is why it was created with mkdir above; io.file.buffer.size sets the I/O buffer size in bytes.
Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>hadoop_master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop_master:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
The name and data paths are the hdfs/name and hdfs/data directories created earlier.
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop_master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop_master:19888</value>
  </property>
</configuration>
Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop_master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop_master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop_master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop_master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop_master:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>768</value>
  </property>
</configuration>
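One more file worth editing: daemons launched over ssh by the start scripts do not read ~/.bash_profile, so JAVA_HOME should also be set explicitly in etc/hadoop/hadoop-env.sh (the same path exported earlier):
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.99-2.6.5.0.el7_2.x86_64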
vi slaves
hadoop_slave1
hadoop_slave2
hadoop_slave3
Configuring the other machines
yum install ssh
yum install rsync
vi /etc/hosts
10.20.77.172 hadoop_master
10.20.77.173 hadoop_slave1
10.20.77.174 hadoop_slave2
10.20.77.175 hadoop_slave3
useradd hadoop
passwd hadoop
Then copy hadoop over from the master machine:
scp -r /home/hadoop/* hadoop@hadoop_slave1:/home/hadoop/
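Once the files are distributed, a typical first start runs on the master as the hadoop user, using the stock scripts shipped in hadoop-2.7.2 (already on PATH via the .bash_profile above); format the NameNode only once, since formatting wipes HDFS metadata:
hdfs namenode -format
start-dfs.sh
start-yarn.sh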