Set a Static IP Address and Gateway
-
Set the IP address
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=08:00:27:BD:9D:B5 #leave unchanged
TYPE=Ethernet
UUID=53e4e4b6-9724-43ab-9da7-68792e611031 #leave unchanged
ONBOOT=yes #start on boot
NM_CONTROLLED=yes
BOOTPROTO=static #static IP
IPADDR=192.168.30.50 #IP address
NETMASK=255.255.255.0 #subnet mask
-
Set the gateway
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=Hadoop.Master
GATEWAY=192.168.30.1 #gateway
-
Set DNS
vi /etc/resolv.conf
nameserver xxx.xxx.xxx.xxx #set according to your environment
nameserver 114.114.114.114 #multiple nameservers may be listed
-
Restart the network service
service network restart
-
Test the network
ping www.baidu.com #if ping fails, it is usually a DNS problem
-
Map the hostname to the IP address
vi /etc/hosts
#add the following line
192.168.30.50 Hadoop.Master
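The HOSTNAME set in /etc/sysconfig/network only takes effect after a reboot, and the original steps never verify the mapping; a quick hand-check (the hostname commands below are my addition):
hostname Hadoop.Master #apply the new hostname immediately, without rebooting
hostname #should now print Hadoop.Master
ping -c 3 Hadoop.Master #should resolve to 192.168.30.50 via /etc/hosts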
Add a Hadoop User
-
Add a user group
groupadd hadoop
-
Add a user and assign it to the group
useradd -g hadoop hadoop
-
Set the user's password
passwd hadoop
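A later step runs sudo su - as the hadoop user, which only works if hadoop has sudo rights; granting them is not covered in the original steps, so here is a minimal sketch (run as root):
visudo #edits /etc/sudoers with syntax checking
#add this line below "root ALL=(ALL) ALL":
hadoop ALL=(ALL) ALL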
Disable Services
-
Disable the firewall
service iptables stop #stop the firewall service
chkconfig iptables off #keep the firewall from starting on boot
service ip6tables stop
chkconfig ip6tables off
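To double-check that the firewall is both stopped and disabled (an extra verification, not in the original steps):
service iptables status #should report the firewall is not running
chkconfig --list iptables #all runlevels should show off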
-
Disable SELinux
vi /etc/sysconfig/selinux
#change the following line
SELINUX=enforcing -> SELINUX=disabled
#then run the following commands
setenforce 0
getenforce
Required Software
-
Check whether ssh and rsync are installed
rpm -qa|grep openssh
rpm -qa|grep rsync
-
Install ssh and rsync (on CentOS the ssh packages are named openssh-*)
yum -y install openssh-server openssh-clients
yum -y install rsync
-
Switch to the hadoop user
su - hadoop
-
Setup passphraseless ssh
#use dsa
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
#use rsa
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
#append the public key to authorized_keys (done above)
#choose either dsa or rsa, not both
$ chmod 0600 ~/.ssh/authorized_keys
#permissions matter here: if authorized_keys is too open, sshd refuses key-based login and you fall back to passwords
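If ssh still asks for a password afterwards, directory permissions are the usual culprit: sshd (with its default StrictModes) ignores keys under group- or world-writable paths. Tightening them is safe:
chmod 700 ~/.ssh #only the owner may enter the key directory
chmod go-w ~ #the home directory must not be group/world-writable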
-
Test the ssh connection
ssh localhost
#success means logging in without being asked for a password
Install and Configure Java
-
Download
http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html
Note: I am using jdk-8u91-linux-x64.tar.gz
-
Switch to the root user
sudo su -
-
Extract the archive to /usr/local (the tarball already contains a jdk1.8.0_91 top-level directory, so it ends up at /usr/local/jdk1.8.0_91)
tar zxvf /home/hadoop/jdk-8u91-linux-x64.tar.gz -C /usr/local/
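A quick sanity check that the JDK landed where JAVA_HOME will point (my addition, matching the path used below):
ls /usr/local/jdk1.8.0_91/bin/java #the binary should exist
/usr/local/jdk1.8.0_91/bin/java -version #should report version 1.8.0_91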
-
Set environment variables
vi /etc/profile
#append the following
export JAVA_HOME=/usr/local/jdk1.8.0_91
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
-
Apply the environment variables
source /etc/profile
java -version
Install and Configure Hadoop
-
Download
http://hadoop.apache.org/releases.html
Note: I downloaded hadoop-2.7.2.tar.gz
-
Extract the archive to /usr/local
tar zxvf /home/hadoop/hadoop-2.7.2.tar.gz -C /usr/local/
-
Create the Hadoop data directory
mkdir /usr/local/hadoop-2.7.2/tmp
-
Give the hadoop user ownership of the Hadoop directory
chown -R hadoop:hadoop /usr/local/hadoop-2.7.2/
-
Set environment variables
vim /etc/profile
#append the following
export HADOOP_HOME=/usr/local/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
-
Apply the environment variables
source /etc/profile
#verify the environment variables
hadoop version
Configure HDFS
-
Switch to the hadoop user
su - hadoop
-
Edit hadoop-env.sh
cd /usr/local/hadoop-2.7.2/etc/hadoop/
vi hadoop-env.sh
#append the following
export JAVA_HOME=/usr/local/jdk1.8.0_91
-
Edit core-site.xml
vi core-site.xml
#add the following between <configuration> and </configuration>; the host must match the Hadoop.Master entry in /etc/hosts
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://Hadoop.Master:10080</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop-2.7.2/tmp/</value>
  <description>A base for other temporary directories.</description>
</property>
-
Edit hdfs-site.xml
vi hdfs-site.xml
#add the following between <configuration> and </configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:10675</value>
</property>
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:10670</value>
</property>
-
Format HDFS
hdfs namenode -format
#note: "Exiting with status 0" in the output means success
-
Start HDFS
start-dfs.sh
#to stop, use:
stop-dfs.sh
#note: the output looks like the following
15/09/21 18:09:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Hadoop.Master]
Hadoop.Master: starting namenode, logging to /usr/hadoop/logs/hadoop-hadoop-namenode-Hadoop.Master.out
Hadoop.Master: starting datanode, logging to /usr/hadoop/logs/hadoop-hadoop-datanode-Hadoop.Master.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is b5:96:b2:68:e6:63:1a:3c:7d:08:67:4b:ae:80:e2:e3.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/hadoop/logs/hadoop-hadoop-secondarynamenode-Hadoop.Master.out
15/09/21 18:09:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-
Check the running processes
jps
#note: output similar to the following
[hadoop@CDVM-213018144 hadoop]$ jps
1155 NameNode
1492 SecondaryNameNode
1307 DataNode
6142 Jps
-
Check Hadoop's status via the web UI
http://10.213.18.144:10670/
#the default NameNode web port is 50070; we changed it to 10670 in hdfs-site.xml
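If no browser can reach the server, the same health information is available from the shell (an extra check, not in the original steps):
hdfs dfsadmin -report #shows capacity and the live DataNode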
Run WordCount on HDFS
-
Create the HDFS user directory
hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/hadoop #adjust /user/<name> to your own user
-
Copy the input files (the files to be processed) to HDFS
hdfs dfs -put /usr/local/hadoop-2.7.2/etc/hadoop input
-
List the files we copied to HDFS
hdfs dfs -ls input
-
Run the word-search (grep) example
hadoop jar /usr/local/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep input output 'dfs[a-z.]+'
#WordCount:
#hadoop jar /usr/local/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount input output
#note: if the output directory already exists, delete it or point the job at a different directory
-
View the results
hdfs dfs -cat output/*
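Alternatively, pull the output down from HDFS and read it locally, as the Apache single-cluster guide does:
hdfs dfs -get output output #copy the output directory to the local filesystem
cat output/*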
Configure YARN
-
Edit mapred-site.xml
cd /usr/local/hadoop-2.7.2/etc/hadoop/
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
#add the following between <configuration> and </configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
-
Edit yarn-site.xml
vi yarn-site.xml
#add the following between <configuration> and </configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
-
Start YARN
start-yarn.sh
#to stop, use: stop-yarn.sh
-
Check the current Java processes
[hadoop@CDVM-213018144 hadoop]$ jps
#output:
1155 NameNode
1492 SecondaryNameNode
1307 DataNode
21979 Jps
21551 ResourceManager
21679 NodeManager
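Optionally, start the MapReduce JobHistory server as well so finished jobs remain inspectable; this step is not in the original notes, but the script ships with Hadoop 2.7.2:
mr-jobhistory-daemon.sh start historyserver #web UI listens on port 19888 by default
#stop it with: mr-jobhistory-daemon.sh stop historyserver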
Run your MapReduce job
With the configuration above in place, MapReduce jobs now run on YARN.
-
Check YARN's status via the web UI
http://<your server IP>:8088/
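To confirm that jobs now go through YARN, rerun the earlier grep example and watch it appear in the web UI above (the rm is needed because output still exists from the HDFS run):
hdfs dfs -rm -r output #clear the previous output directory
hadoop jar /usr/local/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep input output 'dfs[a-z.]+'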
Common HDFS Commands
-
Create HDFS directories
Create an input directory under the root
hdfs dfs -mkdir -p /input
Create an input directory under the user directory
Note: without a leading "/", the directory is created under the user directory by default
hdfs dfs -mkdir -p input
#equivalent to: hdfs dfs -mkdir -p /user/hadoop/input
-
List HDFS directories
List the HDFS root directory
hdfs dfs -ls /
List the HDFS user directory
hdfs dfs -ls
List the input directory under the user directory
hdfs dfs -ls input
#equivalent to: hdfs dfs -ls /user/hadoop/input
Copy files to HDFS
hdfs dfs -put /usr/local/hadoop-2.7.2/etc/hadoop input
Delete a directory
hdfs dfs -rm -r input
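Two more commands I reach for constantly; the paths assume the input directory created above:
hdfs dfs -cat input/core-site.xml #print a file stored in HDFS
hdfs dfs -get input ./input-copy #download a directory to the local filesystem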