For CentOS template installation, see this blog post: https://blog.csdn.net/gobitan/article/details/80993354
192.168.159.194 hadoop01
192.168.159.195 hadoop02
192.168.159.196 hadoop03
hadoop01 serves as the master node, so most of the services are deployed on it.
A fully distributed deployment spans multiple nodes, which can be grouped by function:
Deployment architecture
Configuration files
The configuration files fall into three categories. The default files, which are read-only:
hadoop-2.7.3/share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml
hadoop-2.7.3/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
hadoop-2.7.3/share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
hadoop-2.7.3/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
The site-specific files, which override the defaults:
hadoop-2.7.3/etc/hadoop/core-site.xml
hadoop-2.7.3/etc/hadoop/hdfs-site.xml
hadoop-2.7.3/etc/hadoop/yarn-site.xml
hadoop-2.7.3/etc/hadoop/mapred-site.xml
The control script files, located at hadoop-2.7.3/etc/hadoop/*-env.sh
Note: the following operations are performed on hadoop01.
Note: the goal of this configuration is to let each machine be found by hostname as well as by IP address. Edit /etc/hosts as follows:
127.0.0.1 localhost
192.168.159.194 hadoop01
192.168.159.195 hadoop02
192.168.159.196 hadoop03
The package can be downloaded from https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz
Upload hadoop-2.7.3.tar.gz to the /root directory.
[root@hadoop01 ~]# cd /opt/
[root@hadoop01 opt]# tar zxf ~/hadoop-2.7.3.tar.gz
[root@hadoop01 opt]# cd hadoop-2.7.3/
Create the directories Hadoop needs
[root@hadoop01 ~]# mkdir -p /opt/hadoop-2.7.3/data/namenode
[root@hadoop01 ~]# mkdir -p /opt/hadoop-2.7.3/data/datanode
Configure hadoop-env.sh
Edit etc/hadoop/hadoop-env.sh and change the value of JAVA_HOME as follows:
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64/jre
Note: setting JAVA_HOME explicitly here avoids errors when Hadoop cannot read $JAVA_HOME from the environment.
Configure core-site.xml
Edit etc/hadoop/core-site.xml as follows:
Note: hadoop.tmp.dir defaults to "/tmp/hadoop-${user.name}". That directory is cleared when the Linux operating system reboots, which can lead to data loss, so it needs to be changed.
fs.defaultFS: the URI and port of the master (NameNode)
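A minimal core-site.xml consistent with the notes above might look like the following sketch. The hdfs://hadoop01:9000 URI is an assumed host:port; the hadoop-tmp path is chosen to match the storage directory that appears in the NameNode format log later in this guide.

```xml
<configuration>
  <!-- URI of the NameNode (master); hadoop01:9000 is an assumed host:port -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <!-- Move hadoop.tmp.dir off /tmp so an OS reboot does not wipe it -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.7.3/hadoop-tmp</value>
  </property>
</configuration>
```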
Configure hdfs-site.xml
Edit etc/hadoop/hdfs-site.xml as follows:
dfs.replication: the number of block replicas
dfs.namenode.name.dir: the directory where the NameNode stores its metadata
dfs.datanode.data.dir: the path where each DataNode stores block data
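A hdfs-site.xml sketch covering the three properties above. The directory paths reuse the data/namenode and data/datanode directories created earlier; the replication factor of 3 is an assumption that matches the three-node cluster.

```xml
<configuration>
  <!-- Number of block replicas; 3 is assumed to match the three-node cluster -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- NameNode metadata directory (created earlier with mkdir) -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop-2.7.3/data/namenode</value>
  </property>
  <!-- DataNode block storage directory (created earlier with mkdir) -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop-2.7.3/data/datanode</value>
  </property>
</configuration>
```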
Configure mapred-site.xml
[root@hadoop01 hadoop-2.7.3]# cd etc/hadoop/
[root@hadoop01 hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@hadoop01 hadoop]# cd ../../
Edit etc/hadoop/mapred-site.xml as follows:
mapreduce.framework.name: set to yarn to use YARN as the resource-management framework
mapreduce.jobhistory.address: the MapReduce JobHistory server IPC host:port
mapreduce.jobhistory.webapp.address: the JobHistory web UI address
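A mapred-site.xml sketch for the three properties above. Port 19888 matches the JobHistory web URL used later in this guide; the IPC port 10020 is an assumed conventional default.

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- JobHistory server IPC address; port 10020 is an assumed default -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop01:10020</value>
  </property>
  <!-- JobHistory web UI; port 19888 matches the URL used later in this guide -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop01:19888</value>
  </property>
</configuration>
```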
Configure yarn-site.xml
Edit etc/hadoop/yarn-site.xml as follows:
yarn.resourcemanager.hostname: the hostname of the ResourceManager
yarn.nodemanager.aux-services: the auxiliary services run on each NodeManager. This must be set to mapreduce_shuffle for MapReduce programs to run.
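A yarn-site.xml sketch for the two properties above, with the ResourceManager on the master node hadoop01 as described in this guide:

```xml
<configuration>
  <!-- The ResourceManager runs on the master node -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
  </property>
  <!-- Required auxiliary service so MapReduce shuffle works under YARN -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```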
Configure etc/hadoop/slaves
This file specifies which servers are DataNode nodes. Remove localhost and add the hostnames of all DataNode nodes.
Edit etc/hadoop/slaves as follows:
hadoop01
hadoop02
hadoop03
Configure the Hadoop environment variables
Edit /etc/profile and append the following:
export HADOOP_HOME=/opt/hadoop-2.7.3
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Note: run ". /etc/profile" to make the environment variables take effect.
Note: the step-five operations below must be performed on hadoop01, hadoop02, and hadoop03 in turn.
Operations on hadoop01
Generate the private/public key pair
[root@hadoop01 ~]# ssh-keygen -t rsa
Passwordless login to the local machine
[root@hadoop01 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Verify the local passwordless SSH configuration:
[root@hadoop01 ~]# ssh localhost
[root@hadoop01 ~]# ssh hadoop01
Passwordless login from hadoop01 to hadoop02
[root@hadoop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop02
Verify passwordless SSH to hadoop02:
[root@hadoop01 ~]# ssh hadoop02
Passwordless login from hadoop01 to hadoop03
[root@hadoop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop03
Verify passwordless SSH to hadoop03:
[root@hadoop01 ~]# ssh hadoop03
Sync /etc/hosts to hadoop02 and hadoop03
[root@hadoop01 ~]# scp /etc/hosts hadoop02:/etc/hosts
[root@hadoop01 ~]# scp /etc/hosts hadoop03:/etc/hosts
Sync /etc/profile to hadoop02 and hadoop03
[root@hadoop01 ~]# scp /etc/profile hadoop02:/etc/profile
[root@hadoop01 ~]# scp /etc/profile hadoop03:/etc/profile
Sync the Hadoop package and configuration to hadoop02 and hadoop03
[root@hadoop01 ~]# scp -r /opt/hadoop-2.7.3/ hadoop02:/opt/hadoop-2.7.3/
[root@hadoop01 ~]# scp -r /opt/hadoop-2.7.3/ hadoop03:/opt/hadoop-2.7.3/
Format the NameNode
Note: this step is performed on hadoop01.
[root@hadoop01 ~]# hdfs namenode -format
If the command succeeds, the end of the log shows that formatting completed:
INFO common.Storage: Storage directory /opt/hadoop-2.7.3/hadoop-tmp/dfs/name has been successfully formatted.
Start HDFS
[root@hadoop01 ~]# start-dfs.sh
hadoop01: starting namenode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop01.out
hadoop01: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop01.out
hadoop03: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop02.out
Note: this starts the NameNode, DataNode, and SecondaryNameNode on hadoop01, and a DataNode on each of hadoop02 and hadoop03.
Processes started on hadoop01
[root@hadoop01 ~]# jps
4950 SecondaryNameNode
4653 NameNode
4751 DataNode
The start command above launched the HDFS management node (NameNode), the data nodes (DataNode), and the NameNode's helper process, the SecondaryNameNode.
Processes started on hadoop02
[root@hadoop02 ~]# jps
1290 DataNode
Processes started on hadoop03
[root@hadoop03 ~]# jps
1261 DataNode
Step 8: start YARN
[root@hadoop01 ~]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop01.out
hadoop01: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop03.out
hadoop02: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop02.out
[root@hadoop01 ~]#
Note: this starts the ResourceManager and a NodeManager on hadoop01, and a NodeManager on each of hadoop02 and hadoop03.
Processes started on hadoop01
[root@hadoop01 ~]# jps
4950 SecondaryNameNode
4653 NameNode
5342 ResourceManager
4751 DataNode
5439 NodeManager
Processes started on hadoop02
[root@hadoop02 ~]# jps
1591 NodeManager
1290 DataNode
Processes started on hadoop03
[root@hadoop03 ~]# jps
1563 NodeManager
1261 DataNode
Start the MapReduce JobHistory server
[root@hadoop01 dfs]# mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /opt/hadoop-2.7.3/logs/mapred-root-historyserver-hadoop01.out
Confirm the process has started
[root@hadoop01 ~]# jps
4950 SecondaryNameNode
4653 NameNode
5342 ResourceManager
4751 DataNode
5439 NodeManager
The newly started process here is the JobHistoryServer.
View the ResourceManager web UI at http://192.168.159.194:8088
View the Job History Server web UI at http://192.168.159.194:19888/
[root@hadoop01 ~]# hdfs dfs -mkdir -p /user/root
Note: replace root with the appropriate username if you are running as a different user.
[root@hadoop01 ~]# cd /opt/hadoop-2.7.3/
[root@hadoop01 hadoop-2.7.3]# hdfs dfs -put etc/hadoop input
This example copies the files under etc/hadoop into HDFS.
[root@hadoop01 hadoop-2.7.3]# hdfs dfs -ls input
[root@hadoop01 ~]# cd /opt/hadoop-2.7.3/
[root@hadoop01 hadoop-2.7.3]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
Process changes on each node
New processes on hadoop01: YarnChild, MRAppMaster, and RunJar
hadoop02 and hadoop03 each gain several new YarnChild processes.
[root@hadoop01 hadoop-2.7.3]# hdfs dfs -get output output
[root@hadoop01 hadoop-2.7.3]# cat output/*
6 dfs.audit.logger
4 dfs.class
3 dfs.server.namenode.
2 dfs.period
2 dfs.audit.log.maxfilesize
2 dfs.audit.log.maxbackupindex
1 dfsmetrics.log
1 dfsadmin
1 dfs.servers
1 dfs.replication
1 dfs.file
Or view the output directly:
[root@hadoop01 hadoop-2.7.3]# hdfs dfs -cat output/*
This shows how many times each keyword containing "dfs" appears across all files under etc/hadoop.
Use a Linux command to count the occurrences of "dfs.class": the result is 4, consistent with the MapReduce count.
[root@hadoop01 hadoop-2.7.3]# grep -r 'dfs.class' etc/hadoop/
etc/hadoop/hadoop-metrics.properties:dfs.class=org.apache.hadoop.metrics.spi.NullContext
etc/hadoop/hadoop-metrics.properties:#dfs.class=org.apache.hadoop.metrics.file.FileContext
etc/hadoop/hadoop-metrics.properties:# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
etc/hadoop/hadoop-metrics.properties:# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
[root@hadoop01 hadoop-2.7.3]#
Stop the services
[root@hadoop01 ~]# mr-jobhistory-daemon.sh stop historyserver
[root@hadoop01 hadoop-2.7.3]# stop-yarn.sh
[root@hadoop01 hadoop-2.7.3]# stop-dfs.sh