Prepare three virtual machines. Minimum requirements: 4 GB RAM and a 50 GB disk. The VMs used here have 16 GB RAM and 100 GB disks.
Machine configuration:
hostname | ip | memory | cpu | disk |
---|---|---|---|---|
cdh01 | 192.168.43.12 | 16G | 2c | 100G |
cdh02 | 192.168.43.135 | 16G | 2c | 100G |
cdh03 | 192.168.43.75 | 16G | 2c | 100G |
All of the following uses the hsy user. Create it and set its password (123456 in this guide):

```bash
useradd hsy
passwd hsy
```

Grant it sudo by editing the sudoers file (preferably with visudo) and adding a line under the root entry:

```bash
vim /etc/sudoers
```

```
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
hsy     ALL=(ALL)       ALL
```
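Before going further, it is worth confirming that the new account and its sudo grant actually work; a minimal check:

```bash
su - hsy        # switch to the new user
sudo whoami     # should print "root" if the sudoers entry took effect
```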
Install the dependencies on every machine, or install them on one machine first and then clone it:

```bash
sudo yum install -y epel-release
sudo yum install -y psmisc nc net-tools rsync vim lrzsz ntp libzstd openssl-static
sudo systemctl start ntpd
sudo systemctl enable ntpd
```
Set each machine's hostname (run the matching command on its own host):

```bash
sudo hostnamectl --static set-hostname cdh01
sudo hostnamectl --static set-hostname cdh02
sudo hostnamectl --static set-hostname cdh03
```
Add the host mappings on every machine:

```bash
sudo vim /etc/hosts
```

```
192.168.43.12 cdh01
192.168.43.135 cdh02
192.168.43.75 cdh03
```
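A quick sanity check that every mapping resolves (a small sketch; run it from any of the three machines):

```bash
# each host should answer a single ping by name
for h in cdh01 cdh02 cdh03; do
    ping -c 1 "$h"
done
```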
Stop and disable the firewall on all machines:

```bash
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```
Create module and software directories under /opt and give hsy ownership of both:

```bash
sudo mkdir /opt/module /opt/software
sudo chown hsy:hsy /opt/module /opt/software
```
Extract the JDK and add it to the environment:

```bash
tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/module/
sudo vim /etc/profile
```

Append to /etc/profile:

```bash
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
```

Reload and verify (source is a shell builtin, so no sudo):

```bash
source /etc/profile
java -version
```
Extract Hadoop and add it to the environment:

```bash
tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
sudo vim /etc/profile
```

Append to /etc/profile:

```bash
##HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
```

Reload:

```bash
source /etc/profile
```
The extracted Hadoop directory looks like this:

```
drwxr-xr-x. 2 hsy hsy  4096 Dec 22 2020 bin
drwxr-xr-x. 3 hsy hsy  4096 Dec 22 2020 etc
drwxr-xr-x. 2 hsy hsy  4096 Dec 22 2020 include
drwxr-xr-x. 3 hsy hsy  4096 Dec 22 2020 lib
drwxr-xr-x. 2 hsy hsy  4096 Dec 22 2020 libexec
-rw-r--r--. 1 hsy hsy 15429 Dec 22 2020 LICENSE.txt
-rw-r--r--. 1 hsy hsy   101 Dec 22 2020 NOTICE.txt
-rw-r--r--. 1 hsy hsy  1366 Dec 22 2020 README.txt
drwxr-xr-x. 2 hsy hsy  4096 Dec 22 2020 sbin
drwxr-xr-x. 4 hsy hsy  4096 Dec 22 2020 share
```
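Both tools should now be on the PATH of a fresh shell; verify before moving on:

```bash
java -version      # expects 1.8.0_212
hadoop version     # expects Hadoop 3.1.3
```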
Hadoop has three run modes: local (standalone), pseudo-distributed, and fully distributed. This guide uses fully distributed mode.
Set up passwordless SSH between the nodes. Generate a key pair (accept the default prompts) and copy it to every host; repeat this on each of the three machines:

```bash
ssh-keygen -t rsa
ssh-copy-id cdh01
ssh-copy-id cdh02
ssh-copy-id cdh03
```
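To confirm the passwordless setup, each hop should print the remote hostname without prompting (run as hsy on each machine):

```bash
for host in cdh01 cdh02 cdh03; do
    ssh "$host" hostname
done
```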
Create a cluster-sync script, xsync, in the home directory:

```bash
cd /home/hsy
vim xsync
```
```bash
#!/bin/bash

# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi

# 2. Loop over every machine in the cluster
for host in cdh01 cdh02 cdh03
do
    echo ==================== $host ====================
    # 3. Loop over every file/directory given and send each one
    for file in "$@"
    do
        # 4. Check that the file exists
        if [ -e "$file" ]
        then
            # 5. Resolve the parent directory (-P follows symlinks)
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 6. Get the file name
            fname=$(basename "$file")
            ssh "$host" "mkdir -p $pdir"
            rsync -av "$pdir/$fname" "$host:$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done
```
```bash
chmod +x xsync
sudo mv xsync /bin/
sudo xsync /bin/xsync
```
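So far the JDK, Hadoop, and the /etc/profile edits exist only on the machine where they were installed. Assuming that machine was cdh01, a sketch of pushing them out with the new script:

```bash
# distribute the JDK and Hadoop trees to cdh02/cdh03
xsync /opt/module/jdk1.8.0_212 /opt/module/hadoop-3.1.3
# /etc/profile is root-owned; sudo xsync only works if root also has SSH keys,
# otherwise repeat the profile edits by hand on each node
sudo xsync /etc/profile
```

Remember to `source /etc/profile` on every node afterwards.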
Cluster deployment plan:

Service | cdh01 | cdh02 | cdh03 |
---|---|---|---|
HDFS | NameNode, DataNode | DataNode | SecondaryNameNode, DataNode |
YARN | NodeManager | ResourceManager, NodeManager | NodeManager |
Configure core-site.xml:

```bash
cd $HADOOP_HOME/etc/hadoop
vim core-site.xml
```

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cdh01:8020</value>
    </property>
    <property>
        <name>hadoop.data.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hsy.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hsy.groups</name>
        <value>*</value>
    </property>
</configuration>
```
Configure hdfs-site.xml:

```bash
cd $HADOOP_HOME/etc/hadoop
vim hdfs-site.xml
```

```xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.data.dir}/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.data.dir}/data</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file://${hadoop.data.dir}/namesecondary</value>
    </property>
    <property>
        <name>dfs.client.datanode-restart.timeout</name>
        <value>30</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>cdh03:9868</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>cdh01:50070</value>
    </property>
</configuration>
```
Configure yarn-site.xml:

```bash
cd $HADOOP_HOME/etc/hadoop
vim yarn-site.xml
```

```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>cdh02</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>
```
Configure mapred-site.xml:

```bash
vim mapred-site.xml
```

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```
Distribute the configuration to every node:

```bash
xsync /opt/module/hadoop-3.1.3/etc/hadoop/
```

If the cluster is being started for the first time, format the NameNode on cdh01, after which the daemons can be test-started individually:

```bash
hdfs namenode -format
hdfs --daemon start namenode
hdfs --daemon start datanode
```
Configure the workers list:

```bash
vim /opt/module/hadoop-3.1.3/etc/hadoop/workers
```

File contents (one hostname per line; no trailing spaces or blank lines are allowed):

```
cdh01
cdh02
cdh03
```

Sync the change to every node:

```bash
xsync /opt/module/hadoop-3.1.3/etc
```
Before the first group start, stop any daemons started individually above, clear old data and logs on every node, and then re-format the NameNode on cdh01 (clearing data/ wiped the earlier format):

```bash
cd /opt/module/hadoop-3.1.3
rm -rf data/* logs/*
hdfs namenode -format     # on cdh01 only, after every node is cleared
```

Start HDFS on cdh01:

```bash
sbin/start-dfs.sh
```

Start YARN on cdh02 (the ResourceManager node):

```bash
sbin/start-yarn.sh
```
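After both start scripts finish, the running daemons should match the deployment plan table above. A jps sweep over the cluster (assumes the passwordless SSH set up earlier):

```bash
# print the Java processes on every node
for host in cdh01 cdh02 cdh03; do
    echo "==================== $host ===================="
    ssh "$host" jps
done
```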
If the page's front-end JS throws errors and it renders incompletely, see: Hadoop查看Secondary Namenode Web端无信息的解决办法

SecondaryNameNode web UI: http://cdh03:9868/
Individual daemons can be started and stopped one at a time:

```bash
hdfs --daemon start namenode
hdfs --daemon stop namenode
hdfs --daemon start datanode
hdfs --daemon stop datanode
hdfs --daemon start secondarynamenode
hdfs --daemon stop secondarynamenode
yarn --daemon start resourcemanager
yarn --daemon stop resourcemanager
yarn --daemon start nodemanager
yarn --daemon stop nodemanager
```
To start or stop HDFS as a whole (on cdh01):

```bash
cd /opt/module/hadoop-3.1.3
sbin/start-dfs.sh
sbin/stop-dfs.sh
```

To start or stop YARN as a whole (on cdh02):

```bash
cd /opt/module/hadoop-3.1.3
sbin/start-yarn.sh
sbin/stop-yarn.sh
```
Configure the JobHistory server:

```bash
vim mapred-site.xml
```

Add the following to the file:

```xml
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>cdh01:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>cdh01:19888</value>
</property>
```

Distribute it:

```bash
xsync $HADOOP_HOME/etc/hadoop/mapred-site.xml
```
Start (or stop) the history server on cdh01:

```bash
mapred --daemon start historyserver
mapred --daemon stop historyserver
```

View job history at http://cdh01:19888/jobhistory
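The history page stays empty until a job has run; the example jar bundled with Hadoop 3.1.3 is a quick way to generate an entry:

```bash
# estimate pi with 2 maps of 10 samples each; any small job will do
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10
```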
Log aggregation: after an application finishes, its run logs are uploaded to HDFS. The benefit: run details are easy to inspect, which makes development and debugging more convenient.
Note: enabling log aggregation requires restarting the NodeManagers, the ResourceManager, and the history server.
Edit yarn-site.xml:

```bash
vim yarn-site.xml
```

Add the following (the log server URL must point at the history server on cdh01):

```xml
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log.server.url</name>
    <value>http://cdh01:19888/jobhistory/logs</value>
</property>
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>
```

Distribute it:

```bash
xsync $HADOOP_HOME/etc/hadoop/yarn-site.xml
```
Restart YARN (on cdh02) and the history server (on cdh01) for the change to take effect:

```bash
cd /opt/module/hadoop-3.1.3
sbin/stop-yarn.sh
mapred --daemon stop historyserver
sbin/start-yarn.sh
mapred --daemon start historyserver
```
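Once aggregation is on, the logs of finished applications can be pulled straight from HDFS with the yarn CLI. The application ID below is a placeholder; list the real ones first:

```bash
# re-run a small job so it finishes under the new settings
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10
# find its ID, then fetch the aggregated logs
yarn application -list -appStates FINISHED
yarn logs -applicationId application_1600000000000_0001   # placeholder ID
```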
Add the following to Hadoop's core-site.xml (it sets the HDFS web UI's static user to hsy and disables HDFS permission checks):

```xml
<property>
    <name>hadoop.http.staticuser.user</name>
    <value>hsy</value>
</property>
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
```
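The new properties only take effect after they are distributed and HDFS is restarted; a closing sketch (paths as configured above):

```bash
xsync $HADOOP_HOME/etc/hadoop/core-site.xml
# restart HDFS from cdh01
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh
```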