Original | Linux | CentOS | Installing Hadoop

一、Install Xshell
Reference: https://jingyan.baidu.com/article/19192ad8d2bdcde53e5707da.html

二、Install Xftp
Reference: https://jingyan.baidu.com/article/624e74590fea4f34e9ba5a74.html

三、Install the JDK
JDK 8 is required; JDK 12 and other newer versions will not work
Download the JDK from: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

cp jdk-8u201-linux-x64.rpm /opt/  # copy the JDK package to /opt
cd /opt/  # enter /opt
rpm -ivh jdk-8u201-linux-x64.rpm  # install the JDK
# Edit the hosts file
vi /etc/hosts
# Add this entry
192.168.192.129 bigdata
# Esc : wq to save and quit
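As a quick sanity check (a sketch; assumes the entry above was saved), the hostname should now resolve:
ping -c 1 bigdata  # should reach 192.168.192.129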
# Edit the configuration
vi /etc/profile
# Append the following at the end
JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
HADOOP_HOME=/opt/hadoop-3.1.2
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
export JAVA_HOME HADOOP_HOME PATH
# Esc : wq to save and quit

# Reload the configuration
source /etc/profile
# Verify the configuration (prints nothing if it is not set)
echo $JAVA_HOME
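To confirm the JDK itself is on the PATH, check the version (the exact build string may differ):
java -version  # should report java version "1.8.0_201"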

四、Configure Passwordless SSH Login

cd
ssh-keygen -t rsa
# Generates the public/private key pair
# Press Enter three times; do not set a passphrase
# Enter the .ssh directory
cd .ssh/
cat id_rsa.pub >> authorized_keys
chmod 644 authorized_keys
# Verify that it works
ssh bigdata
# Type yes when prompted
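To confirm key-based login works end to end, a remote command should run without any password prompt (a minimal check):
ssh bigdata hostname  # prints the hostname without asking for a password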

五、Install and Configure Hadoop

cd /opt/  # assumes hadoop-3.1.2.tar.gz was already uploaded to /opt (e.g., with Xftp)
tar zxf hadoop-3.1.2.tar.gz
cd /opt/hadoop-3.1.2/etc/hadoop/
vi core-site.xml
# Insert between <configuration> and </configuration>
    <property>
       <name>fs.default.name</name>
       <value>hdfs://bigdata:9000</value>
    </property>

    <property>
       <name>hadoop.tmp.dir</name>
       <value>/opt/hadoop-3.1.2/current/tmp</value>
    </property>

    <property>
      <name>fs.trash.interval</name>
      <value>4320</value>
    </property>

# Esc : wq to save and quit
vi hdfs-site.xml
# Insert between <configuration> and </configuration>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/opt/hadoop-3.1.2/current/namenode/data</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/opt/hadoop-3.1.2/current/datanode/data</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.permissions.superusergroup</name>
      <value>staff</value>
    </property>
    <property>
      <name>dfs.permissions.enabled</name>
      <value>false</value>
    </property>
    <property>
      <name>dfs.http.address</name>
      <value>0.0.0.0:50070</value>
    </property>
# Esc : wq to save and quit
vi yarn-site.xml
# Insert between <configuration> and </configuration>
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>bigdata</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>bigdata:18040</value>
    </property>
    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>bigdata:18030</value>
    </property>
    <property>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>bigdata:18025</value>
    </property>
    <property>
      <name>yarn.resourcemanager.admin.address</name>
      <value>bigdata:18141</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>bigdata:18088</value>
    </property>
    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <value>86400</value>
    </property>
    <property>
      <name>yarn.log-aggregation.retain-check-interval-seconds</name>
      <value>86400</value>
    </property>
    <property>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>/tmp/logs</value>
    </property>
    <property>
      <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
      <value>logs</value>
    </property>
# Esc : wq to save and quit
vi mapred-site.xml
# Insert between <configuration> and </configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    <property>
      <name>mapreduce.jobtracker.http.address</name>
      <value>bigdata:50030</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>bigdata:10020</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>bigdata:19888</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.done-dir</name>
      <value>/jobhistory/done</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.intermediate-done-dir</name>
      <value>/jobhistory/done_intermediate</value>
    </property>
    <property>
      <name>mapreduce.job.ubertask.enable</name>
      <value>true</value>
    </property>
    <property>
      <name>mapred.job.tracker.http.address</name>
      <value>0.0.0.0:50030</value>
    </property>
    <property>
      <name>mapred.task.tracker.http.address</name>
      <value>0.0.0.0:50060</value>
    </property>
# Esc : wq to save and quit
vi workers  # note: Hadoop 3.x uses "workers"; the file was named "slaves" in 2.x
bigdata
# Esc : wq to save and quit
vi hadoop-env.sh
# Find the commented line: # export JAVA_HOME=
# and change it to
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64/
# Esc : wq to save and quit

六、Format HDFS and Start Hadoop

cd
hdfs namenode -format  # the log should end with a "successfully formatted" message
/opt/hadoop-3.1.2/sbin/start-all.sh

七、Disable the Firewall

cd
yum -y install iptables-services
service iptables stop  # stop the firewall temporarily

systemctl disable firewalld.service  # disable the firewall permanently; takes effect after a reboot; optional

service iptables status  # check the firewall status
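Note: on CentOS 7 the default firewall is firewalld rather than iptables, so if the iptables service is not in use, these systemd commands (an alternative sketch) achieve the same:
systemctl stop firewalld.service    # stop firewalld for the current session
systemctl status firewalld.service  # confirm it is inactive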

八、Verify Hadoop Status

cd
jps
# Success if the output lists the Hadoop daemons: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager
# Then open the web UI in a browser to verify
192.168.192.129:50070
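As an extra command-line check (a sketch; run as the same user that started HDFS), the dfsadmin report should list one live datanode for this single-node setup:
hdfs dfsadmin -report  # look for a live datanode count of 1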

九、FAQ
1. The browser cannot open the 50070 page
Reference: https://blog.csdn.net/zxz547388910/article/details/86468925
If 50070 still cannot be opened:
①. Delete /opt/hadoop-3.1.2/current/tmp  # destroys HDFS data, use with caution!
②. hdfs namenode -format  # reformats, use with caution!
③. hadoop-daemon.sh start namenode  # start the namenode
④. netstat -ntlp  # check whether anything is listening on port 50070
⑤. start-all.sh

2. Launch-user configuration errors
①. ERROR: Attempting to launch hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting launch.

②. ERROR: Attempting to launch yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting launch.
ERROR: Attempting to launch yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting launch.
Reference: https://blog.csdn.net/u013725455/article/details/70147331
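A common fix for both errors (a sketch for a root-run, single-node setup like this one) is to declare the launch users near the top of /opt/hadoop-3.1.2/etc/hadoop/hadoop-env.sh and re-run start-all.sh; the variable names come straight from the error messages:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root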

③. Warning about a deprecated variable name
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER
Reference: https://blog.csdn.net/weixin_38763887/article/details/79157652

④. Error: start-all.sh: command not found
This happens when $HADOOP_HOME/sbin is not on the PATH; run it as sh start-all.sh or ./start-all.sh from /opt/hadoop-3.1.2/sbin instead.
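A more permanent fix (a sketch, assuming the install path used above) is to put the sbin directory on the PATH, e.g. by appending to /etc/profile:
export PATH=$PATH:/opt/hadoop-3.1.2/sbin  # then: source /etc/profile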

⑤. Common problems
Reference: http://www.cnblogs.com/dimg/p/9790448.html

⑥. Miscellaneous:
To change the hostname: sudo hostnamectl set-hostname <new-hostname>
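For example, to match the bigdata entry added to /etc/hosts earlier:
sudo hostnamectl set-hostname bigdata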
