Solutions to Problems Encountered When a Hadoop Setup Fails

I. Single-Node Hadoop Deployment (Standalone, Non-Distributed)


1. Environment Preparation


(1) Virtual memory (swap file)

dd if=/dev/zero of=swap bs=1M count=2048

Restrict the file's permissions before enabling it, otherwise swapon complains about an insecure file:

chmod 0600 swap

mkswap swap

swapon swap
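To keep the swap space across reboots, an /etc/fstab entry can be added. The absolute path below is only an assumption (the dd command above used a relative path), so replace it with the real location of the swap file:

swapon --show                                            # shows the swap file's full path and size

echo '/root/swap swap swap defaults 0 0' >> /etc/fstab   # hypothetical path; use the one shown above

free -m                                                  # the Swap line should now include the extra 2048 MB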


(2) Local name-resolution file (hosts)


vim /etc/hosts


192.168.100.1 server
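A quick check that the name resolves to the right address (this assumes the machine's IP really is 192.168.100.1):

getent hosts server        # should print: 192.168.100.1 server

ping -c 1 server           # should reach 192.168.100.1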


2. Install Hadoop and Configure the Java Environment


yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y


tar zxvf hadoop-3.1.2.tar.gz -C /usr/local/


ln -s /usr/local/hadoop-3.1.2/ /usr/local/hadoop


vim /etc/profile


PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64


source /etc/profile
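The exact JDK directory under /usr/lib/jvm/ depends on the version yum pulled in, so confirm the path and the environment before moving on:

ls /usr/lib/jvm/           # use the actual jre-1.8.0-openjdk-* directory name for JAVA_HOME

echo $JAVA_HOME            # should print the directory exported above

java -version              # should report OpenJDK 1.8.0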


3. Test


hadoop version


cd /usr/local/hadoop/share/hadoop/mapreduce


hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 10000000000
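In the pi example the first number is the count of map tasks and the second the number of samples per map; 10000000000 samples will run for a very long time on a single node, so a much smaller value is enough to confirm the job runs:

hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 1000      # finishes in seconds and prints an estimate of Pi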


II. Single-Node Hadoop Deployment (Pseudo-Distributed)


1. Passwordless SSH Login

ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.100.1
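Verify the key was accepted; both logins should complete without a password prompt:

ssh 192.168.100.1 hostname

ssh server hostname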


2. Configure HDFS

All of the configuration files edited below live under /usr/local/hadoop/etc/hadoop/.


vim hadoop-env.sh


export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64

export HDFS_NAMENODE_USER=root

export HDFS_DATANODE_USER=root

export HDFS_SECONDARYNAMENODE_USER=root

export YARN_RESOURCEMANAGER_USER=root

export YARN_NODEMANAGER_USER=root


vim core-site.xml


<configuration>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://server:9000</value>
  </property>

</configuration>
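hadoop.tmp.dir is created when the NameNode is formatted, but creating it ahead of time (with the same user that will run the daemons) makes permission problems easier to spot:

mkdir -p /usr/local/hadoop/tmp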


vim hdfs-site.xml


<configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>

</configuration>


hdfs namenode -format        (the older "hadoop namenode -format" still works but prints a deprecation warning)


start-dfs.sh        (stop-dfs.sh shuts the HDFS daemons down again)


hdfs dfsadmin -report
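If the report comes back empty or the command hangs, check whether all three HDFS daemons actually started:

jps        # expected: NameNode, DataNode, SecondaryNameNode (plus Jps itself)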


3. Configure MapReduce


vim mapred-site.xml


<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.job.tracker</name>
    <value>hdfs://server:8001</value>
    <final>true</final>
  </property>

  <property>
    <name>mapreduce.application.classpath</name>
    <value>
      /usr/local/hadoop/etc/hadoop,
      /usr/local/hadoop/share/hadoop/common/*,
      /usr/local/hadoop/share/hadoop/common/lib/*,
      /usr/local/hadoop/share/hadoop/hdfs/*,
      /usr/local/hadoop/share/hadoop/hdfs/lib/*,
      /usr/local/hadoop/share/hadoop/mapreduce/*,
      /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
      /usr/local/hadoop/share/hadoop/yarn/*,
      /usr/local/hadoop/share/hadoop/yarn/lib/*
    </value>
  </property>

</configuration>


4. Configure YARN


Run hadoop classpath to print the classpath of this installation; its output is exactly what goes into the yarn.application.classpath value below:

hadoop classpath




vim yarn-site.xml


<configuration>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>server</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.application.classpath</name>
    <value>/usr/local/hadoop-3.1.2/etc/hadoop:/usr/local/hadoop-3.1.2/share/hadoop/common/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/common/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn:/usr/local/hadoop-3.1.2/share/hadoop/yarn/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn/*</value>
  </property>

</configuration>


5. Start and Test


start-all.sh        (stop-all.sh shuts all HDFS and YARN daemons down again)


hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 10
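A second smoke test that exercises HDFS as well as YARN; the /input and /output paths are just illustrative names:

hdfs dfs -mkdir -p /input

hdfs dfs -put /etc/hosts /input/                            # any small text file will do as input

hadoop jar hadoop-mapreduce-examples-3.1.2.jar wordcount /input /output

hdfs dfs -cat /output/part-r-00000                          # prints the word counts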


HDFS web UI: http://192.168.100.1:9870


YARN (MapReduce jobs) web UI: http://192.168.100.1:8088
