Three VMs are needed: master (192.168.137.161), slaver01 (192.168.137.162), and slaver02 (192.168.137.163).
OS: CentOS 8; each VM has 1 CPU core, 2 GB RAM, and a 20 GB disk.
Packages: hadoop-3.2.1.tar.gz, jdk-8u191-linux-x64.tar.gz
Installing CentOS 8 in a VM: https://blog.csdn.net/dp340823/article/details/112056146
Host machine on Wi-Fi, CentOS 8 with a static IP and internet access: https://blog.csdn.net/dp340823/article/details/112056911
The following steps are performed on master (192.168.137.161);
afterwards the files are copied with scp to slaver01 (192.168.137.162) and slaver02 (192.168.137.163).
tar zxvf jdk-8u191-linux-x64.tar.gz -C /opt
tar zxvf hadoop-3.2.1.tar.gz -C /opt
vim /etc/profile
#java
export JAVA_HOME=/opt/jdk1.8.0_191
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
#hadoop
export HADOOP_HOME=/opt/hadoop-3.2.1
export PATH=$PATH:$HADOOP_HOME/bin
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
source /etc/profile
java -version
hadoop version
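If the environment took effect, both commands print a version banner; the first lines should read roughly as follows (build details may differ):
java version "1.8.0_191"
Hadoop 3.2.1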
Do the following on all 3 VMs.
systemctl stop firewalld
firewall-cmd --state
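systemctl stop only turns the firewall off until the next reboot. For a lab cluster it is common to also disable the service so it stays off; firewall-cmd --state should then print "not running":
systemctl disable firewalld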
vim /etc/hosts
192.168.137.161 master
192.168.137.162 slaver01
192.168.137.163 slaver02
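After saving /etc/hosts, name resolution can be spot-checked from each node:
ping -c 1 master
ping -c 1 slaver01
ping -c 1 slaver02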
ssh-keygen -t rsa
Just press Enter at every prompt.
ssh-copy-id master
ssh-copy-id slaver01
ssh-copy-id slaver02
ssh master
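Passwordless login to the other two nodes is worth confirming as well; each command should print the remote hostname without prompting for a password:
ssh slaver01 hostname
ssh slaver02 hostname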
All configuration files live under /opt/hadoop-3.2.1/etc/hadoop/.
vim /opt/hadoop-3.2.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/jdk1.8.0_191
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
vim /opt/hadoop-3.2.1/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-3.2.1/tmp</value>
  </property>
</configuration>
vim /opt/hadoop-3.2.1/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slaver01:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop-3.2.1/name</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop-3.2.1/data</value>
  </property>
</configuration>
vim /opt/hadoop-3.2.1/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
vim /opt/hadoop-3.2.1/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
Still under /opt/hadoop-3.2.1/etc/hadoop/, edit the node lists (contents sketched below):
vim masters
vim workers
Then create the directories referenced by hadoop.tmp.dir, dfs.namenode.name.dir, and dfs.datanode.data.dir:
mkdir /opt/hadoop-3.2.1/tmp /opt/hadoop-3.2.1/name /opt/hadoop-3.2.1/data
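The original does not show the two files' contents. Since hdfs-site.xml places the SecondaryNameNode on slaver01, masters should name slaver01; the workers list below (the DataNode hosts) is an assumption — add master to it if master should also store blocks:
# masters
slaver01
# workers
slaver01
slaver02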
scp /etc/profile slaver01:/etc/
scp /etc/profile slaver02:/etc/
scp -r /opt slaver01:/
scp -r /opt slaver02:/
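A quick remote listing from master confirms the copies landed; both commands should show hadoop-3.2.1 and jdk1.8.0_191:
ssh slaver01 ls /opt
ssh slaver02 ls /opt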
On slaver01 and slaver02, run source /etc/profile so the environment variables take effect,
then run java -version and hadoop version there to verify that the JDK and Hadoop were installed successfully.
The following is done on master only. Run the commands from /opt/hadoop-3.2.1, and format the NameNode exactly once, before the first start:
./bin/hdfs namenode -format
./sbin/start-dfs.sh
./sbin/start-yarn.sh
Verify the daemons on each node with jps (assuming the workers file sketched above):
master: NameNode, ResourceManager
slaver01: DataNode, SecondaryNameNode, NodeManager
slaver02: DataNode, NodeManager
Finally, open the ResourceManager web UI at http://master:8088 (or http://192.168.137.161:8088) and click Nodes to see the registered NodeManagers; the NameNode UI is at http://master:50070.
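As a last sanity check, a small HDFS round trip (a sketch; any local file will do) confirms that writes actually reach the DataNodes:
hdfs dfs -mkdir -p /test
hdfs dfs -put /etc/hosts /test/
hdfs dfs -ls /test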