Then press Enter;
3.4.1 Choose minimal installation
3.4.2 Installation destination
No action needed; just click Done
3.4.3 Disable KDUMP
3.6.2 Create an account:
Set any password you like; just remember it!
(1) Check the current IP: ifconfig or ip addr
(2) Check the VM's NAT-mode gateway (192.168.61.2)
(3) Configure the IP settings:
vim /etc/sysconfig/network-scripts/ifcfg-ens33
# Change BOOTPROTO=static
# Change ONBOOT=yes
# Add IPADDR= an address in the NAT-mode subnet above; the last octet can be anything (e.g. 192.168.88.101)
# Add NETMASK=255.255.255.0
# Add GATEWAY= the subnet address with the last octet set to 2 (e.g. 192.168.88.2)
# Add DNS1=223.5.5.5 (Alibaba)
# Add DNS2=114.114.114.114 (widely used in China)
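Assembled, the static-IP portion of ifcfg-ens33 ends up looking roughly like this (a sketch using the example 192.168.88.0/24 subnet; leave the generated TYPE/NAME/DEVICE/UUID lines in your file as they are):

```
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.88.101
NETMASK=255.255.255.0
GATEWAY=192.168.88.2
DNS1=223.5.5.5
DNS2=114.114.114.114
```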
(4) Restart the network service
systemctl restart network
(5) Check the network configuration again
ifconfig
Some of ens33's settings have changed
(6) Ping Baidu:
ping www.baidu.com
(7) Restart the NetworkManager service
systemctl restart NetworkManager
(8) Stop the firewall
systemctl stop firewalld
(9) Disable the firewall at boot
systemctl disable firewalld
(10) Check the firewall status
systemctl status firewalld
(11) Disable SELinux
vi /etc/selinux/config
# Set SELINUX=disabled
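The same edit can be made non-interactively with sed; a sketch, demonstrated on a temp copy so it is safe to try (on the real machine the target is /etc/selinux/config, and a reboot is needed before the change takes effect):

```shell
# Make a throwaway copy standing in for /etc/selinux/config
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"
# Rewrite the SELINUX= line to disabled, in place
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```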
SSH remote connection
(1) Install the NTP time-sync service online:
yum install -y ntp
(2) Install the vim editor online
yum install -y vim
(3) Set up a cron job so ntp syncs with the time server every minute:
crontab -e
# */1 * * * * /usr/sbin/ntpdate ntp4.aliyun.com
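For reference, the five leading fields of a crontab entry are minute, hour, day of month, month, and day of week, so this entry fires every minute:

```
# min  hour  dom  month  dow  command
*/1    *     *    *      *    /usr/sbin/ntpdate ntp4.aliyun.com
```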
(1) Create the directories:
mkdir /opt/softwares /opt/modules
(2) Upload the JDK to /opt/softwares with FinalShell
(3) Extract JDK 11
# Run from /opt/softwares
tar -zxvf jdk-11.0.16.1_linux-x64_bin.tar.gz -C ../modules/
(4) Configure JAVA_HOME and PATH:
vim /etc/profile
export JAVA_HOME=/opt/modules/jdk-11.0.16.1
export PATH=$PATH:$JAVA_HOME/bin
(5) Reload:
source /etc/profile
(6) Verify the configuration:
java -version
(7) Change the hostname
vim /etc/hostname
hadoop101
(8) Edit hosts
vim /etc/hosts
192.168.88.101 hadoop101
192.168.88.102 hadoop102
192.168.88.103 hadoop103
Clone hadoop103 by following the steps above.
# Change the IPs on hadoop102 and hadoop103 respectively
vi /etc/sysconfig/network-scripts/ifcfg-ens33
# hadoop102
IPADDR=192.168.88.102
# Change the hostname
vi /etc/hostname
Do the same for hadoop103, then check that the machines can reach each other.
# For example
ping 192.168.88.102
(1) On each machine in turn, run ssh-keygen -t rsa to generate its key pair
ssh-keygen -t rsa
# Press Enter four times
(2) On each machine in turn, run:
ssh-copy-id hadoop101
ssh-copy-id hadoop102
ssh-copy-id hadoop103
# This sends the local public key to the other machines
# Follow the prompts
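The three ssh-copy-id calls can be scripted as one loop; a sketch that only prints the commands (drop the echo to run them for real on each node):

```shell
# Distribute this machine's public key to every node in the cluster
for host in hadoop101 hadoop102 hadoop103; do
  echo "ssh-copy-id $host"   # remove `echo` to actually copy the key
done
```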
(1) Test passwordless login
# For example:
ssh hadoop102
# Exit
exit
(1) Upload hadoop-3.2.4.tar.gz to /opt/softwares and extract it to /opt/modules
tar -zxvf hadoop-3.2.4.tar.gz -C /opt/modules/
(2) Edit /etc/profile, adding HADOOP_HOME and PATH
vim /etc/profile
source /etc/profile
hadoop version
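Step (2) does not spell the lines out; a sketch of what to append to /etc/profile, assuming the install path from the tar command above (both bin and sbin go on PATH so that start-all.sh resolves later):

```shell
# Append to /etc/profile (path matches the extraction target above)
export HADOOP_HOME=/opt/modules/hadoop-3.2.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```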
(3) Go to /opt/modules/hadoop-3.2.4/etc/hadoop and add the following to hadoop-env.sh:
export JAVA_HOME=/opt/modules/jdk-11.0.16.1
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
(4) Configure HDFS
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/modules/hadoop-3.2.4/tmp</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/modules/hadoop-3.2.4/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/modules/hadoop-3.2.4/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop101:50070</value>
    </property>
</configuration>
workers file (one hostname per line):
hadoop101
hadoop102
hadoop103
(5) Configure YARN
mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Run hadoop classpath and copy its output; it is needed for yarn.application.classpath below:
hadoop classpath
yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop101:8032</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>the output of the hadoop classpath command</value>
    </property>
</configuration>
(6) Copy the configured hadoop from hadoop101 to the other two machines
yum install -y rsync
rsync -av /opt/modules/hadoop-3.2.4 root@hadoop102:/opt/modules/
rsync -av /opt/modules/hadoop-3.2.4 root@hadoop103:/opt/modules/
(7) Sync the hadoop environment variables to the other two machines
rsync -av /etc/profile root@hadoop102:/etc/profile
rsync -av /etc/profile root@hadoop103:/etc/profile
# Then, on hadoop102 and hadoop103:
source /etc/profile
hadoop version
(8) Format the namenode on hadoop101
hadoop namenode -format
(9) Start hadoop
start-all.sh
Expected processes per host (check with jps):

| Host | Processes |
|---|---|
| hadoop101 | NameNode ResourceManager DataNode NodeManager |
| hadoop102 | DataNode NodeManager |
| hadoop103 | DataNode NodeManager |
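One way to check every node at once is a jps loop over ssh; a sketch that only prints the commands (unset DRY_RUN to execute them over the passwordless ssh set up earlier; jps ships with the JDK):

```shell
# List the Java daemons on each cluster node
DRY_RUN=1
for host in hadoop101 hadoop102 hadoop103; do
  cmd="ssh $host jps"
  if [ -n "$DRY_RUN" ]; then
    echo "$cmd"    # dry run: show what would be executed
  else
    $cmd
  fi
done
```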