1. System Preferences → Sharing → check "Remote Login" and allow access for all users.
2. Open Terminal and run ssh-keygen -t rsa, pressing Enter at each prompt to accept the defaults.
3. Inspect the generated public and private keys:
cd ~/.ssh
ls
You will see two files in the ~/.ssh directory:
(1) private key: id_rsa
(2) public key: id_rsa.pub
4. Append the public key to ~/.ssh/authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
5. Test
Run ssh localhost in the terminal.
If the prompt below appears, answer yes; if you then log in without being asked for a password, the setup works:
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
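Steps 2-4 can also be run non-interactively as one sketch (assumptions: an empty passphrase is acceptable, and no key exists yet at ~/.ssh/id_rsa; the chmod lines are not in the steps above, but sshd requires these permissions before it will accept the key):

```shell
# One-shot version of steps 2-4 (skips key generation if a key already exists)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # sshd ignores authorized_keys with loose permissions
```

Afterwards, ssh localhost should log you in without a password prompt.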
Install Homebrew (the script below is a community mirror hosted on Gitee, convenient for users in mainland China):
/bin/zsh -c "$(curl -fsSL https://gitee.com/cunkai/HomebrewCN/raw/master/Homebrew.sh)"
Press Enter and type your login password to start the installation. The process is slow, so be patient; pick a download source as prompted.
Install Hadoop:
brew install hadoop
Note the install path Homebrew reports (in this guide it is /usr/local/Cellar/hadoop/3.3.3); every path in the configuration below must be replaced with your own Hadoop path.
Enter Hadoop's configuration directory:
cd /usr/local/Cellar/hadoop/3.3.3/libexec/etc/hadoop
ls
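Your installed version may differ from 3.3.3. If so, the keg path can be derived instead of hard-coded (a sketch; assumes Homebrew is on the PATH, falling back to the path this guide uses):

```shell
# Resolve the Hadoop "distribution directory" used throughout this guide
if command -v brew >/dev/null 2>&1; then
  HADOOP_HOME="$(brew --prefix hadoop)/libexec"
else
  HADOOP_HOME=/usr/local/Cellar/hadoop/3.3.3/libexec   # path assumed by this guide
fi
echo "$HADOOP_HOME"
```

brew --prefix hadoop resolves to a stable /usr/local/opt/hadoop symlink, which survives minor version upgrades better than a hard-coded Cellar path.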
Edit the configuration files:
vim core-site.xml
Add the following to the file:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <!-- Directory for files Hadoop generates at runtime; create it yourself -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/Cellar/hadoop/tmp</value>
  </property>
</configuration>
vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- Let non-root users write files to HDFS -->
  <property>
    <name>dfs.permissions</name>
    <value>false</value> <!-- disables HDFS permission checking (in Hadoop 2+ the property is dfs.permissions.enabled) -->
  </property>
  <!-- Replace with the local path where your NameNode data lives -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/Cellar/hadoop/tmp/dfs/name</value>
  </property>
  <!-- Create a local folder for Hadoop data, then put its path here -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/Cellar/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
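The three directories referenced above (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir) must exist before HDFS starts. A small sketch to create them (the helper name make_hdfs_dirs is ours, not Hadoop's; adjust the root to your machine):

```shell
# Create the tmp/name/data directories used in core-site.xml and hdfs-site.xml
make_hdfs_dirs() {   # $1 = data root, e.g. /usr/local/Cellar/hadoop/tmp
  mkdir -p "$1/dfs/name" "$1/dfs/data"
}
make_hdfs_dirs /usr/local/Cellar/hadoop/tmp 2>/dev/null \
  || echo "could not create dirs (check your permissions on /usr/local/Cellar)"
```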
vim mapred-site.xml
<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- JobTracker address (a Hadoop 1.x setting; ignored when MapReduce runs on YARN) -->
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
  </property>
  <!-- Newly added: the path below is your Hadoop distribution directory -->
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/Cellar/hadoop/3.3.3/libexec</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/Cellar/hadoop/3.3.3/libexec</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/Cellar/hadoop/3.3.3/libexec</value>
  </property>
</configuration>
vim yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:9000</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>100</value>
  </property>
</configuration>
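One step the walkthrough leaves implicit: before HDFS is started for the very first time, the NameNode must be formatted so the dfs.namenode.name.dir configured above is initialized. A hedged sketch (formatting erases HDFS metadata, so run it only on a fresh install; the path is the one assumed throughout this guide):

```shell
# Format the NameNode once, before the first start-dfs.sh
HADOOP_BIN=/usr/local/Cellar/hadoop/3.3.3/libexec/bin   # adjust to your install
if [ -x "$HADOOP_BIN/hdfs" ]; then
  "$HADOOP_BIN/hdfs" namenode -format
else
  echo "hdfs not found at $HADOOP_BIN"
fi
```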
Start HDFS:
cd /usr/local/Cellar/hadoop/3.3.3/libexec/sbin
./start-dfs.sh
If the logs warn about a missing native-hadoop library, you can optionally add the following to your shell profile (with HADOOP_HOME set to /usr/local/Cellar/hadoop/3.3.3/libexec):
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
Then start YARN from the same sbin directory (if YARN was already running, stop it first with ./stop-yarn.sh):
./start-yarn.sh
Open http://localhost:8088/cluster in a browser; if the cluster page appears, YARN started successfully.
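Besides the web UI, you can check with jps that all five daemons are up (NameNode, DataNode, and SecondaryNameNode from start-dfs.sh; ResourceManager and NodeManager from start-yarn.sh):

```shell
# List running JVM processes; jps ships with the JDK
out="$( (command -v jps >/dev/null 2>&1 && jps) || echo 'jps not found' )"
echo "$out"
```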
When you are done, stop YARN (and HDFS) from the same sbin directory:
./stop-yarn.sh
./stop-dfs.sh
Reference: https://blog.csdn.net/wttey/article/details/125278591