Compared with the pseudo-distributed setup, the following extra steps are required.
vi /etc/hosts and add:
192.168.154.110 master
192.168.154.111 slave1
192.168.154.112 slave2
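After editing /etc/hosts on every machine, a quick loop can confirm that all three hostnames resolve (a sketch; `getent` queries the same resolver that ssh and Hadoop will use):

```shell
# Check that all three cluster hostnames resolve on this machine.
for h in master slave1 slave2; do
  getent hosts "$h" || echo "$h does not resolve yet"
done
```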
In the .ssh directory under each user's home directory, run:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub sun@master
ssh-copy-id -i ~/.ssh/id_rsa.pub sun@slave1
ssh-copy-id -i ~/.ssh/id_rsa.pub sun@slave2
Alternatively, in the .ssh directory under the master node's home directory, run:
ssh-keygen -t rsa
This generates id_rsa and id_rsa.pub, the private and public keys respectively. The public key has to be appended to an authorized_keys file; that file is what enables password-free login.
Then run:
scp -r /home/sun/.ssh/authorized_keys sun@slave1:/home/sun/.ssh/
scp -r /home/sun/.ssh/authorized_keys sun@slave2:/home/sun/.ssh/
Or:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub root@slave1:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub root@slave2:/root/.ssh/authorized_keys
Then set permissions on authorized_keys on every virtual machine:
chmod 644 /root/.ssh/authorized_keys
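sshd (with its default StrictModes) silently ignores key files whose permissions are too loose, which is a common reason password-free login still prompts for a password. A small helper to tighten things up (`fix_ssh_perms` and its directory argument are illustrative, not from the original steps):

```shell
# fix_ssh_perms [DIR]: tighten permissions on a .ssh directory (default ~/.ssh)
# so sshd's StrictModes check accepts the key files.
fix_ssh_perms() {
  dir="${1:-$HOME/.ssh}"
  mkdir -p "$dir"
  chmod 700 "$dir"
  if [ -f "$dir/authorized_keys" ]; then chmod 644 "$dir/authorized_keys"; fi
  if [ -f "$dir/id_rsa" ]; then chmod 600 "$dir/id_rsa"; fi
}
fix_ssh_perms   # run on the current user's ~/.ssh
```

Run it on every node after copying authorized_keys.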
vi /etc/profile
export JAVA_HOME=<JDK install directory>
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=<Hadoop install directory>
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Reload the environment variables:
source /etc/profile
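A quick way to confirm the variables took effect in the current shell (`check_env` is a hypothetical helper, not part of Hadoop):

```shell
# Print whether JAVA_HOME and HADOOP_HOME are visible after `source /etc/profile`.
check_env() {
  for v in JAVA_HOME HADOOP_HOME; do
    eval "val=\$$v"
    if [ -n "$val" ]; then echo "$v=$val"; else echo "$v is NOT set"; fi
  done
}
check_env
```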
vi hadoop-env.sh
export JAVA_HOME=<JDK install directory>
export HADOOP_HOME=<Hadoop install directory>
vi core-site.xml
<configuration>
<!-- Base directory for files Hadoop generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/sun/app/hadoop-3.1.2/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<!-- NameNode RPC address; the default port is 8020 -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
vi hdfs-site.xml
<configuration>
<!-- Where the NameNode stores the HDFS namespace metadata -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/sun/app/hadoop-3.1.2/tmp/dfs/name</value>
</property>
<!-- Where DataNodes physically store the data blocks -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/sun/app/hadoop-3.1.2/tmp/dfs/data</value>
</property>
<!-- HDFS replication factor -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Note: this file can also set the HTTP addresses of the NameNode and SecondaryNameNode:
<property>
<name>dfs.namenode.http-address</name>
<value>master:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
<description>Disable HDFS permission checking</description>
</property>
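Before formatting HDFS it helps to make sure the directories referenced in core-site.xml and hdfs-site.xml exist and are writable. A minimal sketch, using $HOME in place of /home/sun so it works for any user (BASE is an assumption standing in for the real install path):

```shell
# Create the tmp, name and data directories configured above.
BASE="${BASE:-$HOME/app/hadoop-3.1.2}"
mkdir -p "$BASE/tmp/dfs/name" "$BASE/tmp/dfs/data"
ls -ld "$BASE/tmp/dfs/name" "$BASE/tmp/dfs/data"
```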
vi mapred-site.xml
<configuration>
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
</configuration>
Or:
<property>
<name>mapred.job.tracker</name>
<value>master:49001</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/usr/local/hadoop/var</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
vi yarn-site.xml
<configuration>
<!-- ResourceManager host -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>5</value>
</property>
</configuration>
Or:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>node1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>paste the output of the hadoop classpath command here</value>
</property>
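The value for yarn.application.classpath is just the output of `hadoop classpath`. One way to generate the whole property block ready for pasting (`yarn_classpath_property` is an illustrative helper; the fallback string only appears on machines where hadoop is not on PATH yet):

```shell
# Print a <property> element whose value is the Hadoop classpath.
# Pass the classpath explicitly, or let it default to `hadoop classpath`.
yarn_classpath_property() {
  cp="${1:-$(hadoop classpath 2>/dev/null || echo 'hadoop-not-on-PATH')}"
  printf '<property>\n'
  printf '  <name>yarn.application.classpath</name>\n'
  printf '  <value>%s</value>\n' "$cp"
  printf '</property>\n'
}
yarn_classpath_property
```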
vi workers (in Hadoop 2.x this file was called slaves)
slave1
slave2
In the /home/hadoop/hadoop/sbin/ directory, add the following to both start-dfs.sh and stop-dfs.sh:
HDFS_DATANODE_USER=hadoop
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=hadoop
HDFS_SECONDARYNAMENODE_USER=hadoop
Then add the following to both start-yarn.sh and stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=hadoop
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=hadoop
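Instead of editing each script by hand, the same lines can be inserted with a small helper (`add_users` is illustrative; it keeps the shebang on line 1 and skips files that were already patched):

```shell
# add_users FILE LINE...: insert LINE... after the first line (the shebang)
# of FILE, unless the first LINE is already present.
add_users() {
  f="$1"; shift
  if grep -q "^$1\$" "$f"; then return 0; fi   # already patched
  tmp=$(mktemp)
  { head -n 1 "$f"; printf '%s\n' "$@"; tail -n +2 "$f"; } > "$tmp"
  mv "$tmp" "$f"
}
# On a real cluster (paths from the text above), something like:
# cd /home/hadoop/hadoop/sbin
# for f in start-dfs.sh stop-dfs.sh; do
#   add_users "$f" HDFS_DATANODE_USER=hadoop HDFS_DATANODE_SECURE_USER=hdfs \
#     HDFS_NAMENODE_USER=hadoop HDFS_SECONDARYNAMENODE_USER=hadoop
# done
```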
Alternatively, vi hadoop-env.sh and add the following:
export JAVA_HOME=<JDK install directory>
export HADOOP_HOME=/usr/local/hadoop/hadoop-3.2.1
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
Use scp to copy the directories on master to the corresponding locations on each worker node (slave1 and slave2):
scp -r <JDK directory> slave1:<JDK directory>
scp -r <Hadoop directory> slave1:<Hadoop directory>
scp /etc/profile slave1:/etc/
scp -r <JDK directory> slave2:<JDK directory>
scp -r <Hadoop directory> slave2:<Hadoop directory>
scp /etc/profile slave2:/etc/
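The six copies above can be collapsed into a loop. This sketch defaults to a dry run (RUN=echo just prints each command); JDK_DIR and HADOOP_DIR are assumed placeholders for the real install paths:

```shell
# Dry run by default: set RUN= (empty) to actually copy once SSH trust works.
RUN="${RUN:-echo}"
JDK_DIR="${JDK_DIR:-/home/sun/app/jdk}"
HADOOP_DIR="${HADOOP_DIR:-/home/sun/app/hadoop-3.1.2}"
for host in slave1 slave2; do
  $RUN scp -r "$JDK_DIR" "$host:$JDK_DIR"
  $RUN scp -r "$HADOOP_DIR" "$host:$HADOOP_DIR"
  $RUN scp /etc/profile "$host:/etc/"
done
```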
On each worker node (slave1 and slave2), reload the environment variables:
source /etc/profile