Deploying a YARN Distributed Cluster

YARN Configuration

Building on an existing Hadoop HA (high-availability) distributed environment.

| Node   | NN-1 | NN-2 | DN | ZK | ZKFC | JNN | RM | NM |
|--------|------|------|----|----|------|-----|----|----|
| node01 | *    |      |    |    | *    | *   |    |    |
| node02 |      | *    | *  | *  | *    | *   |    | *  |
| node03 |      |      | *  | *  |      | *   | *  | *  |
| node04 |      |      | *  | *  |      |     | *  | *  |

node01:

1)mapred-site.xml


  <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
  </property>

2)yarn-site.xml

Configure ResourceManager HA:

 <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.enabled</name>
   <value>true</value>
 </property>
 <property>
   <name>yarn.resourcemanager.cluster-id</name>
   <value>cluster1</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.rm-ids</name>
   <value>rm1,rm2</value>
 </property>
 <property>
   <name>yarn.resourcemanager.hostname.rm1</name>
   <value>bd003</value>
 </property>
 <property>
   <name>yarn.resourcemanager.hostname.rm2</name>
   <value>bd004</value>
 </property>
 <property>
   <name>yarn.resourcemanager.zk-address</name>
   <value>bd002:2181,bd003:2181,bd004:2181</value>
 </property>

Distribute to node02/03/04:

# distribute (repeat for bd003 and bd004)
scp yarn-site.xml  bd002:`pwd`
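The scp command above only covers bd002; a small helper (a sketch, using this cluster's bd00x node names and the current directory as the destination) copies the file to all target nodes in one go:

```shell
#!/bin/bash
# distribute_conf: copy one config file to several nodes via scp.
# Node names (bd002-bd004) and the destination directory ($PWD)
# follow this cluster's layout -- adjust to yours.
distribute_conf() {
  local file=$1; shift
  local node
  for node in "$@"; do
    echo "distributing $file to $node"
    scp "$file" "$node:$PWD"
  done
}
# usage: distribute_conf yarn-site.xml bd002 bd003 bd004
```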

3) Start

node01:
start-yarn.sh

node03 and node04:
yarn-daemon.sh start resourcemanager

4) Access test

bd003:8088
bd004:8088
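Besides the web UIs, you can check which ResourceManager is currently active from the command line. `yarn rmadmin -getServiceState` is a standard YARN HA command; the rm ids below are the ones configured in `yarn.resourcemanager.ha.rm-ids` above:

```shell
#!/bin/bash
# check_rm_state: print the HA state (active/standby) of each RM id.
check_rm_state() {
  local id
  for id in rm1 rm2; do
    printf '%s: ' "$id"
    yarn rmadmin -getServiceState "$id"
  done
}
# usage: check_rm_state
```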


5) Stopping and starting all services

1) Start
On nodes 2, 3, and 4, start ZooKeeper:
zkServer.sh start
On node01:
start-dfs.sh
start-yarn.sh

On node03 and node04:
yarn-daemon.sh start resourcemanager

2) Stop
On node01:
stop-dfs.sh
stop-yarn.sh
On node03 and node04:
yarn-daemon.sh stop resourcemanager
On nodes 2, 3, and 4, stop ZooKeeper:
zkServer.sh stop
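The ordering above (ZooKeeper first, then HDFS and YARN, standby ResourceManagers last, and the reverse on shutdown) can be wrapped in one sketch script. The bd00x node names are this cluster's assumptions; passwordless ssh is assumed as well:

```shell
#!/bin/bash
# cluster_start / cluster_stop: encode the startup and shutdown order
# described above. Assumes passwordless ssh and the bd00x node names.
cluster_start() {
  local n
  for n in bd002 bd003 bd004; do
    ssh "$n" "source ~/.bash_profile; zkServer.sh start"
  done
  start-dfs.sh
  start-yarn.sh
  for n in bd003 bd004; do
    ssh "$n" "source ~/.bash_profile; yarn-daemon.sh start resourcemanager"
  done
}
cluster_stop() {
  local n
  for n in bd003 bd004; do
    ssh "$n" "source ~/.bash_profile; yarn-daemon.sh stop resourcemanager"
  done
  stop-yarn.sh
  stop-dfs.sh
  for n in bd002 bd003 bd004; do
    ssh "$n" "source ~/.bash_profile; zkServer.sh stop"
  done
}
```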

I wrote a few scripts to speed things up.

# Manage the ResourceManagers (pass "start" or "stop" as $1)

#!/bin/bash
for i in bd003 bd004
do
  echo "---------- $1 ing $i ----------"
  ssh "$i" "source ~/.bash_profile; yarn-daemon.sh $1 resourcemanager"
done


# Check Java processes on every node

#!/bin/bash
for i in bd001 bd002 bd003 bd004
do
  echo "-------------------------------------"
  echo "------------ JPS $i -----------------"
  ssh "$i" "source ~/.bash_profile; jps"
  echo "-------------------------------------"
  sleep 1
done


# Batch runlevel change (e.g. pass 6 to reboot, 0 to shut down)

#!/bin/bash
for i in bd002 bd003 bd004
do
  echo "-------------------------------------"
  echo "------------ init $1 $i -----------------"
  ssh "$i" "init $1"
  sleep 2
done

Troubleshooting: errors on startup

bd002: datanode running as process 1501. Stop it first.
bd004: datanode running as process 1432. Stop it first.

Hadoop also reported "namenode running as process 18472. Stop it first."
and several similar messages.

Solution:
This is usually caused by services not having been shut down cleanly.
Stop all Hadoop services before restarting, then start again normally:
   stop-all.sh
   start-all.sh
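To spot such leftovers before restarting, a quick sketch that filters `jps` output on each node for Hadoop/YARN daemons (node names are assumptions from this cluster):

```shell
#!/bin/bash
# leftover_daemons: list Hadoop/YARN daemons still running on each node,
# so "running as process N. Stop it first." leftovers are visible up front.
leftover_daemons() {
  local n
  for n in "$@"; do
    echo "== $n =="
    ssh "$n" "source ~/.bash_profile; jps" \
      | grep -E 'NameNode|DataNode|ResourceManager|NodeManager|JournalNode' \
      || echo "(no Hadoop daemons)"
  done
}
# usage: leftover_daemons bd001 bd002 bd003 bd004
```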
