1. Building on the cluster configured in the previous chapter, we now set up a BackupNode and a SecondaryNameNode for Myhost1. Since machines are limited, we will not do the same for Myhost2, but the procedure is identical.
2. We choose Myhost4 as the SecondaryNameNode and Myhost5 as the BackupNode.
Configure and start the SecondaryNameNode:
1. Configuration: add the following to Myhost1's hdfs-site.xml to designate the SecondaryNameNode:
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>Myhost4:9001</value>
</property>
2. Add the following to Myhost4's hdfs-site.xml to specify the NameNode's URL and the local checkpoint directory:
<property>
  <name>dfs.federation.nameservice.id</name>
  <value>Myhost1:50070</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/home/yuling.sh/checkpoint-data</value>
</property>
3. Start the SecondaryNameNode. Either run sbin/start-dfs.sh on Myhost1, or run sbin/hadoop-daemon.sh start secondarynamenode on Myhost4. You can check its status through the logs or through the web page at Myhost4:50090. Metadata will also appear under the checkpoint directory.
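Before starting the daemon it is worth confirming that hdfs-site.xml actually contains the values you intended; a stray space inside a <value> element is an easy mistake to make. A small sketch (not part of Hadoop; the inline XML is a hypothetical copy of the settings above, and in practice you would read the real file from your Hadoop configuration directory):

```python
import xml.etree.ElementTree as ET

# Inline copy of the SecondaryNameNode settings shown above
# (hypothetical; normally you would open the real hdfs-site.xml).
HDFS_SITE = """<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Myhost4:9001</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/home/yuling.sh/checkpoint-data</value>
  </property>
</configuration>"""

def conf_values(xml_text):
    """Return {name: value} for every <property> in a Hadoop-style config file."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

conf = conf_values(HDFS_SITE)
print(conf["dfs.namenode.secondary.http-address"])  # Myhost4:9001
print(conf["dfs.namenode.checkpoint.dir"])          # /home/yuling.sh/checkpoint-data
```

If a value prints with leading or trailing whitespace here, fix the XML before starting the daemon.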
Configure and start the BackupNode:
1. Add the following to Myhost5's hdfs-site.xml:
<property>
  <name>dfs.namenode.backup.address</name>
  <value>Myhost5:9002</value>
</property>
<property>
  <name>dfs.namenode.backup.http-address</name>
  <value>Myhost5:9003</value>
</property>
<property>
  <name>dfs.namenode.http-address</name>
  <value>Myhost1:50070</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/yuling.sh/backup-data</value>
</property>
2. Start the BackupNode: on Myhost5 run bin/hdfs namenode -backup &
3. Inspect the metadata under the directory configured as dfs.namenode.name.dir.
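The BackupNode needs two ports of its own (9002 for the backup RPC address, 9003 for its HTTP page), and none of the configured host:port endpoints may collide, or one of the daemons will fail to bind its socket. A small sanity-check sketch (hypothetical helper, not part of Hadoop; the inline XML mirrors the settings above):

```python
import xml.etree.ElementTree as ET

# Inline copy of the BackupNode settings shown above
# (hypothetical; normally you would open the real hdfs-site.xml).
HDFS_SITE = """<configuration>
  <property><name>dfs.namenode.backup.address</name><value>Myhost5:9002</value></property>
  <property><name>dfs.namenode.backup.http-address</name><value>Myhost5:9003</value></property>
  <property><name>dfs.namenode.http-address</name><value>Myhost1:50070</value></property>
  <property><name>dfs.namenode.name.dir</name><value>/home/yuling.sh/backup-data</value></property>
</configuration>"""

def check_unique_endpoints(xml_text):
    """Collect every host:port value and report duplicates, which would
    prevent a daemon from binding its listening socket."""
    root = ET.fromstring(xml_text)
    endpoints = [p.findtext("value") for p in root.findall("property")
                 if ":" in (p.findtext("value") or "")]
    dupes = {e for e in endpoints if endpoints.count(e) > 1}
    return endpoints, dupes

endpoints, dupes = check_unique_endpoints(HDFS_SITE)
print(endpoints)  # ['Myhost5:9002', 'Myhost5:9003', 'Myhost1:50070']
print(dupes)      # set(): no collisions
```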