Setting Up a Hadoop HA Environment

Prerequisite: a fully distributed Hadoop environment is already set up.

Role          node01         node02         node03         node04
NameNode      NameNode01     NameNode02     NameNode03     -
DataNode      -              DataNode01     DataNode02     DataNode03
JournalNode   JournalNode01  JournalNode02  JournalNode03  -
  1. Configure Hadoop on node01, node02, node03, and node04

On node01, edit /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
Add:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://manualHACluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/data/tmp/manual_ha</value>
  </property>
</configuration>
On node01, edit /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
Add:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>manualHACluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.manualHACluster</name>
    <value>NN01,NN02,NN03</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.manualHACluster.NN01</name>
    <value>node01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.manualHACluster.NN02</name>
    <value>node02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.manualHACluster.NN03</name>
    <value>node03:8020</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node01:8485;node02:8485;node03:8485/manualHACluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.manualHACluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/data/tmp/manual_ha</value>
  </property>
</configuration>
Copy /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml and /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml from node01 to node02, node03, and node04:
scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml node02:/opt/hadoop/hadoop-3.1.1/etc/hadoop/
scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml node03:/opt/hadoop/hadoop-3.1.1/etc/hadoop/
scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml node04:/opt/hadoop/hadoop-3.1.1/etc/hadoop/
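To confirm the copies landed intact, here is a quick hedged check from node01 (this is not part of the original steps; it assumes passwordless SSH from node01 to the other nodes, a standard prerequisite of the fully distributed setup):

# Compare checksums of the two config files across all nodes.
for h in node02 node03 node04; do
  echo "== $h =="
  ssh "$h" md5sum /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
done
# Local copies on node01, for comparison:
md5sum /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml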

  2. Configure environment variables on node01, node02, and node03

On node01, edit /etc/profile:
vim /etc/profile
Add:

export HDFS_JOURNALNODE_USER=root

Copy /etc/profile from node01 to node02 and node03:
scp /etc/profile node02:/etc/ && scp /etc/profile node03:/etc/
On node01, node02, and node03, run:
. /etc/profile
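A quick sanity check (not in the original steps) that the variable took effect on each node:

echo $HDFS_JOURNALNODE_USER   # should print: root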

  3. Start the JournalNodes

On node01, node02, and node03, run:
hdfs --daemon start journalnode
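Before moving on, it is worth confirming that each JournalNode is up and listening on port 8485, the port referenced in dfs.namenode.shared.edits.dir above (a hedged check; ss is assumed to be available, netstat -tlnp works as well):

jps | grep JournalNode
ss -tlnp | grep 8485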

  4. Format the NameNode

On node01 (and only node01; format exactly once), run:
hdfs namenode -format
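If the format succeeded, NameNode metadata should appear under hadoop.tmp.dir. The path below assumes the default dfs.namenode.name.dir of ${hadoop.tmp.dir}/dfs/name:

ls /opt/hadoop/data/tmp/manual_ha/dfs/name/current/
# a fresh format typically produces fsimage_*, VERSION, and seen_txid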

  5. Start the NameNode

On node01, run:
hdfs --daemon start namenode
On node02 and node03, run:
hdfs namenode -bootstrapStandby
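bootstrapStandby copies the freshly formatted namespace from the running NameNode on node01. A hedged sanity check (same default name-directory assumption as above) is that the clusterID in the VERSION file matches on all three NameNode hosts:

grep clusterID /opt/hadoop/data/tmp/manual_ha/dfs/name/current/VERSION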

  6. Start Hadoop

On any one of node01/node02/node03/node04, run:
start-dfs.sh
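Because this cluster uses manual failover (there is no ZooKeeper/ZKFC in this setup), all three NameNodes come up in standby state and one must be promoted by hand. For example, to make NN01 (node01) active:

hdfs haadmin -transitionToActive NN01
hdfs haadmin -getServiceState NN01   # expect: active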

  7. Check the processes

On node01, node02, node03, and node04, run:
jps
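Based on the role table at the top, roughly the following processes should appear on each node (besides Jps itself):

node01: NameNode, JournalNode
node02: NameNode, DataNode, JournalNode
node03: NameNode, DataNode, JournalNode
node04: DataNode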

  8. Access the web UIs

NameNode (node01): http://192.168.163.191:9870
NameNode (node02): http://192.168.163.192:9870
NameNode (node03): http://192.168.163.193:9870
DataNode (node02): http://192.168.163.192:9864
DataNode (node03): http://192.168.163.193:9864
DataNode (node04): http://192.168.163.194:9864
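To check reachability from the shell instead of a browser, a hedged alternative (assuming curl is installed):

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.163.191:9870/   # expect 200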
