Cluster plan:
NN-1: NameNode (active)
NN-2: NameNode (standby)
DN: DataNode
ZK: ZooKeeper
ZKFC: ZooKeeper Failover Controller
JUN: JournalNode
| Server | Processes |
|---|---|
| node01 | NN-1, ZKFC, JUN |
| node02 | NN-2, DN, ZK, ZKFC, JUN |
| node03 | DN, ZK, JUN |
| node04 | DN, ZK |
Synchronize the system clocks on all nodes:
yum install ntp
ntpdate ntp1.aliyun.com
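ntpdate performs a one-shot sync; to keep the clocks aligned afterwards, a cron entry can rerun it periodically (a sketch; the 30-minute interval is an arbitrary choice, not from the original steps):

```
# /etc/crontab — resync every 30 minutes (interval is an assumption)
*/30 * * * * root /usr/sbin/ntpdate ntp1.aliyun.com
```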
Passwordless SSH must work along the following paths (node01 reaches every node; node02 reaches node01 so either NameNode can fence the other):
node01->node01
node01->node02
node01->node03
node01->node04
node02->node01
① Run on all nodes:
——Command: ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
② On node01, add node01's public key to the whitelist of every node:
——Commands:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node01
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node02
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node03
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node04
③ On node02, add node02's public key to node01's whitelist:
——Command:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node01
Note: node01–node04 are hostnames; IP addresses can be used instead.
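The four ssh-copy-id calls above can be collapsed into one loop; echo is left in so the generated commands can be reviewed first (remove it to actually copy the keys):

```shell
# Print the ssh-copy-id command for each node; drop "echo" to execute.
for host in node01 node02 node03 node04; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "root@$host"
done
```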
Remove the pre-installed OpenJDK and unpack the Oracle JDK:
yum remove *openjdk*
tar -zxvf jdk-8u131-linux-x64.tar.gz
Add the JDK to the environment:
vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
export PATH=$PATH:$JAVA_HOME/bin
Reload the profile:
source /etc/profile
or: . /etc/profile
Verify the installation:
java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
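After sourcing /etc/profile, a quick check confirms the JDK bin directory actually made it onto PATH (the paths match the export lines above):

```shell
# Verify that $JAVA_HOME/bin is on PATH after sourcing /etc/profile.
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
export PATH=$PATH:$JAVA_HOME/bin
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JAVA_HOME is on PATH" ;;
  *)                    echo "JAVA_HOME is missing from PATH" ;;
esac
```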
Edit hdfs-site.xml, setting the following properties:

| Property | Value |
|---|---|
| dfs.nameservices | mycluster |
| dfs.ha.namenodes.mycluster | nn1,nn2 |
| dfs.namenode.rpc-address.mycluster.nn1 | node01:8020 |
| dfs.namenode.rpc-address.mycluster.nn2 | node02:8020 |
| dfs.namenode.http-address.mycluster.nn1 | node01:50070 |
| dfs.namenode.http-address.mycluster.nn2 | node02:50070 |
| dfs.namenode.shared.edits.dir | qjournal://node01:8485;node02:8485;node03:8485/mycluster |
| dfs.journalnode.edits.dir | /var/sxt/hadoop/ha/jn |
| dfs.client.failover.proxy.provider.mycluster | org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider |
| dfs.ha.fencing.methods | sshfence |
| dfs.ha.fencing.ssh.private-key-files | /root/.ssh/id_rsa |
| dfs.ha.automatic-failover.enabled | true |
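In hdfs-site.xml each name/value pair takes the standard Hadoop property form; for example, the first entry above becomes:

```xml
<!-- hdfs-site.xml: logical name for the HA nameservice -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
```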
Edit core-site.xml, setting:

| Property | Value |
|---|---|
| fs.defaultFS | hdfs://mycluster |
| ha.zookeeper.quorum | node02:2181,node03:2181,node04:2181 |
| hadoop.tmp.dir | /var/abc/hadoop/cluster |
6. Edit the slaves file
Located at: /<extract dir>/etc/hadoop/slaves
Replace localhost with node02, node03, node04 (one hostname per line).
7. Distribute the configured package to the other nodes
——Commands:
scp -r hadoop-2.6.5 node02:`pwd`
scp -r hadoop-2.6.5 node03:`pwd`
scp -r hadoop-2.6.5 node04:`pwd`
Note: the installation directory should be the same on every node.
Unpack ZooKeeper and create its configuration (zoo_sample.cfg lives in the conf directory):
tar zxf zookeeper-3.4.10.tar.gz
mv zoo_sample.cfg zoo.cfg
Copy ZooKeeper to the other quorum nodes:
scp -r zookeeper-3.4.10 node03:`pwd`
scp -r zookeeper-3.4.10 node04:`pwd`
8.6 After copying, create the myid file on each ZooKeeper node; the IDs must increase in sequence (e.g. node02 = 1, node03 = 2, node04 = 3).
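The myid layout can be sketched locally; /tmp/zk-demo stands in for the real dataDir configured in zoo.cfg (the path is an assumption), and each file holds exactly one line with the node's server number:

```shell
# Each ZooKeeper node gets a one-line myid file whose number matches its
# server.N entry in zoo.cfg; /tmp/zk-demo is a stand-in for the real dataDir.
id=1
for host in node02 node03 node04; do
  mkdir -p "/tmp/zk-demo/$host"
  echo "$id" > "/tmp/zk-demo/$host/myid"
  id=$((id + 1))
done
cat /tmp/zk-demo/node03/myid   # prints 2
```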
8.9 Start ZooKeeper
——Command: zkServer.sh start
Check ZooKeeper's status on each node; one leader and two followers means the setup is correct.
——Command: zkServer.sh status
9.1 Start the JournalNodes (on node01, node02, and node03):
——Command: hadoop-daemon.sh start journalnode
9.2 Format and start the first NameNode (on node01):
——Commands:
hdfs namenode -format
hadoop-daemon.sh start namenode
9.3 On the other NameNode (node02), run:
——Command: hdfs namenode -bootstrapStandby
10. Initialize the ZKFC state in ZooKeeper
——Command: hdfs zkfc -formatZK
11. Stop all processes on every node
——Command: stop-dfs.sh
12. Start HDFS
——Command: start-dfs.sh
With that, the highly available, fully distributed HDFS cluster is up.
To be continued…