Distributed deployment of HBase 1.2.5 + Hadoop 2.7.3 + ZooKeeper 3.4.6

1. Environment preparation

Three RedHat 6.8 machines:

10.9.44.60 master

10.9.44.61 slave1

10.9.44.62 slave2

—— Add these IP-to-hostname mappings to /etc/hosts on every node.
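In other words, each node's /etc/hosts gains the lines below (a sketch using the IPs above; adjust for your own network):

```
# appended to /etc/hosts on master, slave1 and slave2
10.9.44.60  master
10.9.44.61  slave1
10.9.44.62  slave2
```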

Packages: HBase 1.2.5, Hadoop 2.7.3, ZooKeeper 3.4.6, jdk-8u91-linux-x64.rpm

* Passwordless SSH login must work in both directions between every pair of cluster nodes, and from each node to itself.
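A minimal way to set up the passwordless logins (a sketch assuming OpenSSH and the same user on every node; repeat the whole sequence on each of the three machines so the trust is bidirectional):

```shell
# generate a key pair (press Enter through the prompts for an empty passphrase)
ssh-keygen -t rsa
# push the public key to every node, including this machine itself
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
# verify: each command must print the remote hostname without a password prompt
ssh master hostname
ssh slave1 hostname
ssh slave2 hostname
```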

2. Install the JDK on all three machines and add these environment variables to /etc/profile:

# rpm -ivh jdk-8u91-linux-x64.rpm
export JAVA_HOME=/usr/java/jdk1.8.0_91  # verify this path matches your JDK install
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/data/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

3. Install the Hadoop cluster (install on one machine, then copy to the other two)

Extract to the install directory:

# tar -xzvf hadoop-2.7.3.tar.gz -C /data/

Edit the configuration files

① /data/hadoop-2.7.3/etc/hadoop/core-site.xml


    
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>

② /data/hadoop-2.7.3/etc/hadoop/hadoop-env.sh

Confirm the JDK path:

export JAVA_HOME=/usr/java/jdk1.8.0_91

③ /data/hadoop-2.7.3/etc/hadoop/hdfs-site.xml

Create the Hadoop name and data directories:

mkdir -p /data/hadoop-2.7.3/hadoop/data
mkdir -p /data/hadoop-2.7.3/hadoop/name

   
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/data/hadoop-2.7.3/hadoop/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/data/hadoop-2.7.3/hadoop/data</value>
    </property>
    <property>
        <!-- this cluster has only two DataNodes (slave1, slave2), so a
             replication factor above 2 would leave blocks under-replicated -->
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

④ /data/hadoop-2.7.3/etc/hadoop/mapred-site.xml

cp mapred-site.xml.template mapred-site.xml


    
<configuration>
    <property>
        <!-- Hadoop 2.x runs MapReduce on YARN; the Hadoop 1.x
             mapred.job.tracker setting is ignored in this version -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

⑤ Edit the slaves file

 

# cat /data/hadoop-2.7.3/etc/hadoop/slaves
slave1
slave2

All three machines share the same configuration and environment, so simply copy the tree:

scp -r /data/hadoop-2.7.3/ slave1:/data/
scp -r /data/hadoop-2.7.3/ slave2:/data/

Start Hadoop (from /data/hadoop-2.7.3):

./bin/hadoop namenode -format		# format the NameNode before the very first start
./sbin/start-all.sh			# start HDFS and YARN
# jps					// use jps to check the running Java processes
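Beyond eyeballing the jps output, a small check like this (a hypothetical snippet, not from the original article) warns if an expected daemon is missing; with this layout the master runs the NameNode, SecondaryNameNode and ResourceManager, while each slave runs a DataNode and NodeManager:

```shell
# on master; on the slaves check DataNode and NodeManager instead
for d in NameNode SecondaryNameNode ResourceManager; do
    jps | grep -q "$d" || echo "WARNING: $d is not running"
done
```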

4. Install the ZooKeeper cluster (install on one machine, then copy to the other two)

① Extract to the install directory

# tar -xzvf zookeeper-3.4.6.tar.gz -C /data/
# cd /data/zookeeper-3.4.6/conf/
# cp zoo_sample.cfg zoo.cfg
# mkdir /data/zookeeper-3.4.6/zkdata
# mkdir /data/zookeeper-3.4.6/logs
[root@master conf]# grep -v ^# zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper-3.4.6/zkdata
dataLogDir=/data/zookeeper-3.4.6/logs
clientPort=2181
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

 

② In the zkdata directory created above, create a myid file whose content is that server's number: a myid containing 1 corresponds to server.1, a myid containing 2 to server.2, and so on.
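Typing myid by hand on three nodes is easy to get wrong; the hypothetical helper below (not part of the original article) derives it from the server.N lines instead. It uses /tmp demo paths so it is safe to try anywhere; on a real node, point CFG at the actual zoo.cfg, set HOST=$(hostname), and write into /data/zookeeper-3.4.6/zkdata:

```shell
# demo paths -- on a real node use CFG=/data/zookeeper-3.4.6/conf/zoo.cfg,
# ZKDATA=/data/zookeeper-3.4.6/zkdata and HOST=$(hostname)
ZKDATA=/tmp/zkdata-demo
CFG=/tmp/zoo-demo.cfg
mkdir -p "$ZKDATA"
cat > "$CFG" <<'EOF'
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
EOF
HOST=slave1
# pull the N out of "server.N=$HOST:2888:3888"
MYID=$(sed -n "s/^server\.\([0-9]*\)=${HOST}:.*/\1/p" "$CFG")
echo "$MYID" > "$ZKDATA/myid"
cat "$ZKDATA/myid"   # -> 2
```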

Start ZooKeeper by running ./bin/zkServer.sh start on each ZK machine.

// Errors like the following may appear during startup:

2019-01-13 17:56:06,208 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-01-13 17:56:06,209 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  2, proposed zxid=0x0
2019-01-13 17:56:06,214 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-01-13 17:56:06,215 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-01-13 17:56:06,215 [myid:2] - WARN  [WorkerSender[myid=2]:QuorumCnxManager@382] - Cannot open channel to 3 at election address slave2/10.9.44.62:3888
java.net.ConnectException: Connection refused
  at java.net.PlainSocketImpl.socketConnect(Native Method)
  at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
  at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
  at java.net.Socket.connect(Socket.java:589)
  at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
  at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
  at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
  at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
  at java.lang.Thread.run(Thread.java:745)
2019-01-13 17:56:06,216 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)

........................

① Check the firewall, the server.1=master:2888:3888 entries in zoo.cfg, and /etc/hosts.

② On startup, every ZK node tries to connect to the other nodes in the ensemble; the nodes started first cannot reach those not yet started, so the exceptions in the early part of the log above can be ignored.
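To rule out the firewall quickly, you can probe the ZooKeeper ports from each node. check_port below is a hypothetical helper (not a standard command) built on bash's /dev/tcp; the demo probes a local port that should be closed:

```shell
# print "open" or "closed" for host:port, with a 2-second timeout
check_port() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 open"
    else
        echo "$1:$2 closed"
    fi
}

# on the cluster, probe every node's client/quorum/election ports:
#   for h in master slave1 slave2; do
#       for p in 2181 2888 3888; do check_port "$h" "$p"; done
#   done
check_port 127.0.0.1 1   # demo against a port that should be closed
```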

5. Install the HBase cluster

Extract the HBase tarball to /data/ as before, then add the following to /data/hbase-1.2.5/conf/hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_91
export HBASE_CLASSPATH=/data/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false

Edit /data/hbase-1.2.5/conf/hbase-site.xml:


    
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave1,slave2</value>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>60000000</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
</configuration>
    

Edit /data/hbase-1.2.5/conf/regionservers:

# cat /data/hbase-1.2.5/conf/regionservers 
slave1
slave2

Copy the installation to the other nodes:

scp -r /data/hbase-1.2.5/ slave1:/data

scp -r /data/hbase-1.2.5/ slave2:/data

6. Start the cluster

Start ZooKeeper (on every node):

/data/zookeeper-3.4.6/bin/zkServer.sh start

Start Hadoop (on master):

/data/hadoop-2.7.3/sbin/start-all.sh

Start HBase (on master):

/data/hbase-1.2.5/bin/start-hbase.sh
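With everything up, a quick smoke test from the HBase shell (a sketch; the table and column family names are arbitrary, and the table is dropped again at the end):

```shell
/data/hbase-1.2.5/bin/hbase shell <<'EOF'
status
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:a', 'value1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF
```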

---
Verification (web UIs):
YARN:  http://IP:8088/cluster/cluster
HBase: http://IP:16010/master-status
HDFS:  http://IP:50070/dfshealth.html#tab-overview

 
