[Pinned] Feasibility of HBase on GlusterFS

Testing shows that, in distributed mode, GlusterFS can replace Hadoop's HDFS as the underlying data store for HBase.

The configuration procedure is as follows.

First, the cluster layout:

- Four servers, with hostnames h221, h222, d184, and d186 (passwordless SSH is set up between all of them)
- HMaster: h221
- HRegionServers: h222, d184, d186
- ZooKeeper: h221 (using HBase's built-in ZooKeeper)
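The passwordless SSH between the nodes (needed so that start-hbase.sh on h221 can launch the regionservers remotely) can be set up along these lines; the hostnames match the cluster above, but treat this as a generic sketch rather than the exact commands used originally:

```shell
# On h221: generate a key pair (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to every node, including h221 itself
for host in h221 h222 d184 d186; do
    ssh-copy-id "$host"
done

# Verify: this should log in and print the hostname without a password prompt
ssh h222 hostname
```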

On each of the four servers (h221, h222, d184, d186), create an hbase directory under /mnt to serve as the GlusterFS mount point (i.e. the GlusterFS client). Every server therefore has a local /mnt/hbase/ directory backed by GlusterFS, and HBase is pointed at this directory for its data. (Setting up the GlusterFS cluster itself is covered in the GlusterFS installation and deployment guide and is not repeated here; this article focuses on configuring HBase.)
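For completeness, mounting the GlusterFS volume at /mnt/hbase on each node would look roughly like this. The volume name `hbase-vol` is a hypothetical placeholder; use whatever name was chosen when the GlusterFS volume was created:

```shell
# On each of h221, h222, d184, d186:
mkdir -p /mnt/hbase

# Mount the GlusterFS volume (hbase-vol is an assumed volume name)
mount -t glusterfs h221:/hbase-vol /mnt/hbase

# Confirm the mount is active
df -h /mnt/hbase
```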

Next, modify HBase's three configuration files: hbase-env.sh, hbase-site.xml, and regionservers, all located in /home/xmail/hbase-0.92.1/conf/.

1. Configuration file: hbase-env.sh (identical on h221, h222, d184, and d186)

                     

# Set environment variables here.

# The java implementation to use.  Java 1.6 required.
# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/java/jdk1.7.0_02   # JDK path

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HBASE_HEAPSIZE=1000

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Uncomment below to enable java garbage collection logging in the .out file.
# export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# Uncomment below if you intend to use the EXPERIMENTAL off heap cache.
# export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize="
# Set hbase.offheapcache.percentage in hbase-site.xml to a nonzero value.


# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
#
# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true   # true: use the ZooKeeper bundled with HBase; false: use an external ZooKeeper

 

2. Configuration file: hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- Must match the GlusterFS client mount path -->
    <value>file:///mnt/hbase/</value>
  </property>

  <property>
    <!-- true: distributed mode; false: standalone mode -->
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.master</name>
    <value>h221:60000</value>
  </property>

  <property>
    <!-- ZooKeeper runs on h221 -->
    <name>hbase.zookeeper.quorum</name>
    <value>h221</value>
  </property>
</configuration>

 

 

3. Configuration file: regionservers (lists the HRegionServer hosts)

h222
d186
d184

With configuration complete, go to /home/xmail/hbase-0.92.1/bin on h221:

# ./start-hbase.sh   # start the HBase cluster
# ./hbase shell      # open the HBase shell
hbase(main):001:0> status
3 servers, 0 dead, 0.7500 average load

Configuration succeeded!
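As a quick sanity check that HBase is really writing through the GlusterFS mount, one can create a table on h221 and look for its files under /mnt/hbase on a different node. A sketch (the table and column family names are arbitrary; the on-disk layout assumes the pre-0.96 scheme, where each table gets a directory directly under hbase.rootdir):

```shell
# In the HBase shell on h221:
#   create 'probe', 'cf'
#   put 'probe', 'row1', 'cf:c1', 'value1'
#   flush 'probe'

# Then, on d184, the table's files should be visible through the shared mount:
ls /mnt/hbase/probe
```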

Reference: http://blog.nosqlfan.com/html/3371.html

Test: verify that when one HBase RegionServer goes down, the data that lived on that HRegionServer can still be read, i.e. that the underlying GlusterFS volume is truly shared across nodes.

Test method: on h222, create 6 tables in HBase (enough to ensure every HRegionServer holds data), then take h222 down and read all the tables from d184. If all the data can still be read, the underlying GlusterFS storage is shared, and GlusterFS can replace HDFS.
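The test procedure above, expressed as hbase shell commands (table and column family names are illustrative, not the ones actually used):

```shell
# On h222, in the HBase shell: create six tables and load a row into each
#   create 't1', 'cf'    ... through ...    create 't6', 'cf'
#   put 't1', 'r1', 'cf:c', 'v1'           (and likewise for t2..t6)

# Stop the regionserver on h222 (or simply power the machine off):
# ./hbase-daemon.sh stop regionserver

# On d184, after the regions have been reassigned, scan every table:
#   scan 't1'
#   ...
#   scan 't6'
```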

Test result: all the tables could be read from d184. GlusterFS can replace HDFS!
