Required package:
The hbase-0.94.1 installation package
Installation steps:
1. Configure hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_37
export HBASE_MANAGES_ZK=true
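These two settings go into conf/hbase-env.sh; HBASE_MANAGES_ZK=true means HBase will start and stop its own ZooKeeper quorum. A minimal sketch, assuming HBase was unpacked to /home/hadoop/hbase-0.94.1 (the path is an assumption):

# Append the two exports to hbase-env.sh
cd /home/hadoop/hbase-0.94.1
echo 'export JAVA_HOME=/usr/java/jdk1.6.0_37' >> conf/hbase-env.sh
echo 'export HBASE_MANAGES_ZK=true' >> conf/hbase-env.sh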
2. Configure hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>dd1,dd2,dd3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>
</configuration>
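One thing worth double-checking here: hbase.rootdir must use exactly the same namenode URI as fs.default.name in Hadoop's core-site.xml (hdfs://master:9000 in this setup), otherwise HBase cannot find its root directory. A quick sanity check, assuming Hadoop is installed under /home/hadoop/hadoop-1.0.4 (the path is an assumption):

# The <value> printed here should be hdfs://master:9000
grep -A 1 'fs.default.name' /home/hadoop/hadoop-1.0.4/conf/core-site.xml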
3. Configure regionservers
dd1
dd2
dd3
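The regionservers file takes one host per line, and every node needs the same conf/ directory. A sketch of writing the file and pushing the configuration out (the HBase path and password-less ssh between nodes are assumptions):

# Write conf/regionservers, one regionserver host per line
cat > conf/regionservers <<'EOF'
dd1
dd2
dd3
EOF

# Copy the HBase configuration to every regionserver
for h in dd1 dd2 dd3; do
    scp conf/* "$h":/home/hadoop/hbase-0.94.1/conf/
done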
4. Configure hdfs-site.xml in Hadoop, adding the following property:
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
This parameter caps the number of send and receive tasks a datanode is allowed to run concurrently. The default is 256, and hadoop-defaults.xml usually does not set it. In practice that limit turns out to be on the small side: under heavy load, DFSClient throws a "could not read from stream" Exception while putting data.
A Hadoop HDFS datanode has an upper bound on the number of files that it will serve at any one time; the upper bound parameter is called xcievers (yes, this is misspelled).
Not having this configuration in place makes for strange-looking failures. Eventually you'll see a complaint in the datanode logs about the xcievers limit being exceeded, but on the run up to this, one manifestation is complaints about missing blocks. For example:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
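Note that the datanodes have to be restarted for the new value to take effect. A rough way to see how close a datanode is getting to the ceiling is to count connections on its data-transfer port (a sketch; 50010 is the default dfs.datanode.address port and may differ on your cluster):

# Run on a datanode: current connections on the data port,
# to compare against dfs.datanode.max.xcievers
netstat -tan | grep -c ':50010'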
5. Delete hadoop-core-1.0.X.jar and commons-collections-3.2.1.jar from hbase/lib, then copy the hadoop-core jar and commons-collections-3.2.1.jar from the Hadoop installation into hbase's lib directory. This keeps the Hadoop jars used by HBase at the same version as the cluster's Hadoop, which otherwise leads to version-incompatibility problems.
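A sketch of step 5 as shell commands, assuming HBase under /home/hadoop/hbase-0.94.1 and Hadoop 1.0.4 under /home/hadoop/hadoop-1.0.4 (both paths and the exact jar version are assumptions; use whatever hadoop-core jar your cluster actually runs):

cd /home/hadoop/hbase-0.94.1/lib
# Remove the jars bundled with HBase...
rm hadoop-core-1.0.*.jar commons-collections-3.2.1.jar
# ...and replace them with the ones from the running Hadoop installation
cp /home/hadoop/hadoop-1.0.4/hadoop-core-1.0.4.jar .
cp /home/hadoop/hadoop-1.0.4/lib/commons-collections-3.2.1.jar .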
6. Check whether /etc/hosts binds the hostname to 127.0.0.1; if it does, comment that line out.
Below is the note from the official documentation:
Before we proceed, make sure you are good on the below loopback prerequisite.
HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions, for example, will default to 127.0.1.1 and this will cause problems for you.
/etc/hosts should look something like this:
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
So the docs say HBase's default loopback address is 127.0.0.1 and the configuration should look like the above, where they add the "127.0.0.1 localhost" entry. In fact you can also leave that out entirely and simply comment out the offending line shown further below; that solves the problem. If this is wrong, the symptom looks like the following:
2012-12-19 22:11:46,018 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-12-19 22:11:46,024 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
Solution:
Edit /etc/hosts and comment out the 127.0.0.1 line (shown commented out below):
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.130.170 master
192.168.130.168 dd1
192.168.130.162 dd2
192.168.130.248 dd3
192.168.130.164 dd4
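Since every node resolves the others through this file, the corrected /etc/hosts typically has to be the same on all machines. A sketch of pushing it out (assumes password-less ssh and sudo on the nodes):

for h in dd1 dd2 dd3 dd4; do
    scp /etc/hosts "$h":/tmp/hosts && ssh "$h" 'sudo cp /tmp/hosts /etc/hosts'
done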
Solution (for clock-skew errors between the master and the regionservers):
1) Option 1
Add the following property to hbase-site.xml:
<property>
  <name>hbase.master.maxclockskew</name>
  <value>180000</value>
  <description>Time difference of regionserver from master</description>
</property>
2) Option 2
Adjust the time on each node so that the clock skew stays within 30s:
clock --set --date="10/29/2011 18:46:50"
clock --hctosys
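Two closing sketches: instead of setting clocks by hand, the nodes can be synced against a common NTP source, and once everything is configured the cluster can be smoke-tested with the standard HBase scripts (the NTP server name is only an example, and the test assumes it is run from the HBase directory on master):

# Keep the clocks in sync via NTP rather than clock --set (needs root on each node)
for h in master dd1 dd2 dd3; do
    ssh "$h" 'ntpdate -u pool.ntp.org && clock --systohc'
done

# Smoke test: start HBase on master and check the cluster
bin/start-hbase.sh                 # with HBASE_MANAGES_ZK=true this also starts ZooKeeper
jps                                # expect HMaster here, HRegionServer on dd1-dd3
echo "status" | bin/hbase shell    # should report 3 servers, 0 dead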