Hadoop Series: HBase (Distributed Database) Installation and Configuration

1. Install HBase
  cd /root/soft
  tar zxvf hbase-0.98.5-hadoop2-bin.tar.gz
  mv hbase-0.98.5-hadoop2  /usr/local/hadoop/hbase
2. Add environment variables (on all nodes)
  #vim /etc/profile
  export HBASE_HOME=/usr/local/hadoop/hbase
  export PATH=$PATH:/usr/local/hadoop/hbase/bin
  #source /etc/profile
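A quick check that the variables are in effect on the current shell (hbase version simply prints the release unpacked in step 1):
  echo $HBASE_HOME
  hbase version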
3. Edit the hbase-env.sh, hbase-site.xml, and regionservers configuration files
  cd  /usr/local/hadoop/hbase/conf
  #vim hbase-env.sh (append the following)
  export JAVA_HOME=/usr/java/default
  export HADOOP_HOME=/usr/local/hadoop
  export HBASE_HOME=/usr/local/hadoop/hbase
  export PATH=$PATH:/usr/local/hadoop/hbase/bin  
  export HBASE_MANAGES_ZK=true  
  //When HBASE_MANAGES_ZK=false, HBase uses a separately deployed ZooKeeper; when true, it uses the ZooKeeper instance bundled with and managed by HBase.
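Before moving on, it is worth confirming that the JAVA_HOME set above really points at a JDK on every node; a wrong path here is a common reason for start-hbase.sh to fail. A minimal check, assuming the /usr/java/default path used above:
  ls -l /usr/java/default/bin/java
  /usr/java/default/bin/java -version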
#When using the bundled (default) ZooKeeper, edit hbase-site.xml as follows:

 <configuration>  
    <property>  
        <name>hbase.rootdir</name>  
        <value>hdfs://hdfs-master:9000/hbase</value>
         <!-- Must match the Hadoop NameNode hostname and port. HBase does not accept an IP address for this setting; only the hostname works. -->
    </property>  
    <property>  
        <name>hbase.cluster.distributed</name>  
        <value>true</value>  
    </property>  
    <property>  
        <name>hbase.zookeeper.quorum</name>  
        <value>hdfs-master,hdfs-slave1,hdfs-slave2</value>
    </property>  
    <property>  
        <name>hbase.master</name>  
        <value>192.168.3.10:60000</value>  
    </property>  
    <property>  
        <name>zookeeper.session.timeout</name>  
        <value>60000</value>  
    </property>  
    <property>  
        <name>hbase.zookeeper.property.clientPort</name>  
        <value>2222</value>  
    </property>  
  </configuration>
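Because hbase.rootdir only accepts a hostname, every node must be able to resolve hdfs-master (and the slave hostnames) consistently. A minimal sanity check, assuming the hostnames are mapped in /etc/hosts as in the earlier Hadoop setup:
  grep hdfs- /etc/hosts
  ping -c 1 hdfs-master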

#When using an independently installed ZooKeeper, edit hbase-site.xml as follows:

 <configuration>  
    <property>  
        <name>hbase.rootdir</name>  
        <value>hdfs://hdfs-master:9000/hbase</value>
    </property>  
    <property>  
        <name>hbase.cluster.distributed</name>  
        <value>true</value>  
    </property>  
    <property>  
        <name>hbase.zookeeper.quorum</name>  
        <value>hdfs-master,hdfs-slave1,hdfs-slave2</value>
    </property>  
    <property>  
        <name>hbase.master</name>  
        <value>hdfs-master:60000</value>  
    </property>  
  </configuration>
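In this case remember to set export HBASE_MANAGES_ZK=false in hbase-env.sh, and make sure the external ensemble is reachable from every HBase node. A quick reachability sketch, assuming the ensemble uses the default client port 2181 and that nc is installed (ruok should answer imok):
  echo ruok | nc hdfs-master 2181
  echo stat | nc hdfs-master 2181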

#vim regionservers (list the hostnames of all DataNode nodes here)
  hdfs-slave1  
  hdfs-slave2  
4. Distribute the HBase directory to the other DataNode nodes in the cluster
   scp -r /usr/local/hadoop/hbase [email protected]:/usr/local/hadoop/
   scp -r /usr/local/hadoop/hbase [email protected]:/usr/local/hadoop/
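The /etc/profile change from step 2 still has to be made on each node by hand. A quick check that the copy landed on both slaves:
   ssh [email protected] 'ls /usr/local/hadoop/hbase/conf'
   ssh [email protected] 'ls /usr/local/hadoop/hbase/conf'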
5. Start HBase on the NameNode:
 /usr/local/hadoop/hbase/bin/start-hbase.sh
//Check with the jps command on the NameNode
[root@hdfs-master soft]# jps
10546 Jps
2282 SecondaryNameNode
10040 HQuorumPeer
10124 HMaster
2127 NameNode
2437 ResourceManager
//Check with the jps command on a DataNode
[root@hdfs-slave1 hadoop]# jps
836 DataNode
3140 HRegionServer
3329 Jps
3028 HQuorumPeer
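If any of the daemons above is missing, the logs under /usr/local/hadoop/hbase/logs/ on the affected node are the first place to look. Overall cluster health can also be queried non-interactively (a minimal check; it should report both region servers as live):
  echo "status" | hbase shell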
6. Test HBase functionality
 hbase shell
6.1 List tables and create a student table (with name and address column families)
  hbase(main):015:0> list
  TABLE                                                                           
  0 row(s) in 0.0220 seconds
  => []
  hbase(main):016:0> create 'student','name','address'  
  0 row(s) in 0.4350 seconds
  => Hbase::Table - student
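To confirm which column families the table was created with, describe it in the same shell (output omitted here):
  describe 'student'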
6.2 Insert a record (a put writes a single column at a time)
  hbase(main):017:0> put 'student','1','name','tom'
  0 row(s) in 0.2500 seconds
6.3 Query a record by its row key
   hbase(main):018:0> get 'student','1'
   COLUMN                CELL                                                      
   name:                timestamp=1411002916692, value=tom                        
  1 row(s) in 0.0260 seconds
6.4 Insert a home address into the student's address column family
  hbase(main):019:0> put 'student','1','address:home','shenzhen street'
  0 row(s) in 0.0180 seconds
6.5 Query the student's home address
  hbase(main):020:0> get 'student','1',{COLUMN=>'address:home'}
  COLUMN                CELL                                                      
  address:home         timestamp=1411003134400, value=shenzhen street            
  1 row(s) in 0.0250 seconds
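To see every cell stored so far in one pass, the whole table can be scanned; the output lists each row key together with its columns, values, and timestamps:
  scan 'student'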
6.6 Drop a table (it must be disabled first with disable '<table name>')
  hbase(main):023:0> disable "student"
  0 row(s) in 1.3480 seconds
  hbase(main):024:0> drop "student"
  0 row(s) in 0.2060 seconds
  hbase(main):025:0> list
  TABLE                                                                           
  0 row(s) in 0.0210 seconds
  => []
7. Manage and view the HBase database through the web UI.
  HMaster:http://192.168.3.10:60010/master.jsp
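60010 is the default master info port in HBase 0.98; each RegionServer also serves its own status page on port 60030. A quick command-line check that the master UI is up, assuming curl is available:
  curl -I http://192.168.3.10:60010/master.jsp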
