> HBase is the Hadoop database. Think of it as a distributed, scalable, big data store. Use HBase when you need random, realtime read/write access to your Big Data. This project’s goal is the hosting of very large tables — billions of rows × millions of columns — atop clusters of commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google’s Bigtable. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
>
> — Apache HBase Homepage
The following sections outline the various ways in which Titan can be used in concert with HBase.
HBase can be run as a standalone database on the same local host as Titan and the end-user application. In this model, Titan and HBase communicate with one another via a localhost socket. Running Titan over HBase requires the following setup steps:
1. Download HBase and extract the downloaded archive.
2. Start HBase by invoking the start-hbase.sh script in the bin directory inside the extracted HBase directory. To stop HBase, use stop-hbase.sh.

```
$ ./bin/start-hbase.sh
starting master, logging to ../logs/hbase-master-machine-name.local.out
```
Now, you can create an HBase TitanGraph using the following Gremlin snippet:
```
// Gremlin
conf = new BaseConfiguration();
conf.setProperty("storage.backend","hbase");
g = TitanFactory.open(conf);
```
or the following Java snippet:
```
// Java
import org.apache.commons.configuration.*;   // provides BaseConfiguration, Configuration
import com.thinkaurelius.titan.core.*;       // provides TitanFactory, TitanGraph

Configuration conf = new BaseConfiguration();
conf.setProperty("storage.backend","hbase");
TitanGraph g = TitanFactory.open(conf);
```
Note that you do not need to specify a hostname, since a connection to localhost is attempted by default.
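As a quick sanity check, a minimal sketch along the following lines can verify that the local connection works. It uses the Blueprints API that TitanGraph implements; the name property key and its value are made-up examples:

```
// Minimal smoke test against a local HBase-backed Titan graph.
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.commons.configuration.Configuration;
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import com.tinkerpop.blueprints.Vertex;

public class HBaseSmokeTest {
    public static void main(String[] args) {
        Configuration conf = new BaseConfiguration();
        conf.setProperty("storage.backend", "hbase");

        TitanGraph g = TitanFactory.open(conf);
        Vertex v = g.addVertex(null);          // Blueprints ignores the id argument
        v.setProperty("name", "hello-hbase");  // hypothetical property key/value
        g.commit();                            // persist the transaction to HBase
        System.out.println("Stored vertex " + v.getId());
        g.shutdown();
    }
}
```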
When the graph needs to scale beyond the confines of a single machine, HBase and Titan are logically separated onto different machines. In this model, the HBase cluster maintains the graph representation, and any number of Titan instances maintain socket-based read/write access to the HBase cluster. The end-user application interacts with Titan directly, within the same JVM.
For example, suppose we have a running HBase cluster with two machines at IP addresses 77.77.77.77 and 77.77.77.78. Connecting Titan to the cluster is accomplished as follows:
```
Configuration conf = new BaseConfiguration();
conf.setProperty("storage.backend","hbase");
conf.setProperty("storage.hostname","77.77.77.77,77.77.77.78");
TitanGraph g = TitanFactory.open(conf);
```
storage.hostname accepts a comma-separated list of IP addresses and hostnames for any subset of machines in the HBase cluster that Titan should connect to. Also, note that in the Gremlin shell you cannot define the types of the variables conf and g; therefore, simply leave off the type declarations.
Finally, Rexster can be wrapped around each Titan instance defined in the previous subsection. In this way, the end-user application need not be Java-based, since it can communicate with Rexster over REST. This type of deployment is great for polyglot architectures, where various components written in different languages need to reference and compute on the graph.
For example, the following URLs retrieve vertex 1 and evaluate a Gremlin traversal, respectively:

```
http://rexster.titan.machine1/mygraph/vertices/1
http://rexster.titan.machine2/mygraph/tp/gremlin?script=g.v(1).out('follows').out('created')
```
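Because these endpoints are plain HTTP, any language with an HTTP client can consume them. The sketch below shows the idea in Java using only the standard library; the rexster.titan.machine1 host and mygraph graph name are carried over from the hypothetical example URLs above:

```
// Fetch a vertex from Rexster over REST and print the raw JSON response.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RexsterRestClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical host and graph name from the example URLs above.
        URL url = new URL("http://rexster.titan.machine1/mygraph/vertices/1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // Rexster returns the vertex as JSON
            }
        } finally {
            conn.disconnect();
        }
    }
}
```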
In this case, each Rexster server would be configured to connect to the HBase cluster. The following shows the graph-specific fragment of the Rexster configuration. Refer to the Rexster configuration page for a complete example.
```
<graph>
  <graph-name>mygraph</graph-name>
  <graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
  <graph-location></graph-location>
  <graph-read-only>false</graph-read-only>
  <properties>
    <storage.backend>hbase</storage.backend>
    <storage.hostname>77.77.77.77,77.77.77.78</storage.hostname>
  </properties>
  <extensions>
    <allows>
      <allow>tp:gremlin</allow>
    </allows>
  </extensions>
</graph>
```
In addition to the general Titan Graph Configuration options, there are the following HBase-specific Titan configuration options:
| Option | Description | Value | Default | Modifiable |
|---|---|---|---|---|
| storage.tablename | Name of the HBase table in which to store the Titan-specific column families | String | titan | No |
| storage.hostname | Comma-separated list of IP addresses or hostnames of the ZooKeeper/quorum nodes that front the HBase cluster this Titan instance connects to. Leave empty to connect to localhost. | IP addresses or hostnames | – | Yes |
| storage.port | Port on which to connect to the HBase cluster nodes. Leave empty to use the default port. | Integer | – | Yes |
| storage.short-cf-names | True to replace Titan’s conventional column family names with single-character mnemonics. Short CF names save space because HBase stores the CF name in every KeyValue. | Boolean | false in 0.4.x, true in 0.5.x | No |
Please refer to the HBase configuration documentation for additional HBase configuration options and their descriptions. Any HBase configuration option prefixed with storage.hbase-config in the Titan configuration is passed on to HBase at initialization time. This allows arbitrary HBase options to be configured through Titan.
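Continuing the Java snippets above, the HBase-specific options and a pass-through option might be combined as in the following sketch; the table name mygraphtable and the non-default ZooKeeper client port 2281 are made-up values, while hbase.zookeeper.property.clientPort itself is a standard HBase option:

```
// Combine HBase-specific Titan options with a pass-through HBase option.
Configuration conf = new BaseConfiguration();
conf.setProperty("storage.backend", "hbase");
conf.setProperty("storage.hostname", "77.77.77.77,77.77.77.78");
conf.setProperty("storage.tablename", "mygraphtable"); // hypothetical table name
// The storage.hbase-config prefix hands the remainder of the key to HBase:
conf.setProperty("storage.hbase-config.hbase.zookeeper.property.clientPort", "2281"); // hypothetical port
TitanGraph g = TitanFactory.open(conf);
```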
Titan over HBase supports global vertex and edge iteration. However, note that all of these vertices and/or edges will be loaded into memory, which can cause an OutOfMemoryError. Use Faunus to iterate over all vertices or edges in large graphs.
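For graphs that do fit in memory, global iteration goes through the standard Blueprints accessors that TitanGraph implements. A minimal sketch, in which the name property key is a made-up example:

```
// Global iteration over all vertices and edges via the Blueprints API.
// Beware: on a large graph this can exhaust the heap; prefer Faunus instead.
import com.tinkerpop.blueprints.Edge;
import com.tinkerpop.blueprints.Vertex;

for (Vertex v : g.getVertices()) {
    System.out.println(v.getProperty("name")); // hypothetical property key
}
for (Edge e : g.getEdges()) {
    System.out.println(e.getLabel());
}
```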
For example, the following Rexster console session retrieves the graph and builds a small sample dataset:

```
graphNames = rexster.graphNames.toArray()
g = rexster.getGraph("titan")
v1 = g.addVertex()
v1.setProperty("name","Suspect A")
v2 = g.addVertex()
v2.setProperty("name","Victim 1")
v3 = g.addVertex()
v3.setProperty("name","Suspect B")
v4 = g.addVertex()
v4.setProperty("name","Victim 2")
v5 = g.addVertex()
v5.setProperty("name","Gang Leader")
e1 = g.addEdge(v1,v2, 'Robs')
e2 = g.addEdge(v3,v4, 'Robs')
e3 = g.addEdge(v5,v1, 'Controls')
e4 = g.addEdge(v5,v3, 'Controls')
```
Ref:
- https://github.com/thinkaurelius/titan/wiki/Using-HBase
- https://github.com/thinkaurelius/titan/wiki/Rexster-Graph-Server