Reposted from: http://blog.csdn.net/dajuezhao/article/details/6365053
I. Generating HFiles with MapReduce
package insert.tools.hfile;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.KeyValueSortReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TestHFileToHBase {

    public static class TestHFileToHBaseMapper extends
            Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // input lines are tab-separated: <rowkey>\t<value>
            String[] values = value.toString().split("\t", 2);
            byte[] row = Bytes.toBytes(values[0]);
            ImmutableBytesWritable k = new ImmutableBytesWritable(row);
            KeyValue kvProtocol = new KeyValue(row, "PROTOCOLID".getBytes(),
                    "PROTOCOLID".getBytes(), values[1].getBytes());
            context.write(k, kvProtocol);
            // KeyValue kvSrcip = new KeyValue(row, "SRCIP".getBytes(),
            // "SRCIP".getBytes(), values[1].getBytes());
            // context.write(k, kvSrcip);
            // HFileOutputFormat.getRecordWriter
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "TestHFileToHBase");
        job.setJarByClass(TestHFileToHBase.class);
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(KeyValue.class);
        job.setMapperClass(TestHFileToHBaseMapper.class);
        job.setReducerClass(KeyValueSortReducer.class);
        // job.setOutputFormatClass(org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.class);
        // HFileOutputFormat here is the author's modified version that supports
        // multiple column families at once (see note 3 below)
        job.setOutputFormatClass(HFileOutputFormat.class);
        // job.setNumReduceTasks(4);
        // job.setPartitionerClass(org.apache.hadoop.hbase.mapreduce.SimpleTotalOrderPartitioner.class);
        // HBaseAdmin admin = new HBaseAdmin(conf);
        HTable table = new HTable(conf, "hua"); // target table, needed by configureIncrementalLoad
        HFileOutputFormat.configureIncrementalLoad(job, table);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
III. Notes on generating HFiles with MapReduce
1. Whichever stage (map or reduce) produces the final output, the output key/value types should be <ImmutableBytesWritable, KeyValue> or <ImmutableBytesWritable, Put>.
2. If the output value type is KeyValue, pair it with KeyValueSortReducer; if it is Put, pair it with PutSortReducer (a sketch of the Put variant follows after this list).
3. The MR example calls job.setOutputFormatClass(HFileOutputFormat.class); the HFileOutputFormat used there is a modified version that can generate HFiles for multiple column families in one job, whereas the stock implementation only organizes a single column family into HFiles at a time.
4. The MR example calls HFileOutputFormat.configureIncrementalLoad(job, table), which configures the job automatically. SimpleTotalOrderPartitioner requires the keys to be totally ordered first and then partitions them across the reducers, so that the [min, max] key ranges handled by different reducers never overlap. This is necessary because when the HFiles are loaded into HBase, the keys within each Region must be strictly ordered.
5. The HFiles produced by the MR example are stored on HDFS, with one subdirectory per column family under the output path. Loading the HFiles into HBase essentially moves them into HBase Regions, after which the column-family subdirectories no longer contain the HFiles.
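For note 2, here is a minimal sketch of the Put-based variant, assuming the same tab-separated input and the same PROTOCOLID family/qualifier as the example above; the class name TestHFileToHBasePutMapper is only illustrative, and the old-style Put.add API matches the HBase version used elsewhere in this post.

package insert.tools.hfile;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical Put-emitting mapper, mirroring TestHFileToHBaseMapper above.
public class TestHFileToHBasePutMapper extends
        Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // input lines are tab-separated: <rowkey>\t<value>
        String[] values = value.toString().split("\t", 2);
        byte[] row = Bytes.toBytes(values[0]);
        Put put = new Put(row);
        // same family/qualifier pair as the KeyValue version: PROTOCOLID:PROTOCOLID
        put.add(Bytes.toBytes("PROTOCOLID"), Bytes.toBytes("PROTOCOLID"),
                Bytes.toBytes(values[1]));
        context.write(new ImmutableBytesWritable(row), put);
    }
}

In the job setup this would replace the KeyValue settings: job.setOutputValueClass(Put.class) and job.setReducerClass(PutSortReducer.class); the rest of the configuration, including HFileOutputFormat.configureIncrementalLoad, stays the same.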
IV. Bulk loading HFiles into HBase
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;

public class TestLoadIncrementalHFileToHBase {

    // private static final byte[] TABLE = Bytes.toBytes("hua");
    // private static final byte[] QUALIFIER = Bytes.toBytes("PROTOCOLID");
    // private static final byte[] FAMILY = Bytes.toBytes("PROTOCOLID");

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // byte[] TABLE = Bytes.toBytes("hua");
        byte[] TABLE = Bytes.toBytes(args[0]); // args[0]: table name
        HTable table = new HTable(conf, TABLE);
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        loader.doBulkLoad(new Path(args[1]), table); // args[1]: HFile output directory
        // loader.doBulkLoad(new Path("/hua/testHFileResult/"), table);
    }
}
V. Notes on bulk loading HFiles into HBase
1. HBase's LoadIncrementalHFiles.doBulkLoad method loads the generated HFiles into the table. In the example above, the first command-line argument is the table name and the second is the HFile path (the output path of the MR job above); alternatively, the column families can be loaded one by one into the corresponding column families of the HBase table.
2. Related links on bulk loading:
http://hbase.apache.org/docs/r0.89.20100726/bulk-loads.html
http://hbase.apache.org/docs/r0.20.6/api/org/apache/hadoop/hbase/mapreduce/package-summary.html#bulk
http://genius-bai.javaeye.com/blog/641927
3. Loading can be done either from code or from a script. There are two code-based approaches: one is
hadoop jar hbase-VERSION.jar completebulkload /myoutput mytable
and the other is the TestLoadIncrementalHFileToHBase class above (a programmatic sketch of the completebulkload command follows at the end of this section).
The script-based approach is: jruby $HBASE_HOME/bin/loadtable.rb hbase-mytable hadoop-hbase-hfile-outputdir.
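As a sketch of the completebulkload path mentioned in note 3, LoadIncrementalHFiles can be driven through ToolRunner. This assumes LoadIncrementalHFiles implements Hadoop's Tool interface, as in the HBase releases this post targets; the class name BulkLoadDriver is only illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.util.ToolRunner;

public class BulkLoadDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // arguments are the same as for completebulkload: <hfile-output-dir> <table-name>
        int ret = ToolRunner.run(conf, new LoadIncrementalHFiles(conf), args);
        System.exit(ret);
    }
}

This is functionally the same as the TestLoadIncrementalHFileToHBase class above, except that the argument parsing and table lookup are left to the tool itself.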