When running importtsv to import data, the following error occurs:

[hadoop@master ~]$ hadoop jar /usr/hbase/hbase-0.94.12-security.jar importtsv
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/collect/Multimap
       at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:43)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:606)
       at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Multimap
       at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
       at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
       at java.security.AccessController.doPrivileged(Native Method)
       at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
       at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
       at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
       ... 6 more
[hadoop@master ~]$


The cause is that the old google-collect-.jar library is missing from the classpath when the jar is run. However, that library was renamed to Guava, so the file to look for is a guava-xx.jar. Since it is Hadoop that executes the jar here, the problem is that guava-xx.jar is missing from Hadoop's lib directory.
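You can confirm this by listing the Guava jars on each side (a quick check, assuming $HADOOP_HOME and $HBASE_HOME are set for this environment):

ls $HADOOP_HOME/lib | grep guava    # no output: the jar is missing here
ls $HBASE_HOME/lib  | grep guava    # HBase ships its own copy of the Guava jar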


Copy guava-xx.jar from $HBASE_HOME/lib into $HADOOP_HOME/lib to resolve the problem.
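A minimal sketch of the copy (the wildcard stands in for the actual version number, which varies by release):

cp $HBASE_HOME/lib/guava-*.jar $HADOOP_HOME/lib/

With the jar in place, re-running the same command prints the importtsv usage help instead of the exception: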


[hadoop@master lib]$ hadoop jar /usr/hbase/hbase-0.94.12-security.jar importtsv
ERROR: Wrong number of arguments: 0
Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>

Imports the given input directory of TSV data into the specified table.

The column names of the TSV data must be specified using the -Dimporttsv.columns
option. This option takes the form of comma-separated column names, where each
column name is either a simple column family, or a columnfamily:qualifier. The special
column name HBASE_ROW_KEY is used to designate that this column should be used
as the row key for each imported record. You must specify exactly one column
to be the row key, and you must specify a column name for every column that exists in the
input data. Another special column HBASE_TS_KEY designates that this column should be
used as timestamp for each record. Unlike HBASE_ROW_KEY, HBASE_TS_KEY is optional.
You must specify atmost one column as timestamp key for each imported record.
Record with invalid timestamps (blank, non-numeric) will be treated as bad record.
Note: if you use this option, then 'importtsv.timestamp' option will be ignored.

By default importtsv will load data directly into HBase. To instead generate
HFiles of data to prepare for a bulk data load, pass the option:
 -Dimporttsv.bulk.output=/path/for/output
 Note: if you do not use this option, then the target table must already exist in HBase

Other options that may be specified with -D include:
 -Dimporttsv.skip.bad.lines=false - fail if encountering an invalid line
 '-Dimporttsv.separator=|' - eg separate on pipes instead of tabs
 -Dimporttsv.timestamp=currentTimeAsLong - use the specified timestamp for the import
 -Dimporttsv.mapper.class=my.Mapper - A user-defined Mapper to use instead of org.apache.hadoop.hbase.mapreduce.TsvImporterMapper
For performance consider the following options:
 -Dmapred.map.tasks.speculative.execution=false
 -Dmapred.reduce.tasks.speculative.execution=false
[hadoop@master lib]$


At this point the exception is gone and importtsv prints its usage help, which shows the tool is working normally.
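From here, a real import follows the usage message above. A hypothetical invocation for illustration (the table name mytable, column family cf, and the HDFS input path are assumptions, not from this environment; per the note above, the target table must already exist since -Dimporttsv.bulk.output is not used):

hadoop jar /usr/hbase/hbase-0.94.12-security.jar importtsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1 \
    mytable /user/hadoop/tsv-input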