sqoop import --connect jdbc:mysql://localhost/db --username foo --table TEST
2. Username and password
sqoop import --connect jdbc:mysql://database.example.com/employees \
--username aaron --password 12345
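A plaintext password on the command line is visible to other users; a safer sketch (-P prompts at the console; --password-file, available in Sqoop 1.4.4+, reads the password from a protected file resolved through the Hadoop filesystem; the path shown is hypothetical):
sqoop import --connect jdbc:mysql://database.example.com/employees \
--username aaron -P
sqoop import --connect jdbc:mysql://database.example.com/employees \
--username aaron --password-file /user/aaron/.password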
3. Specifying a JDBC driver
sqoop import --driver com.microsoft.jdbc.sqlserver.SQLServerDriver \
--connect ...
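For example, a complete SQL Server import might look like this (hostname, port, and database name are hypothetical; this older Microsoft driver uses the jdbc:microsoft:sqlserver:// URL scheme):
sqoop import --driver com.microsoft.jdbc.sqlserver.SQLServerDriver \
--connect 'jdbc:microsoft:sqlserver://sqlhost:1433;DatabaseName=corp' \
--table EMPLOYEES --username aaron -P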
4. Importing with a free-form SQL query
sqoop import \
--query 'SELECT a.*, b.* FROM a JOIN b ON (a.id = b.id) WHERE $CONDITIONS' \
--split-by a.id --target-dir /user/foo/joinresults
To import the rows in order, run the query with a single map task:
sqoop import \
--query 'SELECT a.*, b.* FROM a JOIN b ON (a.id = b.id) WHERE $CONDITIONS' \
-m 1 --target-dir /user/foo/joinresults
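Sqoop replaces $CONDITIONS with a range predicate on the --split-by column so that each mapper reads a disjoint slice; roughly (the boundary values below are purely illustrative):
# mapper 1 executes: SELECT ... WHERE (a.id >= 0) AND (a.id < 5000)
# mapper 2 executes: SELECT ... WHERE (a.id >= 5000) AND (a.id <= 9999)
With -m 1 a trivially true predicate is substituted instead.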
Choosing the import directory: --warehouse-dir sets a parent directory (a subdirectory named after the table is created beneath it), while --target-dir names the destination directory exactly.
sqoop import --connect <connect-str> --table foo --warehouse-dir /shared
or
sqoop import --connect <connect-str> --table foo --target-dir /dest
9. Passing arguments to the fast-path (direct) tool: everything after a bare -- is handed to the underlying utility. The command below tells MySQL to use the latin1 character set:
sqoop import --connect jdbc:mysql://server.foo.com/db --table bar \
--direct -- --default-character-set=latin1
Null handling: write \N (Hive's encoding for NULL) for both string and non-string columns:
sqoop import ... --null-string '\\N' --null-non-string '\\N'
Frequently used properties can be set once in conf/sqoop-site.xml:
<property>
  <name>property.name</name>
  <value>property.value</value>
</property>
If a property is not set there, it must be passed on every command line:
sqoop import -D property.name=property.value ...
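For example, to set the MapReduce job name for one run (mapreduce.job.name is a standard Hadoop property; the value is illustrative):
sqoop import -D mapreduce.job.name=nightly-employees-import \
--connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES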
# Import only selected columns
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--columns "employee_id,first_name,last_name,job_title"
# Use 8 parallel map tasks
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
-m 8
# Direct (fast) mode
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--direct
# Store as SequenceFiles, generating a custom record class
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--class-name com.foocorp.Employee --as-sequencefile
# Custom delimiters
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--fields-terminated-by '\t' --lines-terminated-by '\n' \
--optionally-enclosed-by '\"'
# Import into Hive
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--hive-import
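Two commonly combined flags (a sketch; the Hive table name emps is illustrative): --hive-table sets the destination table name and --hive-overwrite replaces any existing data:
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--hive-import --hive-table emps --hive-overwrite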
# Filter rows with a WHERE clause
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--where "start_date > '2010-01-01'"
# Use dept_id as the split column
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--split-by dept_id
# Append to an existing dataset
$ sqoop import --connect jdbc:mysql://db.foo.com/somedb --table sometable \
--where "id > 100000" --target-dir /incremental_dataset --append
# Import every table in the database
sqoop import-all-tables --connect jdbc:mysql://db.foo.com/corp
sqoop-export with --update-key turns each input record into an UPDATE keyed on the given column instead of an INSERT:
sqoop-export --table foo --update-key id --export-dir /path/to/data --connect …
UPDATE foo SET msg='this is a test', bar=42 WHERE id=0;
UPDATE foo SET msg='some more data', bar=100 WHERE id=1;
...
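In the default updateonly mode, input rows whose key does not yet exist in the table are silently skipped; --update-mode allowinsert turns the export into an upsert (a sketch; not every connector supports it):
sqoop-export --table foo --update-key id --update-mode allowinsert \
--export-dir /path/to/data --connect …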
$ sqoop export --connect jdbc:mysql://db.example.com/foo --table bar \
--export-dir /results/bar_data
$ sqoop export --connect jdbc:mysql://db.example.com/foo --table bar \
--export-dir /results/bar_data --validate
# Instead of inserting into a table, call the stored procedure barproc for each record
$ sqoop export --connect jdbc:mysql://db.example.com/foo --call barproc
26. Validate: compares the row count of the source with that of the copied target. It is driven by three interfaces:

Validator
Property: validator
Description: Driver for validation, must implement org.apache.sqoop.validation.Validator
Supported values: The value has to be a fully qualified class name.
Default value: org.apache.sqoop.validation.RowCountValidator

Validation Threshold
Property: validation-threshold
Description: Drives the decision based on the validation meeting the threshold or not. Must implement org.apache.sqoop.validation.ValidationThreshold
Supported values: The value has to be a fully qualified class name.
Default value: org.apache.sqoop.validation.AbsoluteValidationThreshold

Validation Failure Handler
Property: validation-failurehandler
Description: Responsible for handling failures, must implement org.apache.sqoop.validation.ValidationFailureHandler
Supported values: The value has to be a fully qualified class name.
Default value: org.apache.sqoop.validation.LogOnFailureHandler

27. Validation examples
$ sqoop import --connect jdbc:mysql://db.foo.com/corp \
--table EMPLOYEES --validate
$ sqoop export --connect jdbc:mysql://db.example.com/foo --table bar \
--export-dir /results/bar_data --validate
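The three validation classes can also be chosen per run; the sketch below swaps in AbortOnFailureHandler (shipped alongside LogOnFailureHandler) so a failed validation aborts the job:
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
--validate --validator org.apache.sqoop.validation.RowCountValidator \
--validation-threshold org.apache.sqoop.validation.AbsoluteValidationThreshold \
--validation-failurehandler org.apache.sqoop.validation.AbortOnFailureHandler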
28. sqoop job: save frequently used jobs so they can be re-run quickly
--create  define and save a new job
--delete  delete a saved job
--exec    run a saved job
--show    show the parameters of a saved job
--list    list all saved jobs
29. Examples
# Create a job
$ sqoop job --create myjob -- import --connect jdbc:mysql://example.com/db \
--table mytable
# List all saved jobs
$ sqoop job --list
# Show a job's saved configuration
$ sqoop job --show myjob
Job: myjob
Tool: import
Options:
direct.import = false
codegen.input.delimiters.record = 0
hdfs.append.dir = false
db.table = mytable
…
# Run the job
$ sqoop job --exec myjob
10/08/19 13:08:45 INFO tool.CodeGenTool: Beginning code generation
…
# Override saved parameters at execution time
$ sqoop job --exec myjob -- --username someuser -P
Enter password:
…
30. Other commonly used tools
sqoop-metastore
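sqoop-metastore hosts a shared HSQLDB job repository so saved jobs can be used from several machines; a sketch (the host name is hypothetical; 16000 is the conventional port):
# Start the shared metastore
$ sqoop metastore
# Save a job into it from another machine
$ sqoop job --create visits \
--meta-connect jdbc:hsqldb:hsql://metastore.example.com:16000/sqoop \
-- import --table visits ...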
sqoop-merge
# Merge two datasets, preferring the newer rows
$ sqoop merge --new-data newer --onto older --target-dir merged \
--jar-file datatypes.jar --class-name Foo --merge-key id
sqoop-codegen
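A sketch (reusing the corp database from the examples above): regenerate the Java record class for a table without running an import:
$ sqoop codegen --connect jdbc:mysql://db.example.com/corp \
--table employees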
sqoop-create-hive-table
# Create a Hive table named emps with the same schema as employees
$ sqoop create-hive-table --connect jdbc:mysql://db.example.com/corp \
--table employees --hive-table emps
sqoop-eval
# Select 10 rows of data
$ sqoop eval --connect jdbc:mysql://db.example.com/corp \
--query "SELECT * FROM employees LIMIT 10"
# Insert a row into the foo table
$ sqoop eval --connect jdbc:mysql://db.example.com/corp \
-e "INSERT INTO foo VALUES(42, 'bar')"
sqoop-list-databases
$ sqoop list-databases --connect jdbc:mysql://database.example.com/
information_schema
employees
sqoop-list-tables
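A companion sketch to list-databases (the corp database name is illustrative):
$ sqoop list-tables --connect jdbc:mysql://database.example.com/corp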
The rest is an appendix, where I have collected some notes from the sections above.
Main arguments for import:
--connect <jdbc-uri>            JDBC connect string
--connection-manager <class>    Connection manager class to use
--driver <class>                JDBC driver class to use
--hadoop-mapred-home <dir>      Override $HADOOP_MAPRED_HOME
--help                          Print usage instructions
-P                              Read the password from the console
--password <password>           Authentication password
--username <username>           Authentication username
--verbose                       Print more information while working
--connection-param-file <file>  Optional properties file with extra JDBC connection parameters
Import control arguments:
--append                      Append data to an existing dataset in HDFS
--as-avrodatafile             Import data as Avro data files
--as-sequencefile             Import data as SequenceFiles
--as-textfile                 Import data as plain text (default)
--boundary-query <statement>  Boundary query used for creating splits
--columns <col,col,…>         Columns to import from the table
--direct                      Use the direct (fast) import path
--direct-split-size <n>       Split the input stream every n bytes in direct mode
--fetch-size <n>              Number of rows to read from the database at once
--inline-lob-limit <n>        Maximum size of an inline LOB
-m,--num-mappers <n>          Use n map tasks to import in parallel
-e,--query <statement>        Import the results of the statement
--split-by <column-name>      Column used to split work units
--table <table-name>          Table to read
--target-dir <dir>            HDFS destination directory
--warehouse-dir <dir>         HDFS parent directory for the table destination
--where <clause>              WHERE clause to use during import
-z,--compress                 Enable compression
--compression-codec <codec>   Hadoop compression codec to use (default gzip)
--null-string <s>             String to write for a NULL value in a string column
--null-non-string <s>         String to write for a NULL value in a non-string column
Main arguments for export:
--direct            Use the direct (fast) export path
--export-dir <dir>  HDFS source directory for the export
Delimiter and escape arguments:
--enclosed-by <char>             Required field-enclosing character
--escaped-by <char>              Escape character
--fields-terminated-by <char>    Field separator character
--lines-terminated-by <char>     End-of-line character
--mysql-delimiters               Use MySQL's default delimiters: fields , lines \n escaped-by \ optionally-enclosed-by '
--optionally-enclosed-by <char>  Optional field-enclosing character