Sqoop Usage

  1. list-databases: list the available databases
    sqoop list-databases --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs

  2. list-tables: list the tables in a given database
    sqoop list-tables --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs
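
    All of the commands in these notes pass the password in plain text on the command line. As a minimal sketch of the alternatives Sqoop supports, -P prompts for the password interactively, and --password-file (Sqoop 1.4.4+) reads it from a file; the path below is a hypothetical example:
    sqoop list-databases --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs -P
    sqoop list-databases --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password-file /user/sqoop/oracle.pwd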

  3. eval: run a SQL statement directly against the relational database. This is useful for checking, before running a tool such as import, that the SQL involved is correct; the result is printed to the console.
    sqoop eval --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs -e "select * from b_code"
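
    For example, eval can sanity-check a row count or a WHERE clause before an import; a minimal sketch against the same table:
    sqoop eval --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs -e "select count(*) from b_code"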

  4. create-hive-table: generate a Hive table whose schema matches that of the relational database table
    sqoop create-hive-table --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs --table B_CODE --hive-table h_b_code (Note: when importing from Oracle, the table name must be written in upper case!)

  5. Import an Oracle table into Hive
    sqoop import --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs --table B_CODE --hive-table h_b_code --hive-import
    Hive arguments (a combined example follows the table):
    Argument    Description
    --hive-home <dir>    Override $HIVE_HOME
    --hive-import    Import tables into Hive (uses Hive's default delimiters if none are set)
    --hive-overwrite    Overwrite existing data in the Hive table
    --create-hive-table    If set, the job will fail if the target Hive table exists. By default this property is false
    --hive-table <table-name>    Sets the table name to use when importing to Hive
    --hive-drop-import-delims    Drops \n, \r, and \01 from string fields when importing to Hive
    --hive-delims-replacement    Replaces \n, \r, and \01 in string fields with a user-defined string when importing to Hive
    --hive-partition-key <partition-key>    Name of the Hive field that the partitions are sharded on
    --hive-partition-value <partition-value>    String value that serves as the partition key for the data imported into Hive in this job
    --map-column-hive <map>    Override the default mapping from SQL type to Hive type for configured columns
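
    A minimal sketch combining several of these options; the partition key/value and the STATUS=String column mapping are illustrative assumptions, not details taken from the original example:
    sqoop import --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs \
      --table B_CODE --hive-import --hive-table h_b_code --hive-overwrite \
      --hive-drop-import-delims \
      --hive-partition-key load_date --hive-partition-value 20150101 \
      --map-column-hive STATUS=String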

  6. List MySQL databases
    sqoop list-databases --connect jdbc:mysql://172.21.80.123:3306/ --username root --password 1234

  7. Import a MySQL table into HBase
    sqoop import --connect jdbc:mysql://172.21.80.123:3306/hadoop --username root --password 1234 --table lsemp --hbase-table lsemp --hbase-create-table --hbase-row-key empno --column-family info

    HBase arguments:

    Argument    Description
    --column-family <family>    Sets the target column family for the import
    --hbase-create-table    If specified, create missing HBase tables
    --hbase-row-key <col>    Specifies which input column to use as the row key. If the input table has a composite key, <col> must be a comma-separated list of the composite key attributes (see the sketch after this table)
    --hbase-table <table-name>    Specifies an HBase table to use as the target instead of HDFS
    --hbase-bulkload    Enables bulk loading
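
    A minimal sketch of the composite-key case described above, assuming (hypothetically, unlike the lsemp example) that the source table's key is (empno, deptno):
    sqoop import --connect jdbc:mysql://172.21.80.123:3306/hadoop --username root --password 1234 \
      --table lsemp --hbase-table lsemp --hbase-create-table \
      --hbase-row-key empno,deptno --column-family info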
MySQL-related commands
1. Run a SQL statement
sqoop eval --connect jdbc:mysql://localhost:3306/hive --username root --password 123456 -e 'show tables'
2. List databases
sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username root --password 123456
3. Import MySQL data into Hive
sqoop import --connect jdbc:mysql://localhost:3306/test --username root --password 123456 --table student --hive-table student --hive-import --hive-database test
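
A minimal variant of the same import, restricted to a subset of rows with --where and run with a single mapper (-m 1, useful when the table has no primary key); the id column is an assumed column of the student table:
sqoop import --connect jdbc:mysql://localhost:3306/test --username root --password 123456 --table student --where 'id > 100' -m 1 --hive-table student --hive-import --hive-database test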
4. Export Hive data to MySQL
sqoop export --connect jdbc:mysql://localhost:3306/test --username root --password 123456 --table word_count --export-dir /user/hive/warehouse/test.db/word_count --input-fields-terminated-by '\001'
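
Export can also update existing rows instead of inserting new ones: with --update-key, Sqoop generates UPDATE statements keyed on the given column. A minimal sketch, assuming word is the key column of word_count:
sqoop export --connect jdbc:mysql://localhost:3306/test --username root --password 123456 --table word_count --export-dir /user/hive/warehouse/test.db/word_count --input-fields-terminated-by '\001' --update-key word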
