Sqoop export from Hive to Oracle throws an error

Create a shell script, hive2oracle.sh:

#!/bin/bash
sqoop export \
  --connect jdbc:oracle:thin:@//10.10.10.10:1521/DB \
  --username user \
  --password 123456 \
  --table DB.TT_REPAIR_PART \
  -m 4 \
  --input-fields-terminated-by '\t' \
  --export-dir /user/hive/dws_tt_repair_part \
  --input-null-string '\\N' \
  --input-null-non-string '\\N' \
  --columns=RSSC_NAME,RSSC_CODE,SST_NAME,SST_CODE,PROVINCE_ID,PROVINCE_NAME,CITY_ID,CITY_NAME,BILL_DATE,CALL_AMOUNT,TECH_AMOUNT
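For the export to work, --export-dir, the field delimiter, and the NULL tokens must match how the Hive table is actually stored: the delimiter passed via --input-fields-terminated-by must equal the table's field delimiter (Hive's default is \001, not \t), and Hive represents NULL in text files as \N, which is what the two --input-null-* options account for. A quick way to confirm these settings (a hypothetical check, assuming the source Hive table is named dws_tt_repair_part, inferred from the export dir):

-- Run in Hive; the table name is an assumption:
DESCRIBE FORMATTED dws_tt_repair_part;
-- Location:    the HDFS path to pass as --export-dir
-- field.delim: must match --input-fields-terminated-by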

Create a new workflow.
Running the script throws an error:

Caused by: java.io.IOException: java.sql.SQLException: ORA-12899: value too large for column "DACU"."TT_REPAIR_PART"."SST_NAME" (actual: 51, maximum: 50)
	at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:233)
	at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:46)
	at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:664)
	at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
	at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
	at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:90)
	... 10 more

The log shows that a value in the SST_NAME field exceeds the column's length limit (actual: 51, maximum: 50).
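Before widening the column, it can help to locate the offending rows in Hive. A minimal sketch, assuming the source table is named dws_tt_repair_part (inferred from the export dir). One caveat: Hive's length() counts characters, while Oracle's VARCHAR2(50) defaults to byte semantics, so multibyte names can trigger ORA-12899 even at 50 characters or fewer.

-- Hypothetical first-pass check for over-long values:
SELECT sst_name, length(sst_name) AS char_len
FROM dws_tt_repair_part
WHERE length(sst_name) > 50;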
So increase the column lengths in the Oracle table:

CREATE TABLE TT_REPAIR_PART
(
RSSC_NAME VARCHAR2(100), 
RSSC_CODE VARCHAR2(10), 
SST_NAME VARCHAR2(100), 
SST_CODE VARCHAR2(8), 
PROVINCE_ID VARCHAR2(10), 
PROVINCE_NAME VARCHAR2(50), 
CITY_ID VARCHAR2(10), 
CITY_NAME VARCHAR2(50), 
BILL_DATE VARCHAR2(8), 
CALL_AMOUNT NUMBER, 
TECH_AMOUNT NUMBER
);
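Recreating the table is only necessary if it is empty or disposable. If TT_REPAIR_PART already holds data, widening just the offending column in place should be enough (widening a VARCHAR2 column is a metadata-only change in Oracle):

-- In-place alternative: widen only SST_NAME.
ALTER TABLE TT_REPAIR_PART MODIFY (SST_NAME VARCHAR2(100));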

Re-running the script succeeds.
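As a final sanity check, it may be worth comparing row counts on both sides (the Hive table name is again an assumption):

-- In Hive:
SELECT COUNT(*) FROM dws_tt_repair_part;
-- In Oracle:
SELECT COUNT(*) FROM TT_REPAIR_PART;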
