Exporting ORC-format data from Hive to MySQL with Sqoop

First, the command itself. Since I run this from Python, it has to be built as a command-line string and handed off to the scheduler:

import subprocess as commands
command = 'sqoop export ' \
              "--connect  'jdbc:mysql://{host}/{db}?characterEncoding=utf8&autoReconnect=true' " \
              '--username {user} ' \
              '--password {password} ' \
              '--table  opencourse_rpt_live_convert_ha ' \
              '--columns "out_order_id,course_id,customer_id,customer_created_time,user_id,unionid,nickname,mobile,opencourse_pay_time,fl_seller_id,fl_seller_name,department_id,department_name,type,channel_source,channel_name,source_name,opencourse_subject_id,opencourse_subject_name,opencourse_bussiness_id,opencourse_bussiness_name,open_course_date,end_inversion_date,phase_nums,phase_name,new_course_id,new_course_name,is_active,is_half_past_eight_login,is_half_past_nine_login,day1_live_study_time,day1_playback_time,day1_is_live_complete,normal_amount_order_id,normal_course_id,normal_course_name,pay_type,normal_amount,normal_amount_time,normal_subject_id,normal_subject_name,normal_bussiness_id,normal_bussiness_name,normal_amount_seller_id,normal_amount_seller_name,experience_course_cnt,last7days_is_pay_traincamp,etl_process_pt" '\
              '--hcatalog-database dm ' \
              '--hcatalog-table opencourse_rpt_live_convert_ha ' \
              '--hcatalog-partition-keys pt ' \
              '--hcatalog-partition-values {pt} ' \
              "--input-null-string '\\\\N' " \
              "--input-null-non-string '\\\\N' ".format(host=con_config[0], db=con_config[4], user=con_config[1],
                                                        password=con_config[2],
                                                        pt=data_util.get_previous_2hour(day))
print(command)
print('sqoop sync job starting')
echo, logs = commands.getstatusoutput(command)

Here, the subprocess module is what runs the command line. subprocess.getstatusoutput() does the same job as os.system(), except that it also captures the output, which os.system() cannot do (a minimal comparison sketch follows after the option list). In the Sqoop command:

  • Use export to export (Hadoop to MySQL) and import to import
  • --connect: the JDBC connection URL for MySQL
  • --username: user name
  • --password: password
  • --table: the target MySQL table
  • --columns: explicitly specifies which columns to export and their order; for a full-table export it is optional
  • --hcatalog-database: the Hive database name
  • --hcatalog-table: the Hive table name
  • --hcatalog-partition-keys: the partition key(s); separate multiple keys with commas
  • --hcatalog-partition-values: the partition value(s); separate multiple values with commas, and they must correspond one-to-one with the keys above
  • --input-null-string: null substitution for string columns
  • --input-null-non-string: null substitution for non-string columns
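
On the subprocess point above, here is a minimal comparison sketch; the echo command is just a stand-in for the real Sqoop call:

import os
import subprocess

cmd = "echo 'pretend this is sqoop output'"   # stand-in for the real sqoop export command

# os.system() only returns the exit status; the output goes straight to the terminal.
status = os.system(cmd)
print(status)   # 0

# subprocess.getstatusoutput() returns the exit status AND the captured output,
# which is what lets the scheduler log whatever Sqoop printed.
echo, logs = subprocess.getstatusoutput(cmd)
print(echo)     # 0
print(logs)     # pretend this is sqoop output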

The export went through in the end; here are the pitfalls I hit along the way:

First

The command. Most of the examples you find online look basically like this:

sqoop export \
--connect jdbc:mysql://localhost/db \
--username root \
--password 123456 \
--table table123 \
--export-dir /user/foo/joinresults \
--fields-terminated-by '|'   # --export-dir above is the HDFS path being exported

As you can see, this form points Sqoop directly at files on HDFS. For someone like me who cannot touch the production servers directly (and is not on great terms with ops), that is painful: I do not know the file path, I have to specify the field delimiter, and it only works for text-format tables; if the table is ORC or Parquet it simply will not work.
Instead I used the newer hcatalog options. HCatalog support was only added to Sqoop in recent versions (the documentation I found is for 1.4.6). It lets you address the Hive database and table by name, just like MySQL, so the file path and delimiter options are no longer needed, and it can export tables of any storage format.

Second

When testing I wrote ten rows and everything was fine, so I happily assumed I was done. Then I ran it against the real table and it failed immediately:

21/12/14 17:04:13 WARN mapred.YARNRunner: Usage of -Djava.library.path in yarn.app.mapreduce.am.command-opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the app master JVM env using yarn.app.mapreduce.am.env config settings.
21/12/14 17:04:14 INFO impl.YarnClientImpl: Submitted application application_1620452173217_2477593
21/12/14 17:04:14 INFO mapreduce.Job: The url to track the job: http://kkb-hadoop-1:8088/proxy/application_1620452173217_2477593/
21/12/14 17:04:14 INFO mapreduce.Job: Running job: job_1620452173217_2477593
21/12/14 17:05:02 INFO mapreduce.Job: Job job_1620452173217_2477593 running in uber mode : false
21/12/14 17:05:02 INFO mapreduce.Job:  map 0% reduce 0%
21/12/14 17:05:08 INFO mapreduce.Job:  map 100% reduce 0%
21/12/14 17:05:12 INFO mapreduce.Job: Job job_1620452173217_2477593 failed with state FAILED due to: Task failed task_1620452173217_2477593_m_000002
Job failed as tasks failed. failedMaps:1 failedReduces:0

21/12/14 17:05:12 INFO mapreduce.Job: Counters: 13
	Job Counters 
		Failed map tasks=1
		Killed map tasks=3
		Launched map tasks=4
		Data-local map tasks=2
		Rack-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=33022
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=16511
		Total vcore-milliseconds taken by all map tasks=16511
		Total megabyte-milliseconds taken by all map tasks=33814528
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
21/12/14 17:05:12 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
21/12/14 17:05:12 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 290.3572 seconds (0 bytes/sec)
21/12/14 17:05:12 INFO mapreduce.ExportJobBase: Exported 0 records.
21/12/14 17:05:12 ERROR tool.ExportTool: Error during export: 
Export job failed!
	at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:439)
	at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:930)
	at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:92)
	at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:111)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
[2021-12-14 17:05:13,262] {logging_mixin.py:112} INFO - sync failed

Searching online, everything pointed to mismatched column types or names, but after checking carefully they all matched. Looking in another direction, I opened the job-tracking URL from the output above, http://kkb-hadoop-1:8088/proxy/application_1620452173217_2477593/, and found the actual error:
Error: java.io.IOException: java.sql.SQLException: Incorrect string value: '\xF0\x9F\x8D\x8A' for column 'nickname' at row 15 at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:233) at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:46) at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:664) at
Since I do not know Java, another round of searching followed, and the explanation turned out to be that the data contains emoji while the database charset is utf8.
MySQL's utf8 cannot store emoji: an emoji is stored as a 4-byte character, while MySQL's utf8 only handles characters of 1-3 bytes, so the charset has to be utf8mb4. I changed the database, table, and column charsets to utf8mb4 as the online guides suggest, but the error did not go away. Following another article, I also changed the JDBC connection string, and the export then went through: append ?characterEncoding=utf8&autoReconnect=true to the original jdbc:mysql://{host}/{db}. With characterEncoding=utf8 the connection gets recognized as utf8mb4 automatically, and the autoReconnect=true parameter must be added as well.
Problem solved.
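
As a quick sanity check (my own verification, not part of the export job), the byte sequence from the error message really does decode to a four-byte emoji, exactly the kind of character that MySQL's 3-byte utf8 cannot store:

# Byte sequence reported by MySQL in the "Incorrect string value" error.
raw = b'\xF0\x9F\x8D\x8A'

print(raw.decode('utf-8'))   # 🍊  (an emoji character)
print(len(raw))              # 4 -> needs utf8mb4; MySQL's utf8 stores at most 3 bytes per character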

Third

A language issue, and a fairly basic mistake. Since double and single quotes are interchangeable in Python, I first wrote it like this:

'--input-null-string "\\\\N" ' \
'--input-null-non-string "\\\\N" '

But the string ultimately gets executed on the command line, where it ends up as:

--input-null-string "\\N" 

Inside double quotes the shell does not keep the backslash literal (it collapses \\ into \), so what finally reaches Java is just \N, which Java consumes as an escape instead of the two literal characters, and the job fails. In Hive, NULL values are written out as the literal character sequence \N: it is two characters, not whitespace, so that exact sequence has to survive all the way through.
I later changed it to:

"--input-null-string '\\\\N' " \
"--input-null-non-string '\\\\N' "

That fixed it.
There were other small slips too: the first time around I wrote one level of backslashes too few, so Python's own escaping already consumed a layer, the shell consumed another, and what finally reached Java was no longer the literal \N at all, which is obviously wrong.
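
To make the escaping layers concrete, here is a purely illustrative sketch of what each Python literal turns into, i.e. what the shell actually receives:

# The working variant: shell single quotes keep \\N intact, so Sqoop receives the literal \\N.
good = "--input-null-string '\\\\N' "
print(good)   # --input-null-string '\\N'

# The broken variant: shell double quotes collapse \\ into \, so Sqoop only receives \N.
bad = '--input-null-string "\\\\N" '
print(bad)    # --input-null-string "\\N"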
The final command, as it appears in the log, should look like this:

[2021-12-14 17:00:14,717] {logging_mixin.py:112} INFO - sqoop export --connect  'jdbc:mysql://{host}/{db}?characterEncoding=utf8&autoReconnect=true' --username user --password passwd --table  table_name --hcatalog-database database --hcatalog-table table_name  --hcatalog-partition-keys pt --hcatalog-partition-values pt_value --input-null-string '\\N' --input-null-non-string '\\N'

Sync succeeded.
References:
Sqoop import/export between Hive/HDFS and MySQL
Sqoop official documentation
Storing WeChat emoji in MySQL
