When I export data to MySQL using the following Sqoop command:
--options-file sqoop.export.opt --export-dir inok/test/friendrecomend2 --table friend_rec --staging-table friend_rec_stage --clear-staging-table --update-key id --update-mode allowinsert
where the content of sqoop.export.opt looks like this:
export
--verbose
--connect
jdbc:mysql://192.168.122.1:3306/inokdatamine
--username
zhaohj
--password
123456
--direct
-m
1
the following errors come up, one after another:
1.
Staging table cannot be used when export is running in update mode.
Solution: delete the --update-key and --update-mode parameters.
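After this fix, the argument part of the command would look something like this (a sketch based on the command above, keeping the staging options):
--options-file sqoop.export.opt --export-dir inok/test/friendrecomend2 --table friend_rec --staging-table friend_rec_stage --clear-staging-table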
2.
Error during export: The active connection manager (org.apache.sqoop.manager.DirectMySQLManager) does not support staging of data for export.
Solution: delete the --staging-table and --clear-staging-table parameters as well (see the sketch after the note below).
Note
Support for staging data prior to pushing it into the destination table is not available for --direct
exports. It is also not available when export is invoked using the --update-key
option for updating existing data, and when stored procedures are used to insert the data.
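With both fixes applied, the command reduces to something like:
--options-file sqoop.export.opt --export-dir inok/test/friendrecomend2 --table friend_rec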
3.
Cannot run program "mysqlimport": error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041) at java.lang.Runtime.exec(Runtime.java:617)
Solution: Some suggest installing the MySQL client on all nodes, not just some of them. But when I add --driver com.mysql.jdbc.Driver, the error disappears: specifying --driver makes Sqoop fall back to the generic JDBC connection manager instead of DirectMySQLManager, so mysqlimport is never invoked (see the sketch after the note below).
Note
When using export in direct mode with MySQL, the MySQL bulk utility mysqlimport
must be available in the shell path of the task process.
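For reference, the workaround in my case was simply appending the driver option to the command, roughly:
--options-file sqoop.export.opt --export-dir inok/test/friendrecomend2 --table friend_rec --driver com.mysql.jdbc.Driver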
4.
Caused by: java.lang.NumberFormatException: For input string: "1 156 (12: [69" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:492)
I ran a MapReduce job to produce the friendrecomend2 data, using org.apache.hadoop.mapreduce.lib.output.TextOutputFormat as the output format, which writes each record as a key and a value separated by a tab. The content looks like:
2	4 (1: [5])
3	5 (2: [1, 4])
4	2 (1: [5]),6 (1: [5])
5	3 (2: [1, 4]),1 (1: [4])
6	2 (1: [5]),4 (1: [5]),5 (1: [2])
I think the error is due to Sqoop parsing these lines with its default delimiters (comma-separated fields with newline-separated records).
Solution: add the parameter --input-fields-terminated-by \t. (Note: don't put the Sqoop command in the command input area when a parameter value contains a space; use Params instead.)
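So the argument list grows by one parameter, roughly:
--options-file sqoop.export.opt --export-dir inok/test/friendrecomend2 --table friend_rec --driver com.mysql.jdbc.Driver --input-fields-terminated-by \t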
5.
java.io.IOException: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'friends' at row 1 at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:220)
The cause is that the friends column of the friend_rec table in MySQL is too short for the exported values.
Solution: change the friends field length to 30000. This works for the sample data, but in my situation the second field of the files in HDFS may be longer than 30000 characters, so I need to redesign the file output format or the table schema to avoid this error.
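As a quick fix for the sample data, something like the following widens the column (assuming friends is a VARCHAR column; the exact original schema isn't shown here). For values that may exceed 30000 characters, a TEXT or MEDIUMTEXT column would be safer:
mysql -h 192.168.122.1 -u zhaohj -p inokdatamine -e "ALTER TABLE friend_rec MODIFY friends VARCHAR(30000)"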
6.
ERROR org.apache.sqoop.tool.ExportTool - Error during export: Mixed update/insert is not supported against the target database yet
where the Sqoop export action looks like:
192.168.122.1:2015 hdfs://192.168.122.1:2014 --options-file sqoop.export.opt --export-dir /user/zhaohj/inok/friendrec2/data/friendrecomend --table friend_rec --driver com.mysql.jdbc.Driver --input-fields-terminated-by \t --update-key id --update-mode allowinsert /user/zhaohj/inok/friendrec/sqoop.export.opt#sqoop.export.opt
Google results show that this is a bug in Sqoop 1.4.2, but my Sqoop is 1.4.4.
When I delete --driver com.mysql.jdbc.Driver, error No. 6 disappears, but a new one comes:
Error during export: MySQL direct connector does not support upsert mode. Please use JDBC based connector (remove --direct parameter)
Solution: remove --direct (a working command sketch follows the note below). The root reason is that the MySQL direct connector doesn't support upsert mode:
http://mail-archives.apache.org/mod_mbox/sqoop-user/201210.mbox/%3C20121004152956.GT16616@jarcec-thinkpad%3E
Note: Sqoop automatically generates code to parse and interpret records of the files containing the data to be exported back to the database. If these files were created with non-default delimiters (comma-separated fields with newline-separated records), you should specify the same delimiters again so that Sqoop can parse your files.
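Putting it all together, the setup that finally works for me is the options file above with the --direct line removed, no --driver argument, and arguments along these lines:
--options-file sqoop.export.opt --export-dir /user/zhaohj/inok/friendrec2/data/friendrecomend --table friend_rec --input-fields-terminated-by \t --update-key id --update-mode allowinsert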
References
https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/K44cK_ZI_2w