DataX Usage Notes

DataX is an offline data synchronization tool/platform widely used within Alibaba Group. It implements efficient data synchronization between all kinds of heterogeneous data sources, including MySQL, SQL Server, Oracle, PostgreSQL, HDFS, Hive, HBase, OTS, and ODPS.

00 Features

As a data synchronization framework, DataX abstracts synchronization into Reader plugins, which read data from a source data store, and Writer plugins, which write data to a target. In principle, the framework can therefore synchronize between arbitrary types of data sources. The plugin system also forms an ecosystem: every newly integrated data source immediately becomes interoperable with all existing ones.
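
This abstraction is visible directly in a job's configuration file: every job pairs one Reader with one Writer. A minimal skeleton is sketched below (the plugin names are real DataX plugins, the empty parameter blocks are placeholders; a full working example follows in section 03):

{
    "job": {
        "setting": {
            "speed": { "channel": 1 }
        },
        "content": [
            {
                "reader": { "name": "mysqlreader", "parameter": { } },
                "writer": { "name": "hdfswriter", "parameter": { } }
            }
        ]
    }
}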

01 Download the DataX Source Code

$ git clone https://github.com/alibaba/DataX.git

02 Build from Source

$ cd /home/hadoop/source-code/DataX/
$ mvn -U clean package assembly:assembly -Dmaven.test.skip=true

Build succeeded:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 05:17 min
[INFO] Finished at: 2019-06-29T10:26:35+08:00
[INFO] Final Memory: 504M/2052M
[INFO] ------------------------------------------------------------------------
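
After a successful build, the assembled package lands under target/datax/datax in the source tree (this is the layout described in the DataX README; it may differ between versions). The distribution ships with a stream-to-stream self-test job, which is a quick way to verify the build:

$ cd /home/hadoop/source-code/DataX/target/datax/datax/
$ python bin/datax.py job/job.json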

Problems encountered during the build

Problem 1:
[ERROR] Failed to execute goal on project odpsreader: Could not resolve dependencies for project com.alibaba.datax:odpsreader:jar:0.0.1-SNAPSHOT: Could not find artifact com.alibaba.external:bouncycastle.provider:jar:1.38-jdk15 in nexus-aliyun (http://maven.aliyun.com/nexus/content/repositories/central) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :odpsreader

Fix: in the odpsreader module, change the version of the odps-sdk-core dependency from 0.19.3-public to 0.20.7-public:

		<dependency>
			<groupId>com.aliyun.odps</groupId>
			<artifactId>odps-sdk-core</artifactId>
			<!-- was 0.19.3-public -->
			<version>0.20.7-public</version>
		</dependency>
Problem 2:
[ERROR] Failed to execute goal on project otsstreamreader: Could not resolve dependencies for project com.alibaba.datax:otsstreamreader:jar:0.0.1-SNAPSHOT: Could not find artifact com.aliyun.openservices:tablestore-streamclient:jar:1.0.0-SNAPSHOT -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :otsstreamreader

Fix: in the otsstreamreader module, change the version of the tablestore-streamclient dependency from 1.0.0-SNAPSHOT to 1.0.0:

        <dependency>
            <groupId>com.aliyun.openservices</groupId>
            <artifactId>tablestore-streamclient</artifactId>
            <!-- was 1.0.0-SNAPSHOT -->
            <version>1.0.0</version>
        </dependency>
Problem 3:
[ERROR] Failed to execute goal on project odpswriter: Could not resolve dependencies for project com.alibaba.datax:odpswriter:jar:0.0.1-SNAPSHOT: Could not find artifact com.alibaba.external:bouncycastle.provider:jar:1.38-jdk15 in nexus-aliyun (http://maven.aliyun.com/nexus/content/repositories/central) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :odpswriter

Fix: in the odpswriter module, change the version of the odps-sdk-core dependency from 0.19.3-public to 0.20.7-public:

		<dependency>
			<groupId>com.aliyun.odps</groupId>
			<artifactId>odps-sdk-core</artifactId>
			<!-- was 0.19.3-public -->
			<version>0.20.7-public</version>
		</dependency>
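
After correcting a module there is no need to restart the whole build: as the Maven output above suggests, the build can be resumed from the failed module with -rf, for example:

$ mvn -U clean package assembly:assembly -Dmaven.test.skip=true -rf :odpswriter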

03 Test

Use case: use DataX to sync data from MySQL to HDFS.
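
The original notes do not show the source table. Judging from the reader's querySql and the writer's column list below, the MySQL table presumably looks something like this (a hypothetical reconstruction; the VARCHAR length is a guess):

CREATE TABLE tc_biz_vertical_test_0000 (
    biz_order_id  BIGINT,
    key_value     VARCHAR(255),
    gmt_create    DATE,
    gmt_modified  DATE,
    attribute_cc  INT,
    value_type    INT,
    buyer_id      BIGINT,
    seller_id     BIGINT
);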

Step 1: Create the job configuration file
{
    "job": {
        "setting": {
            "speed": {
                 "channel": 3
            },
            "errorLimit": {
                "record": 0,
                "percentage": 0.02
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "root",
                        "splitPk": "",
                        "connection": [
                            {
                                "querySql": [
                                    "select biz_order_id,key_value,gmt_create,gmt_modified,attribute_cc,value_type,buyer_id,seller_id from tc_biz_vertical_test_0000;"
                                ],
                                "jdbcUrl": [
     "jdbc:mysql://192.168.133.186:3306/db_datax"
                                ]
                            }
                        ]
                    }
                },
               "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "defaultFS": "hdfs://cluster1",
                        "fileType": "orc",
                        "path": "/user/hive/warehouse/hdfswriter.db/tc_biz_vertical_test_0000",
                        "fileName": "xxxx",
                        "column": [
                            {
                                "name": "biz_order_id",
                                "type": "BIGINT"
                            },
                            {
                                "name": "key_value",
                                "type": "VARCHAR"
                            },
                            {
                                "name": "gmt_create",
                                "type": "DATE"
                            },
                            {
                                "name": "gmt_modified",
                                "type": "DATE"
                            },
                            {
                                "name": "attribute_cc",
                                "type": "INT"
                            },
                            {
                                "name": "value_type",
                                "type": "INT"
                            },
                            {
                                "name": "buyer_id",
                                "type": "BIGINT"
                            },
                            {
                                "name": "seller_id",
                                "type": "BIGINT"
                            }
                        ],
                        "writeMode": "append",
                        "fieldDelimiter": "\t",
                        "compress":"NONE",
                        "hadoopConfig":{
                            "dfs.nameservices": "cluster1",
                            "dfs.ha.namenodes.cluster1": "master,slave",
                            "dfs.namenode.rpc-address.cluster1.master": "master:9001",
                            "dfs.namenode.rpc-address.cluster1.slave": "slave:9001",
                            "dfs.client.failover.proxy.provider.cluster1": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
                        }
                    }
                }
            }
        ]
    }
}
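
Note that hdfswriter only writes files into the target directory; the Hive table queried in the verification step must already exist. Its DDL is not part of the original notes, but to match the writer's column list, the orc fileType, and the warehouse path above, it would look roughly like this (hypothetical):

CREATE DATABASE IF NOT EXISTS hdfswriter;
CREATE TABLE hdfswriter.tc_biz_vertical_test_0000 (
    biz_order_id  BIGINT,
    key_value     VARCHAR(255),
    gmt_create    DATE,
    gmt_modified  DATE,
    attribute_cc  INT,
    value_type    INT,
    buyer_id      BIGINT,
    seller_id     BIGINT
)
STORED AS ORC;
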
Step 2: Run DataX
$ python datax.py /home/hadoop/Documents/datax/json/mysql2hive.json
Test results:
2019-06-29 19:23:37.709 [job-0] INFO  JobContainer -
Job start time                  : 2019-06-29 19:23:25
Job end time                    : 2019-06-29 19:23:37
Total elapsed time              :                 11s
Average throughput              :               29B/s
Record write speed              :              0rec/s
Total records read              :                   4
Total read/write failures       :                   0
Verify the data:
hive (hdfswriter)> select * from tc_biz_vertical_test_0000;
OK
tc_biz_vertical_test_0000.biz_order_id	tc_biz_vertical_test_0000.key_value	tc_biz_vertical_test_0000.gmt_create	tc_biz_vertical_test_0000.gmt_modified	tc_biz_vertical_test_0000.attribute_cc	tc_biz_vertical_test_0000.value_type	tc_biz_vertical_test_0000.buyer_id	tc_biz_vertical_test_0000.seller_id
666666666	;orderIds:20148888888,2014888888813800;	2011-09-24	2011-10-24	1	3	8888888	1
666666666	;orderIds:20148888888,2014888888813800;	2011-09-24	2011-10-24	1	4	8888888	1
888888888	;orderIds:20148888888,2014888888813800;	2011-09-24	2011-10-24	1	3	8888888	1
888888888	;orderIds:20148888888,2014888888813800;	2011-09-24	2011-10-24	1	4	8888888	1
Time taken: 0.132 seconds, Fetched: 4 row(s)
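
The ORC files written by DataX can also be checked directly on HDFS, using the path from the writer configuration:

$ hdfs dfs -ls /user/hive/warehouse/hdfswriter.db/tc_biz_vertical_test_0000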
