Importing Data into MaxCompute

Official documentation for importing data with the tunnel command: https://help.aliyun.com/document_detail/27809.html?spm=a2c4g.11186623.6.590.709b5791JljgYi

The detailed process is as follows:

odps@ YITIAN_BJ_MC>tunnel upload /Users/yitian/Documents/MaxCompute/documents/learning-data/banking.txt bank_data;
Upload session: 20191103115707c2dbdb0b16a7c781
Start upload:/Users/yitian/Documents/MaxCompute/documents/learning-data/banking.txt
Using \n to split records
Upload in strict schema mode: true
Total bytes:4841548	 Split input to 1 blocks
2019-11-03 11:57:07	scan block: '1'
2019-11-03 11:57:08	scan block complete, blockid=1
2019-11-03 11:57:08	upload block: '1'
2019-11-03 11:57:13	1:0:4841548:/Users/yitian/Documents/MaxCompute/documents/learning-data/banking.txt	44%	2.1 MB	422.6 KB/s
2019-11-03 11:57:18	1:0:4841548:/Users/yitian/Documents/MaxCompute/documents/learning-data/banking.txt	97%	4.5 MB	458.8 KB/s
2019-11-03 11:57:20	1:0:4841548:/Users/yitian/Documents/MaxCompute/documents/learning-data/banking.txt	100%	4.6 MB	394 KB/s
2019-11-03 11:57:20	upload block complete, blockid=1
upload complete, average speed is 363.7 KB/s
OK
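Before transferring any data, the tunnel client scans the local file: it reports the total byte count, splits the input into blocks, and counts records using `\n` as the record delimiter (see "Total bytes:4841548	 Split input to 1 blocks" above). For a large file it can be worth running a similar pre-check locally before uploading. The sketch below is plain Python, not part of the tunnel client; the file name is a stand-in example, since the original banking.txt is not available here.

```python
import os

def scan_local_file(path, record_delim=b"\n"):
    """Count total bytes and records of a local file, mirroring the
    pre-scan the tunnel client reports before uploading."""
    total_bytes = os.path.getsize(path)
    records = 0
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            records += chunk.count(record_delim)
    return total_bytes, records

# Example with a small stand-in file (hypothetical name, for illustration).
with open("sample.txt", "wb") as f:
    f.write(b"row1\nrow2\nrow3\n")

print(scan_local_file("sample.txt"))  # → (15, 3)
```

Comparing this local record count against the `select count(*)` result after the upload (below) is a quick way to confirm nothing was dropped.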
odps@ YITIAN_BJ_MC>select count(*) from bank_data;

ID = 20191103035736853ggvb472m
Log view:
http://logview.odps.aliyun.com/logview/?h=http://service.cn.maxcompute.aliyun.com/api&p=YITIAN_BJ_MC&i=20191103035736853ggvb472m&token=Mnk4VkVzSnZ4YjRhWVBvcThwUFI3OFJBbjJBPSxPRFBTX09CTzoxNzUzODE0MzI4NjM0Mjk4LDE1NzMzNTgyNTcseyJTdGF0ZW1lbnQiOlt7IkFjdGlvbiI6WyJvZHBzOlJlYWQiXSwiRWZmZWN0IjoiQWxsb3ciLCJSZXNvdXJjZSI6WyJhY3M6b2RwczoqOnByb2plY3RzL3lpdGlhbl9ial9tYy9pbnN0YW5jZXMvMjAxOTExMDMwMzU3MzY4NTNnZ3ZiNDcybSJdfV0sIlZlcnNpb24iOiIxIn0=
Job Queueing.
Summary:
resource cost: cpu 0.00 Core * Min, memory 0.00 GB * Min
inputs:
	yitian_bj_mc.bank_data: 41188 (454112 bytes)
outputs:
Job run time: 0.000
Job run mode: service job
Job run engine: execution engine
M1:
	instance count: 1
	run time: 0.000
	instance time:
		min: 0.000, max: 0.000, avg: 0.000
	input records:
		TableScan1: 41188  (min: 41188, max: 41188, avg: 41188)
	output records:
		StreamLineWrite1: 1  (min: 1, max: 1, avg: 1)
	metrics_output_count:
		HashAgg1: 1  (min: 1, max: 1, avg: 1)
		StreamLineWrite1: 1  (min: 1, max: 1, avg: 1)
		TableScan1: 41188  (min: 41188, max: 41188, avg: 41188)
	metrics_inner_time_ms:
		HashAgg1: 0  (min: 0, max: 0, avg: 0)	MaxInstance: 0
		StreamLineWrite1: 4  (min: 4, max: 4, avg: 4)	MaxInstance: 0
		TableScan1: 0  (min: 0, max: 0, avg: 0)	MaxInstance: 0
R2_1:
	instance count: 1
	run time: 0.000
	instance time:
		min: 0.000, max: 0.000, avg: 0.000
	input records:
		StreamLineRead1: 1  (min: 1, max: 1, avg: 1)
	output records:
		AdhocSink1: 1  (min: 1, max: 1, avg: 1)
	metrics_output_count:
		AdhocSink1: 1  (min: 1, max: 1, avg: 1)
		Project1: 1  (min: 1, max: 1, avg: 1)
		SortedAgg1: 1  (min: 1, max: 1, avg: 1)
		StreamLineRead1: 1  (min: 1, max: 1, avg: 1)
	metrics_inner_time_ms:
		AdhocSink1: 21  (min: 21, max: 21, avg: 21)	MaxInstance: 0
		Project1: 0  (min: 0, max: 0, avg: 0)	MaxInstance: 0
		SortedAgg1: 0  (min: 0, max: 0, avg: 0)	MaxInstance: 0
		StreamLineRead1: 51  (min: 51, max: 51, avg: 51)	MaxInstance: 0


+------------+
| _c0        |
+------------+
| 41188      |
+------------+
1 records (at most 10000 supported) fetched by instance tunnel.

When importing data visually with MaxCompute Studio, it is easy to run into the following problem:

[Figure 1: screenshot of the MaxCompute Studio data import dialog]

If the limit field in this dialog is left unchanged, only 100 rows end up being imported, so be sure to adjust it.
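For larger files, importing through the tunnel CLI (as in the session above) sidesteps the Studio dialog and its limit field entirely. A sketch of a fuller upload command is below; the option names follow the tunnel documentation linked above, but the delimiter, thread count, and bad-record setting are example values, not taken from this session:

```shell
# Usage sketch only; flag values are illustrative assumptions.
# -fd      field delimiter of the source file
# -threads number of parallel upload threads
# -dbr     whether to silently discard bad records (false = fail loudly)
tunnel upload /Users/yitian/Documents/MaxCompute/documents/learning-data/banking.txt bank_data \
  -fd "," \
  -threads 4 \
  -dbr false;
```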
