Loading external data into Greenplum database tables using different methods...
Greenplum provides the conventional COPY loading method and the distributed, parallel gpfdist loading method. COPY is suited to loading small amounts of data, while gpfdist is suited to large volumes. Both loading methods are discussed below.
gp_sydb=# select current_database(),current_user,current_schema(),session_user,current_timestamp,version();
current_database | gp_sydb
current_user | gpadmin
current_schema | faa
session_user | gpadmin
now | 2017-06-04 15:33:01.000678+08
version | PostgreSQL 8.3.23 (Greenplum Database 5.0.0-alpha.5 build commit:2e87c5aa435c779b2f3837fa8c7273876497f6ba) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0 compiled on May 19 2017 18:14:12
1 Loading data with COPY
Loading an external file with COPY lets you specify the file type, file format, and error logging; Greenplum parses the file automatically and loads the data into the target table. This is more efficient than plain INSERT statements, but it is not parallel, so it is suited to small data volumes. Take the following two-column CSV data as an example (the CSV data and table definitions are available from gpdb-sandbox-tutorials);
[gpadmin@gp-master faa]$ more L_AIRLINE_ID.csv
Code,Description
"19031","Mackey International Inc.: MAC"
"19032","Munz Northern Airlines Inc.: XY"
"19033","Cochise Airlines Inc.: COC"
"19034","Golden Gate Airlines Inc.: GSA"
"19035","Aeromech Inc.: RZZ"
"19036","Golden West Airlines Co.: GLW"
"19037","Puerto Rico Intl Airlines: PRN"
"19038","Air America Inc.: STZ"
"19039","Swift Aire Lines Inc.: SWT"
"19040","American Central Airlines: TSF"
"19041","Valdez Airlines: VEZ"
"19042","Southeast Alaska Airlines: WEB"
"19043","Altair Airlines Inc.: AAR"
"19044","Chitina Air Service: CHI"
"19045","Marco Island Airways Inc.: MRC"
"19046","Caribbean Air Services Inc.: OHZ"
"19047","Sundance Airlines: PRO"
"19048","Seair Alaska Airlines Inc.: SAI"
Create a distributed table with the same structure in the database;
gp_sydb=# \d faa.d_airlines
Table "faa.d_airlines"
Column | Type | Modifiers
--------------+---------+-----------
airlineid | integer |
airline_desc | text |
Distributed by: (airlineid)
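For reference, a table matching this definition can be created with a statement like the following (a sketch reconstructed from the \d output above):
CREATE TABLE faa.d_airlines (
    airlineid    integer,
    airline_desc text
)
DISTRIBUTED BY (airlineid);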
Load the data from the external file into the distributed table;
\COPY faa.d_airlines FROM 'L_AIRLINE_ID.csv' CSV HEADER
LOG ERRORS
SEGMENT REJECT LIMIT 500 ROWS;
LOG ERRORS records rows that cannot be imported in an internal error log; after the load completes, the rejected rows can be retrieved with the following query.
gp_sydb=# SELECT gp_read_error_log('D_AIRLINES');
gp_read_error_log
------------------------------------------------------------------------------------------------------------
("2017-06-04 12:07:13.151372+08",d_airlines,<stdin>,1517,,"unterminated CSV quoted field","""21395,""Virgin
Blue International Airlines t/a V Australia: VA""\r
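If the load needs to be retried, the previously logged error rows can be cleared first with Greenplum's gp_truncate_error_log function, using the same table name as above:
gp_sydb=# SELECT gp_truncate_error_log('faa.d_airlines');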
Importing data this way is very similar to using SQL*Loader in Oracle or LOAD DATA in MySQL, and Greenplum's error monitoring is also fairly complete. For CSV files loaded with the HEADER option, Greenplum treats the first row as the column names; for files in other formats we may need to specify the delimiter and line terminator, and these options follow the same uniform syntax. For example, importing a pipe-delimited, newline-terminated text file;
\COPY country FROM '/data/gpdb/data01.txt'
WITH DELIMITER '|' LOG ERRORS
SEGMENT REJECT LIMIT 10 ROWS;
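Other options can be combined in the same uniform way; for instance, a hypothetical variant of the load above that additionally treats the literal string 'NA' as NULL:
\COPY country FROM '/data/gpdb/data01.txt'
WITH DELIMITER '|' NULL 'NA' LOG ERRORS
SEGMENT REJECT LIMIT 10 ROWS;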
2 Loading data with gpfdist
The gpfdist program runs on the host where the data resides; it distributes the data evenly across all segments and works in parallel. The files can be gzip-compressed files stored in a given format, or uncompressed originals; this method is suited to loading large data volumes. Suppose we want to load the following gzip-compressed CSV data;
[gpadmin@gp-master faa]$ ls -ltr --block-size m otp*.gz
-rwxrwxrwx 1 root root 31M Aug 6 2012 otp200912.gz
-rwxrwxrwx 1 root root 30M Aug 6 2012 otp201001.gz
To load this data into the faa.faa_otp_load table, we first need to start a gpfdist process;
[gpadmin@gp-master faa]$ gpfdist -d /mnt/vbox/greenplum-master/test_data/faa -p 8081 > /tmp/gpfdist.log 2>&1 &
[1] 19533
gpfdist acts like a file server: it needs a port and a file directory. Once gpfdist is running, we create an external table;
CREATE EXTERNAL TABLE faa.ext_load_otp
(LIKE faa.faa_otp_load)
LOCATION ('gpfdist://192.168.56.10:8081/otp*.gz')
FORMAT 'csv' (header)
LOG ERRORS SEGMENT REJECT LIMIT 50000 rows;
Because gpfdist must distribute the data evenly across all segments, the address given in the external table's LOCATION clause must be reachable from every host in the cluster. If it is set to 127.0.0.1, only the local server can reach it; the other segments cannot, and the system reports a connection-refused error.
gp_sydb=# INSERT INTO faa.faa_otp_load SELECT * FROM faa.ext_load_otp;
ERROR: connection with gpfdist failed for gpfdist://localhost:8081/otp*.gz. effective url: http://127.0.0.1:8081/otp*.gz. error code = 111 (Connection refused) (seg2 slice1 192.168.56.12:40002 pid=3546)
Load the data from the external table into the target table;
gp_sydb=# INSERT INTO faa.faa_otp_load SELECT * FROM faa.ext_load_otp;
NOTICE: Found 26526 data formatting errors (26526 or more input rows). Rejected related input data.
INSERT 0 1024552
gp_sydb=# select count(*) from faa.faa_otp_load;
count
---------
1024552
(1 row)
Greenplum also supports querying external files directly through the external table, but the data is not held in the database, so every query re-reads the files and each operation is relatively expensive.
gp_sydb=# select count(*) from faa.ext_load_otp;
NOTICE: Found 26526 data formatting errors (26526 or more input rows). Rejected related input data.
count
---------
1024552
(1 row)
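If the external data will be queried repeatedly, it can be worth materializing it once instead, for example with CREATE TABLE AS (a sketch; faa.otp_materialized is a hypothetical table name):
CREATE TABLE faa.otp_materialized AS
SELECT * FROM faa.ext_load_otp
DISTRIBUTED RANDOMLY;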
Inspect the rows that failed to load (query against the external table);
SELECT gp_read_error_log('faa.ext_load_otp');
Finally, stop the gpfdist process;
[gpadmin@gp-master faa]$ killall gpfdist
3 Loading data with GPLOAD
The gpfdist approach requires us to configure and run each step ourselves; Greenplum provides GPLOAD, a packaged, configuration-driven wrapper tool. First we create the required configuration file gpload01.yaml and the audit table faa.load_audit;
create table faa.load_audit(tname varchar(100),tnode varchar(300),tdate timestamp);
vi gpload01.yaml
---
VERSION: 1.0.0.1
DATABASE: gp_sydb        # database name
USER: gpadmin            # user name
HOST: 192.168.56.10
PORT: 5432
GPLOAD:
   INPUT:
    - SOURCE:
         LOCAL_HOSTNAME:
           - 192.168.56.10
         PORT: 8081
         FILE:            # file locations
           - /mnt/vbox/greenplum-master/test_data/faa/otp*.gz
    - FORMAT: csv
    - QUOTE: '"'
    - ERROR_LIMIT: 50000
    - LOG_ERRORS: true
   OUTPUT:
    - TABLE: faa.faa_otp_load
    - MODE: INSERT
   PRELOAD:
    - REUSE_TABLES: true
   SQL:
    - BEFORE: "INSERT INTO faa.load_audit VALUES('faa.faa_otp_load','start', current_timestamp)"
    - AFTER: "INSERT INTO faa.load_audit VALUES('faa.faa_otp_load','end', current_timestamp)"
Check that the port is not already in use and that the file path is correct; then run the load;
[gpadmin@gp-master faa]$ gpload -f gpload01.yaml -l gpload01.log
2017-06-04 14:05:58|INFO|gpload session started 2017-06-04 14:05:58
2017-06-04 14:05:58|INFO|started gpfdist -p 8081 -P 8082 -f "/mnt/vbox/greenplum-master/test_data/faa/otp*.gz" -t 30
2017-06-04 14:05:58|INFO|did not find an external table to reuse. creating ext_gpload_reusable_e0c13d44_48eb_11e7_868b_0800279a5c02
2017-06-04 14:06:32|WARN|2084 bad rows
2017-06-04 14:06:32|WARN|Please use following query to access the detailed error
2017-06-04 14:06:32|WARN|select * from gp_read_error_log('ext_gpload_reusable_e0c13d44_48eb_11e7_868b_0800279a5c02') where cmdtime = '2017-06-04 14:05:58.88747+08'
2017-06-04 14:06:32|INFO|running time: 34.07 seconds
2017-06-04 14:06:32|INFO|rows Inserted = 1024552
2017-06-04 14:06:32|INFO|rows Updated = 0
2017-06-04 14:06:32|INFO|data formatting errors = 2084
2017-06-04 14:06:32|INFO|gpload succeeded with warnings
The log shows the temporary external table that was created, the counts of inserted, updated, and rejected rows, and the query to use for inspecting the error data.
gp_sydb=# select count(*) from faa.faa_otp_load;
count
---------
1024552
(1 row)
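The BEFORE and AFTER hooks in the YAML file also wrote audit rows into faa.load_audit, so the start and end of the load can be verified with a simple query:
gp_sydb=# SELECT tname, tnode, tdate FROM faa.load_audit ORDER BY tdate;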
GPLOAD offers more support and automation, and it is also easier to customize for particular needs such as monitoring and statistics.