Migrating Between Non-Identical GP Clusters

GP migration

Old GP: single host, 1 master + 4 primary segments
New GP: 1+2 cluster, 4 primaries + 4 mirrors
For convenience, connect to the servers over n2n:

edge -d node0 -a 168.168.1.49 -c lzsj -k lzsj1201 -r -l *.*.*.*:886 -s 255.255.0.0 -f
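
Once the edge is up, a quick reachability check over the tunnel (a minimal sketch; 168.168.1.49 is the virtual IP assigned above):

ping -c 3 168.168.1.49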

Method 1: GP backup and restore

pg_dump and pg_restore

Dumping to SQL is too slow (a full-database dump did not finish after running overnight; the docs recommend pg_dump mainly for DDL, not for data).
It is still fine for backing up and restoring the Kettle repository.

# Back up with pg_dump (-s dumps schema/DDL only; note -F c writes the custom
# binary format that only pg_restore understands, while -F p writes plain SQL
# that psql can replay)
/usr/local/greenplum/bin/pg_dump -F p -h 127.0.0.1 -p 2345 -U gpadmin -d bigdata -n public -s > /home/gpdata/kettle.sql
# Transfer to the new GP
scp /home/gpdata/kettle.sql [email protected]:/data/gpadmin/kettle.sql
# Restore with pg_restore (custom-format dumps only)

pg_restore -d bigdata /home/gpdata/kettle.sql
# Or run the SQL file with psql (recommended; requires a plain-format dump)
su gpadmin
psql -d bigdata -f /home/gpdata/kettle.sql
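
A quick sanity check after the restore (a hedged sketch: list the schemas and public-schema tables that came back):

psql -d bigdata -c '\dn'
psql -d bigdata -c '\dt public.*'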

# For reference, the gptransfer invocation (see Method 2 for why it could not be used):
/usr/local/greenplum-db-5.0.0/bin/gptransfer -t bigdata.edw.dim_date --source-map-file=seg_hosts_file -a --dest-host=172.18.1.49 --dest-database=bigdata --dest-user=gpadmin --batch-size=2 --truncate > gptransfer.log

Method 2: external tables with gpfdist

The original tool for this was gptransfer, but GP5 dropped it. Its approach:

  1. The old GP exports through a writable external table (export)
  2. scp the files to the new GP (transfer)
  3. The new GP loads them through a readable external table (import)

Following the same principle, the steps below were worked out, with one optimization: the scp step is removed entirely.

1 Start gpfdist

Old GP, 172.18.1.20:

su gpadmin
mkdir /data/gpadmin/gpfdist
nohup /usr/local/greenplum-db-5.8.0/bin/gpfdist -d /data/gpadmin/gpfdist -p 9999 > /tmp/gpfdist.log &

Start gpfdist on both data nodes of the new GP; writing to two targets cuts the data each host transfers, and the parallel writes speed up the load.
172.18.1.50:

su gpadmin
mkdir /data/gpadmin/gpfdist
nohup /usr/local/greenplum-db-5.8.0/bin/gpfdist -d /data/gpadmin/gpfdist -p 9999 > /tmp/gpfdist.log &
# Check that the process is running
ps -fe|grep gpfdist

172.18.1.51:

su gpadmin
mkdir /data/gpadmin/gpfdist
nohup /usr/local/greenplum-db-5.8.0/bin/gpfdist -d /data/gpadmin/gpfdist -p 9999 > /tmp/gpfdist.log &
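
Before creating the external table, confirm both gpfdist ports answer from the old GP master (a sketch; any HTTP status, even an error, proves the port is reachable):

curl -s -o /dev/null -w '%{http_code}\n' http://172.18.1.50:9999/
curl -s -o /dev/null -w '%{http_code}\n' http://172.18.1.51:9999/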

2 Create a writable external table on the old GP and write into it

Written from 172.18.1.20 to .50 and .51:

create writable external table gpfdist.dim_date (
    date_date date,
    date_string varchar(10),
    year_id int,
    mon_id int,
    day_id int,
    week_id int,
    quarter_id int
)
location (
'gpfdist://172.18.1.50:9999/dim_date.txt',
'gpfdist://172.18.1.51:9999/dim_date.txt'
)
Format 'TEXT' (delimiter as E'|' null as '' escape 'OFF')
;
insert into  gpfdist.dim_date
select * from edw.dim_date
;
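
After the INSERT, the segments spread their output across the two gpfdist URLs, so the line counts of the two files should add up to the source row count (a quick check):

# on 172.18.1.50 and 172.18.1.51
wc -l /data/gpadmin/gpfdist/dim_date.txt
# on the old GP master
psql -d bigdata -c 'select count(*) from edw.dim_date;'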

3 Create a readable external table on the new GP and load from it

create external table gpfdist.dim_date (--drop external table gpfdist.dim_date
    date_date date,
    date_string varchar(10),
    year_id int,
    mon_id int,
    day_id int,
    week_id int,
    quarter_id int
)
location (
'gpfdist://172.18.1.50:9999/dim_date.txt',
'gpfdist://172.18.1.51:9999/dim_date.txt'
)
Format 'TEXT' (delimiter as E'|' null as '' escape 'OFF')
LOG ERRORS SEGMENT REJECT LIMIT 2 rows
;
insert into edw.dim_date
select * from gpfdist.dim_date
;
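
Row counts on both clusters should match once the load finishes (a sketch, run as gpadmin against each master):

# on the old GP (172.18.1.20) and the new GP (172.18.1.49)
psql -d bigdata -c 'select count(*) from edw.dim_date;'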

Optimization (compression and troubleshooting)

Exporting a compressed table to plain text balloons the data volume 4-10x.

First tried exporting to a local txt and compressing afterwards; too slow:

  1. Old GP exports to a local txt (writable external table; see the sketch after this list)
  2. Compress:
    time gzip -c t_im_inventorybalance.txt > t_im_inventorybalance.txt.gz
    Decompress: gunzip t_im_inventorybalance.txt.gz
  3. New GP imports the compressed file
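
A minimal sketch of step 1, assuming the gpfdist instance on the old GP is still running and that edw.t_im_inventorybalance is the source table (the LIKE clause clones its column list):

create writable external table gpfdist.t_im_inventorybalance_out (like edw.t_im_inventorybalance)
location ('gpfdist://172.18.1.20:9999/t_im_inventorybalance.txt')
format 'TEXT' (delimiter as E'|' null as '' escape 'OFF');
insert into gpfdist.t_im_inventorybalance_out select * from edw.t_im_inventorybalance;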

Export directly in compressed form:
COPY ... TO PROGRAM can pipe the export straight into gzip.
The feature only arrived in 5.1+, so the old GP was upgraded from 5.0 to 5.8 along the way; see: https://www.jianshu.com/p/8398625f9f91

copy edw.dim_date to program 'gzip > /home/gpdata/gpfdist/dim_date.gz'
delimiter as E'|' null as '' escape 'OFF';
Or export in CSV format (for the 'extra data' / 'missing data' errors):
copy edw.dim_date to program 'gzip > /home/gpdata/gpfdist/dim_date.gz'
with csv;
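
Optionally verify the archive before pointing the new GP at it (a sketch):

gzip -t /home/gpdata/gpfdist/dim_date.gz && echo OK
zcat /home/gpdata/gpfdist/dim_date.gz | head -n 3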

Load the data through an external table (gpfdist decompresses .gz files transparently)

create external table gpfdist.dim_date (--drop external table gpfdist.dim_date
    date_date date,
    date_string varchar(10),
    year_id int,
    mon_id int,
    day_id int,
    week_id int,
    quarter_id int
)
location (
    'gpfdist://172.18.1.20:9999/dim_date.gz'
)
Format 'TEXT' (delimiter as E'|' null as '' escape 'OFF')
LOG ERRORS SEGMENT REJECT LIMIT 2 rows
;
-- For CSV input, change the definition accordingly
/*
location (
    'gpfdist://172.18.1.20:9999/dim_date.gz'
)
Format 'CSV' 
LOG ERRORS SEGMENT REJECT LIMIT 2 rows
;
*/
insert into edw.dim_date
select * from gpfdist.dim_date
;
-- Inspect gpfdist load errors (the error log belongs to the external table declared with LOG ERRORS)
SELECT gp_read_error_log('gpfdist.dim_date');
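
To inspect the rejected rows in detail and then reset the log, something along these lines (gp_truncate_error_log is the companion cleanup function and takes the same table name):

SELECT relname, errmsg, rawdata
FROM gp_read_error_log('gpfdist.dim_date')
LIMIT 10;
SELECT gp_truncate_error_log('gpfdist.dim_date');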

COPY straight to the new GP (NFS approach, to be fleshed out)

The /home/gpdata/gpfdist path needs to resolve to a directory on the remote server (an NFS mount? see the sketch below); the COPY commands themselves are unchanged:
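
One way to wire this up (a hedged sketch, untested here: export the gpfdist directory on a new-GP node over NFS and mount it on the old GP, so COPY ... TO PROGRAM writes straight to the remote disk):

# on the new GP node (e.g. 172.18.1.50), export the directory
echo '/data/gpadmin/gpfdist 172.18.1.20(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
# on the old GP (172.18.1.20), mount it over the local path used by COPY
mount -t nfs 172.18.1.50:/data/gpadmin/gpfdist /home/gpdata/gpfdist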

copy edw.dim_date to program 'gzip > /home/gpdata/gpfdist/dim_date.gz'
delimiter as E'|' null as '' escape 'OFF';
Or export in CSV format (for the 'extra data' / 'missing data' errors):
copy edw.dim_date to program 'gzip > /home/gpdata/gpfdist/dim_date.gz'
with csv;

Copy the data files directly to an identically configured cluster, then expand it (to be tested)

Wrap the procedure up as a reusable component (to be developed)

Pitfalls encountered

  1. Stored-procedure scripts from pg_dump would not run
    The $$ quote markers were fused to the neighbouring text with no space or newline; batch-replacing them with " $$ " (spaces on both sides) fixes it (see the sed sketch after this list)
  2. pg_restore failed with errors
    Workaround: run the SQL script file with psql instead
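
A batch fix along these lines (a sketch; -i.bak keeps a backup, and the pattern assumes every bare $$ in the dump is a procedure-body quote):

sed -i.bak 's/\$\$/ $$ /g' kettle.sql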
