Greenplum experiment: dynamically adding a node
1. I had previously initialized a Greenplum cluster on hadoop4-hadoop6.
2. With all primaries and mirrors running, I connected to the master and inserted rows into two catalog tables:
aligputf8=# insert into pg_catalog.pg_filespace_entry values(3052,15,'/home/gpadmin1/cxf/gpdata/gpdb_p1/gp6');
INSERT 0 1
aligputf8=# insert into pg_catalog.pg_filespace_entry values(3052,16,'/home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6');
INSERT 0 1
aligputf8=# insert into gp_segment_configuration values (15,6,'p','p','s','u',40000,'hadoop3','hadoop3',41000,null);
INSERT 0 1
aligputf8=# insert into gp_segment_configuration values (16,6,'m','m','s','u',50000,'hadoop3','hadoop3',51000,null);
INSERT 0 1
These rows register a new node on hadoop3, with both the primary and the mirror on hadoop3,
in /home/gpadmin1/cxf/gpdata/gpdb_p1/gp6 and /home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6 respectively.
3. scp the data directory of the port-40000 primary on hadoop4 and of the port-50000 mirror on hadoop5 over to hadoop3:
[gpadmin1@hadoop4 cxf]$ scp -r gpdata hadoop3:/home/gpadmin1/cxf
[gpadmin1@hadoop5 cxf]$ scp -r mgpdata hadoop3:/home/gpadmin1/cxf
Then rename the directories on hadoop3 so that their names match the pg_filespace_entry rows inserted above.
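The rename step can be sketched on a scratch tree as follows. The pre-rename directory name (gp0 here) is an assumption — on hadoop3 it will be whatever segment directory name the scp from hadoop4/hadoop5 actually produced:

```shell
set -e
# Scratch layout standing in for /home/gpadmin1/cxf on hadoop3:
base=$(mktemp -d)
mkdir -p "$base/gpdata/gpdb_p1/gp0" "$base/mgpdata/gpdb_p1/gp0"

# Rename both copies so the paths match the pg_filespace_entry rows
# inserted for dbid 15 and 16 (.../gpdb_p1/gp6):
mv "$base/gpdata/gpdb_p1/gp0"  "$base/gpdata/gpdb_p1/gp6"
mv "$base/mgpdata/gpdb_p1/gp0" "$base/mgpdata/gpdb_p1/gp6"

ls "$base/gpdata/gpdb_p1" "$base/mgpdata/gpdb_p1"
```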
4. Start the new primary and mirror:
$GPHOME/bin/pg_ctl -D /home/gpadmin1/cxf/gpdata/gpdb_p1/gp6 -l /home/gpadmin1/cxf/gpdata/gpdb_p1/gp6/pg_log/startup.log -o '-i -p 40000 --silent-mode=true -M quiescent -b 15 -C 6 -z 7' start
$GPHOME/bin/pg_ctl -D /home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6 -l /home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6/pg_log/startup.log -o '-i -p 50000 --silent-mode=true -M quiescent -b 16 -C 6 -z 7' start
5. Run gp_primarymirror to put the two new postgres instances into primary/mirror operation (run it against the mirror first, then the primary).
Because the data directories were just copied over from hadoop4 and hadoop5, the two parameter files that gp_primarymirror reads still point at the old hosts and must be edited first:
/home/gpadmin1/cxf/gpdata/gpdb_p1/gp6/gp_pmtransition_args
/home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6/gp_pmtransition_args
[gpadmin1@hadoop3 gpdb_p1]$ cat /home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6/gp_pmtransition_args
mirror
s
hadoop5
51000
hadoop4
41000
40000
Edit it so that both host fields point at hadoop3; after the edit the file reads:
[gpadmin1@hadoop3 gpdb_p1]$ vi /home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6/gp_pmtransition_args
mirror
s
hadoop3
51000
hadoop3
41000
40000
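The edit amounts to replacing the two old host names with hadoop3 while leaving the ports untouched. A sketch with sed on a scratch copy of the file (the real file is the gp_pmtransition_args path shown above):

```shell
set -e
f=$(mktemp)
# Contents as copied from hadoop5 (see the cat output above):
printf '%s\n' mirror s hadoop5 51000 hadoop4 41000 40000 > "$f"

# Point both host fields at the new node, hadoop3:
sed -i 's/^hadoop[45]$/hadoop3/' "$f"
cat "$f"   # -> mirror / s / hadoop3 / 51000 / hadoop3 / 41000 / 40000
```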
Edit the other file (the primary's) the same way, then run gp_primarymirror:
$GPHOME/bin/gp_primarymirror -h 127.0.0.1 -p 50000 -i /home/gpadmin1/cxf/mgpdata/gpdb_p1/gp6/gp_pmtransition_args &
$GPHOME/bin/gp_primarymirror -h 127.0.0.1 -p 40000 -i /home/gpadmin1/cxf/gpdata/gpdb_p1/gp6/gp_pmtransition_args &
Try connecting with psql to check that the new node is healthy:
[gpadmin1@hadoop3 gpdb_p1]$ PGOPTIONS="-c gp_session_role=utility" psql -p 40000
psql (8.2.14)
Type "help" for help.
aligputf8=#
It connects normally, so the node is initialized.
6. Restart the master on hadoop4, changing two flags: "-x 14" becomes "-x 16" and "-z 6" becomes "-z 7" (judging from these values, -x carries the highest dbid and -z the number of segments):
$GPHOME/bin/pg_ctl -w -D /home/gpadmin1/cxf/gpdb_p1/gp-1 -l /home/gpadmin1/cxf/gpdb_p1/gp-1/pg_log/startup.log -o " -E -i -M master -p 5432 -b 1 -x 16 -C -1 -z 7 --silent-mode=true " start
The master starts successfully. Next, connect directly to the new primary on hadoop3 and insert a row:
[gpadmin1@hadoop3 gpdb_p1]$ PGOPTIONS="-c gp_session_role=utility" psql -p 40000
psql (8.2.14)
Type "help" for help.
aligputf8=# insert into cxf values('scutshuxue',7);
INSERT 0 1
aligputf8=#
Then connect to the master on hadoop4 and query for the row just inserted.
(The cxf table was created earlier: create table cxf(a varchar(100),b int) DISTRIBUTED RANDOMLY;)
[gpadmin1@hadoop4 cxf]$ psql
psql (8.2.14)
Type "help" for help.
aligputf8=# select * from cxf;
a | b
------------+---
feng | 3
wang | 5
scutshuxue | 7
xiao | 2
chen | 1
(5 rows)
The row just inserted is found, so the node was added successfully.
7. Shut down Greenplum with gpstop: success.
Start it again with gpstart: the node on hadoop3 comes up cleanly. The addition is complete.
Dynamically removing a node:
aligputf8=# delete from gp_segment_configuration where dbid in (16,15);
DELETE 2
aligputf8=# delete from pg_filespace_entry where fsedbid in (15,16);
DELETE 2
Without restarting the master, queries fail, because the segment count recorded in gp_id no longer matches the remaining catalog rows:
[gpadmin1@hadoop4 cxf]$ psql
psql (8.2.14)
Type "help" for help.
aligputf8=# select * from cxf;
ERROR: Greenplum Database number of segments inconsistency: count is 7 from pg_catalog.gp_id table, but 6 from getCdbComponentDatabases()
aligputf8=#
Restarting the master (with the original "-x 14 -z 6" flags) fixes it:
[gpadmin1@hadoop4 cxf]$ gpstop -m
[gpadmin1@hadoop4 cxf]$ $GPHOME/bin/pg_ctl -w -D /home/gpadmin1/cxf/gpdb_p1/gp-1 -l /home/gpadmin1/cxf/gpdb_p1/gp-1/pg_log/startup.log -o " -E -i -M master -p 5432 -b 1 -x 14 -C -1 -z 6 --silent-mode=true " start
[gpadmin1@hadoop4 cxf]$ psql
NOTICE: Master mirroring synchronizing
psql (8.2.14)
Type "help" for help.
aligputf8=# select * from cxf;
a | b
------+---
chen | 1
xiao | 2
(2 rows)
Note that only two of the five rows remain: whatever data lived on the removed segment is lost, so this removal procedure discards data rather than redistributing it first.