Setting Up and Testing a Postgres-XL Cluster

CentOS6.5  192.168.0.101
CentOS6.5  192.168.0.102
CentOS6.5  192.168.0.103
CentOS6.5  192.168.0.104

I. Host Planning
cnode1 (gtm)
cnode2 (gtm_proxy, coordinator, datanode)
cnode3 (gtm_proxy, coordinator, datanode)
cnode4 (gtm_proxy, coordinator, datanode)

II. Configure /etc/hosts on every node
#vim /etc/hosts
192.168.0.101 cnode1
192.168.0.102 cnode2
192.168.0.103 cnode3
192.168.0.104 cnode4 
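
To confirm that the names resolve on every host, a quick check (a minimal sketch; run on each node):

# for h in cnode1 cnode2 cnode3 cnode4; do ping -c 1 $h; done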


III. Install Postgres-XL

1. Install the dependency packages on every node

# yum install -y flex bison readline-devel zlib-devel openjade docbook-style-dsssl 

Or install them individually:
yum -y install flex
yum -y install bison
yum -y install openjade.x86_64
yum -y install jadetex.noarch
yum -y install docbook*



2. Build and install Postgres-XL on every node
# tar -zxvf postgres-xl-v9.2-src.tar.gz
# cd postgres-xl
# ./configure --prefix=/usr/local/pgxl-9.2
# make
# make install 
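
A quick sanity check that the build landed under the expected prefix (assuming --prefix=/usr/local/pgxl-9.2 as above):

# /usr/local/pgxl-9.2/bin/postgres --version
# /usr/local/pgxl-9.2/bin/psql --version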


3. Install the cluster management tool pgxc_ctl on cnode1 (gtm)
Unpack the source on the GTM host (or on any machine you will manage the cluster from):

# tar -zxvf postgres-xl-v9.2-src.tar.gz
# cd /opt/pgxl/postgres-xl/contrib/pgxc_ctl/
# make
# make install

pgxc_ctl is now installed; the later initialization, start, and stop of the cluster are all done with this command.
By default it uses the /home/postgres/pgxc_ctl directory and, when run, reads the configuration file pgxc_ctl.conf from that directory.


IV. Configure the cluster

1. Create the postgres user on every node

Create the postgres OS user on each host:
# useradd postgres
# passwd postgres
# enter a password, e.g. 12345678
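
Optionally, if you already have root SSH access to the other hosts, the same can be scripted from cnode1 (a sketch; create the user locally on cnode1 first and adjust the password):

# for h in cnode2 cnode3 cnode4; do ssh root@$h 'useradd postgres && echo "postgres:12345678" | chpasswd'; done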

2. Set the environment variables on every node
# su - postgres
$ vi .bashrc    # note: the variables must go into .bashrc, otherwise the non-interactive SSH sessions used by pgxc_ctl will not find the commands

export PGHOME=/usr/local/pgxl-9.2
export PGUSER=postgres
export LD_LIBRARY_PATH=$PGHOME/lib
export PATH=$PGHOME/bin:$PATH 
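
Reload the profile and confirm the variables took effect (a quick check):

$ source ~/.bashrc
$ echo $PGHOME          # should print /usr/local/pgxl-9.2
$ psql --version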

3. Grant directory ownership on every node
During setup, directories are created on the datanodes, so the postgres user needs write permission on the $PGHOME directory.
Run the following on every datanode and coordinator host:
chown -R postgres:postgres /usr/local/pgxl-9.2


V. Set up passwordless SSH between the nodes (pgxc_ctl connects to the other nodes when it initializes the cluster)
ssh-keygen -t rsa    (run as the postgres user; the key pair is created under ~/.ssh)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
If the installation is not purely local, the contents of this file also have to be appended to the authorized_keys file of every other machine in the cluster:
scp authorized_keys postgres@cnode2:/home/postgres/.ssh/
scp authorized_keys postgres@cnode3:/home/postgres/.ssh/
scp authorized_keys postgres@cnode4:/home/postgres/.ssh/


The postgres user can now log in from cnode1 to cnode2 through cnode4 without a password.

Fix the permissions of the key files:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys 

Test the passwordless connection (you may be asked to confirm the host key on the first connection):
scp test_scp.txt postgres@cnode2:/home/postgres/
scp test_scp.txt postgres@cnode3:/home/postgres/
scp test_scp.txt postgres@cnode4:/home/postgres/


I ran the installation from the GTM host, so the commands above were executed on cnode1 (gtm) and the generated
id_rsa.pub was appended to .ssh/authorized_keys on every datanode and coordinator host.
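
An equivalent, slightly shorter way to distribute the key and verify the result (a sketch, run as postgres on cnode1):

$ for h in cnode2 cnode3 cnode4; do ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@$h; done
$ for h in cnode2 cnode3 cnode4; do ssh postgres@$h hostname; done    # should print each hostname without a password prompt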

VI. Write the cluster configuration for pgxc_ctl

First:

Create the data directory on every coordinator and datanode host and grant ownership:
mkdir -p /data/pg/pg92data
chown -R postgres:postgres /data/pg/pg92data
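
If root SSH access is available, this can be done for all three hosts from cnode1 in one go (a sketch; otherwise run the two commands locally on each host):

# for h in cnode2 cnode3 cnode4; do ssh root@$h 'mkdir -p /data/pg/pg92data && chown -R postgres:postgres /data/pg/pg92data'; done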

Then:
# On cnode1 (gtm), create the cluster configuration file pgxc_ctl.conf
# in the directory /home/postgres/pgxc_ctl.
# The meaning of each setting is documented at http://files.postgres-xl.org/documentation/pgxc_ctl.html

# The contents of pgxc_ctl.conf are as follows:
===========================

#user and path
pgxcOwner=postgres
pgxcUser=$pgxcOwner
pgxcInstallDir=/usr/local/pgxl-9.2

#gtm and gtmproxy
gtmMasterDir=/usr/local/pgxl-9.2/gtm
gtmMasterPort=6666
gtmMasterServer=cnode1
gtmSlave=n

#gtm proxy
gtmProxy=y
gtmProxyDir=/data/pg/pg92data
gtmProxyNames=(gtm_pxy1 gtm_pxy2 gtm_pxy3)
gtmProxyServers=(cnode2 cnode3 cnode4)
gtmProxyPorts=(20001 20001 20001)
gtmProxyDirs=($gtmProxyDir/gtm_pxy1 $gtmProxyDir/gtm_pxy2 $gtmProxyDir/gtm_pxy3)
gtmPxyExtraConfig=none
gtmPxySpecificExtraConfig=(none none none)

#coordinator
coordMasterDir=/data/pg/pg92data
coordNames=(coord1 coord2 coord3)
coordPorts=(5432 5432 5432)
poolerPorts=(20010 20010 20010)
coordPgHbaEntries=(192.168.0.0/24)
coordMasterServers=(cnode2 cnode3 cnode4)
coordMasterDirs=($coordMasterDir/coord1 $coordMasterDir/coord2 $coordMasterDir/coord3)
coordMaxWALSender=0
coordMaxWALSenders=($coordMaxWALSender $coordMaxWALSender $coordMaxWALSender)
coordSlave=n
coordSpecificExtraConfig=(none none none)
coordSpecificExtraPgHba=(none none none)


#datanode
datanodeNames=(datanode1 datanode2 datanode3)
datanodePorts=(15432 15432 15432)
datanodePoolerPorts=(20012 20012 20012)
datanodePgHbaEntries=(192.168.0.0/24)
datanodeMasterServers=(cnode2 cnode3 cnode4)
datanodeMasterDir=/data/pg/pg92data
datanodeMasterDirs=($datanodeMasterDir/datanode1 $datanodeMasterDir/datanode2 $datanodeMasterDir/datanode3)
datanodeMaxWALSender=0
datanodeMaxWALSenders=($datanodeMaxWALSender $datanodeMaxWALSender $datanodeMaxWALSender)
datanodeSlave=n
primaryDatanode=datanode1



VII. Initialize the cluster with pgxc_ctl
pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf init all 
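
Once initialization finishes, pgxc_ctl's monitor command can be used to check that every component is reported as running (a quick check, same config file):

pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf monitor all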


VIII. Configure pg_hba.conf on every coordinator and datanode

Add a trust entry for the cluster network and md5 authentication for everything else:
host all all 192.168.0.0/24 trust
host all all 0.0.0.0/0 md5
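
If the nodes are already running, pg_hba.conf changes only take effect after a reload. For example on cnode2 (a sketch; the data directory names follow the ones used in pgxc_ctl.conf above):

$ pg_ctl reload -D /data/pg/pg92data/coord1
$ pg_ctl reload -D /data/pg/pg92data/datanode1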

IX. Start the cluster

pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf start all 

Note:
The cluster is stopped with:
pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf stop all 

X. Register the nodes (required on both coordinators and datanodes)

On every node except the GTM, connect and register the cluster nodes:
psql -p 5432 postgres    // connect to the local coordinator
psql -p 15432 postgres   // connect to the local datanode

drop node coord1;
drop node coord2;
drop node coord3;
drop node datanode1;
drop node datanode2;
drop node datanode3;
create node coord1 with(TYPE=coordinator,HOST='cnode2',PORT=5432);
create node coord2 with(TYPE=coordinator,HOST='cnode3',PORT=5432);
create node coord3 with(TYPE=coordinator,HOST='cnode4',PORT=5432);
create node datanode1 with(TYPE=datanode,HOST='cnode2',PORT=15432,primary=true);
create node datanode2 with(TYPE=datanode,HOST='cnode3',PORT=15432,primary=false);
create node datanode3 with(TYPE=datanode,HOST='cnode4',PORT=15432,primary=false);
alter node coord1 with(TYPE=coordinator,HOST='cnode2',PORT=5432);
alter node coord2 with(TYPE=coordinator,HOST='cnode3',PORT=5432);
alter node coord3 with(TYPE=coordinator,HOST='cnode4',PORT=5432);
alter node datanode1 with(TYPE=datanode,HOST='cnode2',PORT=15432,primary=true);
alter node datanode2 with(TYPE=datanode,HOST='cnode3',PORT=15432,primary=false);
alter node datanode3 with(TYPE=datanode,HOST='cnode4',PORT=15432,primary=false);
select pgxc_pool_reload();
select * from pgxc_node;
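
If you prefer to drive this from one host, the statements above can be saved to a file (here hypothetically named nodes.sql) and applied to every coordinator and datanode over the network (a sketch):

for h in cnode2 cnode3 cnode4; do
    psql -h $h -p 5432  -U postgres -d postgres -f nodes.sql    # coordinator on this host
    psql -h $h -p 15432 -U postgres -d postgres -f nodes.sql    # datanode on this host
done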



XI. Change the postgres password
ALTER USER postgres WITH PASSWORD '12345678';


XII. Test the cluster

1. Basic DDL tests

(1) Create a role
On cnode2, run: psql -p 5432 postgres    // connect to the coordinator
create role bsadmin nosuperuser login encrypted password 'dbpwd';


Note: bsadmin is the name of the role to create and dbpwd is its password; substitute your own values.

(2) Create a tablespace
On cnode2, run: psql -p 5432 postgres    // connect to the coordinator
create tablespace tbs_book_shopping owner postgres location '/data/pg/pg92data/tbs/tbs_bs'; 
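
The tablespace location has to exist and be owned by postgres on every coordinator and datanode before the statement above will succeed. One way to prepare it (a sketch, run as postgres from cnode1):

for h in cnode2 cnode3 cnode4; do ssh postgres@$h 'mkdir -p /data/pg/pg92data/tbs/tbs_bs'; done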

(3) Create a database
On cnode2, run: psql -p 5432 postgres    // connect to the coordinator
create database book_shopping with owner postgres template template0 encoding 'UTF8' tablespace tbs_book_shopping; 

(4) Connect to the database
On cnode2, run: psql -p 5432 postgres    // connect to the coordinator
\c book_shopping postgres

2. Distributed storage test

Create a node group and distribute data by hash across that group.
# The same group must be created on every coordinator and datanode (on the same host they are distinguished by port); group definitions are not propagated automatically.
psql -h cnode2 -p 5432 postgres postgres -c "create node group gp1 with(datanode2,datanode3)";
psql -h cnode3 -p 5432 postgres postgres -c "create node group gp1 with(datanode2,datanode3)";
psql -h cnode4 -p 5432 postgres postgres -c "create node group gp1 with(datanode2,datanode3)";

psql -h cnode2 -p 15432 postgres postgres -c "create node group gp1 with(datanode2,datanode3)";
psql -h cnode3 -p 15432 postgres postgres -c "create node group gp1 with(datanode2,datanode3)";
psql -h cnode4 -p 15432 postgres postgres -c "create node group gp1 with(datanode2,datanode3)";

Note:
Creating a table that targets gp1 on a node where the group has not been defined fails:
postgres=# create table t1(id serial8 primary key, info text, crt_time timestamp) distribute by hash(id) to group gp1;
NOTICE:  CREATE TABLE will create implicit sequence "t1_id_seq" for serial column "t1.id"
ERROR:  PGXC Group gp1: group not defined

Reload the cluster configuration:
psql -h cnode2 -p 5432 postgres postgres -c "select pgxc_pool_reload();";
psql -h cnode3 -p 5432 postgres postgres -c "select pgxc_pool_reload();";
psql -h cnode4 -p 5432 postgres postgres -c "select pgxc_pool_reload();";


Verify that the gp1 group exists on every node:
psql -h cnode2 -p 5432 postgres postgres -c "select * from pgxc_group;";
psql -h cnode3 -p 5432 postgres postgres -c "select * from pgxc_group;";
psql -h cnode4 -p 5432 postgres postgres -c "select * from pgxc_group;";
psql -h cnode2 -p 15432 postgres postgres -c "select * from pgxc_group;";
psql -h cnode3 -p 15432 postgres postgres -c "select * from pgxc_group;";
psql -h cnode4 -p 15432 postgres postgres -c "select * from pgxc_group;";

A node group can also be removed again with:
psql -h cnode2 -p 5432 postgres postgres -c "delete from pgxc_group where group_name = 'gp1';";
psql -h cnode3 -p 5432 postgres postgres -c "delete from pgxc_group where group_name = 'gp1';";
psql -h cnode4 -p 5432 postgres postgres -c "delete from pgxc_group where group_name = 'gp1';";



Switch to the book_shopping database:

psql -p 5432 postgres
\c book_shopping postgres


Create a table:
create table t1(id serial8 primary key, info text, crt_time timestamp) distribute by hash(id) to group gp1;

(1) Insert test
insert into t1 (info, crt_time) select md5(random()::text), clock_timestamp() from generate_series(1,1000);

Check the data
The rows of t1 land on the datanodes of group gp1, i.e. datanode2 (cnode3) and datanode3 (cnode4); the counts on those nodes should add up to 1000:
select count(*) from t1;
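
The per-node counts can also be checked from the coordinator without logging in to the datanodes, using EXECUTE DIRECT (a quick check; node names as registered in section X):

book_shopping=# execute direct on (datanode2) 'select count(*) from t1';
book_shopping=# execute direct on (datanode3) 'select count(*) from t1';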

(2) Update test
Update and delete statements can only be executed through a coordinator.
In coord1, query the first 5 rows:
select * from t1 limit 5;

 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | 2a85488b8f67cc249fb004345e3d387e | 2015-06-04 10:59:01.600481
  2 | cbc0b99298a92fb883b1df90ec9d2555 | 2015-06-04 10:59:01.60165
  5 | f254966bd9229619e9061fd2e7ea9bac | 2015-06-04 10:59:01.603552
  6 | b3b5db57115da5abb5c25b8084329804 | 2015-06-04 10:59:01.604219
  8 | 4ded54b3d846c5bfd608dbe95f39d2a7 | 2015-06-04 10:59:01.605467
 
Connecting directly to datanode1, we can see that the row with id = 1 is stored on that node (cnode2):
book_shopping=# select * from t1 where id=1;
 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | 2a85488b8f67cc249fb004345e3d387e | 2015-06-04 10:59:01.600481
(1 row)


In coord1, update the row with id = 1:
update t1 set info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' where id = 1;

book_shopping=# update t1 set info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' where id = 1;
UPDATE 1


Back on datanode1, query the row with id = 1 again:
book_shopping=# select * from t1 where id=1;
 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | 2015-06-04 10:59:01.600481
(1 row)

(3) Query test
In coord1, a query returns the data distributed across all the nodes:
book_shopping=# select * from t1;
  id  |               info               |          crt_time          
------+----------------------------------+----------------------------
    2 | cbc0b99298a92fb883b1df90ec9d2555 | 2015-06-04 10:59:01.60165
    5 | f254966bd9229619e9061fd2e7ea9bac | 2015-06-04 10:59:01.603552
    6 | b3b5db57115da5abb5c25b8084329804 | 2015-06-04 10:59:01.604219
    8 | 4ded54b3d846c5bfd608dbe95f39d2a7 | 2015-06-04 10:59:01.605467
    9 | 9e3035492918569a707fdae0c83ad9fd | 2015-06-04 10:59:01.606088
   12 | 9836e9da92da44eb1d51ad8e8d9cec67 | 2015-06-04 10:59:01.607926
   13 | 94b6fbea895cd530d6e109613270f2a5 | 2015-06-04 10:59:01.608486
   15 | 4a35e75632b17c67f8233d60c3b13cb1 | 2015-06-04 10:59:01.609578
   17 | cf9b88c7e50a2a984cabfd131b308c19 | 2015-06-04 10:59:01.610672
   19 | 7dafeea2571910171af39e5d7f2eb7d5 | 2015-06-04 10:59:01.611785
   21 | 2e5dd218b9521fcb5f0d653f09740fb7 | 2015-06-04 10:59:01.612912
   23 | e91fd409c1a1f80e701e0a962243484e | 2015-06-04 10:59:01.614065
   26 | 49a68c54f8c7e76a9770793706dc9796 | 2015-06-04 10:59:01.615807
   28 | 79699ca1b507c700548560e477d9bf0c | 2015-06-04 10:59:01.616982
   40 | 0eb75ff919396521584f2b833253b834 | 2015-06-04 10:59:01.623801
   41 | 0302e9fbb3e7bdc714c1584191edb136 | 2015-06-04 10:59:01.624409
   42 | 97692d97b5c9c11ca6108181f3573334 | 2015-06-04 10:59:01.624949
   44 | 90eddaf542e0485a96fa73fc78b059dc | 2015-06-04 10:59:01.626069
   46 | f2cafc526979aa2e86a434e5de9929ed | 2015-06-04 10:59:01.627223
   49 | 137e1311d07f117a2249b961eabe4001 | 2015-06-04 10:59:01.628939
   50 | 9323ec806c550b537e202fd5e61d8a24 | 2015-06-04 10:59:01.629536
   52 | dc41ba4e2046ae348b2ce01033b46efe | 2015-06-04 10:59:01.630633
   56 | 27ba4928937806bae5cf6c0359ab9a03 | 2015-06-04 10:59:01.632916
   57 | f50ede190d3383c060fe3829c7accb79 | 2015-06-04 10:59:01.6335
   62 | 16d2634d35b11d5dcd020083a12ee6eb | 2015-06-04 10:59:01.636402
   64 | 840c97d994cd9ea6fee86c6b1b5e43a1 | 2015-06-04 10:59:01.637562
   66 | aa82ea0624c44a8838e2fb7a3cb24a90 | 2015-06-04 10:59:01.638696
   67 | 40535733b65ab6a5023c7d7d142c435e | 2015-06-04 10:59:01.639286
   68 | c9496a076e2fbcca2a781dc665007219 | 2015-06-04 10:59:01.639821
   70 | 2f0f80cf8d2f6d06d591bf68fac1c253 | 2015-06-04 10:59:01.640937
   74 | 95b9eee52187d3131cd7e125eba809e1 | 2015-06-04 10:59:01.643187
   76 | 32e43e6ae96bf320775beccb61b7a48f | 2015-06-04 10:59:01.644315
   77 | 2d327bcc9cbbb4e155d1e8671ed71f71 | 2015-06-04 10:59:01.644962
   80 | 896a5f50e47b01bab9942ca099e2aa67 | 2015-06-04 10:59:01.646691
   81 | b3d5808db7235d5927055838d4666492 | 2015-06-04 10:59:01.647239
   83 | fc7a8ddffdb5d4d91f8f9c0238e3a577 | 2015-06-04 10:59:01.648333
   84 | c1c4ddbadeaf595a94dd1120aeb1479e | 2015-06-04 10:59:01.648877
   85 | d617f8bac6d368fa46f1c00d55834349 | 2015-06-04 10:59:01.649451
   87 | 11f591e7526e7145df6737090ba32556 | 2015-06-04 10:59:01.650531
   90 | e0287526bdf0c9f277db3d9a94489e68 | 2015-06-04 10:59:01.652254
   99 | 2197bf1dc04d3d327ec1b1440dac8249 | 2015-06-04 10:59:01.657322
  101 | 5319875171df0c35e5e4111c0a8dbea4 | 2015-06-04 10:59:01.658466
  ... (more rows)


 
  You can also query for rows matching a condition:
  select * from t1 where info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
 
  book_shopping=# select * from t1 where info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
 id |               info               |          crt_time          
 ----+----------------------------------+----------------------------
  1 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | 2015-06-04 10:59:01.600481
 (1 row)



(4) Delete test
 delete from t1 where info = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';


 Update and delete statements can only be executed through a coordinator; running the delete directly on datanode1 fails with:
 book_shopping=# delete from t1 where info = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
 ERROR:  cannot execute DELETE in a read-only transaction


 
 In coord1, execute:
 book_shopping=# delete from t1 where info = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
 DELETE 1
 
 book_shopping=# select count(1) from t1;
 count
-------
   999
(1 row)


3. Replicated storage test
create table t2(id serial8 primary key, info text, crt_time timestamp) DISTRIBUTE BY REPLICATION;

(1) Insert test
insert into t2 (info, crt_time) select md5(random()::text), clock_timestamp() from generate_series(1,200);

Check on datanode1, datanode2, and datanode3 that the t2 table in book_shopping holds 200 rows on every node:
select count(*) from t2;
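
For example, checked over the network from cnode1 (a sketch; access as allowed by the pg_hba.conf entries above):

for h in cnode2 cnode3 cnode4; do psql -h $h -p 15432 -U postgres -d book_shopping -c "select count(*) from t2;"; done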


(2) Query test

In coord1, run a query:
select * from t2 limit 10;

book_shopping=# select * from t2 limit 10;
 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | 6251581ce28a39e92d322a00880772df | 2015-06-04 11:33:37.100742
  2 | 35b39c91f678ce7acb406a62ab2a15af | 2015-06-04 11:33:37.101695
  3 | ad7657b5ddb5cc2273f925432c8fee40 | 2015-06-04 11:33:37.102295
  4 | f7f0b7bf1ae3d02b34f61e3b500dfe70 | 2015-06-04 11:33:37.102902
  5 | b661a1208585e01c8abcd7bc699c3ac4 | 2015-06-04 11:33:37.103556
  6 | 3b3434e38f5916fd86a14cef94060885 | 2015-06-04 11:33:37.104154
  7 | 8b2be24600a401b3d1770134243bc3b7 | 2015-06-04 11:33:37.104757
  8 | 597cc7d88f19dc58bf0be793d12514b7 | 2015-06-04 11:33:37.105326
  9 | 4b73d76881b3b33719165797b9c34534 | 2015-06-04 11:33:37.105898
 10 | d1b1b5ae9d22cfd132b8811bf256be94 | 2015-06-04 11:33:37.10648
(10 rows)


(3) Update test
In coord1, run an update:
update t2 set info='bbbbbbbbbbbbbbbbbbbbbbb' where id = 1;

 
Then check on datanode1, datanode2, and datanode3 that the row was updated:
select * from t2 where id = 1;

book_shopping=# select * from t2 where id = 1;
 id |          info           |          crt_time          
----+-------------------------+----------------------------
  1 | bbbbbbbbbbbbbbbbbbbbbbb | 2015-06-04 11:33:37.100742
(1 row)


(4) Delete test
In coord1, run the delete:
delete from t2 where id = 1;

book_shopping=# delete from t2 where id = 1;
DELETE 1


Then check on datanode1, datanode2, and datanode3 that the row is gone:
book_shopping=# select * from t2 where id = 1;
 id | info | crt_time
----+------+----------
(0 rows)


4. Test a PL/pgSQL function
Connect to coord1 with pgAdmin III, open a new SQL query window, paste the following, and execute it:

create or replace function selectinto(id int) returns varchar as
$BODY$
    declare
        sql varchar;
        str varchar := '';   -- initialize; concatenating onto NULL would always yield NULL
        re record;
    begin
        -- build a dynamic query and concatenate the matching info values
        sql := 'select info from t1 where id=' || id;
        for re in execute sql loop
            str := str || re.info;
        end loop;
        return str;
    end
$BODY$
language plpgsql;

After it has executed, the function shows up in the function list of every node.
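
A quick call from psql on coord1, for example (assuming the row with id = 5 still exists in t1):

book_shopping=# select selectinto(5);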


