[GP6 Install & Config] Greenplum 6.2.1 Installation Notes (Part 2)

Author: lianghc
This write-up is split into two parts:

Parameter configuration: [GP6 Install & Config] Greenplum 6.2.1 Installation Notes (Part 1)

Installation: [GP6 Install & Config] Greenplum 6.2.1 Installation Notes (Part 2)

Table of Contents

3. Cluster Software Installation

3.1 Run the installer

3.2 Create hostfile_exkeys

3.3 Cluster trust and passwordless login

3.3.1 Generate the key pair

3.3.2 Copy the local public key to each node's authorized_keys

3.3.3 Use gpssh-exkeys to enable n-to-n passwordless login

3.3.4 Verify gpssh

3.4 Sync the master configuration to all hosts (not in the official guide)

3.4.1 Batch-create the gpadmin user

3.4.2 Enable passwordless login for gpadmin

3.4.3 Batch-set the Greenplum environment variables for gpadmin

3.4.4 Batch-copy system parameters to the other nodes

3.5 Cluster node installation

3.5.1 Simulating the gpseginstall script

3.5.2 Create the cluster data directories

3.6 Cluster performance tests

3.6.1 Network performance test

3.6.2 Disk I/O performance test

3.6.3 Cluster clock check (not an official step)

4. Cluster Initialization

4.1 Write the initialization config file

4.1.1 Copy the config file template

4.1.2 Adjust parameters as needed

4.2 Cluster initialization

4.2.1 Initialization command parameters

4.2.2 Handling errors during execution

4.3 Post-initialization steps

4.3.1 Review the log contents

4.3.2 Set environment variables

4.3.3 To remove and reinstall, use gpdeletesystem

4.3.4 Configure pg_hba.conf

5 Post-install Configuration

5.1 Log in with psql and set a password

5.1.1 Logging in to different nodes

5.2 Client login

5.2.1 Configure pg_hba.conf

5.2.2 Modify postgresql.conf

5.2.3 Reload the modified files


3. Cluster Software Installation

Reference: https://gpdb.docs.pivotal.io/6-2/install_guide/install_gpdb.html
# Differences from older versions

Up through gp4.x/gp5.x, installation had four steps:
   1. Install the master (usually a .bin executable; the install directory can be chosen)
   2. Install each segment with gpseginstall
   3. Validate the cluster parameters
   4. Initialize the cluster with gpinitsystem

Starting with gp6.2 no zip archive is provided, only rpm packages, and installation also has four steps:
   1. Install the master (rpm -ivh / yum install -y); the install directory cannot be chosen and defaults to /usr/local/
   2. gp6 has no gpseginstall tool, so either install on every node separately with yum, or package the master's installed GP directory and push it to the segments (see the sketch after this list):
      1. Run yum on each node host individually
      2. Package the master's install directory and distribute it to the segment hosts
   3. Validate cluster performance
   4. Initialize the cluster with gpinitsystem
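A minimal sketch of the per-node yum route, assuming the rpm has been copied to each segment host and passwordless root ssh (set up in section 3.3) is already in place:

gpscp -f seg_host ./greenplum-db-6.2.1-rhel6-x86_64.rpm root@=:/tmp/
gpssh -f seg_host -e 'yum install -y /tmp/greenplum-db-6.2.1-rhel6-x86_64.rpm'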

3.1 Run the installer

Run the installer; it installs to /usr/local/ by default.

 yum install -y ./greenplum-db-6.2.1-rhel6-x86_64.rpm

Or install with rpm:

rpm -ivh greenplum-db-6.2.1-rhel6-x86_64.rpm

This test ran on machines without internet access, and the dependency packages had not been downloaded in advance; instead, whatever was missing at install time was fetched on demand. Here the missing package was libyaml: download it, upload it to the server, install it, and then re-run the GP installer. libyaml download: http://rpmfind.net/linux/rpm2html/search.php?query=libyaml(x86-64)
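Installing the downloaded dependency is a plain rpm install; the filename below is only an example and depends on the build you fetched:

rpm -ivh libyaml-0.1.3-4.el6_6.x86_64.rpm   # hypothetical filename; use the rpm matching your OS/arch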


[root@mdw gp_install_package]#  yum install -y ./greenplum-db-6.2.1-rhel6-x86_64.rpm
Loaded plugins: product-id, refresh-packagekit, search-disabled-repos, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
Examining ./greenplum-db-6.2.1-rhel6-x86_64.rpm: greenplum-db-6.2.1-1.el6.x86_64
Marking ./greenplum-db-6.2.1-rhel6-x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package greenplum-db.x86_64 0:6.2.1-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================================
 Package                         Arch                      Version                           Repository                                           Size
=======================================================================================================================================================
Installing:
 greenplum-db                    x86_64                    6.2.1-1.el6                       /greenplum-db-6.2.1-rhel6-x86_64                    493 M

Transaction Summary
=======================================================================================================================================================
Install       1 Package(s)

Total size: 493 M
Installed size: 493 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : greenplum-db-6.2.1-1.el6.x86_64                                                                                                     1/1
  Verifying  : greenplum-db-6.2.1-1.el6.x86_64                                                                                                     1/1

Installed:
  greenplum-db.x86_64 0:6.2.1-1.el6

Complete!


3.2 Create hostfile_exkeys

Create two host files (all_host, seg_host) in the $GPHOME directory; they are passed later as the host-file argument to gpssh, gpscp, and similar utilities.
all_host: the hostnames or IPs of every host in the cluster, including master, segments, and standby.
seg_host: the hostnames or IPs of all segment hosts.
If a machine has multiple NICs that are not bonded into bond0, list the IPs or hostnames of every NIC.
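For illustration, a hypothetical all_host for hosts with two unbonded NICs (the per-NIC hostnames are invented; use whatever names actually resolve on your network):

mdw-1
mdw-2
sdw1-1
sdw1-2
sdw2-1
sdw2-2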

[root@mdw ~]# cd /usr/local/
[root@mdw local]# ls
bin  etc  games  greenplum-db  greenplum-db-6.2.1  include  lib  lib64  libexec  openssh-6.5p1  sbin  share  src  ssl
[root@mdw local]# cd greenplum-db
[root@mdw greenplum-db]# ls
bin  docs  etc  ext  greenplum_path.sh  include  lib  open_source_license_pivotal_greenplum.txt  pxf  sbin  share
[root@mdw greenplum-db]# vi all_host
[root@mdw greenplum-db]# vi seg_host
[root@mdw greenplum-db]# cat all_host
mdw
sdw1
sdw2
[root@mdw greenplum-db]# cat seg_host
sdw1
sdw2

Change the directory ownership:

[root@mdw greenplum-db]# chown -R gpadmin:gpadmin /usr/local/greenplum*

3.3 Cluster trust and passwordless login

## Differences from older versions
   Before gp6.x, the ssh-keygen step (3.3.1) and ssh-copy-id step (3.3.2) were unnecessary; gpssh-exkeys -f all_host could be run directly.


3.3.1 Generate the key pair

This machine has no key pair yet, so generate one first:

[root@mdw greenplum-db]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
88:c0:be:87:6a:c2:40:ed:fd:ab:34:f0:35:60:47:0f root@mdw
The key's randomart image is:
+--[ RSA 2048]----+
|       E         |
| .    . o        |
|  +  o . .       |
| o o..o.         |
|. o.o .oS        |
|.  +o.. .        |
|o o .+.          |
|.+ .. ..         |
|+    ....        |
+-----------------+

3.3.2 Copy the local public key to each node's authorized_keys

[root@mdw greenplum-db]# ssh-copy-id sdw1

[root@mdw greenplum-db]# ssh-copy-id sdw2


3.3.3 Use gpssh-exkeys to enable n-to-n passwordless login

[root@mdw greenplum-db]# gpssh-exkeys -f all_host
Problem getting hostname for gpzq-sh-mb: [Errno -3] Temporary failure in name resolution
Traceback (most recent call last):
  File "/usr/local/greenplum-db/./bin/gpssh-exkeys", line 409, in 
    (primary, aliases, ipaddrs) = socket.gethostbyaddr(hostname)
socket.gaierror: [Errno -3] Temporary failure in name resolution
# The error above happened because section 2.2.1 (configure /etc/hosts on every machine) was skipped
[root@gjzq-sh-mb greenplum-db]# hostname mdw
[root@gjzq-sh-mb greenplum-db]# gpssh-exkeys -f all_host
[STEP 1 of 5] create local ID and authorize on local host
  ... /root/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] retrieving credentials from remote hosts
  ... send to sdw1
  ... send to sdw2

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with sdw1
  ... finished key exchange with sdw2

[INFO] completed successfully

3.3.4 Verify gpssh

[root@mdw ~]# source /usr/local/greenplum-db/greenplum_path.sh
[root@mdw ~]# gpssh -f /usr/local/greenplum-db/all_host -e 'ls /usr/local/'
[ mdw] ls /usr/local/
[ mdw] bin  games          greenplum-db-6.2.1  lib    libexec        sbin   src
[ mdw] etc  greenplum-db  include              lib64  openssh-6.5p1  share  ssl
[sdw1] ls /usr/local/
[sdw1] bin  games    lib    libexec         sbin   src
[sdw1] etc  include  lib64  openssh-6.5p1  share  ssl
[sdw2] ls /usr/local/
[sdw2] bin  games    lib    libexec         sbin   src
[sdw2] etc  include  lib64  openssh-6.5p1  share  ssl

3.4 Sync the master configuration to all hosts (not in the official guide)


This step is not part of the official guide; there, the system-parameter changes are applied to every host in the cluster at the same time. In this write-up the earlier parameter changes were made on the master only, so this step brings the whole cluster to a consistent configuration.

3.4.1 Batch-create the gpadmin user

[root@mdw greenplum-db]# source greenplum_path.sh
[root@mdw greenplum-db]# gpssh -f seg_host -e 'groupadd gpadmin;useradd gpadmin -r -m -g gpadmin;echo "gpadmin" | passwd --stdin gpadmin;'
[root@mdw greenplum-db]# gpssh -f seg_host -e 'ls /home/'

3.4.2 Enable passwordless login for gpadmin

 ## Differences from older versions
Before gp6, passwordless login for the gpadmin user was handled automatically by the gpseginstall tool; with gp6 it must be done manually.
[root@mdw greenplum-db-6.2.1]# su - gpadmin
[gpadmin@mdw ~]$ source /usr/local/greenplum-db/greenplum_path.sh
[gpadmin@mdw ~]$ ssh-keygen
[gpadmin@mdw ~]$ ssh-copy-id sdw1
[gpadmin@mdw ~]$ ssh-copy-id sdw2
[gpadmin@mdw ~]$ gpssh-exkeys -f /usr/local/greenplum-db/all_host

3.4.3 Batch-set the Greenplum environment variables for gpadmin

Add the GP installation path and environment settings to the gpadmin user's environment.
Edit .bash_profile

cat >> /home/gpadmin/.bash_profile << EOF
source /usr/local/greenplum-db/greenplum_path.sh
EOF

Edit .bashrc

cat >> /home/gpadmin/.bashrc << EOF
source /usr/local/greenplum-db/greenplum_path.sh
EOF

Distribute the environment files to the other nodes:

su - gpadmin
source /usr/local/greenplum-db/greenplum_path.sh
gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bash_profile  gpadmin@=:/home/gpadmin/.bash_profile
gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc 


3.4.4 Batch-copy system parameters to the other nodes

# Example:
su root
gpscp -f seg_host /etc/hosts   root@=:/etc/hosts
gpscp -f seg_host /etc/security/limits.conf   root@=:/etc/security/limits.conf 
gpscp -f seg_host /etc/sysctl.conf  root@=:/etc/sysctl.conf 
gpscp -f seg_host /etc/security/limits.d/90-nproc.conf   root@=:/etc/security/limits.d/90-nproc.conf
gpssh -f seg_host -e '/sbin/blockdev --setra 16384 /dev/sda'
gpssh -f seg_host -e 'echo deadline > /sys/block/sda/queue/scheduler'
gpssh -f seg_host -e 'sysctl -p'
gpssh -f seg_host -e 'reboot'

3.5 Cluster node installation

## Differences from older versions
   The official docs currently lack this part.
Before gp6 there was a tool, gpseginstall, that installed the GP software on every node. From gpseginstall's logs, its main steps were:
1. Create the gp user on each node (can be skipped here)
2. Package the master's install directory
3. scp it to each segment server
4. Unpack it and create the symlink
5. Grant ownership to gpadmin
For gpseginstall's install log, see the gp5 installation notes.


3.5.1 Simulating the gpseginstall script

The following script mimics the main steps of gpseginstall to deploy the segments.

# Run as root
# Variables
link_name='greenplum-db'                    # symlink name
binary_dir_location='/usr/local'            # install location
binary_dir_name='greenplum-db-6.2.1'        # install directory
binary_path='/usr/local/greenplum-db-6.2.1' # full path

# Package on the master node

chown -R gpadmin:gpadmin $binary_path
rm -f ${binary_path}.tar; rm -f ${binary_path}.tar.gz
cd $binary_dir_location; tar cf ${binary_dir_name}.tar ${binary_dir_name}
gzip ${binary_path}.tar

# Distribute to the segments

gpssh -f ${binary_path}/seg_host -e "mkdir -p ${binary_dir_location};rm -rf ${binary_path};rm -rf ${binary_path}.tar;rm -rf ${binary_path}.tar.gz"
gpscp -f ${binary_path}/seg_host ${binary_path}.tar.gz root@=:${binary_path}.tar.gz
gpssh -f ${binary_path}/seg_host -e "cd ${binary_dir_location};gzip -f -d ${binary_path}.tar.gz;tar xf ${binary_path}.tar"
gpssh -f ${binary_path}/seg_host -e "rm -rf ${binary_path}.tar;rm -rf ${binary_path}.tar.gz;rm -f ${binary_dir_location}/${link_name}"
gpssh -f ${binary_path}/seg_host -e "ln -fs ${binary_dir_location}/${binary_dir_name} ${binary_dir_location}/${link_name}"
gpssh -f ${binary_path}/seg_host -e "chown -R gpadmin:gpadmin ${binary_dir_location}/${link_name};chown -R gpadmin:gpadmin ${binary_dir_location}/${binary_dir_name}"
gpssh -f ${binary_path}/seg_host -e "source ${binary_path}/greenplum_path.sh"
gpssh -f ${binary_path}/seg_host -e "cd ${binary_dir_location};ls -l"

3.5.2 Create the cluster data directories

3.5.2.1 Create the master data directory

 mkdir -p /opt/greenplum/data/master
 chown gpadmin:gpadmin /opt/greenplum/data/master

Standby data directory (this deployment has no standby).
Use gpssh to create the standby's data directory remotely:

# source /usr/local/greenplum-db/greenplum_path.sh
# gpssh -h smdw -e 'mkdir -p /data/master'
# gpssh -h smdw -e 'chown gpadmin:gpadmin /data/master'

3.5.2.2 Create the segment data directories
The plan here is two segments and two mirrors per host.

source /usr/local/greenplum-db/greenplum_path.sh 
gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /opt/greenplum/data1/primary'
gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /opt/greenplum/data1/mirror'
gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /opt/greenplum/data2/primary'
gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /opt/greenplum/data2/mirror'
gpssh -f /usr/local/greenplum-db/seg_host -e 'chown -R gpadmin /opt/greenplum/data*'


3.6 Cluster performance tests

## Differences from older versions
   gp6 dropped the gpcheck tool, which validated the system parameters and hardware configuration GP requires. What can still be validated is network and disk I/O performance.

For details see the official docs:
https://gpdb.docs.pivotal.io/6-2/install_guide/validate.html#topic1
Further reading:
https://yq.aliyun.com/articles/230896?spm=a2c4e.11155435.0.0.a9756e1eIiHSoH

Personal rule of thumb (for reference only; consult other sources for firm standards):
disk should generally reach about 2000 MB/s,
network at least 1000 MB/s.

3.6.1 Network performance test

Example run:


[root@mdw local]#  gpcheckperf -f /usr/local/greenplum-db/seg_host -r N -d /tmp
/usr/local/greenplum-db/./bin/gpcheckperf -f /usr/local/greenplum-db/seg_host -r N -d /tmp

-------------------
--  NETPERF TEST
-------------------
 Authorized only. All activity will be monitored and reported
NOTICE: -t is deprecated, and has no effect
NOTICE: -f is deprecated, and has no effect
 Authorized only. All activity will be monitored and reported
NOTICE: -t is deprecated, and has no effect
NOTICE: -f is deprecated, and has no effect
[Warning] netperf failed on sdw2 -> sdw1

====================
==  RESULT 2019-12-18T19:40:30.264321
====================
Netperf bisection bandwidth test
sdw1 -> sdw2 = 2273.930000

Summary:
sum = 2273.93 MB/sec
min = 2273.93 MB/sec
max = 2273.93 MB/sec
avg = 2273.93 MB/sec
median = 2273.93 MB/sec

The test reported netperf failed on sdw2 -> sdw1; investigation showed that sdw2's hosts file had not been configured.

3.6.2 Disk I/O performance test

Each host in this test runs two segments but has only one disk, so testing a single directory is enough. The test generates roughly 32 GB of data, so make sure there is enough free disk space.
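A quick way to confirm there is enough free space before running the test (sketch):

gpssh -f /usr/local/greenplum-db/seg_host -e 'df -h /opt/greenplum'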

 gpcheckperf -f /usr/local/greenplum-db/seg_host -r ds -D   -d /opt/greenplum/data1/primary
[root@mdw greenplum-db]#  gpcheckperf -f /usr/local/greenplum-db/seg_host -r ds -D   -d /opt/greenplum/data1/primary
/usr/local/greenplum-db/./bin/gpcheckperf -f /usr/local/greenplum-db/seg_host -r ds -D -d /opt/greenplum/data1/primary
--------------------
--  DISK WRITE TEST
--------------------
--------------------
--  DISK READ TEST
--------------------
--------------------
--  STREAM TEST
--------------------
====================
==  RESULT 2019-12-18T19:59:06.969229
====================
 disk write avg time (sec): 47.34
 disk write tot bytes: 66904850432
 disk write tot bandwidth (MB/s): 1411.59
 disk write min bandwidth (MB/s): 555.60 [sdw2]
 disk write max bandwidth (MB/s): 855.99 [sdw1]
 -- per host bandwidth --
    disk write bandwidth (MB/s): 855.99 [sdw1]
    disk write bandwidth (MB/s): 555.60 [sdw2]

 disk read avg time (sec): 87.33
 disk read tot bytes: 66904850432
 disk read tot bandwidth (MB/s): 738.54
 disk read min bandwidth (MB/s): 331.15 [sdw2]
 disk read max bandwidth (MB/s): 407.39 [sdw1]
 -- per host bandwidth --
    disk read bandwidth (MB/s): 407.39 [sdw1]
    disk read bandwidth (MB/s): 331.15 [sdw2]

 stream tot bandwidth (MB/s): 12924.30
 stream min bandwidth (MB/s): 6451.80 [sdw1]
 stream max bandwidth (MB/s): 6472.50 [sdw2]
 -- per host bandwidth --
    stream bandwidth (MB/s): 6451.80 [sdw1]
    stream bandwidth (MB/s): 6472.50 [sdw2]

3.6.3 Cluster clock check (not an official step)

Check that the clocks across the cluster agree; if they differ, fix the ntp setup.

gpssh -f /usr/local/greenplum-db/all_host -e 'date'
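If the clocks disagree, a minimal resync sketch, assuming an NTP server reachable at the placeholder name ntp.internal:

gpssh -f /usr/local/greenplum-db/all_host -e 'ntpdate -u ntp.internal && hwclock -w'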


4. Cluster Initialization

Official docs: https://gpdb.docs.pivotal.io/6-2/install_guide/init_gpdb.html

4.1 Write the initialization config file

4.1.1 Copy the config file template

su - gpadmin
mkdir -p /home/gpadmin/gpconfigs
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config


4.1.2 Adjust parameters as needed

Note: to choose PORT_BASE, review the port range specified by the net.ipv4.ip_local_port_range parameter in /etc/sysctl.conf; segment ports should stay outside the OS's ephemeral port range.
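A quick sanity check of that range (sample output is illustrative):

sysctl net.ipv4.ip_local_port_range
# e.g. net.ipv4.ip_local_port_range = 10000 65535
# PORT_BASE=6000 and MIRROR_PORT_BASE=7000 sit below this range, so segment ports will not collide with ephemeral ports.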
The main parameters to modify:


ARRAY_NAME="Greenplum Data Platform"
SEG_PREFIX=gpseg
PORT_BASE=6000            # starting port for primary segments
declare -a DATA_DIRECTORY=(/opt/greenplum/data1/primary /opt/greenplum/data2/primary)  # two primaries per host
MASTER_HOSTNAME=mdw
MASTER_DIRECTORY=/opt/greenplum/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MIRROR_PORT_BASE=7000     # starting port for mirror segments
declare -a MIRROR_DATA_DIRECTORY=(/opt/greenplum/data1/mirror /opt/greenplum/data2/mirror)  # two mirrors per host
DATABASE_NAME=gpdw        # default database created at init


4.2 Cluster initialization

4.2.1 Initialization command parameters

Run the script (-c gives the config file, -h the segment host file, -D turns on debug output):

gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /usr/local/greenplum-db/seg_host -D


4.2.2 Handling errors during execution

[gpadmin@mdw gpconfigs]$ gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /usr/local/greenplum-db/seg_host -D
...
/usr/local/greenplum-db/./bin/gpinitsystem: line 244: /tmp/cluster_tmp_file.8070: Permission denied
/bin/mv: cannot stat `/tmp/cluster_tmp_file.8070': Permission denied
...
20191218:20:22:57:008070 gpinitsystem:mdw:gpadmin-[FATAL]:-Unknown host sdw1: ping: icmp open socket: Operation not permitted
unknown host Script Exiting!

4.2.2.1 Handling the "Permission denied" error

gpssh -f /usr/local/greenplum-db/all_host -e 'chmod 777 /tmp'

4.2.2.2 Handling the "icmp open socket: Operation not permitted" error

gpssh -f /usr/local/greenplum-db/all_host -e 'chmod u+s /bin/ping'

4.2.2.3 Rolling back a failed run
If initialization fails partway, the log suggests running bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_* to back out the changes; just run that script. For example:


... 
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[FATAL]:-Unknown host gpzq-sh-mb: ping: unknown host gpzq-sh-mb
unknown host Script Exiting!
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Run command bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20191218_203938 to remove these changes
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-End Function BACKOUT_COMMAND
[gpadmin@mdw gpAdminLogs]$ ls
backout_gpinitsystem_gpadmin_20191218_203938  gpinitsystem_20191218.log
[gpadmin@mdw gpAdminLogs]$ bash backout_gpinitsystem_gpadmin_20191218_203938
Stopping Master instance
waiting for server to shut down.... done
server stopped
Removing Master log file
Removing Master lock files
Removing Master data directory files

If things are still not fully cleaned up afterwards, run the following and then reinstall:

pg_ctl -D /opt/greenplum/data/master/gpseg-1 stop
rm -f /tmp/.s.PGSQL.5432 /tmp/.s.PGSQL.5432.lock
rm -Rf /opt/greenplum/data/master/gpseg-1


4.2.2.4 Handling the "ping: unknown host gpzq-sh-mb" error
See:
http://note.youdao.com/noteshare?id=8a72fdf1ec13a1c79b2d795e406b3dd2&sub=313FE99D57C84F2EA498DB6D7B79C7D3
Edit /home/gpadmin/.gphostcache to contain:
[gpadmin@mdw ~]$ cat .gphostcache
mdw:mdw
sdw1:sdw1
sdw2:sdw2

4.3 Post-initialization steps

On success, initialization ends by printing "Greenplum Database instance successfully created".
The log is written to /home/gpadmin/gpAdminLogs/, named gpinitsystem_${install_date}.log.
The tail of the log looks like this:


...
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[WARN]:-*******************************************************
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[WARN]:-were generated during the array creation
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-Please review contents of log file
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20191218.log
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-To determine level of criticality
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-These messages could be from a previous run of the utility
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-that was called today!
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[WARN]:-*******************************************************
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-End Function SCAN_LOG
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-To complete the environment configuration, please
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/opt/greenplum/data/master/gpseg-1"
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-   or, use -d /opt/greenplum/data/master/gpseg-1 option for the Greenplum scripts
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-   Example gpstate -d /opt/greenplum/data/master/gpseg-1
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20191218.log
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-Review options for gpinitstandby
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-The Master /opt/greenplum/data/master/gpseg-1/pg_hba.conf post gpinitsystem
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-located in the /usr/local/greenplum-db/./docs directory
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20191218:20:45:51:013612 gpinitsystem:mdw:gpadmin-[INFO]:-End Main


Read the end of the log carefully; a few more steps remain.

4.3.1 Review the log contents

The log contains this hint:

Scan of log file indicates that some warnings or errors
were generated during the array creation
Please review contents of log file
/home/gpadmin/gpAdminLogs/gpinitsystem_20191218.log

Scan for warnings or errors:

 cat /home/gpadmin/gpAdminLogs/gpinitsystem_20191218.log | grep -E -i 'WARN|ERROR'

Adjust the cluster according to what the log reports so it performs at its best.

4.3.2 Set environment variables

Edit the gpadmin user's environment variables and add:

source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/opt/greenplum/data/master/gpseg-1

Beyond that, the following are commonly added as well:

export PGPORT=5432       # adjust to your setup
export PGUSER=gpadmin    # adjust to your setup
export PGDATABASE=gpdw   # adjust to your setup

For environment variable details see: https://gpdb.docs.pivotal.io/510/install_guide/env_var_ref.html

source /usr/local/greenplum-db/greenplum_path.sh was already added earlier, so here the steps are:

Edit .bash_profile

su - gpadmin
cat >> /home/gpadmin/.bash_profile << EOF
export MASTER_DATA_DIRECTORY=/opt/greenplum/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=gpdw
EOF

Edit .bashrc

cat >> /home/gpadmin/.bashrc << EOF
export MASTER_DATA_DIRECTORY=/opt/greenplum/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=gpdw
EOF

Distribute the environment files to the other nodes:

gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bash_profile  gpadmin@=:/home/gpadmin/.bash_profile
gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc
gpssh -f /usr/local/greenplum-db/all_host -e 'source /home/gpadmin/.bash_profile;source /home/gpadmin/.bashrc;'

4.3.3 To remove and reinstall, use gpdeletesystem

If, for whatever reason, the cluster needs to be removed and reinstalled after setup, use the gpdeletesystem utility.
For details see the official docs:
https://gpdb.docs.pivotal.io/6-2/utility_guide/ref/gpdeletesystem.html#topic1
Command:

gpdeletesystem -d /opt/greenplum/data/master/gpseg-1 -f

-d takes MASTER_DATA_DIRECTORY (the master's data directory); all master and segment data directories are removed.
-f forces deletion, terminating all processes first. Example:


[gpadmin@mdw ~]$ gpdeletesystem --help
[gpadmin@mdw ~]$ gpdeletesystem -d /opt/greenplum/data/master/gpseg-1 -f
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Checking for database dump files...
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Getting segment information...
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Greenplum Instance Deletion Parameters
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:---------------------------------------
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Greenplum Master hostname                  = localhost
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Greenplum Master data directory            = /opt/greenplum/data/master/gpseg-1
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Greenplum Master port                      = 5432
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Greenplum Force delete of dump files       = ON
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Batch size                                 = 32
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:---------------------------------------
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:- Segment Instance List
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:---------------------------------------
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Host:Datadir:Port
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-mdw:/opt/greenplum/data/master/gpseg-1:5432
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw1:/opt/greenplum/data1/primary/gpseg0:6000
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw2:/opt/greenplum/data1/mirror/gpseg0:7000
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw1:/opt/greenplum/data2/primary/gpseg1:6001
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw2:/opt/greenplum/data2/mirror/gpseg1:7001
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw2:/opt/greenplum/data1/primary/gpseg2:6000
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw1:/opt/greenplum/data1/mirror/gpseg2:7000
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw2:/opt/greenplum/data2/primary/gpseg3:6001
20191219:09:47:57:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-sdw1:/opt/greenplum/data2/mirror/gpseg3:7001

Continue with Greenplum instance deletion? Yy|Nn (default=N):
> y
20191219:09:48:01:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-FINAL WARNING, you are about to delete the Greenplum instance
20191219:09:48:01:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-on master host localhost.
20191219:09:48:01:020973 gpdeletesystem:mdw:gpadmin-[WARNING]:-There are database dump files, these will be DELETED if you continue!

Continue with Greenplum instance deletion? Yy|Nn (default=N):
> y
20191219:09:48:15:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Stopping database...
20191219:09:48:17:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Deleting tablespace directories...
20191219:09:48:17:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Waiting for worker threads to delete tablespace dirs...
20191219:09:48:17:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Deleting segments and removing data directories...
20191219:09:48:17:020973 gpdeletesystem:mdw:gpadmin-[INFO]:-Waiting for worker threads to complete...
20191219:09:48:18:020973 gpdeletesystem:mdw:gpadmin-[WARNING]:-Delete system completed but warnings were generated.

After deletion, adjust the initialization config file as needed and re-initialize:
vi /home/gpadmin/gpconfigs/gpinitsystem_config
gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /usr/local/greenplum-db/seg_host -D


4.3.4 Configure pg_hba.conf

Configure pg_hba.conf according to your access needs:
/opt/greenplum/data/master/gpseg-1/pg_hba.conf

For details see section 5.2.1 Configure pg_hba.conf below.

5 Post-install Configuration

5.1 Log in with psql and set a password

The general form of a psql login to GP is:

psql -h hostname -p port -d database -U user -W

-h takes the master or segment hostname
-p takes the master or segment port
-d takes the database name; -W forces a password prompt (psql does not take the password itself on the command line). These values can also be set through the user's environment variables, and the gpadmin OS user on the cluster itself needs no password.
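For instance, with the values used in this walkthrough:

psql -h mdw -p 5432 -d gpdw -U gpadmin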
Example session: log in with psql and set the gpadmin user's password:

[gpadmin@mdw gpconfigs]$ psql
psql (9.4.24)
Type "help" for help.

gpdw=# ALTER USER gpadmin WITH PASSWORD 'gpadmin';
ALTER ROLE
gpdw=# \q

5.1.1 Logging in to different nodes

Examples:

# Log in to the master node
[gpadmin@mdw gpconfigs]$ PGOPTIONS='-c gp_session_role=utility' psql -h mdw -p5432 -d postgres

# Log in to a segment; the segment port must be specified.
[gpadmin@mdw gpconfigs]$ PGOPTIONS='-c gp_session_role=utility' psql -h sdw1 -p6000 -d postgres

5.2 Client login

Configure pg_hba.conf
Configure postgresql.conf

5.2.1 Configure pg_hba.conf

For a configuration reference see: https://blog.csdn.net/yaoqiancuo3276/article/details/80404883


vi /opt/greenplum/data/master/gpseg-1/pg_hba.conf

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
# IPv4 local connections:
# IPv6 local connections:
local    all         gpadmin         ident
host     all         gpadmin         127.0.0.1/28    trust
host     all         gpadmin         172.28.25.204/32       trust
host     all         gpadmin         0.0.0.0/0   md5  # new rule: allow password login from any IP
host     all         gpadmin         ::1/128       trust
host     all         gpadmin         fe80::250:56ff:fe91:63fc/128       trust
local    replication gpadmin         ident
host     replication gpadmin         samenet       trust

5.2.2 Modify postgresql.conf

Set the listen address in postgresql.conf:
listen_addresses = '*'   # listen on any IP; gp6.0 sets listen_addresses = '*' by default

vi /opt/greenplum/data/master/gpseg-1/postgresql.conf

5.2.3 Reload the modified files

gpstop -u 

That completes the installation; verify by logging in from a client.
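A minimal check from a client machine (placeholder host; substitute your master's address or hostname):

psql -h <master-ip> -p 5432 -d gpdw -U gpadmin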

Extensions such as gpcc, dblink, and madlib can be installed afterwards.
