Table of Contents
Author: lianghc
Date: 2019-12-21
1. Pre-installation preparation
2. Installation steps overview
2.1 Conventional installation
2.2 Unconventional installation (install first, then tune parameters)
3. System parameter change checklist
3.1 /etc/hosts
3.2 /etc/sysctl.conf
3.2.1 Original /etc/sysctl.conf contents
3.3 /etc/security/limits.conf
3.4 /etc/security/limits.d/90-nproc.conf
3.5 iptables: disable the firewall
3.6 blockdev
3.7 scheduler
3.8 Check the character set
3.9 Switch the file system to xfs
4. Install Greenplum
4.1 Install the master
4.1.1 Prepare the installation files and directory
4.1.2 Run the installer
4.1.3 Prepare the cluster host files
4.1.4 Cluster trust and passwordless login
4.2 Copy the configuration files to the other nodes
4.3 Install the GP segment nodes (gpseginstall) (tool removed in GP6)
4.3.1 Install the segments and create the gpadmin user
4.3.2 Install the segments without creating the gpadmin user
4.4 Pre-installation cluster parameter checks
4.4.1 Cluster system parameter check
4.4.2 Cluster clock check
4.4.3 Cluster network performance test
4.4.4 Cluster disk I/O performance test
4.5 Cluster initialization
4.5.1 Create the master data directory
4.5.2 Create the segment (primary, mirror) data directories
4.5.3 Copy and edit the initialization configuration file
4.5.4 Initialize the cluster
4.5.5 Handling initialization failures
4.5.6 Rolling back a failed initialization
4.5.7 Installation complete
4.5.8 Edit the gpadmin user's .bash_profile
4.5.9 Data directory layout after a successful install
5. Post-installation configuration
5.1 Log in with psql and set a password
5.1.1 Logging in to different nodes
5.2 Client login to GP
5.2.1 Configure pg_hba.conf
5.2.2 Modify postgresql.conf
5.2.3 Reload the modified files
1. Download the installation media
Pivotal commercial edition: https://network.pivotal.io/products/pivotal-gpdb
Open-source edition: https://github.com/greenplum-db/gpdb/releases
2. Hardware and software notes
1. Download the installation files. This guide uses version 5.23, the last release of the 5.x line.
2. The installation runs on three virtual machines, planned as 1 master and 4 segments, with no standby master.
3. Machine configuration: two 2-core CPUs, 16 GB RAM, 50 GB disk.
Appendix:
Video tutorial: https://www.ixigua.com/i6740122076977299982
1. Modify the system parameters (shared memory, disk scheduler, network parameters, user resource limits, etc.)
2. Disable the firewall
3. Configure /etc/hosts
4. Unpack the installer (or install the master via rpm) and prepare host_all and host_seg
5. Create the installation directory
6. gpssh-exkeys: set up passwordless SSH between all nodes
7. gpseginstall: distribute the installation media to every node
8. gpcheck: verify the parameters and adjust anything that is off
1. Disable the firewall
2. Configure /etc/hosts
3. Create the installation directory
4. Unpack the installer (or install the master via rpm) and prepare host_all and host_seg
5. gpssh-exkeys: set up passwordless SSH between all nodes
6. gpseginstall: distribute the installation media to every node
7. gpcheck: verify the parameters and adjust anything that is off
8. Modify the system parameters (shared memory, disk scheduler, network parameters, user resource limits, etc.) (leave them unchanged at first, then adjust according to the verification results)
For the concrete parameter values, consult the official documentation.
"Greenplum 5.0 Best Practices", system parameters: https://yq.aliyun.com/articles/225816?spm=a2c4e.11155435.0.0.a9756e1eIiHSoH
[root@mdw ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.28.25.204 mdw
172.28.25.205 sdw1
172.28.25.206 sdw2
/etc/sysctl.conf kernel parameters; /opt/greenplum-db/./etc/gpcheck.cnf can be used as a reference
Official link: https://gpdb.docs.pivotal.io/5230/install_guide/prep_os_install_gpdb.html#topic3
[root@mdw ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
#
# Use '/sbin/sysctl -a' to list all possible parameters.
# Controls IP packet forwarding
# net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
# net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
# kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
# kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the default maxmimum size of a mesage queue
# kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
# kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
# kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
# kernel.shmall = 4294967296
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.accept_redirects=0
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
Empty the original parameter file and rewrite it with the content above. Parameter values vary between systems, so the correct approach is to compare each setting against the officially recommended values, adjust accordingly, and then push the file to every segment host with gpscp.
[root@mdw greenplum523]# echo > /etc/sysctl.conf
[root@mdw greenplum523]# cat /etc/sysctl.conf
[root@mdw greenplum523]# vi /etc/sysctl.conf
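The rewritten kernel parameters are not active until they are loaded. A minimal way to apply them immediately on the current host (as root); section 4.2 later runs the same command cluster-wide via gpssh:
sysctl -p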
[root@sdw1 ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
#
# Use '/sbin/sysctl -a' to list all possible parameters.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.accept_redirects=0
[root@mdw ~]# cat /etc/security/limits.conf
# Earlier content omitted ... append the following lines
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
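These limits only apply to new login sessions. A quick way to verify them after logging in again (the values should match the settings above):
ulimit -n    # open file limit, expect 65536
ulimit -u    # user process limit, expect 131072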
[root@mdw ~]# cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 131072
root soft nproc unlimited
[root@mdw ~]# service iptables status
iptables: Firewall is not running.
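Here the firewall is already stopped. If it is running, a minimal way to stop it now and keep it off across reboots on RHEL/CentOS 6:
service iptables stop
chkconfig iptables off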
[root@mdw ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 200M 0 part /boot
└─sda2 8:2 0 49.8G 0 part
├─VolGroup-lv_swap (dm-0) 253:0 0 4G 0 lvm [SWAP]
└─VolGroup-lv_root (dm-1) 253:1 0 45.8G 0 lvm /
[root@mdw ~]# /sbin/blockdev --setra 16384 /dev/sda
[root@mdw ~]# /sbin/blockdev --getra /dev/sda
16384
For example:
[root@mdw ~]# /sbin/blockdev --setra 16384 /dev/sr0
[root@mdw ~]# /sbin/blockdev --setra 16384 /dev/sda
[root@mdw ~]# /sbin/blockdev --getra /dev/sda
16384
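Note that blockdev --setra does not survive a reboot. A common way to persist it (a sketch, assuming the data disk is /dev/sda) is to append the command to /etc/rc.d/rc.local:
echo '/sbin/blockdev --setra 16384 /dev/sda' >> /etc/rc.d/rc.local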
[root@mdw ~]# more /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
[root@mdw ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@mdw ~]# echo deadline > /sys/block/sr0/queue/scheduler
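Writing to /sys is likewise lost on reboot. On RHEL/CentOS 6 the scheduler can instead be pinned at boot through the kernel command line (a sketch, assuming grubby is available):
grubby --update-kernel=ALL --args='elevator=deadline'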
echo $LANG
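A UTF-8 locale such as en_US.UTF-8 is expected here. Once the Greenplum utilities are installed, consistency across all hosts can be checked in one shot (assuming the host_all file from section 4.1.3):
gpssh -f host_all -e 'echo $LANG'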
# Check the file system type
df -T
# Switching to xfs
No change was made for this installation; the impact has not been assessed.
xfs和ext 对比:https://github.com/digoal/blog/blob/master/201610/20161002_02.md
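For reference, the official Greenplum documentation recommends mounting xfs data volumes with specific options. A sketch of an /etc/fstab entry, assuming a hypothetical dedicated data device /dev/sdb1 mounted at /data:
/dev/sdb1 /data xfs rw,nodev,noatime,nobarrier,inode64 0 0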
The installation has three steps:
1. Install the master
2. Install the segment nodes
3. Initialize the cluster (create the segments, mirrors, schemas, etc.)
# Unpack
mv greenplum-db-5.23.0-rhel6-x86_64.zip /opt/
cd /opt/
unzip greenplum-db-5.23.0-rhel6-x86_64.zip
mkdir greenplum523
./greenplum-db-5.23.0-rhel6-x86_64.bin
# Enter the install directory and set the environment variables
cd greenplum523/
source greenplum_path.sh
vi host_all
mdw
sdw1
sdw2
vi host_seg
sdw1
sdw2
[root@mdw greenplum523]# gpssh-exkeys -f host_all
[STEP 1 of 5] create local ID and authorize on local host
... /root/.ssh/id_rsa file exists ... key generation skipped
[STEP 2 of 5] keyscan all hosts and update known_hosts file
[STEP 3 of 5] authorize current user on remote hosts
... send to sdw1
... send to sdw2
[STEP 4 of 5] determine common authentication file content
[STEP 5 of 5] copy authentication files to all remote hosts
... finished key exchange with sdw1
... finished key exchange with sdw2
[INFO] completed successfully
[root@mdw greenplum523]#
Step 4.4 may be performed before step 4.2.
Method 1:
Copy the files manually with scp.
Method 2:
Use an SSH client with batch-command support, such as Xshell or MobaXterm.
Method 3:
Use gpscp, provided Greenplum is already installed. ("Installed" means Greenplum has been installed on the master and pushed to each node with gpseginstall; the cluster does not need to be initialized yet.) See sections 5.1 and 5.2 below for details.
# Run the following commands as root
gpscp -f host_seg /etc/security/limits.conf root@=:/etc/security/limits.conf
gpscp -f host_seg /etc/sysctl.conf root@=:/etc/sysctl.conf
gpscp -f host_seg /etc/security/limits.d/90-nproc.conf root@=:/etc/security/limits.d/90-nproc.conf
gpssh -f host_all -e 'sysctl -p'
# gpssh -f host_all -e 'chmod u+s /bin/ping'
[root@mdw greenplum523]# gpseginstall -f host_all -u gpadmin -p gpadmin
20191210:08:25:00:021082 gpseginstall:mdw:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /opt/greenplum523
binary_dir_location /opt
binary_dir_name greenplum523
20191210:08:25:00:021082 gpseginstall:mdw:root-[INFO]:-check cluster password access
20191210:08:25:01:021082 gpseginstall:mdw:root-[INFO]:-de-duplicate hostnames
20191210:08:25:01:021082 gpseginstall:mdw:root-[INFO]:-master hostname: mdw
20191210:08:25:01:021082 gpseginstall:mdw:root-[INFO]:-check for user gpadmin on cluster
20191210:08:25:01:021082 gpseginstall:mdw:root-[INFO]:-add user gpadmin on master
20191210:08:25:01:021082 gpseginstall:mdw:root-[INFO]:-add user gpadmin on cluster
20191210:08:25:02:021082 gpseginstall:mdw:root-[INFO]:-chown -R gpadmin:gpadmin /opt/greenplum-db
20191210:08:25:02:021082 gpseginstall:mdw:root-[INFO]:-chown -R gpadmin:gpadmin /opt/greenplum523
20191210:08:25:02:021082 gpseginstall:mdw:root-[INFO]:-rm -f /opt/greenplum523.tar; rm -f /opt/greenplum523.tar.gz
20191210:08:25:02:021082 gpseginstall:mdw:root-[INFO]:-cd /opt; tar cf greenplum523.tar greenplum523
20191210:08:25:04:021082 gpseginstall:mdw:root-[INFO]:-gzip /opt/greenplum523.tar
20191210:08:25:49:021082 gpseginstall:mdw:root-[INFO]:-remote command: mkdir -p /opt
20191210:08:25:49:021082 gpseginstall:mdw:root-[INFO]:-remote command: rm -rf /opt/greenplum523
20191210:08:25:49:021082 gpseginstall:mdw:root-[INFO]:-scp software to remote location
20191210:08:25:51:021082 gpseginstall:mdw:root-[INFO]:-remote command: gzip -f -d /opt/greenplum523.tar.gz
20191210:08:25:58:021082 gpseginstall:mdw:root-[INFO]:-md5 check on remote location
20191210:08:26:00:021082 gpseginstall:mdw:root-[INFO]:-remote command: cd /opt; tar xf greenplum523.tar
20191210:08:26:02:021082 gpseginstall:mdw:root-[INFO]:-remote command: rm -f /opt/greenplum523.tar
20191210:08:26:02:021082 gpseginstall:mdw:root-[INFO]:-remote command: cd /opt; rm -f greenplum-db; ln -fs greenplum523 greenplum-db
20191210:08:26:03:021082 gpseginstall:mdw:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /opt/greenplum-db
20191210:08:26:03:021082 gpseginstall:mdw:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /opt/greenplum523
20191210:08:26:03:021082 gpseginstall:mdw:root-[INFO]:-rm -f /opt/greenplum523.tar.gz
20191210:08:26:03:021082 gpseginstall:mdw:root-[INFO]:-Changing system passwords ...
20191210:08:26:04:021082 gpseginstall:mdw:root-[INFO]:-exchange ssh keys for user root
20191210:08:26:05:021082 gpseginstall:mdw:root-[INFO]:-exchange ssh keys for user gpadmin
20191210:08:26:07:021082 gpseginstall:mdw:root-[INFO]:-/opt/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
20191210:08:26:07:021082 gpseginstall:mdw:root-[INFO]:-remote command: . /opt/greenplum-db/./greenplum_path.sh; /opt/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
20191210:08:26:07:021082 gpseginstall:mdw:root-[INFO]:-version string on master: gpssh version 5.23.0 build commit:5eaaa5800e9f492683c3ce313e54d3db5afbce79
20191210:08:26:07:021082 gpseginstall:mdw:root-[INFO]:-remote command: . /opt/greenplum-db/./greenplum_path.sh; /opt/greenplum-db/./bin/gpssh --version
20191210:08:26:08:021082 gpseginstall:mdw:root-[INFO]:-remote command: . /opt/greenplum523/greenplum_path.sh; /opt/greenplum523/bin/gpssh --version
20191210:08:26:08:021082 gpseginstall:mdw:root-[INFO]:-SUCCESS -- Requested commands completed
[root@mdw greenplum523]# gpcheck -f host_all
20191211:13:07:48:026235 gpcheck:mdw:root-[INFO]:-dedupe hostnames
20191211:13:07:48:026235 gpcheck:mdw:root-[INFO]:-Detected platform: Generic Linux Cluster
20191211:13:07:48:026235 gpcheck:mdw:root-[INFO]:-generate data on servers
20191211:13:07:48:026235 gpcheck:mdw:root-[INFO]:-copy data files from servers
20191211:13:07:48:026235 gpcheck:mdw:root-[INFO]:-delete remote tmp files
20191211:13:07:49:026235 gpcheck:mdw:root-[INFO]:-Using gpcheck config file: /opt/greenplum-db/./etc/gpcheck.cnf
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw2): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw2): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw2): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw2): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw2): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw1): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw1): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw1): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw1): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20191211:13:07:49:026235 gpcheck:mdw:root-[ERROR]:-GPCHECK_ERROR host(sdw1): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20191211:13:07:49:026235 gpcheck:mdw:root-[INFO]:-gpcheck completing...
# Based on the errors reported, make the following adjustments
gpssh -f host_seg -e '/sbin/blockdev --setra 16384 /dev/sda'
gpssh -f host_seg -e 'echo deadline > /sys/block/sda/queue/scheduler'
gpssh -f host_seg -e 'echo deadline > /sys/block/sr0/queue/scheduler'
[root@mdw greenplum523]# gpcheck -f host_all
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-dedupe hostnames
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-Detected platform: Generic Linux Cluster
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-generate data on servers
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-copy data files from servers
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-delete remote tmp files
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-Using gpcheck config file: /opt/greenplum-db/./etc/gpcheck.cnf
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-GPCHECK_NORMAL
20191210:11:36:11:023984 gpcheck:mdw:root-[INFO]:-gpcheck completing...
[root@mdw greenplum523]# gpssh -f host_all -e 'date'
[ mdw] date
[ mdw] Wed Dec 11 13:06:47 CST 2019
[sdw1] date
[sdw1] Wed Dec 11 13:06:47 CST 2019
[sdw2] date
[sdw2] Wed Dec 11 13:06:47 CST 2019
If the clocks are out of sync, configure NTP.
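A minimal sketch of bringing the clocks into sync, assuming every host can reach a public NTP pool (substitute an internal NTP server if the cluster has no internet access):
gpssh -f host_all -e 'service ntpd stop; ntpdate pool.ntp.org; service ntpd start'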
[gpadmin@mdw gpAdminLogs]$ gpcheckperf -f /opt/greenplum523/host_seg -r n -d /tmp
/opt/greenplum-db/./bin/gpcheckperf -f /opt/greenplum523/host_seg -r n -d /tmp
-------------------
-- NETPERF TEST
-------------------
Authorized only. All activity will be monitored and reported
Authorized only. All activity will be monitored and reported
====================
== RESULT
====================
Netperf bisection bandwidth test
sdw1 -> sdw2 = 1068.660000
sdw2 -> sdw1 = 1067.970000
Summary:
sum = 2136.63 MB/sec
min = 1067.97 MB/sec
max = 1068.66 MB/sec
avg = 1068.32 MB/sec
median = 1068.66 MB/sec
[gpadmin@mdw gpAdminLogs]$ gpcheckperf -f /opt/greenplum523/host_seg -r ds -D -d /opt/greenplum523/data/primary/
/opt/greenplum-db/./bin/gpcheckperf -f /opt/greenplum523/host_seg -r ds -D -d /opt/greenplum523/data/primary/
--------------------
-- DISK WRITE TEST
--------------------
--------------------
-- DISK READ TEST
--------------------
--------------------
-- STREAM TEST
--------------------
====================
== RESULT
====================
disk write avg time (sec): 15.50
disk write tot bytes: 16074407936
disk write tot bandwidth (MB/s): 1030.50
disk write min bandwidth (MB/s): 411.87 [sdw1]
disk write max bandwidth (MB/s): 618.63 [sdw2]
-- per host bandwidth --
disk write bandwidth (MB/s): 411.87 [sdw1]
disk write bandwidth (MB/s): 618.63 [sdw2]
disk read avg time (sec): 23.21
disk read tot bytes: 16074407936
disk read tot bandwidth (MB/s): 665.78
disk read min bandwidth (MB/s): 303.20 [sdw1]
disk read max bandwidth (MB/s): 362.58 [sdw2]
-- per host bandwidth --
disk read bandwidth (MB/s): 303.20 [sdw1]
disk read bandwidth (MB/s): 362.58 [sdw2]
stream tot bandwidth (MB/s): 20182.30
stream min bandwidth (MB/s): 9968.64 [sdw1]
stream max bandwidth (MB/s): 10213.66 [sdw2]
-- per host bandwidth --
stream bandwidth (MB/s): 9968.64 [sdw1]
stream bandwidth (MB/s): 10213.66 [sdw2]
Video tutorial: https://www.ixigua.com/i6741194071894671879
mkdir -p /opt/greenplum523/data/master
chown gpadmin:gpadmin -R /opt/greenplum523
gpssh -f host_seg -e 'mkdir -p /opt/greenplum523/data/primary'
gpssh -f host_seg -e 'mkdir -p /opt/greenplum523/data1/primary'
gpssh -f host_seg -e 'mkdir -p /opt/greenplum523/data/mirror'
gpssh -f host_seg -e 'mkdir -p /opt/greenplum523/data1/mirror'
gpssh -f host_seg -e 'chown gpadmin:gpadmin -R /opt/greenplum523'
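The gpconfigs directory under the gpadmin home directory may not exist yet; create it first (as gpadmin):
mkdir -p /home/gpadmin/gpconfigs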
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config
4.5.3.1 Example configuration file
# FILE NAME: gpinitsystem_config
# Configuration file needed by the gpinitsystem
################################################
#### REQUIRED PARAMETERS
################################################
#### Name of this Greenplum system enclosed in quotes.
ARRAY_NAME="Greenplum Data Platform"
#### Naming convention for utility-generated data directories.
SEG_PREFIX=gpseg
#### Base number by which primary segment port numbers
#### are calculated.
PORT_BASE=6000
#### File system location(s) where primary segment data directories
#### will be created. The number of locations in the list dictate
#### the number of primary segments that will get created per
#### physical host (if multiple addresses for a host are listed in
#### the hostfile, the number of segments will be spread evenly across
#### the specified interface addresses).
declare -a DATA_DIRECTORY=(/opt/greenplum523/data/primary /opt/greenplum523/data1/primary)
#### OS-configured hostname or IP address of the master host.
MASTER_HOSTNAME=mdw
#### File system location where the master data directory
#### will be created.
MASTER_DIRECTORY=/opt/greenplum523/data/master
#### Port number for the master instance.
MASTER_PORT=5432
#### Shell utility used to connect to remote hosts.
TRUSTED_SHELL=ssh
#### Maximum log file segments between automatic WAL checkpoints.
CHECK_POINT_SEGMENTS=256
#### Default server-side character set encoding.
ENCODING=UNICODE
################################################
#### OPTIONAL MIRROR PARAMETERS
################################################
#### Base number by which mirror segment port numbers
#### are calculated.
MIRROR_PORT_BASE=7000
#### Base number by which primary file replication port
#### numbers are calculated.
REPLICATION_PORT_BASE=8000
#### Base number by which mirror file replication port
#### numbers are calculated.
MIRROR_REPLICATION_PORT_BASE=9000
#### File system location(s) where mirror segment data directories
#### will be created. The number of mirror locations must equal the
#### number of primary locations as specified in the
#### DATA_DIRECTORY parameter.
declare -a MIRROR_DATA_DIRECTORY=(/opt/greenplum523/data/mirror /opt/greenplum523/data1/mirror)
################################################
#### OTHER OPTIONAL PARAMETERS
################################################
#### Create a database of this name after initialization.
DATABASE_NAME=yjbdw
#### Specify the location of the host address file here instead of
#### with the the -h option of gpinitsystem.
#MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem
gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /opt/greenplum523/host_seg -D
4.5.4.1 Initialization must be run as the gpadmin user
[root@mdw greenplum523]# gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /opt/greenplum523/host_seg -D
20191211:13:16:14:026732 gpinitsystem:mdw:root-[INFO]:-Start Main
20191211:13:16:14:026732 gpinitsystem:mdw:root-[INFO]:-Command line options passed to utility = -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /opt/greenplum523/host_seg -D
20191211:13:16:14:026732 gpinitsystem:mdw:root-[INFO]:-Start Function CHK_GPDB_ID
20191211:13:16:14:026732 gpinitsystem:mdw:root-[WARN]:-File permission mismatch. The root owns the Greenplum Database installation directory.
20191211:13:16:14:026732 gpinitsystem:mdw:root-[WARN]:-You are currently logged in as gpadmin and may not have sufficient
20191211:13:16:14:026732 gpinitsystem:mdw:root-[WARN]:-permissions to run the Greenplum binaries and management utilities.
20191211:13:16:14:026732 gpinitsystem:mdw:root-[INFO]:-End Function CHK_GPDB_ID
20191211:13:16:14:026732 gpinitsystem:mdw:root-[INFO]:-Start Function CHK_PARAMS
20191211:13:16:14:026732 gpinitsystem:mdw:root-[INFO]:-Checking configuration parameters, please wait...
20191211:13:16:14:026732 gpinitsystem:mdw:root-[INFO]:-Start Function ERROR_EXIT
20191211:13:16:14:gpinitsystem:mdw:root-[FATAL]:-Unable to run this script as root Script Exiting!
[root@mdw greenplum523]# su - gpadmin
[gpadmin@mdw ~]$ gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /opt/greenplum523/host_seg -D
20191211:13:16:45:026946 gpinitsystem:mdw:gpadmin-[INFO]:-Start Main
....
4.5.5.1 Unknown host gjzq-sh-mb Script Exiting
#[earlier output omitted]
......
Continue with Greenplum creation Yy|Nn (default=N):
> y
20191210:17:00:46:005149 gpinitsystem:gjzq-sh-mb:gpadmin-[INFO]:-Building the Master instance database, please wait...
20191210:17:00:51:005149 gpinitsystem:gjzq-sh-mb:gpadmin-[INFO]:-Starting the Master in admin mode
20191210:17:00:54:gpinitsystem:gjzq-sh-mb:gpadmin-[FATAL]:-Unknown host gjzq-sh-mb Script Exiting!
20191210:17:00:54:005149 gpinitsystem:gjzq-sh-mb:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
20191210:17:00:54:005149 gpinitsystem:gjzq-sh-mb:gpadmin-[WARN]:-Run command /bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20191210_170039 to remove these changes
20191210:17:00:54:005149 gpinitsystem:gjzq-sh-mb:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
20191210:17:00:54:005149 gpinitsystem:gjzq-sh-mb:gpadmin-[INFO]:-End Function BACKOUT_COMMAND
Adjust according to the error message; this error is covered in detail in: 【greenplum】Unknown host xxx Script Exiting!
Then run the backout script ./backout_gpinitsystem_gpadmin_20191210_170039
4.5.5.2 /tmp/.s.PGSQL.6000.lock: Permission denied
#[earlier output omitted]
......
2019-12-11 13:27:06.082128 CST,,,p16542,th1196001056,,,,0,,,seg-1,,,,,"FATAL","42501","could not open lock file ""/tmp/.s.PGSQL.6000.lock"": Permission denied",,,,,,,,"CreateLockFile","miscinit.c",821,1 0x96118b postgres errstart + 0x1db
2 0x9760f0 postgres (miscinit.c:0)
3 0x9771d2 postgres CreateSocketLockFile + 0x42
4 0x6fbcb1 postgres StreamServerPort + 0x6a1
5 0x7e02d2 postgres PostmasterMain (postmaster.c:1375)
6 0x718367 postgres main (main.c:206)
7 0x39f381ed1d libc.so.6 __libc_start_main + 0xfd
8 0x4cb475 postgres + 0x4cb475
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:-End Function PARALLEL_WAIT
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:-End Function PARALLEL_COUNT
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function PARALLEL_SUMMARY_STATUS_REPORT
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:-Parallel process exit status
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as completed = 0
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as killed = 0
20191211:13:28:07:029068 gpinitsystem:mdw:gpadmin-[WARN]:-Total processes marked as failed = 4 <<<<<
Adjust according to the error message:
1. The /tmp directory on the segment hosts lacks the required permissions; grant them as root:
gpssh -f host_all -e 'chmod -R 777 /tmp'
2. Run the backout script ./backout_gpinitsystem_gpadmin_20191210_170039
4.5.5.3 No lock file /tmp/.s.PGSQL.5432.lock but process running on port 5432
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-No socket connection or lock file /tmp/.s.PGSQL.7001.lock found for port=7001
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-End Function GET_PG_PID_ACTIVE
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-End Function POSTGRES_PORT_CHK
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-End Function CREATE_GROUP_MIRROR_ARRAY
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-End Function CREATE_QE_ARRAY
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function CHK_QES
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-Checking Master host
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function CHK_DIR
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-End Function CHK_DIR
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function GET_PG_PID_ACTIVE
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[WARN]:-No lock file /tmp/.s.PGSQL.5432.lock but process running on port 5432
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-End Function GET_PG_PID_ACTIVE
20191211:13:40:48:002954 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function ERROR_EXIT
20191211:13:40:48:gpinitsystem:mdw:gpadmin-[FATAL]:-Found indication of postmaster process on port 5432 on Master host Script Exiting!
[gpadmin@mdw gpAdminLogs]$
Adjust according to the error message:
[gpadmin@mdw gpAdminLogs]$ ps -aux | grep 5432
gpadmin 3933 0.0 0.0 103316 864 pts/0 S+ 13:43 0:00 grep 5432
gpadmin 31033 0.0 4.4 387908 175372 pts/0 S 13:26 0:00 /opt/greenplum523/bin/postgres -D /opt/greenplum523/data/master/gpseg-1 -i -p 5432 -c gp_role=utility -M master --gp_dbid=1 --gp_contentid=-1 --gp_num_contents_in_cluster=0 -m
...[omitted]
[gpadmin@mdw gpAdminLogs]$ kill -9 31033
[gpadmin@mdw gpAdminLogs]$ ps -aux | grep 5432
gpadmin 3935 0.0 0.0 103316 864 pts/0 S+ 13:43 0:00 grep 5432
/home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_* is the cleanup script generated by the system.
Run this script as the gpadmin user.
If it is accidentally run as root, the script deletes itself on completion; in that case, manually copy the script content below and run it again as the gpadmin user.
[gpadmin@mdw ~]$ cat /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20191210_170039
if [ xmdw != x`/bin/hostname` ];then /bin/echo "[FATAL]:-Not on original master host mdw, backout script exiting!";exit 2;fi
/bin/echo "Stopping Master instance"
if [ -d /opt/greenplum523/data/master/gpseg-1 ]; then export LD_LIBRARY_PATH=/opt/greenplum-db/./lib:/opt/greenplum-db/./ext/python/lib:/opt/greenplum-db/./lib:/opt/greenplum-db/./ext/python/lib:;export PGPORT=5432; /opt/greenplum-db/./bin/pg_ctl -D /opt/greenplum523/data/master/gpseg-1 stop; fi
/bin/echo Removing Master log file
/bin/rm -f /opt/greenplum523/data/master/gpseg-1.log
/bin/echo "Removing Master lock files"
/bin/rm -f /tmp/.s.PGSQL.5432 /tmp/.s.PGSQL.5432.lock
/bin/echo Removing Master data directory files
if [ -d /opt/greenplum523/data/master/gpseg-1 ]; then /bin/rm -Rf /opt/greenplum523/data/master/gpseg-1; fi
/bin/rm -f /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20191210_170039
Run the backout script:
[gpadmin@mdw ~]$ cd /home/gpadmin/gpAdminLogs/
[gpadmin@mdw gpAdminLogs]$ ls
backout_gpinitsystem_gpadmin_20191210_170039 gpinitsystem_20191210.log
[gpadmin@mdw gpAdminLogs]$ bash backout_gpinitsystem_gpadmin_20191210_170039
Stopping Master instance
waiting for server to shut down.... done
server stopped
Removing Master log file
Removing Master lock files
Removing Master data directory files
#[earlier output omitted]
......
20191211:13:47:06:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Successfully started new Greenplum instance
20191211:13:47:06:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20191211:13:47:06:003940 gpinitsystem:mdw:gpadmin-[INFO]:-End Function START_QD_PRODUCTION
20191211:13:47:06:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function CREATE_DATABASE
20191211:13:47:08:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function ERROR_CHK
20191211:13:47:08:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Successfully completed create database yjbdw
20191211:13:47:08:003940 gpinitsystem:mdw:gpadmin-[INFO]:-End Function ERROR_CHK
20191211:13:47:08:003940 gpinitsystem:mdw:gpadmin-[INFO]:-End Function CREATE_DATABASE
20191211:13:47:08:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function SET_GP_USER_PW
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function ERROR_CHK
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Successfully completed update Greenplum superuser password
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-End Function ERROR_CHK
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-End Function SET_GP_USER_PW
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function SCAN_LOG
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[WARN]:-*******************************************************
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[WARN]:-were generated during the array creation
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Please review contents of log file
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20191211.log
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-To determine level of criticality
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-These messages could be from a previous run of the utility
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-that was called today!
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[WARN]:-*******************************************************
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-End Function SCAN_LOG
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-To complete the environment configuration, please
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/opt/greenplum523/data/master/gpseg-1"
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:- to access the Greenplum scripts for this instance:
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:- or, use -d /opt/greenplum523/data/master/gpseg-1 option for the Greenplum scripts
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:- Example gpstate -d /opt/greenplum523/data/master/gpseg-1
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20191211.log
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Review options for gpinitstandby
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-The Master /opt/greenplum523/data/master/gpseg-1/pg_hba.conf post gpinitsystem
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-located in the /opt/greenplum-db/./docs directory
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20191211:13:47:09:003940 gpinitsystem:mdw:gpadmin-[INFO]:-End Main
Append the GP environment settings to .bashrc and .bash_profile:
vi .bash_profile
vi .bashrc
# Append the following two lines
source /opt/greenplum523/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/opt/greenplum523/data/master/gpseg-1
# Distribute to the other nodes
gpscp -f /opt/greenplum523/host_seg /home/gpadmin/.bash_profile root@=:/home/gpadmin/.bash_profile
gpscp -f /opt/greenplum523/host_seg /home/gpadmin/.bashrc root@=:/home/gpadmin/.bashrc
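Reload the profile so the settings take effect in the current session:
source /home/gpadmin/.bash_profile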
[gpadmin@mdw gpseg-1]$ psql -d yjbdw
psql (8.3.23)
Type "help" for help.
yjbdw=# ALTER USER gpadmin WITH PASSWORD 'gpadmin';
ALTER ROLE
yjbdw=# \q
Parameter notes:
-p is followed by the master or segment port number
-h is followed by the corresponding master or segment hostname
-d is followed by the database name
Log in to the master node:
PGOPTIONS='-c gp_session_role=utility' psql -h mdw -p5432 -d postgres
Log in to a segment:
PGOPTIONS='-c gp_session_role=utility' psql -h sdw1 -p6000 -d postgres
Two files need to be configured: pg_hba.conf and postgresql.conf.
5.2.1. Configure pg_hba.conf
Configuration reference: https://blog.csdn.net/yaoqiancuo3276/article/details/80404883
vi /opt/greenplum523/data/master/gpseg-1/pg_hba.conf
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
# IPv4 local connections:
# IPv6 local connections:
local all gpadmin ident
host all gpadmin 127.0.0.1/28 trust
host all gpadmin 172.28.25.204/32 trust
host all gpadmin 0.0.0.0/0 md5 # new rule: allow password login from any IP
host all gpadmin ::1/128 trust
host all gpadmin fe80::250:56ff:fe91:63fc/128 trust
local replication gpadmin ident
host replication gpadmin samenet trust
5.2.2. Modify postgresql.conf
Set the listen address in postgresql.conf to:
listen_addresses = '*' # listen on all IPs
vi /opt/greenplum523/data/master/gpseg-1/postgresql.conf
5.2.3. Reload the modified files
gpstop -u
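gpstop -u makes the running system re-read pg_hba.conf and the reloadable postgresql.conf parameters without a restart; note that a change to listen_addresses only takes effect after a full restart (gpstop -r). Afterwards, a remote client should be able to connect with a password under the new rule, for example (using the master IP from section 3.1):
psql -h 172.28.25.204 -p 5432 -U gpadmin -d yjbdw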