Greenplum Deployment

Part 1: Preparation


1. This is a minimal OS install, so run the following on all machines:

yum install ed openssh-clients zip unzip perl bind-utils net-tools -y

2. Disable the firewall on all servers

#Temporarily stop the firewall
systemctl stop firewalld
#Permanently disable the firewall
systemctl disable firewalld
#Check firewall status
firewall-cmd --state

3. If the clocks are out of sync, check NTP first, then force a sync

date -s "2020-06-20 11:10:00"
hwclock -w   #write the system time to the hardware clock (the older `clock -w` also works)
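The decision "is the drift large enough to force a sync" can be sketched in shell. The epochs and the 60-second tolerance below are made-up example values, not output from a real NTP query; in practice compare against `ntpdate -q <server>` or `chronyc tracking`:

```shell
#!/bin/bash
# Sketch: decide whether a forced sync is needed by comparing the local
# epoch to a reference epoch. Both epochs and the tolerance are example
# values for illustration only.
clock_drift_exceeds() {
    local local_epoch=$1 reference_epoch=$2 tolerance=$3
    local drift=$(( local_epoch - reference_epoch ))
    [ "${drift#-}" -gt "$tolerance" ]    # ${drift#-} strips the sign
}

if clock_drift_exceeds 1592622700 1592622600 60; then
    verdict="force sync"
else
    verdict="clock OK"
fi
echo "$verdict"
```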

4. Mount the data disk on all machines

#Check the server's disk information
fdisk -l

#If a mount already exists
#unmount it first
umount /dev/sdb
#If you get "device is busy" (/data: device is busy), find and kill the holder
lsof /data
kill -9 2454   #example PID; use the one reported by lsof
#If that still fails, reboot to clear the stuck process
reboot

#If the /data directory does not exist
cd /
#create it
mkdir data
#Format the disk
mkfs.xfs -f /dev/sdb

#Mount (note: the fstab device must match what you formatted -- /dev/sdb here, not /dev/sdb1, unless you partitioned the disk first)
echo "/dev/sdb /data xfs rw,noatime,inode64,allocsize=16m 1 1" >>/etc/fstab
#Reload everything in /etc/fstab
mount -a
#If this errors, comment out the stale mount entries
vi /etc/fstab
#Verify the mount succeeded
df -h
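The mount step is less error-prone if the fstab line is generated from the same device variable that was formatted; a minimal sketch, using the device and mount point from this guide:

```shell
#!/bin/bash
# Sketch: build the fstab entry from one device variable so the formatted
# device and the mounted device cannot drift apart.
fstab_entry() {
    local device=$1 mountpoint=$2
    printf '%s %s xfs rw,noatime,inode64,allocsize=16m 1 1\n' "$device" "$mountpoint"
}

entry=$(fstab_entry /dev/sdb /data)
echo "$entry"
# In a real run: echo "$entry" >> /etc/fstab && mount -a
```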

5. Rename all machines

hostnamectl set-hostname mdw88
hostnamectl --static

6. Set up passwordless SSH as follows (all nodes)

#You may first clear out /root/.ssh/
cd /root/.ssh/
rm -rf *

7. Run the following on all nodes; just press Enter at every prompt

ssh-keygen -t rsa

8. Then run the following command on the master node

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

9. scp the file to all segment (data) nodes

scp ~/.ssh/authorized_keys [email protected]:~/.ssh/
scp ~/.ssh/authorized_keys [email protected]:~/.ssh/
scp ~/.ssh/authorized_keys [email protected]:~/.ssh/
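The three scp lines above can be generated from a host list. A dry-run sketch (the IPs are the ones used in this guide) that only prints the commands it would run:

```shell
#!/bin/bash
# Sketch: print the scp command for each segment host instead of
# repeating the line by hand. Remove the surrounding echo/print step to
# actually copy the file (or simply rely on gpssh-exkeys later, which
# automates key exchange).
HOSTS="10.214.138.133 10.214.138.134 10.214.138.135"

cmds=$(for host in $HOSTS; do
    echo "scp ~/.ssh/authorized_keys root@${host}:~/.ssh/"
done)
echo "$cmds"
```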


Part 2: Greenplum Database Installation


1. IP address plan (run on master)

vi /etc/hosts

10.214.138.132  mdw132
10.214.138.133  sdw133
10.214.138.134  sdw134
10.214.138.135  sdw135

2. Install Greenplum (run on master)

#Upload the installer to /home/gp_install on the master

mkdir /home/gp_install
cd /home/gp_install
#Grant permissions
chmod 777 /home/gp_install
#Upload ./greenplum-db-4.3.11.3-rhel5-x86_64.bin via WinSCP

3. Install GP locally and exchange keys (run as root on master)

#Run in the directory containing the installer
./greenplum-db-4.3.11.3-rhel5-x86_64.bin

Note: change the default install directory to /usr/local/greenplum-db
#Accept the defaults (yes) for everything else

#Load the environment variables
source /usr/local/greenplum-db/greenplum_path.sh

4. Configure the full server list (run on master)

Note: all scripts created with vi below live in /home/gp_install on the master,
and the commands must also be run from that directory on the master (mdw94).
cd /home/gp_install
vi hostfile_all
#Add the master and segment host names:
mdw94
sdw93
sdw92

5. Exchange keys (run on master)

cd /root/.ssh/
rm -rf *
cd /home/gp_install/
gpssh-exkeys -f hostfile_all

#Output like the following means success; you may be prompted for the other machines' passwords along the way
[STEP 1 of 5] create local ID and authorize on local host
[INFO] completed successfully

#If it keeps asking for passwords endlessly:
    #1. The multi-host trust set up earlier may be broken
    #2. The SSH key-exchange algorithms may not match; edit sshd_config (on all machines):
vi /etc/ssh/sshd_config
#Add
KexAlgorithms [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
#Restart sshd
service sshd restart
#After the change it is best to regenerate the keys and redo the multi-host trust: first rm -rf everything under /root/.ssh/

6. System parameter configuration (run on master)

Config 1
vi /etc/sysctl.conf

------------------------------------------------------
Note on kernel.shmmax:
64 GB RAM  -> shmmax 68719476736
128 GB RAM -> shmmax 137438953472
The rule of thumb is physical RAM * 0.9 * 1024^3 (the two example values
above are the full RAM in bytes; * 0.9 gives a slightly smaller, safer value).
For a 256 GB machine: 256 * 1024^3 * 0.9.
With more than 128 GB of RAM the default is generally fine.
------------------------------------------------------
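The RAM * 0.9 rule can be evaluated with integer shell arithmetic (multiply by 9, divide by 10); note the results come out about 10% below the full-RAM figures quoted above:

```shell
#!/bin/bash
# Sketch: kernel.shmmax = physical RAM * 0.9, in bytes, using integer
# arithmetic. The quoted 68719476736 / 137438953472 are the full 64 GB
# and 128 GB in bytes; the *0.9 rule gives slightly smaller values.
shmmax_bytes() {
    local ram_gb=$1
    echo $(( ram_gb * 1024 * 1024 * 1024 * 9 / 10 ))
}

echo "64 GB  -> $(shmmax_bytes 64)"
echo "128 GB -> $(shmmax_bytes 128)"
```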

#Comment out all existing content and add:

#system
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

#gp: 64 GB RAM -> shmmax 68719476736; 128 GB RAM -> shmmax 137438953472; physical RAM*0.9*1024*1024*1024
#xfs_mount_options = rw,noatime,inode64,allocsize=16m
kernel.shmmax = 137438953472
kernel.shmmni = 4096
kernel.shmall = 80000000000
kernel.sem = 250 5120000 100 20480
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2

#net.core.rmem_max = 2097152
#net.core.wmem_max = 2097152

net.core.rmem_default = 256960
net.core.rmem_max = 2097152
net.core.wmem_default = 256960
net.core.wmem_max = 2097152
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_window_scaling = 1

Config 2
vi /etc/security/limits.conf
#Add:

#gp
gpadmin soft nofile 65536
gpadmin hard nofile 65536
gpadmin soft nproc 131072
gpadmin hard nproc 131072

#Note: on RedHat 6.x/CentOS 6.x, also edit /etc/security/limits.d/90-nproc.conf and change 1024 to 131072.
#On RedHat 7.x/CentOS 7.x, also edit /etc/security/limits.d/20-nproc.conf and change 4096 to 131072.
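The limits.d change is a single substitution; a sketch exercised against a temporary copy of a typical 20-nproc.conf (the sample contents are assumed; the real file is /etc/security/limits.d/20-nproc.conf):

```shell
#!/bin/bash
# Sketch: bump the default nproc soft limit from 4096 to 131072 with
# sed, run here on a temp copy rather than the live file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
*          soft    nproc     4096
root       soft    nproc     unlimited
EOF

sed -i 's/4096/131072/' "$conf"
result=$(grep -c 131072 "$conf")
rm -f "$conf"
echo "lines updated: $result"
```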
Config 3
vi /etc/rc.d/rc.local

#Add:

#gp
/sbin/blockdev --setra 16384 /dev/sd*

#Also run this command separately once:
echo 1 > /proc/sys/vm/overcommit_memory
#(Note: sysctl.conf above sets vm.overcommit_memory = 2, which is the value Greenplum documentation recommends; verify which value you intend.)

Config 4
#CentOS 6.5/7.1 does not have this file; skip this step
vi /boot/grub/menu.lst
vi /boot/grub/menu.lst

#Result after the addition:
kernel /boot/vmlinuz-2.2.18-164.el5 ro root=LABEL=/ ro elevator=deadline rhgb quiet

#Note: the exact contents vary by system; just make sure elevator=deadline is added

Config 5
vi /etc/inittab

#After modification:
id:3:initdefault:

Config 6
vi /etc/ssh/sshd_config

#Find MaxStartups, uncomment it, and change it to:
MaxStartups 1000:30:5000

Config 7
vi /etc/sysconfig/selinux

#Change:
SELINUX=disabled
Push the master's configuration to the other node servers
vi cp.sh

#Add:
gpscp -f hostfile_all /etc/hosts =:/etc
gpscp -f hostfile_all /etc/sysctl.conf =:/etc
gpscp -f hostfile_all /etc/security/limits.conf =:/etc/security
gpscp -f hostfile_all /etc/rc.d/rc.local =:/etc/rc.d
gpscp -f hostfile_all /proc/sys/vm/overcommit_memory =:/proc/sys/vm
gpscp -f hostfile_all /etc/sysconfig/selinux =:/etc/sysconfig
gpscp -f hostfile_all /etc/inittab =:/etc
gpscp -f hostfile_all /etc/ssh/sshd_config         =:/etc/ssh/sshd_config
gpscp -f hostfile_all /etc/yum.repos.d/http.repo =:/etc/yum.repos.d/

#If gpscp cannot be found, load the environment variables first
source /usr/local/greenplum-db/greenplum_path.sh
#Run:
chmod +x cp.sh
./cp.sh

Verify the configuration (run on master)
#Note the directory: /home/gp_install

vi check
#Add:
/sbin/sysctl -p
/sbin/blockdev --setra 16384 /dev/sd*
blockdev --getra /dev/sd*
df -h
/etc/init.d/iptables stop
/etc/init.d/iptables status
chkconfig iptables off
chkconfig --list iptables
setenforce 0
getenforce
cat /etc/sysconfig/network
cat /etc/hosts
grep MaxStartups /etc/ssh/sshd_config
grep MaxSessions /etc/ssh/sshd_config

#Run
gpssh -f hostfile_all -v < check

7. Install the GP database on all hosts (run on master)
#Note: first check whether every machine already has a gpadmin account; if so, remove it with userdel -r gpadmin
gpseginstall -f hostfile_all -u gpadmin -p gpadmin

#If you hit the error: failed doing a test read of file: su gpadmin
#Fix:
chown gpadmin:gpadmin hostfile_all
chmod 777 hostfile_all

#Master storage
mkdir -p /data/master
mkdir -p /data/master/space_fastdisk

chown gpadmin /data/master
chown -R gpadmin:gpadmin /data  
chown -R gpadmin:gpadmin /data/master

#Standby-master storage (this deployment has no standby master (smdw); skip this step)
#gpssh -h smdw -e 'mkdir -p /home/data/master'
#gpssh -h smdw -e 'chown gpadmin /home/data/master'

#Segment storage
vi mkdir

#Add:
mkdir -p /data/data1/gp/p
mkdir -p /data/data1/gp/m
mkdir -p /data/data2/gp/p
mkdir -p /data/data2/gp/m
mkdir -p /data/data3/gp/p
mkdir -p /data/data3/gp/m
mkdir -p /data/data4/gp/p
mkdir -p /data/data4/gp/m

mkdir -p /data/data1/gp/p/space_fastdisk
mkdir -p /data/data1/gp/m/space_fastdisk
mkdir -p /data/data2/gp/p/space_fastdisk
mkdir -p /data/data2/gp/m/space_fastdisk
mkdir -p /data/data3/gp/p/space_fastdisk
mkdir -p /data/data3/gp/m/space_fastdisk
mkdir -p /data/data4/gp/p/space_fastdisk
mkdir -p /data/data4/gp/m/space_fastdisk

chown -R gpadmin:gpadmin /data/data1
chown -R gpadmin:gpadmin /data/data2
chown -R gpadmin:gpadmin /data/data3
chown -R gpadmin:gpadmin /data/data4
chown -R gpadmin:gpadmin /data/data1/gp
chown -R gpadmin:gpadmin /data/data2/gp
chown -R gpadmin:gpadmin /data/data3/gp
chown -R gpadmin:gpadmin /data/data4/gp

#Run
gpssh -f hostfile_all -v  < mkdir
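The repetitive mkdir/chown block above can be generated with a loop. A dry-run sketch that prints the same layout (/data/dataN/gp/{p,m}/space_fastdisk for N=1..4):

```shell
#!/bin/bash
# Sketch: emit the segment-storage mkdir/chown commands with a loop.
# mkdir -p also creates the parent /data/dataN/gp/{p,m} directories.
gen_mkdir_script() {
    local n role
    for n in 1 2 3 4; do
        for role in p m; do
            echo "mkdir -p /data/data${n}/gp/${role}/space_fastdisk"
        done
        echo "chown -R gpadmin:gpadmin /data/data${n}"
    done
}

gen_mkdir_script
```

Pipe the output through `gpssh -f hostfile_all` (or save it as the `mkdir` script above) to apply it on every host.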

8. Configure environment variables (run on master)

#Run the following steps as the gpadmin user on the master
Switch user: su - gpadmin

#Configure gpadmin's environment variables
vi /home/gpadmin/.bashrc

#Add:
export GPHOME=/usr/local/greenplum-db
source $GPHOME/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=postgres
#Load the variables
source /home/gpadmin/.bashrc

9. Initialize the GP database (run as gpadmin on master)

su - gpadmin
mkdir /home/gpadmin/gpconfigs
cd /home/gpadmin/gpconfigs

#Configure the hosts for the segment instances
vi hostfile_gpinitsystem

#Note: be sure to include the master server too, otherwise no data will be stored on it
#Add the node servers:
mdw132
sdw133
sdw134
sdw135

#Configure the disks and ports for the instances

1. Don't create too many instances; with 2 physical CPUs, 8 instances is plenty.
2. Each DATA_DIRECTORY entry corresponds to one primary instance.

vi gpinitsystem_config

#Add:
ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data/data1/gp/p /data/data2/gp/p /data/data3/gp/p /data/data4/gp/p)
MASTER_HOSTNAME=mdw94    #note: change to your master hostname
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=16
ENCODING=UTF8
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/data/data1/gp/m /data/data2/gp/m /data/data3/gp/m /data/data4/gp/m)
DATABASE_NAME=lte_mr
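The total primary-segment count gpinitsystem will create is simply (DATA_DIRECTORY entries) x (hosts in hostfile_gpinitsystem); a sketch with the values from the config above:

```shell
#!/bin/bash
# Sketch: primary segments = data directories per host x hosts.
# 4 directories and 4 hosts mirror the config above; when
# MIRROR_DATA_DIRECTORY is set, mirrors double the total instance count.
data_dirs=4
hosts=4
primaries=$(( data_dirs * hosts ))
echo "primary segments: $primaries"
```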

#Run the initialization

#[No standby master (smdw)]
gpinitsystem -c gpinitsystem_config -h hostfile_gpinitsystem

#[With a standby master (smdw)]
#gpinitsystem -c gpinitsystem_config -h hostfile_gpinitsystem -s smdw

If the output contains the following, the cluster was created successfully:
……
Greenplum Database instance successfully created
……

10. Remote access settings (run as gpadmin on master)

cd $MASTER_DATA_DIRECTORY
vi pg_hba.conf

#Add:
host all gpadmin 0.0.0.0/0 md5
host all dpi 0.0.0.0/0 md5

11. Set the GP database login password (run as gpadmin on master)

su - gpadmin
Log in to the database: psql -d lte_mr
>alter role gpadmin with password 'gpadmin_GP$';
Type \q to quit

12. GP database parameter configuration (run on master)

#Run the following as the gpadmin user on the master (add --skipvalidation to gpconfig if needed)
gp_vmem_protect_limit = total RAM / instance count * 1024 = (256*4)/8*1024 MB
total RAM = per-machine RAM x number of machines
instance count = number of data directories x 2 (primary + mirror) x number of machines
With 128 GB of RAM and 8 instances, the defaults are mostly fine as-is.
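Evaluating the formula as stated (a sketch; the cluster sizes are the ones quoted, and note the command actually run below uses a more conservative 60000MB):

```shell
#!/bin/bash
# Sketch: gp_vmem_protect_limit per the formula above,
# total RAM (GB) / instance count * 1024 MB.
total_ram_gb=$(( 256 * 4 ))   # 256 GB per machine x 4 machines
instances=8
limit_mb=$(( total_ram_gb / instances * 1024 ))
echo "gp_vmem_protect_limit = ${limit_mb}MB"
```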

gpconfig -c gp_vmem_protect_limit -v 60000MB
gpconfig -c max_statement_mem -v 21333MB
gpconfig -c statement_mem -v 2048MB
gpconfig -c filerep_socket_timeout -v 350  
gpconfig -c gp_fts_probe_timeout -v 60
gpconfig -c max_connections -v 1000 -m 200
gpconfig -c max_appendonly_tables -v 50000 -m 50000
 
gpconfig -c gp_fts_probe_threadcount -v 90 -m 90 
gpconfig -c gp_fts_probe_interval -v 120 -m 120
gpconfig -c gp_fts_probe_retries -v 10 -m 10   
gpconfig -c gp_fts_probe_timeout -v 120 -m 120
gpconfig -c gp_segment_connect_timeout -v 600

gpconfig -c vacuum_freeze_min_age -v 999999999 -m 999999999
gpconfig -c autovacuum_freeze_max_age -v 2000000000 -m 2000000000 

#The commands above were already run; do not run the block below (kept for reference only)
gpconfig -c gp_vmem_protect_limit -v 32000MB
gpconfig -c max_statement_mem -v 21333MB
gpconfig -c statement_mem -v 2048MB
gpconfig -c filerep_socket_timeout -v 350 
gpconfig -c gp_fts_probe_timeout -v 60
gpconfig -c max_connections -v 1000 -m 200
gpconfig -c max_appendonly_tables -v 50000 -m 50000

gpconfig -c gp_fts_probe_threadcount -v 90 -m 90 
gpconfig -c gp_fts_probe_interval -v 120 -m 120
gpconfig -c gp_fts_probe_retries -v 10 -m 10
gpconfig -c gp_fts_probe_timeout -v 120 -m 120
gpconfig -c gp_segment_connect_timeout -v 600
Statements to run after installing CDH:
gpconfig -c gp_hadoop_target_version -v "cdh5"
gpconfig -c gp_hadoop_home -v "'/opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/hadoop/client'"

#gp_segment_connect_timeout defaults to 180
#--gpconfig -c gp_segment_connect_timeout -v 600
#--gp_segment_connect_timeout  30min
#--gp_interconnect_setup_timeout  4min

gpconfig -c vacuum_freeze_min_age -v 999999999 -m 999999999
gpconfig -c autovacuum_freeze_max_age -v 2000000000 -m 2000000000

13. Restart the GP database (run on master)

Switch user: su - gpadmin
Stop the database: gpstop -M fast
Start the database: gpstart -a
Check node status: gpstate -m

14. Performance verification (run on master after a successful install)

#Network performance check (management NIC, eth0)

vi hostfile_gpchecknet_eth0
#Add
mdw132
sdw133
sdw134

#Test
gpcheckperf -f hostfile_gpchecknet_eth0 -r N -d /tmp > eth0.out
cat eth0.out
 
 
/usr/local/greenplum-db/bin/gpcheckperf -f hostfile_gpchecknet_eth0 -r N -d /tmp

********************************************
--  NETPERF TEST
-------------------

====================
==  RESULT
====================
Netperf bisection bandwidth test
mdw96 -> sdw97 = 1107.060000
sdw98 -> sdw99 = 1109.530000
sdw97 -> mdw96 = 1131.740000
sdw99 -> sdw98 = 1130.610000

Summary:
sum = 4478.94 MB/sec
min = 1107.06 MB/sec
max = 1131.74 MB/sec
avg = 1119.73 MB/sec
median = 1130.61 MB/sec
#Disk performance check
#[-S 256GB] should be twice the RAM size
gpcheckperf -f hostfile_gpchecknet_eth0 -r ds -D -v -S 128GB -d /data1/gp/p -d /data2/gp/p -d /data1/gp/m -d /data2/gp/m>disk.out

#Test output
********************************************
[Error] unable to make gpcheckperf directory.
    command failed: rm -rf  /data1/gp/p/gpcheckperf_$USER /data2/gp/p/gpcheckperf_$USER /data1/gp/m/gpcheckperf_$USER /data2/gp/m/gpcheckperf_$USER ; mkdir -p  /data1/gp/p/gpcheckperf_$USER /data2/gp/p/gpcheckperf_$USER /data1/gp/m/gpcheckperf_$USER /data2/gp/m/gpcheckperf_$USER
#Likely cause: the directories created earlier live under /data (e.g. /data/data1/gp/p), while the -d paths here use /data1/... -- if those paths do not exist, or are not writable by the user running gpcheckperf, the mkdir fails



cat disk.out

#Test output
********************************************

/usr/local/greenplum-db/bin/gpcheckperf -f hostfile_gpchecknet_eth0 -r ds -D -v -S 512GB -d /data1/gp/p -d /data2/gp/p -d /data1/gp/m -d /data2/gp/m
--------------------
  SETUP
--------------------
[Info] verify python interpreter exists
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 'python -c print'
[Info] making gpcheckperf directory on all hosts ... 
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 'rm -rf  /data1/gp/p/gpcheckperf_$USER /data2/gp/p/gpcheckperf_$USER /data1/gp/m/gpcheckperf_$USER /data2/gp/m/gpcheckperf_$USER ; mkdir -p  /data1/gp/p/gpcheckperf_$USER /data2/gp/p/gpcheckperf_$USER /data1/gp/m/gpcheckperf_$USER /data2/gp/m/gpcheckperf_$USER'
[Info] copy local /usr/local/greenplum-db/bin/lib/multidd to remote /data1/gp/p/gpcheckperf_$USER/multidd
[Info] /usr/local/greenplum-db/bin/gpscp -f hostfile_gpchecknet_eth0 /usr/local/greenplum-db/bin/lib/multidd =:/data1/gp/p/gpcheckperf_$USER/multidd
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 'chmod a+rx /data1/gp/p/gpcheckperf_$USER/multidd'

--------------------
--  DISK WRITE TEST
--------------------
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 'time -p /data1/gp/p/gpcheckperf_$USER/multidd -i /dev/zero -o /data1/gp/p/gpcheckperf_$USER/ddfile -i /dev/zero -o /data2/gp/p/gpcheckperf_$USER/ddfile -i /dev/zero -o /data1/gp/m/gpcheckperf_$USER/ddfile -i /dev/zero -o /data2/gp/m/gpcheckperf_$USER/ddfile -B 32768 -S 137438953472'

--------------------
--  DISK READ TEST
--------------------
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 'time -p /data1/gp/p/gpcheckperf_$USER/multidd -o /dev/null -i /data1/gp/p/gpcheckperf_$USER/ddfile -o /dev/null -i /data2/gp/p/gpcheckperf_$USER/ddfile -o /dev/null -i /data1/gp/m/gpcheckperf_$USER/ddfile -o /dev/null -i /data2/gp/m/gpcheckperf_$USER/ddfile -B 32768 -S 137438953472'

--------------------
--  STREAM TEST
--------------------
[Info] copy local /usr/local/greenplum-db/bin/lib/stream to remote /data1/gp/p/gpcheckperf_$USER/stream
[Info] /usr/local/greenplum-db/bin/gpscp -f hostfile_gpchecknet_eth0 /usr/local/greenplum-db/bin/lib/stream =:/data1/gp/p/gpcheckperf_$USER/stream
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 'chmod a+rx /data1/gp/p/gpcheckperf_$USER/stream'
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 /data1/gp/p/gpcheckperf_$USER/stream
--------------------
  TEARDOWN
--------------------
[Info] /usr/local/greenplum-db/bin/gpssh -f hostfile_gpchecknet_eth0 'rm -rf  /data1/gp/p/gpcheckperf_$USER /data2/gp/p/gpcheckperf_$USER /data1/gp/m/gpcheckperf_$USER /data2/gp/m/gpcheckperf_$USER'

====================
==  RESULT
====================

 disk write avg time (sec): 539.92
 disk write tot bytes: 2199023255552
 disk write tot bandwidth (MB/s): 3887.06
 disk write min bandwidth (MB/s): 926.78 [mdw96]
 disk write max bandwidth (MB/s): 989.50 [sdw98]
 -- per host bandwidth --
disk write bandwidth (MB/s): 981.32 [sdw97]
disk write bandwidth (MB/s): 926.78 [mdw96]
disk write bandwidth (MB/s): 989.50 [sdw98]
disk write bandwidth (MB/s): 989.47 [sdw99]


 disk read avg time (sec): 560.87
 disk read tot bytes: 2199023255552
 disk read tot bandwidth (MB/s): 3742.41
 disk read min bandwidth (MB/s): 888.61 [mdw96]
 disk read max bandwidth (MB/s): 954.62 [sdw98]
 -- per host bandwidth --
disk read bandwidth (MB/s): 949.93 [sdw97]
disk read bandwidth (MB/s): 888.61 [mdw96]
disk read bandwidth (MB/s): 954.62 [sdw98]
disk read bandwidth (MB/s): 949.25 [sdw99]


 stream tot bandwidth (MB/s): 35329.97
 stream min bandwidth (MB/s): 7966.39 [sdw99]
 stream max bandwidth (MB/s): 9395.71 [sdw98]
 -- per host bandwidth --
stream bandwidth (MB/s): 8641.37 [sdw97]
stream bandwidth (MB/s): 9326.50 [mdw96]
stream bandwidth (MB/s): 9395.71 [sdw98]
stream bandwidth (MB/s): 7966.39 [sdw99]
