1. Check node status
[omm@openGauss2 local]$ gs_om -t status --detail  # check the cluster status from the local node
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
-------------------------------------------------------------------------------------------------
1 openGauss1 192.168.50.32 15400 6001 /opt/huawei/install/data/dn P Primary Normal
2 openGauss2 192.168.50.126 15400 6002 /opt/huawei/install/data/dn S Standby Normal
[omm@openGauss2 local]$ gs_om -t status -h openGauss1 --detail  # check the status of a remote node
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
-------------------------------------------------------------------------------------------------
1 openGauss1 192.168.50.32 15400 6001 /opt/huawei/install/data/dn P Primary Normal
2 openGauss2 192.168.50.126 15400 6002 /opt/huawei/install/data/dn S Standby Normal
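Replication health can also be cross-checked from inside the database. A minimal sketch using pg_stat_replication (inherited from PostgreSQL), run on the current primary via gsql:
-- one row per connected standby; an empty result on the primary means no standby is streaming
SELECT * FROM pg_stat_replication;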
2. Start and stop openGauss
[omm@openGauss1 conf]$ gs_om -t start
Starting cluster.
=========================================
[SUCCESS] openGauss1
2022-07-20 13:57:08.343 62d809b2.1 [unknown] 139962140871744 [unknown] 0 dn_6001_6002 01000 0 [BACKEND] WARNING: could not create any HA TCP/IP sockets
2022-07-20 13:57:08.344 62d809b2.1 [unknown] 139962140871744 [unknown] 0 dn_6001_6002 01000 0 [BACKEND] WARNING: Failed to initialize the memory protect for g_instance.attr.attr_storage.cstore_buffers (1024 Mbytes) or shared memory (3300 Mbytes) is larger.
[SUCCESS] openGauss2
2022-07-20 13:59:13.222 62d80a30.1 [unknown] 140718970952768 [unknown] 0 dn_6001_6002 01000 0 [BACKEND] WARNING: could not create any HA TCP/IP sockets
2022-07-20 13:59:13.283 62d80a30.1 [unknown] 140718970952768 [unknown] 0 dn_6001_6002 01000 0 [BACKEND] WARNING: Failed to initialize the memory protect for g_instance.attr.attr_storage.cstore_buffers (1024 Mbytes) or shared memory (3300 Mbytes) is larger.
=========================================
Successfully started.
[omm@openGauss1 conf]$
[omm@openGauss1 conf]$ gs_om -t stop
Stopping cluster.
=========================================
Successfully stopped cluster.
=========================================
End stop cluster.
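To start or stop a single instance rather than the whole cluster, gs_ctl can be pointed at the data directory. A sketch, assuming the pg_ctl-style stop modes and the -M role option supported by gs_ctl; verify the exact flags against your version:
gs_ctl stop -D /opt/huawei/install/data/dn -m fast        # stop only this instance
gs_ctl start -D /opt/huawei/install/data/dn               # start it again
gs_ctl start -D /opt/huawei/install/data/dn -M standby    # on a standby node, start it back in the standby role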
3. Primary/standby switchover
[omm@openGauss2 local]$ gs_om -t status -h openGauss1 --detail
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
-------------------------------------------------------------------------------------------------
1 openGauss1 192.168.50.32 15400 6001 /opt/huawei/install/data/dn P Primary Normal
2 openGauss2 192.168.50.126 15400 6002 /opt/huawei/install/data/dn S Standby Normal
[omm@openGauss2 local]$ hostname
openGauss2
[omm@openGauss2 local]$ gs_ctl switchover -D /opt/huawei/install/data/dn  # switchover command; -D specifies the data directory
[2022-07-20 14:11:27.287][162666][][gs_ctl]: gs_ctl switchover ,datadir is /opt/huawei/install/data/dn
[2022-07-20 14:11:27.287][162666][][gs_ctl]: switchover term (1)
[2022-07-20 14:11:27.311][162666][][gs_ctl]: waiting for server to switchover...............
[2022-07-20 14:11:41.461][162666][][gs_ctl]: done
[2022-07-20 14:11:41.461][162666][][gs_ctl]: switchover completed (/opt/huawei/install/data/dn)
[omm@openGauss2 local]$ gs_om -t status -h openGauss1 --detail
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
-------------------------------------------------------------------------------------------------
1 openGauss1 192.168.50.32 15400 6001 /opt/huawei/install/data/dn P Standby Normal
2 openGauss2 192.168.50.126 15400 6002 /opt/huawei/install/data/dn S Primary Normal
[omm@openGauss2 local]$
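After a switchover the roles recorded in the static configuration are stale; the openGauss OM documentation suggests refreshing it so the new roles persist across restarts (command name as documented, verify for your version):
gs_om -t refreshconf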
4. Check instance status
[omm@openGauss1 conf]$ gs_check -U omm -i CheckClusterState
Parsing the check items config file successfully
Distribute the context file to remote hosts successfully
Start to health check for the cluster. Total Items:1 Nodes:2
Checking... [=========================] 1/1
Start to analysis the check result
CheckClusterState...........................OK
The item run on 2 nodes. success: 2
Analysis the check result successfully
Success. All check items run completed. Total:1 Success:1
For more information please refer to /opt/huawei/install/om/script/gspylib/inspection/output/CheckReport_2022072051473279604.tar.gz
[omm@openGauss1 conf]$
5. Check locks
[omm@openGauss1 conf]$ gsql -d postgres -p 15400
openGauss=# select * from pg_locks;
 locktype   | database | relation | page | tuple | bucket | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid             | sessionid       | mode            | granted | fastpath | locktag           | global_sessionid
------------+----------+----------+------+-------+--------+------------+---------------+---------+-------+----------+--------------------+-----------------+-----------------+-----------------+---------+----------+-------------------+------------------
 relation   |    15563 |    12010 |      |       |        |            |               |         |       |          | 2/23               | 139957245310720 | 139957245310720 | AccessShareLock | t       | t        | 3ccb:2eea:0:0:0:0 | 0:0#0
 virtualxid |          |          |      |       |        | 2/23       |               |         |       |          | 2/23               | 139957245310720 | 139957245310720 | ExclusiveLock   | t       | t        | 2:17:0:0:0:7      | 0:0#0
 virtualxid |          |          |      |       |        | 1/1        |               |         |       |          | 1/0                | 139957034346240 |                 | ExclusiveLock   | t       | t        | 1:1:0:0:0:7       | 0:0#0
(3 rows)
openGauss=# SELECT * FROM pg_thread_wait_status WHERE wait_status = 'acquire lock'; -- query sessions that are waiting for a lock
node_name | db_name | thread_name | query_id | tid | sessionid | lwtid | psessionid | tlevel | smpid | wait_status | wait_event | locktag | lockmode | block_sessionid | global_sessionid
-----------+---------+-------------+----------+-----+-----------+-------+------------+--------+-------+-------------+------------+---------+----------+-----------------+------------------
(0 rows)
openGauss=#
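To see which statements are behind ungranted lock requests, pg_locks can be joined to pg_stat_activity on pid. A minimal sketch using standard catalog columns:
-- list ungranted lock requests together with the SQL text of the requesting session
SELECT l.pid, l.locktype, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;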
6. Query current database activity
openGauss=# SELECT count(*) FROM pg_stat_activity;
count
-------
1
(1 row)
openGauss=# SELECT backend_start,xact_start,query_start,state_change FROM pg_stat_activity;
backend_start | xact_start | query_start | state_change
-------------------------------+-------------------------------+-------------------------------+-------------------------------
2022-07-20 14:21:15.740564+00 | 2022-07-20 14:31:03.692873+00 | 2022-07-20 14:31:03.692873+00 | 2022-07-20 14:31:03.692876+00
(1 row)
openGauss=# SELECT * FROM pv_session_memory_detail() ORDER BY usedsize desc limit 10;
sessid | threadid | contextname | level | parent | totalsize | freesize | usedsize
--------+----------+-------------+-------+--------+-----------+----------+----------
(0 rows)
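The empty result above is possibly because memory protection could not be initialized at startup (see the warnings in step 2), which leaves the per-session memory views empty; treat that as an assumption rather than a diagnosis. A more generally useful activity query lists non-idle sessions and how long their current statement has been running, using standard pg_stat_activity columns:
-- non-idle sessions ordered by how long the current statement has been running
SELECT pid, usename, state, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;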
7. View column information of a table or view
openGauss=# \d+ pg_stat_activity
View "pg_catalog.pg_stat_activity"
Column | Type | Modifiers | Storage | Description
------------------+--------------------------+-----------+----------+-------------
datid | oid | | plain |
datname | name | | plain |
pid | bigint | | plain |
sessionid | bigint | | plain |
usesysid | oid | | plain |
usename | name | | plain |
application_name | text | | extended |
client_addr | inet | | main |
client_hostname | text | | extended |
client_port | integer | | plain |
backend_start | timestamp with time zone | | plain |
xact_start | timestamp with time zone | | plain |
query_start | timestamp with time zone | | plain |
state_change | timestamp with time zone | | plain |
waiting | boolean | | plain |
enqueue | text | | extended |
state | text | | extended |
resource_pool | name | | plain |
query_id | bigint | | plain |
query | text | | extended |
connection_info | text | | extended |
unique_sql_id | bigint | | plain |
trace_id | text | | extended |
View definition:
SELECT s.datid, d.datname, s.pid, s.sessionid, s.usesysid,
u.rolname AS usename, s.application_name, s.client_addr, s.client_hostname,
s.client_port, s.backend_start, s.xact_start, s.query_start, s.state_change,
s.waiting, s.enqueue, s.state,
CASE
WHEN s.srespool = 'unknown'::name THEN u.rolrespool
ELSE s.srespool
END AS resource_pool,
s.query_id, s.query, s.connection_info, s.unique_sql_id, s.trace_id
FROM pg_database d,
    pg_stat_get_activity_with_conninfo(NULL::bigint) s(datid, pid, sessionid, usesysid, application_name, state, query, waiting, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port, enqueue, query_id, connection_info, srespool, global_sessionid, unique_sql_id, trace_id),
pg_authid u
WHERE s.datid = d.oid AND s.usesysid = u.oid;
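The same column information is also available through the SQL-standard information_schema, which can be handier in scripts than parsing \d+ output. A sketch:
-- column names and types of a catalog view, via information_schema
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'pg_catalog' AND table_name = 'pg_stat_activity'
ORDER BY ordinal_position;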
8. Check the database version
openGauss=# SELECT version();
version
------------------------------------------------------------------------------------------------------------------------------------------------------
(openGauss 3.0.0 build 02c14696) compiled at 2022-04-01 18:12:19 commit 0 last mr on x86_64-unknown-linux-gnu, compiled by g++ (GCC) 7.3.0, 64-bit
(1 row)
openGauss=#
9. Check table size
openGauss=# SELECT pg_table_size('pg_database');
pg_table_size
---------------
40960
(1 row)
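pg_table_size reports the table itself (without indexes) in bytes. For a human-readable figure that also covers indexes, or for a whole database, the standard size functions can be combined (a sketch):
-- total size of the relation including indexes, formatted for reading
SELECT pg_size_pretty(pg_total_relation_size('pg_database'));
-- size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));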
10. Repair a standby in "Standby Need repair(WAL)" state caused by an abrupt shutdown or network problems
[omm@openGauss1 cipher]$ gs_om -t status --detail
[ Cluster State ]
cluster_state : Degraded
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
-------------------------------------------------------------------------------------------------
1 openGauss1 192.168.50.32 15400 6001 /opt/huawei/install/data/dn P Primary Normal
2 openGauss2 192.168.50.126 15400 6002 /opt/huawei/install/data/dn S Standby Need repair(WAL)
[omm@openGauss2 log]$ gs_ctl build -D /opt/huawei/install/data/dn  # -D specifies the data directory
[2022-07-21 02:36:04.082][58140][][gs_ctl]: gs_ctl incremental build ,datadir is /opt/huawei/install/data/dn
[2022-07-21 02:36:04.082][58140][][gs_ctl]: stop failed, killing gaussdb by force ...
[2022-07-21 02:36:04.082][58140][][gs_ctl]: command [ps c -eo pid,euid,cmd | grep gaussdb | grep -v grep | awk '{if($2 == curuid && $1!="-n") print "/proc/"$1"/cwd"}' curuid=`id -u`| xargs ls -l | awk '{if ($NF=="/opt/huawei/install/data/dn") print $(NF-2)}' | awk -F/ '{print $3 }' | xargs kill -9 >/dev/null 2>&1 ] path: [/opt/huawei/install/data/dn]
[2022-07-21 02:36:04.090][58140][][gs_ctl]: server stopped
[2022-07-21 02:36:04.091][58140][][gs_ctl]: fopen build pid file "/opt/huawei/install/data/dn/gs_build.pid" success
[2022-07-21 02:36:04.091][58140][][gs_ctl]: fprintf build pid file "/opt/huawei/install/data/dn/gs_build.pid" success
[2022-07-21 02:36:04.091][58140][][gs_ctl]: fsync build pid file "/opt/huawei/install/data/dn/gs_build.pid" success
[2022-07-21 02:36:04.092][58140][dn_6001_6002][gs_ctl]: build connection to 192.168.50.32
................
[2022-07-21 02:37:40.539][59990][dn_6001_6002][gs_ctl]: done
[2022-07-21 02:37:40.539][59990][dn_6001_6002][gs_ctl]: server started (/opt/huawei/install/data/dn)
[2022-07-21 02:37:40.539][59990][dn_6001_6002][gs_ctl]: fopen build pid file "/opt/huawei/install/data/dn/gs_build.pid" success
[2022-07-21 02:37:40.539][59990][dn_6001_6002][gs_ctl]: fprintf build pid file "/opt/huawei/install/data/dn/gs_build.pid" success
[2022-07-21 02:37:40.543][59990][dn_6001_6002][gs_ctl]: fsync build pid file "/opt/huawei/install/data/dn/gs_build.pid" success
[omm@openGauss2 log]$ gs_om -t status --detail
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
-------------------------------------------------------------------------------------------------
1 openGauss1 192.168.50.32 15400 6001 /opt/huawei/install/data/dn P Primary Normal
2 openGauss2 192.168.50.126 15400 6002 /opt/huawei/install/data/dn S Standby Normal
[omm@openGauss2 log]$
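gs_ctl build defaults to an incremental build, as the log above shows. If the incremental build keeps failing, a full rebuild of the standby can be requested; the -b full option is described in the gs_ctl documentation, so verify it against your version:
gs_ctl build -D /opt/huawei/install/data/dn -b full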
11. View and modify parameters
openGauss=# show max_process_memory;
max_process_memory
--------------------
2GB
(1 row)
[omm@openGauss1 ~]$ gs_guc reload -D /opt/huawei/install/data/dn -c "max_process_memory=2096MB"  # gs_guc is an OS-level tool run from the shell as omm, not inside gsql
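Whether a reload is enough or a restart is required can be checked from the parameter's context in pg_settings (a sketch using the standard catalog view):
-- context values such as 'postmaster' require a restart; 'sighup' takes effect on reload
SELECT name, setting, unit, context FROM pg_settings WHERE name = 'max_process_memory';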
12. View SQL execution plans
openGauss=# EXPLAIN select * from test;
QUERY PLAN
---------------------------------------------------------
Seq Scan on test (cost=0.00..23.17 rows=1317 width=32)
(1 row)
openGauss=#
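Plain EXPLAIN shows estimates only. EXPLAIN ANALYZE executes the statement and reports actual times and row counts, so wrap data-modifying statements in a transaction you can roll back. A sketch:
-- executes the query and shows actual runtime and row counts per plan node
EXPLAIN ANALYZE SELECT * FROM test;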
13. Back up the database (logical backup with gs_dump)
[omm@openGauss1 opt]$ gs_dump postgres -p 15400 -f /u01/backup/postgres_bak220721.sql
gs_dump[port='15400'][postgres][2022-07-21 03:08:59]: The total objects number is 410.
gs_dump[port='15400'][postgres][2022-07-21 03:08:59]: [100.00%] 410 objects have been dumped.
gs_dump[port='15400'][postgres][2022-07-21 03:08:59]: dump database postgres successfully
gs_dump[port='15400'][postgres][2022-07-21 03:08:59]: total time: 951 ms
[omm@openGauss1 opt]$
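A plain-format dump like the one above can be restored by replaying the SQL file with gsql; the target database name here is an assumption:
gsql -d postgres -p 15400 -f /u01/backup/postgres_bak220721.sql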
14. Check the status of all nodes
[omm@openGauss1 ~]$ gs_check -i CheckClusterState
Parsing the check items config file successfully
Distribute the context file to remote hosts successfully
Start to health check for the cluster. Total Items:1 Nodes:2
Checking... [=========================] 1/1
Start to analysis the check result
CheckClusterState...........................OK
The item run on 2 nodes. success: 2
Analysis the check result successfully
Success. All check items run completed. Total:1 Success:1
For more information please refer to /opt/huawei/install/om/script/gspylib/inspection/output/CheckReport_202207211184890527.tar.gz
[omm@openGauss1 ~]$
[omm@openGauss1 ~]$ gs_check -i CheckCPU
Parsing the check items config file successfully
Distribute the context file to remote hosts successfully
Start to health check for the cluster. Total Items:1 Nodes:2
Checking... [=========================] 1/1
Start to analysis the check result
CheckCPU....................................OK
The item run on 2 nodes. success: 2
Analysis the check result successfully
Success. All check items run completed. Total:1 Success:1
For more information please refer to /opt/huawei/install/om/script/gspylib/inspection/output/CheckReport_202207211192995315.tar.gz
[omm@openGauss1 ~]$ zcat /opt/huawei/install/om/script/gspylib/inspection/output/CheckReport_202207211192995315.tar.gz
nodes/CheckCPU_openGauss1_202207211192995315.out:
[HOST] openGauss1
[NAM] CheckCPU
[RST] OK
[VAL]
[RAW]
Linux 4.19.90-2003.4.0.0036.oe1.x86_64 (openGauss1) 07/21/22 _x86_64_ (2 CPU)
03:18:52 CPU %user %nice %system %iowait %steal %idle
03:18:53 all 3.57 0.00 3.57 0.00 0.00 92.86
03:18:54 all 1.47 0.00 1.47 0.00 0.00 97.06
03:18:55 all 3.55 0.00 4.57 0.00 0.00 91.88
03:18:56 all 3.52 0.00 5.03 0.00 0.00 91.46
03:18:57 all 3.05 0.00 4.06 0.00 0.00 92.89
Average: all 3.02 0.00 3.73 0.00 0.00 93.25
nodes/CheckCPU_openGauss2_202207211192995315.out:
[HOST] openGauss2
[NAM] CheckCPU
[RST] OK
[VAL]
[RAW]
Linux 4.19.90-2003.4.0.0036.oe1.x86_64 (openGauss2) 07/21/22 _x86_64_ (2 CPU)
03:18:52 CPU %user %nice %system %iowait %steal %idle
03:18:53 all 1.03 0.00 4.62 0.00 0.00 94.36
03:18:54 all 0.51 0.00 1.02 0.00 0.00 98.48
03:18:55 all 1.53 0.00 4.59 0.00 0.00 93.88
03:18:56 all 2.03 0.00 4.57 0.00 0.00 93.40
03:18:57 all 1.52 0.00 1.52 0.00 0.00 96.95
Average: all 1.32 0.00 3.26 0.00 0.00 95.42
log/gs_check_192.168.50.126.log:
[2022-07-20 14:17:58][gs_check][line:698][DEBUG] Load check context from cache file
[2022-07-20 14:17:58][CheckItem.py][line:349][DEBUG] Start to run CheckClusterState
[2022-07-20 14:18:06][CheckItem.py][line:361][DEBUG] Finish to run CheckClusterState
[2022-07-20 14:18:06][gs_check][line:1456][DEBUG] run check items done and exit the command
[2022-07-21 03:17:32][gs_check][line:698][DEBUG] Load check context from cache file
[2022-07-21 03:17:32][CheckItem.py][line:349][DEBUG] Start to run CheckClusterState
[2022-07-21 03:17:35][CheckItem.py][line:361][DEBUG] Finish to run CheckClusterState
[2022-07-21 03:17:35][gs_check][line:1456][DEBUG] run check items done and exit the command
[2022-07-21 03:18:52][gs_check][line:698][DEBUG] Load check context from cache file
[2022-07-21 03:18:52][CheckItem.py][line:349][DEBUG] Start to run CheckCPU
[2022-07-21 03:18:57][CheckItem.py][line:361][DEBUG] Finish to run CheckCPU
[2022-07-21 03:18:57][gs_check][line:1456][DEBUG] run check items done and exit the command
log/gs_check_openGauss1.log:
[2022-07-20 14:17:54][gs_check][line:772][DEBUG] Start to parse the check items config file
[2022-07-20 14:17:54][gs_check][line:894][INFO] Parsing the check items config file successfully
[2022-07-20 14:17:55][gs_check][line:881][DEBUG] Start to distributing the check context dump file
[2022-07-20 14:17:56][gs_check][line:894][INFO] Distribute the context file to remote hosts successfully
[2022-07-20 14:17:56][gs_check][line:894][INFO] Start to health check for the cluster. Total Items:1 Nodes:2
[2022-07-20 14:17:57][gs_check][line:698][DEBUG] Load check context from cache file
[2022-07-20 14:17:57][CheckItem.py][line:349][DEBUG] Start to run CheckClusterState
[2022-07-20 14:18:04][CheckItem.py][line:361][DEBUG] Finish to run CheckClusterState
[2022-07-20 14:18:04][gs_check][line:1456][DEBUG] run check items done and exit the command
[2022-07-20 14:18:10][gs_check][line:894][INFO] Start to analysis the check result
[2022-07-20 14:18:10][gs_check][line:894][INFO] Analysis the check result successfully
[2022-07-21 03:17:29][gs_check][line:772][DEBUG] Start to parse the check items config file
[2022-07-21 03:17:29][gs_check][line:894][INFO] Parsing the check items config file successfully
[2022-07-21 03:17:30][gs_check][line:881][DEBUG] Start to distributing the check context dump file
[2022-07-21 03:17:31][gs_check][line:894][INFO] Distribute the context file to remote hosts successfully
[2022-07-21 03:17:31][gs_check][line:894][INFO] Start to health check for the cluster. Total Items:1 Nodes:2
[2022-07-21 03:17:32][gs_check][line:698][DEBUG] Load check context from cache file
[2022-07-21 03:17:32][CheckItem.py][line:349][DEBUG] Start to run CheckClusterState
[2022-07-21 03:17:34][CheckItem.py][line:361][DEBUG] Finish to run CheckClusterState
[2022-07-21 03:17:34][gs_check][line:1456][DEBUG] run check items done and exit the command
[2022-07-21 03:17:37][gs_check][line:894][INFO] Start to analysis the check result
[2022-07-21 03:17:37][gs_check][line:894][INFO] Analysis the check result successfully
[2022-07-21 03:18:49][gs_check][line:772][DEBUG] Start to parse the check items config file
[2022-07-21 03:18:49][gs_check][line:894][INFO] Parsing the check items config file successfully
[2022-07-21 03:18:50][gs_check][line:881][DEBUG] Start to distributing the check context dump file
[2022-07-21 03:18:51][gs_check][line:894][INFO] Distribute the context file to remote hosts successfully
[2022-07-21 03:18:51][gs_check][line:894][INFO] Start to health check for the cluster. Total Items:1 Nodes:2
[2022-07-21 03:18:52][gs_check][line:698][DEBUG] Load check context from cache file
[2022-07-21 03:18:52][CheckItem.py][line:349][DEBUG] Start to run CheckCPU
[2022-07-21 03:18:57][CheckItem.py][line:361][DEBUG] Finish to run CheckCPU
[2022-07-21 03:18:57][gs_check][line:1456][DEBUG] run check items done and exit the command
[2022-07-21 03:19:00][gs_check][line:894][INFO] Start to analysis the check result
[2022-07-21 03:19:00][gs_check][line:894][INFO] Analysis the check result successfully
CheckResult_202207211192995315:
CheckCPU....................................OK
The item run on 2 nodes. success: 2
Success. All check items run completed. Total:1 Success:1
15. Check database performance
[omm@openGauss1 ~]$ gs_checkperf -i pmk -U omm
Cluster statistics information:
Host CPU busy time ratio : 1.69 %
MPPDB CPU time % in busy time : 37.17 %
Shared Buffer Hit ratio : 93.54 %
In-memory sort ratio : 0
Physical Reads : 363
Physical Writes : 182
DB size : 38 MB
Total Physical writes : 182
Active SQL count : 1
Session count : 1
[omm@openGauss1 ~]$
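The "Shared Buffer Hit ratio" reported by gs_checkperf can be cross-checked per database from pg_stat_database (a sketch using standard statistics columns):
-- buffer cache hit ratio per database
SELECT datname,
       round(blks_hit::numeric / NULLIF(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database;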
16. Back up the database (physical backup with gs_basebackup / gs_probackup)
[omm@openGauss1 backup]$ gs_basebackup -D /u01/backup/openGauss1 -p 15400
INFO: The starting position of the xlog copy of the full build is: 0/5000028. The slot minimum LSN is: 0/5000148.
[2022-07-21 09:19:21]:begin build tablespace list
[2022-07-21 09:19:21]:finish build tablespace list
[2022-07-21 09:19:21]:begin get xlog by xlogstream
[2022-07-21 09:19:21]: check identify system success
[2022-07-21 09:19:21]: send START_REPLICATION 0/5000000 success
[2022-07-21 09:19:21]: keepalive message is received
[2022-07-21 09:19:23]: keepalive message is received
[2022-07-21 09:19:26]: keepalive message is received
[2022-07-21 09:19:34]:gs_basebackup: base backup successfully
[omm@openGauss1 backup]$ cd openGauss1/
[omm@openGauss1 openGauss1]$ ls
PG_VERSION cacert.pem mot.conf pg_ctl.lock pg_hba.conf.bak pg_llog pg_notify pg_snapshots pg_twophase postgresql.conf.bak rewind_lable server.key.cipher
backup_label global pg_clog pg_errorinfo pg_hba.conf.lock pg_logical pg_replslot pg_stat_tmp pg_xlog postgresql.conf.lock server.crt server.key.rand
base gswlm_userinfo.cfg pg_csnlog pg_hba.conf pg_ident.conf pg_multixact pg_serial pg_tblspc postgresql.conf postmaster.pid.lock server.key undo
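Restoring from a gs_basebackup copy is essentially stopping the instance, swapping in the backed-up data directory, and starting again. A rough sketch assuming a single local instance and the paths used above; the .broken suffix is just an illustrative name:
gs_ctl stop -D /opt/huawei/install/data/dn -m fast
mv /opt/huawei/install/data/dn /opt/huawei/install/data/dn.broken   # keep the damaged copy aside
cp -a /u01/backup/openGauss1 /opt/huawei/install/data/dn            # restore the base backup
chmod 0700 /opt/huawei/install/data/dn                              # the server refuses to start on loose data-directory permissions
gs_ctl start -D /opt/huawei/install/data/dn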
[omm@openGauss1 openGauss1]$ gs_probackup version
gs_probackup (openGauss 3.0.0 build 02c14696) compiled at 2022-04-01 18:12:19 commit 0 last mr
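gs_probackup follows the pg_probackup-style workflow of initializing a backup catalog, registering an instance, and then taking backups. A minimal sketch with placeholder paths and instance name; check the exact options against the gs_probackup documentation for your version:
gs_probackup init -B /u01/backup/probackup                                        # create the backup catalog
gs_probackup add-instance -B /u01/backup/probackup --instance dn1 -D /opt/huawei/install/data/dn
gs_probackup backup -B /u01/backup/probackup --instance dn1 -b FULL               # take a full backup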