_gc_policy_time (called _gc_affinity_time in 10g)
Specifies the interval, in minutes, at which Oracle collects the number of times each node has accessed each database object. The default is 10, which means that every 10 minutes Oracle collects, for every node, the access counts of the data blocks belonging to each database object.
_gc_policy_minimum (called _gc_affinity_minimum in 10g)
Specifies the minimum number of times per minute a database object must be accessed before Oracle considers changing its master node. The default is 6000 in 10g and 1500 from 11g onward; only when an object is accessed at least that many times per minute can its mastership be changed.
_gc_affinity_ratio (called _gc_affinity_limit in 10g)
Specifies by how many times one node's access count for an object must exceed the combined access count of all other nodes before the object's master is changed. The default is 50: when one node accesses an object more than 50 times as often as all other nodes combined, the object's mastership is moved to the node with the most accesses.
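The current values of these hidden parameters can be read from the internal parameter tables. The following query is a minimal sketch (run as SYS; x$ksppi and x$ksppcv are undocumented internal fixed tables):
-- Show the DRM-related hidden parameters and their current values (run as SYS).
select i.ksppinm  as parameter_name,
       v.ksppstvl as current_value,
       i.ksppdesc as description
  from x$ksppi  i,
       x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm in ('_gc_policy_time', '_gc_policy_minimum', '_gc_affinity_ratio');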
DRM trigger conditions
If, within a _gc_policy_time interval, a database object is accessed more than _gc_policy_minimum times per minute, and one node's access count for that object exceeds _gc_affinity_ratio (50) times the combined access count of all other nodes, then Oracle gradually (window by window) migrates the mastership of the blocks belonging to that object to the node with the most accesses. For example, with the 11g defaults, an object accessed 3,000 times per minute by instance 2 and only 40 times per minute by instance 1 qualifies (3,000 ≥ 1,500 and 3,000 > 50 × 40), so its blocks are remastered to instance 2.
Note
DRM migrates only the mastership information of the data blocks; it does not copy the latest version of the blocks to the node with the most accesses.
affinity lock
For the blocks of an object that satisfies the DRM conditions, Oracle marks the corresponding buffers in memory with a flag known as an affinity lock. Accessing a block that carries an affinity lock is more efficient, since the block's resource is mastered locally and lock operations do not require messages across the interconnect.
windows
A single object may contain a great many blocks, so the number of blocks whose mastership must be transferred can be large. Because transferring mastership requires locking the corresponding resources, it is not feasible to process all of the blocks in one pass; the work is therefore divided into windows, and each pass of the migration handles only the blocks in one window. A window thus defines how many blocks one DRM operation transfers; the default window size is 64.
undo affinity
Undo segments can also be regarded as database objects, but they behave somewhat differently from other objects with respect to affinity: for an undo segment, Oracle automatically assigns mastership to the node that brings that undo segment online.
Because DRM has to migrate the mastership of block resources from one node to another, several background processes must cooperate to complete it.
The basic flow is as follows: when the interval defined by _gc_policy_time elapses, the LMD process checks the DRM queue; if it finds pending DRM requests, it starts migrating (remastering) the corresponding blocks window by window. Finally, the LMON and LMS processes work together to complete the transfer of block mastership. The detailed steps are:
1) Quiesce: in this phase LMON prepares to start the DRM operation and notifies the LMS processes on all nodes that, once they have finished their current operations on the blocks whose mastership is to be migrated, they must stop accepting new requests for those blocks.
2) Freeze: after receiving the message from LMON, the LMS processes on all nodes complete the outstanding requests against the blocks involved in the DRM operation and then freeze those blocks. From this point on, a user process that needs to operate on one of these blocks has to wait (wait event: gcs drm freeze in enter server mode).
3) Cleanup: the old mastership information for the affected blocks is removed on all nodes.
4) Rebuild: the LMS processes on all instances send the lock information they hold locally for the affected blocks to the new master, which rebuilds the mastership information. After receiving the information from the other nodes, the new master writes the new mastership information into its local memory.
5) Unfreeze: this phase can also be called the completion phase. The blocks that were frozen earlier have now had their mastership migrated, the freeze is lifted, and user processes can access the blocks again. In this phase the LMON process on the node that initiated the DRM sends a "DRM complete" message to the LMON processes on all remote nodes; each remote LMON checks with its local LMD process whether the local work is finished and then replies to the initiating LMON. Only after every remote LMON has replied is the DRM operation considered complete and the resource information consistent on all nodes. Afterwards, the windows for the completed blocks are marked as done in the DRM queue, so the next time LMD scans the queue it skips them; when the DRM queue fills up, these entries can be overwritten.
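The cumulative cost of these phases can be observed per instance. The query below is a minimal sketch against gv$dynamic_remaster_stats (available in 11.2; statistics are cumulative since instance startup, and column availability may vary by release):
-- Per-instance DRM statistics, including the time spent in each phase.
select inst_id,
       remaster_ops,
       remastered_objects,
       quiesce_time,
       freeze_time,
       cleanup_time,
       replay_time,
       fixwrite_time,
       sync_time
  from gv$dynamic_remaster_stats;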
In addition, if another process needs to access a block that is in the middle of a DRM operation, the resulting wait event is gc remaster.
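Exact wait event names differ slightly between releases; a minimal sketch to list the DRM-related events defined in your release:
-- List DRM-related wait events defined in this release.
select name, wait_class
  from v$event_name
 where name like 'gc remaster%'
    or name like 'gcs drm%'
 order by name;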
1) Modify the DRM-related hidden parameters so that the DRM conditions are easier to satisfy:
alter system set "_gc_policy_minimum"=100;
alter system set "_gc_policy_time"=4 scope=spfile;
2) Create a test table and populate it with about 2,000 rows.
create table test_big
(test_id number(10),
time_id date,
contents1 varchar2(3000),
contents2 varchar2(3000)
) tablespace test;
-- insert data
begin
for i in 0 .. 2000 loop
insert into test_big
values
(i,
sysdate,
dbms_random.string('a', 3000),
dbms_random.string('a', 3000));
end loop;
commit;
end;
/
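The trace files and views later in this test refer to the table by its DATA_OBJECT_ID (73599 in this run; the value will differ on your system). A minimal sketch to look it up:
-- Map the test table to the DATA_OBJECT_ID that appears in traces and DRM views.
select owner, object_name, data_object_id
  from dba_objects
 where object_name = 'TEST_BIG';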
3) Create a stored procedure that accesses a specified number of data blocks.
create or replace procedure access_table(p_times number) as
begin
if p_times < 1 then
dbms_output.put_line('invalid value');
else
for i in 1 .. p_times loop
update test_big set time_id = sysdate where test_id = i;
commit;
end loop;
end if;
end;
/
4) Create a shell script on each node to call the stored procedure.
Instance 1:
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=orcl1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
for ((i=1;i<=8;i++))
do
date
sqlplus -S /nolog <<EOF
connect / as sysdba
exec access_table(20);
spool off;
exit
EOF
date
sleep 30
done
exit
Instance 2:
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=orcl2
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
for ((i=1;i<=20;i++))
do
date
sqlplus -S /nolog <<EOF
connect / as sysdba
exec access_table(1000);
spool off;
exit
EOF
date
sleep 5
done
exit
The shell scripts above make instance 2 access the test table far more often than instance 1.
5) Run the scripts and observe for a while.
6) Examine the trace files produced by the background processes.
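The trace files of the relevant background processes can be located from v$process (the TRACEFILE column exists from 11g onward); a minimal sketch:
-- Locate the trace files of the LMON / LMD / LMS background processes on this instance.
select pname, tracefile
  from v$process
 where pname like 'LM%'
 order by pname;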
Instance 2, LMD trace file:
*** 2019-01-23 15:38:57.183
Begin DRM(5) (swin 0) - AFFINITY transfer pkey 73599.0 to 2 oscan 0.0
kjiobjscn 1
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.34.0)
all ftds received
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.38.0)
all ftds received
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.34.0)
all ftds received
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.38.0)
all ftds received
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.31.0)
all ftds received
* kjxftdn: break from kjxftdn, post lmon later
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.38.0)
all ftds received
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.34.0)
all ftds received
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.36.0)
all ftds received
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.34.0)
all ftds received
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.36.0)
all ftds received
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.34.0)
all ftds received
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.36.0)
all ftds received
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.34.0)
all ftds received
*** 2019-01-23 15:38:57.632
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.38.0)
all ftds received
ftd (30) received from node 1 (4/0.31.0)
all ftds received
ftd (33) received from node 1 (4/0.34.0)
all ftds received
ftd (35) received from node 1 (4/0.36.0)
all ftds received
ftd (37) received from node 1 (4/0.38.0)
all ftds received
2019-01-23 15:38:57.701543 :
* End DRM for pkey remastering request(s) (locally requested)
Instance 1, LMD trace file:
*** 2019-01-23 15:38:59.279
Rcvd DRM(5) AFFINITY Transfer pkey 73599.0 from 1 to 2 oscan 1.1
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.33.0)
all ftds received
* kjxftdn: break from kjxftdn, post lmon later
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.38.0)
all ftds received
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.31.0)
all ftds received
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.38.0)
all ftds received
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.34.0)
all ftds received
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.38.0)
all ftds received
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.34.0)
all ftds received
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.38.0)
all ftds received
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.34.0)
all ftds received
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.38.0)
all ftds received
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.34.0)
all ftds received
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.38.0)
all ftds received
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.34.0)
all ftds received
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.36.0)
all ftds received
* kjxftdn: break from kjxftdn, post lmon later
ftd (30) received from node 2 (4/0.31.0)
all ftds received
ftd (33) received from node 2 (4/0.34.0)
all ftds received
ftd (35) received from node 2 (4/0.36.0)
all ftds received
ftd (37) received from node 2 (4/0.38.0)
all ftds received
2019-01-23 15:38:59.792432 :
End DRM(5) for pkey transfer request(s) from 2
The output above shows that the LMD processes did start the DRM operation and notified all remote instances.
Instance 2, LMS trace file:
lms 0 finished fixing gcs write protocol
DRM(5) win(2) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(3) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(4) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(5) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(6) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(7) lms 0 finished replaying gcs resources
*** 2019-01-23 15:38:57.632
lms 0 finished fixing gcs write protocol
DRM(5) win(8) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
2019-01-23 16:25:41.643889 : GSIPC:PING: send PINGREQ[1] to 1.1 (seq 0.56731) stm 0xd6e950c5
Instance 1, LMS trace file:
DRM(5) win(2) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(3) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(4) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(5) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(6) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(7) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
DRM(5) win(8) lms 0 finished replaying gcs resources
lms 0 finished fixing gcs write protocol
As we can see, the LMS process on every node received the DRM messages.
7) Look at some views for more information about the DRM operation.
SQL> select a.DATA_OBJECT_ID,
  2         a.GC_MASTERING_POLICY,
  3         a.CURRENT_MASTER,
  4         a.PREVIOUS_MASTER,
  5         a.REMASTER_CNT
  6    from v$gcspfmaster_info a
  7   where a.DATA_OBJECT_ID=73599
  8  /
DATA_OBJECT_ID GC_MASTERIN CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
-------------- ----------- -------------- --------------- ------------
73599 Affinity 1 0 2
This shows that object 73599 (test_big) is currently mastered by instance 2 (CURRENT_MASTER = 1; the instance numbering in this view starts at 0) and that its mastership has been changed twice (REMASTER_CNT = 2).
SQL> select * from gv$policy_history b where b.DATA_OBJECT_ID=73599
  2  /
INST_ID POLICY_EVENT DATA_OBJECT_ID TARGET_INSTANCE_NUMBER EVENT_DATE
---------- -------------------- -------------- ---------------------- --------------------
1 initiate_affinity 73599 1 01/23/2019 15:16:47
1 push_affinity 73599 2 01/23/2019 15:38:59
We can see that the initiate_affinity operation took place at 01/23/2019 15:16:47 with target instance 1, and the push_affinity operation took place at 01/23/2019 15:38:59 with target instance 2.
《RAC核心技术详解》