Reposted from the well-known Oracle blogger on CSDN: tianlesoftware
Original post: http://blog.csdn.net/tianlesoftware/article/details/7626421
English book download: http://dl.dbank.com/c0hf1ba269
Source code download: http://www.apress.com/9781430239543
To make the material easier to study, I have added some annotations of my own. If anything here infringes your rights, please let me know and I will remove it immediately.
The single most important feature of Oracle is one that first appeared in version 6: the change vector, a mechanism for describing changes to data blocks, the heart of redo and undo.
-- For Oracle, the most revolutionary feature is the change vector, which first appeared in Oracle 6: a mechanism for describing changes to data blocks, and the heart of both redo and undo.
This is the technology that keeps your data safe, minimizes conflict between readers and writers, and allows for instance recovery, media recovery, all the standby technologies, flashback mechanisms, change data capture, and streams. So this is the technology that we’re going to review first.
-- Change vectors are the technology that keeps data safe, minimizes the conflict between readers and writers, and allows instance recovery, media recovery, and so on.
1. Basic Data Change
One of the strangest features of an Oracle database is that it records your data twice. One copy of the data exists in a set of data files which hold something that is nearly the latest, up-to-date version of your data (although the newest version of some of the data will be in memory, waiting to be copied to disc); the other copy of the data exists as a set of instructions—the redo log files—telling you how to re-create the content of the data files from scratch.
-- An Oracle database records your data twice: one copy is a nearly up-to-date version of the data in the data files (the newest version of some of the data may still be in the buffer cache in memory, waiting to be written to disc); the other copy is a set of instructions, the redo log files, describing how to re-create the contents of the data files.
1.1 The Approach
Under the Oracle approach to data change, when you issue an instruction to change an item of data, Oracle doesn’t just go to a data file (or the in-memory copy if the item happens to be buffered), find the item, and change it. Instead, Oracle works through four critical steps to make the change happen. Stripped to the bare minimum of detail, these are
-- When you want to change an item of data, Oracle does not simply go to the data file (or the in-memory copy if the item happens to be buffered), find the item, and change it. Instead, it works through four critical steps to make the change happen:
1. Create a description of how to change the data item.
2. Create a description of how to re-create the original data item if needed.
3. Create a description of how to create the description of how to re-create the original data item.
4. Change the data item.
The tongue-twisting nature of the third step gives you some idea of how convoluted the mechanism is, but all will become clear. With the substitution of a few technical labels in these steps, here’s another way of describing the actions of changing a data block:
1. Create a redo change vector describing the change to the data block.
2. Create an undo record for insertion into an undo block in the undo tablespace.
3. Create a redo change vector describing the change to the undo block.
4. Change the data block.
The exact sequence of steps and the various technicalities around the edges vary depending on the version of Oracle, the nature of the transaction, how much work has been done so far in the transaction, what the states of the various database blocks were before you executed the instruction, whether or not you’re looking at the first change of a transaction, and so on.
-- The exact sequence of steps and the surrounding technicalities depend on the Oracle version, the nature of the transaction, how much work the transaction has already done, the state of the various database blocks before the instruction executed, whether this is the first change of the transaction, and so on.
1.2 An Example
I’m going to start with the simplest example of a data change, which you might expect to see as you updated a single row in the middle of an OLTP transaction that had already updated a scattered set of rows. In fact, the order of the steps in the historic (and most general) case is not the order I’ve listed in the preceding section.
The steps actually go in the order 3, 1, 2, 4, and the two redo change vectors are combined into a single redo change record and copied into the redo log (buffer) before the undo block and data block are modified (in that order). This means a slightly more accurate version of my list of actions would be:
-- (The two redo change vectors are combined into one redo record and copied into the log buffer before the undo block and the data block are modified, in that order.)
-- The concrete flow of an update transaction is therefore:
1. Create a redo change vector describing how to insert an undo record into an undo block.
2. Create a redo change vector for the data block change.
3. Combine the redo change vectors into a redo record and write it to the log buffer.
4. Insert the undo record into the undo block.
5. Change the data block.
Here’s a little sample, taken from a system running Oracle 9.2.0.8 (the last version in which it’s easy to create the most generic example of the mechanism). We’re going to execute an update statement that updates five rows by jumping back and forth between two table blocks, dumping various bits of information into our process trace file before and after the update. I need to make my update a little bit complicated because I want the example to be as simple as possible while avoiding a few “special case” details.
Note The first change in a transaction includes some special steps, and the first change a transaction makes to each block is slightly different from the most “typical” change.
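What follows is a minimal sketch of the kind of test just described, not the book’s actual core_demo_02.sql script: the table name t1, the column names, the file/block numbers, and the log file path are all placeholders.

-- Hypothetical table; in the real test the rows are arranged so that
-- consecutive updates jump back and forth between two table blocks.
create table t1 (id number, v1 varchar2(10));
-- ... populate t1 ...

update t1 set v1 = 'xxxxxx' where id in (2, 4, 6, 8, 10);   -- five scattered rows

-- Dumps that can be taken (as SYS) before and after the update and the commit:
alter system dump datafile 5 block 186762;                  -- file#/block# are placeholders
alter system dump logfile '/u01/oradata/ORCL/redo01.log';   -- path is a placeholder

commit;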
1.3 Debriefing
So where have we got to so far? When we change a data block, Oracle inserts an undo record into an undo block to tell us how to reverse that change. But for every change that happens to a block in the database, Oracle creates a redo change vector describing how to make that change, and it creates the vectors before it makes the changes. Historically, it created the undo change vector before it created the “forward” change vector, hence, the following sequence of events (see Figure 2-1) that I described earlier occurs:
-- When we change a data block, Oracle inserts an undo record into an undo block to tell us how to reverse the change. But for every change that happens to a block in the database, Oracle creates a redo change vector describing how to make that change, and it creates the vectors before it makes the changes. Historically the undo change vector was created before the “forward” change vector, giving the sequence of events shown in Figure 2-1:
1. Create the change vector for the undo record.
2. Create the change vector for the data block.
3. Combine the change vectors and write the redo record into the redo log (buffer).
4. Insert the undo record into the undo block.
5. Make the change to the data block.
When you look at the first two steps here, of course, there’s no reason to believe that I’ve got them in the right order. Nothing I’ve described or dumped shows that the actions must be happening in that order. But there is one little detail I can now show you that I omitted from the dumps of the change vectors, partly because things are different from 10g onwards and partly because the description of the activity is easier to comprehend if you first think about it in the wrong order.
Note Oracle Database 10g introduced an important change to the way that redo change vectors are created and combined, but the underlying mechanisms are still very similar; moreover, the new mechanisms don’t apply to RAC, and even single-instance Oracle falls back to the old mechanism if a transaction gets too large or you have enabled supplemental logging or flashback database. We will be looking at the new strategy later in this chapter. One thing that doesn’t change, though, is that redo is generated before changes are applied to data and undo blocks.
-- Oracle 10g introduced an important change to the way redo change vectors are created and combined, but the underlying mechanism is still very similar. The new mechanism does not apply to RAC, and even single-instance Oracle falls back to the old mechanism if a transaction gets too large or if supplemental logging or flashback database is enabled. We will look at the new strategy later in this chapter. One thing that does not change is that redo is generated before the changes are applied to the data and undo blocks.
1.4 Summary of Observations
Before we continue, we can summarize our observations as follows: in the data files,every change we make to our own data is matched by Oracle with the creation of an undo record (which is also a change to a data file); at the same time Oracle puts into the redo log a description of how to make our change and how to make its own change.
-- Summary: for every change we make to our own data in the data files, Oracle creates a matching undo record (which is itself a change to a data file); at the same time Oracle writes into the redo log a description of how to make our change and how to make its own (undo) change.
You might note that since data can be changed “in place,” we could make an “infinite” (i.e., arbitrarily large) number of changes to our single row of data, but we clearly can’t record an infinite number of undo records without growing the data files of the undo tablespace, nor can we record an infinite number of changes in the redo log without constantly adding more redo log files. For the sake of simplicity, we’ll postpone the issue of infinite changes and simply pretend for the moment that we can record as many undo and redo records as we need.
-- We cannot really record an unlimited volume of undo and redo; for the moment we simply pretend that we can record as many undo and redo records as we need.
2. ACID
Although we’re not going to look at transactions in this chapter, it is, at this point, worth mentioning the ACID requirements of a transactional system and how Oracle’s implementation of undo and redo gives Oracle the capability of meeting those requirements. Table 2-1 lists the ACID requirements.
The following list goes into more detail about each of the requirements in Table 2-1:
Atomicity: As we make a change, we create an undo record that describes how to reverse the change. This means that when we are in the middle of a transaction, another user trying to view any data we have modified can be instructed to use the undo records to see an older version of that data, thus making our work invisible until the moment we decide to publish (commit) it. We can ensure that the other user either sees nothing of what we’ve done or sees everything.
Consistency: This requirement is really about constraints defining the legal states of the database; but we could also argue that the presence of undo records means that other users can be blocked from seeing the incremental application of our transaction and therefore cannot see the database moving from one legal state to another by way of a temporarily illegal state—what they see is either the old state or the new state and nothing in between. (The internal code, of course, can see all the intermediate states—and take advantage of being able to see them—but the end-user code never sees inconsistent data.)
Isolation: Yet again we can see that the availability of undo records stops other users from seeing how we are changing the data until the moment we decide that our transaction is complete and commit it. In fact, we do better than that: the availability of undo means that other users need not see the effects of our transactions for the entire duration of their transactions, even if we start and end our transaction between the start and end of their transaction. (This is not the default isolation level in Oracle, but it is an available isolation level; see the “Isolation Levels” sidebar.) Of course, we do run into confusing situations when two users try to change the same data at the same time; perfect isolation is not possible in a world where transactions have to take a finite amount of time.
Durability: This is the requirement that highlights the benefit of the redo log. How do you ensure that a completed transaction will survive a system failure? The obvious strategy is to keep writing any changes to disc, either as they happen or as the final step that “completes” the transaction. If you didn’t have the redo log, this could mean writing a lot of random data blocks to disc as you change them. Imagine inserting ten rows into an order_lines table with three indexes; this could require 31 randomly distributed disk writes to make changes to 1 table block and 30 index blocks durable. But Oracle has the redo mechanism. Instead of writing an entire data block as you change it, you prepare a small description of the change, and 31 small descriptions could end up as just one (relatively) small write to the end of the log file when you need to make sure that you’ve got a permanent record of the entire transaction.
-- (In short: the redo mechanism turns what would be 31 scattered block writes into a single small sequential write to the end of the log file.)
2.1 ISOLATION LEVELS
See also: Oracle transaction isolation levels explained
http://blog.csdn.net/tianlesoftware/article/details/6594655
Oracle offers three isolation levels: read committed (the default), read only, and serializable.
As a brief sketch of the differences, consider the following scenario: table t1 holds one row, and table t2 is identical to t1 in structure. We have two sessions that go through the following steps in order:
1. Session 1: select from t1;
2. Session 2: insert into t1 select * from t1;
3. Session 2: commit;
4. Session 1: select from t1;
5. Session 1: insert into t2 select * from t1;
If session 1 is operating at isolation level read committed, it will select one row on the first select, select two rows on the second select, and insert two rows.
If session 1 is operating at isolation level read only, it will select one row on the first select, select one row on the second select, and fail with Oracle error “ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction.” If session 1 is operating at isolation level serializable, it will select one row on the first select, select one row on the second select, and insert one row.
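For reference, here is a hedged sketch of how session 1 could be put into each of the three modes before running the steps above; this is standard Oracle syntax, not part of the original example.

-- Default behaviour: statement-level read consistency.
alter session set isolation_level = read committed;

-- Read-only transaction: queries see the data as of the start of the transaction,
-- and any insert/update/delete fails with ORA-01456.
set transaction read only;

-- Serializable: the whole transaction sees the data as of its start.
alter session set isolation_level = serializable;
-- or, for a single transaction:
set transaction isolation level serializable;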
Not only are the mechanisms for undo and redo sufficient to implement the basic requirements of ACID, they also offer advantages in performance and recoverability.
The performance benefit of redo has already been covered in the comments on durability; if you want an example of the performance benefits of undo, think about isolation—how can you run a report that takes minutes to complete if you have users who need to update data at the same time? In the absence of something like the undo mechanism, you would have to choose between allowing wrong results and locking out everyone who wants to change the data. This is a choice that you have to make with some other database products. The undo mechanism allows for an extraordinary degree of concurrency because, per Oracle’s marketing sound bite, “readers don’t block writers, writers don’t block readers.”
As far as recoverability is concerned (and we will examine recoverability in more detail in Chapter 6), if we record a complete list of changes we have made to the database, then we could, in principle, start with a brand-new database and simply reapply every single change description to reproduce an up-to-date copy of the original database. Practically, of course, we don’t (usually) start with a new database; instead we take regular backup copies of the data files so that we need only replay a small fraction of the total redo generated to bring the copy database up to date.
3. Redo Simplicity
The way we handle redo is quite simple: we just keep generating a continuous stream of redo records and pumping them as fast as we can into the redo log, initially into an area of shared memory known as the redo log buffer. Eventually, of course, Oracle has to deal with writing the buffer to disk and, for operational reasons, actually writes the “continuous” stream to a small set of predefined files—the online redo log files. The number of online redo log files is limited, so we have to reuse them constantly in a round-robin fashion.
-- Oracle handles redo very simply: it keeps generating a continuous stream of redo records and writes them into the redo log as fast as it can. The records go first into an area of shared memory called the redo log buffer, and from the buffer they are written to disc into the online redo log files. The number of online redo log files is limited, so they have to be reused constantly in a round-robin fashion.
To protect the information stored in the online redo log files over a longer time period, most systems are configured to make a copy, or possibly many copies, of each file as it becomes full before allowing Oracle to reuse it: the copies are referred to as the archived redo log files. As far as redo is concerned, though, it’s essentially write it and forget it—once a redo record has gone into the redo log (buffer), we don’t (normally) expect the instance to reread it. At the basic level, this “write and forget” approach makes redo a very simple mechanism.
-- To protect the information in the online redo log files for a longer period, most systems keep one or more copies of each file as it becomes full (the archived redo log files). As far as redo is concerned, it is essentially a “write and forget” mechanism: once a redo record has gone into the redo log (buffer), we do not normally expect to read it again, which is what keeps redo so simple.
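As a quick illustration of the round-robin arrangement, the online redo log groups and their members can be listed with queries like the following (a sketch; the column selection is just illustrative):

-- One row per online redo log group: which group is CURRENT, which are ready for reuse.
select group#, thread#, sequence#, bytes/1024/1024 as size_mb, archived, status
from v$log
order by group#;

-- The physical file(s) backing each group.
select group#, member
from v$logfile
order by group#;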
Although we don’t usually expect to do anything with the online redo log files except write them and forget them, there is a special case where a session can read the online redo log files when it discovers the in-memory version of a block to be corrupt and attempts to recover from the disk copy of the block. Of course, some features, such as Log Miner, Streams, and asynchronous Change Data Capture, have been created in recent years to take advantage of the redo log files, and some of the newer mechanisms for dealing with standby databases have become real-time and are bound into the process that writes the online redo.
-- Although we normally only write and forget the online redo log files, there are special cases where a session reads them, for example when it finds that the in-memory version of a block is corrupt and tries to recover it from the disk copy. Some features, such as Log Miner, Streams, and asynchronous Change Data Capture, also take advantage of the redo log files, and some of the newer standby database mechanisms operate in real time and are bound into the process that writes the online redo.
There is, however, one complication. There is a critical bottleneck in redo generation, the moment when a redo record has to be copied into the redo log buffer. Prior to 10g, Oracle would insert a redo record (typically consisting of just one pair of redo change vectors) into the redo log buffer for each change a session made to user data. But a single session might make many changes in a very short period of time, and there could be many sessions operating concurrently—and there’s only one redo log buffer that everyone wants to access.
-- The moment when a redo record has to be copied into the redo log buffer is a critical bottleneck in redo generation. Before 10g, Oracle inserted one redo record (typically a pair of change vectors, one for the data block and one for the undo block) into the redo log buffer for every change a session made to user data. A single session can make many changes in a very short time, and many sessions may operate concurrently, but there is only one redo log buffer that they all want to access.
It’s relatively easy to create a mechanism to control access to a piece of shared memory, and Oracle’s use of the redo allocation latch to protect the redo log buffer is fairly well known. A process that needs some space in the log buffer tries to acquire (get) the redo allocation latch, and once it has exclusive ownership of that latch, it can reserve some space in the buffer for the information it wants to write into the buffer. This avoids the threat of having multiple processes overwrite the same piece of memory in the log buffer, but if there are lots of processes constantly competing for the redo allocation latch, then the level of competition could end up “invisibly” consuming lots of resources (typically CPU spent on latch spinning) or even lots of sleep time as sessions take themselves off the run queue after failing to get the latch on the first spin.
-- It is relatively easy to build a mechanism that controls access to a piece of shared memory, and Oracle uses the redo allocation latch to protect the redo log buffer. A process that needs space in the log buffer tries to acquire the redo allocation latch, and once it holds the latch exclusively it can reserve space in the buffer for the information it wants to write; this stops multiple processes from overwriting the same piece of memory. But if many processes constantly compete for the redo allocation latch, the competition can invisibly consume a lot of resources (typically CPU spent spinning on the latch) or a lot of sleep time as sessions take themselves off the run queue after failing to get the latch on the first spin.
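The latch activity itself can be watched in v$latch; a sketch of the sort of query that shows how hard the redo latches are being hit (the latch names are as they appear in 9i/10g):

select name, gets, misses, spin_gets, sleeps, immediate_gets
from v$latch
where name in ('redo allocation', 'redo copy', 'redo writing');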
In older versions of Oracle, when the databases were less busy and the volume of redo generated was much lower, the “one change = one record = one allocation” strategy was good enough for most systems, but as systems became larger, the requirement for dealing with large numbers of concurrent allocations (particularly for OLTP systems) demanded a more scalable strategy. So a new mechanism combining private redo and in-memory undo appeared in 10g.
-- In older versions of Oracle, when databases were less busy and generated much less redo, the “one change = one record = one allocation” strategy was good enough for most systems. As systems grew larger, handling large numbers of concurrent allocations (particularly in OLTP systems) demanded a more scalable strategy, so Oracle 10g introduced a new mechanism combining private redo and in-memory undo.
In effect, a process can work its way through an entire transaction, generating all its change vectors and storing them in a pair of private redo log buffers. When the transaction completes, the process copies all the privately stored redo into the public redo log buffer, at which point the traditional log buffer processing takes over. This means that a process acquires the public redo allocation latch only once per transaction, rather than once per change.
-- In effect, a process can work its way through an entire transaction, generating all its change vectors and storing them in a pair of private redo log buffers. When the transaction completes, the process copies all the privately stored redo into the public redo log buffer, after which the traditional log buffer processing takes over. As a result, a process acquires the public redo allocation latch only once per transaction rather than once per change.
As a step toward improved scalability, Oracle 9.2 introduced the option for multiple log buffers with the log_parallelism parameter, but this option was kept fairly quiet and the general suggestion was that you didn’t need to know about it unless you had at least 16 CPUs. In 10g you get at least two public log buffers (redo threads) if you have more than one CPU.
-- Oracle 9.2 introduced the log_parallelism parameter to allow multiple log buffers, but the option was kept fairly quiet and the general advice was that you did not need to know about it unless you had at least 16 CPUs. In 10g you get at least two public log buffers (redo threads) if you have more than one CPU; each public log buffer corresponds to one redo thread.
There are a number of details (and restrictions) that need to be mentioned, but before we go into any of the complexities, let’s just take a note of how this changes some of the instance activity reported in the dynamic performance views. I’ve taken the script in core_demo_02.sql, removed the dump commands, and replaced them with calls to take snapshots of v$latch and v$sesstat (see core_demo_02b.sql in the code library). I’ve also modified the SQL to update 50 rows instead of 5 rows so that differences in workload stand out more clearly. The following results come from a 9i and a 10g system, respectively, running the same test. First the 9i results:
Latch                        Gets     Im_Gets
-----                        ----     -------
redo copy                       0          51
redo allocation                53           0

Name                               Value
----                               -----
redo entries                          51
redo size                         12,668
Note particularly in the 9i output that we have hit the redo copy and redo allocation latches 51 times each (with a couple of extra gets on the allocation latch from another process), and have created 51 redo entries. Compare this with the 10g results:
Latch                        Gets     Im_Gets
-----                        ----     -------
redo copy                       0           1
redo allocation                 5           1
In memory undo latch           53           1

Name                               Value
----                               -----
redo entries                           1
redo size                         12,048
In 10g, our session has hit the redo copy latch just once, and there has been just a little more activity on the redo allocation latch. We can also see that we have generated a single redo entry with a size that is slightly smaller than the total redo size from the 9i test. These results appear after the commit; if we took the same snapshot before the commit, we would see no redo entries (and a zero redo size), the gets on the In memory undo latch would drop to 51, and the gets on the redo allocation latch would be 1, rather than 5.
So there’s clearly a notable reduction in the activity and the threat of contention at a critical location. On the downside, we can see that 10g has, however, hit that new latch called the In memory undo latch 53 times in the course of our test, which makes it look as if we may simply have moved a contention problem from one place to another. We’ll take a note of that idea for later examination.
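The figures above come from snapshots of v$latch and the session statistics taken before and after the update; a rough sketch of the underlying queries (the real core_demo_02b.sql script is not reproduced here) might look like this:

-- Statistics for the current session.
select sn.name, ms.value
from v$mystat ms, v$statname sn
where sn.statistic# = ms.statistic#
and sn.name in ('redo entries', 'redo size');

-- Instance-wide latch activity for the latches of interest.
select name, gets, immediate_gets
from v$latch
where name in ('redo copy', 'redo allocation', 'In memory undo latch');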
There are various places we can look in the database to understand what has happened. We can examine v$latch_children to understand why the change in latch activity isn’t a new threat. We can examine the redo log file to see what the one large redo entry looks like. And we can find a couple of dynamic performance objects (x$kcrfstrand and x$ktifp) that will help us to gain an insight into the way in which various pieces of activity link together.
-- We can query several views and structures to understand what happened here: v$latch_children, x$kcrfstrand, and x$ktifp.
The enhanced infrastructure is based on two sets of memory structures. One set (called x$kcrfstrand, the private redo) handles “forward” change vectors, and the other set (called x$ktifp, the in-memory undo pool) handles the undo change vectors. The private redo structure also happens to hold information about the traditional “public” redo log buffer(s), so don’t be worried if you see two different patterns of information when you query it.
-- The enhanced infrastructure is built on two sets of memory structures: x$kcrfstrand (the private redo) handles the forward change vectors, and x$ktifp (the in-memory undo pool) handles the undo change vectors. Note that the private redo structure also holds information about the traditional public redo log buffer(s), so don’t be surprised to see two different patterns of information when you query it. Also remember that under certain conditions the new mechanism cannot be used and Oracle falls back to the old one, in which case redo records are written into the public redo log buffer at every change rather than being copied there in bulk from the private buffers at commit time.
The number of pools in x$ktifp (in-memory undo) is dependent on the size of the array that holds transaction details (v$transaction), which is set by parameter transactions (but may be derived from parameter sessions or parameter processes). Essentially, the number of pools defaults to transactions / 10 and each pool is covered by its own “In memory undo latch” latch.
-- The number of in-memory undo pools (x$ktifp) depends on the size of the array that holds transaction details (v$transaction), which is set by the transactions parameter (but may be derived from the sessions or processes parameters). By default the number of pools is transactions / 10, and each pool is protected by its own “In memory undo latch”.
For each entry in x$ktifp there is a corresponding private redo entry in x$kcrfstrand, and, as I mentioned earlier, there are then a few extra entries which are for the traditional “public” redo threads. The number of public redo threads is dictated by the cpu_count parameter, and seems to be ceiling(1 + cpu_count / 16). Each entry in x$kcrfstrand is covered by its own redo allocation latch, and each public redo thread is additionally covered by one redo copy latch per CPU (we’ll be examining the role of these latches in Chapter 6).
-- Each entry in x$ktifp has a corresponding private redo entry in x$kcrfstrand, and x$kcrfstrand also holds a few extra entries for the traditional public redo threads. The number of public redo threads is dictated by the cpu_count parameter and seems to be ceiling(1 + cpu_count / 16). Each entry in x$kcrfstrand is protected by its own redo allocation latch, and each public redo thread is additionally covered by one redo copy latch per CPU.
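A rough cross-check of these counts against the instance parameters (run as SYS; only row counts are taken from the x$ structures, since their column lists vary by version):

select name, value from v$parameter where name in ('transactions', 'cpu_count');

select count(*) as imu_pools    from x$ktifp;        -- in-memory undo pools
select count(*) as redo_strands from x$kcrfstrand;   -- private strands plus public redo threads

select name, count(*) as child_latches
from v$latch_children
where name in ('In memory undo latch', 'redo allocation')
group by name;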
If we go back to our original test, updating just five rows and two blocks in the table, Oracle would still go through the action of visiting the rows and cached blocks in the same order, but instead of packaging pairs of redo change vectors, writing them into the redo log buffer, and modifying the blocks, it would operate as follows:
-- Looking back at our earlier example in the light of the new mechanism:
1. Start the transaction by acquiring a matching pair of the private memory structures, one from x$ktifp and one from x$kcrfstrand.
2. Flag each affected block as “has private redo” (but don’t change the block).
3. Write each undo change vector into the selected in-memory undo pool.
4. Write each redo change vector into the selected private redo thread.
5. End the transaction by concatenating the two structures into a single redo change record.
6. Copy the redo change record into the redo log and apply the changes to the blocks.
If we look at the memory structures (see core_imu_01.sql in the code depot) just before we commit the transaction from the original test, we see the following:
INDX    UNDO_SIZE  UNDO_USAGE   REDO_SIZE  REDO_USAGE
----    ---------  ----------   ---------  ----------
   0        64000        4352       62976        3920
This shows us that the private memory areas for a session allow roughly 64KB for “forward” changes, and the same again for “undo” changes. For a 64-bit system this would be closer to 128KB each. The update to five rows has used about 4KB from each of the two areas.
If I then dump the redo log file after committing my change, this (stripped to a bare minimum) is the one redo record that I get:
REDO RECORD - Thread:1 RBA: 0x0000d2.00000002.0010 LEN: 0x0594 VLD: 0x0d
SCN: 0x0000.040026ae SUBSCN:  1 04/06/2011 04:46:06
CHANGE #1  TYP:0 CLS: 1 AFN:5 DBA:0x0142298a OBJ:76887
           SCN:0x0000.04002690 SEQ: 2 OP:11.5
CHANGE #2  TYP:0 CLS:23 AFN:2 DBA:0x00800039 OBJ:4294967295
           SCN:0x0000.0400267e SEQ: 1 OP:5.2
CHANGE #3  TYP:0 CLS: 1 AFN:5 DBA:0x0142298b OBJ:76887
           SCN:0x0000.04002690 SEQ: 2 OP:11.5
CHANGE #4  TYP:0 CLS: 1 AFN:5 DBA:0x0142298a OBJ:76887
           SCN:0x0000.040026ae SEQ: 1 OP:11.5
CHANGE #5  TYP:0 CLS: 1 AFN:5 DBA:0x0142298b OBJ:76887
           SCN:0x0000.040026ae SEQ: 1 OP:11.5
CHANGE #6  TYP:0 CLS: 1 AFN:5 DBA:0x0142298a OBJ:76887
           SCN:0x0000.040026ae SEQ: 2 OP:11.5
CHANGE #7  TYP:0 CLS:23 AFN:2 DBA:0x00800039 OBJ:4294967295
           SCN:0x0000.040026ae SEQ: 1 OP:5.4
CHANGE #8  TYP:0 CLS:24 AFN:2 DBA:0x00804a9b OBJ:4294967295
           SCN:0x0000.0400267d SEQ: 2 OP:5.1
CHANGE #9  TYP:0 CLS:24 AFN:2 DBA:0x00804a9b OBJ:4294967295
           SCN:0x0000.040026ae SEQ: 1 OP:5.1
CHANGE #10 TYP:0 CLS:24 AFN:2 DBA:0x00804a9b OBJ:4294967295
           SCN:0x0000.040026ae SEQ: 2 OP:5.1
CHANGE #11 TYP:0 CLS:24 AFN:2 DBA:0x00804a9b OBJ:4294967295
           SCN:0x0000.040026ae SEQ: 3 OP:5.1
CHANGE #12 TYP:0 CLS:24 AFN:2 DBA:0x00804a9b OBJ:4294967295
           SCN:0x0000.040026ae SEQ: 4 OP:5.1
You’ll notice that the length of the redo record (LEN:) is 0x594 = 1428, which matched the value of the redo size statistic I saw when I ran this particular test. This is significantly smaller than the sum of the 4352 and 3920 bytes reported as used in the in-memory structures, so there are clearly lots of extra bytes involved in tracking the private undo and redo—perhaps as starting overhead in the buffers.
If you read through the headers of the 12 separate change vectors, taking note particularly of the OP: code, you’ll see that we have five change vectors for code 11.5 followed by five for code 5.1. These are the five forward change vectors followed by the five undo block change vectors. Change vector #2 (code 5.2) is the start of transaction, and change vector #7 (code 5.4) is the so-called commit record, the end of transaction. We’ll be looking at those change vectors more closely in Chapter 3, but it’s worth mentioning at this point that while most of the change vectors are applied to data blocks only when the transaction commits, the change vector for the start of transaction is an important special case and is applied to the undo segment header block as the transaction starts.
So Oracle has a mechanism for reducing the number of times a session demands space from, and copies information into, the (public) redo log buffer, and that improves the level of concurrency we can achieve . . . up to a point. But you’re probably thinking that we have to pay for this benefit somewhere—and, of course, we do.
Earlier on we saw that every change we made resulted in an access to the In memory undo latch. Does that mean we have just moved the threat of latch activity rather than actually relieving it? Yes and no. We now hit only one latch (In memory undo latch) instead of two (redo allocation and redo copy), so we have at least halved the latch activity, but, more significantly, there are multiple child latches for the In memory undo latches, one for each in-memory undo pool. Before the new mechanism appeared, most systems ran with just one redo allocation latch, so although we now hit an In memory undo latch just as many times as we used to hit the redo allocation latch, we are spreading the access across far more latches.
It’s also worth noting that the new mechanism also has two types of redo allocation latch—one type covers the private redo threads, one type covers the public redo threads, and each thread has its own latch. This helps to explain the extra gets on the redo allocation latch statistic that we saw earlier: our session uses a private redo allocation latch to acquire a private redo thread, then on the commit it has to acquire a public redo allocation latch, and then the log writer (as we shall see in Chapter 6) acquires the public redo allocation latches (and my test system had two public redo threads) to write the log buffer to file.
-- The new mechanism has two types of redo allocation latch: one type covers the private redo threads and one type covers the public redo threads, and each thread has its own latch.
-- In other words, for the statistics we saw earlier: our session uses a private redo allocation latch to acquire a private redo thread; at commit time it has to acquire a public redo allocation latch; and then the log writer (LGWR) acquires the public redo allocation latches (the test system had two public redo threads) to write the log buffer out to the log file.
Overall, then, the amount of latch activity decreases and the focus of latch activity is spread a little more widely, which is a good thing. But in a multiuser system, there are always other points of view to consider—using the old mechanism, the amount of redo a session copied into the log buffer and applied to the database blocks at any one instant was very small; using the new mechanism, the amount of redo to copy and apply could be relatively large, which means it takes more time to apply to the database blocks, potentially blocking other sessions from accessing those blocks as the changes are made. This may be one reason why the private redo threads are strictly limited in size.
-- Overall, latch activity decreases under the new mechanism. But in a multiuser system there are other points of view to consider: with the old mechanism the amount of redo a session copied into the log buffer and applied to the blocks at any instant was very small; with the new mechanism the amount of redo to copy and apply can be relatively large, which means it takes longer to apply to the database blocks and can potentially block other sessions that want to access those blocks while the changes are made. This may be one reason why the private redo threads are strictly limited in size.
Moreover, using the old mechanism, a second session reading a changed block would see the changes immediately; with the new mechanism, a second session can see only that a block is subject to some private redo, so the second session is now responsible for tracking down the private redo and applying it to the block (if necessary), and then deciding what to do next with the block. (Think about the problems of referential integrity if you can’t immediately see that another session has, for example, deleted a primary key that you need.) This leads to longer code paths, and more complex code, but even if the resulting code for read consistency does use more CPU than it used to, there is always an argument for making several sessions use a little more CPU as a way of avoiding a single point of contention.
-- A subtler problem: with the old mechanism a second session would see the changes to a block immediately; with the new mechanism a second session can see only that the block is subject to some private redo, so it has to track down that private redo and apply it to the block (if necessary) before deciding what to do next. This means longer and more complex code paths, i.e. more CPU may be needed to get a read-consistent view of the data.
There is an important principle of optimization that is often overlooked. Sometimes it is better for everyone to do a little more work if that means they are operating in separate locations rather than constantly colliding on the same contention point—competition wastes resources.
-- An important optimization principle that is often overlooked: sometimes it is better for everyone to do a little more work in separate locations than to keep colliding on the same contention point, because competition wastes resources.
I don’t know how many different events there are that could force a session to construct new versions of blocks from private redo and undo, but I do know that there are several events that result in a session abandoning the new strategy before the commit.
-- I do not know how many different events can force a session to construct new versions of blocks from private redo and undo, but there are several events that cause a session to abandon the new strategy before the commit.
An obvious case where Oracle has to abandon the new mechanism is when either the private redo thread or the in-memory undo pool becomes full. As we saw earlier, each private area is limited to roughly 64KB (or 128KB if you’re running a 64-bit copy of Oracle). When an area is full, Oracle creates a single redo record, copies it to the public redo thread, and then continues using the public redo thread in the old way.
-- An obvious case where Oracle has to abandon the new mechanism is when either the private redo thread or the in-memory undo pool becomes full. As we saw in the example above, each private area is limited to roughly 64KB (128KB on a 64-bit system). When an area fills up, Oracle creates a single redo record, copies it into the public redo thread, and then carries on in the old way using the public redo thread.
But there are other events that cause this switch prematurely. For example, your SQL might trigger a recursive statement. For a quick check on possible causes, and how many times each has occurred, you could connect as SYS and run the following SQL (sample taken from 10.2.0.3):
-- There are also events that cause this switch prematurely; for example, your SQL might trigger a recursive statement. For a quick check on the possible causes and how many times each has occurred, connect as SYS and run the following SQL:
select ktiffcat, ktiffflc from x$ktiff;
KTIFFCAT                                    KTIFFFLC
--------------------------------------      --------
Undo pool overflow flushes                          0
Stack cv flushes                                   21
Multi-block undo flushes                            0
Max. chgs flushes                                   9
NTP flushes                                         0
Contention flushes                                 18
Redo pool overflow flushes                          0
Logfile space flushes                               0
Multiple persistent buffer flushes                  0
Bind time flushes                                   0
Rollback flushes                                    6
Commit flushes                                  13628
Recursive txn flushes                               2
Redo only CR flushes                                0
Ditributed txn flushes                              0
Set txn use rbs flushes                             0
Bitmap state change flushes                        26
Presumed commit violation                           0

18 rows selected.
Unfortunately, although there are various statistics relating to IMU in the v$sysstat dynamic performance view (e.g., IMU flushes), they don’t seem to correlate terribly well with the figures from the x$ structure—although, if you ignore a couple of the numbers, you can get quite close to thinking you’ve found the matching bits.
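The IMU-related instance statistics mentioned here can be listed with a simple query (the exact set of statistic names varies slightly between versions):

select name, value
from v$sysstat
where name like 'IMU%'
order by name;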
A short summary of the redo mechanism:

Suppose we change one row; the steps are:
1. Create the redo change vector for the data block.
2. Create the undo record for the block, to be inserted into an undo block in the undo tablespace.
3. Combine the undo record’s and the data block’s redo change vectors into a single redo record and write it into the log buffer.
4. Change the contents of the block.

Before Oracle 10g the old mechanism was used:
Every change a session makes in a transaction generates a redo record that is written into the redo log buffer. A single session can produce many changes in a very short time, and many sessions may be working concurrently, but there is only one redo log buffer for all of them to access, so copying redo records into the buffer becomes a serious bottleneck. The redo allocation latch protects the redo log buffer: a process that needs space in the buffer acquires the latch, and once it holds the latch exclusively it can reserve space and copy its information in, which prevents processes from overwriting each other. Under heavy concurrency, however, processes that fail to get the latch burn CPU spinning on it, or go to sleep and retry, which wastes resources.

When the database is not busy and relatively little redo is generated, the “one change = one record = one allocation” strategy is good enough for most systems; as systems grow and have to handle large numbers of concurrent allocations, a more scalable strategy is needed, so Oracle 10g introduced private redo and in-memory undo.

From Oracle 10g onward the new mechanism is used:
A process can work through an entire transaction, generating all its change vectors and storing them in a pair of private redo buffers. When the transaction completes, the process copies all the privately stored redo into the public redo log buffer, after which the remaining processing happens in the traditional way. A process therefore acquires the public redo allocation latch only once per transaction instead of once per change.

In the new mechanism there are two types of redo allocation latch, one covering the private redo threads and one covering the public redo threads, and each thread has its own latch.

Overall, latch activity decreases under the new mechanism. But in a multiuser system there are trade-offs: with the old mechanism the redo copied into the log buffer and applied to the blocks at any instant was very small, whereas with the new mechanism the amount of redo to copy and apply can be relatively large, which takes longer and can block other sessions from accessing the blocks being changed. This is one reason why the private redo threads are strictly limited in size.

A further issue: with the old mechanism a second session would see the changes to a block immediately; with the new mechanism it only sees that the block is subject to some private redo, so it must track down and apply that redo (if necessary) before deciding what to do next. This means longer, more complex code paths and more CPU to obtain a read-consistent view of the data.

An obvious case where Oracle abandons the new mechanism is when either the private redo thread or the in-memory undo pool becomes full. Each private area is limited to roughly 64KB (128KB on 64-bit systems); when it fills up, Oracle creates a single redo record, copies it into the public redo thread, and continues in the old way.
4. Undo Complexity
Undo is more complicated than redo. Most significantly, any process may, in principle, need to access any undo record at any time to “hide” an item of data that it is not yet supposed to see. To meet this requirement efficiently, Oracle keeps the undo records inside the database in a special tablespace known, unsurprisingly, as the undo tablespace; then the code has to maintain various pointers to the undo records so that a process knows where to find the undo records it needs. The advantage of keeping undo information inside the database in “ordinary” data files is that the blocks are subject to exactly the same buffering, writing, and recovery algorithms as every block in the database—the basic code to manage undo blocks is the same as the code to handle every other type of block.
-- Undo is more complicated than redo. In principle any process may need to read any undo record at any time to hide an item of data it is not yet supposed to see (i.e. to obtain the before-image from undo). To meet this requirement efficiently Oracle keeps the undo records inside the database in a dedicated undo tablespace and maintains various pointers so that a process knows where to find the undo records it needs. Keeping undo in ordinary data files means the undo blocks get exactly the same buffering, writing, and recovery treatment as every other block in the database, and the code that manages undo blocks is the same as the code that handles every other type of block.
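For orientation, the automatic undo configuration of an instance can be checked with a query along these lines (a sketch):

select name, value
from v$parameter
where name in ('undo_management', 'undo_tablespace', 'undo_retention');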
There are three reasons why a process needs to read an undo record, and therefore three ways in which chains of pointers run through the undo tablespace. We will examine all three in detail in Chapter 3, but I will make some initial comments about the commonest two uses now.
-- There are three reasons why a process needs to read an undo record, and so three different chains of pointers running through the undo tablespace. They are covered in detail later; here we look at the two commonest uses.
Linked lists of undo records are used to deal with read consistency, rolling back changes, and deriving commit SCNs that have been “lost” due to delayed block clean out. The third topic will be postponed until Chapter 3.
-- Linked lists of undo records are used for read consistency, for rolling back changes, and for deriving commit SCNs that have been “lost” because of delayed block cleanout.
4.1 Read Consistency
The first, and most commonly invoked, use of undo is read consistency, and I have already commented briefly on read consistency. The existence of undo allows a session to see an older version of the data when it’s not yet supposed to see a newer version.
-- The first, and most common, use of undo is read consistency: if a modified record has not yet been committed, other sessions use the undo information to see the before-image of the block, i.e. an older version of the data.
The requirement for read consistency means that a block must contain a pointer to the undo records that describe how to hide changes to the block. But there could be an arbitrarily large number of changes that need to be concealed, and insufficient space for that many pointers in a single block. So Oracle allows a limited number of pointers in each block (one for each concurrent transaction affecting the block), which are stored in the ITL entries. When a process creates an undo record, it (usually) overwrites one of the existing pointers, saving the previous value as part of the undo record.
-- Read consistency requires that a block contain pointers to the undo records that describe how to hide the changes made to that block. There may be a very large number of changes to conceal (sessions other than the one making the change must see only the before-image) and not enough room in a single block for that many pointers, so Oracle allows only a limited number of pointers per block (one for each concurrent transaction affecting the block), stored in the ITL entries. When a process creates an undo record it usually overwrites one of the existing pointers, saving the previous value as part of the undo record.
See also: Oracle ITL (Interested Transaction List) explained
http://blog.csdn.net/tianlesoftware/article/details/6573988
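To see the ITL entries of a particular block, and the undo record addresses (Uba) they point to, a block dump can be taken as SYS; the file and block numbers below are placeholders:

alter system dump datafile 5 block 186762;
-- The resulting trace file contains the block's ITL section (Itl, Xid, Uba, Flag, Lck, Scn/Fsc),
-- where Uba is the undo block address, the pointer discussed above.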
Take another look at the undo record I showed you earlier, after updating three rows in a single block:
*-----------------------------
* Rec #0xf  slt: 0x1a  objn: 45810(0x0000b2f2)  objd: 45810  tblspc: 12(0x0000000c)
*     Layer:  11 (Row)   opc: 1   rci 0x0e
Undo type:  Regular undo   Last buffer split:  No
Temp Object:  No
Tablespace Undo:  No
rdba: 0x00000000
*-----------------------------
KDO undo record:
KTB Redo
op: 0x02  ver: 0x01
op: C  uba: 0x0080009a.09d4.0d
KDO Op code: URP row dependencies Disabled
  xtype: XA  bdba: 0x02c0018a  hdba: 0x02c00189
itli: 2  ispac: 0  maxfr: 4863
tabn: 0 slot: 4(0x4) flag: 0x2c lock: 0 ckix: 16
ncol: 4 nnew: 1 size: -4
col  2: [ 6]  78 78 78 78 78 78
The table block holding the fifth row I had updated was pointing to this undo record, and we can see from the second line of the dump that it is record 0xf in the undo block. Seven lines up from the bottom of the dump you see that this record has op: C, which tells us that it is the continuation of an earlier update by the same transaction. This lets Oracle know that the rest of the line uba: 0x0080009a.09d4.0d is part of the information that has to be used to re-create the older version of the block: as the xxxxxx (78s) are copied back to column 2 of row 4, the value 0x0080009a.09d4.0d has to be copied back to ITL entry 2.
Of course, once Oracle has taken these steps to reconstruct an older version of the block, it will discover that it hasn’t yet gone far enough, but the pointer in ITL 2 is now telling it where to find the next undo record to apply. In this way a process can gradually work its way backward through time; the pointer in each ITL entry tells Oracle where to find an undo record to apply, and each undo record includes the information to take the ITL entry backward in time as well as taking the data backward in time.
4.2 Rollback
The second, major use of undo is in rolling back changes, either with an explicit rollback (or rollback to savepoint) or because a step in a transaction has failed and Oracle has issued an implicit, statement-level rollback.
-- The second major use of undo is rolling back changes, either with an explicit rollback (or rollback to savepoint) or because a step in a transaction failed and Oracle issued an implicit, statement-level rollback.
Read consistency is about a single block, and finding a linked list of all the undo records for that block. Rolling back is about the history of a transaction, so we need a linked list that runs through all the undo records for a transaction in the correct (which, in this case, means reverse) order.
-- Read consistency is about a single block and follows the linked list of all the undo records for that block; rolling back is about the history of a transaction, so it needs a linked list running through all the undo records of the transaction in the correct (i.e. reverse) order.
Here is a simple example demonstrating why we need to link the undo records “backward.” Imagine we update a row twice, changing a single column value from A to B and then from B to C, giving us two undo records. If we want to reverse the change, we have to change the C back to B before we can apply an undo record that says “change a B to an A”; in other words, we have to apply the second undo record before we apply the first undo record.
-- A simple example of why the undo records have to be linked backward: suppose we update a row twice, changing a column from A to B and then from B to C, producing two undo records. To roll back we must first apply the undo record that turns C back into B, and only then the one that turns B back into A.
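A minimal sketch of that scenario (the table and column names are illustrative):

update t1 set v1 = 'B' where id = 1;   -- undo record 1: how to change B back to A
update t1 set v1 = 'C' where id = 1;   -- undo record 2: how to change C back to B
rollback;                              -- applies undo record 2 first, then undo record 1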
Looking again at the sample undo record, we can see signs of the linked list. Line 3 of the dump includes the entry rci 0x0e. This tells Oracle that the undo record created immediately before this undo record was number 14 (0x0e) in the same undo block. It’s possible, of course, that the previous undo record will be in a different undo block, but that should be the case only if the current undo record is the first undo record of the undo block, in which case the rci entry would be zero and the rdba: entry four lines below it would give the block address of the previous undo record. If you have to go back a block, then the last record of the block will usually be the required record, although technically what you need is the record pointed at by the irb: entry. However, the only case in which the irb: entry might not point to the last record is if you have done a rollback to savepoint.
There’s an important difference between read consistency and rolling back, of course. For read consistency we make a copy of the data block in memory and apply the undo records to that block, and it’s a copy of the block that we can discard very rapidly once we’ve finished with it; when rolling back we acquire the current block and apply the undo record to that. This has three important effects:
-- Read consistency and rolling back differ in an important way: for read consistency we copy the data block into memory and apply the undo records to that copy, which can be discarded as soon as we have finished with it; when rolling back we acquire the current block and apply the undo records to it. This has three important effects:
1. The data block is the current block, so it is the version of the block that must eventually be written to disc.
-- Since it is the current block, it will eventually have to be written back to disc.
2. Because it is the current block, we will be generating redo as we change it (even though we are “changing it back to the way it used to be”).
-- Because it is the current block, the rollback itself generates redo as we change it.
3. Because Oracle has crash-recovery mechanisms that clean up accidents as efficiently as possible, we need to ensure that the undo record is marked as “undo applied” as we use it, and doing that generates even more redo.
-- Because of Oracle’s crash-recovery mechanisms, each undo record must be marked as “undo applied” as it is used, which generates even more redo.
If the undo record was one that had already been used for rolling back, line 4 of the dump would have looked like this:
Undo type:  Regular undo   User Undo Applied   Last buffer split:  No
In the raw block dump, the User Undo Applied flag is just 1 byte rather than a 17-character string.
Rolling back involves a lot of work, and a rollback can take roughly the same amount of time as the original transaction, possibly generating a similar amount of redo. But you have to remember that rolling back is an activity that changes data blocks, so you have to reacquire, modify, and write those blocks, and write the redo that describes how you’ve changed those blocks. Moreover, if the transaction was a large, long-running transaction, you may find that some of the blocks you’ve changed have been written to disc and flushed from the cache—so they’ll have to be read from disc before you can roll them back!
-- Rolling back involves a lot of work: it can take roughly as long as the original transaction and generate a similar amount of redo. Remember that rolling back changes data blocks, so the blocks have to be reacquired, modified, and written back, along with the redo that describes those changes.
-- If the transaction was large and long-running, some of the changed blocks may already have been written to disc and flushed from the cache, in which case they have to be read back from disc before they can be rolled back.
Some systems use Oracle tables to hold “temporary” or “scratchpad” information. One of the strategies used with such tables is to insert data without committing it so that read consistency makes it private to the session, and then roll back to make the data “go away.” There are many flaws in this strategy, the potentially high cost of rolling back being just one of them. The ability to eliminate the cost of rollback is one of the things that makes global temporary tables useful.
-- Some systems use ordinary Oracle tables to hold temporary or scratchpad data: they insert rows without committing, so that read consistency keeps the data private to the session, and then roll back to make the data go away. This strategy has many flaws, the potentially high cost of rolling back being just one of them, which is one reason global temporary tables are useful.
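A hedged sketch of the alternative mentioned here: a global temporary table whose rows simply disappear at the end of the transaction (or session), so the scratch data never has to be rolled back. The table name is illustrative:

create global temporary table gtt_scratch (
  id  number,
  val varchar2(30)
) on commit delete rows;   -- or: on commit preserve rows, for session-lifetime data

insert into gtt_scratch values (1, 'work in progress');
-- rows are private to this session and vanish automatically at commit
commit;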
There are other overheads introduced by rolling back, of course. When a session creates undo records, it acquires, pins, and fills one undo block at a time; when it is rolling back it gets one record from an undo block at a time, releasing and reacquiring the block for each record. This means that you generate more buffer visits on undo blocks to roll back than you generated when initially executing the transaction. Moreover, every time Oracle acquires an undo record, it checks that the tablespace it should be applied to is still online (if it isn’t, Oracle will transfer the undo record into a save undo segment in the system tablespace); this shows up as a get on the dictionary cache (specifically the dc_tablespaces cache).
-- Rolling back has other overheads too. When a session creates undo records it acquires, pins, and fills one undo block at a time; when rolling back it fetches one record at a time from the undo block, releasing and reacquiring the block for each record, so rolling back produces more buffer visits on undo blocks than the original transaction did. Also, every time Oracle picks up an undo record it checks that the tablespace it applies to is still online; if it is not, Oracle transfers the undo record into a save undo segment in the SYSTEM tablespace.
Summary:
In some ways redo is a very simple concept: every change to a block in a data file is described by a redo change vector, and these change vectors are written to the redo log buffer (almost) immediately, and are ultimately written into the redo log file.
-- In a sense redo is a very simple concept: every change to a block generates a redo change vector, these change vectors are written (almost) immediately into the redo log buffer, and they ultimately end up in the online redo log file.
As we make changes to data (which includes index entries and structural metadata), we also create undo records in the undo tablespace that describe how to reverse those changes. Since the undo tablespace is just another set of data files, we create redo change vectors to describe the undo records we store there.
-- As we change data we also create undo records in the undo tablespace describing how to reverse those changes; since the undo tablespace is just another set of data files, we also create redo change vectors describing the undo records we store there.
In earlier versions of Oracle, change vectors were usually combined in pairs—one describing the forward change, one describing the undo record—to create a single redo record that was written (initially) into the redo log buffer.
-- In earlier versions of Oracle the change vectors usually came in pairs, one describing the forward change and one describing the undo record, combined into a single redo record and written (initially) into the redo log buffer.
In later versions of Oracle, the step of moving change vectors into the redo log buffer was seen as an important bottleneck in OLTP systems, and a new mechanism was created to allow a session to accumulate all the changes for a transaction “in private” before creating one large redo record in the redo buffer.
-- In later versions of Oracle, copying change vectors into the redo log buffer was identified as an important bottleneck in OLTP systems, so a new mechanism was created that lets a session accumulate all the changes for a transaction in a private buffer before creating one large redo record in the public redo log buffer at commit time, reducing contention on the public redo allocation latch.
The new mechanism is strictly limited in the amount of work a session will do before it flushes its change vectors to the redo log buffer and switches to the older mechanism, and there are various events that will make this switch happen prematurely.
-- The new mechanism strictly limits the amount of work a session will do before it flushes its change vectors to the redo log buffer and switches to the older mechanism, and there are various events (illustrated above) that make this switch happen prematurely.
While redo operates as a simple “write it and forget it” stream, undo may be frequently reread in the ongoing activity of the database, and undo records have to be linked together in different ways to allow for efficient access. Read consistency requires chains of undo records for a given block; rolling back requires a chain of undo records for a given transaction.