IA-32 Protected-Mode Memory Management

Protected-Mode Memory Management

Translated from Intel's IA-32 Architecture Manual, Volume 3 (System Programming)
Abridged translation by 刘建文 ( http://blog.csdn.net/keminlau )

KEY: functionality, logical layering, step-by-step functionality, system

CHAPTER 3 PROTECTED-MODE MEMORY MANAGEMENT

3.1. MEMORY MANAGEMENT OVERVIEW

The memory management facilities of the IA-32 architecture are divided into two parts: segmentation and paging. Segmentation provides a mechanism of isolating individual code, data, and stack modules so that multiple programs (or tasks) can run on the same processor without interfering with one another. Paging provides a mechanism for implementing a conventional demand-paged, virtual-memory system where sections of a program’s execution environment are mapped into physical memory as needed. Paging can also be used to provide isolation between multiple tasks. When operating in protected mode, some form of segmentation must be used. There is no mode bit to disable segmentation. The use of paging, however, is optional.

The memory-management facilities of the IA-32 architecture are divided into two parts: segmentation and paging. Segmentation provides a mechanism for isolating code, data, and stack modules, so that multiple programs (or tasks) can run on the same CPU without interfering with one another. Paging provides a mechanism for implementing a "demand-paged" virtual-memory system: the program's execution environment is paged, and the pages it needs are mapped into physical memory as the program runs. Paging can also be used to isolate multiple tasks from each other. Segmentation is mandatory in protected mode; there is no mode bit that can disable it. Paging, however, is optional.
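The choice above ("segmentation always on, paging optional") is controlled through the CR0 register. Below is a minimal sketch, assuming a freestanding 32-bit GCC environment with a valid page directory already loaded into CR3; the function name is illustrative, not part of any real kernel.

    #include <stdint.h>

    #define CR0_PE (1u << 0)    /* Protection Enable: protected mode (segmentation) on   */
    #define CR0_PG (1u << 31)   /* Paging enable: optional, only meaningful while PE = 1 */

    /* Turn paging on; segmentation (PE) cannot be turned off in protected mode. */
    static inline void enable_paging(void)
    {
        uint32_t cr0;
        __asm__ volatile ("mov %%cr0, %0" : "=r"(cr0));
        cr0 |= CR0_PG;                                   /* PE stays set; PG is added */
        __asm__ volatile ("mov %0, %%cr0" : : "r"(cr0) : "memory");
    }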

These two mechanisms (segmentation and paging) can be configured to support simple single-program (or single-task) systems, multitasking systems, or multiple-processor systems that use shared memory.

As shown in Figure 3-1, segmentation provides a mechanism for dividing the processor’s addressable memory space (called the linear address space) into smaller protected address spaces called segments. Segments can be used to hold the code, data, and stack for a program or to hold system data structures (such as a TSS or LDT). If more than one program (or task) is running on a processor, each program can be assigned its own set of segments. The processor then enforces the boundaries between these segments and ensures that one program does not interfere with the execution of another program by writing into the other program’s segments.

The segmentation and paging mechanisms can be configured to support single-task systems, multitasking systems, or multiprocessor systems that use shared memory.

As shown in Figure 3-1, segmentation divides the CPU's addressable space (called the linear address space) into smaller, protected segments. These segments hold a program's code, data, and stack, or system data structures (such as a TSS or LDT). If the processor is running more than one task, each task can be given its own set of segments; the processor then enforces the boundaries between them so that one program cannot disturb another by writing into its segments.

The segmentation mechanism also allows typing of segments so that the operations that may be performed on a particular type of segment can be restricted.

All the segments in a system are contained in the processor’s linear address space. To locate a byte in a particular segment, a logical address (also called a far pointer) must be provided. A logical address consists of a segment selector and an offset. The segment selector is a unique identifier for a segment. Among other things it provides an offset into a descriptor table (such as the global descriptor table, GDT) to a data structure called a segment descriptor. Each segment has a segment descriptor, which specifies the size of the segment, the access rights and privilege level for the segment, the segment type, and the location of the first byte of the segment in the linear address space (called the base address of the segment). The offset part of the logical address is added to the base address for the segment to locate a byte within the segment. The base address plus the offset thus forms a linear address in the processor’s linear address space.

All of a process's segments lie within the CPU's linear address space. To access a byte in a particular segment, the process must supply that byte's logical address (also called a far pointer). A logical address consists of a segment selector and an offset. The segment selector uniquely identifies a segment and leads to a data structure called a segment descriptor; segment descriptors are stored in a descriptor table (such as the global descriptor table, GDT). Every segment has a segment descriptor, which specifies the segment's size, its access rights and privilege level, its type, and the location of the segment's first byte in the linear address space (called the segment's base address). The linear address is obtained by adding the offset to the segment's base address.
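A minimal sketch of that logical-to-linear step, in C. The struct layout and names are illustrative only: a real descriptor packs these fields into 8 bytes, and the limit and privilege checks are performed by the hardware, not by software.

    #include <stdint.h>

    struct segment_descriptor {   /* simplified, unpacked view of a descriptor  */
        uint32_t base;            /* linear address of the segment's first byte */
        uint32_t limit;           /* segment size minus one                     */
        uint8_t  type;            /* code, data, TSS, LDT, ...                  */
        uint8_t  dpl;             /* descriptor privilege level (0-3)           */
    };

    /* logical address = (selector, offset); the selector's upper 13 bits index
       a descriptor table (GDT or LDT), and linear = base + offset.             */
    static uint32_t logical_to_linear(const struct segment_descriptor *table,
                                      uint16_t selector, uint32_t offset)
    {
        const struct segment_descriptor *d = &table[selector >> 3];
        if (offset > d->limit)    /* outside the segment: hardware raises #GP   */
            return 0;             /* placeholder for the fault                  */
        return d->base + offset;  /* the resulting linear address               */
    }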

If paging is not used, the linear address space of the processor is mapped directly into the physical address space of the processor. The physical address space is defined as the range of addresses that the processor can generate on its address bus.

Because multitasking computing systems commonly define a linear address space much larger than it is economically feasible to contain all at once in physical memory, some method of “virtualizing” the linear address space is needed. This virtualization of the linear address space is handled through the processor’s paging mechanism.

If paging is not used, the processor's linear address space maps directly onto its physical address space; the physical address space is simply the range of addresses the processor can generate on its address bus. Because a multitasking system usually defines a linear address space far larger than it is economical to keep entirely in physical memory, some way of "virtualizing" the linear address space is needed; this also simplifies programming and improves memory utilization across processes. The CPU's paging mechanism provides that virtualization.

Paging supports a “virtual memory” environment where a large linear address space is simulated with a small amount of physical memory (RAM and ROM) and some disk storage. When using paging, each segment is divided into pages (typically 4 KBytes each in size), which are stored either in physical memory or on the disk. The operating system or executive maintains a page directory and a set of page tables to keep track of the pages. When a program (or task) attempts to access an address location in the linear address space, the processor uses the page directory and page tables to translate the linear address into a physical address and then performs the requested operation (read or write) on the memory location. If the page being accessed is not currently in physical memory, the processor interrupts execution of the program (by generating a page-fault exception). The operating system or executive then reads the page into physical memory from the disk and continues executing the program.

"Virtual memory" uses physical memory plus disk storage to simulate the CPU's linear address space (kemin: a high-level-language source names only symbolic addresses, which are already "virtual"; with virtual memory, even a fixed address written in assembly is virtual. The question is how this virtual memory is managed). When paging is used, each of a process's segments is divided into fixed-size pages (typically 4 KBytes), which may reside either in physical memory or on disk. The operating system maintains a page directory (page directory) and a set of page tables to keep track of these pages. When a process tries to access a location in the linear address space, the processor first uses the page directory and page tables to translate the linear address into a physical address, and then performs the access (read or write) (kemin: the translation details are not given here). If the page being accessed is not currently in physical memory, the processor interrupts the process by raising a page-fault exception (kemin: how does it decide that a page is not in memory?). The operating system then reads the page in from disk and resumes the process (kemin: what happens before and after the page is read in is not described).
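A minimal sketch of the two-level, 4-KByte translation described above, in C (32-bit, non-PAE). Bit 0 of each directory or table entry is the Present flag; when it is clear, the hardware raises the page fault mentioned in the text, which is how "not in physical memory" is detected. The code assumes, for illustration, that page-table memory can be read directly (e.g. identity-mapped); all names are hypothetical.

    #include <stdint.h>
    #include <stdbool.h>

    #define PTE_PRESENT 0x1u            /* bit 0: the page (or page table) is in memory */
    #define FRAME_MASK  0xFFFFF000u     /* bits 31..12: physical frame address          */

    static bool linear_to_physical(const uint32_t *page_directory,   /* 1024 entries */
                                   uint32_t linear, uint32_t *physical)
    {
        uint32_t pde = page_directory[linear >> 22];                 /* bits 31..22  */
        if (!(pde & PTE_PRESENT))
            return false;                                            /* page fault   */

        const uint32_t *page_table =
            (const uint32_t *)(uintptr_t)(pde & FRAME_MASK);
        uint32_t pte = page_table[(linear >> 12) & 0x3FFu];          /* bits 21..12  */
        if (!(pte & PTE_PRESENT))
            return false;                                            /* page fault   */

        *physical = (pte & FRAME_MASK) | (linear & 0xFFFu);          /* bits 11..0   */
        return true;
    }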

When paging is implemented properly in the operating system or executive, the swapping of pages between physical memory and the disk is transparent to the correct execution of a program. Even programs written for 16-bit IA-32 processors can be paged (transparently) when they are run in virtual-8086 mode.

(to be continued…)
