Operating Systems: Three Easy Pieces

Chapter 4 The Abstraction: The Process

A process is a running program.

Q: How does the OS virtualize the CPU?

A: By running one process, then stopping it and running another, and so forth, the OS can promote the illusion that many virtual CPUs exist when in fact there is only one physical CPU (or a few). This technique is known as time sharing.

Q: What is the difference between high-level policies and low-level mechanisms?

A: You can think of the mechanism as providing the answer to a how question about a system; for example, how does an operating system perform a context switch? The policy provides the answer to a which question; for example, which process should the operating system run right now?

Q: What APIs does an OS typically provide over the lifetime of a process?

A: Create, Destroy, Wait, Miscellaneous Control, Status.

Q: How does the OS get a program up and running? How does process creation actually work?

A: Modern OSes perform the loading process lazily, i.e., by loading pieces of code or data only as they are needed during program execution.

Q: Once the code and static data are loaded into memory, there are a few other things the OS needs to do before running the process. What are they?

A: Firstly, some memory must be allocated for the program's run-time stack (or just stack). Secondly, the OS may also create some initial memory for the program's heap. Thirdly, the OS will also do some other initialization tasks, particularly as related to input/output (I/O). For example, in UNIX systems, each process by default has three open file descriptors, for standard input, output, and error; these descriptors let programs easily read input from the terminal as well as print output to the screen. We'll learn more about I/O, file descriptors, and the like in the third part of the book on persistence.

Q: What are the states of a process?

A: Running: In the running state, a process is running on a processor. This means it is executing instructions.

Ready: In the ready state, a process is ready to run but for some reason the OS has chosen not to run it at this given moment.

Blocked: In the blocked state, a process has performed some kind of operation that makes it not ready to run until some other event takes place. A common example: When a process initiates an I/O request to a disk, it becomes blocked and thus some other process can use the processor.

Q: What is the most basic abstraction of the OS: the process?

A: It is quite simply viewed as a running program. With this conceptual view in mind, we will now move on to the nitty-gritty: the low-level mechanisms needed to implement processes, and the higher-level policies required to schedule them in an intelligent way. By combining mechanisms and policies, we will build up our understanding of how an operating system virtualizes the CPU.

Q: What is the exec() system call?

A: This system call is useful when you want to run a program that is different from the calling program. For example, calling fork() in p2.c is only useful if you want to keep running copies of the same program. However, often you want to run a different program; exec() does just that.

Q: Why would we build such an odd interface to what should be the simple act of creating a new process?

A: Well, as it turns out, the separation of fork() and exec() is essential in building a UNIX shell, because it lets the shell run code after the call to fork() but before the call to exec(). The shell is just a user program. It shows you a prompt and then waits for you to type something into it. You then type a command (i.e., the name of an executable program, plus any arguments) into it; in most cases, the shell then figures out where in the file system the executable resides, calls fork() to create a new child process to run the command, calls some variant of exec() to run the command, and then waits for the command to complete by calling wait(). When the child completes, the shell returns from wait() and prints out a prompt again, ready for your next command.

Q: What should a user process do when it wishes to perform some kind of privileged operation, such as reading from disk?

A: To enable this, virtually all modern hardware provides the ability for user programs to perform a system call.

Q: How is a system call executed?

A: To execute a system call, a program must execute a special trap instruction. This instruction simultaneously jumps into the kernel and raises the privilege level to kernel mode; once in the kernel, the system can now perform whatever privileged operations are needed (if allowed), and thus do the required work for the calling process. When finished, the OS calls a special return-from-trap instruction, which, as you might expect, returns into the calling user program while simultaneously reducing the privilege level back to user mode.

The hardware needs to be a bit careful when executing a trap, in that it must make sure to save enough of the caller's register state in order to be able to return correctly when the OS issues the return-from-trap instruction. On x86, for example, the processor will push the program counter, flags, and a few other registers onto a per-process kernel stack; the return-from-trap will pop these values off the stack and resume execution of the user-mode program.
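As a concrete illustration (a hedged sketch, assuming a Linux system): the C library's syscall(2) wrapper lets a program issue a trap directly. getpid is used here only because it is a harmless system call; the libc wrapper getpid() performs the same trap on our behalf.

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    // Both calls below end in a trap instruction; the wrapper hides the
    // details of placing the system-call number where the kernel expects it.
    long pid = syscall(SYS_getpid);            // raw trap into the kernel
    printf("pid via raw syscall: %ld\n", pid);
    printf("pid via libc wrapper: %d\n", (int) getpid());
    return 0;
}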

Q: What is the problem when switching between processes?

A: If a process is running on the CPU, this by definition means the OS is not running. If the OS is not running, how can it do anything at all? (hint: it can't). It is a real problem: there is clearly no way for the OS to take an action if it is not running on the CPU.

One approach is known as the cooperative approach. In this style, the OS trusts the processes of the system to behave reasonably. Processes that run for too long are assumed to periodically give up the CPU so that the OS can decide to run some other task. In a cooperative scheduling system, the OS regains control of the CPU by waiting for a system call or an illegal operation of some kind to take place. 

So, in the cooperative approach, your only recourse when a process gets stuck in an infinite loop is to resort to the age-old solution to all problems in computer systems: reboot the machine. 

The second approach is non-cooperative: a timer interrupt. When the interrupt is raised, the currently running process is halted, and a pre-configured interrupt handler in the OS runs.

Chapter 7 Scheduling : Introduction

Q: What is the convoy effect? (It occurs under First In, First Out scheduling.)

A: A number of relatively short potential consumers of a resource get queued behind a heavyweight resource consumer.

Q: What is Shortest Job First (SJF)?

A: It runs the shortest job first, then the next shortest, and so on.
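To see why running a heavyweight job first hurts (the convoy effect above), here is a small sketch, not from the book, that computes average turnaround time for three jobs arriving at t = 0 under FIFO and SJF orderings:

#include <stdio.h>

// Average turnaround time for jobs that all arrive at t = 0 and run
// to completion in the given order.
static double avg_turnaround(const int run_time[], int n) {
    int now = 0, total = 0;
    for (int i = 0; i < n; i++) {
        now += run_time[i]; // job i completes at time 'now'
        total += now;       // turnaround = completion time - arrival (0)
    }
    return (double) total / n;
}

int main(void) {
    int fifo[] = {100, 10, 10}; // heavyweight job first: convoy effect
    int sjf[]  = {10, 10, 100}; // shortest jobs first
    printf("FIFO average turnaround: %.1f\n", avg_turnaround(fifo, 3)); // 110.0
    printf("SJF  average turnaround: %.1f\n", avg_turnaround(sjf, 3));  // 50.0
    return 0;
}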

Q: What is Shortest Time-to-Completion First (STCF)?

A: Any time a new job enters the system, the scheduler determines which of the remaining jobs (including the new one) has the least time left, and then schedules that one.

Q: What is Round Robin (RR) scheduling?

A: The basic idea is simple: instead of running jobs to completion, RR runs a job for a time slice (sometimes called a scheduling quantum) and then switches to the next job in the run queue. It repeatedly does so until the jobs are finished. For this reason, RR is sometimes called time-slicing. 

Q: What is the Multi-Level Feedback Queue?

A: The MLFQ has a number of distinct queues, each assigned a different priority level. At any given time, a job that is ready to run is on a single queue. MLFQ uses priorities to decide which job should run at a given time: a job with higher priority (i.e., a job on a higher queue) is chosen to run.

Q: What happens when more than one job on a given queue has the same priority?

A: In this case, we will just use round-robin scheduling among those jobs.

Q: What are the two basic rules for MLFQ?

A: Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).

Rule 2: If Priority(A) = Priority(B), A & B run in RR.

We need to understand how job priority changes over time.

Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).

Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).

Rule 5: After some time period S, move all the jobs in the system to the topmost queue.

Q: How do we parameterize such a scheduler?

A: A few other issues arise with MLFQ scheduling. How many queues should there be? How big should the time slice be per queue? How often should priority be boosted in order to avoid starvation and account for changes in behavior? There are no easy answers to these questions, and thus only some experience with workloads and subsequent tuning of the scheduler will lead to a satisfactory balance.

Most MLFQ variants allow for varying time-slice length across different queues. The high-priority queues are usually given short time slices;

The low-priority queues, in contrast, contain long-running jobs that are CPU-bound; hence, longer time slices work well (e.g., 100s of ms).

Q: Why do we call it the Multi-Level Feedback Queue (MLFQ)?

A: It has multiple levels of queues, and it uses feedback to determine the priority of a given job.

Chapter 12 A Dialogue on Memory Virtualization

1. Every address generated by a user program is a virtual address.

2. Q: Why does the OS want to provide virtual memory?

A: The OS will give each program the view that it has a large contiguous address space to put its code and data into, for ease of use. As a programmer, you never have to worry about things like "where should I store this variable?" because the virtual address space of the program is large and has lots of room for that sort of thing.

Chapter 13 The Abstraction: Address Spaces


(Figure: an example 16KB address space, with program code at 0KB, the heap starting at 1KB and growing downward, free space in the middle, and the stack at 16KB growing upward.)

Q: What is the arrangement of the heap and the stack?

A: We have two regions of the address space that may grow (and shrink) while the program runs. Those are the heap (at the top) and the stack (at the bottom). We place them like this because each wishes to be able to grow, and by putting them at opposite ends of the address space, we can allow such growth: they just have to grow in opposite directions. The heap thus starts just after the code (at 1KB) and grows downward (say when a user requests more memory via malloc()); the stack starts at 16KB and grows upward (say when a user makes a procedure call). However, this placement of stack and heap is just a convention; you could arrange the address space in a different way if you'd like (as we'll see later, when multiple threads co-exist in an address space, no nice way to divide the address space like this works anymore, alas).

Q: What is the key to the virtualization of memory?

A: We say the OS is virtualizing memory because the running program thinks it is loaded into memory at a particular address (say 0) and has a potentially very large address space (say 32 or 64 bits); the reality is quite different.

Q: What is the principle of isolation?

A: Isolation is a key principle in building reliable systems. Operating systems strive to isolate processes from each other and in this way prevent one from harming the other. By using memory isolation, the OS further ensures that running programs cannot affect the operation of the underlying OS. 

Q: What are the goals of virtualizing memory?

A: One major goal of a virtual memory (VM) system is transparency. The OS should implement virtual memory in a way that is invisible to the running program. Thus, the program shouldn't be aware of the fact that memory is virtualized; rather, the program behaves as if it has its own private physical memory. Behind the scenes, the OS (and hardware) does all the work to multiplex memory among many different jobs, and hence implements the illusion.

Another goal of VM is efficiency. The OS should strive to make the virtualization as efficient as possible.

Finally, a third VM goal is protection. The OS should make sure to protect processes from one another as well as the OS itself from processes. When one process performs a load, a store, or an instruction fetch, it should not be able to access or affect in any way the memory contents of any other process or the OS itself (that is, anything outside its address space). Protection thus enables us to deliver the property of isolation among processes; each process should be running in its own isolated cocoon, safe from the ravages of other faulty or even malicious processes.

Q: What is the address you see?

A: Ever write a C program that prints out a pointer? The value you see (some large number, often printed in hexadecimal) is a virtual address.
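A minimal sketch in the spirit of the book's example prints rough locations of code, heap, and stack; all three values are virtual addresses (the cast of main to void * is a common but technically non-portable idiom):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("location of code : %p\n", (void *) main);
    printf("location of heap : %p\n", (void *) malloc(100)); // never freed; just a probe
    int x = 3;
    printf("location of stack: %p\n", (void *) &x);
    return 0;
}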

Q: Why is sizeof() thought of as an operator and not a function call?

A: The actual size is known at compile time and thus a number (in this case, 8, for a double ) is substituted as the argument to malloc(). A function call would take place at run time.

Q: What does sizeof() return for a pointer versus an array? Consider:

int *x = malloc(10 * sizeof(int));
printf("%zu\n", sizeof(x));

When we use sizeof() in the next line, it returns a small value, such as 4 (on 32-bit machines) or 8 (on 64-bit machines). The reason is that in this case, sizeof() thinks we are simply asking how big a pointer to an integer is, not how much memory we have dynamically allocated.

int x[10];
printf("%zu\n", sizeof(x));

In this case, there is enough static information for the compiler to know that 40 bytes have been allocated.

Q: What must be taken care of when using malloc()?

A: 1. When declaring space for a string, use the following idiom: malloc(strlen(s) + 1), which gets the length of the string using the function strlen() and adds 1 to make room for the end-of-string character.

Q: What is the return type of malloc()?

A: malloc() returns a pointer to type void. Doing so is just the way in C to pass back an address and let the programmer decide what to do with it. The programmer further helps out by using what is called a cast.

Chapter 15 Mechanism: Address Translation

Q: What is address translation?

A: With address translation, the hardware transforms each memory access (e.g., an instruction fetch, load, or store), changing the virtual address provided by the instruction to a physical address where the desired information is actually located. 

Q: What are base and bounds?

A: Specifically, we'll need two hardware registers within each CPU: one is called the base register, and the other the bounds (sometimes called a limit register). The bounds register holds the size of the address space; the hardware first checks that each virtual address is within bounds, and then computes:

physical address = virtual address + base;

Q: What is hardware-based dynamic relocation?

A: With dynamic relocation, we can see how a little hardware goes a long way. Namely, a base register is used to transform virtual addresses (generated by the program) into physical addresses, and a bounds register ensures that those addresses are within the confines of the address space.
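A software sketch of what the hardware does on every memory reference under base-and-bounds; the register values and function name are illustrative, not a real API:

#include <stdio.h>
#include <stdlib.h>

typedef unsigned long addr_t;

static const addr_t base   = 32768; // loaded by the OS on a context switch
static const addr_t bounds = 16384; // size of the process's address space

static addr_t translate(addr_t vaddr) {
    if (vaddr >= bounds) {
        // The hardware raises an exception; the OS will likely kill the process.
        fprintf(stderr, "out-of-bounds access: %lu\n", (unsigned long) vaddr);
        exit(1);
    }
    return vaddr + base; // physical address = virtual address + base
}

int main(void) {
    printf("virtual 1000 -> physical %lu\n", (unsigned long) translate(1000)); // 33768
    return 0;
}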

Chapter 16 Segmentation

Q: 

Chapter 17 Free-Space Management

Q: What are the strategies for free-space management?

A: 1. Best Fit: first, search through the free list and find chunks of free memory that are as big as or bigger than the requested size. Then, return the one that is the smallest in that group of candidates (see the sketch after this list).

2. Worst Fit: the opposite approach: find the largest chunk, return the requested amount, and keep the remaining (large) chunk on the free list.

3. First Fit: find the first chunk that is big enough, return the requested amount to the user, and keep the remainder on the free list.
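As a rough sketch of the best-fit search from item 1 (the free-list node layout is an assumption, not the book's code):

#include <stddef.h>

typedef struct node_t {
    size_t size;          // bytes in this free chunk
    struct node_t *next;  // next chunk on the free list
} node_t;

// Return the smallest free chunk that still satisfies the request,
// or NULL if no chunk is big enough.
node_t *best_fit(node_t *head, size_t want) {
    node_t *best = NULL;
    for (node_t *cur = head; cur != NULL; cur = cur->next)
        if (cur->size >= want && (best == NULL || cur->size < best->size))
            best = cur;
    return best;
}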

Chapter 18 Paging: Introduction

Q: 

Chapter 26 Concurrency: An Introduction

Q: What is the difference between processes and threads?

A: 1. There is one major difference in the context switch we perform between threads as compared to processes: the address space remains the same (i.e., there is no need to switch which page table we are using).

2. One other major difference between threads and processes concerns the stack. Instead of a single stack in the address space, there will be one per thread. Any stack-allocated variables, parameters, return values, and other things that we put on the stack will be placed in what is sometimes called thread-local storage, i.e., the stack of the relevant thread.

Trouble: Before, the stack and heap could grow independently, and trouble only arose when you ran out of room in the address space. Here, we no longer have such a nice situation. Fortunately, this is usually OK, as stacks do not generally have to be very large (the exception being in programs that make heavy use of recursion).

Tip: Creating and scheduling threads are separate operations.

Q: What is a race condition?

A: The results depend on the timing of the code's execution. With some bad luck (i.e., context switches that occur at untimely points in the execution), we get the wrong result.

Q: What is a critical section?

A: A critical section is a piece of code that accesses a shared resource, usually a variable or data structure.

Q: What is a race condition, more precisely?

A: A race condition arises when multiple threads of execution enter the critical section at roughly the same time; both attempt to update the shared data structure, leading to a surprising (and perhaps undesirable) outcome.

Q: What is an indeterminate program?

A: An indeterminate program contains one or more race conditions; the output of the program varies from run to run, depending on which threads ran when. The outcome is thus not deterministic, unlike the deterministic output we usually expect from computer systems.

Q: What is mutual exclusion?

A: To avoid these problems, threads should use some kind of mutual exclusion primitives; doing so guarantees that only a single thread ever enters a critical section, thus avoiding races, and resulting in deterministic program outputs.

Q: What are atomic operations?

A: Atomic operations are one of the most powerful underlying techniques in building computer systems, from computer architecture, to concurrent code (what we are studying here), to file systems (which we'll study soon enough), database management systems, and even distributed systems.

Q: What is pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg)?

The first, thread, is a pointer to a structure of type pthread_t; we'll use this structure to interact with this thread, and thus we need to pass it to pthread_create() in order to initialize it.

The second argument, attr, is used to specify any attributes this thread might have. Some examples include setting the stack size or perhaps information about the scheduling priority of the thread.

The third argument is the most complex, but is really just asking: which function should this thread start running in? In C, we call this a function pointer, and this one tells us the following is expected: a function name (start_routine), which is passed a single argument of type void * (as indicated in the parentheses after start_routine), and which returns a value of type void * (i.e., a void pointer).

The fourth argument, arg, is exactly the argument to be passed to the function where the thread begins execution. You might ask: why do we need these void pointers? Well, the answer is quite simple: having a void pointer as an argument to the function start_routine allows us to pass in any type of argument; having it as a return value allows the thread to return any type of result.
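Putting the four arguments together, a minimal create-and-join sketch (compile with -pthread; the thread function and message are our own):

#include <stdio.h>
#include <pthread.h>

void *start_routine(void *arg) {
    printf("hello from thread: %s\n", (char *) arg);
    return NULL;
}

int main(void) {
    pthread_t t;
    int rc = pthread_create(&t, NULL, start_routine, "A"); // default attributes
    if (rc != 0)
        return 1;          // always check return codes
    pthread_join(t, NULL); // wait for the thread to finish
    return 0;
}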

Q: What is the basic pair of routines provided for acquiring and releasing locks?

A: int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);

Q: What does basic code using a lock look like? Please give an example.

A:

pthread_mutex_t lock;
pthread_mutex_lock(&lock);
x = x + 1; // or whatever your critical section is
pthread_mutex_unlock(&lock);

Q: What are the problems with the code above?

A: The first problem is a lack of proper initialization. There are two ways to initialize locks. One way is to use PTHREAD_MUTEX_INITIALIZER, as follows:

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

Doing so sets the lock to the default values and thus makes the lock usable. The dynamic way to do it (i.e., at run time) is to make a call to pthread_mutex_init(), as follows:

int rc = pthread_mutex_init(&lock, NULL);
assert(rc == 0); // always check success!

Q: What are trylock and timedlock?

A: int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_timedlock(pthread_mutex_t *mutex, const struct timespec *abs_timeout);

The trylock version returns failure if the lock is already held; the timed lock version of acquiring a lock returns after a timeout or after acquiring the lock, whichever happens first.
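A small usage sketch of trylock (the fallback branch is illustrative):

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

if (pthread_mutex_trylock(&m) == 0) {
    // Lock acquired: touch shared state, then release.
    pthread_mutex_unlock(&m);
} else {
    // Lock was held by another thread: do other work instead of blocking.
}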

Q: What is a condition variable?

A: Condition variables are useful when some kind of signaling must take place between threads, if one thread is waiting for another to do something before it can continue. Two primary routines are used by programs wishing to interact in this way: 

int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_signal(pthread_cond_t *cond);
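A classic wait-for-a-flag sketch using this pair (the done flag and routine names are illustrative); note the while loop, which re-checks the condition after waking:

int done = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t c = PTHREAD_COND_INITIALIZER;

void thr_exit(void) { // called by the child when it finishes
    pthread_mutex_lock(&m);
    done = 1;
    pthread_cond_signal(&c);
    pthread_mutex_unlock(&m);
}

void thr_join(void) { // called by the parent to wait for the child
    pthread_mutex_lock(&m);
    while (done == 0)
        pthread_cond_wait(&c, &m); // atomically releases m while sleeping
    pthread_mutex_unlock(&m);
}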

Tip: the hard part with threads is not the APIs, but rather the tricky logic of how you build concurrent programs. Read on to learn more.

->Keep it simple. Above all else, any code to lock or signal between threads should be as simple as possible. Tricky thread interactions lead to bugs.

->Minimize thread interactions. Try to keep the number of ways in which threads interact to a minimum. Each interaction should be carefully thought out and constructed with tried and true approaches (many of which we will learn about in the coming chapters).

->Initialize locks and condition variables. Failure to do so will lead to code that sometimes works and sometimes fails in very strange ways.

->Check your return codes. Of course, in any C and UNIX programming you do, you should be checking each and every return code, and it's true here as well. Failure to do so will lead to bizarre and hard-to-understand behavior, making you likely to (a) scream, (b) pull some of your hair out, or (c) both.

->Each thread has its own stack. As related to the point above, please remember that each thread has its own stack. Thus, if you have a locally-allocated variable inside of some function a thread is executing, it is essentially private to that thread; no other thread can (easily) access it. To share data between threads, the values must be in the heap or otherwise some locale that is globally accessible.

->Always use condition variables to signal between threads. While it is often tempting to use a simple flag, don't do it.

Chapter 28 Locks

A lock is just a variable, and thus to use one, you must declare a lock variable of some kind (such as mutex above). This lock variable (or just "lock" for short) holds the state of the lock at any instant in time.

Q: How do we evaluate locks?

A: 1. Mutual exclusion. 2. Fairness. 3. Performance.

Q: What was one of the earliest solutions used to provide mutual exclusion?

A: Disabling interrupts for critical sections:

void lock() {
    DisableInterrupts();
}
void unlock() {
    EnableInterrupts();
}

The drawbacks of the method above:

->The OS may never regain control of the system, and there is only one recourse: restart the system.

->The approach does not work on multiprocessors. If multiple threads are running on different CPUs, and each tries to enter the same critical section, it does not matter whether interrupts are disabled; threads can run on other processors and thus still enter the critical section.

->Probably least important, this approach can be inefficient. Compared to normal instruction execution, code that masks or unmasks interrupts tends to be executed slowly by modern CPUs.

Q: What is a spin lock?

A: 

int TestAndSet(int *ptr, int new) {
    int old = *ptr; // fetch old value at ptr
    *ptr = new;     // store 'new' into ptr
    return old;     // return the old value
}

typedef struct __lock_t {
    int flag;
} lock_t;

void init(lock_t *lock) {
    // 0 indicates that the lock is available, 1 that it is held
    lock->flag = 0;
}

void lock(lock_t *lock) {
    while (TestAndSet(&lock->flag, 1) == 1)
        ; // spin-wait (do nothing)
}

void unlock(lock_t *lock) {
    lock->flag = 0;
}

By making both the test (of the old lock value) and set (of the new value) a single atomic operation, we ensure that only one thread acquires the lock. And that's how to build a working mutual exclusion primitive!

Q: Please evaluate spin locks.

A:

Correctness: yes. The spin lock allows only a single thread into the critical section at a time, fulfilling the mutual exclusion requirement.

Fairness: Spin locks don't provide any fairness guarantees. Indeed, a thread spinning may spin forever, under contention. Spin locks are not fair and may lead to starvation.

Performance: What are the costs of using a spin lock? To analyze this more carefully, we suggest thinking about a few different cases. In the first, imagine threads competing for the lock on a single processor; in the second, consider the threads spread out across many processors. On a single CPU, a spinning thread wastes entire time slices doing nothing useful; across multiple CPUs, spinning works reasonably well when critical sections are short.

Q: What is the compare-and-swap instruction (or compare-and-exchange instruction)?

A: In C pseudocode:

int CompareAndSwap(int *ptr, int expected, int new) {
    int actual = *ptr;
    if (actual == expected) // update only if the value is still 'expected'
        *ptr = new;
    return actual;          // return the value actually observed
}
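With compare-and-swap in hand, a spin lock is nearly identical to the test-and-set version: only the lock() loop changes (a sketch reusing lock_t from above):

void lock(lock_t *lock) {
    while (CompareAndSwap(&lock->flag, 0, 1) == 1)
        ; // spin: the swap succeeds only when flag was 0
}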

Q: What is a ticket lock?

A: 

int FetchAndAdd(int *ptr) {
    int old = *ptr;
    *ptr = old + 1; // atomically increment the value at ptr...
    return old;     // ...and return the old value
}

typedef struct __lock_t {
    int ticket;
    int turn;
} lock_t;

void lock_init(lock_t *lock) {
    lock->ticket = 0;
    lock->turn = 0;
}

void lock(lock_t *lock) {
    int myturn = FetchAndAdd(&lock->ticket); // grab a ticket
    while (lock->turn != myturn)
        ; // spin until it is my turn
}

void unlock(lock_t *lock) {
    FetchAndAdd(&lock->turn); // pass the lock to the next ticket holder
}

The basic operation is pretty simple: when a thread wishes to acquire a lock, it first does an atomic fetch-and-add on the ticket value; that value is now considered this thread's "turn" (myturn). The globally shared lock->turn is then used to determine which thread's turn it is; when (myturn == turn) for a given thread, it is that thread's turn to enter the critical section.

Q: What is "Just Yield, Baby"?

A: We assume an operating system primitive yield() which a thread can call when it wants to give up the CPU and let another run. 

Think about the example with two threads on one CPU; in this case, our yield-based approach works quite well. If a thread happens to call lock() and find a lock held, it will simply yield the CPU, and thus the other thread will run and finish its critical section. In this simple case, the yielding approach works well.
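A sketch of the yield-based lock, reusing TestAndSet from above (yield() is the assumed OS primitive):

void lock(lock_t *lock) {
    while (TestAndSet(&lock->flag, 1) == 1)
        yield(); // give up the CPU instead of spinning
}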
