C# - Working principles of Thread, Process, Multithreading, Deadlock, Lock

/*

Author: Jiangong SUN

*/


Last update: 26/08/2013, 12/09/2013


What is threading?

A thread is an independent execution path, able to run simultaneously with other threads.

STA: Single-Threaded Apartment

MTA: Multi-Threaded Apartment


What is multithreading?

C# supports parallel execution of code through multithreading. 

Every application runs with at least one thread.  

While waiting on a Sleep or Join, a thread is blocked and so does not consume CPU resources.


Multithreading is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. A thread scheduler ensures all active threads are allocated appropriate execution time, and that threads that are waiting or blocked (for instance, on an exclusive lock or on user input) do not consume CPU time.
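As a minimal sketch of starting a second thread alongside the main one (all names here are illustrative):

```csharp
using System;
using System.Threading;

// Start a worker thread that runs in parallel with the main thread.
var worker = new Thread(() =>
{
    for (int i = 0; i < 3; i++)
        Console.WriteLine($"Worker iteration {i}");
});
worker.Start();

Console.WriteLine("Main thread continues while the worker runs.");

// Join blocks the calling thread until the worker finishes;
// while blocked, the calling thread consumes no CPU time.
worker.Join();
Console.WriteLine("Worker finished.");
```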


Thread vs. Process:

A thread is analogous to the operating system process in which your application runs. Just as processes run in parallel on a computer, threads run in parallel within a single process. 

Processes are fully isolated from each other; threads have just a limited degree of isolation. 

In particular, threads share (heap) memory with other threads running in the same application. This, in part, is why threading is useful: one thread can fetch data in the background, for instance, while another thread can display the data as it arrives.


What is deadlock?

A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.

For example:

Thread A takes object A; it needs to take object B to finish its action.

Thread B takes object B; it needs to take object A to finish its action.

Here a deadlock occurs.
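The scenario above can be sketched as follows. To keep the sample runnable, thread 2 uses Monitor.TryEnter with a timeout so the deadlock is detected instead of hanging forever; the Barrier only guarantees both threads hold their first lock before attempting the second (all names are illustrative):

```csharp
using System;
using System.Threading;

object lockA = new object();
object lockB = new object();
var bothHold = new Barrier(2);   // both threads hold their first lock before continuing
bool thread2GaveUp = false;

// Thread 1: takes lock A, then needs lock B.
var t1 = new Thread(() =>
{
    lock (lockA)
    {
        bothHold.SignalAndWait();
        lock (lockB) { }         // blocks until thread 2 releases B
    }
});

// Thread 2: takes lock B, then needs lock A — the opposite order.
var t2 = new Thread(() =>
{
    lock (lockB)
    {
        bothHold.SignalAndWait();
        // TryEnter with a timeout detects the deadlock instead of waiting forever.
        if (Monitor.TryEnter(lockA, TimeSpan.FromSeconds(2)))
            Monitor.Exit(lockA);
        else
            thread2GaveUp = true; // lock A is still held by thread 1
    }
});

t1.Start(); t2.Start();
t1.Join(); t2.Join();
Console.WriteLine(thread2GaveUp
    ? "Deadlock detected: thread 2 timed out waiting for lock A."
    : "No deadlock this run.");
```

Without the timeout, both threads would wait on each other forever: exactly the circular wait described above.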


Deadlock condition:

Mutual exclusion, hold and wait, no preemption, circular wait

1.      Mutual Exclusion: At least one resource must be non-shareable. Only one process can use the resource at any given instant of time.
2.      Hold and Wait or Resource Holding: A process is currently holding at least one resource and requesting additional resources which are being held by other processes.
3.      No Preemption: The operating system must not de-allocate resources once they have been allocated; they must be released by the holding process voluntarily.
4.      Circular Wait: A process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3, and so on until PN is waiting for a resource held by P1.



How to prevent deadlock ?

Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
The hold and wait or resource holding condition may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when holding none: they must first release all their currently held resources before requesting all the resources they will need from scratch. This too is often impractical, because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
The no preemption condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering, and resources are requested in the increasing order of the enumeration.
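Breaking the circular wait condition by always acquiring locks in the same global order can be sketched like this (illustrative names):

```csharp
using System;
using System.Threading;

object lockA = new object();
object lockB = new object();
int shared = 0;

// Every thread acquires lockA before lockB — a fixed global order —
// so a circular wait (and hence a deadlock) cannot form.
void DoWork()
{
    lock (lockA)
    lock (lockB)
    {
        shared++;
    }
}

var t1 = new Thread(() => { for (int i = 0; i < 1000; i++) DoWork(); });
var t2 = new Thread(() => { for (int i = 0; i < 1000; i++) DoWork(); });
t1.Start(); t2.Start();
t1.Join(); t2.Join();
Console.WriteLine(shared); // 2000
```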


Lock leveling

Lock leveling (also called lock ordering or lock hierarchies) assigns each lock a numeric level, and a thread is only allowed to acquire locks in a fixed order of levels; this makes circular waits impossible. LeveledLock below is a custom class implementing this pattern, not part of the BCL:

LeveledLock lockA = new LeveledLock(10);

LeveledLock lockB = new LeveledLock(5);


What is livelock?

If Thread 1 takes object A, Thread 2 wants to take object A, and then Thread 3 wants to take object A.

Thread 1 releases object A, and Thread 3 takes it. Meanwhile, Thread 4 wants to take object A.

Thread 3 releases object A, and Thread 4 takes it.

If this keeps repeating, Thread 2 waits for object A forever.

Strictly speaking, this scenario is starvation: Thread 2 never makes progress because the resource is always granted to someone else. In a livelock, by contrast, no thread is blocked; the threads keep actively changing state in response to each other, yet none of them makes progress.


Prevent Deadlock examples?

http://www.mindfiresolutions.com/A-single-step-solution-to-avoid-Deadlock-in-Multithreading-414.php

Monitor.TryEnter

Thread.Sleep

Thread.Join

Task.Wait

Mutex

Semaphore

volatile

lock

Interlocked

async await (asynchronous programming)

SpinLock (parallel programming, PLINQ)
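One pattern behind several of the primitives listed above (and the usual approach for single-step deadlock avoidance) is to acquire the second lock with Monitor.TryEnter and back off on failure, so no thread ever holds one lock while waiting forever on another. A sketch with illustrative names; the distinct back-off delays just reduce the chance of the two threads retrying in lockstep:

```csharp
using System;
using System.Threading;

object lockA = new object();
object lockB = new object();
int completed = 0;

// Take the first lock, then only *try* the second; on failure,
// release everything, back off, and retry. This breaks hold-and-wait.
void Worker(object first, object second, int backoffMs)
{
    while (true)
    {
        lock (first)
        {
            if (Monitor.TryEnter(second, 50))
            {
                try { completed++; return; }   // both locks held: safe to work
                finally { Monitor.Exit(second); }
            }
        }
        Thread.Sleep(backoffMs);               // back off before retrying
    }
}

// The two threads deliberately use opposite lock orders.
var t1 = new Thread(() => Worker(lockA, lockB, 7));
var t2 = new Thread(() => Worker(lockB, lockA, 13));
t1.Start(); t2.Start();
t1.Join(); t2.Join();
Console.WriteLine(completed); // 2
```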

 

http://blogs.msdn.com/b/mohamedg/archive/2010/01/29/how-to-use-locks-and-prevent-deadlocks.aspx



lock, volatile, Interlocked?

http://www.andrewdenhertog.com/c/thread-safe-lock-volatile-and-interlock-in-c/

Lock is the most general way to prevent contention issues with resources, albeit an expensive one. A common lock object is created and used as the lock key to prevent concurrent access to a code block.
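A typical use of a private lock object guarding shared state can be sketched as (illustrative names):

```csharp
using System;
using System.Threading;

// A common private lock object guards the shared counter.
object locker = new object();
int counter = 0;

void Increment()
{
    lock (locker)   // only one thread at a time may enter this block
    {
        counter++;
    }
}

var threads = new Thread[4];
for (int i = 0; i < threads.Length; i++)
{
    threads[i] = new Thread(() => { for (int n = 0; n < 1000; n++) Increment(); });
    threads[i].Start();
}
foreach (var t in threads) t.Join();
Console.WriteLine(counter); // 4000
```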

Interlocked provides atomic operations for variables that are shared by multiple threads.

Interlocked brings a number of operations that help address the synchronization issues with volatile. It provides most atomic operations, including Add(), Increment() and Decrement().

An Interlocked statement executes atomically, and is the fastest way to safely make an operation on a shared resource.

http://stackoverflow.com/questions/154551/volatile-vs-interlocked-vs-lock

Worst: Volatile

Second Best: Lock

This is safe to do (provided you remember to lock everywhere else that you access this.counter). It prevents any other threads from executing any other code which is guarded by locker. Using locks also prevents the multi-CPU reordering problems above, which is great.

The problem is, locking is slow, and if you re-use the locker in some other place which is not really related, then you can end up blocking your other threads for no reason.

 

Best: Interlocked.Increment

This is safe, as it effectively does the read, increment, and write in 'one hit' which can't be interrupted. Because of this it won't affect any other code, and you don't need to remember to lock elsewhere either. It's also very fast (as MSDN says, on modern CPUs this is often literally a single CPU instruction).

I'm not entirely sure, however, if it gets around other CPUs reordering things, or if you also need to combine volatile with the increment.
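The difference can be sketched by comparing an unsynchronized increment with Interlocked.Increment (illustrative names; the unsynchronized count typically loses updates, but the exact value varies by run):

```csharp
using System;
using System.Threading;

int unsafeCounter = 0;
int safeCounter = 0;

void Work()
{
    for (int i = 0; i < 100_000; i++)
    {
        unsafeCounter++;                        // read-modify-write: not atomic
        Interlocked.Increment(ref safeCounter); // atomic, no lock needed
    }
}

var t1 = new Thread(Work);
var t2 = new Thread(Work);
t1.Start(); t2.Start();
t1.Join(); t2.Join();

Console.WriteLine($"unsafe: {unsafeCounter} (often less than 200000)");
Console.WriteLine($"safe:   {safeCounter}");    // always 200000
```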

 

 

When to use volatile?

You use volatile/Thread.MemoryBarrier() when you want to access a variable across threads without locking.

http://blogs.msdn.com/b/ericlippert/archive/2011/06/16/atomicity-volatility-and-immutability-are-different-part-three.aspx

According to Eric Lippert's blog, using volatile is not recommended: needing it is usually a sign that the code is attempting something dangerously tricky with low-lock techniques.

http://stackoverflow.com/questions/1002739/what-is-the-cost-of-the-volatile-keyword-in-a-multiprocessor-system

lock does induce a memory barrier, so if you always access the instance inside a lock, you don't need volatile.

The C# volatile keyword implements acquire and release semantics, which implies a read memory barrier on read and a write memory barrier on write.

 

Lock vs Mutex vs Semaphore?

http://stackoverflow.com/questions/2332765/lock-mutex-semaphore-whats-the-difference

A lock allows only one thread to enter the part that is locked, and the lock is not shared with any other processes.

A Mutex is the same as a lock but can be system-wide: a synchronization primitive that can also be used for inter-process synchronization. The lock block (which uses Monitor) is far more lightweight than a Mutex, so use lock unless you specifically need a Mutex (for example, for inter-process synchronization).

A Semaphore does the same as a lock but allows x number of threads to enter: it limits the number of threads that can access a resource or pool of resources concurrently.
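A sketch of the semaphore behaviour using SemaphoreSlim, which caps how many threads may be inside the protected section at once (names and the count of 2 are illustrative):

```csharp
using System;
using System.Threading;

// At most 2 threads may be inside the protected section at once.
var semaphore = new SemaphoreSlim(2, 2);
int inside = 0, maxInside = 0;
object gate = new object();

void UseResource()
{
    semaphore.Wait();
    try
    {
        lock (gate) { inside++; maxInside = Math.Max(maxInside, inside); }
        Thread.Sleep(50);           // simulate work on the shared resource
        lock (gate) { inside--; }
    }
    finally { semaphore.Release(); }
}

var threads = new Thread[6];
for (int i = 0; i < threads.Length; i++)
{
    threads[i] = new Thread(UseResource);
    threads[i].Start();
}
foreach (var t in threads) t.Join();
Console.WriteLine($"Peak concurrency: {maxInside}"); // never more than 2
```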


Mutex vs Semaphore?

http://en.wikipedia.org/wiki/Semaphore_%28programming%29#Semaphores_vs._mutexes

1.    Mutexes have a concept of an owner, which is the process that locked the mutex. Only the process that locked the mutex can unlock it. In contrast, a semaphore has no concept of an owner; any process can unlock a semaphore.

2.    Unlike semaphores, mutexes provide priority inversion safety. Since the mutex knows its current owner, it is possible to promote the priority of the owner whenever a higher-priority task starts waiting on the mutex.

3.    Mutexes also allow recursive locking, where the same thread can lock a single mutex multiple times. Subsequently, the same mutex must be unlocked as many times as it was locked. Recursive locking of semaphores is not possible.

4.    Mutexes also provide deletion safety, where the process holding the mutex cannot be accidentally deleted. Semaphores do not provide this.

 

Memory Barrier, Volatile, Lock

http://stackoverflow.com/questions/2844452/memory-barrier-by-lock-statement

http://en.wikipedia.org/wiki/Memory_barrier

The subject of memory barriers is quite complex. It even trips up the experts from time to time. When we talk about a memory barrier we are really combining two different ideas.

·        Acquire fence: A memory barrier in which other reads & writes are not allowed to move before the fence.

·        Release fence: A memory barrier in which other reads & writes are not allowed to move after the fence.

A memory barrier that creates only one of the two is sometimes called a half-fence. A memory barrier that creates both is sometimes called a full-fence.

The volatile keyword creates half-fences. Reads of volatile fields have acquire semantics while writes have release semantics. That means no instruction can be moved before a read or after a write.

The lock keyword creates full-fences on both boundaries (entry and exit). That means no instruction can be moved either before or after each boundary.

 

Lock vs Monitor.Enter

http://blogs.msdn.com/b/ericlippert/archive/2009/03/06/locks-and-exceptions-do-not-mix.aspx

In C# 4.0 we've changed lock so that it now generates code as if it were:

bool lockWasTaken = false;
var temp = obj;
try { Monitor.Enter(temp, ref lockWasTaken); { body } }
finally { if (lockWasTaken) Monitor.Exit(temp); }

The purpose of the lock statement is to help you protect the integrity of a mutable resource that is shared by multiple threads. But suppose an exception is thrown halfway through a mutation of the locked resource. Our implementation of lock does not magically roll back the mutation to its pristine state, and it does not complete the mutation. Rather, control immediately branches to the finally, releasing the lock and allowing every other thread that is patiently waiting to immediately view the messed-up, partially mutated state! If that state has privacy, security, or human life and safety implications, the result could be very bad indeed. In that case it is possibly better to deadlock the program and protect the messed-up resource by denying access to it entirely. But that's obviously not good either.

This is yet another reason why the body of a lock should do as little as possible. Usually the rationale for having small lock bodies is to get in and get out quickly, so that anyone waiting on the lock does not have to wait long. But an even better reason is that small, simple lock bodies minimize the chance that the thing in there is going to throw an exception. It's also easier to rewrite mutating lock bodies to have rollback behaviour if they don't do very much to begin with.


Atomic operation?

http://wayneye.com/Blog/Atomic-Operation-In-Csharp

Read/write on a field of 32 bits or less is always atomic; operations on 64-bit fields are guaranteed to be atomic only on a 64-bit OS; statements that combine more than one read/write operation are never atomic.

int i = 3;               // Always atomic

long l = Int64.MaxValue; // Atomic in a 64-bit environment, non-atomic in a 32-bit environment

j += i;                  // Non-atomic: a read and a write operation

i++;                     // Non-atomic: a read and a write operation


Thread Safe?

A program or method is thread-safe if it has no indeterminacy in the face of any multithreading scenario. Thread safety is achieved primarily with locking and by reducing the possibilities for thread interaction.



reference:

http://www.albahari.com/threading/

https://en.wikipedia.org/wiki/Deadlock

