Reading Notes on Java Concurrency in Practice

Table of Contents
Chapter 1. Introduction
1.1. A (Very) Brief History of Concurrency
1.2. Benefits of Threads
1.3. Risks of Threads
1.4. Threads are Everywhere
Chapter 2. Thread Safety
2.1. What is Thread Safety?
2.2. Atomicity
2.3. Locking
Chapter 3. Sharing Objects
3.1. Visibility
3.2. Publication and Escape
3.3. Thread Confinement
3.4. Immutability
3.5. Safe Publication
Chapter 4. Composing Objects
4.1. Designing a Thread-safe Class
4.2. Instance Confinement
4.3. Delegating Thread Safety
4.4. Adding Functionality to Existing Thread-safe Classes
4.5. Documenting Synchronization Policies
Chapter 5. Building Blocks
5.1. Synchronized Collections
5.2. Concurrent Collections
5.3. Blocking Queues and the Producer-consumer Pattern
5.4. Blocking and Interruptible Methods
5.5. Synchronizers
5.6. Building an Efficient, Scalable Result Cache
Summary of Part I
Chapter 6. Task Execution
6.1. Executing Tasks in Threads
6.2. The Executor Framework
6.3. Finding Exploitable Parallelism
Chapter 7. Cancellation and Shutdown
7.1. Task Cancellation
7.2. Stopping a Thread-based Service
7.3. Handling Abnormal Thread Termination
7.4. JVM Shutdown
Chapter 8. Applying Thread Pools
8.1. Implicit Couplings Between Tasks and Execution Policies
8.2. Sizing Thread Pools
8.3. Configuring ThreadPoolExecutor
8.4. Extending ThreadPoolExecutor
8.5. Parallelizing Recursive Algorithms
Chapter 9. GUI Applications
9.1. Why are GUIs Single-threaded?
9.2. Short-running GUI Tasks
9.3. Long-running GUI Tasks
9.4. Shared Data Models
9.5. Other Forms of Single-threaded Subsystems
Chapter 10. Avoiding Liveness Hazards
10.1. Deadlock
10.2. Avoiding and Diagnosing Deadlocks
10.3. Other Liveness Hazards
Chapter 11. Performance and Scalability
11.1. Thinking about Performance
11.2. Amdahl's Law
11.3. Costs Introduced by Threads
11.4. Reducing Lock Contention
11.5. Example: Comparing Map Performance
11.6. Reducing Context Switch Overhead
Chapter 12. Testing Concurrent Programs
12.1. Testing for Correctness
12.2. Testing for Performance
12.3. Avoiding Performance Testing Pitfalls
12.4. Complementary Testing Approaches
Chapter 13. Explicit Locks
13.1. Lock and ReentrantLock
13.2. Performance Considerations
13.3. Fairness
13.4. Choosing Between Synchronized and ReentrantLock
13.5. Read-write Locks
Chapter 14. Building Custom Synchronizers
14.1. Managing State Dependence
14.2. Using Condition Queues
14.3. Explicit Condition Objects
14.4. Anatomy of a Synchronizer
14.5. AbstractQueuedSynchronizer
14.6. AQS in java.util.concurrent Synchronizer Classes
Chapter 15. Atomic Variables and Nonblocking Synchronization
15.1. Disadvantages of Locking
15.2. Hardware Support for Concurrency
15.3. Atomic Variable Classes
15.4. Nonblocking Algorithms
Chapter 16. The Java Memory Model
16.1. What is a Memory Model, and Why would I Want One?
16.2. Publication
16.3. Initialization Safety



Chapter 1. Introduction

1.1. A (Very) Brief History of Concurrency
Several motivating factors led to the development of operating systems that allowed multiple programs to execute simultaneously:
Resource utilization.
Fairness.
Convenience.
1.2. Benefits of Threads
Exploiting Multiple Processors
Simplicity of Modeling
Simplified Handling of Asynchronous Events
More Responsive User Interfaces
1.3. Risks of Threads
Safety Hazards
Liveness Hazards
Performance Hazards
1.4. Threads are Everywhere
Timer
Servlet
Remote Method Invocation
Swing and AWT


Chapter 2. Thread Safety

Don't share the state variable across threads;
Make the state variable immutable; or
Use synchronization whenever accessing the state variable.
2.1. What is Thread Safety?
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.

Thread-safe classes encapsulate any needed synchronization so that clients need not provide their own.

Stateless objects are always thread-safe.
2.2. Atomicity
A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads by the runtime; in other words, when getting the right answer relies on lucky timing. The most common type of race condition is check-then-act, where a potentially stale observation is used to make a decision on what to do next.

Example: Race Conditions in Lazy Initialization

Operations A and B are atomic with respect to each other if, from the perspective of a thread executing A, when another thread executes B, either all of B has executed or none of it has. An atomic operation is one that is atomic with respect to all operations, including itself, that operate on the same state.

Servlet that Counts Requests Using AtomicLong

Where practical, use existing thread-safe objects, like AtomicLong, to manage your class's state. It is simpler to reason about the possible states and state transitions for existing thread-safe objects than it is for arbitrary state variables, and this makes it easier to maintain and verify thread safety.
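A minimal sketch of the idea (the class and method names here are my own, not the book's listing): the class delegates thread safety for its single piece of state to AtomicLong.

```java
import java.util.concurrent.atomic.AtomicLong;

// Delegates thread safety for its single piece of state to AtomicLong.
public class CountingService {
    private final AtomicLong count = new AtomicLong(0);

    public long handleRequest() {
        // A single atomic read-modify-write: no lost updates under contention.
        return count.incrementAndGet();
    }

    public long getCount() {
        return count.get();
    }
}
```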

2.3. Locking
To preserve state consistency, update related state variables in a single atomic operation.

Reentrancy

For each mutable state variable that may be accessed by more than one thread, all accesses to that variable must be performed with the same lock held. In this case, we say that the variable is guarded by that lock.

Every shared, mutable variable should be guarded by exactly one lock. Make it clear to maintainers which lock that is.

For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by the same lock.

Making every method synchronized only guarantees that each individual method is atomic; compound operations across methods are still not guaranteed to be atomic.

There is frequently a tension between simplicity and performance. When implementing a synchronization policy, resist the temptation to prematurely sacrifice simplicity (potentially compromising safety) for the sake of performance.

Avoid holding locks during lengthy computations or operations at risk of not completing quickly such as network or console I/O.


Chapter 3. Sharing Objects

synchronized guarantees not only atomicity but also memory visibility.
3.1. Visibility
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready)
                Thread.yield();
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}

The loop may run forever, or the program may print 0, because of reordering.

There is no guarantee that operations in one thread will be performed in the order given by the program, as long as the reordering is not detectable from within that thread.

In the absence of synchronization, the compiler, processor, and runtime can do some downright weird things to the order in which operations appear to execute. Attempts to reason about the order in which memory actions "must" happen in insufficiently synchronized multithreaded programs will almost certainly be incorrect.

Reasoning about insufficiently synchronized concurrent programs is prohibitively difficult.

Stale data can cause serious and confusing failures such as unexpected exceptions, corrupted data structures, inaccurate computations, and infinite loops.
When a thread reads a variable without synchronization, it may see a stale value, but at least it sees a value that was actually placed there by some thread rather than some random value. This safety guarantee is called out-of-thin-air safety.

Nonatomic 64-bit Operations

Out-of-thin-air safety applies to all variables, with one exception: 64-bit numeric variables (double and long) that are not declared volatile. The Java Memory Model requires fetch and store operations to be atomic, but for nonvolatile long and double variables, the JVM is permitted to treat a 64-bit read or write as two separate 32-bit operations. If the reads and writes occur in different threads, it is therefore possible to read a nonvolatile long and get back the high 32 bits of one value and the low 32 bits of another. Thus, even if you don't care about stale values, it is not safe to use shared mutable long and double variables in multithreaded programs unless they are declared volatile or guarded by a lock.

Intrinsic locking can be used to guarantee that one thread sees the effects of another in a predictable manner. If threads A and B synchronize on the same lock, then everything A did before releasing the lock is visible to B when B acquires it.

Locking is not just about mutual exclusion; it is also about memory visibility. To ensure that all threads see the most up-to-date values of shared mutable variables, the reading and writing threads must synchronize on a common lock.
A good way to think about volatile variables is to imagine that they behave roughly like the SynchronizedInteger class in Listing 3.3, replacing reads and writes of the volatile variable with calls to get and set.[4] Yet accessing a volatile variable performs no locking and so cannot cause the executing thread to block, making volatile variables a lighter-weight synchronization mechanism than synchronized.[5]

Use volatile variables only when they simplify implementing and verifying your synchronization policy; avoid using volatile variables when verifying correctness would require subtle reasoning about visibility. Good uses of volatile variables include ensuring the visibility of their own state, that of the object they refer to, or indicating that an important lifecycle event (such as initialization or shutdown) has occurred.

Locking can guarantee both visibility and atomicity; volatile variables can only guarantee visibility.
You can use volatile variables only when all the following criteria are met:
1 Writes to the variable do not depend on its current value, or you can ensure that only a single thread ever updates the value;
2 The variable does not participate in invariants with other state variables; and
3 Locking is not required for any other reason while the variable is being accessed.
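The canonical use that satisfies all three criteria is a status flag. A sketch (the class and method names are illustrative):

```java
// A status flag is the classic safe use of volatile: writes do not depend
// on the current value, the flag participates in no multi-variable
// invariant, and no other locking is required while it is accessed.
public class Worker implements Runnable {
    private volatile boolean shutdownRequested = false;

    public void requestShutdown() {
        shutdownRequested = true;   // immediately visible to the worker thread
    }

    public boolean isShutdownRequested() {
        return shutdownRequested;
    }

    public void run() {
        while (!shutdownRequested) {
            Thread.yield();         // stand-in for a unit of real work
        }
    }
}
```

Without volatile, the reading thread could loop forever on a stale value, exactly as in the NoVisibility example above.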

3.2. Publication and Escape
Common escapes: arrays, collections, and other reference-typed objects leaking internal state.
public class ThisEscape {
    public ThisEscape(EventSource source) {
        source.registerListener(
            new EventListener() {
                public void onEvent(Event e) {
                    doSomething(e);
                }
            });
    }
}
This escapes.
Creating an inner-class instance in a constructor, or starting a thread from a constructor, both leak the this reference.
Calling an overridable (non-private, non-final) method from a constructor has the same this-escape problem.

Do not allow the this reference to escape during construction.
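The usual fix, along the lines of the book's SafeListener listing, is a private constructor plus a factory method, so registration happens only after construction completes. The minimal Event/EventListener/EventSource interfaces below are stand-ins for whatever listener framework is in use:

```java
// Minimal stand-ins for the listener framework the example assumes.
interface Event {}
interface EventListener { void onEvent(Event e); }
interface EventSource { void registerListener(EventListener l); }

public class SafeListener {
    private final EventListener listener;

    private SafeListener() {
        listener = new EventListener() {
            public void onEvent(Event e) { doSomething(e); }
        };
    }

    // Registration happens only after the constructor has returned,
    // so no reference (including this) escapes during construction.
    public static SafeListener newInstance(EventSource source) {
        SafeListener safe = new SafeListener();
        source.registerListener(safe.listener);
        return safe;
    }

    void doSomething(Event e) { /* handle the event */ }
}
```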

3.3. Thread Confinement
Swing's event dispatch thread (EDT): the components themselves are not thread-safe, but because they are confined to a single thread, the design is safe.
JDBC Connection: a Connection is not itself thread-safe, but because a pool does not hand it out again until it has been returned, confinement makes it safe.

Ad-hoc Thread Confinement
Stack Confinement
ThreadLocal: frameworks love it (transactions use it too), but take care not to overuse it.
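A small illustration of per-thread state (the thread-id scheme is my own example, not the book's):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Each thread that touches the ThreadLocal gets its own independent value;
// the initializer runs once per thread, on that thread's first access.
public class ThreadId {
    private static final AtomicInteger next = new AtomicInteger(0);

    private static final ThreadLocal<Integer> id =
            ThreadLocal.withInitial(next::incrementAndGet);

    public static int get() { return id.get(); }
}
```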
3.4. Immutability
Immutable objects are always thread-safe.
An object is immutable if:
Its state cannot be modified after construction;
All its fields are final; and
It is properly constructed (the this reference does not escape during construction).

It is the use of final fields that makes possible the guarantee of initialization safety that lets immutable objects be freely accessed and shared without synchronization.

Just as it is a good practice to make all fields private unless they need greater visibility [EJ Item 12], it is a good practice to make all fields final unless they need to be mutable.

Combining an immutable holder class with a volatile reference is a powerful pattern.
3.5. Safe Publication
Safe publication is at heart a visibility problem.
To publish an object safely, both the reference to the object and the object's state must be made visible to other threads at the same time. A properly constructed object can be safely published by:
Initializing an object reference from a static initializer;
Storing a reference to it into a volatile field or AtomicReference;
Storing a reference to it into a final field of a properly constructed object; or
Storing a reference to it into a field that is properly guarded by a lock.

Safely published effectively immutable objects can be used safely by any thread without additional synchronization.

The publication requirements for an object depend on its mutability:
Immutable objects can be published through any mechanism;
Effectively immutable objects must be safely published;
Mutable objects must be safely published, and must be either threadsafe or guarded by a lock.
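A sketch contrasting two of the safe-publication idioms (the Holder name echoes the book's example; the publisher wrapper is mine):

```java
// The Holder itself gains initialization safety from its final field;
// the volatile field performs the safe publication of the reference.
// With a plain non-volatile field, another thread could observe a
// partially constructed Holder.
public class HolderPublisher {
    public static class Holder {
        private final int n;                 // final: initialization safety
        public Holder(int n) { this.n = n; }
        public int get() { return n; }
    }

    private volatile Holder holder;          // volatile write = safe publication

    public void publish(int n) { holder = new Holder(n); }
    public Holder read() { return holder; }
}
```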


Chapter 4. Composing Objects

4.1. Designing a Thread-safe Class
The design process for a thread-safe class should include these three basic elements:
Identify the variables that form the object's state;
Identify the invariants that constrain the state variables;
Establish a policy for managing concurrent access to the object's state.

You cannot ensure thread safety without understanding an object's invariants and postconditions. Constraints on the valid values or state transitions for state variables can create atomicity and encapsulation requirements.

4.2. Instance Confinement
Encapsulating data within an object confines access to the data to the object's methods, making it easier to ensure that the data is always accessed with the appropriate lock held.

Confinement makes it easier to build thread-safe classes because a class that confines its state can be analyzed for thread safety without having to examine the whole program.

The Java Monitor Pattern

4.3. Delegating Thread Safety

If a class is composed of multiple independent thread-safe state variables and has no operations that have any invalid state transitions, then it can delegate thread safety to the underlying state variables.

If a state variable is thread-safe, does not participate in any invariants that constrain its value, and has no prohibited state transitions for any of its operations, then it can safely be published.
4.4. Adding Functionality to Existing Thread-safe Classes
Extending a class and overriding its methods to add operations scatters the locking policy across classes.
Client-side locking is also unappealing: the locking policy is likewise scattered.
Composition is acceptable, and in practice closely resembles the Java monitor pattern.
4.5. Documenting Synchronization Policies
Document a class's thread safety guarantees for its clients; document its synchronization policy for its maintainers.
The book's detective work of guessing at undocumented thread-safety guarantees is genuinely fascinating.


Chapter 5. Building Blocks

5.1. Synchronized Collections
Vector, Hashtable, and the Collections.synchronizedXxx wrappers are all thread-safe, but compound actions on them still require client-side locking.
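For example, a put-if-absent on a synchronized list must lock on the list itself, since that is the lock the wrapper uses internally. This mirrors the book's ListHelper idea; the helper shown is a sketch:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// contains + add is a compound action; making it atomic requires holding
// the same lock the synchronized wrapper uses internally: the list itself.
public class ListHelper {
    public static <E> boolean putIfAbsent(List<E> syncList, E x) {
        synchronized (syncList) {
            boolean absent = !syncList.contains(x);
            if (absent)
                syncList.add(x);
            return absent;
        }
    }
}
```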

Just as encapsulating an object's state makes it easier to preserve its invariants, encapsulating its synchronization makes it easier to enforce its synchronization policy.

Watch out for hidden iterators:
System.out.println("DEBUG: added ten elements to " + set);

Iteration is also indirectly invoked by the collection's hashCode and equals methods, which may be called if the collection is used as an element or key of another collection. Similarly, the containsAll, removeAll, and retainAll methods, as well as the constructors that take collections as arguments, also iterate the collection. All of these indirect uses of iteration can cause ConcurrentModificationException.
These fail-fast iterators are not designed to be foolproof; they are designed to catch concurrency errors on a "good-faith-effort" basis and thus act only as early-warning indicators for concurrency problems. They are implemented by associating a modification count with the collection: if the modification count changes during iteration, hasNext or next throws ConcurrentModificationException. However, this check is done without synchronization, so there is a risk of seeing a stale value of the modification count and therefore that the iterator does not realize a modification has been made. This was a deliberate design tradeoff to reduce the performance impact of the concurrent modification detection code.[2]


5.2. Concurrent Collections
Replacing synchronized collections with concurrent collections can offer dramatic scalability improvements with little risk.

ConcurrentHashMap trades finer-grained locking for higher concurrent throughput.
ConcurrentHashMap's iterators are weakly consistent rather than fail-fast. A weakly consistent iterator can tolerate concurrent modification, traverses elements as they existed when the iterator was constructed, and may (but is not guaranteed to) reflect modifications to the collection after the construction of the iterator.

Tradeoffs:
size and isEmpty are weakened into approximations; on balance, get, put, containsKey, and remove matter more.
ConcurrentHashMap cannot be locked for exclusive access, because there is no single lock covering the whole map.
ConcurrentHashMap provides atomic put-if-absent, remove-if-equal, and replace-if-equal operations.
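A quick illustration of those atomic operations (the keys and values are arbitrary):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// The ConcurrentMap interface turns the classic check-then-act idioms
// into single atomic calls, so no client-side locking is needed.
public class AtomicMapOps {
    public static int demo() {
        ConcurrentMap<String, Integer> m = new ConcurrentHashMap<>();
        m.putIfAbsent("k", 1);   // inserts: "k" was absent
        m.putIfAbsent("k", 2);   // no-op: "k" is already present
        m.replace("k", 1, 3);    // replace-if-equal: succeeds, 1 -> 3
        m.remove("k", 99);       // remove-if-equal: fails, the value is 3
        return m.get("k");
    }
}
```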

CopyOnWriteArrayList performs every modification on a fresh copy while the existing backing array remains effectively immutable, so it is thread-safe; it suits workloads with many reads and few writes.
Element-changing operations on iterators themselves (remove, set, and add) are not supported. These methods throw UnsupportedOperationException.
5.3. Blocking Queues and the Producer-consumer Pattern
Bounded queues are a powerful resource management tool for building reliable applications: they make your program more robust to overload by throttling activities that threaten to produce more work than can be handled.

Just as blocking queues lend themselves to the producer-consumer pattern, deques lend themselves to a related pattern called work stealing. A producer-consumer design has one shared work queue for all consumers; in a work stealing design, every consumer has its own deque. If a consumer exhausts the work in its own deque, it can steal work from the tail of someone else's deque. Work stealing can be more scalable than a traditional producer-consumer design because workers don't contend for a shared work queue; most of the time they access only their own deque, reducing contention. When a worker has to access another's queue, it does so from the tail rather than the head, further reducing contention.
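A minimal producer-consumer sketch over a bounded queue (the poison value -1 is my own convention for ending the stream, not part of the API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A bounded queue throttles the producer: put blocks when the queue is
// full, take blocks when it is empty.
public class ProducerConsumer {
    public static int sumViaQueue(final int n) throws InterruptedException {
        final BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++)
                    queue.put(i);            // blocks if the consumer falls behind
                queue.put(-1);               // poison pill: signals "no more work"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        int sum = 0;
        for (int x = queue.take(); x != -1; x = queue.take())
            sum += x;                        // consume on the calling thread
        producer.join();
        return sum;
    }
}
```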
5.4. Blocking and Interruptible Methods
When your code calls a method that throws InterruptedException, then your method is a blocking method too, and must have a plan for responding to interruption. For library code, there are basically two choices:
Propagate the InterruptedException. This is often the most sensible policy if you can get away with it: just propagate the InterruptedException to your caller. This could involve not catching InterruptedException, or catching it and throwing it again after performing some brief activity-specific cleanup.
Restore the interrupt. Sometimes you cannot throw InterruptedException, for instance when your code is part of a Runnable. In these situations, you must catch InterruptedException and restore the interrupted status by calling interrupt on the current thread, so that code higher up the call stack can see that an interrupt was issued, as demonstrated in Listing 5.10.
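A sketch of the restore-the-interrupt idiom (the class name echoes the book's TaskRunnable listing; the queue and the processing step are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A Runnable cannot rethrow InterruptedException, so it restores the
// interrupted status instead, preserving the evidence for the caller.
public class TaskRunnable implements Runnable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void run() {
        try {
            processItem(queue.take());
        } catch (InterruptedException e) {
            // Restore the interrupt so code higher up the stack can see it.
            Thread.currentThread().interrupt();
        }
    }

    void processItem(String item) { /* ... */ }
}
```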

5.5. Synchronizers
A synchronizer is any object that coordinates the control flow of threads based on its state. Blocking queues can act as synchronizers; other types of synchronizers include semaphores, barriers, and latches. There are a number of synchronizer classes in the platform library; if these do not meet your needs, you can also create your own using the mechanisms described in Chapter 14.
All synchronizers share certain structural properties: they encapsulate state that determines whether threads arriving at the synchronizer should be allowed to pass or forced to wait, provide methods to manipulate that state, and provide methods to wait efficiently for the synchronizer to enter the desired state.

Latches
A latch is a synchronizer that can delay the progress of threads until it reaches its terminal state [CPJ 3.4.2]. A latch acts as a gate: until the latch reaches the terminal state the gate is closed and no thread can pass, and in the terminal state the gate opens, allowing all threads to pass. Once the latch reaches the terminal state, it cannot change state again, so it remains open forever.
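A sketch along the lines of the book's TestHarness listing: a start gate releases all workers at once, and an end gate lets the caller wait for them all to finish.

```java
import java.util.concurrent.CountDownLatch;

public class Gates {
    public static long timeTasks(int nThreads, final Runnable task)
            throws InterruptedException {
        final CountDownLatch startGate = new CountDownLatch(1);
        final CountDownLatch endGate = new CountDownLatch(nThreads);

        for (int i = 0; i < nThreads; i++) {
            new Thread(() -> {
                try {
                    startGate.await();       // gate closed until count hits zero
                    try { task.run(); } finally { endGate.countDown(); }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        long start = System.nanoTime();
        startGate.countDown();               // open the gate: all threads proceed
        endGate.await();                     // wait for every thread to finish
        return System.nanoTime() - start;
    }
}
```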

FutureTask
Semaphores
Barriers

5.6. Building an Efficient, Scalable Result Cache

Summary of Part I
It's the mutable state, stupid. [1]
All concurrency issues boil down to coordinating access to mutable state. The less mutable state, the easier it is to ensure thread safety.
Make fields final unless they need to be mutable.
Immutable objects are automatically thread-safe.
Immutable objects simplify concurrent programming tremendously. They are simpler and safer, and can be shared freely without locking or defensive copying.
Encapsulation makes it practical to manage the complexity.
You could write a thread-safe program with all data stored in global variables, but why would you want to? Encapsulating data within objects makes it easier to preserve their invariants; encapsulating synchronization within objects makes it easier to comply with their synchronization policy.
Guard each mutable variable with a lock.
Guard all variables in an invariant with the same lock.
Hold locks for the duration of compound actions.
A program that accesses a mutable variable from multiple threads without synchronization is a broken program.
Don't rely on clever reasoning about why you don't need to synchronize.
Include thread safety in the design process, or explicitly document that your class is not thread-safe.
Document your synchronization policy.



Chapter 6. Task Execution

6.1. Executing Tasks in Threads
The web server example:
Executing tasks sequentially is a complete disaster for responsiveness and throughput.
Explicitly creating a thread per task also has quite a few drawbacks:
Thread lifecycle overhead.
Resource consumption.
Stability.


6.2. The Executor Framework
Execution Policies
• In what thread will tasks be executed?
• In what order should tasks be executed (FIFO, LIFO, priority order)?
• How many tasks may execute concurrently?
• How many tasks may be queued pending execution?
• If a task has to be rejected because the system is overloaded, which task should be selected as the victim, and how should the application be notified?
• What actions should be taken before or after executing a task?
Executor Lifecycle: graceful shutdown versus abrupt shutdown
Delayed and Periodic Tasks
Stop using Timer; use ScheduledThreadPoolExecutor instead.
A Timer creates only a single thread for executing timer tasks.
Another problem with Timer is that it behaves poorly if a TimerTask throws an unchecked exception.
Commonly used classes here:
Executor, ExecutorService, ThreadPoolExecutor, Callable, Future, FutureTask, ExecutorCompletionService.
ThreadPoolExecutor's Javadoc is excellent; it covers essentially every aspect of a thread pool from the user's point of view.
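A minimal tour of those classes working together (the pool size and the squaring task are arbitrary):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Typical flow through the framework: submit a Callable, get a Future,
// block on get() for the result, then shut the pool down.
public class ExecutorDemo {
    public static int square(final int x) throws Exception {
        ExecutorService exec = Executors.newFixedThreadPool(2);
        try {
            Callable<Integer> task = () -> x * x;
            Future<Integer> f = exec.submit(task);
            return f.get();      // blocks until the task completes
        } finally {
            exec.shutdown();     // graceful: no new tasks, existing ones finish
        }
    }
}
```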
6.3. Finding Exploitable Parallelism
The real performance payoff of dividing a program's workload into tasks comes when there are a large number of independent, homogeneous tasks that can be processed concurrently.
Measure a program's exploitable parallelism correctly.


Chapter 7. Cancellation and Shutdown

The cooperative approach is required because we rarely want a task, thread, or service to stop immediately, since that could leave shared data structures in an inconsistent state. Instead, tasks and services can be coded so that, when requested, they clean up any work currently in progress and then terminate. This provides greater flexibility, since the task code itself is usually better able to assess the cleanup required than is the code requesting cancellation.
7.1. Task Cancellation
Cancellation flag plus polling.
A task that wants to be cancellable must have a cancellation policy that specifies the "how", "when", and "what" of cancellation.
There is nothing in the API or language specification that ties interruption to any specific cancellation semantics, but in practice, using interruption for anything but cancellation is fragile and difficult to sustain in larger applications.
Each thread has a boolean interrupted status; interrupting a thread sets its interrupted status to true. Thread contains methods for interrupting a thread and querying the interrupted status of a thread, as shown in Listing 7.4. The interrupt method interrupts the target thread, and isInterrupted returns the interrupted status of the target thread. The poorly named static interrupted method clears the interrupted status of the current thread and returns its previous value; this is the only way to clear the interrupted status.
Blocking library methods like Thread.sleep and Object.wait try to detect when a thread has been interrupted and return early. They respond to interruption by clearing the interrupted status and throwing InterruptedException, indicating that the blocking operation completed early due to interruption. The JVM makes no guarantees on how quickly a blocking method will detect interruption, but in practice this happens reasonably quickly.
Calling interrupt does not necessarily stop the target thread from doing what it is doing; it merely delivers the message that interruption has been requested.
A good way to think about interruption is that it does not actually interrupt a running thread; it just requests that the thread interrupt itself at the next convenient opportunity. (These opportunities are called cancellation points.) Some methods, such as wait, sleep, and join, take such requests seriously, throwing an exception when they receive an interrupt request or encounter an already set interrupt status upon entry. Well behaved methods may totally ignore such requests so long as they leave the interruption request in place so that calling code can do something with it. Poorly behaved methods swallow the interrupt request, thus denying code further up the call stack the opportunity to act on it.
Interruption is usually the most sensible way to implement cancellation.
Because each thread has its own interruption policy, you should not interrupt a thread unless you know what interruption means to that thread.
Only code that implements a thread's interruption policy may swallow an interruption request. General-purpose task and library code should never swallow interruption requests.
Use the Executor framework's facilities (such as Future.cancel) to cancel tasks.
Dealing with Non-interruptible Blocking
Use some other mechanism to make the blocked operation throw an exception and so abort the thread's work.
7.2. Stopping a Thread-based Service
Provide lifecycle methods whenever a thread-owning service has a lifetime longer than that of the method that created it.
Threads have owners, and only the owner should shut a thread down.
In a producer-consumer design, producers and consumers must be shut down together.
A poison pill is equivalent to a cancellation flag, except that the flag is placed in the queue itself.
Limitations of shutdownNow: "However, there is no general way to find out which tasks started but did not complete." You have to track cancelled tasks yourself.
Note the small window between a task actually completing and the task being recorded as complete (this is what makes multithreading hard); keep tasks idempotent.
7.3. Handling Abnormal Thread Termination
Exceptions from pool threads that go unhandled often leave people at a loss.
Common approaches: catch RuntimeException in the task wrapper, or install an uncaught exception handler.
When writing code that will run in a thread, pay attention to how exceptions are handled.
Uncaught Exception Handlers
In long-running applications, always use uncaught exception handlers for all threads that at least log the exception.
Somewhat confusingly, exceptions thrown from tasks make it to the uncaught exception handler only for tasks submitted with execute; for tasks submitted with submit, any thrown exception, checked or not, is considered to be part of the task's return status. If a task submitted with submit terminates with an exception, it is rethrown by Future.get, wrapped in an ExecutionException.
7.4. JVM Shutdown
In an orderly shutdown, the JVM first starts all registered shutdown hooks. Shutdown hooks are unstarted threads that are registered with Runtime.addShutdownHook. The JVM makes no guarantees on the order in which shutdown hooks are started. If any application threads (daemon or nondaemon) are still running at shutdown time, they continue to run concurrently with the shutdown process. When all shutdown hooks have completed, the JVM may choose to run finalizers if runFinalizersOnExit is true, and then halts. The JVM makes no attempt to stop or interrupt any application threads that are still running at shutdown time; they are abruptly terminated when the JVM eventually halts. If the shutdown hooks or finalizers don't complete, then the orderly shutdown process "hangs" and the JVM must be shut down abruptly. In an abrupt shutdown, the JVM is not required to do anything other than halt the JVM; shutdown hooks will not run.
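Registering a hook is a one-liner; this sketch also unregisters it so the demonstration has an observable result (a real hook would release resources rather than print):

```java
// Shutdown hooks are unstarted threads registered with the Runtime;
// the JVM starts them all, in no guaranteed order, on orderly shutdown.
public class Hooks {
    public static boolean registerAndRemove() {
        Thread hook = new Thread(() -> System.out.println("shutting down"));
        Runtime.getRuntime().addShutdownHook(hook);
        // removeShutdownHook returns true if the hook had been registered.
        return Runtime.getRuntime().removeShutdownHook(hook);
    }
}
```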
Daemon threads are not a good substitute for properly managing the lifecycle of services within an application.
Avoid finalizers.


Chapter 8. Applying Thread Pools

8.1. Implicit Couplings Between Tasks and Execution Policies
Some tasks have characteristics that require or preclude a specific execution policy. Tasks that depend on other tasks require that the thread pool be large enough that tasks are never queued or rejected; tasks that exploit thread confinement require sequential execution. Document these requirements so that future maintainers do not undermine safety or liveness by substituting an incompatible execution policy.
Whenever you submit to an Executor tasks that are not independent, be aware of the possibility of thread starvation deadlock, and document any pool sizing or configuration constraints in the code or configuration file where the Executor is configured.
Long-running Tasks: watch the impact of long tasks on responsiveness.
8.2. Sizing Thread Pools
The sizing parameters depend on the nature of the tasks and on the configuration of the deployment environment.
Different resource pools can constrain one another, e.g., the thread pool and the database connection pool.
8.3. Configuring ThreadPoolExecutor
The core pool size, maximum pool size, and keep-alive time govern thread creation and teardown. The core size is the target size; the implementation attempts to maintain the pool at this size even when there are no tasks to execute,[2] and will not create more threads than this unless the work queue is full.[3] The maximum pool size is the upper bound on how many pool threads can be active at once. A thread that has been idle for longer than the keep-alive time becomes a candidate for reaping and can be terminated if the current pool size exceeds the core size.
8.4. Extending ThreadPoolExecutor
ThreadPoolExecutor was designed for extension, providing several "hooks" for subclasses to override (beforeExecute, afterExecute, and terminated) that can be used to extend the behavior of ThreadPoolExecutor.
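A sketch of a hook override; counting completed tasks is my own example, while the book's TimingThreadPool uses the same hooks for per-task timing and logging:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// afterExecute runs in the pool thread after each task completes,
// whether it finished normally or threw.
public class CountingThreadPool extends ThreadPoolExecutor {
    public final AtomicInteger completed = new AtomicInteger();

    public CountingThreadPool() {
        super(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        try {
            completed.incrementAndGet();
        } finally {
            super.afterExecute(r, t);
        }
    }
}
```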
8.5. Parallelizing Recursive Algorithms
Sequential loop iterations are suitable for parallelization when each iteration is independent of the others and the work done in each iteration of the loop body is significant enough to offset the cost of managing a new task.


Chapter 9. GUI Applications

9.1. Why are GUIs Single-threaded?
The Swing single-thread rule: Swing components and models should be created, modified, and queried only from the event-dispatching thread.
9.2. Short-running GUI Tasks
9.3. Long-running GUI Tasks
9.4. Shared Data Models
Consider a split-model design when a data model must be shared by more than one thread and implementing a thread-safe data model would be inadvisable because of blocking, consistency, or complexity reasons.
9.5. Other Forms of Single-threaded Subsystems


Chapter 10. Avoiding Liveness Hazards

10.1. Deadlock
A program will be free of lock-ordering deadlocks if all threads acquire the locks they need in a fixed global order.
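One way to impose such a global order on dynamically chosen locks, along the lines of the book's induce-lock-order idea, is to order by System.identityHashCode; the tie-breaking third lock the book uses for hash collisions is omitted here for brevity:

```java
public class Transfer {
    public static void withOrderedLocks(Object a, Object b, Runnable action) {
        Object first = a, second = b;
        if (System.identityHashCode(a) > System.identityHashCode(b)) {
            first = b;
            second = a;
        }
        // Every caller acquires the two locks in the same global order,
        // so no cycle (and hence no lock-ordering deadlock) can form.
        synchronized (first) {
            synchronized (second) {
                action.run();
            }
        }
    }
}
```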
Invoking an alien method with a lock held is asking for liveness trouble. The alien method might acquire other locks (risking deadlock) or block for an unexpectedly long time, stalling other threads that need the lock you hold.
Calling a method with no locks held is called an open call [CPJ 2.4.1.3], and classes that rely on open calls are more well-behaved and composable than classes that make calls with locks held.
Strive to use open calls throughout your program. Programs that rely on open calls are far easier to analyze for deadlock-freedom than those that allow calls to alien methods with locks held.
10.2. Avoiding and Diagnosing Deadlocks
Acquire locks in a consistent order.
Timed Lock Attempts
Deadlock Analysis with Thread Dumps
10.3. Other Liveness Hazards
Starvation
Avoid the temptation to use thread priorities, since they increase platform dependence and can cause liveness problems. Most concurrent applications can use the default priority for all threads.
Poor Responsiveness
Livelock is a form of liveness failure in which a thread, while not blocked, still cannot make progress because it keeps retrying an operation that will always fail.
The solution for this variety of livelock is to introduce some randomness into the retry mechanism.


Chapter 11. Performance and Scalability

11.1. Thinking about Performance
Scalability describes the ability to improve throughput or capacity when additional computing resources (such as additional CPUs, memory, storage, or I/O bandwidth) are added.
This is one of the reasons why most optimizations are premature: they are often undertaken before a clear set of requirements is available.
Avoid premature optimization. First make it right, then make it fast, if it is not already fast enough.
Measure, don't guess.
11.2. Amdahl's Law
All concurrent applications have some sources of serialization; if you think yours does not, look again.
11.3. Costs Introduced by Threads
Don't worry excessively about the cost of uncontended synchronization. The basic mechanism is already quite fast, and JVMs can perform additional optimizations that further reduce or eliminate the cost. Instead, focus optimization efforts on areas where lock contention actually occurs.
11.4. Reducing Lock Contention
The principal threat to scalability in concurrent applications is the exclusive resource lock.
There are three ways to reduce lock contention:
• Reduce the duration for which locks are held;
• Reduce the frequency with which locks are requested; or
• Replace exclusive locks with coordination mechanisms that permit greater concurrency.
Narrowing Lock Scope
Reducing Lock Granularity
Lock Striping
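Lock striping can be sketched as follows (a simplified, hypothetical StripedMap; real implementations such as pre-Java-8 ConcurrentHashMap used a similar scheme with 16 stripes): each of N locks guards its own subset of the buckets, so threads operating on different stripes proceed concurrently.

```java
import java.util.HashMap;

public class StripedMap<K, V> {
    private static final int N_STRIPES = 16;
    private final HashMap<K, V>[] buckets;   // one bucket map per stripe
    private final Object[] locks = new Object[N_STRIPES];

    @SuppressWarnings("unchecked")
    public StripedMap() {
        buckets = (HashMap<K, V>[]) new HashMap[N_STRIPES];
        for (int i = 0; i < N_STRIPES; i++) {
            buckets[i] = new HashMap<>();
            locks[i] = new Object();
        }
    }

    // Map a key's hash to a stripe index (mask off the sign bit first).
    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % N_STRIPES;
    }

    public V put(K key, V value) {
        int s = stripeFor(key);
        synchronized (locks[s]) { return buckets[s].put(key, value); }
    }

    public V get(K key) {
        int s = stripeFor(key);
        synchronized (locks[s]) { return buckets[s].get(key); }
    }
}
```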
Avoiding Hot Fields
Alternatives to Exclusive Locks
Monitoring CPU Utilization
Just Say No to Object Pooling
Allocating objects is usually cheaper than synchronizing.
Even taking into account its reduced garbage collection overhead, object pooling has been shown to be a performance loss[14] for all but the most expensive objects (and a serious loss for light- and medium-weight objects) in single-threaded programs (Click, 2005).
11.5. Example: Comparing Map Performance
11.6. Reducing Context Switch Overhead


Chapter 12. Testing Concurrent Programs

Most tests of concurrent classes fall into one or both of the classic categories of safety and liveness. In Chapter 1, we defined safety as "nothing bad ever happens" and liveness as "something good eventually happens".
12.1. Testing for Correctness
The result of Thread.getState should not be used for concurrency control, and is of limited usefulness for testing; its primary utility is as a source of debugging information.
Constructing tests to disclose safety errors in concurrent classes is a chicken-and-egg problem: the test programs themselves are concurrent programs. Developing good concurrent tests can be more difficult than developing the classes they test.

The challenge to constructing effective safety tests for concurrent classes is identifying easily checked properties that will, with high probability, fail if something goes wrong, while at the same time not letting the failure-auditing code limit concurrency artificially. It is best if checking the test property does not require any synchronization.

Tests should be run on multiprocessor systems to increase the diversity of potential interleavings. However, having more than a few CPUs does not necessarily make tests more effective. To maximize the chance of detecting timing-sensitive data races, there should be more active threads than CPUs, so that at any given time some threads are running and some are switched out, thus reducing the predictability of interactions between threads.

Since many of the potential failures in concurrent code are low-probability events, testing for concurrency errors is a numbers game, but there are some things you can do to improve your chances. We've already mentioned how running on multiprocessor systems with fewer processors than active threads can generate more interleavings than either a single-processor system or one with many processors. Similarly, testing on a variety of systems with different processor counts, operating systems, and processor architectures can disclose problems that might not occur on all systems.

A useful trick for increasing the number of interleavings, and therefore more effectively exploring the state space of your programs, is to use Thread.yield to encourage more context switches during operations that access shared state. (The effectiveness of this technique is platform-specific, since the JVM is free to treat Thread.yield as a no-op [JLS 17.9]; using a short but nonzero sleep would be slower but more reliable.)
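A sketch of this trick (the helper class and constants are illustrative): a test build calls the hook between operations on shared state, forcing extra context switches at random.

```java
import java.util.Random;

public class YieldInjection {
    private static final Random random = new Random();
    private static final int YIELD_ODDS = 2;   // yield roughly half the time

    // Sprinkle calls to this between accesses to shared state in test code
    // to widen the range of interleavings explored. The JVM may still treat
    // Thread.yield as a no-op, so this only *encourages* switches.
    static void maybeYield() {
        if (random.nextInt(YIELD_ODDS) == 0)
            Thread.yield();
    }
}
```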


12.2. Testing for Performance
Measure throughput, response time, and scalability.
Don't trust intuition; test.
12.3. Avoiding Performance Testing Pitfalls
1 Avoid the effects of garbage collection on the measurement.
2 Avoid the effects of JIT compilation and other runtime optimizations.
3 Avoid unrealistic code path sampling.

For example, the JVM can use monomorphic call transformation to convert a virtual method call to a direct method call if no classes currently loaded override that method, but it invalidates the compiled code if a class is subsequently loaded that overrides the method.

4 Unrealistic Degrees of Contention

Concurrent applications tend to interleave two very different sorts of work: accessing shared data, such as fetching the next task from a shared work queue, and thread-local computation (executing the task, assuming the task itself does not access shared data). Depending on the relative proportions of the two types of work, the application will experience different levels of contention and exhibit different performance and scaling behaviors.

If N threads are fetching tasks from a shared work queue and executing them, and the tasks are compute-intensive and long-running (and do not access shared data very much), there will be almost no contention; throughput is dominated by the availability of CPU resources. On the other hand, if the tasks are very short-lived, there will be a lot of contention for the work queue and throughput is dominated by the cost of synchronization.

5 Dead Code Elimination

Writing effective performance tests requires tricking the optimizer into not optimizing away your benchmark as dead code. This requires every computed result to be used somehow by your program, in a way that does not require synchronization or substantial computation.
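One cheap way to "use" a result, sketched below (the sink class is illustrative): compare its hash code to the current time and, in the astronomically unlikely event of a match, print something. The comparison is too cheap to distort the benchmark, but the JIT cannot prove the result unused.

```java
public class BenchmarkSink {
    // Consume a computed result so the optimizer cannot eliminate the
    // computation that produced it as dead code. The branch is almost
    // never taken, so the cost is a hash and a compare.
    static void consume(Object result) {
        if (result.hashCode() == (int) System.nanoTime())
            System.out.print(" ");
    }
}
```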

12.4. Complementary Testing Approaches
Code review.
Static Analysis Tools
Aspect-oriented Testing Techniques
Profilers and Monitoring Tools


Chapter 13. Explicit Locks

13.1. Lock and ReentrantLock
13.2. Performance Considerations
Performance is a moving target; yesterday's benchmark showing that X is faster than Y may already be out of date today.
13.3. Fairness
Don't pay for fairness if you don't need it.
13.4. Choosing Between Synchronized and ReentrantLock
ReentrantLock is an advanced tool for situations where intrinsic locking is not practical. Use it if you need its advanced features: timed, polled, or interruptible lock acquisition, fair queueing, or non-block-structured locking. Otherwise, prefer synchronized.
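The key discipline ReentrantLock demands, shown in this minimal sketch (the counter class is illustrative): unlike synchronized, the lock is not released automatically, so unlock must sit in a finally block.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final Lock lock = new ReentrantLock();
    private int count;

    // Canonical ReentrantLock idiom: lock(), then do the work inside
    // try, and always release in finally so an exception cannot leave
    // the lock held forever.
    public int increment() {
        lock.lock();
        try {
            return ++count;
        } finally {
            lock.unlock();
        }
    }
}
```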
13.5. Read-write Locks


Chapter 14. Building Custom Synchronizers

14.1. Managing State Dependence
14.2. Using Condition Queues
Document the condition predicate(s) associated with a condition queue and the operations that wait on them.

Every call to wait is implicitly associated with a specific condition predicate. When calling wait regarding a particular condition predicate, 
the caller must already hold the lock associated with the condition queue, and that lock must also guard the state variables from which the 
condition predicate is composed.

As if the three-way relationship among the lock, the condition predicate, and the condition queue were not complicated enough, that wait 
returns does not necessarily mean that the condition predicate the thread is waiting for has become true.
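This is why state-dependent code must always retest the predicate in a loop after wait returns. A minimal bounded buffer, patterned loosely on the book's examples, shows the canonical form:

```java
public class BoundedBuffer<V> {
    private final Object[] items;
    private int head, tail, count;

    public BoundedBuffer(int capacity) { items = new Object[capacity]; }

    // Canonical state-dependent wait: test the condition predicate in a
    // loop, because wait can return while the predicate is still false
    // (spurious wakeup, or another thread consumed the state first).
    public synchronized void put(V v) throws InterruptedException {
        while (count == items.length)    // predicate: buffer not full
            wait();
        items[tail] = v;
        tail = (tail + 1) % items.length;
        count++;
        notifyAll();                     // state changed; wake waiters
    }

    @SuppressWarnings("unchecked")
    public synchronized V take() throws InterruptedException {
        while (count == 0)               // predicate: buffer not empty
            wait();
        V v = (V) items[head];
        items[head] = null;
        head = (head + 1) % items.length;
        count--;
        notifyAll();
        return v;
    }
}
```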

Waking Up Too Soon
Missed Signals
Whenever you wait on a condition, make sure that someone will perform a notification whenever the condition predicate becomes true.

14.3. Explicit Condition Objects
A Condition created from an explicit Lock is unrelated to the object's own intrinsic lock.
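A small sketch (the gate class is illustrative): the Condition comes from Lock.newCondition, and await/signalAll are used under that Lock, not via the object's intrinsic monitor. One Lock can hold several Conditions, one per predicate.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionGate {
    private final Lock lock = new ReentrantLock();
    // Explicit condition object, tied to `lock`, not to `this`'s monitor.
    private final Condition opened = lock.newCondition();
    private boolean isOpen;

    public void open() {
        lock.lock();
        try {
            isOpen = true;
            opened.signalAll();   // wake threads waiting on this predicate
        } finally {
            lock.unlock();
        }
    }

    // Timed wait; the same rule applies as with wait(): loop on the predicate.
    public boolean awaitOpen(long millis) {
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(millis);
            while (!isOpen) {
                if (nanos <= 0) return false;
                nanos = opened.awaitNanos(nanos);
            }
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            lock.unlock();
        }
    }
}
```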

14.4. Anatomy of a Synchronizer
14.5. AbstractQueuedSynchronizer
14.6. AQS in java.util.concurrent Synchronizer Classes


Chapter 15. Atomic Variables and Nonblocking Synchronization

15.1. Disadvantages of Locking
15.2. Hardware Support for Concurrency
Compare and Swap
15.3. Atomic Variable Classes

The performance reversal between locks and atomics at differing levels of contention illustrates the strengths and weaknesses of each. With 
low to moderate contention, atomics offer better scalability; with high contention, locks offer better contention avoidance. (CAS-based 
algorithms also outperform lock-based ones on single-CPU systems, since a CAS always succeeds on a single-CPU system except in the unlikely 
case that a thread is preempted in the middle of the read-modify-write operation.)
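The CAS retry loop at the heart of these classes can be sketched with AtomicInteger.compareAndSet (the counter class is illustrative; AtomicInteger already provides incrementAndGet, shown here expanded to make the pattern visible):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Classic CAS loop: read the current value, compute the new value,
    // and install it only if nothing changed in between; on failure,
    // retry against the freshly observed value instead of blocking.
    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next))
                return next;
        }
    }

    public int get() { return value.get(); }
}
```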

We can improve scalability by dealing more effectively with contention, but true scalability is achieved only by eliminating contention 
entirely.

15.4. Nonblocking Algorithms


Chapter 16. The Java Memory Model

16.1. What is a Memory Model, and Why would I Want One?
The Java Language Specification requires the JVM to maintain within-thread as-if-serial semantics: as long as the program has the same result as if it were executed in program
order in a strictly sequential environment, all these games are permissible.

One convenient mental model for program execution is to imagine that there is a single order in which the operations happen in a program, regardless of what processor they 
execute on, and that each read of a variable will see the last write in the execution order to that variable by any processor. This happy, if unrealistic, model is called 
sequential consistency. Software developers often mistakenly assume sequential consistency, but no modern multiprocessor offers sequential consistency and the JMM does not 
either. The classic sequential computing model, the von Neumann model, is only a vague approximation of how modern multiprocessors behave.

The Java Memory Model is specified in terms of actions, which include reads and writes to variables, locks and unlocks of monitors, and starting and joining with threads. The 
JMM defines a partial ordering [2] called happens-before on all actions within the program. To guarantee that the thread executing action B can see the results of action A 
(whether or not A and B occur in different threads), there must be a happens-before relationship between A and B. In the absence of a happens-before ordering between two 
operations, the JVM is free to reorder them as it pleases.

A data race occurs when a variable is read by more than one thread, and written by at least one thread, but the reads and writes are not ordered by happens-before. A correctly 
synchronized program is one with no data races; correctly synchronized programs exhibit sequential consistency, meaning that all actions within the program appear to happen in a 
fixed, global order.

The rules for happens-before are:


• Program order rule. Each action in a thread happens-before every action in that thread that comes later in the program order.
• Monitor lock rule. An unlock on a monitor lock happens-before every subsequent lock on that same monitor lock.[3]
• Volatile variable rule. A write to a volatile field happens-before every subsequent read of that same field.[4]
• Thread start rule. A call to Thread.start on a thread happens-before every action in the started thread.
• Thread termination rule. Any action in a thread happens-before any other thread detects that that thread has terminated, either by successfully returning from Thread.join or by Thread.isAlive returning false.
• Interruption rule. A thread calling interrupt on another thread happens-before the interrupted thread detects the interrupt (either by having InterruptedException thrown, or invoking isInterrupted or interrupted).
• Finalizer rule. The end of a constructor for an object happens-before the start of the finalizer for that object.
• Transitivity. If A happens-before B, and B happens-before C, then A happens-before C.
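A short sketch of how the program order rule, the volatile variable rule, and transitivity combine (the class and field names are illustrative): the ordinary write to number is ordered before the volatile write to ready, which is ordered before any read that sees ready as true, so such a reader is guaranteed to also see number == 42.

```java
public class VisibilityExample {
    private static int number;
    private static volatile boolean ready;

    static void writer() {
        number = 42;     // (1) ordinary write, program-ordered before...
        ready = true;    // (2) ...the volatile write
    }

    static int reader() {
        while (!ready)   // (3) volatile read; once it observes true,
            Thread.yield();
        return number;   // (4) ...this read must see 42, by transitivity
    }
}
```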


16.2. Publication
With the exception of immutable objects, it is not safe to use an object that has been initialized by another thread unless the publication happens-before the consuming thread
uses it.

This happens-before guarantee is actually a stronger promise of visibility and ordering than made by safe publication. When X is safely published from A to B, the safe 
publication guarantees visibility of the state of X, but not of the state of other variables A may have touched. But if A putting X on a queue happens-before B fetches X from 
that queue, not only does B see X in the state that A left it (assuming that X has not been subsequently modified by A or anyone else), but B sees everything A did before the 
handoff (again, subject to the same caveat).[5]

The treatment of static fields with initializers (or fields whose value is initialized in a static initialization block [JPL 2.2.1 and 2.5.3]) is somewhat special and offers 
additional thread-safety guarantees. Static initializers are run by the JVM at class initialization time, after class loading but before the class is used by any thread. Because 
the JVM acquires a lock during initialization [JLS 12.4.2] and this lock is acquired by each thread at least once to ensure that the class has been loaded, memory writes made 
during static initialization are automatically visible to all threads. Thus statically initialized objects require no explicit synchronization either during construction or when 
being referenced. However, this applies only to the as-constructed state; if the object is mutable, synchronization is still required by both readers and writers to make
subsequent modifications visible and to avoid data corruption.


16.3. Initialization Safety
Initialization safety guarantees that for properly constructed objects, all threads will see the correct values of final fields that were set by the constructor, regardless of 
how the object is published. Further, any variables that can be reached through a final field of a properly constructed object (such as the elements of a final array or the 
contents of a HashMap referenced by a final field) are also guaranteed to be visible to other threads. [6]

Initialization safety makes visibility guarantees only for the values that are reachable through final fields as of the time the constructor finishes. For values reachable 
through nonfinal fields, or values that may change after construction, you must use synchronization to ensure visibility.
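A sketch in the spirit of the book's SafeStates example: because the field is final and the map is fully populated before the constructor completes, every thread sees the map and its contents correctly no matter how the object is published, but only as long as the map is never modified after construction.

```java
import java.util.HashMap;
import java.util.Map;

public class SafeStates {
    // Final field, fully initialized inside the constructor: initialization
    // safety guarantees its as-constructed contents are visible to all
    // threads, regardless of how this object is published. A nonfinal
    // field, or post-construction mutation, would need synchronization.
    private final Map<String, String> states;

    public SafeStates() {
        states = new HashMap<>();
        states.put("alaska", "AK");
        states.put("alabama", "AL");
    }

    public String getAbbreviation(String name) {
        return states.get(name);
    }
}
```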
