AggregateEventHandler.java
Wraps a list of EventHandlers (it behaves like an EventHandler list itself) and also manages their lifecycle via onStart and onShutdown.
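A minimal sketch of the idea, assuming the library's EventHandler and LifecycleAware interfaces (an illustration, not the actual LMAX source):

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.LifecycleAware;

// Fan each event out to a list of handlers and forward the lifecycle callbacks.
public final class AggregateEventHandlerSketch<T> implements EventHandler<T>, LifecycleAware
{
    private final EventHandler<T>[] eventHandlers;

    public AggregateEventHandlerSketch(final EventHandler<T>... eventHandlers)
    {
        this.eventHandlers = eventHandlers;
    }

    @Override
    public void onEvent(final T event, final long sequence, final boolean endOfBatch) throws Exception
    {
        for (final EventHandler<T> handler : eventHandlers)
        {
            handler.onEvent(event, sequence, endOfBatch);
        }
    }

    @Override
    public void onStart()
    {
        for (final EventHandler<T> handler : eventHandlers)
        {
            if (handler instanceof LifecycleAware)
            {
                ((LifecycleAware) handler).onStart();
            }
        }
    }

    @Override
    public void onShutdown()
    {
        for (final EventHandler<T> handler : eventHandlers)
        {
            if (handler instanceof LifecycleAware)
            {
                ((LifecycleAware) handler).onShutdown();
            }
        }
    }
}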
Sequence.java
Cache-line-padded sequence counter. The RingBuffer and BatchEventProcessor both use this class for counting.
Padding approach:
public long p1, p2, p3, p4, p5, p6, p7;      // cache line padding (padding1)
private volatile long cursor = INITIAL_CURSOR_VALUE;
public long p8, p9, p10, p11, p12, p13, p14; // cache line padding (padding2)
Case 1: object (0~8 bytes) + padding1, then cursor + padding2
Case 2: padding1 + cursor, then padding2 + object
Either way, different Sequence instances are guaranteed to sit on different cache lines.
Reference: http://mechanical-sympathy.blogspot.com/2011/07/false-sharing.html (per that post, HotSpot lays out an object's fields grouped by size, largest first):
1.doubles(8) and longs(8)
2.ints(4) and floats(4)
3.shorts(2) and chars(2)
4.booleans(1) and bytes(1)
5.references(4/8)
6.<repeat for sub-class fields>
So to pad out a cache line we place 7 longs (8 bytes each) on either side of the field we care about; a standalone sketch of the same trick follows.
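As an illustration (a hypothetical class, not part of the Disruptor source), the same trick applied to an ordinary counter looks like this; with 7 longs on each side of the hot field, two instances updated by two different threads never share a cache line:

// Hypothetical padded counter; like Sequence, each instance is written by a single thread.
public final class PaddedCounter
{
    public long p1, p2, p3, p4, p5, p6, p7;      // cache line padding before the hot field
    private volatile long value = -1L;
    public long p8, p9, p10, p11, p12, p13, p14; // cache line padding after the hot field

    public void set(final long newValue) { value = newValue; }
    public long get()                    { return value; }

    // Summing the padding fields keeps an aggressive JIT from discarding them.
    public long preventOptimisation()
    {
        return p1 + p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9 + p10 + p11 + p12 + p13 + p14;
    }
}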
BatchEventProcessor.java: fetches events from the RingBuffer in batches and delegates them to an EventHandler for processing.
Key code:
public void run()
{
    if (!running.compareAndSet(false, true))
    {
        throw new IllegalStateException("Thread is already running");
    }
    sequenceBarrier.clearAlert();
    notifyStart();

    T event = null;
    long nextSequence = sequence.get() + 1L;
    while (true)
    {
        try
        {
            final long availableSequence = sequenceBarrier.waitFor(nextSequence);
            // batch processing; what if nextSequence grows without bound?
            while (nextSequence <= availableSequence)
            {
                event = ringBuffer.get(nextSequence);
                eventHandler.onEvent(event, nextSequence, nextSequence == availableSequence);
                nextSequence++;
            }
            sequence.set(nextSequence - 1L); // note the -1: marks the event at (nextSequence - 1L) as fully consumed
        }
        catch (final AlertException ex)
        {
            if (!running.get())
            {
                break;
            }
        }
        catch (final Throwable ex)
        {
            exceptionHandler.handleEventException(ex, nextSequence, event); // hand the failure to the exception handler
            sequence.set(nextSequence);  // skip over the sequence that caused the exception
            nextSequence++;
        }
    }

    notifyShutdown();
    running.set(false);
}
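For context, a hedged sketch of how a BatchEventProcessor is wired up and run against a RingBuffer in the 2.x-era API; ValueEvent, its factory and the handler body are placeholders, and the constructor and method names (SingleThreadedClaimStrategy, setGatingSequences, etc.) follow the 2.x API this walkthrough is based on and may differ in later versions:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.lmax.disruptor.*;

public final class BatchEventProcessorExample
{
    // Trivial placeholder event plus its factory.
    static final class ValueEvent
    {
        long value;
        static final EventFactory<ValueEvent> FACTORY = new EventFactory<ValueEvent>()
        {
            public ValueEvent newInstance() { return new ValueEvent(); }
        };
    }

    public static void main(final String[] args) throws Exception
    {
        RingBuffer<ValueEvent> ringBuffer = new RingBuffer<ValueEvent>(
            ValueEvent.FACTORY, new SingleThreadedClaimStrategy(1024), new SleepingWaitStrategy());
        SequenceBarrier barrier = ringBuffer.newBarrier();

        EventHandler<ValueEvent> handler = new EventHandler<ValueEvent>()
        {
            public void onEvent(final ValueEvent event, final long sequence, final boolean endOfBatch)
            {
                System.out.println(sequence + " -> " + event.value);
            }
        };

        BatchEventProcessor<ValueEvent> processor =
            new BatchEventProcessor<ValueEvent>(ringBuffer, barrier, handler);
        // Gate the publisher on the processor so the ring buffer never wraps past it.
        ringBuffer.setGatingSequences(processor.getSequence());

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(processor);

        // Publish one event: claim, fill in place, publish.
        long seq = ringBuffer.next();
        ringBuffer.get(seq).value = 42L;
        ringBuffer.publish(seq);

        Thread.sleep(100); // crude: give the processor a moment to drain
        processor.halt();
        executor.shutdown();
    }
}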
ClaimStrategy.java
The strategy contract used inside the Sequencer for event publishers to claim event sequences.
There are three implementations:
SingleThreadedClaimStrategy.java: a single-threaded implementation of the publisher claim strategy; it can only be used when a single thread acts as the publisher.
Key methods:
// availableCapacity: the number of slots being requested
// dependentSequences: the gating (dependent) sequences
public boolean hasAvailableCapacity(final int availableCapacity, final Sequence[] dependentSequences)
{
    // wrapPoint = sequences already claimed for publishing (not yet consumed) + requested count - bufferSize
    final long wrapPoint = (claimSequence.get() + availableCapacity) - bufferSize;
    if (wrapPoint > minGatingSequence.get())
    {
        long minSequence = getMinimumSequence(dependentSequences); // the smallest of the dependent sequences, i.e. the slowest consumer
        minGatingSequence.set(minSequence);
        if (wrapPoint > minSequence)
        {
            // the requested position is past the slowest consumer, so those slots are still
            // unconsumed and there is nothing free to hand to the publisher
            return false;
        }
    }
    return true;
}

private void waitForFreeSlotAt(final long sequence, final Sequence[] dependentSequences)
{
    final long wrapPoint = sequence - bufferSize;
    if (wrapPoint > minGatingSequence.get())
    {
        long minSequence;
        while (wrapPoint > (minSequence = getMinimumSequence(dependentSequences)))
        {
            LockSupport.parkNanos(1L); // park for 1 nanosecond and retry
        }
        minGatingSequence.set(minSequence);
    }
}
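To make the wrapPoint check concrete, a small worked example with made-up numbers (bufferSize = 8):

// Hypothetical numbers illustrating hasAvailableCapacity() above.
long claimed         = 9L;  // highest sequence already claimed by the publisher
int  requested       = 3;   // availableCapacity being asked for
long slowestConsumer = 3L;  // minimum of the dependent sequences
long wrapPoint = (claimed + requested) - 8; // = 4
boolean ok = wrapPoint <= slowestConsumer;  // 4 <= 3 is false: slot 4 is still unconsumed,
                                            // so the claim is refused (or waitForFreeSlotAt parks)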
MultiThreadedClaimStrategy.java
@Override
public long incrementAndGet(final Sequence[] dependentSequences)
{
    final MutableLong minGatingSequence = minGatingSequenceThreadLocal.get();
    waitForCapacity(dependentSequences, minGatingSequence); // what is the trick here?
    final long nextSequence = claimSequence.incrementAndGet();
    waitForFreeSlotAt(nextSequence, dependentSequences, minGatingSequence);
    return nextSequence;
}

@Override
public long incrementAndGet(final int delta, final Sequence[] dependentSequences)
{
    final long nextSequence = claimSequence.addAndGet(delta);
    waitForFreeSlotAt(nextSequence, dependentSequences, minGatingSequenceThreadLocal.get());
    return nextSequence;
}

@Override
public void serialisePublishing(final long sequence, final Sequence cursor, final int batchSize)
{
    int counter = RETRIES;
    while (sequence - cursor.get() > pendingPublication.length())
    {
        if (--counter == 0)
        {
            Thread.yield();
            counter = RETRIES;
        }
    }

    long expectedSequence = sequence - batchSize;
    for (long pendingSequence = expectedSequence + 1; pendingSequence <= sequence; pendingSequence++)
    {
        pendingPublication.set((int) pendingSequence & pendingMask, pendingSequence);
    }

    long cursorSequence = cursor.get();
    if (cursorSequence >= sequence)
    {
        return;
    }

    expectedSequence = Math.max(expectedSequence, cursorSequence);
    long nextSequence = expectedSequence + 1;
    while (cursor.compareAndSet(expectedSequence, nextSequence))
    {
        expectedSequence = nextSequence;
        nextSequence++;
        // The slot only holds nextSequence if that publication was recorded in pendingPublication
        // above; if it does not, the publisher of nextSequence has not finished yet, so stop
        // advancing the cursor and let that publisher carry on from here.
        if (pendingPublication.get((int) nextSequence & pendingMask) != nextSequence)
        {
            break;
        }
    }
}
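To make the pendingPublication indexing concrete, a small hypothetical demo (the class name and the ring size of 8 are made up; the real strategy sizes its ring from a constructor argument):

import java.util.concurrent.atomic.AtomicLongArray;

public final class PendingRingDemo
{
    public static void main(final String[] args)
    {
        // Stand-in for pendingPublication; the size must be a power of two.
        final AtomicLongArray pending = new AtomicLongArray(8);
        final int pendingMask = 8 - 1;

        // A publisher that finished sequence 13 records it at slot 13 & 7 = 5.
        pending.set((int) 13L & pendingMask, 13L);

        // While advancing the cursor, sequence 14 is only taken over if slot 14 & 7 = 6
        // already holds 14; here it does not, so the loop above would break and leave
        // the cursor for the publisher of 14 to advance.
        System.out.println(pending.get((int) 14L & pendingMask) == 14L); // prints false
    }
}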
MultiThreadedLowContentionClaimStrategy.java
The difference from MultiThreadedClaimStrategy.java is in serialisePublishing:
@Override
public void serialisePublishing(final long sequence, final Sequence cursor, final int batchSize)
{
    final long expectedSequence = sequence - batchSize;
    // Could this spin forever? It only exits once every sequence claimed before this batch
    // has been published, so it terminates as long as all claimants eventually publish.
    while (expectedSequence != cursor.get())
    {
        // busy spin
    }
    cursor.set(sequence);
}
EventPublisher.java
Event publisher. Main code:
private void translateAndPublish(final EventTranslator<E> translator, final long sequence)
{
    try
    {
        // the supplied translator fills in the pre-allocated event for this sequence before it is published
        translator.translateTo(ringBuffer.get(sequence), sequence);
    }
    finally
    {
        ringBuffer.publish(sequence);
    }
}
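To show what this buys the caller, a hedged, hand-unrolled equivalent of publishing through EventPublisher; MyEvent, setValue() and publishValue() are made-up placeholders, while next(), get() and publish() are the RingBuffer calls used above:

// Claim a slot, fill the pre-allocated event in place ("translate"), and publish in a
// finally block so a claimed sequence is never left unpublished even if translation throws.
static void publishValue(final RingBuffer<MyEvent> ringBuffer, final long value)
{
    final long sequence = ringBuffer.next();
    try
    {
        ringBuffer.get(sequence).setValue(value); // the translateTo() step
    }
    finally
    {
        ringBuffer.publish(sequence);
    }
}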
WaitStrategy.java
The strategy an EventProcessor uses while waiting for the cursor sequence to advance. There are four implementations:
/**
 * Blocking strategy that uses a lock and condition variable for {@link EventProcessor}s waiting on a barrier.
 *
 * This strategy can be used when throughput and low-latency are not as important as CPU resource.
*/
BlockingWaitStrategy.java: uses a lock, so it is only suitable when throughput and low latency are not critical.
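A simplified, hypothetical sketch of the lock-and-condition idea (alerting, timeouts and the spin on dependent sequences in the real class are left out):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Consumers block on a condition until the cursor reaches the sequence they need;
// the publisher signals after moving the cursor. Cheap on CPU, but every wakeup
// pays for lock hand-off and a possible context switch.
final class SimpleBlockingWait
{
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition processorNotifyCondition = lock.newCondition();
    private long cursor = -1L;

    long waitFor(final long sequence) throws InterruptedException
    {
        lock.lock();
        try
        {
            while (cursor < sequence)
            {
                processorNotifyCondition.await();
            }
            return cursor;
        }
        finally
        {
            lock.unlock();
        }
    }

    void publish(final long sequence)
    {
        lock.lock();
        try
        {
            cursor = sequence;
            processorNotifyCondition.signalAll();
        }
        finally
        {
            lock.unlock();
        }
    }
}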
/**
 * Busy Spin strategy that uses a busy spin loop for {@link com.lmax.disruptor.EventProcessor}s waiting on a barrier.
 *
 * This strategy will use CPU resource to avoid syscalls which can introduce latency jitter. It is best
 * used when threads can be bound to specific CPU cores.
*/
BusySpinWaitStrategy.java: the CPU-hungry option; it busy-spins and does not even call yield().
/**
 * Sleeping strategy that initially spins, then uses a Thread.yield(), and eventually sleeps for the minimum number of nanos
 * the OS and JVM will allow while the {@link com.lmax.disruptor.EventProcessor}s are waiting on a barrier.
 *
 * This strategy is a good compromise between performance and CPU resource. Latency spikes can occur after quiet periods.
*/
SleepingWaitStrategy.java: drives a counter down: above 100 it just spins, from 100 down to 1 it calls Thread.yield(), and once it reaches 0 it falls back to LockSupport.parkNanos(1L). A sketch of this back-off follows.
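A hedged sketch of that tiered back-off (the RETRIES value and method names are assumptions, not the exact LMAX source; YieldingWaitStrategy below is the same shape without the parking tier):

import java.util.concurrent.locks.LockSupport;

// Spin first, then yield, then park for the smallest interval the OS/JVM allows.
final class SleepingBackoffSketch
{
    private static final int RETRIES = 200;   // assumed starting value

    // Called once per pass of the waitFor() loop until the awaited sequence is available.
    int applyWaitMethod(int counter)
    {
        if (counter > 100)
        {
            --counter;                  // busy spin
        }
        else if (counter > 0)
        {
            --counter;
            Thread.yield();             // give up the time slice
        }
        else
        {
            LockSupport.parkNanos(1L);  // minimal sleep; latency spikes can follow quiet periods
        }
        return counter;
    }

    int startCounter() { return RETRIES; }
}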
/**
 * Yielding strategy that uses a Thread.yield() for {@link com.lmax.disruptor.EventProcessor}s waiting on a barrier
 * after an initial spin.
 *
 * This strategy is a good compromise between performance and CPU resource without incurring significant latency spikes.
*/
YieldingWaitStrategy.java: only starts calling Thread.yield() once its spin counter has counted down to 0.