A consumer-initiated call goes through the following steps:
1: The interface call itself, e.g. DemoService.demoMethod
2: InvokerInvocationHandler.invoke: when the consumer starts, JavassistProxyFactory.getProxy generates the proxy class, and every subsequent service call goes straight through this handler (a simplified sketch of the idea follows this list)
3: MigrationInvoker.invoke: a very important step when Dubbo issues the call; if the call fails, this invoker performs the switch-over
4: Others
5: FailoverClusterInvoker.invoke (the one we currently use; the invoke logic itself is fixed and actually lives in AbstractClusterInvoker)
6: Others
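To make step 2 concrete, here is a minimal, self-contained sketch of the delegation idea, using only the JDK. It is not Dubbo source: DemoService, SimpleInvoker and the handler below are made-up stand-ins, and Dubbo's real proxy is generated by Javassist and delegates to InvokerInvocationHandler.invoke rather than a JDK dynamic proxy. The point is only that every call on the interface is funneled into one handler, which packages the method name and arguments and hands them to the invoker chain.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;

public class ProxySketch {
    // Hypothetical service interface, stands in for DemoService
    interface DemoService {
        String demoMethod(String name);
    }

    // Stand-in for Dubbo's Invoker: receives method name + args and returns a result
    interface SimpleInvoker {
        Object invoke(String methodName, Object[] args);
    }

    public static void main(String[] args) {
        // A fake "remote" invoker; in Dubbo this would be the MigrationInvoker -> cluster invoker chain
        SimpleInvoker invoker = (methodName, arguments) ->
                "result of " + methodName + Arrays.toString(arguments);

        // The handler plays the role of InvokerInvocationHandler: every interface call lands here
        InvocationHandler handler = (proxy, method, methodArgs) ->
                invoker.invoke(method.getName(), methodArgs);

        DemoService demoService = (DemoService) Proxy.newProxyInstance(
                DemoService.class.getClassLoader(),
                new Class<?>[]{DemoService.class},
                handler);

        // Looks like a local call, but actually goes through the handler/invoker
        System.out.println(demoService.demoMethod("dubbo"));
    }
}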
@Override
public Result invoke(final Invocation invocation) throws RpcException {
    checkWhetherDestroyed();

    // binding attachments into invocation.
    Map<String, Object> contextAttachments = RpcContext.getContext().getObjectAttachments();
    if (contextAttachments != null && contextAttachments.size() != 0) {
        ((RpcInvocation) invocation).addObjectAttachments(contextAttachments);
    }

    // If tag/routing rules are configured, list() returns only the invokers that match them
    List<Invoker<T>> invokers = list(invocation);
    LoadBalance loadbalance = initLoadBalance(invokers, invocation);
    RpcUtils.attachInvocationIdIfAsync(getUrl(), invocation);
    // Do the load-balanced call, with failover/degradation handled by the concrete cluster invoker
    return doInvoke(invocation, invokers, loadbalance);
}
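The attachment binding at the top of invoke() is what carries RpcContext attachments set by business code over to the provider. A minimal usage sketch: the "trace-id" key, the AttachmentExample class and callWithTraceId method are illustrative, and demoService is assumed to be an injected Dubbo reference. Depending on your Dubbo version, RpcContext.getClientAttachment() may be the preferred entry point instead of the deprecated getContext() shortcut.

import org.apache.dubbo.rpc.RpcContext;

public class AttachmentExample {
    // assumed to be injected, e.g. via @DubboReference
    private DemoService demoService;

    public void callWithTraceId() {
        // Consumer side: attachments set here are copied into the RpcInvocation
        // by AbstractClusterInvoker.invoke and travel with the request.
        RpcContext.getContext().setAttachment("trace-id", "8f3a01");
        demoService.demoMethod();
    }
}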
public Result doInvoke(Invocation invocation, final List<Invoker<T>> invokers, LoadBalance loadbalance) throws RpcException {
    List<Invoker<T>> copyInvokers = invokers;
    checkInvokers(copyInvokers, invocation);
    String methodName = RpcUtils.getMethodName(invocation);
    int len = calculateInvokeTimes(methodName);
    // retry loop.
    RpcException le = null; // last exception.
    List<Invoker<T>> invoked = new ArrayList<Invoker<T>>(copyInvokers.size()); // invoked invokers.
    Set<String> providers = new HashSet<String>(len);
    for (int i = 0; i < len; i++) {
        // Reselect before retry to avoid a change of candidate `invokers`.
        // NOTE: if `invokers` changed, then `invoked` also lose accuracy.
        if (i > 0) {
            checkWhetherDestroyed();
            copyInvokers = list(invocation);
            // check again
            checkInvokers(copyInvokers, invocation);
        }
        // Pick one invoker through the configured LoadBalance (already-tried invokers are excluded)
        Invoker<T> invoker = select(loadbalance, invocation, copyInvokers, invoked);
        invoked.add(invoker);
        RpcContext.getContext().setInvokers((List) invoked);
        try {
            Result result = invoker.invoke(invocation);
            if (le != null && logger.isWarnEnabled()) {
                logger.warn("Although retry the method " + methodName
                        + " in the service " + getInterface().getName()
                        + " was successful by the provider " + invoker.getUrl().getAddress()
                        + ", but there have been failed providers " + providers
                        + " (" + providers.size() + "/" + copyInvokers.size()
                        + ") from the registry " + directory.getUrl().getAddress()
                        + " on the consumer " + NetUtils.getLocalHost()
                        + " using the dubbo version " + Version.getVersion() + ". Last error is: "
                        + le.getMessage(), le);
            }
            return result;
        } catch (RpcException e) {
            if (e.isBiz()) { // biz exception.
                throw e;
            }
            le = e;
        } catch (Throwable e) {
            le = new RpcException(e.getMessage(), e);
        } finally {
            providers.add(invoker.getUrl().getAddress());
        }
    }
    throw new RpcException(le.getCode(), "Failed to invoke the method "
            + methodName + " in the service " + getInterface().getName()
            + ". Tried " + len + " times of the providers " + providers
            + " (" + providers.size() + "/" + copyInvokers.size()
            + ") from the registry " + directory.getUrl().getAddress()
            + " on the consumer " + NetUtils.getLocalHost() + " using the dubbo version "
            + Version.getVersion() + ". Last error is: "
            + le.getMessage(), le.getCause() != null ? le.getCause() : le);
}
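calculateInvokeTimes reads the retries parameter and adds 1 for the initial attempt, so the default retries=2 gives up to 3 attempts in the loop above; note also that business exceptions (e.isBiz()) are rethrown immediately and never retried. A configuration sketch, assuming annotation-style configuration (an XML <dubbo:reference retries="2" /> is equivalent); the timeout value is only an example:

// retries = 2 -> up to 3 attempts: one call plus two retries.
// cluster = "failover" is the default and is shown only for clarity.
@DubboReference(cluster = "failover", retries = 2, timeout = 3000)
private DemoService demoService;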
Type 1: ConsistentHashLoadBalance (consistent-hash load balancing)
@Override
protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
    String methodName = RpcUtils.getMethodName(invocation);
    String key = invokers.get(0).getUrl().getServiceKey() + "." + methodName;
    // using the hashcode of list to compute the hash only pay attention to the elements in the list
    int invokersHashCode = getCorrespondingHashCode(invokers);
    ConsistentHashSelector<T> selector = (ConsistentHashSelector<T>) selectors.get(key);
    if (selector == null || selector.identityHashCode != invokersHashCode) {
        selectors.put(key, new ConsistentHashSelector<T>(invokers, methodName, invokersHashCode));
        selector = (ConsistentHashSelector<T>) selectors.get(key);
    }
    return selector.select(invocation);
}
public Invoker<T> select(Invocation invocation) {
    String key = toKey(invocation.getArguments());
    byte[] digest = Bytes.getMD5(key);
    return selectForKey(hash(digest, 0));
}

private Invoker<T> selectForKey(long hash) {
    // virtualInvokers is a TreeMap of virtual nodes; ceilingEntry picks the entry
    // with the smallest key that is >= the given hash (i.e. the next node on the ring)
    Map.Entry<Long, Invoker<T>> entry = virtualInvokers.ceilingEntry(hash);
    if (entry == null) {
        entry = virtualInvokers.firstEntry();
    }

    String serverAddress = entry.getValue().getUrl().getAddress();
    double overloadThread = ((double) totalRequestCount.get() / (double) serverCount) * OVERLOAD_RATIO_THREAD;
    // Overload check: if the selected provider has already taken more than its share of requests,
    // step to the next node on the ring and check again (getNextInvokerNode wraps back to the
    // first entry when it runs off the end of the ring)
    while (serverRequestCountMap.containsKey(serverAddress)
            && serverRequestCountMap.get(serverAddress).get() >= overloadThread) {
        entry = getNextInvokerNode(virtualInvokers, entry);
        serverAddress = entry.getValue().getUrl().getAddress();
    }
    if (!serverRequestCountMap.containsKey(serverAddress)) {
        serverRequestCountMap.put(serverAddress, new AtomicLong(1));
    } else {
        serverRequestCountMap.get(serverAddress).incrementAndGet();
    }
    totalRequestCount.incrementAndGet();

    return entry.getValue();
}
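A quick worked example of the overload check, assuming OVERLOAD_RATIO_THREAD is 1.5 (the ratio is a constant in ConsistentHashLoadBalance; check your Dubbo version for the exact value): with totalRequestCount = 1200 and serverCount = 4, the average is 300 requests per provider, so overloadThread = 300 * 1.5 = 450; a node whose address has already served 450 or more requests is skipped and the next node on the ring is tried.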
To sum up, consistent hashing pre-computes hash values from every invoker's address (via MD5) and places them in a TreeMap; at selection time the current request's arguments are concatenated, MD5-ed and hashed, and the result is mapped to an invoker in the TreeMap.
TreeMap initialization: each address contributes 160 hash values (virtual nodes; replicaNumber defaults to 160) to the TreeMap.
for (Invoker<T> invoker : invokers) {
    String address = invoker.getUrl().getAddress();
    for (int i = 0; i < replicaNumber / 4; i++) {
        byte[] digest = Bytes.getMD5(address + i);
        for (int h = 0; h < 4; h++) {
            long m = hash(digest, h);
            virtualInvokers.put(m, invoker);
        }
    }
}
private static long hash(byte[] digest, int number) {
    return (((long) (digest[3 + number * 4] & 0xFF) << 24)
            | ((long) (digest[2 + number * 4] & 0xFF) << 16)
            | ((long) (digest[1 + number * 4] & 0xFF) << 8)
            | (digest[number * 4] & 0xFF))
            & 0xFFFFFFFFL;
}
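To see the ring being built, here is a small self-contained snippet (plain JDK, no Dubbo types) that reproduces the loop above for a single address: 160 / 4 = 40 MD5 digests, each 16 bytes, sliced into 4 unsigned 32-bit hashes, giving 160 keys for that address. The address value is only an example.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.TreeMap;

public class RingSketch {
    public static void main(String[] args) throws Exception {
        TreeMap<Long, String> ring = new TreeMap<>();
        String address = "10.0.0.1:20880";   // example provider address
        int replicaNumber = 160;              // Dubbo's default hash.nodes

        for (int i = 0; i < replicaNumber / 4; i++) {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest((address + i).getBytes(StandardCharsets.UTF_8));
            for (int h = 0; h < 4; h++) {     // one 16-byte digest yields 4 hashes
                ring.put(hash(digest, h), address);
            }
        }
        System.out.println(address + " -> " + ring.size() + " virtual nodes");
    }

    // Same bit-twiddling as ConsistentHashSelector.hash: 4 bytes -> unsigned 32-bit value in a long
    private static long hash(byte[] digest, int number) {
        return (((long) (digest[3 + number * 4] & 0xFF) << 24)
                | ((long) (digest[2 + number * 4] & 0xFF) << 16)
                | ((long) (digest[1 + number * 4] & 0xFF) << 8)
                | (digest[number * 4] & 0xFF))
                & 0xFFFFFFFFL;
    }
}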
The selection code is shown above. Note that the hash at selection time is computed from the call arguments, so in practice it will almost never land exactly on one of the 160 precomputed hash values. That is why TreeMap.ceilingEntry is used: if the map holds the keys 1, 4, 8 (and larger ones) and the computed hash is 7, ceilingEntry returns the entry at 8.
Summary: take the address of every available instance of an interface, MD5 and hash it, and put 160 hash values per address into the TreeMap. At selection time, hash the arguments and take the invoker at the smallest key greater than or equal to that hash; then, based on the historical request counts, judge its current load: if the node is not overloaded, return it, otherwise move to the next one; if the hash is past the last key, simply fall back to the first entry of the TreeMap.
It is like a clock face: if the minute hand is just past the 10, ceilingEntry returns the 11.
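The clock behaviour is plain TreeMap semantics and can be checked in isolation; the keys and values below are arbitrary:

import java.util.TreeMap;

public class CeilingDemo {
    public static void main(String[] args) {
        TreeMap<Long, String> ring = new TreeMap<>();
        ring.put(1L, "A");
        ring.put(4L, "B");
        ring.put(8L, "C");

        // hash 7 falls between 4 and 8 -> ceilingEntry returns the node at 8
        System.out.println(ring.ceilingEntry(7L).getValue());   // C

        // hash 9 is past the last key -> null, so wrap around to the first entry,
        // exactly what selectForKey does with firstEntry()
        System.out.println(ring.ceilingEntry(9L) == null
                ? ring.firstEntry().getValue() : ring.ceilingEntry(9L).getValue()); // A
    }
}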
This load balancer suits interfaces whose arguments have high cardinality, for example trade orders carrying a serial number.
It is not suitable for interfaces whose arguments are largely fixed, such as those taking only a fixed enum or a handful of distinct values, because all traffic would then concentrate on a few providers.