The background behind Spring Cache is quite similar to the background behind Spring itself. Because the Java EE stack was bloated and inefficient, with hard-to-read code and complicated object creation and dependency wiring, the Spring framework emerged; today almost every Java back-end project is built on Spring or Spring Boot (a further simplification of Spring). Projects now run into high-concurrency problems more and more often, and all kinds of caches are used more widely, so a simpler, more convenient way to support caching on top of the general-purpose Spring framework was needed. That is how Spring Cache came about.
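To show what that convenience looks like, here is a minimal usage sketch (the class, cache name and key expression are assumptions, not from the original text): the annotations are all a caller writes, and the method body only runs on a cache miss.

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    // Runs the body only when the key "#id" is not already cached in "users"
    @Cacheable(cacheNames = "users", key = "#id")
    public String findById(Long id) {
        return loadFromDatabase(id);
    }

    // Removes the cached entry so the next findById call goes back to the database
    @CacheEvict(cacheNames = "users", key = "#id")
    public void delete(Long id) {
        // delete from the underlying store (omitted)
    }

    private String loadFromDatabase(Long id) {
        return "user-" + id;   // stand-in for a real database lookup
    }
}

The execute(...) method below, taken from Spring's CacheAspectSupport, is the code that actually drives these annotations at runtime.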
private Object execute(final CacheOperationInvoker invoker, Method method, CacheOperationContexts contexts) {
    // Special handling of synchronized invocation
    if (contexts.isSynchronized()) {
        CacheOperationContext context = contexts.get(CacheableOperation.class).iterator().next();
        if (isConditionPassing(context, CacheOperationExpressionEvaluator.NO_RESULT)) {
            Object key = generateKey(context, CacheOperationExpressionEvaluator.NO_RESULT);
            Cache cache = context.getCaches().iterator().next();
            try {
                return wrapCacheValue(method, cache.get(key, new Callable<Object>() {
                    @Override
                    public Object call() throws Exception {
                        return unwrapReturnValue(invokeOperation(invoker));
                    }
                }));
            }
            catch (Cache.ValueRetrievalException ex) {
                // The invoker wraps any Throwable in a ThrowableWrapper instance so we
                // can just make sure that one bubbles up the stack.
                throw (CacheOperationInvoker.ThrowableWrapper) ex.getCause();
            }
        }
        else {
            // No caching required, only call the underlying method
            return invokeOperation(invoker);
        }
    }

    // Process any early evictions
    processCacheEvicts(contexts.get(CacheEvictOperation.class), true,
            CacheOperationExpressionEvaluator.NO_RESULT);

    // Check if we have a cached item matching the conditions
    Cache.ValueWrapper cacheHit = findCachedItem(contexts.get(CacheableOperation.class));

    // Collect puts from any @Cacheable miss, if no cached item is found
    List<CachePutRequest> cachePutRequests = new LinkedList<CachePutRequest>();
    if (cacheHit == null) {
        collectPutRequests(contexts.get(CacheableOperation.class),
                CacheOperationExpressionEvaluator.NO_RESULT, cachePutRequests);
    }

    Object cacheValue;
    Object returnValue;

    if (cacheHit != null && cachePutRequests.isEmpty() && !hasCachePut(contexts)) {
        // If there are no put requests, just use the cache hit
        cacheValue = cacheHit.get();
        returnValue = wrapCacheValue(method, cacheValue);
    }
    else {
        // Invoke the method if we don't have a cache hit
        returnValue = invokeOperation(invoker);
        cacheValue = unwrapReturnValue(returnValue);
    }

    // Collect any explicit @CachePuts
    collectPutRequests(contexts.get(CachePutOperation.class), cacheValue, cachePutRequests);

    // Process any collected put requests, either from @CachePut or a @Cacheable miss
    for (CachePutRequest cachePutRequest : cachePutRequests) {
        cachePutRequest.apply(cacheValue);
    }

    // Process any late evictions
    processCacheEvicts(contexts.get(CacheEvictOperation.class), false, cacheValue);

    return returnValue;
}
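The Cache instances the aspect pulls from context.getCaches() come from a CacheManager bean. A minimal configuration sketch (the cache name is an assumption): @EnableCaching switches on the interception shown above, and the CacheManager supplies the caches it operates on.

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // Simple in-memory caches backed by ConcurrentHashMap
        return new ConcurrentMapCacheManager("users");
    }
}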
1. How aggregateByKey works
/**
 * Aggregate the values of each key, using given combine functions and a neutral "zero value".
 * This function can return a different result type, U, than the type of the values in this
 * RDD, V: seqOp merges a V into a U within each partition, and combOp merges two U's
 * across partitions.
 */
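To make the mechanism concrete, here is a small sketch using the Spark Java API (the data, class name and partition count are made up for illustration): the zero value seeds a per-key (sum, count) accumulator, the first function (seqOp) folds each value into the accumulator inside a partition, and the second (combOp) merges partial accumulators coming from different partitions.

import java.util.Arrays;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class AggregateByKeyDemo {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "AggregateByKeyDemo");

        JavaPairRDD<String, Integer> scores = sc.parallelizePairs(Arrays.asList(
                new Tuple2<>("a", 1), new Tuple2<>("a", 3), new Tuple2<>("b", 2)), 2);

        JavaPairRDD<String, Tuple2<Integer, Integer>> sumAndCount = scores.aggregateByKey(
                new Tuple2<Integer, Integer>(0, 0),
                // seqOp: merge one value into the (sum, count) accumulator within a partition
                (acc, v) -> new Tuple2<Integer, Integer>(acc._1() + v, acc._2() + 1),
                // combOp: merge two accumulators coming from different partitions
                (a, b) -> new Tuple2<Integer, Integer>(a._1() + b._1(), a._2() + b._2()));

        for (Tuple2<String, Tuple2<Integer, Integer>> t : sumAndCount.collect()) {
            System.out.println(t._1() + " -> sum=" + t._2()._1() + ", count=" + t._2()._2());
        }
        sc.stop();
    }
}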
spark-sql is an executable script in Spark's bin directory. Its purpose is to run Hive commands through this script: statements that used to be typed at the hive> prompt can be entered at the spark-sql> prompt instead.
spark-sql can use the Hive metastore built into Spark, or the metastore of a Hive installation that has been set up separately.
About the Hive build that ships inside Spark:
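Pre-built Spark distributions normally include Hive support; without a hive-site.xml on the classpath Spark falls back to an embedded Derby metastore, and with one it connects to the external metastore it points at. The same Hive-backed SQL that the spark-sql> shell runs can also be issued programmatically; a minimal Java sketch (class name and query are illustrative):

import org.apache.spark.sql.SparkSession;

public class SparkSqlHiveDemo {
    public static void main(String[] args) {
        // enableHiveSupport() makes Spark SQL read table definitions from the Hive metastore
        SparkSession spark = SparkSession.builder()
                .appName("SparkSqlHiveDemo")
                .master("local[*]")
                .enableHiveSupport()
                .getOrCreate();

        // The same statements typed at the spark-sql> prompt can be run here
        spark.sql("SHOW TABLES").show();
        spark.stop();
    }
}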
// Max value in an array
var arr = [1, 2, 3, 5, 3, 2];
Math.max.apply(null, arr); // 5

// Max value of the "y" attribute in an array of JSON objects
var arr = [{"x":"8/11/2009","y":0.026572007} /* ...more objects... */];
Math.max.apply(null, arr.map(function (o) { return o.y; })); // largest y value
Before you can use an XMLHttpRequest object to send a request and handle the response, you must first create the XMLHttpRequest object in JavaScript.
var xmlhttp;
function getXMLHttpRequest() {
    if (window.ActiveXObject) {           // older IE: XHR is exposed through ActiveX
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    } else if (window.XMLHttpRequest) {   // standards-compliant browsers
        xmlhttp = new XMLHttpRequest();
    }
}