The merge engine is responsible for merging result sets, allowing applications to access a correct ResultSet through the standard JDBC interface. In data-sharding mode a SQL statement may need to execute on multiple data nodes, and the result sets returned by those nodes are independent of one another, so operations such as sorting, grouping, and aggregation require the result sets to be merged, shielding the application from the differences introduced by the multiple backend databases. For example, a query such as SELECT * FROM t_order ORDER BY order_id routed to two data nodes returns two independently sorted result sets that must be merged into a single, globally ordered one.
Code Execution Analysis
The merge engine corresponds to the MergeEngine class, but the actual processing is done by MergeEntry; its instance, merger, is created in the constructor.
org.apache.shardingsphere.underlying.pluggble.merge.MergeEngine
private final Collection<BaseRule> rules;
private final MergeEntry merger;
public MergeEngine(final Collection<BaseRule> rules, final ConfigurationProperties properties, final DatabaseType databaseType, final SchemaMetaData metaData) {
this.rules = rules;
merger = new MergeEntry(databaseType, metaData, properties);
}
/**
* Merge.
*
* @param queryResults query results
* @param sqlStatementContext SQL statement context
* @return merged result
* @throws SQLException SQL exception
*/
public MergedResult merge(final List<QueryResult> queryResults, final SQLStatementContext sqlStatementContext) throws SQLException {
registerMergeDecorator();
return merger.process(queryResults, sqlStatementContext);
}
private void registerMergeDecorator() {
for (Class<? extends ResultProcessEngine> each : OrderedRegistry.getRegisteredClasses(ResultProcessEngine.class)) {
ResultProcessEngine processEngine = createProcessEngine(each);
Class<?> ruleClass = (Class<?>) processEngine.getType();
// FIXME rule.getClass().getSuperclass() == ruleClass for orchestration, should decouple extend between orchestration rule and sharding rule
rules.stream().filter(rule -> rule.getClass() == ruleClass || rule.getClass().getSuperclass() == ruleClass).collect(Collectors.toList())
.forEach(rule -> merger.registerProcessEngine(rule, processEngine));
}
}
private ResultProcessEngine createProcessEngine(final Class<? extends ResultProcessEngine> processEngine) {
try {
return processEngine.newInstance();
} catch (final InstantiationException | IllegalAccessException ex) {
throw new ShardingSphereException(String.format("Can not find public default constructor for result process engine `%s`", processEngine), ex);
}
}
}
The merge method first registers the merge decorators: based on the types of the BaseRule instances passed in, the required ResultProcessEngine implementations are instantiated and added to the engines map of the MergeEntry instance. The actual processing logic lives in the process method of the MergeEntry instance.
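Before stepping into MergeEntry, here is a hedged sketch (illustrative only, not code from the project) of how a caller might drive the engine; rules, properties, databaseType, metaData, queryResults, and sqlStatementContext would all be produced by the earlier routing and execution phases:
// Illustrative usage only: wiring up the engine and reading the merged result.
MergeEngine mergeEngine = new MergeEngine(rules, properties, databaseType, metaData);
MergedResult mergedResult = mergeEngine.merge(queryResults, sqlStatementContext);
while (mergedResult.next()) { // iterate the merged rows just like a JDBC ResultSet
    Object value = mergedResult.getValue(1, Object.class); // read column 1 of the current row
}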
In version 4.1.1, the only implementation of the result merger engine interface ResultMergerEngine is ShardingResultMergerEngine, and the only implementation of the result decorator interface ResultDecoratorEngine is EncryptResultDecoratorEngine.
Next, let's look at the process method of the MergeEntry class.
org.apache.shardingsphere.underlying.merge.MergeEntry
@RequiredArgsConstructor
public final class MergeEntry {
private final DatabaseType databaseType;
private final SchemaMetaData schemaMetaData;
private final ConfigurationProperties properties;
private final Map<BaseRule, ResultProcessEngine> engines = new LinkedHashMap<>();
…
/**
* Process query results.
*
* @param queryResults query results
* @param sqlStatementContext SQL statement context
* @return merged result
* @throws SQLException SQL exception
*/
public MergedResult process(final List<QueryResult> queryResults, final SQLStatementContext sqlStatementContext) throws SQLException {
Optional<MergedResult> mergedResult = merge(queryResults, sqlStatementContext);
Optional<MergedResult> result = mergedResult.isPresent() ? Optional.of(decorate(mergedResult.get(), sqlStatementContext)) : decorate(queryResults.get(0), sqlStatementContext);
return result.orElseGet(() -> new TransparentMergedResult(queryResults.get(0)));
}
private Optional<MergedResult> merge(final List<QueryResult> queryResults, final SQLStatementContext sqlStatementContext) throws SQLException {
for (Entry<BaseRule, ResultProcessEngine> entry : engines.entrySet()) {
if (entry.getValue() instanceof ResultMergerEngine) {
ResultMerger resultMerger = ((ResultMergerEngine) entry.getValue()).newInstance(databaseType, entry.getKey(), properties, sqlStatementContext);
return Optional.of(resultMerger.merge(queryResults, sqlStatementContext, schemaMetaData));
}
}
return Optional.empty();
}
@SuppressWarnings("unchecked")
private MergedResult decorate(final MergedResult mergedResult, final SQLStatementContext sqlStatementContext) throws SQLException {
MergedResult result = null;
for (Entry<BaseRule, ResultProcessEngine> entry : engines.entrySet()) {
if (entry.getValue() instanceof ResultDecoratorEngine) {
ResultDecorator resultDecorator = ((ResultDecoratorEngine) entry.getValue()).newInstance(databaseType, schemaMetaData, entry.getKey(), properties, sqlStatementContext);
result = null == result ? resultDecorator.decorate(mergedResult, sqlStatementContext, schemaMetaData) : resultDecorator.decorate(result, sqlStatementContext, schemaMetaData);
}
}
return null == result ? mergedResult : result;
}
@SuppressWarnings("unchecked")
private Optional<MergedResult> decorate(final QueryResult queryResult, final SQLStatementContext sqlStatementContext) throws SQLException {
MergedResult result = null;
for (Entry<BaseRule, ResultProcessEngine> entry : engines.entrySet()) {
if (entry.getValue() instanceof ResultDecoratorEngine) {
ResultDecorator resultDecorator = ((ResultDecoratorEngine) entry.getValue()).newInstance(databaseType, schemaMetaData, entry.getKey(), properties, sqlStatementContext);
result = null == result ? resultDecorator.decorate(queryResult, sqlStatementContext, schemaMetaData) : resultDecorator.decorate(result, sqlStatementContext, schemaMetaData);
}
}
return Optional.ofNullable(result);
}
The main logic of the process method has two parts:
Part 1: Iterate to find the registered ResultMergerEngine instance (there can be only one), call its newInstance method to obtain the corresponding ResultMerger, and then execute the ResultMerger's merge method to produce the merged result set, a MergedResult instance.
Stepping into the newInstance method of ShardingResultMergerEngine, we can see that a different ResultMerger is created depending on the kind of SQLStatementContext: if the SQL is a SELECT, a ShardingDQLResultMerger is created; if it is a DAL statement (SHOW DATABASES, SHOW TABLES, and so on), a ShardingDALResultMerger is created.
org.apache.shardingsphere.sharding.merge.ShardingResultMergerEngine
/**
* Result merger engine for sharding.
*/
public final class ShardingResultMergerEngine implements ResultMergerEngine<ShardingRule> {
@Override
public ResultMerger newInstance(final DatabaseType databaseType, final ShardingRule shardingRule, final ConfigurationProperties properties, final SQLStatementContext sqlStatementContext) {
if (sqlStatementContext instanceof SelectStatementContext) {
return new ShardingDQLResultMerger(databaseType);
}
if (sqlStatementContext.getSqlStatement() instanceof DALStatement) {
return new ShardingDALResultMerger(shardingRule);
}
return new TransparentResultMerger();
}
…
}
Next, let's look at the most common case, the result merger for SELECT statements.
org.apache.shardingsphere.sharding.merge.dql.ShardingDQLResultMerger
public final class ShardingDQLResultMerger implements ResultMerger {
private final DatabaseType databaseType;
@Override
public MergedResult merge(final List<QueryResult> queryResults, final SQLStatementContext sqlStatementContext, final SchemaMetaData schemaMetaData) throws SQLException {
if (1 == queryResults.size()) {
return new IteratorStreamMergedResult(queryResults);
}
Map<String, Integer> columnLabelIndexMap = getColumnLabelIndexMap(queryResults.get(0));
SelectStatementContext selectStatementContext = (SelectStatementContext) sqlStatementContext;
selectStatementContext.setIndexes(columnLabelIndexMap);
MergedResult mergedResult = build(queryResults, selectStatementContext, columnLabelIndexMap, schemaMetaData); // build the merged result
return decorate(queryResults, selectStatementContext, mergedResult); // decorate the merged result (pagination)
}
…
private MergedResult build(final List<QueryResult> queryResults, final SelectStatementContext selectStatementContext,
final Map<String, Integer> columnLabelIndexMap, final SchemaMetaData schemaMetaData) throws SQLException {
if (isNeedProcessGroupBy(selectStatementContext)) {
return getGroupByMergedResult(queryResults, selectStatementContext, columnLabelIndexMap, schemaMetaData);
}
if (isNeedProcessDistinctRow(selectStatementContext)) {
setGroupByForDistinctRow(selectStatementContext);
return getGroupByMergedResult(queryResults, selectStatementContext, columnLabelIndexMap, schemaMetaData);
}
if (isNeedProcessOrderBy(selectStatementContext)) {
return new OrderByStreamMergedResult(queryResults, selectStatementContext, schemaMetaData);
}
return new IteratorStreamMergedResult(queryResults);
}
…
private MergedResult getGroupByMergedResult(final List<QueryResult> queryResults, final SelectStatementContext selectStatementContext,
final Map<String, Integer> columnLabelIndexMap, final SchemaMetaData schemaMetaData) throws SQLException {
return selectStatementContext.isSameGroupByAndOrderByItems()
? new GroupByStreamMergedResult(columnLabelIndexMap, queryResults, selectStatementContext, schemaMetaData)
: new GroupByMemoryMergedResult(queryResults, selectStatementContext, schemaMetaData);
}
private boolean isNeedProcessOrderBy(final SelectStatementContext selectStatementContext) {
return !selectStatementContext.getOrderByContext().getItems().isEmpty();
}
private MergedResult decorate(final List<QueryResult> queryResults, final SelectStatementContext selectStatementContext, final MergedResult mergedResult) throws SQLException {
PaginationContext paginationContext = selectStatementContext.getPaginationContext();
if (!paginationContext.isHasPagination() || 1 == queryResults.size()) {
return mergedResult;
}
String trunkDatabaseName = DatabaseTypes.getTrunkDatabaseType(databaseType.getName()).getName();
if ("MySQL".equals(trunkDatabaseName) || "PostgreSQL".equals(trunkDatabaseName)) {
return new LimitDecoratorMergedResult(mergedResult, paginationContext);
}
if ("Oracle".equals(trunkDatabaseName)) {
return new RowNumberDecoratorMergedResult(mergedResult, paginationContext);
}
if ("SQLServer".equals(trunkDatabaseName)) {
return new TopAndRowNumberDecoratorMergedResult(mergedResult, paginationContext);
}
return mergedResult;
}
}
Looking at the merge method of ShardingDQLResultMerger:
- It first builds a map from each columnLabel to its positional index and passes it to SelectStatementContext.setIndexes, which records the index of the corresponding columnLabel for each aggregation projection, group-by item, and order-by item; these indexes are later accessed by classes such as GroupByStreamMergedResult.
- The build method checks whether group by, distinct, or order by needs to be processed, and returns the corresponding MergedResult implementation class; these implementations hold the real merge logic. The getGroupByMergedResult method checks whether the group-by and order-by columns are identical: if so, it creates a GroupByStreamMergedResult, otherwise a GroupByMemoryMergedResult. The former works in streaming mode, the latter in memory mode (illustrated right after this list).
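To make the dispatch in build concrete, here are a few illustrative SQL shapes (table and column names are hypothetical, not from the source) and the MergedResult implementation they would map to:
// SELECT user_id, SUM(amount) FROM t_order GROUP BY user_id ORDER BY user_id
//   -> group-by and order-by items match -> GroupByStreamMergedResult (streaming)
// SELECT user_id, SUM(amount) FROM t_order GROUP BY user_id ORDER BY SUM(amount)
//   -> group-by and order-by items differ -> GroupByMemoryMergedResult (in-memory)
// SELECT DISTINCT user_id FROM t_order
//   -> distinct is rewritten as a group by -> one of the two group-by results above
// SELECT * FROM t_order ORDER BY create_time
//   -> order by only -> OrderByStreamMergedResult
// SELECT * FROM t_order
//   -> none of the above -> IteratorStreamMergedResult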
Let's look at the implementation of the GroupByStreamMergedResult class.
org.apache.shardingsphere.sharding.merge.dql.groupby.GroupByStreamMergedResult
/**
* Stream merged result for group by.
*/
public final class GroupByStreamMergedResult extends OrderByStreamMergedResult {
private final SelectStatementContext selectStatementContext;
private final List<Object> currentRow;
…
GroupByStreamMergedResult extends the OrderByStreamMergedResult class, which implements the stream-based, ordered merged result.
org.apache.shardingsphere.sharding.merge.dql.orderby.OrderByStreamMergedResult
/**
* Stream merged result for order by.
*/
public class OrderByStreamMergedResult extends StreamMergedResult {
private final Collection<OrderByItem> orderByItems;
@Getter(AccessLevel.PROTECTED)
private final Queue<OrderByValue> orderByValuesQueue;
@Getter(AccessLevel.PROTECTED)
private boolean isFirstNext;
public OrderByStreamMergedResult(final List<QueryResult> queryResults, final SelectStatementContext selectStatementContext, final SchemaMetaData schemaMetaData) throws SQLException {
this.orderByItems = selectStatementContext.getOrderByContext().getItems();
this.orderByValuesQueue = new PriorityQueue<>(queryResults.size());
orderResultSetsToQueue(queryResults, selectStatementContext, schemaMetaData);
isFirstNext = true;
}
// Take one row from each data node's result set and offer it to the priority queue, then set the current query result to the queue head. Because the priority queue keeps its elements ordered, the head polled each time is the smallest according to the sort order.
private void orderResultSetsToQueue(final List<QueryResult> queryResults, final SelectStatementContext selectStatementContext, final SchemaMetaData schemaMetaData) throws SQLException {
for (QueryResult each : queryResults) {
OrderByValue orderByValue = new OrderByValue(each, orderByItems, selectStatementContext, schemaMetaData);
if (orderByValue.next()) {
orderByValuesQueue.offer(orderByValue);
}
}
setCurrentQueryResult(orderByValuesQueue.isEmpty() ? queryResults.get(0) : orderByValuesQueue.peek().getQueryResult());
}
@Override
public boolean next() throws SQLException {
if (orderByValuesQueue.isEmpty()) {
return false;
}
if (isFirstNext) {
isFirstNext = false;
return true;
}
// Poll the queue head, then advance to the next row of the result set it came from; if there is one, offer it back into the priority queue.
OrderByValue firstOrderByValue = orderByValuesQueue.poll();
if (firstOrderByValue.next()) {
orderByValuesQueue.offer(firstOrderByValue);
}
if (orderByValuesQueue.isEmpty()) {
return false;
}
setCurrentQueryResult(orderByValuesQueue.peek().getQueryResult());
return true;
}
}
OrderByStreamMergedResult cleverly uses a priority queue to implement stream-based sorting. Since each result set is already sorted, next() pops the queue head and then pushes the next row from the same result set back into the queue; reading data is simply a matter of reading the value of the current queue head.
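The same idea can be shown in isolation. Below is a minimal, runnable k-way merge sketch in plain Java (not ShardingSphere code): each pre-sorted list plays the role of one data node's result set, and Cursor plays the role of OrderByValue:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;

public final class KWayMergeSketch {

    // One cursor per sorted source, comparable by its current value (like OrderByValue).
    private static final class Cursor implements Comparable<Cursor> {

        private final Iterator<Integer> iterator;

        private Integer current;

        private Cursor(final List<Integer> sortedSource) {
            iterator = sortedSource.iterator();
        }

        private boolean advance() {
            if (iterator.hasNext()) {
                current = iterator.next();
                return true;
            }
            return false;
        }

        @Override
        public int compareTo(final Cursor other) {
            return current.compareTo(other.current);
        }
    }

    public static List<Integer> merge(final List<List<Integer>> sortedSources) {
        Queue<Cursor> queue = new PriorityQueue<>();
        for (List<Integer> each : sortedSources) {
            Cursor cursor = new Cursor(each);
            if (cursor.advance()) {
                queue.offer(cursor);
            }
        }
        List<Integer> result = new ArrayList<>();
        while (!queue.isEmpty()) {
            Cursor smallest = queue.poll(); // the heap head is the global minimum
            result.add(smallest.current);
            if (smallest.advance()) { // push the next value from the same source back
                queue.offer(smallest);
            }
        }
        return result;
    }

    public static void main(final String[] args) {
        // prints [1, 2, 3, 4, 5, 6, 7, 8]
        System.out.println(merge(Arrays.asList(Arrays.asList(1, 4, 7), Arrays.asList(2, 5, 8), Arrays.asList(3, 6))));
    }
}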
Note that this priority queue is a min-heap, while the actual sort may be either ascending or descending. How, then, are correct results guaranteed for both directions? The trick lies in the queue element class OrderByValue: it implements the Comparable interface and orders elements correctly according to the sort direction.
org.apache.shardingsphere.sharding.merge.dql.orderby.OrderByValue
public final class OrderByValue implements Comparable<OrderByValue> {
…
public int compareTo(final OrderByValue o) {
int i = 0;
for (OrderByItem each : orderByItems) {
int result = CompareUtil.compareTo(orderValues.get(i), o.orderValues.get(i), each.getSegment().getOrderDirection(),
each.getSegment().getNullOrderDirection(), orderValuesCaseSensitive.get(i));
if (0 != result) {
return result;
}
i++;
}
return 0;
}
}
In the compareTo method of the CompareUtil class, the comparison result is returned according to the sort direction: for ascending order the raw comparison result is returned directly, and for descending order it is negated, so that the min-heap surfaces the largest value first.
public final class CompareUtil {
/**
* Compare two object with order type.
*
* @param thisValue this value
* @param otherValue other value
* @param orderDirection order direction
* @param nullOrderDirection order direction for null value
* @param caseSensitive case sensitive
* @return compare result
*/
@SuppressWarnings("unchecked")
public static int compareTo(final Comparable thisValue, final Comparable otherValue, final OrderDirection orderDirection, final OrderDirection nullOrderDirection, final boolean caseSensitive) {
if (null == thisValue && null == otherValue) {
return 0;
}
if (null == thisValue) {
return orderDirection == nullOrderDirection ? -1 : 1;
}
if (null == otherValue) {
return orderDirection == nullOrderDirection ? 1 : -1;
}
if (!caseSensitive && thisValue instanceof String && otherValue instanceof String) {
return compareToCaseInsensitiveString((String) thisValue, (String) otherValue, orderDirection);
}
return OrderDirection.ASC == orderDirection ? thisValue.compareTo(otherValue) : -thisValue.compareTo(otherValue); // negate the comparison result for descending order
}
}
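A quick illustration of why negation is enough (plain Java, not project code): the queue is always a min-heap, so inverting the comparison result makes the largest element look like the smallest:
int asc = Integer.compare(3, 5);   // -1: ascending, 3 is polled before 5
int desc = -Integer.compare(3, 5); //  1: descending, 5 is polled before 3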
When the group-by and order-by columns differ, the memory-based merge is used; the corresponding merged result class is org.apache.shardingsphere.sharding.merge.dql.groupby.GroupByMemoryMergedResult.
/**
* Memory merged result for group by.
*/
public final class GroupByMemoryMergedResult extends MemoryMergedResult<ShardingRule> {
public GroupByMemoryMergedResult(final List<QueryResult> queryResults, final SelectStatementContext selectStatementContext, final SchemaMetaData schemaMetaData) throws SQLException {
super(null, schemaMetaData, selectStatementContext, queryResults);
}
@Override
protected List<MemoryQueryResultRow> init(final ShardingRule shardingRule,
final SchemaMetaData schemaMetaData, final SQLStatementContext sqlStatementContext, final List<QueryResult> queryResults) throws SQLException {
SelectStatementContext selectStatementContext = (SelectStatementContext) sqlStatementContext;
Map<GroupByValue, MemoryQueryResultRow> dataMap = new HashMap<>(1024);
Map<GroupByValue, Map<AggregationProjection, AggregationUnit>> aggregationMap = new HashMap<>(1024);
for (QueryResult each : queryResults) { // iterate over every row of every result set and aggregate
while (each.next()) {
GroupByValue groupByValue = new GroupByValue(each, selectStatementContext.getGroupByContext().getItems());
initForFirstGroupByValue(selectStatementContext, each, groupByValue, dataMap, aggregationMap); // create the AggregationUnits for a newly seen GroupByValue
aggregate(selectStatementContext, each, groupByValue, aggregationMap); // run the aggregation for each group
}
}
setAggregationValueToMemoryRow(selectStatementContext, dataMap, aggregationMap); // write the computed aggregation values into the memory rows held in dataMap
List<Boolean> valueCaseSensitive = queryResults.isEmpty() ? Collections.emptyList() : getValueCaseSensitive(queryResults.iterator().next(), selectStatementContext, schemaMetaData);
return getMemoryResultSetRows(selectStatementContext, dataMap, valueCaseSensitive);
}
…
// Sort the values of dataMap and produce a List, which becomes the internal data of the memory merged result.
private List<MemoryQueryResultRow> getMemoryResultSetRows(final SelectStatementContext selectStatementContext,
final Map<GroupByValue, MemoryQueryResultRow> dataMap, final List<Boolean> valueCaseSensitive) {
List<MemoryQueryResultRow> result = new ArrayList<>(dataMap.values());
result.sort(new GroupByRowComparator(selectStatementContext, valueCaseSensitive));
return result;
}
}
Compared with the stream-based merge, the difference is that every row of every result set must be read, grouped by GroupByValue, and then aggregated.
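The essence of the memory mode can be reduced to a few lines. The following sketch is plain Java, not ShardingSphere code, with made-up table data: each shard has already aggregated per group, and the merger must re-aggregate across shards and then sort, which is why every row has to be loaded first:
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public final class MemoryGroupBySketch {

    public static void main(final String[] args) {
        // each int[] is one partially aggregated row from a shard: {userId, sum(amount)}
        List<int[]> shard0 = Arrays.asList(new int[]{1, 10}, new int[]{2, 20});
        List<int[]> shard1 = Arrays.asList(new int[]{2, 5}, new int[]{1, 7});
        Map<Integer, Integer> merged = new HashMap<>();
        for (List<int[]> shard : Arrays.asList(shard0, shard1)) {
            for (int[] row : shard) {
                merged.merge(row[0], row[1], Integer::sum); // re-aggregate per group key
            }
        }
        // sort in memory to honor ORDER BY (here: by user id); prints 1 -> 17 then 2 -> 25
        new TreeMap<>(merged).forEach((userId, sum) -> System.out.println(userId + " -> " + sum));
    }
}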
As mentioned above, an AggregationUnit instance is created according to the aggregation type; the corresponding factory class is org.apache.shardingsphere.sharding.merge.dql.groupby.aggregation.AggregationUnitFactory.
public final class AggregationUnitFactory {
/**
* Create aggregation unit instance.
*
* @param type aggregation function type
* @param isDistinct is distinct
* @return aggregation unit instance
*/
public static AggregationUnit create(final AggregationType type, final boolean isDistinct) {
switch (type) {
case MAX:
return new ComparableAggregationUnit(false);
case MIN:
return new ComparableAggregationUnit(true);
case SUM:
return isDistinct ? new DistinctSumAggregationUnit() : new AccumulationAggregationUnit();
case COUNT:
return isDistinct ? new DistinctCountAggregationUnit() : new AccumulationAggregationUnit();
case AVG:
return isDistinct ? new DistinctAverageAggregationUnit() : new AverageAggregationUnit();
default:
throw new UnsupportedOperationException(type.name());
}
}
}
The logic of these AggregationUnit implementations is quite simple: they mainly accumulate the sums of the collection elements passed in. The distinct variants use an internal HashSet for de-duplication, and for the AVG aggregation the passed-in values contain two elements, the count and the sum.
org.apache.shardingsphere.sharding.merge.dql.groupby.aggregation.AccumulationAggregationUnit
/**
* Accumulation aggregation unit.
*/
@RequiredArgsConstructor
public final class AccumulationAggregationUnit implements AggregationUnit {
private BigDecimal result;
@Override
public void merge(final List<Comparable<?>> values) {
if (null == values || null == values.get(0)) {
return;
}
if (null == result) {
result = new BigDecimal("0");
}
result = result.add(new BigDecimal(values.get(0).toString()));
}
@Override
public Comparable<?> getResult() {
return result;
}
}
org.apache.shardingsphere.sharding.merge.dql.groupby.aggregation.DistinctAverageAggregationUnit
/**
* Distinct average aggregation unit.
*/
@RequiredArgsConstructor
public final class DistinctAverageAggregationUnit implements AggregationUnit {
private BigDecimal count;
private BigDecimal sum;
private Collection<Comparable<?>> countValues = new LinkedHashSet<>();
private Collection<Comparable<?>> sumValues = new LinkedHashSet<>();
@Override
public void merge(final List<Comparable<?>> values) {
if (null == values || null == values.get(0) || null == values.get(1)) {
return;
}
if (this.countValues.add(values.get(0)) && this.sumValues.add(values.get(0))) {
if (null == count) {
count = new BigDecimal("0");
}
if (null == sum) {
sum = new BigDecimal("0");
}
count = count.add(new BigDecimal(values.get(0).toString()));
sum = sum.add(new BigDecimal(values.get(1).toString()));
}
}
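To see why AVG is fed a count and a sum instead of per-shard averages, here is a small worked example with made-up numbers; averaging the shard-local averages would be wrong whenever the shards hold different row counts:
import java.math.BigDecimal;
import java.math.RoundingMode;

public final class AvgMergeSketch {

    public static void main(final String[] args) {
        // shard A: amounts 1, 2    -> count = 2, sum = 3,  local avg = 1.5
        // shard B: amounts 3, 4, 5 -> count = 3, sum = 12, local avg = 4.0
        // wrong: (1.5 + 4.0) / 2 = 2.75; right: (3 + 12) / (2 + 3) = 3
        BigDecimal totalSum = new BigDecimal(3).add(new BigDecimal(12));
        BigDecimal totalCount = new BigDecimal(2).add(new BigDecimal(3));
        System.out.println(totalSum.divide(totalCount, 2, RoundingMode.HALF_UP)); // 3.00
    }
}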
Since the logic of these classes is straightforward, we won't go through their code in detail here.
After the merged result is produced in stream or memory mode, one final decoration step handles pagination. Its core operation is to call next() on the merged result offset times, according to the offset. This is needed because, when a pagination query spans multiple data nodes, the rewrite phase sets the offset to 0, so the first offset rows must be skipped before rows are returned to the application.
org.apache.shardingsphere.sharding.merge.dql.pagination.LimitDecoratorMergedResult
/**
* Decorator merged result for limit pagination.
*/
public final class LimitDecoratorMergedResult extends DecoratorMergedResult {
private final PaginationContext pagination;
private final boolean skipAll;
private int rowNumber;
public LimitDecoratorMergedResult(final MergedResult mergedResult, final PaginationContext pagination) throws SQLException {
super(mergedResult);
this.pagination = pagination;
skipAll = skipOffset();
}
private boolean skipOffset() throws SQLException {
for (int i = 0; i < pagination.getActualOffset(); i++) {
if (!getMergedResult().next()) { // if the total number of rows is smaller than the offset, set skipAll to true: the entire result set is skipped and subsequent next() calls return false directly
return true;
}
}
rowNumber = 0;
return false;
}
@Override
public boolean next() throws SQLException {
if (skipAll) {
return false;
}
if (!pagination.getActualRowCount().isPresent()) {
return getMergedResult().next();
}
// Because the offset was rewritten, the total number of rows returned by the data nodes can exceed the row count specified in the SQL, so next() tracks the number of rows already returned (rowNumber) and ensures it never exceeds the row count from the SQL.
return ++rowNumber <= pagination.getActualRowCount().get() && getMergedResult().next();
}
}
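A concrete walk-through of the offset handling (the SQL, table name, and numbers are illustrative):
// original SQL : SELECT * FROM t_order ORDER BY order_id LIMIT 10, 5
// rewritten SQL: SELECT * FROM t_order ORDER BY order_id LIMIT 0, 15 (sent to each data node)
// With 2 data nodes, the merged ordered stream may contain up to 30 rows.
// skipOffset() consumes the first 10 of them, and next() then serves at most
// rowCount = 5 rows (tracked by rowNumber) to the application.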
Part 2: Back in the decorate method of MergeEntry, the registered ResultDecoratorEngine instances are iterated; for each, newInstance creates a ResultDecorator whose decorate method then post-processes the MergedResult produced in part 1. The only current implementation of the ResultDecoratorEngine interface is org.apache.shardingsphere.encrypt.merge.EncryptResultDecoratorEngine.
/**
* Result decorator engine for encrypt.
*/
public final class EncryptResultDecoratorEngine implements ResultDecoratorEngine<EncryptRule> {
@Override
public ResultDecorator newInstance(final DatabaseType databaseType, final SchemaMetaData schemaMetaData,
final EncryptRule encryptRule, final ConfigurationProperties properties, final SQLStatementContext sqlStatementContext) {
if (sqlStatementContext instanceof SelectStatementContext) {
return new EncryptDQLResultDecorator(
new EncryptorMetaData(schemaMetaData, encryptRule, (SelectStatementContext) sqlStatementContext), properties.getValue(ConfigurationPropertyKey.QUERY_WITH_CIPHER_COLUMN));
}
if (sqlStatementContext.getSqlStatement() instanceof DALStatement) {
return new EncryptDALResultDecorator();
}
return new TransparentResultDecorator();
}
…
}
We can see that it creates an org.apache.shardingsphere.encrypt.merge.dql.EncryptDQLResultDecorator instance.
/**
* DQL result decorator for encrypt.
*/
@RequiredArgsConstructor
public final class EncryptDQLResultDecorator implements ResultDecorator {
private final EncryptorMetaData encryptorMetaData;
private final boolean queryWithCipherColumn;
@Override
public MergedResult decorate(final QueryResult queryResult, final SQLStatementContext sqlStatementContext, final SchemaMetaData schemaMetaData) {
return new EncryptMergedResult(encryptorMetaData, new TransparentMergedResult(queryResult), queryWithCipherColumn);
}
@Override
public MergedResult decorate(final MergedResult mergedResult, final SQLStatementContext sqlStatementContext, final SchemaMetaData schemaMetaData) {
return new EncryptMergedResult(encryptorMetaData, mergedResult, queryWithCipherColumn);
}
}
The decorate methods simply create and return an EncryptMergedResult. The logic of this class is also clear: it implements the MergedResult interface, and in getValue it first checks whether decryption is needed; if so, it looks up the corresponding encryptor, decrypts the value, and returns the plaintext.
org.apache.shardingsphere.encrypt.merge.dql.EncryptMergedResult
/**
* Merged result for encrypt.
*/
@RequiredArgsConstructor
public final class EncryptMergedResult implements MergedResult {
private final EncryptorMetaData metaData;
private final MergedResult mergedResult;
private final boolean queryWithCipherColumn;
@Override
public boolean next() throws SQLException {
return mergedResult.next();
}
@Override
public Object getValue(final int columnIndex, final Class<?> type) throws SQLException {
if (!queryWithCipherColumn) {
return mergedResult.getValue(columnIndex, type);
}
Optional<Encryptor> encryptor = metaData.findEncryptor(columnIndex);
if (!encryptor.isPresent()) {
return mergedResult.getValue(columnIndex, type);
}
String ciphertext = (String) mergedResult.getValue(columnIndex, String.class);
return null == ciphertext ? null : encryptor.get().decrypt(ciphertext);
}
}
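Putting the two parts together: when a sharding rule and an encrypt rule are configured at the same time, the object handed back to the application is conceptually a chain of wrappers, and every next()/getValue() call flows down through it:
// EncryptMergedResult                    <- added by the decorate step (part 2)
//   └── GroupByStreamMergedResult (etc.) <- produced by the merge step (part 1)
//         └── per-data-node QueryResults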
A detailed analysis of the encryption source code is beyond the scope of this article.
The official documentation includes a flowchart of the merge engine (not reproduced here). For more on the principles of the merge engine, see the official documentation: https://shardingsphere.apache.org/document/current/cn/features/sharding/principle/merge/