Preface
Last night I finished the Functions chapter of Clean Code, then used what I had learned to code-review a small framework I wrote a while ago. It went fairly smoothly. Below I summarize my new takeaways, along with the problems that surfaced.
Source comparison (before and after refactoring)
First, the source before and after refactoring; the analysis follows.
First, App.java
Before refactoring
package com.Albert.cache;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

/**
 * @author Albert
 * @create 2018-01-10 21:33
 */
public class App {
    public static void main(String[] args) {
        CacheResult<Long, Long> cacheResult = BlockHardlyCacheResult.createNeedComputeFunction(get_A_TestComputeMethod());
        final CountDownLatch firstComputesStartControl = new CountDownLatch(1);
        final CountDownLatch firstComputesEndMark = new CountDownLatch(1);
        final CountDownLatch secondComputesStartControl = new CountDownLatch(1);
        final CountDownLatch secondComputesEndMark = new CountDownLatch(1);
        final long computeFromOneUntilThisValue = 100000L;
        final ExecutorService executor = Executors.newFixedThreadPool(1);
        executor.execute(runManyComputeAndStartTimeIsControlled(cacheResult, firstComputesStartControl, firstComputesEndMark, computeFromOneUntilThisValue));
        getFirstExecuteTime(firstComputesStartControl, firstComputesEndMark);
        executor.execute(runManyComputeAndStartTimeIsControlled(cacheResult, secondComputesStartControl, secondComputesEndMark, computeFromOneUntilThisValue));
        getSecondExecuteTime(secondComputesStartControl, secondComputesEndMark);
        executor.shutdown();
    }

    private static Function<Long, Long> get_A_TestComputeMethod() {
        return a -> {
            long result = 0;
            for (long i = 1L; i <= a; i++) {
                result += i;
            }
            return result;
        };
    }

    private static void getSecondExecuteTime(CountDownLatch startGate2, CountDownLatch endGate2) {
        long startTime2 = System.nanoTime();
        startGate2.countDown();
        try {
            endGate2.await();
        } catch (InterruptedException e) {
            System.out.println("error........");
            e.printStackTrace();
        } finally {
            long endTime2 = System.nanoTime();
            System.out.println("This is use cache time: " + (endTime2 - startTime2) + " ns");
        }
    }

    private static void getFirstExecuteTime(CountDownLatch startGate1, CountDownLatch endGate1) {
        long startTime = System.nanoTime();
        startGate1.countDown();
        try {
            endGate1.await();
        } catch (InterruptedException e) {
            System.out.println("error........");
            e.printStackTrace();
        } finally {
            long endTime = System.nanoTime();
            System.out.println("This is not use cache time: " + (endTime - startTime) + " ns");
        }
    }

    private static Runnable runManyComputeAndStartTimeIsControlled(CacheResult<Long, Long> cacheResult, CountDownLatch startGate2, CountDownLatch endGate2, long computeFromOneUntilThis) {
        return () -> {
            try {
                startGate2.await();
                for (long i = 1; i <= computeFromOneUntilThis; i++) {
                    cacheResult.compute(i);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                endGate2.countDown();
            }
        };
    }
}
After refactoring
package com.Albert.cache;

import java.util.function.Function;

/**
 * @author Albert
 * @create 2018-01-10 21:33
 */
public class App {
    public static void main(String[] args) {
        Compute<Long, Long> compute = EfficientCacheCompute.createNeedComputeFunction(get_A_TestComputeMethod());
        System.out.println("--------start the first task of many compute--------");
        long firstCostTime = getComputeTime(compute);
        System.out.println("This is not use cache time: " + firstCostTime + " ns");
        System.out.println("--------start the second task of many compute--------");
        long secondCostTime = getComputeTime(compute);
        System.out.println("This is use cache time: " + secondCostTime + " ns");
    }

    private static Function<Long, Long> get_A_TestComputeMethod() {
        return a -> {
            long result = 0;
            for (long i = 1L; i <= a; i++) {
                result += i;
            }
            return result;
        };
    }

    private static long getComputeTime(Compute<Long, Long> compute) {
        long startTime = System.nanoTime();
        App.toComputeVeryMuch(compute);
        long endTime = System.nanoTime();
        return endTime - startTime;
    }

    private static void toComputeVeryMuch(Compute<Long, Long> compute) {
        long computeFromOneUntilThis = 100000L;
        for (long i = 1; i <= computeFromOneUntilThis; i++) {
            compute.compute(i);
        }
    }
}
I am fairly satisfied with this refactoring. First, a quick introduction to App.java: it is a hands-on comparison of EfficientCacheCompute, measuring the time difference with and without the cache. (Notably, EfficientCacheCompute is a thread-safe class that works under concurrency without any locking.)
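To make the lock-free claim concrete, here is a minimal, runnable sketch of the putIfAbsent-plus-FutureTask pattern that EfficientCacheCompute is built on (the MemoizerSketch and Memoizer names here are illustrative, not the original API): even when several threads request the same key at once, the compute function runs only once, because only the winner of the putIfAbsent race runs its task while the losers block on the winner's Future.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Minimal sketch of the lock-free memoizer pattern used by EfficientCacheCompute.
// Names are illustrative; this is not the original API.
public class MemoizerSketch {
    // Counts how many times the compute function actually runs.
    static final AtomicInteger computeCount = new AtomicInteger();

    static class Memoizer<K, V> {
        private final ConcurrentHashMap<K, Future<V>> cache = new ConcurrentHashMap<>();
        private final Function<K, V> computeMethod;

        Memoizer(Function<K, V> computeMethod) {
            this.computeMethod = computeMethod;
        }

        V compute(K key) throws InterruptedException, ExecutionException {
            Future<V> future = cache.get(key);
            if (future == null) {
                FutureTask<V> task = new FutureTask<>(() -> computeMethod.apply(key));
                // putIfAbsent returns null only for the one thread that won the race;
                // only that winner runs the task, so the function executes once per key.
                future = cache.putIfAbsent(key, task);
                if (future == null) {
                    future = task;
                    task.run();
                }
            }
            // Losers of the race (and all later callers) block here until done.
            return future.get();
        }
    }

    public static void main(String[] args) throws Exception {
        Memoizer<Long, Long> memoizer = new Memoizer<>(n -> {
            computeCount.incrementAndGet();
            long sum = 0;
            for (long i = 1; i <= n; i++) sum += i;
            return sum;
        });
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<Long> a = pool.submit(() -> memoizer.compute(100_000L));
        Future<Long> b = pool.submit(() -> memoizer.compute(100_000L));
        // Both callers get the same result, yet the function ran only once.
        System.out.println(a.get() + " " + b.get() + " computeCount=" + computeCount.get());
        pool.shutdown();
    }
}
```

The key design point is that the map stores a Future for the result rather than the result itself, so the "compute in progress" state is visible to other threads immediately, without holding a lock during the (possibly long) computation.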
Idea one: when refactoring, look at the code as a whole
At first I refactored along the lines of the original code, extracting methods according to its existing intent. It grew more and more complex, turning the simple App demo into layer upon layer of indirection (because the original code used CountDownLatch to control the start and end of computations on asynchronous threads).
After thinking it over, I remembered that good code should be understandable at a glance; it pays to stick to writing "dumb" code. There was no need for such heavyweight concurrency classes in the demo, since they only obscured what the framework itself does. So I dropped the asynchronous computation and ran everything on the main thread. The code is now easier to follow, and the way EfficientCacheCompute is used is clearly visible.
Idea two: optimize for what readers can actually understand, rather than following the book's rules to the letter, even though those rules were distilled from years of other people's experience.
Idea two came out of two small decisions during the refactoring.
The first:
private static Function<Long, Long> get_A_TestComputeMethod() {
    return a -> {
        long result = 0;
        for (long i = 1L; i <= a; i++) {
            result += i;
        }
        return result;
    };
}
Note the method name. I first named it getATestComputeMethod, but two adjacent capital letters are easy to misread, so I changed it to the form above.
English has many short words, such as Of, that become inconspicuous when buried inside a long name, so underscores can be used to set them apart. Using underscores this way is a contested choice, and some people will find the modified name uncomfortable. But that is exactly the point of idea two: don't follow the book's rules mechanically; think for yourself, with readability as the primary goal.
The second (this code is from the EfficientCacheCompute class below):
private class Message {
    public final Future<ResultT> putResult;
    public final FutureTask<ResultT> willPut;

    public Message(Future<ResultT> putResult, FutureTask<ResultT> willPut) {
        this.putResult = putResult;
        this.willPut = willPut;
    }
}
I originally named this class MessageOfPutToCacheResult, but that made every declaration line very long, like this:
MessageOfPutToCacheResult messageOfPutToCacheResult;
Then I realized that since it is a private inner class, it will never be used anywhere else, and the variable name at the declaration site already states its purpose. So, for the sake of easier reading, I renamed the class to Message, and the code became:
Message messageOfPutToCacheResult;
// Original: Message message_of_putToCacheResult; deliberately changed above to avoid controversy
The variable name alone tells you what it is for, which keeps the code cleaner.
Next is EfficientCacheCompute (the refactoring focused mainly on these two classes)
Before refactoring
package com.Albert.cache;

import java.util.concurrent.*;
import java.util.function.Function;

/**
 * @author Albert
 * @create 2018-01-10 19:44
 */
public class BlockHardlyCacheResult<KeyT, ResultT> implements CacheResult<KeyT, ResultT> {
    private final boolean IS_NOT_RETURN = true;
    private final ConcurrentHashMap<KeyT, Future<ResultT>> cacheResult;
    private final Function<KeyT, ResultT> computeMethod;

    private BlockHardlyCacheResult(Function<KeyT, ResultT> computeMethod) {
        this.computeMethod = computeMethod;
        this.cacheResult = new ConcurrentHashMap<>();
    }

    public static <KeyT, ResultT> BlockHardlyCacheResult<KeyT, ResultT> createNeedComputeFunction(Function<KeyT, ResultT> computeMethod) {
        return new BlockHardlyCacheResult<>(computeMethod);
    }

    @Override
    public ResultT compute(final KeyT keyT) {
        while (IS_NOT_RETURN) {
            Future<ResultT> resultFuture = cacheResult.get(keyT);
            if (isNotExitResult(resultFuture)) {
                Callable<ResultT> putKeyComputeMethod = () -> computeMethod.apply(keyT);
                FutureTask<ResultT> runWhenResultFutureNull = new FutureTask<>(putKeyComputeMethod);
                resultFuture = cacheResult.putIfAbsent(keyT, runWhenResultFutureNull);
                if (isNotExitResult(resultFuture)) {
                    resultFuture = runWhenResultFutureNull;
                    runWhenResultFutureNull.run();
                }
            }
            try {
                return resultFuture.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
    }

    private boolean isNotExitResult(Future<ResultT> resultFuture) {
        return resultFuture == null;
    }
}
After refactoring
package com.Albert.cache;

import java.util.concurrent.*;
import java.util.function.Function;

/**
 * @author Albert
 * @create 2018-01-10 19:44
 */
public class EfficientCacheCompute<KeyT, ResultT> implements Compute<KeyT, ResultT> {
    private final boolean IS_NOT_RETURN = true;
    private final ConcurrentHashMap<KeyT, Future<ResultT>> cacheResult;
    private final Function<KeyT, ResultT> computeMethod;

    private EfficientCacheCompute(Function<KeyT, ResultT> computeMethod) {
        this.computeMethod = computeMethod;
        this.cacheResult = new ConcurrentHashMap<>();
    }

    public static <KeyT, ResultT> EfficientCacheCompute<KeyT, ResultT> createNeedComputeFunction(Function<KeyT, ResultT> computeMethod) {
        return new EfficientCacheCompute<>(computeMethod);
    }

    @Override
    public ResultT compute(final KeyT keyT) {
        while (IS_NOT_RETURN) {
            Future<ResultT> resultFuture = cacheResult.get(keyT);
            if (isNotExitResult(resultFuture)) {
                resultFuture = tryPutFutureToCacheAndRunCompute(keyT);
            }
            return getResultT(resultFuture);
        }
    }

    private boolean isNotExitResult(Future<ResultT> resultFuture) {
        return resultFuture == null;
    }

    private Future<ResultT> tryPutFutureToCacheAndRunCompute(KeyT keyT) {
        Message message_of_putToCacheResult = tryPutFutureToCache(keyT);
        return getFutureFromCacheAndRunComputeIfNecessary(message_of_putToCacheResult);
    }

    private Message tryPutFutureToCache(KeyT keyT) {
        Callable<ResultT> computeMethod_Of_PutTheKey = () -> computeMethod.apply(keyT);
        FutureTask<ResultT> willPut = new FutureTask<>(computeMethod_Of_PutTheKey);
        Future<ResultT> putResult = cacheResult.putIfAbsent(keyT, willPut);
        return new Message(putResult, willPut);
    }

    private Future<ResultT> getFutureFromCacheAndRunComputeIfNecessary(Message message) {
        if (isPutSuccess(message)) {
            message.willPut.run();
            return message.willPut;
        } else {
            return message.putResult;
        }
    }

    private boolean isPutSuccess(Message message) {
        return message.putResult == null;
    }

    private ResultT getResultT(Future<ResultT> resultTFuture) {
        ResultT resultT = null;
        try {
            resultT = resultTFuture.get();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        return resultT;
    }

    private class Message {
        public final Future<ResultT> putResult;
        public final FutureTask<ResultT> willPut;

        public Message(Future<ResultT> putResult, FutureTask<ResultT> willPut) {
            this.putResult = putResult;
            this.willPut = willPut;
        }
    }
}
Refactoring EfficientCacheCompute exposed a few problems.
The first concerns how methods are ordered top to bottom. I remember Clean Code making the point that code should read like a well-written article: clear paragraphs, a sense of hierarchy. Method ordering is like an article's headings and paragraphs. For methods with one level of nesting, we can order them like this:
public void one() {
    firstInOne();
    secondInOne();
    thirdInOne();
}

public void firstInOne() {}
public void secondInOne() {}
public void thirdInOne() {}
But how should methods with multiple levels of nesting be ordered? I came up with the following two schemes.
Option one:
public void one() {
    firstInOne();
    secondInOne();
}

public void firstInOne() {
    AInFirst();
    BInFirst();
}

public void AInFirst() {}
public void BInFirst() {}
public void secondInOne() {}
Option two:
public void one() {
    firstInOne();
    secondInOne();
}

public void firstInOne() {
    AInFirst();
    BInFirst();
}

public void secondInOne() {}
public void AInFirst() {}
public void BInFirst() {}
The difference: option one favors linear, depth-first reading; when you encounter a call, the corresponding method appears immediately below, in order. Option two emphasizes grasping each level as a whole. After a day or so of mulling it over, I now think option one is better: it is more regular, readers know exactly where a called method should be found, and the result reads with a clearer hierarchy.
Summary
This code review greatly improved the code's readability. Some people, seeing so many methods with such long names and no comments at all, may be put off and not want to read further. But the beauty of Clean Code is that if you read the code attentively, understanding it feels like reading a graceful essay: the comments are implicit in the code itself, and the whole thing is plain and easy to follow. What a wonderful feeling.