A distributed lock mainly addresses data consistency in a distributed system. On a single machine the application runs inside one process, so it is enough to guarantee thread safety within that process, which Java already provides through volatile, ReentrantLock, synchronized, and the thread-safe classes in the java.util.concurrent package. Once the application is deployed across multiple processes or nodes, however, those in-JVM mechanisms can no longer coordinate access, and a lock shared through an external system such as Redis is needed.
Redisson is a Redis-based Java client for distributed systems. It offers a rich set of features and tools that help developers handle data sharing, concurrency control, and task scheduling. With Redisson you can easily work with Redis-backed distributed objects (such as sets, maps, and queues), implement reliable distributed locks, and manage and schedule tasks and services across a distributed environment.
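As a minimal sketch of those distributed objects (the key names and values below are made up for demonstration, not part of any real project), the common handles look like this:

import org.redisson.api.RLock;
import org.redisson.api.RMap;
import org.redisson.api.RQueue;
import org.redisson.api.RedissonClient;

public class RedissonObjectsDemo {
    public static void demo(RedissonClient redissonClient) {
        RMap<String, Integer> stock = redissonClient.getMap("demo:stock");   // backed by a Redis hash
        stock.put("ticket", 10);

        RQueue<String> tasks = redissonClient.getQueue("demo:tasks");        // backed by a Redis list
        tasks.offer("task-1");

        RLock lock = redissonClient.getLock("demo:lock");                    // distributed reentrant lock
        lock.lock();
        try {
            // critical section shared across JVMs
        } finally {
            lock.unlock();
        }
    }
}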
Features of Redisson's distributed lock (see the sketch after the list below):
● Thread safety: the lock guarantees data consistency and reliability across multiple threads and multiple processes.
● Reentrancy: the same thread can acquire the same lock multiple times, avoiding self-deadlock.
● Lock timeout: a lease time can be set on the lock so that it cannot be held indefinitely and stall the system.
● Blocking acquisition: if the lock is already held by another thread, the caller can choose to wait until it is released.
● Non-blocking acquisition: if the lock is already held by another thread, the caller does not wait and immediately receives a failure result instead.
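A quick sketch of these acquisition modes against the RLock API (the method names are the real Redisson ones; the key name and the timeouts are illustrative):

import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class LockModesDemo {
    public static void demo(RedissonClient redissonClient) throws InterruptedException {
        RLock lock = redissonClient.getLock("demo:lock");

        // Blocking acquisition: waits until the lock is free.
        lock.lock();
        lock.lock();          // reentrant: the same thread may acquire it again
        lock.unlock();
        lock.unlock();        // unlock once per acquisition

        // Non-blocking acquisition: returns immediately if the lock is held elsewhere.
        if (lock.tryLock()) {
            try { /* ... */ } finally { lock.unlock(); }
        }

        // Bounded wait plus an explicit lease (lock timeout): wait up to 3s, hold for at most 30s.
        if (lock.tryLock(3, 30, TimeUnit.SECONDS)) {
            try { /* ... */ } finally { lock.unlock(); }
        }
    }
}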
Official Redisson documentation on distributed locks and synchronizers: https://github.com/redisson/redisson/wiki/8.-distributed-locks-and-synchronizers
Redisson supports single-node, master/slave, sentinel, and cluster deployments; this article uses single-node mode as the example (sketches of the other modes follow the configuration class below).
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.12.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <version>2.1.21.RELEASE</version>
</dependency>
spring:
  redis:
    host: 192.168.57.129
    port: 6379
@Configuration
public class RedissonConfig {

    @Value("${spring.redis.host}")
    private String host;

    @Value("${spring.redis.port}")
    private String port;

    /**
     * All Redisson usage goes through this RedissonClient bean.
     */
    @Bean(destroyMethod = "shutdown")
    public RedissonClient redissonClient() {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://" + host + ":" + port);
        return Redisson.create(config);
    }
}
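The bean above uses single-server mode. For reference, a hedged sketch of the other modes mentioned earlier (the addresses and the master name are placeholders; see the Redisson documentation linked above for the full option set). Each mode gets its own Config instance, since a Config accepts only one mode:

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonModeConfigs {

    public static RedissonClient masterSlave() {
        Config config = new Config();
        config.useMasterSlaveServers()
              .setMasterAddress("redis://192.168.57.129:6379")      // placeholder master
              .addSlaveAddress("redis://192.168.57.130:6379");      // placeholder replica
        return Redisson.create(config);
    }

    public static RedissonClient sentinel() {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("mymaster")                            // placeholder master name
              .addSentinelAddress("redis://192.168.57.131:26379");  // placeholder sentinel
        return Redisson.create(config);
    }

    public static RedissonClient cluster() {
        Config config = new Config();
        config.useClusterServers()
              .addNodeAddress("redis://192.168.57.132:7000");       // placeholder cluster node
        return Redisson.create(config);
    }
}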
@Autowired
private RedissonClient redissonClient;

// key of the Redisson distributed lock
private static final String LOCK_TEST_KEY = "redisson_lock";

// number of tickets left to sell
private static int TICKET_NUMBER = 10;
/**
 * Distributed lock test case.
 * Simulates 11 users competing for the remaining train tickets.
 */
@Test
public void lockTest() {
    // Use a loop plus async tasks to imitate concurrent requests
    for (int i = 0; i < 11; i++) {
        CompletableFuture.runAsync(() -> {
            if (TICKET_NUMBER > 0) {
                // Use a fair lock here (first come, first served) to make the test easier to follow
                RLock fairLock = redissonClient.getFairLock(LOCK_TEST_KEY);
                try {
                    // Try to acquire the lock.
                    // waitTime:  maximum time to wait for the lock; beyond that, acquisition fails.
                    // leaseTime: how long the lock may be held before it expires automatically
                    //            (set it longer than the business logic so the work finishes while the lock is valid).
                    boolean locked = fairLock.tryLock(3, 30, TimeUnit.SECONDS);
                    if (locked) {
                        log.info("Thread {} acquired the lock", Thread.currentThread().getName());
                        // re-check the shared state while holding the lock
                        if (TICKET_NUMBER > 0) {
                            log.info("Remaining tickets: {}", --TICKET_NUMBER);
                        }
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    // Always release the lock, but only if this thread actually holds it
                    if (fairLock.isHeldByCurrentThread()) {
                        log.info("Thread {} is about to release the lock", Thread.currentThread().getName());
                        fairLock.unlock();
                    }
                }
            } else {
                log.info("Tickets sold out!");
            }
        });
        try {
            // Pause the main thread for 3 seconds between submissions so the test does not exit immediately
            Thread.sleep(3000L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Log output:
2023-11-18 15:27:00.834 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:00.835 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 9
2023-11-18 15:27:00.835 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:03.749 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:03.749 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 8
2023-11-18 15:27:03.749 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:06.759 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:06.759 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 7
2023-11-18 15:27:06.759 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:09.749 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:09.750 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 6
2023-11-18 15:27:09.750 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:12.759 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:12.759 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 5
2023-11-18 15:27:12.759 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:15.752 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:15.752 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 4
2023-11-18 15:27:15.752 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:18.762 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:18.762 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 3
2023-11-18 15:27:18.762 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:21.754 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:21.754 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 2
2023-11-18 15:27:21.754 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:24.763 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:24.763 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 1
2023-11-18 15:27:24.763 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:27.757 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 acquired the lock
2023-11-18 15:27:27.757 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Remaining tickets: 0
2023-11-18 15:27:27.757 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Thread ForkJoinPool.commonPool-worker-6 is about to release the lock
2023-11-18 15:27:30.753 INFO 5800 --- [onPool-worker-6] c.a.g.p.GulimallProductApplicationTests : Tickets sold out!
Another example: a controller endpoint that uses a plain (non-fair) blocking lock.
@ResponseBody
@GetMapping("/hello")
public String hello() {
    // 1. Get a lock handle; any code that uses the same lock name gets the same lock
    String lockKey = "my-lock";
    RLock lock = redissonClient.getLock(lockKey);

    // 2. Acquire the lock with blocking wait; by default the lease is 30s and is renewed automatically.
    lock.lock();
    // Alternative: auto-unlock after 10 seconds. The auto-unlock time must be longer than the business logic,
    // because with an explicit lease time the lock is NOT renewed once the time is up.
    // lock.lock(10, TimeUnit.SECONDS);
    // Best practice: lock.lock(30, TimeUnit.SECONDS) plus manual unlock, which skips the whole renewal machinery.
    try {
        log.info("Lock acquired, executing business logic, thread ID = {}", Thread.currentThread().getId());
        Thread.sleep(10000);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // 3. Unlock. Even if this unlock never runs (say the instance crashes), Redisson does not deadlock:
        //    renewal stops with the instance and the key expires at the end of its lease.
        log.info("Releasing the lock, thread ID = {}", Thread.currentThread().getId());
        lock.unlock();
    }
    return "hello";
}
If you issue two /hello requests from the browser, the second one only proceeds after the first has finished its business logic; until then it sits in the blocking wait.
Console log output:
2023-11-18 16:01:00.784 INFO 3916 --- [io-10000-exec-4] c.a.g.product.web.IndexController : Lock acquired, executing business logic, thread ID = 116
2023-11-18 16:01:10.785 INFO 3916 --- [io-10000-exec-4] c.a.g.product.web.IndexController : Releasing the lock, thread ID = 116
2023-11-18 16:01:10.794 INFO 3916 --- [io-10000-exec-2] c.a.g.product.web.IndexController : Lock acquired, executing business logic, thread ID = 114
2023-11-18 16:01:20.794 INFO 3916 --- [io-10000-exec-2] c.a.g.product.web.IndexController : Releasing the lock, thread ID = 114
Redisson's lock solves the two problems you run into when implementing a distributed lock with raw Redis commands yourself: the expiry has to be set atomically together with the acquisition (and guessed large enough for the business logic), and the release has to be atomic so a client never deletes a lock that has since passed to another holder. For contrast, a hand-rolled version is sketched below.
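A minimal sketch of such a hand-rolled lock, using the StringRedisTemplate that ships with the spring-boot-starter-data-redis dependency above (class name, key, and expiry are illustrative): it must pick a fixed expiry up front and must release with its own compare-and-delete Lua script, which is exactly the work Redisson's scripts and watchdog take over.

import java.time.Duration;
import java.util.Collections;
import java.util.UUID;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class NaiveRedisLock {

    private static final String UNLOCK_LUA =
            "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";

    private final StringRedisTemplate redisTemplate;

    public NaiveRedisLock(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Acquire: SET key value NX PX <expiry>; the expiry must be guessed large enough for the business logic.
    public String tryLock(String key, Duration expiry) {
        String token = UUID.randomUUID().toString();
        Boolean ok = redisTemplate.opsForValue().setIfAbsent(key, token, expiry);
        return Boolean.TRUE.equals(ok) ? token : null;
    }

    // Release: compare-and-delete in one Lua script so we never delete a lock someone else now holds.
    public boolean unlock(String key, String token) {
        Long deleted = redisTemplate.execute(
                new DefaultRedisScript<>(UNLOCK_LUA, Long.class),
                Collections.singletonList(key), token);
        return deleted != null && deleted == 1L;
    }
}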
The framework's implementation relies on Lua scripts and Netty, converts between synchronous and asynchronous execution through its various Future and FutureListener types, and cleverly uses Redis pub/sub during locking and unlocking: waiters subscribe to an unlock channel instead of spinning.
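Those Netty-backed futures also surface in the public API. A minimal sketch (the lock name is arbitrary) of acquiring the same lock asynchronously through RFuture, in the same onComplete style used by the source code analyzed below:

import org.redisson.api.RFuture;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class AsyncLockDemo {
    public static void demo(RedissonClient redissonClient) {
        RLock lock = redissonClient.getLock("demo:async-lock");

        // Capture the caller's thread id: the callback runs on a Netty thread,
        // so the lock owner must be identified explicitly when unlocking.
        long threadId = Thread.currentThread().getId();

        RFuture<Void> acquired = lock.lockAsync(threadId);
        acquired.onComplete((ignored, e) -> {
            if (e != null) {
                return; // acquisition failed, e.g. connection problems
            }
            try {
                // business logic while the lock is held
            } finally {
                lock.unlockAsync(threadId);
            }
        });
    }
}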
The no-argument lock() method:
@Override
public void lock() {
    try {
        lock(-1, null, false);
    } catch (InterruptedException e) {
        throw new IllegalStateException();
    }
}
The lock(leaseTime, unit) overload:
@Override
public void lock(long leaseTime, TimeUnit unit) {
    try {
        lock(leaseTime, unit, false);
    } catch (InterruptedException e) {
        throw new IllegalStateException();
    }
}
private void lock(long leaseTime, TimeUnit unit, boolean interruptibly) throws InterruptedException {
    long threadId = Thread.currentThread().getId();
    // try to acquire the lock; ttl == null means success
    Long ttl = tryAcquire(leaseTime, unit, threadId);
    // lock acquired
    if (ttl == null) {
        return;
    }

    // lock is held by someone else: subscribe to this lock's unlock channel instead of spinning
    RFuture<RedissonLockEntry> future = subscribe(threadId);
    if (interruptibly) {
        commandExecutor.syncSubscriptionInterrupted(future);
    } else {
        commandExecutor.syncSubscription(future);
    }

    try {
        while (true) {
            ttl = tryAcquire(leaseTime, unit, threadId);
            // lock acquired
            if (ttl == null) {
                break;
            }

            // waiting for message
            if (ttl >= 0) {
                try {
                    future.getNow().getLatch().tryAcquire(ttl, TimeUnit.MILLISECONDS);
                } catch (InterruptedException e) {
                    if (interruptibly) {
                        throw e;
                    }
                    future.getNow().getLatch().tryAcquire(ttl, TimeUnit.MILLISECONDS);
                }
            } else {
                if (interruptibly) {
                    future.getNow().getLatch().acquire();
                } else {
                    future.getNow().getLatch().acquireUninterruptibly();
                }
            }
        }
    } finally {
        unsubscribe(future, threadId);
    }
}
// try to acquire the lock (synchronous wrapper over the async call)
private Long tryAcquire(long leaseTime, TimeUnit unit, long threadId) {
    return get(tryAcquireAsync(leaseTime, unit, threadId));
}
Trying to acquire the lock asynchronously:
private <T> RFuture<Long> tryAcquireAsync(long leaseTime, TimeUnit unit, long threadId) {
    if (leaseTime != -1) {
        // A lease time was supplied: send the lock script to Redis with that expiry; no renewal afterwards.
        return tryLockInnerAsync(leaseTime, unit, threadId, RedisCommands.EVAL_LONG);
    }
    // No lease time supplied: use the 30 * 1000 ms lockWatchdogTimeout as the initial expiry.
    RFuture<Long> ttlRemainingFuture = tryLockInnerAsync(commandExecutor.getConnectionManager().getCfg().getLockWatchdogTimeout(), TimeUnit.MILLISECONDS, threadId, RedisCommands.EVAL_LONG);
    // Once the acquisition attempt completes...
    ttlRemainingFuture.onComplete((ttlRemaining, e) -> {
        // bail out on error; otherwise continue below
        if (e != null) {
            return;
        }

        // lock acquired
        if (ttlRemaining == null) {
            // ...schedule the watchdog renewal task
            scheduleExpirationRenewal(threadId);
        }
    });
    return ttlRemainingFuture;
}
// Default watchdog timeout: the lock is (re)set to 30s on every renewal
private long lockWatchdogTimeout = 30 * 1000;
<T> RFuture<T> tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand<T> command) {
    internalLockLeaseTime = unit.toMillis(leaseTime);

    // KEYS[1] = lock key (getName()), ARGV[1] = lease time in ms, ARGV[2] = lock owner field "UUID:threadId"
    return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, command,
              // key does not exist: take the lock (hash field = 1) and set its expiry
              "if (redis.call('exists', KEYS[1]) == 0) then " +
                  "redis.call('hset', KEYS[1], ARGV[2], 1); " +
                  "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                  "return nil; " +
              "end; " +
              // same owner already holds the lock: reentrant acquisition, bump the counter and refresh the expiry
              "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
                  "redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
                  "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                  "return nil; " +
              "end; " +
              // held by someone else: return the remaining TTL so the caller knows how long to wait
              "return redis.call('pttl', KEYS[1]);",
            Collections.<Object>singletonList(getName()), internalLockLeaseTime, getLockName(threadId));
}
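The hincrby branch above is what makes the lock reentrant: the hash field named after the client and thread simply counts the nested acquisitions. A small sketch of what that looks like from the caller's side (the key name is arbitrary):

import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class ReentrancyDemo {
    public static void demo(RedissonClient redissonClient) {
        RLock lock = redissonClient.getLock("demo:reentrant-lock");

        lock.lock();               // first acquisition: HSET creates the hash field with value 1
        lock.lock();               // reentrant acquisition: HINCRBY bumps the same field to 2
        try {
            System.out.println(lock.getHoldCount()); // 2 for the current thread
        } finally {
            lock.unlock();         // decrements the counter back to 1
            lock.unlock();         // counter reaches 0: the key is deleted and waiters are notified
        }
    }
}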
Scheduling the expiration renewal:
private void scheduleExpirationRenewal(long threadId) {
    ExpirationEntry entry = new ExpirationEntry();
    ExpirationEntry oldEntry = EXPIRATION_RENEWAL_MAP.putIfAbsent(getEntryName(), entry);
    if (oldEntry != null) {
        oldEntry.addThreadId(threadId);
    } else {
        entry.addThreadId(threadId);
        renewExpiration();
    }
}
A timer task is started that sends a Lua script to Redis: once a third of the watchdog timeout has elapsed (10s by default), the lock's expiry is reset to the full 30s.
private void renewExpiration() {
    ExpirationEntry ee = EXPIRATION_RENEWAL_MAP.get(getEntryName());
    if (ee == null) {
        return;
    }

    Timeout task = commandExecutor.getConnectionManager().newTimeout(new TimerTask() {
        @Override
        public void run(Timeout timeout) throws Exception {
            ExpirationEntry ent = EXPIRATION_RENEWAL_MAP.get(getEntryName());
            if (ent == null) {
                return;
            }
            Long threadId = ent.getFirstThreadId();
            if (threadId == null) {
                return;
            }

            RFuture<Boolean> future = renewExpirationAsync(threadId);
            future.onComplete((res, e) -> {
                if (e != null) {
                    log.error("Can't update lock " + getName() + " expiration", e);
                    return;
                }

                if (res) {
                    // reschedule itself
                    renewExpiration();
                }
            });
        }
        // As soon as the lock is acquired, this timer task reschedules itself every
        // internalLockLeaseTime / 3 = 10s and resets the expiry back to the watchdog default of 30s.
    }, internalLockLeaseTime / 3, TimeUnit.MILLISECONDS);

    ee.setTimeout(task);
}
The Lua script that resets the expiration:
protected RFuture<Boolean> renewExpirationAsync(long threadId) {
    return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN,
            "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
                "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                "return 1; " +
            "end; " +
            "return 0;",
        Collections.<Object>singletonList(getName()),
        internalLockLeaseTime, getLockName(threadId));
}
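Putting the watchdog together: as long as lock() is called without an explicit leaseTime, the timer above keeps pushing the TTL back to 30s roughly every 10s, so slow business logic does not lose the lock. A hedged sketch of that behavior (the key names and the sleep duration are illustrative):

import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class WatchdogDemo {
    public static void demo(RedissonClient redissonClient) throws InterruptedException {
        RLock lock = redissonClient.getLock("demo:watchdog-lock");

        lock.lock();                      // no leaseTime -> watchdog renewal is scheduled
        try {
            TimeUnit.SECONDS.sleep(40);   // longer than the 30s default, yet the lock survives
            System.out.println(lock.isHeldByCurrentThread()); // still true thanks to renewal
        } finally {
            lock.unlock();                // unlocking also cancels the renewal task
        }

        RLock fixedLease = redissonClient.getLock("demo:fixed-lease-lock");
        fixedLease.lock(10, TimeUnit.SECONDS); // explicit leaseTime -> no watchdog, simply expires after 10s
    }
}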