Quoting the definitions of mutexes and read-write locks from golang互斥锁和读写锁性能分析, which explains them clearly:
1.A mutex has two operations: acquire the lock and release the lock.
2.Once a goroutine has acquired the mutex, no other goroutine can acquire it; they can only wait for that goroutine to release it.
3.A mutex suits workloads where reads and writes occur in roughly equal numbers.
4.Both reads and writes can be placed under a mutex.
1.A read-write lock has four operations: read lock, read unlock, write lock, write unlock.
2.At most one goroutine can hold the write lock, while many can hold the read lock at once (the maximum reader count is reportedly related to the number of CPUs).
3.The write lock has priority over the read lock, to prevent a writer from blocking indefinitely when readers keep arriving.
4.While a goroutine holds the write lock, no other goroutine can acquire either the read lock or the write lock until the write lock is released.
5.While a goroutine holds the read lock, other goroutines can also acquire the read lock, but none can acquire the write lock. It follows that if a goroutine wants the write lock while other goroutines keep acquiring and releasing the read lock, the writer would stay blocked forever; giving the write lock priority over the read lock avoids this starvation.
6.A read-write lock suits read-heavy, write-light workloads.
7.Consider a read-heavy case: three goroutines G1, G2, G3 all want to read data A. With a mutex the reads are serialized: G1 locks, reads A, unlocks; then G2 locks, reads A, unlocks; then G3 locks, reads A, unlocks... Each goroutine must queue behind the previous one, so efficiency is clearly poor. With a read-write lock, G1, G2, and G3 can read A simultaneously, which greatly improves throughput.
8.Write operations can only go under the write lock. Read operations can go under either lock, but placing them under the write lock clearly reduces concurrency.
Running the original code provided by the author on my machine, I got the opposite result:
package main

import (
	"fmt"
	"sync"
	"time"
)

const MAXNUM = 1000  // size of each map
const LOCKNUM = 1e7  // number of lock acquisitions

var lock sync.Mutex        // mutex
var rwlock sync.RWMutex    // read-write lock
var lock_map map[int]int   // map read under the mutex
var rwlock_map map[int]int // map read under the read lock

func main() {
	var lock_w = &sync.WaitGroup{}
	var rwlock_w = &sync.WaitGroup{}
	lock_w.Add(LOCKNUM)
	rwlock_w.Add(LOCKNUM)
	lock_ch := make(chan int, 10000) // unused in this version
	rwlock_ch := make(chan int, 10000)
	lock_map = make(map[int]int, MAXNUM)
	rwlock_map = make(map[int]int, MAXNUM)
	init_map(lock_map, rwlock_map)
	time1 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test1(lock_ch, i, lock_map, lock_w)
	}
	lock_w.Wait()
	time2 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test2(rwlock_ch, i, rwlock_map, rwlock_w)
	}
	rwlock_w.Wait()
	time3 := time.Now()
	fmt.Println("lock time:", time2.Sub(time1).String())
	fmt.Println("rwlock time:", time3.Sub(time2).String())
}

func init_map(a map[int]int, b map[int]int) { // initialize both maps
	for i := 0; i < MAXNUM; i++ {
		a[i] = i
		b[i] = i
	}
}

func test1(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int { // read under the mutex
	lock.Lock()
	defer lock.Unlock()
	w.Done()
	return mymap[i%MAXNUM]
}

func test2(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int { // read under the read lock
	rwlock.RLock()
	defer rwlock.RUnlock()
	w.Done()
	return mymap[i%MAXNUM]
}
Out:
lock time: 3.6869219s
rwlock time: 2.7925313s
Still, even with 1e7 concurrent acquisitions, the two results remain in the same order of magnitude.
Next, add channel passing (a more general scenario) and add some work inside each task (which increases the mutex's serialization burden):
package main

import (
	"fmt"
	"sync"
	"time"
)

const MAXNUM = 1000  // size of each map
const LOCKNUM = 1e5  // number of lock acquisitions

var lock sync.Mutex        // mutex
var rwlock sync.RWMutex    // read-write lock
var lock_map map[int]int   // map read under the mutex
var rwlock_map map[int]int // map read under the read lock

func main() {
	var lock_w sync.WaitGroup
	var rwlock_w sync.WaitGroup
	lock_w.Add(LOCKNUM)
	rwlock_w.Add(LOCKNUM)
	lock_ch := make(chan int, 1000) // buffer size matters little: values are drained as soon as they are sent
	rwlock_ch := make(chan int, 1000)
	lock_map = make(map[int]int, MAXNUM)
	rwlock_map = make(map[int]int, MAXNUM)
	count1 := 0
	count2 := 0
	init_map(lock_map, rwlock_map)
	time1 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test1(lock_ch, i, lock_map, &lock_w)
	}
	go func() {
		lock_w.Wait()
		close(lock_ch)
	}()
	for i := range lock_ch {
		count1 += i
	}
	fmt.Printf("CHAN ID SUM %d\n", count1)
	time2 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test2(rwlock_ch, i, rwlock_map, &rwlock_w)
	}
	go func() {
		rwlock_w.Wait()
		close(rwlock_ch)
	}()
	for i := range rwlock_ch {
		count2 += i
	}
	fmt.Printf("CHAN ID SUM %d\n", count2)
	time3 := time.Now()
	fmt.Println("lock time:", time2.Sub(time1).String())
	fmt.Println("rwlock time:", time3.Sub(time2).String())
}

func init_map(a map[int]int, b map[int]int) { // initialize both maps
	for i := 0; i < MAXNUM; i++ {
		a[i] = i
		b[i] = i
	}
}

func test1(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int { // read under the mutex
	lock.Lock()
	defer lock.Unlock()
	ch <- i
	time.Sleep(time.Nanosecond) // simulated work while holding the lock
	w.Done()
	return mymap[i%MAXNUM]
}

func test2(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int { // read under the read lock
	rwlock.RLock()
	defer rwlock.RUnlock()
	ch <- i
	time.Sleep(time.Nanosecond) // simulated work while holding the lock
	w.Done()
	return mymap[i%MAXNUM]
}
Out:
CHAN ID SUM 4999950000
CHAN ID SUM 4999950000
lock time: 2m50.2909581s
rwlock time: 124.6928ms
Now the two locks are clearly orders of magnitude apart.
Replacing the mutex with the RWMutex's write lock gives the same result as the earlier mutex test.
When reads dominate, a read-write lock lets them run in parallel instead of serializing them the way a mutex does, so it performs far better.