Timers are for when you want to do something once in the future; tickers are for when you want to do something repeatedly at regular intervals. Here’s an example of a ticker that ticks periodically until we stop it.
[maxwell@oracle-db-19c Day04]$ vim tickers.go
[maxwell@oracle-db-19c Day04]$ cat tickers.go
package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	done := make(chan bool)
	go func() {
		for {
			select {
			case <-done:
				return
			case t := <-ticker.C:
				fmt.Println("Tick at", t)
			}
		}
	}()
	time.Sleep(1600 * time.Millisecond)
	ticker.Stop()
	done <- true
	fmt.Println("Ticker stopped")
}
[maxwell@oracle-db-19c Day04]$ go run tickers.go
Tick at 2023-02-20 18:46:02.784334988 +0800 HKT m=+0.500946000
Tick at 2023-02-20 18:46:03.284654541 +0800 HKT m=+1.001265526
Tick at 2023-02-20 18:46:03.783880632 +0800 HKT m=+1.500491648
Ticker stopped
[maxwell@oracle-db-19c Day04]$
Next we’ll look at how to implement a worker pool using goroutines and channels.
[maxwell@oracle-db-19c Day04]$ vim worker_pools.go
[maxwell@oracle-db-19c Day04]$ cat worker_pools.go
package main

import (
	"fmt"
	"time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		fmt.Println("worker", id, "Started job", j)
		time.Sleep(time.Second)
		fmt.Println("worker", id, "finished job", j)
		results <- j * 2
	}
}

func main() {
	const numJobs = 5
	jobs := make(chan int, numJobs)
	results := make(chan int, numJobs)
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs)
	for a := 1; a <= numJobs; a++ {
		<-results
	}
}
[maxwell@oracle-db-19c Day04]$ time go run worker_pools.go
worker 3 Started job 1
worker 1 Started job 2
worker 2 Started job 3
worker 3 finished job 1
worker 3 Started job 4
worker 2 finished job 3
worker 1 finished job 2
worker 1 Started job 5
worker 1 finished job 5
worker 3 finished job 4
real 0m2.168s
user 0m0.126s
sys 0m0.123s
[maxwell@oracle-db-19c Day04]$
To wait for multiple goroutines to finish, we can use a wait group. The worker function below is what we’ll run in every goroutine; it sleeps to simulate an expensive task. The WaitGroup in main is used to wait for all the goroutines launched there to finish. Note: if a WaitGroup is explicitly passed into functions, it should be done by pointer. Note also that this approach has no straightforward way to propagate errors from workers. For more advanced use cases, consider using the errgroup package.
[maxwell@oracle-db-19c Day04]$ vim waitgroups.go
[maxwell@oracle-db-19c Day04]$ cat waitgroups.go
package main

import (
	"fmt"
	"sync"
	"time"
)

func worker(id int) {
	fmt.Printf("Worker %d starting\n", id)
	time.Sleep(time.Second)
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		i := i // capture a per-iteration copy (no longer needed as of Go 1.22)
		go func() {
			defer wg.Done()
			worker(i)
		}()
	}
	wg.Wait()
}
[maxwell@oracle-db-19c Day04]$ go run waitgroups.go
Worker 5 starting
Worker 3 starting
Worker 1 starting
Worker 2 starting
Worker 4 starting
Worker 4 done
Worker 1 done
Worker 2 done
Worker 3 done
Worker 5 done
[maxwell@oracle-db-19c Day04]$
Rate limiting is an important mechanism for controlling resource utilization and maintaining quality of service. Go elegantly supports rate limiting with goroutines, channels, and tickers.
[maxwell@oracle-db-19c Day04]$ vim rate_limiting.go
[maxwell@oracle-db-19c Day04]$ cat rate_limiting.go
package main

import (
	"fmt"
	"time"
)

func main() {
	requests := make(chan int, 5)
	for i := 1; i <= 5; i++ {
		requests <- i
	}
	close(requests)
	limiter := time.Tick(200 * time.Millisecond)
	for req := range requests {
		<-limiter
		fmt.Println("request", req, time.Now())
	}

	burstyLimiter := make(chan time.Time, 3)
	for i := 0; i < 3; i++ {
		burstyLimiter <- time.Now()
	}
	go func() {
		for t := range time.Tick(200 * time.Millisecond) {
			burstyLimiter <- t
		}
	}()
	burstyRequests := make(chan int, 5)
	for i := 1; i <= 5; i++ {
		burstyRequests <- i
	}
	close(burstyRequests)
	for req := range burstyRequests {
		<-burstyLimiter
		fmt.Println("request", req, time.Now())
	}
}
[maxwell@oracle-db-19c Day04]$ go run rate_limiting.go
request 1 2023-02-20 20:02:45.313459312 +0800 HKT m=+0.200941018
request 2 2023-02-20 20:02:45.513426444 +0800 HKT m=+0.400908162
request 3 2023-02-20 20:02:45.713280795 +0800 HKT m=+0.600762451
request 4 2023-02-20 20:02:45.91274965 +0800 HKT m=+0.800231322
request 5 2023-02-20 20:02:46.113581016 +0800 HKT m=+1.001062671
request 1 2023-02-20 20:02:46.113679079 +0800 HKT m=+1.001160735
request 2 2023-02-20 20:02:46.11369761 +0800 HKT m=+1.001179264
request 3 2023-02-20 20:02:46.113701878 +0800 HKT m=+1.001183532
request 4 2023-02-20 20:02:46.315056209 +0800 HKT m=+1.202537928
request 5 2023-02-20 20:02:46.513929655 +0800 HKT m=+1.401411316
[maxwell@oracle-db-19c Day04]$
The primary mechanism for managing state in Go is communication over channels. We saw this for example with worker pools. There are a few other options for managing state though. Here we’ll look at using the sync/atomic package for atomic counters accessed by multiple goroutines.
[maxwell@oracle-db-19c Day04]$ vim atomiccounters.go
[maxwell@oracle-db-19c Day04]$ cat atomiccounters.go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var ops uint64
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := 0; c < 1000; c++ {
				atomic.AddUint64(&ops, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Println("ops:", ops)
}
[maxwell@oracle-db-19c Day04]$ go run atomiccounters.go
ops: 50000
[maxwell@oracle-db-19c Day04]$