Load-testing and profiling a Go server with go-wrk and go-torch

A common scenario: you write an endpoint, ship it, and discover its performance is poor. It burns resources and responds slowly. Maybe some RPC call takes too long, maybe work that could run concurrently runs serially. Tracking the cause down by hand is tedious, so it helps to have visual tooling.

We can do this because Go ships with a built-in sampling profiler (pprof) and lets us use it while the program is running.

Preparation

Start with the simplest possible web server, a hello-world:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func myHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello World!\n")
}

func main() {
    http.HandleFunc("/hello", myHandler) // register the route
    log.Fatal(http.ListenAndServe(":9090", nil))
}

Install go-wrk, go-torch, and FlameGraph. The first two are Go packages; clone FlameGraph and put flamegraph.pl on your PATH, since go-torch shells out to it:
https://github.com/adjust/go-wrk
https://github.com/uber/go-torch
https://github.com/brendangregg/FlameGraph

Trying it out

First, hit the server with go-wrk:

wujingcideMacBook-Pro:go wujingci$ go-wrk -c=400 -t=8 -n=10000 http://localhost:9090/hello
==========================BENCHMARK==========================
URL:                http://localhost:9090/hello

Used Connections:       400
Used Threads:           8
Total number of calls:      10000

===========================TIMINGS===========================
Total time passed:      1.77s
Avg time per request:       36.49ms
Requests per second:        5634.38
Median time per request:    17.46ms
99th percentile time:       1020.30ms
Slowest time for request:   1082.00ms

=============================DATA=============================
Total response body sizes:      129376
Avg response body per request:      12.94 Byte
Transfer rate per second:       72895.32 Byte/s (0.07 MByte/s)
==========================RESPONSES==========================
20X Responses:      9952    (99.52%)
30X Responses:      0   (0.00%)
40X Responses:      0   (0.00%)
50X Responses:      0   (0.00%)
Errors:         48  (0.48%)

It works. To run go-torch we need to enable pprof, which takes a single blank import in main.go:

_ "net/http/pprof"

Now run go-torch and, while it is sampling (30 seconds by default), drive load with go-wrk:

go-torch -u http://127.0.0.1:9090/ -f cpu.svg

Open cpu.svg in a browser to see where the time goes. This example is trivial, mostly calls inside the net package, and the flame graph is already this large; real services are far more complex, so you click into the graph and analyze it block by block.
[Figure 1: flame graph of the hello-world server under load]

go-torch can also render memory and block profiles; run go-torch -h and explore for yourself.
