pt-query-digest is a tool for analyzing MySQL slow queries. It can analyze the binlog, the general log, and the slow log, and it can also analyze SHOW PROCESSLIST output or MySQL protocol data captured with tcpdump. The results can be written to a file. The analysis first parameterizes the query conditions (replacing literal values), then groups the parameterized queries and reports statistics for each group, such as execution time, execution count, and percentage of total time. These results help you locate problem queries and optimize them.
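As an illustration of the parameterization ("fingerprinting") step, queries that differ only in literal values collapse into a single query class. The table and values below are made up purely for the example:

-- two raw statements from the slow log (hypothetical table and values)
SELECT * FROM orders WHERE user_id = 10 AND status = 'paid';
SELECT * FROM orders WHERE user_id = 42 AND status = 'open';

-- both map to one fingerprint and are reported as a single item:
select * from orders where user_id = ? and status = ?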
Installation: see the Percona Toolkit tool set introduction.
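As a hedged sketch (package names and repository setup vary by distribution and version), pt-query-digest ships as part of Percona Toolkit and is typically installed like this:

# Debian/Ubuntu, assuming the Percona apt repository or the distro package is available
$ sudo apt-get install percona-toolkit

# RHEL/CentOS, assuming the Percona yum repository or EPEL is available
$ sudo yum install percona-toolkit

# verify the installation
$ pt-query-digest --version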
$ pt-query-digest --help
Usage: pt-query-digest [OPTIONS] [FILES] [DSN]
--host                  # MySQL host address
--port                  # MySQL port
--socket                # MySQL socket file
--user                  # MySQL user
--password              # MySQL password
--type                  # type of input to analyze; defaults to slowlog, other values include binlog, general log, etc.
--charset               # character set
--filter                # filter the input events with the given expression before analysis
--limit                 # limit the output by count or percentage; the default is 20, i.e. the 20 slowest query classes; with 50%, classes are sorted by total response time descending and output stops once the running total reaches 50%
--review                # save the analysis results to a table; only the parameterized query is stored, one row per query class; on later runs with --review, a class that already exists in the table is not inserted again
--history               # save detailed analysis results to a table; on later runs with --history, the same query class is recorded again if its time range differs from what is already in the history table, so you can track how a query class changes over time by querying on its checksum
--since                 # start time of the analysis; either an absolute time in "yyyy-mm-dd [hh:mm:ss]" format, or a relative value with a unit: s (seconds), h (hours), m (minutes), d (days); e.g. 12h means "starting 12 hours ago"
--until                 # end time; combined with --since it restricts the analysis to a time window
--log                   # log file for the tool itself
--output                # output format: report (standard analysis report), slowlog (MySQL slow log format), json, or json-anon; report is the usual choice because it is easy to read
--create-review-table   # when writing results to a table with --review, create the table automatically if it does not exist
--create-history-table  # when writing results to a table with --history, create the table automatically if it does not exist
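The options can be combined. As a hedged sketch (the file name slow.log is just an example), the following reports the ten most expensive query classes from the last 12 hours:

$ pt-query-digest --since=12h --limit=10 --output=report slow.log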
a) Analyze a slow query log with pt-query-digest:
$ pt-query-digest --report slow.log

b) Report slow queries from the last half hour:
$ pt-query-digest --report --since 1800s slow.log

c) Report slow queries within a time range:
$ pt-query-digest --report --since '2016-02-10 21:48:59' --until '2016-02-16 02:33:50' slow.log

d) Report only SELECT slow queries:
$ pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log

e) Report slow queries for a specific user:
$ pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log

f) Report all slow queries that do a full table scan or a full join:
$ pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log

g) Save the queries to the query_review table:
$ pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log

h) Save the queries to the query_history table:
$ pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log

i) Capture MySQL protocol traffic with tcpdump, then analyze it:
$ tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
$ pt-query-digest --type tcpdump mysql.tcp.txt > slow_report.log

j) Analyze a binlog:
$ mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
$ pt-query-digest --type=binlog mysql-bin000093.sql > slow_report.log

k) Analyze a general log:
$ pt-query-digest --type=genlog localhost.log > slow_report.log
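Beyond the canned examples above, --filter accepts any Perl expression evaluated against each event's attributes. As a hedged sketch (the threshold is arbitrary, and Query_time comes from the slow log entries), this reports only queries that ran longer than five seconds:

$ pt-query-digest --filter '$event->{Query_time} && $event->{Query_time} > 5' slow.log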
Below is an example of the output:
$ pt-query-digest slow.log
# 250ms user time, 20ms system time, 25.90M rss, 221.92M vsz
# Current date: Fri Feb 10 15:02:48 2017
# Hostname: mysql01
# Files: /data/mysql/3306/log/slowlog/slow.log
# Overall: 83 total, 36 unique, 0 QPS, 0x concurrency ____________________
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time           589s      2s    166s      7s     15s     18s      3s
# Lock time           26ms       0     2ms   308us     1ms   364us   176us
# Rows sent         12.73M       0 729.17k 157.08k 562.03k 233.92k  964.41
# Rows examine     234.89M       0 197.39M   2.83M   1.26M  20.85M 535.27k
# Query size        16.85k       6   1.54k  207.84  918.49  272.84  136.99

# Profile
# Rank Query ID           Response time Calls R/Call  V/M   Item
# ==== ================== ============= ===== ======= ===== ==============
#    1 0x67D8A5A8051864A5 98.3906 24.4%     7 14.0558  9.83 SELECT user_wallet_log_?
#    2 0xE60A50618D3E289C 43.6118 10.8%     9  4.8458  0.06 SELECT bid_goods
#    3 0x3C32BB875D3BDF32 33.0655  8.2%     1 33.0655  0.00 SELECT user_wallet_log_? temp_detail
#    4 0x34802A1738951BF8 27.5228  6.8%    11  2.5021  0.03 SELECT user_wallet_log_?
#    5 0x9A28FE84C2D79A81 20.9853  5.2%     7  2.9979  0.05 SELECT bid_focus
#    6 0x556ED8DDE51C28CD 20.6676  5.1%     3  6.8892  5.33 SELECT kupai.user_wallet_log_?
#    7 0x261459299CC899DF 20.4575  5.1%     1 20.4575  0.00 SELECT temp_detail user_wallet_log_?
#    8 0xADA7DFED8804D36B 20.0858  5.0%     3  6.6953  5.72 SELECT kupai.bid_focus
#    9 0x84811E36BDF26443 13.8947  3.5%     2  6.9474  5.71 SELECT temp_detail user_wallet_log_?
#   10 0x9B3D46BE43C8B217  9.7770  2.4%     1  9.7770  0.00 SELECT kupai.funding_wechat_relationg
#   11 0xD1CF1661D703A026  8.4543  2.1%     2  4.2271  0.00 SELECT kupai.bid_goods
#   12 0x960F1FA20F0168C9  8.3329  2.1%     4  2.0832  0.00 SELECT user_wallet_log_?
#   13 0x599EF5AEA92BD305  8.2548  2.1%     1  8.2548  0.00 SELECT kupai.deposit_log_?
#   14 0x3325180FB3DE74D8  8.0086  2.0%     1  8.0086  0.00 SELECT kupai.bid_user_list
#   15 0x56847A1798193E3B  7.8118  1.9%     1  7.8118  0.00 SELECT kupai.kupai_wallet_log
#   16 0x71CD0AC23C069A0A  6.8504  1.7%     3  2.2835  0.02 SELECT user_wallet_log_?
#   17 0xF12704E2E24C9195  6.2984  1.6%     3  2.0995  0.00 SELECT member_mapping
#   18 0x8350DE3D839DE829  6.2365  1.5%     3  2.0788  0.00 SELECT UNION user_wallet_log_?
#   19 0x856222EC5BF15F34  4.8287  1.2%     1  4.8287  0.00 SELECT user_wallet_log_?
#   20 0xCE6B2737F1D87CAC  4.2012  1.0%     2  2.1006  0.01 SELECT UNION user_wallet_log_?
# MISC 0xMISC             24.9313  6.2%    10  2.4931   0.0 <10 ITEMS>

# Query 1: 0 QPS, 0x concurrency, ID 0x67D8A5A8051864A5 at byte 5903 _____
# This item is included in the report because it matches --limit.
# Scores: V/M = 9.83
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          9       7
# Exec time     24     98s      7s     43s     14s     42s     12s      8s
# Lock time      5     1ms   167us   199us   184us   194us    12us   176us
# Rows sent     34   4.19M 516.83k 729.17k 613.11k 717.31k  70.60k 590.13k
# Rows examine  11   4.19M 516.83k 729.17k 613.11k 717.31k  70.60k 590.13k
# Query size    14   2.01k     294     294     294     294       0     294
# String:
# Hosts        10.0.8.149
# Time         2016-12-02... (1/14%), 2016-12-03... (1/14%)... 5 more
# Users        kupaiuser
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s  ################################################################
#  10s+  ################################################
# Tables
#    SHOW TABLE STATUS LIKE 'user_wallet_log_0'\G
#    SHOW CREATE TABLE `user_wallet_log_0`\G
Overall: total number of queries in the log.
Time range: the time range covered by the queries.
unique: number of unique queries, i.e. the number of distinct query classes after the queries are parameterized.
total: total time of the queries.
min: minimum time.
max: maximum time.
avg: average time.
95%: with all values sorted in ascending order, the value at the 95th-percentile position (for 100 sorted values, roughly the 95th one); this is usually the most useful reference figure.
median: the median, i.e. with all values sorted in ascending order, the value in the middle position.
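To make the 95% and median figures concrete, here is a minimal sketch (the file exec_times.txt is hypothetical, one execution time per line) that computes the same kind of values from raw numbers with sort and awk:

$ sort -n exec_times.txt | awk '{v[NR]=$1} END{print "95%:", v[int(NR*0.95)]; print "median:", v[int((NR+1)/2)]}'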
You can use pt-query-digest to analyze the slow query log, save the analysis data into MySQL tables, and then use an application to display the results. The LAMP-based open-source projects Query-Digest-UI and Anemometer currently support this.
Load the slow log analysis into the tables:
$ pt-query-digest \
    --user=mha \
    --password=123456 \
    --review h='10.99.73.9',D=test,t=global_query_review \
    --history h='10.99.73.9',D=test,t=global_query_review_history \
    --no-report \
    --create-review-table \
    --create-history-table \
    --limit=0% slow.log
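After the run you can confirm that rows were written to both tables. This is only a hedged sketch; the host, credentials, and database name follow the command above:

$ mysql -h 10.99.73.9 -umha -p123456 -e "SELECT COUNT(*) FROM test.global_query_review; SELECT COUNT(*) FROM test.global_query_review_history;"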
View the table contents:
mysql> select * from global_query_review limit 1\G
*************************** 1. row ***************************
checksum: 19915890639818927
fingerprint: select * from `bid` where buyer is not ? order by end_time
sample: SELECT * FROM `bid` where buyer is not null ORDER BY end_time
first_seen: 2017-02-10 15:38:10
last_seen: 2017-02-10 15:38:10
reviewed_by: NULL
reviewed_on: NULL
comments: NULL
1 row in set (0.00 sec)
mysql> select * from global_query_review_history limit 1\G
*************************** 1. row ***************************
checksum: 19915890639818927
sample: SELECT * FROM `bid` where buyer is not null ORDER BY end_time
ts_min: 2017-02-10 15:38:10
ts_max: 2017-02-10 15:38:10
ts_cnt: NULL
Query_time_sum: 2.07545
Query_time_min: 2.07545
Query_time_max: 2.07545
Query_time_pct_95: 2.07545
Query_time_stddev: 0
Query_time_median: 2.07545
Lock_time_sum: 0.001107
Lock_time_min: 0.001107
Lock_time_max: 0.001107
Lock_time_pct_95: 0.001107
Lock_time_stddev: 0
Lock_time_median: 0.001107
Rows_sent_sum: 3089
Rows_sent_min: 3089
Rows_sent_max: 3089
Rows_sent_pct_95: 3089
Rows_sent_stddev: 0
Rows_sent_median: 3089
Rows_examined_sum: 6178
Rows_examined_min: 6178
Rows_examined_max: 6178
Rows_examined_pct_95: 6178
Rows_examined_stddev: 0
Rows_examined_median: 6178
Rows_affected_sum: NULL
Rows_affected_min: NULL
Rows_affected_max: NULL
Rows_affected_pct_95: NULL
Rows_affected_stddev: NULL
Rows_affected_median: NULL
Rows_read_sum: NULL
Rows_read_min: NULL
Rows_read_max: NULL
Rows_read_pct_95: NULL
Rows_read_stddev: NULL
Rows_read_median: NULL
Merge_passes_sum: NULL
Merge_passes_min: NULL
Merge_passes_max: NULL
Merge_passes_pct_95: NULL
Merge_passes_stddev: NULL
Merge_passes_median: NULL
InnoDB_IO_r_ops_min: NULL
InnoDB_IO_r_ops_max: NULL
InnoDB_IO_r_ops_pct_95: NULL
InnoDB_IO_r_ops_stddev: NULL
InnoDB_IO_r_ops_median: NULL
InnoDB_IO_r_bytes_min: NULL
InnoDB_IO_r_bytes_max: NULL
InnoDB_IO_r_bytes_pct_95: NULL
InnoDB_IO_r_bytes_stddev: NULL
InnoDB_IO_r_bytes_median: NULL
InnoDB_IO_r_wait_min: NULL
InnoDB_IO_r_wait_max: NULL
InnoDB_IO_r_wait_pct_95: NULL
InnoDB_IO_r_wait_stddev: NULL
InnoDB_IO_r_wait_median: NULL
InnoDB_rec_lock_wait_min: NULL
InnoDB_rec_lock_wait_max: NULL
InnoDB_rec_lock_wait_pct_95: NULL
InnoDB_rec_lock_wait_stddev: NULL
InnoDB_rec_lock_wait_median: NULL
InnoDB_queue_wait_min: NULL
InnoDB_queue_wait_max: NULL
InnoDB_queue_wait_pct_95: NULL
InnoDB_queue_wait_stddev: NULL
InnoDB_queue_wait_median: NULL
InnoDB_pages_distinct_min: NULL
InnoDB_pages_distinct_max: NULL
InnoDB_pages_distinct_pct_95: NULL
InnoDB_pages_distinct_stddev: NULL
InnoDB_pages_distinct_median: NULL
QC_Hit_cnt: NULL
QC_Hit_sum: NULL
Full_scan_cnt: NULL
Full_scan_sum: NULL
Full_join_cnt: NULL
Full_join_sum: NULL
Tmp_table_cnt: NULL
Tmp_table_sum: NULL
Tmp_table_on_disk_cnt: NULL
Tmp_table_on_disk_sum: NULL
Filesort_cnt: NULL
Filesort_sum: NULL
Filesort_on_disk_cnt: NULL
Filesort_on_disk_sum: NULL
1 row in set (0.00 sec)
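Once history accumulates, the per-interval rows let you track how a query class evolves over time. As a hedged sketch (the column names follow the sample output above, the checksum is the one from the example), this lists the 95th-percentile execution time for one query class:

mysql> SELECT ts_min, ts_max, Query_time_pct_95, Rows_examined_sum
    ->   FROM global_query_review_history
    ->  WHERE checksum = 19915890639818927
    ->  ORDER BY ts_min;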
You can also write a simple web application yourself to display the slow query analysis results.
Whichever tool you use, you still need a script on each server that stores the slow log analysis results in a single central store. The one I use most is Anemometer: it supports multiple data sources, filtering by host, filtering by database, showing execution plans, and keeping historical data (which is important for comparative analysis), so it is basically sufficient. Its drawback is that it has no authentication or permission management.
Anemometer adds two columns to the global_query_review_history table: hostname_max and db_max. They hold the host name and the database name and are the key to filtering by host or by database; everything else is the same as shown above.
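If you created the history table yourself with --create-history-table, the two extra columns can be added afterwards. This is only a hedged sketch: the column types are assumptions, so check the table definitions shipped with your Anemometer version:

mysql> ALTER TABLE global_query_review_history
    ->   ADD COLUMN hostname_max VARCHAR(64) NOT NULL DEFAULT '',
    ->   ADD COLUMN db_max VARCHAR(64) DEFAULT NULL;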
There is plenty of documentation on installing and configuring Anemometer, so I will not repeat it here. A client-side slow log collection script is provided below:
#!/bin/bash
#****************************************************************#
# ScriptName: /usr/local/sbin/slowquery.sh
# Create Date: 2017-03-25
#***************************************************************#

# configure slow log storage database;
pt_db_host="172.18.212.17"
pt_db_port=3306
pt_db_user="slowquerylog"
pt_db_password="123456"
pt_db_database="slow_query_log"

# configure slow log collect database;
mysql_client=`which mysql`
mysql_host="172.18.204.10"
mysql_port=3306
mysql_user="root"
mysql_password="123456"

# configure slow log file position;
slowquery_file=`$mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "show variables like 'slow_query_log_file';" -BN 2> /dev/null | awk '{print $2}'`
slowquery_dir=`dirname $slowquery_file`
pt_query_digest=`which pt-query-digest`

# configure slow_query_log and long_query_time;
slow_query_log=1
long_query_time=0.1

# configure HOSTNAME;
HOSTNAME=172.18.204.10

# enable the slow log and set the threshold;
$mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "set global slow_query_log=$slow_query_log;set global long_query_time=$long_query_time" 2> /dev/null

# collect mysql slow log into database;
$pt_query_digest --user=$pt_db_user --password=$pt_db_password --port=$pt_db_port --charset=utf8 --review h=$pt_db_host,D=$pt_db_database,t=global_query_review --history h=$pt_db_host,D=$pt_db_database,t=global_query_review_history --no-report --limit=100% --filter="\$event->{add_column} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\" " $slowquery_file

if [ $? = 0 ];then
    # set a new slow log file, rotated by hour;
    new_mysql_slow_log=`$mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "select concat('$slowquery_dir','/mysql_slow_query_',date_format(now(),'%Y%m%d%H'));" -BN 2> /dev/null`

    # point the server at the new slow log file;
    $mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "set global slow_query_log_file = '$new_mysql_slow_log';" 2> /dev/null

    # delete logs older than 1 day;
    datetime=`date -d "1 day ago" +%Y%m%d%H`
    if [ -d $slowquery_dir ];then
        cd $slowquery_dir
        rm -fr mysql_slow_query_${datetime}*
    fi
fi
Or a second version of the script (which, after a successful import, truncates the slow log file in place instead of rotating it):
#!/bin/bash
#****************************************************************#
# ScriptName: /home/mysql/bin/slow_query_log.sh
# Create Date: 2017-03-25
#***************************************************************#

# configure slow log storage database;
pt_db_host="172.18.212.17"
pt_db_port=3306
pt_db_user="root"
pt_db_password="123456"
pt_db_database="slow_query_log"

# configure slow log collect database;
mysql_client=`which mysql`
mysql_host='172.18.204.10'
mysql_port=3306
mysql_user="root"
mysql_password="123456"

# configure slow log file position;
slowquery_file=`$mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "show variables like 'slow_query_log_file';" -BN 2> /dev/null | awk '{print $2}'`
slowquery_dir=`dirname "$slowquery_file"`
pt_query_digest=`which pt-query-digest`

# configure slow_query_log and long_query_time;
slow_query_log=1
long_query_time=0.5

# configure HOSTNAME;
HOSTNAME="172.18.204.10:3306"

# check whether the slow log is enabled, and enable it if not;
slow_log=`$mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "show variables like 'slow_query_log';" -BN 2> /dev/null | awk '{print $2}'`

if [ "$slow_log" == "OFF" ];then
    $mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "set global slow_query_log=$slow_query_log;set global long_query_time=$long_query_time" 2> /dev/null
fi

# collect mysql slow log into database, then truncate the slow log file;
$pt_query_digest --user=$pt_db_user --password=$pt_db_password --port=$pt_db_port --charset=utf8 --review h=$pt_db_host,D=$pt_db_database,t=global_query_review --history h=$pt_db_host,D=$pt_db_database,t=global_query_review_history --no-report --limit=100% --filter="\$event->{add_column} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\" " $slowquery_file && echo > ${slowquery_file}
The IP and port can be extracted with commands appropriate to your own environment and turned into variables. Then schedule the script to run periodically on each client; how often the slow log is collected is up to you. This way of collecting slow queries is not very real-time; if you need better timeliness, you can use a tool such as Filebeat to ship all slow logs to a single server, which gets you close to real time.
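As a hedged sketch (the script path follows the first example above and the 30-minute interval is arbitrary), scheduling the collection script with cron might look like this:

# crontab entry on the client: collect the slow log every 30 minutes
*/30 * * * * /bin/bash /usr/local/sbin/slowquery.sh > /dev/null 2>&1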