Tags (space-separated): Cloud2.0
[TOC]
# Comparing MySQL Performance on a Physical Machine vs. KVM

The benchmarks follow the single-variable principle: every comparison changes only one variable at a time.
Using the MySQL instance on the physical machine as the baseline, two separate test runs were performed for each case.
- CPU: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x 24
- MEM: 48 GB
- Disk: SSD 1.3 TB
- System: Ubuntu 14.04.4 LTS
- Kernel: 3.16.0-30-generic
- MySQL: mysql-5.5.31-linux2.6-x86_64
- Sysbench: 0.4.12
- KVM: QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.22)
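The figures above can be collected with standard tools; a minimal host-side sketch (binaries assumed to be on `$PATH`):

```bash
# Record the benchmark environment on the host.
grep -m1 'model name' /proc/cpuinfo      # CPU model
grep -c ^processor /proc/cpuinfo         # logical CPU count (24 here)
free -g                                  # memory in GB
lsb_release -d && uname -r               # distribution and kernel
mysql --version
kvm --version                            # prints the QEMU emulator version
```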
According to the references consulted, the writethrough I/O cache mode preserves data consistency, so by default only the writethrough cache mode is tested for MySQL.
Using the physical machine with NUMA Balancing enabled as the baseline, the following variables are tested in the KVM environment (a quick way to check and toggle NUMA balancing on the host is sketched below):
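A minimal sketch for checking and toggling automatic NUMA balancing on the host (kernel 3.16 exposes this sysctl):

```bash
# Check and toggle automatic NUMA balancing on the host.
cat /proc/sys/kernel/numa_balancing      # 1 = enabled, 0 = disabled
sudo sysctl -w kernel.numa_balancing=1   # enable (baseline)
sudo sysctl -w kernel.numa_balancing=0   # disable when testing that variable
```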
The MySQL configuration template is as follows (only the key parameters are listed):
```ini
# The MySQL server
[mysqld]
default-storage-engine = innodb
# MyISAM setup
key_buffer_size = 128M
myisam_sort_buffer_size = 64M
## global config
max_allowed_packet = 16M
max_heap_table_size = 64M
tmp_table_size = 8M
max_connections = 4000
open_files_limit = 6000
table_open_cache = 512
read_buffer_size = 2M
read_rnd_buffer_size = 4M
join_buffer_size = 256K
sort_buffer_size = 2M
thread_cache_size = 8
query_cache_size = 0
thread_concurrency = 16
# Replication Master setup
log-bin = mysql-bin
relay-log = mysqld-relay-bin
max_binlog_size = 100M
binlog_format = row
binlog_cache_size=32K
thread_stack=262144
auto_increment_increment = 3
auto_increment_offset = 1
# Logging
slow_query_log = 1
long_query_time = 2
# InnoDB setup
innodb_file_format = Barracuda
innodb_file_per_table
innodb_buffer_pool_size = 4096M
innodb_log_file_size = 16M
innodb_log_buffer_size = 40M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 50
innodb_log_files_in_group=2
innodb_io_capacity=2000
[mysqldump]
quick
extended-insert = false
default-character-set = utf8
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 8M
sort_buffer_size = 8M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
```
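To confirm that these values are actually in effect on the running instance, one can query them directly; a minimal sketch (user and credentials are assumptions):

```bash
# Verify the key InnoDB settings on a running instance.
mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_flush%';
                    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
```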
The KVM/QEMU domain configuration is as follows (note that the data disk vdb uses cache='writethrough'):
```xml
<domain type='kvm'>
  <name>mysql1</name>
  <memory unit='MiB'>5120</memory>
  <currentMemory unit='MiB'>5120</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='hd' />
  </os>
  <features>
    <acpi />
    <apic />
    <pae />
  </features>
  <clock offset='utc' />
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' />
      <source file='/data2/kvm/image1/mysql.qcow2' />
      <target dev='vda' bus='virtio' />
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writethrough'/>
      <source file='/data2/kvm/image1/data.qcow2' />
      <target dev='vdb' bus='virtio' />
    </disk>
    <interface type='network'>
      <source network='default1' />
      <model type='virtio' />
    </interface>
    <console type='pty'>
      <target port='0' />
    </console>
    <graphics type='vnc' autoport='yes' sharePolicy='allow-exclusive' keymap='en-us'>
      <listen type='address' address='0.0.0.0' />
    </graphics>
  </devices>
</domain>
```
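The guest can be created from this XML and the effective disk cache mode double-checked with virsh; a short sketch (the XML file path is an assumption):

```bash
# Define and start the guest, then confirm the data disk uses cache='writethrough'.
virsh define /data2/kvm/image1/mysql1.xml       # path to the XML above is assumed
virsh start mysql1
virsh dumpxml mysql1 | grep -i "driver name='qemu'"
```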
The tests use the MySQL instance on the physical machine as the reference point.
By default the physical-machine MySQL instance is limited to 4 GB of memory and 4 cores, with NUMA Balancing disabled (one way to apply such limits with cgroups is sketched after the settings below).
Innodb_flush_method = O_DIRECT
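Cgroups were used to pin the instance to its cores (see the closing remarks). A hypothetical cgroup v1 sketch of such a 4-core / 4 GB limit on Ubuntu 14.04 (group name, mount paths, and core numbers are assumptions):

```bash
# Confine mysqld to 4 cores and 4 GB of RAM with cgroup v1 cpuset + memory controllers.
sudo mkdir -p /sys/fs/cgroup/cpuset/mysql /sys/fs/cgroup/memory/mysql
echo 0-3 | sudo tee /sys/fs/cgroup/cpuset/mysql/cpuset.cpus               # 4 cores
echo 0   | sudo tee /sys/fs/cgroup/cpuset/mysql/cpuset.mems               # memory node 0
echo 4G  | sudo tee /sys/fs/cgroup/memory/mysql/memory.limit_in_bytes     # 4 GB cap
for pid in $(pgrep -x mysqld); do
    echo "$pid" | sudo tee -a /sys/fs/cgroup/cpuset/mysql/tasks /sys/fs/cgroup/memory/mysql/tasks
done
```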
```
OLTP test statistics:
queries performed:
read: 14000028
write: 5000010
other: 2000004
total: 21000042
transactions: 1000002 (1375.45 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 19000038 (26133.48 per sec.)
other operations: 2000004 (2750.89 per sec.)
Test execution summary:
total time: 727.0382s
total number of events: 1000002
total time taken by event execution: 17443.5464
per-request statistics:
min: 1.78ms
avg: 17.44ms
max: 1048.03ms
approx. 95 percentile: 32.64ms
Threads fairness:
events (avg/stddev): 41666.7500/646.28
execution time (avg/stddev): 726.8144/0.00
```
With Innodb_flush_method = O_DIRECT turned off, i.e. using the default value:
```
OLTP test statistics:
queries performed:
read: 14000028
write: 5000010
other: 2000004
total: 21000042
transactions: 1000002 (1390.26 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 19000038 (26414.92 per sec.)
other operations: 2000004 (2780.52 per sec.)
Test execution summary:
total time: 719.2920s
total number of events: 1000002
total time taken by event execution: 17257.6867
per-request statistics:
min: 1.78ms
avg: 17.26ms
max: 1476.86ms
approx. 95 percentile: 32.76ms
Threads fairness:
events (avg/stddev): 41666.7500/709.66
execution time (avg/stddev): 719.0703/0.00
```
For the MySQL instance on the physical machine, innodb_flush_method has only a modest impact on performance: 1390.26 TPS with the default versus 1375.45 TPS with O_DIRECT, roughly a 1% difference.
KVM-only benchmark: Innodb_flush_method = O_DIRECT, NUMA balancing enabled, KVM cache mode set to writethrough.
KVM configuration:

- CPU = 4 cores
- Mem = 5 GB
- MySQL = 4 GB
- Cache = writethrough

MySQL configuration:

- Mem = 4 GB
- Innodb_flush_method = O_DIRECT
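The OLTP table has to exist before the `run` command below; presumably it was created with the matching sysbench 0.4.12 prepare step, roughly:

```bash
# Create and populate the 1,000,000-row test table before running the benchmark.
sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test \
         --mysql-host=192.168.100.244 --mysql-user=test prepare
```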
```bash
sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test \
         --max-requests=1000000 --num-threads=24 \
         --mysql-host=192.168.100.244 --mysql-user=test run
```
```
OLTP test statistics:
queries performed:
read: 14000042
write: 5000015
other: 2000006
total: 21000063
transactions: 1000003 (1041.22 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 19000057 (19783.20 per sec.)
other operations: 2000006 (2082.44 per sec.)
Test execution summary:
total time: 960.4138s
total number of events: 1000003
total time taken by event execution: 23044.1587
per-request statistics:
min: 3.43ms
avg: 23.04ms
max: 958.60ms
approx. 95 percentile: 43.71ms
Threads fairness:
events (avg/stddev): 41666.7917/865.32
execution time (avg/stddev): 960.1733/0.01
```
Output of a second run:

```
sysbench 0.4.12: multi-threaded system evaluation benchmark
OLTP test statistics:
queries performed:
read: 14000042
write: 5000015
other: 2000006
total: 21000063
transactions: 1000003 (1025.90 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 19000057 (19492.01 per sec.)
other operations: 2000006 (2051.79 per sec.)
Test execution summary:
total time: 974.7614s
total number of events: 1000003
total time taken by event execution: 23388.1224
per-request statistics:
min: 3.75ms
avg: 23.39ms
max: 1306.42ms
approx. 95 percentile: 44.38ms
Threads fairness:
events (avg/stddev): 41666.7917/863.10
execution time (avg/stddev): 974.5051/0.01
```
The benchmark results show that, with the KVM cache mode set to writethrough, MySQL runs more efficiently with O_DIRECT.
Under KVM, MySQL delivers roughly 75% of the physical machine's performance (1041.22 TPS vs. 1375.45 TPS).
(Figure: the vertical axis is total execution time.)
With writethrough enabled on the host, setting Innodb_flush_method = O_DIRECT effectively improves MySQL performance,
reaching roughly 97% of the physical machine's performance in O_DIRECT mode.
KVM-only benchmark: host NUMA balancing disabled, KVM cache mode set to writethrough.
Innodb_flush_method = O_SYNC
```
OLTP test statistics:
queries performed:
read: 14000014
write: 5000005
other: 2000002
total: 21000021
transactions: 1000001 (1068.76 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 19000019 (20306.35 per sec.)
other operations: 2000002 (2137.51 per sec.)
Test execution summary:
total time: 935.6690s
total number of events: 1000001
total time taken by event execution: 22450.9403
per-request statistics:
min: 3.51ms
avg: 22.45ms
max: 1170.10ms
approx. 95 percentile: 41.65ms
Threads fairness:
events (avg/stddev): 41666.7083/855.51
execution time (avg/stddev): 935.4558/0.01
```
Innodb_flush_method = O_DIRECT
```
OLTP test statistics:
queries performed:
read: 14000042
write: 5000015
other: 2000006
total: 21000063
transactions: 1000003 (1062.79 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 19000057 (20193.07 per sec.)
other operations: 2000006 (2125.59 per sec.)
Test execution summary:
total time: 940.9197s
total number of events: 1000003
total time taken by event execution: 22577.0003
per-request statistics:
min: 3.36ms
avg: 22.58ms
max: 756.58ms
approx. 95 percentile: 41.50ms
Threads fairness:
events (avg/stddev): 41666.7917/943.69
execution time (avg/stddev): 940.7083/0.01
```
With NUMA binding enabled, performance drops by about 3%.
With NUMA binding disabled:
During the tests, cgroups were used to confine the instance's CPUs to the designated cores, and those cores ran at full load.
Our guess is that the vCPU threads may be scheduled onto different NUMA nodes, causing cross-node memory access; it is unclear whether vCPU scheduling actually behaves this way, though with NUMA disabled this should not happen. One way to pin the vCPUs onto a single node and rule this out is sketched below.
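A hypothetical way to verify this would be to pin the guest's vCPUs and memory to a single NUMA node; a minimal sketch with libvirt tools (numactl and virsh assumed installed, physical core numbers are assumptions, so check `numactl --hardware` first):

```bash
# Inspect the host topology, then pin the 4 vCPUs of mysql1 to cores on NUMA node 0.
numactl --hardware                                # list nodes and their CPUs
virsh vcpupin mysql1 0 0
virsh vcpupin mysql1 1 2
virsh vcpupin mysql1 2 4
virsh vcpupin mysql1 3 6
virsh numatune mysql1 --mode strict --nodeset 0   # keep guest memory on node 0
virsh vcpuinfo mysql1                             # verify the resulting CPU affinity
```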