Achieving 100k+ TPS on MySQL Single-Table Inserts

MySQL server hardware and OS information (note: no FlashCache is used):

# Aspersa System Summary Report ##############################
        Uptime | 84 days, 19:00, 3 users, load average: 0.00, 0.21, 0.16
      Platform | Linux
       Release | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
        Kernel | 2.6.18-164.el5
  Architecture | CPU = 64-bit, OS = 64-bit
     Threading | NPTL 2.5
      Compiler | GNU CC version 4.1.2 20080704 (Red Hat 4.1.2-44).
       SELinux | Disabled
   Virtualized | No virtualization detected
# Processor ##################################################
    Processors | physical = 2, cores = 8, virtual = 16, hyperthreading = yes
        Speeds | 16 x 2261.058
        Models | 16 x Intel(R) Xeon(R) [email protected]
        Caches | 16 x 8192 KB
# Memory #####################################################
         Total | 23.53G
          Free | 2.16G
          Used | physical = 21.38G, swap = 240.00k, virtual = 21.38G
       Buffers | 1.03G
        Caches | 13.60G
         Dirty | 156 kB
       UsedRSS | 6.1G
    Swappiness | vm.swappiness = 60
   DirtyPolicy | vm.dirty_ratio = 40, vm.dirty_background_ratio = 10
# Mounted Filesystems ########################################
  Filesystem   Size  Used  Type   Opts  Mountpoint
  /dev/sda10   766G   11%  ext3   rw    /uxx
  /dev/sda1    122M   16%  ext3   rw    /boot
  /dev/sda2     15G   67%  ext3   rw    /
  /dev/sda3     15G   76%  ext3   rw    /usr
  /dev/sda5    8.6G    2%  ext3   rw    /tmp
  tmpfs         12G    0%  tmpfs  rw    /dev/shm
# Disk Schedulers And Queue Size #############################
           sda | [cfq] 128
# Disk Partioning ############################################
  Device  Type  Start  End  Size
  ======  ====  =====  ===  ====
# Kernel Inode State #########################################
  dentry-state | 297447 276749 45 0 0 0
       file-nr | 3570 0 2390094
      inode-nr | 220730 32
# LVM Volumes ################################################
  WARNING: Running as a non-root user. Functionality may be unavailable.
# RAID Controller ############################################
    Controller | LSI Logic MegaRAID SAS
         Model | , interface, ports
         Cache | Memory, BBU
           BBU | % Charged, Temperature C, isSOHGood=
  VirtualDev  Size  RAID Level  Disks  SpnDpth  Stripe  Status  Cache
  ==========  ====  ==========  =====  =======  ======  ======  =====
  PhysiclDev  Type  State  Errors  Vendor  Model  Size
  ==========  ====  =====  ======  ======  =====  ====
# Network Config #############################################
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
   FIN Timeout | net.ipv4.tcp_fin_timeout = 60
    Port Range | net.ipv4.ip_local_port_range = 32768 61000

# Interface Statistics #######################################
  interface    rx_bytes  rx_packets  rx_errors    tx_bytes  tx_packets  tx_errors
  =========  ==========  ==========  =========  ==========  ==========  =========
  lo          600000000      500000          0   600000000      500000          0
  eth0                0           0          0           0           0          0
  eth1                0           0          0           0           0          0
  eth2                0           0          0           0           0          0
  eth3                0           0          0           0           0          0
  eth4       1000000000   600000000          0  2000000000   450000000          0
  eth5                0           0          0           0           0          0
  eth6       1250000000    15000000          0           0           0          0
  eth7                0           0          0           0           0          0
  sit0                0           0          0           0           0          0
  bond0      2500000000   600000000          0  2000000000   450000000          0
# The End ####################################################

The MySQL build is Percona Server 5.5.18.

The relevant my.cnf configuration is as follows:

default-storage-engine=INNODB

innodb_flush_method=O_DIRECT

innodb_file_per_table=1

innodb_flush_log_at_trx_commit=1

innodb_lock_wait_timeout=100

innodb_additional_mem_pool_size=20M

innodb_buffer_pool_size=5G

innodb_log_buffer_size=800M

innodb_log_file_size=200M

innodb_log_files_in_group=4

innodb_file_io_threads=4

innodb_thread_concurrency=32

innodb_max_dirty_pages_pct=90

innodb_data_file_path=ibdata1:1G:autoextend

innodb_sync_spin_loops=0

innodb_spin_wait_delay=0

tdh_socket_thread_num=8

tdh_socket_slow_read_thread_num=64

tdh_socket_io_thread_num=4

tdh_socket_write_thread_num=16

tdh_socket_optimize_on=1

tdh_socket_cache_table_num_for_thd=40

The table being inserted into:

CREATE TABLE `test` (
  `id` INT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
  `k` INT(20) DEFAULT NULL,
  `i` INT(20) NOT NULL,
  `c` CHAR(120) DEFAULT NULL,
  `kc` INT(20) DEFAULT '1',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=gbk;

The load-test script:

package benchmark.insert

import benchmark.StressTest
import com.taobao.tdhs.client.TDHSClient
import com.taobao.tdhs.client.TDHSClientImpl
import com.taobao.tdhs.client.response.TDHSResponse
import com.taobao.tdhs.client.response.TDHSResponseEnum

import java.net.InetSocketAddress
import java.util.concurrent.atomic.AtomicLong

/**
 * @author <a href="mailto:[email protected]">文通</a>
 * @since 12-2-9 3:31 PM
 */
// Each benchmark process is launched with a 1-based index argument and owns a
// disjoint range of `step` ids, so separate processes never collide on an id.
int step = 100000000
int index = Integer.valueOf(args[0])
AtomicLong id = new AtomicLong((index - 1) * step + 1)

// Connect to the TDH_SOCKET port (9999) on the MySQL host, using 2 connections.
TDHSClient client = new TDHSClientImpl(new InetSocketAddress("t-wentong", 9999), 2)

def s = new StressTest(count: step)
s.add({
    def _id = id.incrementAndGet()
    String table = "test"
    TDHSResponse response = client.createStatement(index).insert()
            .use("benchmark_insert").from(table)
            .value("id", _id.toString())
            .value("k", "10000")
            .value("i", _id.toString())
            .value("c", _id.toString() + "_abcdefghijklmnopqrstuvwxyz")
            .insert()
    if (response != null && response.getStatus() != TDHSResponseEnum.ClientStatus.OK) {
        System.out.println(response)
    }
}, 100)   // run the insert closure with 100 concurrent workers

s.run()
client.shutdown()
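To make the id partitioning easier to see: each process is started with a 1-based index in args[0] and owns a disjoint block of step ids. The following standalone Groovy sketch is not part of the original benchmark (the 1..4 index range is just an example); it only prints the range each process index would own under the same formula:

// Prints the disjoint id range owned by each benchmark process,
// using the same formula as the script above: first = (index - 1) * step + 1.
long step = 100000000L
(1..4).each { int index ->
    long first = (index - 1) * step + 1
    long last = index * step
    println "process ${index}: ids ${first}..${last}"
}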

The keys to reaching such a high insert TPS through TDH_SOCKET:

  • TDH_SOCKET provides group commit:

    • It reduces the number of redo-log fsync calls, so IOPS does not become the bottleneck.

  • TDH_SOCKET supports concurrent writes.

  • The auto-increment id generation strategy must allocate ids in segments (see the sketch after this list):

    • With a fully sequential id strategy (i.e. InnoDB's default auto-increment), highly concurrent inserts run into a write lock on a single block.

    • Because the ids are strictly sequential, concurrently inserted rows all land on the same page, and every thread writing that page takes an exclusive latch on it via rw_lock_x_lock_func(&(block->lock), 0, file, line); that write lock becomes the bottleneck and caps TPS. So simply change the id generation strategy:

      • Segmented auto-increment: for example, one writer thread grabs the id range 1-10000 in one go, the next writer thread grabs 10001-20000, and so on.

      • Each writer thread then generates ids incrementally within its own range. This keeps writes largely sequential while preventing the page write lock from degrading performance under concurrent writers.
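As a minimal sketch of the segmented-id idea in Groovy: this is purely illustrative, and the class name SegmentedIdAllocator and its rangeSize parameter are my own, not part of TDH_SOCKET or the benchmark. The benchmark above gets the same effect more coarsely by giving each process a fixed 100,000,000-id block selected by args[0].

import java.util.concurrent.atomic.AtomicLong

// Illustrative sketch of segmented (range-based) id allocation: each writer
// thread reserves a whole block of ids up front and then counts locally,
// so concurrent inserts land on different index pages.
class SegmentedIdAllocator {
    private final AtomicLong nextRangeStart
    private final long rangeSize

    SegmentedIdAllocator(long firstId, long rangeSize) {
        this.nextRangeStart = new AtomicLong(firstId)
        this.rangeSize = rangeSize
    }

    // Reserve the next block and return its first id:
    // 1, 10001, 20001, ... for rangeSize = 10000.
    long acquireRangeStart() {
        return nextRangeStart.getAndAdd(rangeSize)
    }

    long getRangeSize() {
        return rangeSize
    }
}

// Usage sketch: four writer threads, each filling its own 10000-id range.
def allocator = new SegmentedIdAllocator(1L, 10000L)
def threads = (1..4).collect {
    Thread.start {
        long start = allocator.acquireRangeStart()
        for (long id = start; id < start + allocator.rangeSize; id++) {
            // insert the row with primary key `id` via TDH_SOCKET here
        }
    }
}
threads*.join()

Each thread only contends on the allocator for one atomic add per 10,000 inserts; inside its range it writes ids sequentially, so inserts from different threads spread across different pages instead of piling onto the same rightmost page.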

OK, and once you are past 100k TPS, take a look at the CPU usage:

You will find that CPU usage is still not very high. At this point, though, the bottleneck has moved to the log_sys->mutex lock, since every insert ultimately has to write the undo log.

As for what TDH_SOCKET actually is... I'll keep that a secret for now~

© 2012, 文通 (Taobao). All rights reserved.
