1. A brief look at how pt-online-schema-change works
First, it should be made clear that pt-online-schema-change does not mean the table structure is changed without any locking; the locks we usually talk about include InnoDB row locks and MDL locks. What the tool does is replace certain DDL operations that would use the COPY algorithm with equivalent DML operations, in other words it uses InnoDB row locks instead of a long-held MDL lock. A native COPY-algorithm DDL performs the entire table copy under an MDL lock of type SNW, so no DML is allowed during the whole copy, which is why a COPY-algorithm DDL blocks normal threads. Which DDL operations can run online natively is documented in the Online DDL section of the official manual. During a pt-online-schema-change run, an exclusive (X) MDL lock is only taken in the final rename phase, and the rename is usually very fast.
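As an aside, a quick way to check whether a given change can run as a native online DDL is to request the algorithm and lock level explicitly; if the operation cannot be done in place without blocking writes, MySQL rejects the statement instead of silently falling back to the COPY algorithm. A minimal sketch, using the example table from this article:
-- Request an in-place, non-blocking ALTER explicitly; if the operation were not
-- supported online, MySQL would reject the statement with an "ALGORITHM/LOCK not
-- supported" error instead of silently falling back to the COPY algorithm.
ALTER TABLE test.testpt_osc
  ADD INDEX name_index(name),
  ALGORITHM=INPLACE, LOCK=NONE;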
Let us take a rough look at how pt-online-schema-change works. You can see this for yourself by enabling the general log; the key steps are shown below (my table is called testpt_osc):
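If you want to capture this yourself, the general log can be switched on at runtime (shown here as a sketch; do this on a test instance, since the log grows quickly under load):
-- Record every statement the server receives, including those sent by pt-osc.
SET GLOBAL log_output  = 'FILE';
SET GLOBAL general_log = ON;
-- ... run pt-online-schema-change against the test table, inspect the log ...
SET GLOBAL general_log = OFF;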
- First, the new table is created:
CREATE TABLE `test`.`_testpt_osc_new` (
  `id` int(11) NOT NULL,
  `name` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

ALTER TABLE `test`.`_testpt_osc_new` add index name_index(name)
- Three triggers are defined:
The delete trigger:
CREATE TRIGGER `pt_osc_test_testpt_osc_del` AFTER DELETE ON `test`.`testpt_osc`
FOR EACH ROW
DELETE IGNORE FROM `test`.`_testpt_osc_new`
WHERE `test`.`_testpt_osc_new`.`id` <=> OLD.`id`
The update trigger:
CREATE TRIGGER `pt_osc_test_testpt_osc_upd` AFTER UPDATE ON `test`.`testpt_osc`
FOR EACH ROW
BEGIN
  DELETE IGNORE FROM `test`.`_testpt_osc_new`
    WHERE !(OLD.`id` <=> NEW.`id`) AND `test`.`_testpt_osc_new`.`id` <=> OLD.`id`;
  REPLACE INTO `test`.`_testpt_osc_new` (`id`, `name`) VALUES (NEW.`id`, NEW.`name`);
END
The insert trigger:
CREATE TRIGGER `pt_osc_test_testpt_osc_ins` AFTER INSERT ON `test`.`testpt_osc`
FOR EACH ROW
REPLACE INTO `test`.`_testpt_osc_new` (`id`, `name`) VALUES (NEW.`id`, NEW.`name`)
- Data is copied in chunks
First, the upper boundary of the next chunk to insert is determined:
SELECT /*!40001 SQL_NO_CACHE */ `id`
FROM `test`.`testpt_osc` FORCE INDEX(`PRIMARY`)
WHERE ((`id` >= '1'))
ORDER BY `id` LIMIT 1999, 2 /*next chunk boundary*/
Then the chunk is inserted:
INSERT LOW_PRIORITY IGNORE INTO `test`.`_testpt_osc_new` (`id`, `name`)
SELECT `id`, `name`
FROM `test`.`testpt_osc` FORCE INDEX(`PRIMARY`)
WHERE ((`id` >= '1')) AND ((`id` <= '2000'))
LOCK IN SHARE MODE
- Finally, the tables are renamed
A single RENAME TABLE statement performs the swap:
RENAME TABLE `test`.`tp1` TO `test`.`_tp1_old`, `test`.`_tp1_new` TO `test`.`tp1`
(this particular log excerpt was taken against a table named tp1).
Looking at the whole process, a few points deserve attention:
- In the delete and update triggers, the DELETE statements use the IGNORE keyword, so no error is raised even if the row has not yet been copied to the new table.
- The update and insert triggers both use REPLACE, so a row that has not yet been copied is simply inserted into the new table, and a row that has already been copied gets its values overwritten. The new table therefore always holds the latest data.
- The chunk copy uses INSERT IGNORE ... SELECT ... LOCK IN SHARE MODE against the old table, so rows that the triggers have already written with their newest values are neither overwritten nor cause errors. For example, with a chunk size of 200 and the copy currently at row 1000, if we manually modify row 2000, the update trigger writes that row to the new table ahead of time; when the copier later reaches it, IGNORE suppresses the duplicate-key error and the row keeps its newest values (see the sketch after this list). Note also that each INSERT ... SELECT runs as a separate transaction.
- The INSERT IGNORE ... SELECT ... LOCK IN SHARE MODE copy may take an auto-increment (AUTO-INC) lock on the new table, depending on your settings (notably innodb_autoinc_lock_mode).
- For the triggers, the original statement and the triggered statements are wrapped in one transaction, which means that for any single DML statement the row locks needed to modify both the old table and the new table live in the same transaction.
- The binlog and redo generated by pt-online-schema-change are much larger than those of an online DDL, so in terms of efficiency it should be slower than online DDL.
- Because of the REPLACE operations, pt-online-schema-change depends on the table having a primary key or unique key; otherwise it cannot work.
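To make the third point above concrete, here is a small hand-run sketch that simulates the interplay between the update trigger and the chunk copier (the statements mimic the tool's, but this is only an illustration):
-- The update trigger has already written the newest version of row id = 2000
-- into the new table before the copier reaches that chunk:
REPLACE INTO test._testpt_osc_new (id, name) VALUES (2000, 'newest');

-- When the copier arrives, IGNORE silently skips the duplicate key, so the row
-- written by the trigger (the newest data) is preserved and no error is raised:
INSERT LOW_PRIORITY IGNORE INTO test._testpt_osc_new (id, name)
SELECT id, name FROM test.testpt_osc FORCE INDEX(`PRIMARY`)
WHERE ((id >= '1')) AND ((id <= '2000'))
LOCK IN SHARE MODE;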
We can see that the whole process involves the following key topics:
- Triggers and transactions
- The INSERT IGNORE / REPLACE syntax
- How auto-increment deadlocks arise
In addition, points 4 and 5 in the list of notes above open up the possibility of deadlocks. We discuss these topics below.
2. Triggers and transactions
Triggers play a central role in pt-online-schema-change, so we need to understand how triggers relate to transactions. The triggers we commonly use are BEFORE and AFTER triggers, which perform additional operations before or after a DML statement on the base table. Below are two test triggers I defined:
CREATE TRIGGER testbef BEFORE INSERT ON t1
FOR EACH ROW
BEGIN
INSERT INTO t2 values(new.id);
END;
CREATE TRIGGER testaft after INSERT ON t1
FOR EACH ROW
BEGIN
INSERT INTO t3 values(new.id);
END;
Clearly, an insert into t1 first inserts a row into t2 (via the BEFORE trigger) and afterwards inserts a row into t3 (via the AFTER trigger). This can be verified with a function-call trace, as follows:
[root@ora12ctest mysql]# cat -n tri2.trace |grep row_ins
970 T@3: | | | | | | | | | | | >row_ins
971 T@3: | | | | | | | | | | | | row_ins: table: test/t2 (insert into t2)
...
1406 T@3: | | | | | | | | >row_ins
1407 T@3: | | | | | | | | | row_ins: table: test/t1 (insert into t1)
...
1779 T@3: | | | | | | | | | | | | >row_ins
1780 T@3: | | | | | | | | | | | | | row_ins: table: test/t3 (insert into t3)
...
This confirms the ordering. We also need to know that all of these operations are wrapped in a single transaction, which can likewise be verified with a function-call trace, or with the binlog. Below is the binlog of one such call:
# at 194 (GTID event: the transaction begins here)
#200212 17:23:16 server id 1903313 end_log_pos 259 CRC32 0x4ff6735e GTID last_committed=0 sequence_number=1 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= '92008a52-4b7d-11ea-9ec6-000c29c8aca8:202'/*!*/;
# at 259
#200212 17:23:16 server id 1903313 end_log_pos 331 CRC32 0x1ebd3446 Query thread_id=3 exec_time=0 error_code=0
...
BEGIN
/*!*/;
# at 331
#200212 17:23:16 server id 1903313 end_log_pos 384 CRC32 0xe748dc3a Rows_query
# INSERT INTO t2 values(new.id)
# at 384
#200212 17:23:16 server id 1903313 end_log_pos 429 CRC32 0x093c5fe3 Table_map: `test`.`t1` mapped to number 108
# at 429
#200212 17:23:16 server id 1903313 end_log_pos 474 CRC32 0x92691238 Table_map: `test`.`t2` mapped to number 110
# at 474
#200212 17:23:16 server id 1903313 end_log_pos 519 CRC32 0x5b9a710f Table_map: `test`.`t3` mapped to number 111
# at 519
#200212 17:23:16 server id 1903313 end_log_pos 559 CRC32 0xe41b1119 Write_rows: table id 110
# at 559
#200212 17:23:16 server id 1903313 end_log_pos 599 CRC32 0x36c3511c Write_rows: table id 108
# at 599
#200212 17:23:16 server id 1903313 end_log_pos 639 CRC32 0xa68b9ae6 Write_rows: table id 111 flags: STMT_END_F
### INSERT INTO `test`.`t2`
### SET
### @1=11000 /* INT meta=0 nullable=0 is_null=0 */
### INSERT INTO `test`.`t1`
### SET
### @1=11000 /* INT meta=0 nullable=1 is_null=0 */
### INSERT INTO `test`.`t3`
### SET
### @1=11000 /* INT meta=0 nullable=0 is_null=0 */
# at 639 (XID event: the transaction commits here)
#200212 17:23:16 server id 1903313 end_log_pos 670 CRC32 0xbbb6547b Xid = 19
COMMIT/*!*/;
So the binlog verifies not only the execution order but also that all the operations are contained in one transaction. Since every statement lives in the same transaction, the locking scope is correspondingly larger: it involves not only the table targeted by the DML itself but also the tables touched by the triggered statements, which deserves extra attention.
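A simple two-session experiment makes the larger locking scope visible. This is a sketch that assumes the t1/t2/t3 tables and triggers defined above, with t2's integer column being a primary key:
-- session 1: the trigger's insert into t2 belongs to this still-open transaction
BEGIN;
INSERT INTO t1 VALUES (100);   -- BEFORE trigger writes 100 into t2, AFTER trigger into t3

-- session 2: blocks on the duplicate-key check, because the row id = 100 in t2
-- is still locked by session 1's transaction
INSERT INTO t2 VALUES (100);

-- session 1:
COMMIT;                        -- session 2 now immediately returns ERROR 1062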
Moreover, all the statements not only share one transaction, they also share one error-return interface, which leads to errors like the following:
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (0.00 sec)
mysql> insert into t1 values(1);
ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'
mysql>
To our surprise, t1 contains no rows at all, yet the insert reports a duplicate entry. The reason is that even though t1 is empty, t2 or t3 may violate a uniqueness check; the resulting error is returned to the client through the shared interface.
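With the triggers defined above, this is easy to reproduce; the conflicting row actually lives in t2, not in t1 (a sketch, assuming t2's column is a primary key):
-- t1 is empty, but t2 already holds the value the BEFORE trigger will try to insert
INSERT INTO t2 VALUES (1);

-- the BEFORE INSERT trigger fails inside t2, and the error surfaces on the insert
-- into t1 even though t1 itself contains no conflicting row
INSERT INTO t1 VALUES (1);   -- ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'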
Finally, triggers make processing logic hard to follow; avoid them whenever possible.
3. The INSERT IGNORE / REPLACE syntax
Let us take INSERT IGNORE as the example for the IGNORE keyword. When it encounters a duplicate row, INSERT IGNORE skips it by downgrading the duplicate-key error. The duplicate detection itself works exactly as it does for REPLACE; the difference is what happens next: REPLACE does not ignore the duplicate but deletes the conflicting row and then performs the insert. In other words, they are triggered in the same way but behave differently once triggered. Let us take a closer look.
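Before looking at the internals, the observable difference can be summed up with a tiny experiment (a sketch; the table name is made up for illustration):
CREATE TABLE dup_demo (id INT PRIMARY KEY, name VARCHAR(20));
INSERT INTO dup_demo VALUES (1, 'old');

-- INSERT IGNORE: the duplicate is skipped, the existing row is untouched,
-- and the error is downgraded to a warning (0 rows affected)
INSERT IGNORE INTO dup_demo VALUES (1, 'new');

-- REPLACE: the conflicting row is removed and the new one written in its place
-- (reported as 2 rows affected); the row now reads (1, 'new')
REPLACE INTO dup_demo VALUES (1, 'new');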
First, an INSERT needs to locate the position where the row will be inserted; we skip that part here.
1) Detecting duplicate rows
In this step, for the primary key and for each unique index we must determine whether a duplicate row already exists. The check essentially positions a cursor in the index using the values being inserted and compares the record under the cursor with those values. The stack traces are as follows.
Primary key:
#0 row_ins_duplicate_error_in_clust (flags=0, cursor=0x7fffec4347d0, entry=0x7367c00, thr=0x7362980, mtr=0x7fffec433fa0)
at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:2273
#1 0x0000000001ae47d4 in row_ins_clust_index_entry_low (flags=0, mode=2, index=0x73674f0, n_uniq=1, entry=0x7367c00, n_ext=0, thr=0x7362980, dup_chk_only=false)
at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:2555
#2 0x0000000001ae697a in row_ins_clust_index_entry (index=0x73674f0, entry=0x7367c00, thr=0x7362980, n_ext=0, dup_chk_only=false)
at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3293
#3 0x0000000001ae6e88 in row_ins_index_entry (index=0x73674f0, entry=0x7367c00, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3429
#4 0x0000000001ae73e2 in row_ins_index_entry_step (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3579
#5 0x0000000001ae7749 in row_ins (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3717
#6 0x0000000001ae7bae in row_ins_step (thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3853
#7 0x0000000001b05f73 in row_insert_for_mysql_using_ins_graph (mysql_rec=0x736e7a0 "\375\002", prebuilt=0x7362170)
at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1738
#8 0x0000000001b06484 in row_insert_for_mysql (mysql_rec=0x736e7a0 "\375\002", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1859
#9 0x00000000019adca9 in ha_innobase::write_row (this=0x736e4b0, record=0x736e7a0 "\375\002") at /mysql/mysql-5.7.26/storage/innobase/handler/ha_innodb.cc:7598
#10 0x0000000000f67dc0 in handler::ha_write_row (this=0x736e4b0, buf=0x736e7a0 "\375\002") at /mysql/mysql-5.7.26/sql/handler.cc:8062
#11 0x00000000017cf173 in write_record (thd=0x6dd5660, table=0x735afa0, info=0x7fffec435b50, update=0x7fffec435ad0) at /mysql/mysql-5.7.26/sql/sql_insert.cc:1873
#12 0x00000000017cc24a in Sql_cmd_insert::mysql_insert (this=0x6debbc8, thd=0x6dd5660, table_list=0x6deb638) at /mysql/mysql-5.7.26/sql/sql_insert.cc:769
#13 0x00000000017d2d7f in Sql_cmd_insert::execute (this=0x6debbc8, thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_insert.cc:3118
#14 0x000000000159a70c in mysql_execute_command (thd=0x6dd5660, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3596
#15 0x00000000015a06c0 in mysql_parse (thd=0x6dd5660, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#16 0x0000000001595283 in dispatch_command (thd=0x6dd5660, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#17 0x00000000015940bc in do_command (thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#18 0x00000000016cbf91 in handle_connection (arg=0x6dda360) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#19 0x0000000001921c64 in pfs_spawn_thread (arg=0x6d7d0f0) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#20 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#21 0x0000003823ae8b6d in clone () from /lib64/libc.so.6
Secondary unique index:
#0 row_ins_scan_sec_index_for_duplicate (flags=0, index=0x7366e10, entry=0x7367ca8, thr=0x7362980, s_latch=false, mtr=0x7fffec434020, offsets_heap=0x7389038)
at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:2050
#1 0x0000000001ae5d35 in row_ins_sec_index_entry_low (flags=0, mode=2, index=0x7366e10, offsets_heap=0x7389038, heap=0x7368538, entry=0x7367ca8, trx_id=0,
thr=0x7362980, dup_chk_only=false) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3033
#2 0x0000000001ae6cea in row_ins_sec_index_entry (index=0x7366e10, entry=0x7367ca8, thr=0x7362980, dup_chk_only=false)
at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3382
#3 0x0000000001ae6ea6 in row_ins_index_entry (index=0x7366e10, entry=0x7367ca8, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3431
#4 0x0000000001ae73e2 in row_ins_index_entry_step (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3579
#5 0x0000000001ae7749 in row_ins (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3717
#6 0x0000000001ae7bae in row_ins_step (thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3853
#7 0x0000000001b05f73 in row_insert_for_mysql_using_ins_graph (mysql_rec=0x736e7a0 "\375\003", prebuilt=0x7362170)
at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1738
#8 0x0000000001b06484 in row_insert_for_mysql (mysql_rec=0x736e7a0 "\375\003", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1859
#9 0x00000000019adca9 in ha_innobase::write_row (this=0x736e4b0, record=0x736e7a0 "\375\003") at /mysql/mysql-5.7.26/storage/innobase/handler/ha_innodb.cc:7598
#10 0x0000000000f67dc0 in handler::ha_write_row (this=0x736e4b0, buf=0x736e7a0 "\375\003") at /mysql/mysql-5.7.26/sql/handler.cc:8062
#11 0x00000000017cf173 in write_record (thd=0x6dd5660, table=0x735afa0, info=0x7fffec435b50, update=0x7fffec435ad0) at /mysql/mysql-5.7.26/sql/sql_insert.cc:1873
#12 0x00000000017cc24a in Sql_cmd_insert::mysql_insert (this=0x6debbc8, thd=0x6dd5660, table_list=0x6deb638) at /mysql/mysql-5.7.26/sql/sql_insert.cc:769
#13 0x00000000017d2d7f in Sql_cmd_insert::execute (this=0x6debbc8, thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_insert.cc:3118
#14 0x000000000159a70c in mysql_execute_command (thd=0x6dd5660, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3596
#15 0x00000000015a06c0 in mysql_parse (thd=0x6dd5660, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#16 0x0000000001595283 in dispatch_command (thd=0x6dd5660, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#17 0x00000000015940bc in do_command (thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#18 0x00000000016cbf91 in handle_connection (arg=0x6dda360) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#19 0x0000000001921c64 in pfs_spawn_thread (arg=0x6d7d0f0) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#20 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#21 0x0000003823ae8b6d in clone () from /lib64/libc.so.6
If a duplicate row does exist, further decisions have to be made:
- If the duplicate row is still held by another transaction, an implicit-to-explicit lock conversion is needed. For the primary key the call chain is:
->lock_clust_rec_read_check_and_lock
->lock_rec_convert_impl_to_expl
->lock_rec_convert_impl_to_expl_for_trx
We know that an insert normally does not create an explicit lock. If a duplicate row is found and the transaction holding it has neither committed nor rolled back, we must wait for that transaction to finish before raising the error or continuing with the insert. Note that when REPLACE / INSERT ... ON DUPLICATE KEY UPDATE perform the uniqueness check they usually take a LOCK_S lock, whereas other operations usually take LOCK_X.
- If no other transaction holds the duplicate row, a duplicate-row error is raised. Note that this is not the error returned to the client but the internal error HA_ERR_FOUND_DUPP_KEY, known inside InnoDB as DB_DUPLICATE_KEY (see convert_error_code_to_mysql). How this error is handled depends on the syntax being executed.
Of course, if there is no duplicate row the insert simply proceeds; both INSERT IGNORE and REPLACE are implemented as a plain insert in that case. And if there is a duplicate row? That is analyzed below.
By now it should be clear that INSERT IGNORE / REPLACE decide whether a row is a duplicate through the primary key or a unique key, or more concretely through how the error HA_ERR_FOUND_DUPP_KEY is handled. If the table has no index that can enforce uniqueness, then even two completely identical rows are not flagged as duplicates; they are treated as two distinct rows. INSERT ... ON DUPLICATE KEY UPDATE follows the same logic.
During the uniqueness check, the primary key is checked first, and then each unique index is checked in turn.
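The dependency on a primary key or unique key is easy to demonstrate: on a table that has neither, REPLACE degenerates into a plain insert and happily stores identical rows (a sketch with a made-up table name):
CREATE TABLE no_uk (a INT, b VARCHAR(10));    -- no primary key, no unique index

REPLACE INTO no_uk VALUES (1, 'x');
REPLACE INTO no_uk VALUES (1, 'x');           -- nothing can conflict, so it simply inserts

SELECT COUNT(*) FROM no_uk;                   -- returns 2: the identical rows coexist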
2) Possible rollback
First, for multi-row inserts and INSERT ... SELECT, the InnoDB layer still inserts one row at a time, so we should firmly keep a row-by-row processing model in mind. In Sql_cmd_insert::mysql_insert there is a large while loop that drives this processing.
We also need to understand that the uniqueness check starts with the primary key: if it passes, the primary-key record is inserted, then each secondary unique index is checked and inserted in turn. This raises a problem: if the primary-key record has been inserted but a secondary unique index then violates uniqueness, the primary-key insert must be rolled back. Similarly, for an INSERT ... SELECT, if some earlier rows did not violate uniqueness and were inserted but a later row does violate it, those earlier rows must also be rolled back. This logic is handled in row_insert_for_mysql_using_ins_graph.
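The visible effect of this rollback is statement-level atomicity: if a multi-row insert hits a duplicate partway through, the rows it had already inserted are rolled back and the table is left unchanged (a sketch; the table name is illustrative):
CREATE TABLE roll_demo (id INT PRIMARY KEY);
INSERT INTO roll_demo VALUES (3);

-- values 1 and 2 are inserted first, then value 3 hits a duplicate key;
-- the whole statement fails and the rows for 1 and 2 are rolled back as well
INSERT INTO roll_demo VALUES (1), (2), (3);   -- ERROR 1062

SELECT * FROM roll_demo;                      -- still only the row with id = 3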
Rollback stack trace:
#0 row_undo_ins (node=0x73685c0, thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/row/row0uins.cc:466
#1 0x0000000001b5d918 in row_undo (node=0x73685c0, thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/row/row0undo.cc:327
#2 0x0000000001b5dbae in row_undo_step (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/row/row0undo.cc:411
#3 0x0000000001ab1847 in que_thr_step (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/que/que0que.cc:1047
#4 0x0000000001ab1a63 in que_run_threads_low (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/que/que0que.cc:1111
#5 0x0000000001ab1c25 in que_run_threads (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/que/que0que.cc:1151
#6 0x0000000001bc622b in trx_rollback_to_savepoint_low (trx=0x7fffedc5b8c0, savept=0x7fffec434b60) at /mysql/mysql-5.7.26/storage/innobase/trx/trx0roll.cc:118
#7 0x0000000001bc64d2 in trx_rollback_to_savepoint (trx=0x7fffedc5b8c0, savept=0x7fffec434b60) at /mysql/mysql-5.7.26/storage/innobase/trx/trx0roll.cc:159
#8 0x0000000001b03b92 in row_mysql_handle_errors (new_err=0x7fffec434b5c, trx=0x7fffedc5b8c0, thr=0x7362980, savept=0x7fffec434b60)
at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:767
#9 0x0000000001b05ff2 in row_insert_for_mysql_using_ins_graph (mysql_rec=0x736e7a0 "\375\n", prebuilt=0x7362170)
at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1752
#10 0x0000000001b06484 in row_insert_for_mysql (mysql_rec=0x736e7a0 "\375\n", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1859
#11 0x00000000019adca9 in ha_innobase::write_row (this=0x736e4b0, record=0x736e7a0 "\375\n") at /mysql/mysql-5.7.26/storage/innobase/handler/ha_innodb.cc:7598
#12 0x0000000000f67dc0 in handler::ha_write_row (this=0x736e4b0, buf=0x736e7a0 "\375\n") at /mysql/mysql-5.7.26/sql/handler.cc:8062
#13 0x00000000017cf173 in write_record (thd=0x6dd5660, table=0x735afa0, info=0x7fffec435b50, update=0x7fffec435ad0) at /mysql/mysql-5.7.26/sql/sql_insert.cc:1873
#14 0x00000000017cc24a in Sql_cmd_insert::mysql_insert (this=0x6debbc8, thd=0x6dd5660, table_list=0x6deb638) at /mysql/mysql-5.7.26/sql/sql_insert.cc:769
#15 0x00000000017d2d7f in Sql_cmd_insert::execute (this=0x6debbc8, thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_insert.cc:3118
#16 0x000000000159a70c in mysql_execute_command (thd=0x6dd5660, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3596
#17 0x00000000015a06c0 in mysql_parse (thd=0x6dd5660, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#18 0x0000000001595283 in dispatch_command (thd=0x6dd5660, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#19 0x00000000015940bc in do_command (thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#20 0x00000000016cbf91 in handle_connection (arg=0x6dda360) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#21 0x0000000001921c64 in pfs_spawn_thread (arg=0x6d7d0f0) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#22 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#23 0x0000003823ae8b6d in clone () from /lib64/libc.so.6
3) Handling the duplicate-key error HA_ERR_FOUND_DUPP_KEY
If a duplicate row exists and the error HA_ERR_FOUND_DUPP_KEY has been raised, the insert cannot proceed as such, and different syntaxes now act differently. The branching logic can be found in write_record.
Internally, duplicate-row handling is divided into three modes:
enum enum_duplicates { DUP_ERROR, DUP_REPLACE, DUP_UPDATE };
- DUP_ERROR: the ordinary INSERT and INSERT IGNORE statements.
This is the ordinary case: a plain INSERT returns the error to the client, whereas INSERT IGNORE does not report an error and only raises a warning, as follows:
/*
If IGNORE option is used, handler errors will be downgraded
to warnings and don't have to stop the iteration.
*/
mysql> insert ignore into tpk2 values(5,'g','m');
Query OK, 0 rows affected, 1 warning (6 min 3.60 sec)
mysql> show warnings
-> ;
+---------+------+---------------------------------------+
| Level | Code | Message |
+---------+------+---------------------------------------+
| Warning | 1062 | Duplicate entry '5' for key 'PRIMARY' |
+---------+------+---------------------------------------+
1 row in set (0.00 sec)
Note, however, that in the current version the auto-increment value is not rolled back after such an error.
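In other words, an ignored or failed insert still consumes auto-increment values and leaves gaps, for example (a sketch; names are illustrative, and the exact allocation also depends on innodb_autoinc_lock_mode):
CREATE TABLE ai_demo (
  id INT AUTO_INCREMENT PRIMARY KEY,
  u  INT,
  UNIQUE KEY uk_u (u)
);

INSERT INTO ai_demo (u) VALUES (1);           -- gets id = 1
INSERT IGNORE INTO ai_demo (u) VALUES (1);    -- duplicate on uk_u: only a warning, but id = 2 is consumed
INSERT INTO ai_demo (u) VALUES (2);           -- gets id = 3, not 2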
- DUP_REPLACE: the REPLACE statement, which replaces the row that violates the uniqueness constraint entirely with the newly inserted data. We usually think of REPLACE as a combination of delete and insert, but that is not the whole story: leaving aside foreign keys and triggers, if the violated index is the last unique index to be checked, an update is used instead; in all other cases it really is a delete followed by an insert. We can see this in the code:
if (last_uniq_key(table,key_nr) &&                       // is this the last unique index checked?
    !table->file->referenced_by_foreign_key() &&
    (!table->triggers || !table->triggers->has_delete_triggers()))
{
  if ((error=table->file->ha_update_row(table->record[1],
                                        table->record[0])) &&   // the update handler interface is called
      error != HA_ERR_RECORD_IS_THE_SAME)
    goto err;
  if (error != HA_ERR_RECORD_IS_THE_SAME)
    info->stats.deleted++;                               // affected rows +1
  goto after_trg_n_copied_inc;
}
else
{
  ...
  if ((error=table->file->ha_delete_row(table->record[1])))   // the delete handler interface is called
    goto err;
  info->stats.deleted++;                                 // affected rows +1
  ...
  /* Let us attempt do write_row() once more */          // loop once more to perform an ordinary insert
}
- DUP_UPDATE: the INSERT ... ON DUPLICATE KEY UPDATE syntax, which at this point is essentially equivalent to an UPDATE statement. Note that it is not a full replacement: only the columns named in the UPDATE clause are modified, the other columns are left untouched, as follows:
if ((error=table->file->ha_update_row(table->record[1],
                                      table->record[0])) &&   // the update handler interface is called
    error != HA_ERR_RECORD_IS_THE_SAME)
{
  ...
}
if (error != HA_ERR_RECORD_IS_THE_SAME)
  info->stats.updated++;                                 // affected rows +1
else
  error= 0;
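The fact that only the listed columns are updated is easy to verify (a sketch; the table name is illustrative, with a primary key on id):
CREATE TABLE dup_upd_demo (id INT PRIMARY KEY, a VARCHAR(10), b VARCHAR(10));
INSERT INTO dup_upd_demo VALUES (1, 'a-old', 'b-old');

-- the duplicate on id = 1 turns the insert into an update of column b only;
-- column a keeps its old value even though the VALUES list carries 'a-new'
INSERT INTO dup_upd_demo VALUES (1, 'a-new', 'b-new')
  ON DUPLICATE KEY UPDATE b = 'b-new';

SELECT * FROM dup_upd_demo;                   -- (1, 'a-old', 'b-new')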
Finally, for an ordinary INSERT the affected-row count is usually 1. For REPLACE / INSERT ... ON DUPLICATE KEY UPDATE, when a duplicate row was encountered and changed (note: not the case of a plain successful insert), the reported affected-row count is usually 2, as follows:
mysql> replace testpri2(id,a,b) values(7,'b','k');
Query OK, 2 rows affected (2.74 sec)
mysql> insert into testpri2(a,b) values('mmmmnb','ffhhh') on DUPLICATE KEY UPDATE b='bj';
Query OK, 2 rows affected (3.81 sec)
- REPLACE: if the update interface is taken, no real insert is performed; info->stats.deleted++ adds 1 to the affected-row count and the insert accounting adds another 1. If the delete/insert path is taken, a real insert is performed; info->stats.deleted++ adds 1 as usual and the insert accounting adds another 1. Either way the result is 2.
- INSERT ... ON DUPLICATE KEY UPDATE: the update interface is always taken and no real insert is performed; info->stats.updated++ adds 1 and then info->stats.copied++ adds another 1, giving 2.
So the affected-row count comes out as 2 either way; do not be surprised by it.
Other notes:
The interface that returns the DML result to the client:
#0 my_ok (thd=0x6ddc080, affected_rows=1, id=0, message=0x7fffec4357d0 "Rows matched: 1 Changed: 1 Warnings: 0") at /mysql/mysql-5.7.26/sql/sql_class.h:4753
#1 0x0000000001652574 in mysql_update (thd=0x6ddc080, fields=..., values=..., limit=18446744073709551614, handle_duplicates=DUP_ERROR, found_return=0x7fffec435d28,
updated_return=0x7fffec435d20) at /mysql/mysql-5.7.26/sql/sql_update.cc:1092
#2 0x00000000016580ea in Sql_cmd_update::try_single_table_update (this=0x6df2350, thd=0x6ddc080, switch_to_multitable=0x7fffec435dcf)
at /mysql/mysql-5.7.26/sql/sql_update.cc:2891
#3 0x0000000001658637 in Sql_cmd_update::execute (this=0x6df2350, thd=0x6ddc080) at /mysql/mysql-5.7.26/sql/sql_update.cc:3018
#4 0x000000000159a7ba in mysql_execute_command (thd=0x6ddc080, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3606
#5 0x00000000015a06c0 in mysql_parse (thd=0x6ddc080, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#6 0x0000000001595283 in dispatch_command (thd=0x6ddc080, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#7 0x00000000015940bc in do_command (thd=0x6ddc080) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#8 0x00000000016cbf91 in handle_connection (arg=0x6dd9480) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#9 0x0000000001921c64 in pfs_spawn_thread (arg=0x6da4430) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#10 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#11 0x0000003823ae8b6d in clone () from /lib64/libc.so.6