Canal-Admin Cluster Setup: Configuration and Pitfalls

Cluster Configuration

The installation of canal-admin is not repeated here; see the previous article. This post focuses on configuring a canal-admin cluster and the pitfalls encountered along the way.

Creating a Cluster

Fill in the ZooKeeper cluster information.

Key Parameters

How often canal flushes persisted data to ZooKeeper (ms)

canal.file.flush.period = 1000

memory store RingBuffer size, should be Math.pow(2,n)

canal.instance.memory.buffer.size = 16384

memory store RingBuffer memory unit size, default 1KB

canal.instance.memory.buffer.memunit = 1024

memory store get mode: MEMSIZE or ITEMSIZE

canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true
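In MEMSIZE mode the memory store's capacity works out to roughly buffer.size × memunit. A quick back-of-the-envelope check of the values above (a sketch of the arithmetic only; canal's internal accounting may differ slightly):

```python
# Estimate the memory store ring buffer footprint in MEMSIZE mode.
# Values mirror the settings above; the power-of-2 check reflects the
# Math.pow(2,n) requirement on buffer.size.

buffer_size = 16384   # canal.instance.memory.buffer.size
memunit = 1024        # canal.instance.memory.buffer.memunit (bytes)

assert buffer_size & (buffer_size - 1) == 0, "buffer.size must be a power of 2"

capacity_mb = buffer_size * memunit / (1024 * 1024)
print(f"approximate memory store capacity: {capacity_mb:.0f} MB")  # 16 MB
```

So with the defaults, each instance's in-memory ring buffer can hold about 16 MB of events before backpressure kicks in.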

Whether to enable heartbeat checks

canal.instance.detecting.enable = false

Heartbeat check SQL

canal.instance.detecting.sql = select 1

Heartbeat check interval (seconds)

canal.instance.detecting.interval.time = 3

Retry threshold for failed heartbeat checks

canal.instance.detecting.retry.threshold = 3

Whether to automatically switch MySQL hosts after heartbeat failure; the MySQL master/slave addresses can be configured per instance

canal.instance.detecting.heartbeatHaEnable = false
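Taken together, the detecting.* settings amount to: run the heartbeat SQL every `interval` seconds and declare failure after `retry.threshold` consecutive misses. A minimal Python sketch of that loop (the `check` and `on_failover` callables are placeholders for illustration, not canal APIs):

```python
import time

def heartbeat_loop(check, interval=3, retry_threshold=3, on_failover=None):
    """Sketch of heartbeat detection: run `check` (e.g. executing 'select 1')
    every `interval` seconds; after `retry_threshold` consecutive failures,
    trigger failover, mimicking heartbeatHaEnable-style behavior."""
    failures = 0
    while True:
        try:
            ok = check()
        except Exception:
            ok = False
        if ok:
            failures = 0          # any success resets the counter
        else:
            failures += 1
            if failures >= retry_threshold:
                if on_failover:
                    on_failover()  # e.g. switch to the standby MySQL address
                return
        time.sleep(interval)
```

The key detail is that the counter resets on any success, so only `retry_threshold` *consecutive* failures trigger a switch.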

Maximum supported transaction size; transactions larger than this are split into multiple deliveries

canal.instance.transaction.size = 1024

When falling back to a newly connected MySQL master, rewind the binlog position by this many seconds

canal.instance.fallbackIntervalInSeconds = 60

Network-related configuration

canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

binlog filter config

canal.instance.filter.druid.ddl = true

Ignore DCL statements

canal.instance.filter.query.dcl = true
canal.instance.filter.query.dml = false

Ignore DDL statements

canal.instance.filter.query.ddl = true

Whether to ignore exceptions when fetching table metadata from the binlog fails

canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false

binlog format/image check

canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

binlog ddl isolation

canal.instance.get.ddl.isolation = false

Parallel parser config: enable parallel binlog parsing

canal.instance.parser.parallel = true

Concurrent thread count; defaults to 60% of available processors. It is suggested not to exceed Runtime.getRuntime().availableProcessors()

canal.instance.parser.parallelThreadSize = 16

disruptor ringbuffer size, must be power of 2

canal.instance.parser.parallelBufferSize = 256

Instance destinations; if left empty, all instance configs under the conf directory are scanned by default

canal.destinations =
canal.conf.dir = ../conf
canal.auto.scan = true
canal.auto.scan.interval = 5

Persistence is done via ZooKeeper so that state is shared across the cluster

canal.instance.global.spring.xml = classpath:spring/default-instance.xml

MQ broker addresses

canal.mq.servers = 192.168.199.122:9092,192.168.199.123:9092,192.168.199.124:9092

Retry count

canal.mq.retries = 3

Batch size

canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 1048576

Linger time (ms)

canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432

Batch size when fetching data from canal

canal.mq.canalBatchSize = 50

Timeout when fetching data from canal (ms)

canal.mq.canalGetTimeout = 100

true means messages are delivered as flat JSON

canal.mq.flatMessage = true

Disable message compression

canal.mq.compressionType = none

Ack only after both the leader and all replicas have succeeded

canal.mq.acks = all

canal.mq.properties. =

canal.mq.producerGroup = test

Set this to "cloud" to enable the message trace feature on Aliyun.

canal.mq.accessChannel = local
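With flatMessage enabled, each Kafka record carries a self-describing JSON document. A small sketch of parsing one (field names follow canal's FlatMessage JSON format; the sample payload and the routing dict are fabricated for illustration):

```python
import json

def parse_flat_message(raw: bytes) -> dict:
    """Parse one canal flatMessage (canal.mq.flatMessage = true) as it
    arrives from Kafka, keeping only the fields a consumer typically routes on."""
    msg = json.loads(raw)
    return {
        "database": msg["database"],
        "table": msg["table"],
        "type": msg["type"],           # e.g. INSERT / UPDATE / DELETE
        "rows": msg.get("data") or [], # changed rows; None for some DDL events
    }

# Fabricated sample resembling a flatMessage for an INSERT on test.sc_test
sample = (b'{"database":"test","table":"sc_test","type":"INSERT",'
          b'"data":[{"id":"1","name":"a"}],"isDdl":false}')
print(parse_flat_message(sample))
```

Note that flatMessage serializes column values as strings, so consumers need to cast them back to their MySQL types themselves.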

Creating Servers

Run `sh bin/startup.sh local` on each node.
The Server list will then show the three running canal-servers.

Creating an Instance

Create a new Instance; the key configuration parameters are:

Only sync the sc_test table of the test database; .*\\..* would sync all tables

canal.instance.filter.regex=test\.sc_test
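canal matches this regex against schema.table names (comma-separated patterns are allowed). A quick sanity check of the two patterns mentioned above, using Python's `re` as a stand-in; canal's own matcher is a Perl5-style regex engine, so anchoring behavior may differ at the edges:

```python
import re

# Pattern from the config above: only the sc_test table in the test schema
only_one_table = re.compile(r"test\.sc_test")
# The catch-all pattern (written as .*\\..* in the properties file)
all_tables = re.compile(r".*\..*")

assert only_one_table.fullmatch("test.sc_test")
assert not only_one_table.fullmatch("test.other_table")
assert not only_one_table.fullmatch("test.sc_test_backup")  # no partial match
assert all_tables.fullmatch("anydb.anytable")
```

The escaped dot matters: an unescaped `test.sc_test` would also match a schema literally named `testXsc_test` for any character X.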

MQ topic name

canal.mq.topic=wcc-scm

Because our project needs to guarantee data ordering, we use a single topic with a single partition. Throughput suffers (consumer QPS around 2-3k), but that is sufficient for our project.

canal.mq.partition=0

Pitfalls

Since we use Aliyun RDS, which purges binlogs automatically, canal can fail to read the binlog:
java.io.IOException: Received error packet: errno = 1236, sqlstate = HY000 errmsg = Could not find first log file name in binary log index file
The binlog files can be uploaded to Aliyun OSS; add the following to the instance config:
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
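The errno 1236 error means the position canal has stored (in ZooKeeper) points at a binlog file the server has already purged. A toy illustration of the diagnosis; `diagnose` and its inputs are hypothetical, and in practice the server-side list comes from running `SHOW BINARY LOGS` on the RDS instance:

```python
def diagnose(stored_file: str, server_logs: list) -> str:
    """Compare the binlog file recorded in canal's stored position with the
    files still present on the MySQL server."""
    if stored_file in server_logs:
        return "binlog still present; errno 1236 has another cause"
    return (f"{stored_file} was purged; restore it from OSS via the "
            "canal.instance.rds.* settings, or reset the instance position")

print(diagnose("mysql-bin.000101", ["mysql-bin.000105", "mysql-bin.000106"]))
```

With the canal.instance.rds.* credentials set, canal can fetch the purged file from the OSS backup itself instead of failing.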
A gripe: when ops clicked the one-click "upload binlog to OSS" button, there was no feedback at all and the upload silently failed. At least show a failure reason...
