Integrating Canal with Spring Boot for real-time cache refresh, and the camelCase field-name problem

1. Canal installation

Download link: Canal download page

After downloading and installing the package, edit the configuration file instance.properties under conf\example: you only need to set the properties under "position info" and "table meta tsdb info".

#################################################
## mysql serverId , v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# position info
canal.instance.master.address=127.0.0.1:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
#canal.instance.tsdb.enable= true
#canal.instance.tsdb.url= jdbc:mysql://127.0.0.1:3306/maruko?useUnicode=true&characterEncoding=utf-8&useSSL=false
#canal.instance.tsdb.dbUsername= root
#canal.instance.tsdb.dbPassword= maruko

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername= root
canal.instance.dbPassword= maruko
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,topic2:mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.enableDynamicQueuePartition=false
#canal.mq.partitionsNum=3
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################

Note that you only need to configure canal.instance.master.address, canal.instance.dbUsername, and canal.instance.dbPassword. Do not configure the canal.instance.tsdb properties; if you do, startup will fail with an error.
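Distilling the note above, a minimal working instance.properties therefore only needs the following (the address and credentials here are example values for a local MySQL):

```properties
# position info: the MySQL instance canal should follow
canal.instance.master.address=127.0.0.1:3306

# credentials of a MySQL account with replication privileges
canal.instance.dbUsername=root
canal.instance.dbPassword=maruko
canal.instance.connectionCharset=UTF-8
```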

2. Adding the dependencies

Add the following Maven dependencies:

        <!--canal-->
        <dependency>
            <groupId>top.javatool</groupId>
            <artifactId>canal-spring-boot-starter</artifactId>
            <version>1.2.1-RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>

        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
        </dependency>

3. Listener configuration

Implement the EntryHandler interface and add your own business logic, such as deleting, updating, or inserting cache entries, by overriding the insert/update/delete callbacks.

@CanalTable("kafka_test")
@Component
@Slf4j
public class KafkaHandler implements EntryHandler<KafkaTest> {

    @Autowired
    private RedisTemplate redisTemplate;


    @Override
    public void insert(KafkaTest item) {
        log.info("canal insert: " + item);
        // write the row to Redis
        redisTemplate.opsForValue().set("kafka_test" + item.getId(), item);
    }

    @Override
    public void update(KafkaTest before, KafkaTest after) {
        log.warn("update before: " + before);
        log.warn("update after: " + after);
        // write the updated row to Redis
        redisTemplate.opsForValue().set("kafka_test" + after.getId(), after);
    }

    @Override
    public void delete(KafkaTest item) {
        log.warn("delete: " + item);
        // remove the row from Redis
        redisTemplate.delete("kafka_test" + item.getId());
    }
}

4. yml configuration

The default destination is example; if you changed it in the server-side instance configuration, update it here to match.

#canal config
canal:
  server: localhost:11111  # address of your canal server
  destination: example

If you don't want the console to keep printing certain messages, add the following logging configuration to silence the this.log.info("获取消息 {}", message) call that AbstractCanalClient's process() keeps emitting.

logging:
  level:
    top.javatool.canal.client: warn  # stop AbstractCanalClient from logging its routine "获取消息 {}" message at info level

5. Second approach (when column names contain underscores, some fields come back null with the method above)

The approach above only works when the database column names and the entity field names match exactly. When a column name contains an underscore, the field names we read straight from the binlog are the raw column names; because they don't match the camelCase entity fields, those fields end up null. In that case we need to fetch the columns ourselves and convert them to entity properties, as follows:

Add the dependency:

        <dependency>
            <groupId>com.xpand</groupId>
            <artifactId>starter-canal</artifactId>
            <version>0.0.1-SNAPSHOT</version>
        </dependency>

Create the listener:

@CanalEventListener
@Slf4j
public class KafkaListener {

    @Autowired
    private RedisTemplate redisTemplate;

    /**
     * @param eventType the type of the database operation
     * @param rowData   the row data affected by the operation
     */
    @ListenPoint(schema = "maruko", table = "kafka_test")
    public void listenKafkaTest(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        KafkaTest kafkaTestBefore = new KafkaTest();
        KafkaTest kafkaTestAfter = new KafkaTest();


        // column lists for the row image before and after the change
        List<CanalEntry.Column> beforeColumnsList = rowData.getBeforeColumnsList();
        List<CanalEntry.Column> afterColumnsList = rowData.getAfterColumnsList();


        getEntity(beforeColumnsList, kafkaTestBefore);
        log.warn("entity before the change: " + kafkaTestBefore);

        getEntity(afterColumnsList, kafkaTestAfter);
        log.warn("entity after the change: " + kafkaTestAfter);

        // dispatch on insert / update / delete
        switch (eventType.getNumber()) {
            case CanalEntry.EventType.INSERT_VALUE:
            case CanalEntry.EventType.UPDATE_VALUE:
                redisTemplate.opsForValue().set("kafka_test" + kafkaTestAfter.getId(), kafkaTestAfter);
                break;
            case CanalEntry.EventType.DELETE_VALUE:
                redisTemplate.delete("kafka_test" + kafkaTestBefore.getId());
                break;
        }
    }

    /**
     * Convert the column list into the entity.
     *
     * @param columnsList columns of the binlog row
     * @param kafkaTest   target entity to populate
     */
    private void getEntity(List<CanalEntry.Column> columnsList, KafkaTest kafkaTest) {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        for (CanalEntry.Column column : columnsList) {
            String name = column.getName();
            String value = column.getValue();
            switch (name) {
                case KafkaTest.ID:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setId(Integer.parseInt(value));
                    }
                    break;
                case KafkaTest.CONTENT:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setContent(value);
                    }
                    break;
                case KafkaTest.PRODUCER_STATUS:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setProducerStatus(Integer.parseInt(value));
                    }
                    break;
                case KafkaTest.CONSUMER_STATUS:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setConsumerStatus(Integer.parseInt(value));
                    }
                    break;
                case KafkaTest.UPDATE_TIME:
                    if (StringUtils.hasLength(value)) {
                        try {
                            kafkaTest.setUpdateTime(format.parse(value));
                        } catch (ParseException p) {
                            log.error(p.getMessage());
                        }
                    }
                    break;
                case KafkaTest.TOPIC:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setTopic(value);
                    }
                    break;
                case KafkaTest.CONSUMER_ID:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setConsumerId(value);
                    }
                    break;
                case KafkaTest.GROUP_ID:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setGroupId(value);
                    }
                    break;
                case KafkaTest.PARTITION_ID:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setPartitionId(Integer.parseInt(value));
                    }
                    break;
                case KafkaTest.PRODUCER_OFFSET:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setProducerOffset(Long.parseLong(value));
                    }
                    break;
                case KafkaTest.CONSUMER_OFFSET:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setConsumerOffset(Long.parseLong(value));
                    }
                    break;
                case KafkaTest.TEST:
                    if (StringUtils.hasLength(value)) {
                        kafkaTest.setTest(value);
                    }
                    break;

            }
        }
    }


}
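The per-column switch above grows with every new field. As a sketch of a more generic alternative (plain Java, independent of Canal's API; the toCamelCase helper and the property-map idea are my own illustration, not part of either starter), the snake_case column names can be converted to camelCase property names once, instead of being matched case by case:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ColumnNameMapper {

    /** Convert a snake_case column name (e.g. producer_status) to camelCase (producerStatus). */
    static String toCamelCase(String column) {
        StringBuilder sb = new StringBuilder(column.length());
        boolean upperNext = false;
        for (char c : column.toCharArray()) {
            if (c == '_') {
                upperNext = true;          // next letter starts a new word
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }

    /** Re-key a column-name -> value map by the camelCase property name. */
    static Map<String, String> toPropertyMap(Map<String, String> columns) {
        Map<String, String> props = new LinkedHashMap<>();
        columns.forEach((name, value) -> props.put(toCamelCase(name), value));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(toCamelCase("producer_status")); // producerStatus
        System.out.println(toCamelCase("update_time"));     // updateTime
    }
}
```

With the property map in hand, the values can then be bound to the entity with a bean-mapping utility (for example Spring's BeanWrapper) rather than a hand-written switch, so new columns need no listener changes.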

The entity class:

@Data
@TableName("kafka_test")
public class KafkaTest {

    public static final String ID = "id";

    public static final String CONTENT = "content";

    public static final String PRODUCER_STATUS = "producer_status";

    public static final String CONSUMER_STATUS = "consumer_status";

    public static final String UPDATE_TIME = "update_time";

    public static final String TOPIC = "topic";

    public static final String CONSUMER_ID = "consumer_id";

    public static final String GROUP_ID = "group_id";

    public static final String PARTITION_ID = "partition_id";

    public static final String PRODUCER_OFFSET = "producer_offset";

    public static final String CONSUMER_OFFSET = "consumer_offset";

    public static final String TEST = "test";

    @TableId(type = IdType.AUTO)
    private Integer id;

    @TableField("content")
    private String content;

    @TableField("producer_status")
    private Integer producerStatus;

    @TableField("consumer_status")
    private Integer consumerStatus;

    @TableField("update_time")
    private Date updateTime;

    @TableField("topic")
    private String topic;

    @TableField("consumer_id")
    private String consumerId;

    @TableField("group_id")
    private String groupId;

    @TableField("partition_id")
    private int partitionId;

    @TableField("consumer_offset")
    private Long consumerOffset;

    @TableField("producer_offset")
    private Long producerOffset;

    @TableField("test")
    private String test;
}
