Getting Started with Flink 1.10: A Custom Redis Sink Function

WeChat public account: 大数据开发运维架构

Follow the account for more big data related content. For questions or suggestions, please leave a message on the account.

If you find "大数据开发运维架构" helpful, feel free to share it on WeChat Moments.



I. Overview

    This article writes real-time data to Redis through a custom Redis Sink function. To keep things simple, the job receives data from a socket, processes it with an operator, and writes the result straight to Redis. The example is small, so the details are best read directly from the code below.

Software versions:

    Flink 1.10

    Redis 5.0.5

II. Code Walkthrough

1. Add the Redis connector dependency to pom.xml


<dependency>
    <groupId>org.apache.bahir</groupId>
    <artifactId>flink-connector-redis_2.11</artifactId>
    <version>1.0</version>
</dependency>

2. Main class code:


package com.hadoop.ljs.flink110.redis;

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;

/**
 * @author: Created By lujisen
 * @company ChinaUnicom Software JiNan
 * @date: 2020-05-02 10:30
 * @version: v1.0
 * @description: com.hadoop.ljs.flink110.redis
 */
public class RedisSinkMain {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment senv = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read lines of the form "key,value" from the socket
        DataStream<String> source = senv.socketTextStream("localhost", 9000);

        // Drop lines that do not contain exactly one comma
        DataStream<String> filter = source.filter(new FilterFunction<String>() {
            @Override
            public boolean filter(String value) throws Exception {
                return value != null && value.split(",").length == 2;
            }
        });

        // Split each line into a (key, value) pair
        DataStream<Tuple2<String, String>> keyValue = filter.map(new MapFunction<String, Tuple2<String, String>>() {
            @Override
            public Tuple2<String, String> map(String value) throws Exception {
                String[] split = value.split(",");
                return new Tuple2<>(split[0], split[1]);
            }
        });

        // Redis configuration: use FlinkJedisPoolConfig for a standalone Redis;
        // a Redis cluster needs FlinkJedisClusterConfig instead (see the sketch after this class)
        FlinkJedisPoolConfig redisConf = new FlinkJedisPoolConfig.Builder()
                .setHost("worker2.hadoop.ljs")
                .setPort(6379)
                .setPassword("123456a?")
                .build();

        keyValue.addSink(new RedisSink<Tuple2<String, String>>(redisConf, new RedisMapper<Tuple2<String, String>>() {
            @Override
            public RedisCommandDescription getCommandDescription() {
                // HSET into the hash named "table1": field = tuple key, value = tuple value
                return new RedisCommandDescription(RedisCommand.HSET, "table1");
            }
            @Override
            public String getKeyFromData(Tuple2<String, String> data) {
                return data.f0;
            }
            @Override
            public String getValueFromData(Tuple2<String, String> data) {
                return data.f1;
            }
        }));

        /* Start the job */
        senv.execute();
    }
}
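
The comment on the configuration step above notes that a Redis cluster needs FlinkJedisClusterConfig instead of FlinkJedisPoolConfig. Below is a minimal sketch of how that config is built; it is meant as a drop-in replacement for the redisConf lines inside main(), with the extra imports added at the top of the class. The node host names and ports are made-up placeholders, and redisMapper stands for the RedisMapper defined above:

import java.net.InetSocketAddress;
import java.util.HashSet;
import java.util.Set;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisClusterConfig;

// Hypothetical cluster nodes, replace with your own host:port pairs
Set<InetSocketAddress> nodes = new HashSet<>();
nodes.add(new InetSocketAddress("redis-node1", 7000));
nodes.add(new InetSocketAddress("redis-node2", 7000));
nodes.add(new InetSocketAddress("redis-node3", 7000));

FlinkJedisClusterConfig clusterConf = new FlinkJedisClusterConfig.Builder()
        .setNodes(nodes)
        .build();

// The sink itself stays the same, only the config object changes:
// keyValue.addSink(new RedisSink<>(clusterConf, redisMapper));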

3. Testing the job

1) Send data from a socket on the Windows side
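
The job connects to localhost:9000 with socketTextStream, so a listener has to be running on that port before the job is started. A simple way to test, assuming netcat (nc/ncat) is installed, is to open a listener and type comma-separated key,value lines into it; the names below are just sample data:

# start a listener first (flag syntax depends on the netcat build)
nc -l -p 9000
# then type key,value lines, one per line, for example:
name1,zhangsan
name2,lisi

Lines that do not contain exactly one comma are dropped by the filter operator, so malformed input is simply ignored.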

[Screenshot 1: test data typed into the socket client]

 

2) Verify the result in Redis
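
Because the sink uses RedisCommand.HSET with "table1" as the additional key, every tuple is written as a field/value pair into the Redis hash named table1. Assuming redis-cli is available on the Redis host, the result can be checked like this; the fields shown correspond to the sample lines typed in the previous step:

redis-cli -h worker2.hadoop.ljs -p 6379 -a '123456a?'
> HGETALL table1
1) "name1"
2) "zhangsan"
3) "name2"
4) "lisi"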

 

[Screenshot 2: the key/value pairs stored in the table1 hash in Redis]

    If this article helped you, please follow the WeChat public account "大数据开发运维架构" and share it on WeChat Moments. Thanks for your support!
