[Flink]wordcount

一、Bounded stream

1、Code

package wc;


import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class BoundedStreamWordCount {
    public static void main(String[] args) throws Exception {
        //TODO 1. Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        //TODO 2. Read the file
        DataStreamSource<String> lineDS = env.readTextFile("input/words.txt");
        //TODO 3. Process the data: split, transform, group, sum
        //flatMap takes an interface whose flatMap method must be overridden;
        //here we use an anonymous implementation class.
        //value is each input record read in; out is the collector used to emit results.
        SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndOne = lineDS.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
            @Override
            public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
                String[] words = value.split(" ");
                for (String word : words) {
                    Tuple2<String, Integer> wordAndOne = Tuple2.of(word, 1); //turn each word into a 2-tuple
                    out.collect(wordAndOne); //use the Collector to send data downstream
                }
            }
        });

        //TODO 4. Group by word
        //In KeySelector<Tuple2<String, Integer>, String>, the first type parameter is the input type, the second is the key type
        KeyedStream<Tuple2<String, Integer>, String> wordAndOneKS = wordAndOne.keyBy(new KeySelector<Tuple2<String, Integer>, String>() {
            @Override
            public String getKey(Tuple2<String, Integer> value) throws Exception {
                return value.f0;
            }
        });

        //TODO 5. Aggregate
        SingleOutputStreamOperator<Tuple2<String, Integer>> sumDS = wordAndOneKS.sum(1);

        //TODO 6. Print
        sumDS.print();

        //TODO 7. Execute
        env.execute(); //the default parallelism is the number of CPU cores on the machine

    }
}

2、Notes

Suppose an interface A declares a method a().
1) Normal approach: define a class B that implements interface A and its method a(), then instantiate it:
B b = new B();
2) Anonymous implementation class:

new A() {
  // implement a() { }
}
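The anonymous-class pattern above can be sketched in plain Java (the interface name A and the return string are hypothetical, chosen only for illustration):

```java
interface A {
    String a();
}

public class AnonymousDemo {
    public static void main(String[] args) {
        // Implement interface A inline, without defining a named class B.
        A impl = new A() {
            @Override
            public String a() {
                return "hello from anonymous class";
            }
        };
        System.out.println(impl.a()); // prints "hello from anonymous class"
    }
}
```

This is exactly how the `FlatMapFunction` and `KeySelector` arguments are supplied in the bounded-stream code above.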

二、Unbounded stream

1、Code

package wc;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;


public class StreamWordCount {
    public static void main(String[] args) throws Exception {
        //TODO 1. Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        //TODO 2. Read data from a socket
        DataStreamSource<String> lineDataStream = env.socketTextStream("hadoop1", 7777);

        //TODO 3. Process the data
        SingleOutputStreamOperator<Tuple2<String, Integer>> sum = lineDataStream.flatMap((String value, Collector<Tuple2<String, Integer>> out) -> {
                    String[] words = value.split("\\s+");
                    for (String word : words) {
                        out.collect(Tuple2.of(word, 1));
                    }})
                .returns(Types.TUPLE(Types.STRING, Types.INT)) //because of generic type erasure, the flatMap output type must be declared explicitly
                .keyBy((value) -> value.f0)
                .sum(1);       //value: when a lambda has a single parameter, its type can be omitted

        //TODO 4. Print
        sum.print();

        //TODO 5. Start execution
        env.execute(); //the default parallelism is the number of CPU cores on the machine
    }
}

2、Start netcat on the hadoop1 host

nc -lk 7777

3、Error


1) Cause: generic type erasure

The element type of the lambda's Collector is not specified, and Flink cannot recover it at runtime.

2) Fix: add a returns() call that declares the Collector's element type
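Type erasure itself is plain-Java behavior, not Flink-specific. A minimal sketch of why the type parameters are gone at runtime (and why Flink cannot recover `Tuple2<String, Integer>` from a lambda by reflection):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // The type parameters are erased during compilation, so both
        // lists share the exact same runtime class.
        System.out.println(strings.getClass() == ints.getClass()); // prints true
    }
}
```

With anonymous classes (as in the bounded-stream version), Flink can read the type arguments from the class's generic superinterface, which is why only the lambda version needs `returns()`.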
