Flink in Action (108): Connectors (17) HDFS Read/Write (Part 2: Writing)

Note: this series of posts is compiled from SGG's video courses and is well suited for beginners getting started.


1. HDFS dependencies

Add the following to pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>FlinkHdfs</artifactId>
    <version>1.0-SNAPSHOT</version>

    <!-- NOTE: the property names below are reconstructed (the XML tags were lost in
         extraction); only scala.binary.version, flink.version and hadoop.version
         are actually referenced by the dependencies in this pom. -->
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <flink.version>1.11.0</flink.version>
        <scala.binary.version>2.11</scala.binary.version>
        <log4j.version>2.12.1</log4j.version>
        <hive.version>3.1.2</hive.version>
        <hadoop.version>3.1.3</hadoop.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.56</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- Hadoop compatibility and HDFS client -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-hadoop-compatibility_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <!-- Kafka connector (0.11 API) -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
            <version>${flink.version}</version>
            <exclusions>
                <exclusion>
                    <artifactId>slf4j-api</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>

        <!-- Filesystem connector -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-filesystem_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
    </dependencies>
</project>

2. Configure HDFS

Put hdfs-site.xml and core-site.xml into the src/main/resources directory.
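To verify the config files are actually on the classpath, a minimal sanity check (a hypothetical helper, not part of the original post) can read a key from the loaded configuration:

import org.apache.hadoop.conf.Configuration

object HadoopConfCheck {
  def main(args: Array[String]): Unit = {
    // new Configuration() loads core-site.xml from the classpath by default
    val conf = new Configuration()
    // expect hdfs://hadoop102:9820 here if core-site.xml was picked up
    println(conf.get("fs.defaultFS"))
  }
}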

3. Writing to HDFS

Sample input (one JSON record per Kafka message):

{"deviceType":"0","userNums":0,"newusers":1,"dayActivenums":1,"timeinfoString":"","timeinfo":"2018090704","userId":"1","monthActivenums":1,"weekActivenums":1,"groupByField":"1==0==2018090704","times":1,"hourActivenums":1}

{"deviceType":"1","userNums":0,"newusers":0,"dayActivenums":0,"timeinfoString":"","timeinfo":"2018090705","userId":"2","monthActivenums":0,"weekActivenums":0,"groupByField":"2==1==2018090705","times":1,"hourActivenums":0}

1. Main program

package com.atguigu

import java.util.Properties

import org.apache.flink.api.common.serialization.{SimpleStringEncoder, SimpleStringSchema}
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

object WriteToHDFS {
  def main(args: Array[String]): Unit = {
    val bsEnv = StreamExecutionEnvironment.getExecutionEnvironment
    bsEnv.setParallelism(1)

    // Checkpointing MUST be enabled: StreamingFileSink only moves part files
    // from in-progress/pending to finished when a checkpoint completes.
    bsEnv.enableCheckpointing(5000L)

    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "hadoop102:9092,hadoop103:9092,hadoop104:9092")
    properties.setProperty("group.id", "caimoutest3")
    properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    properties.setProperty("auto.offset.reset", "latest")

    // Read JSON strings from the Kafka topic "datainfo4"
    val stream = bsEnv.addSource(new FlinkKafkaConsumer011[String](
      "datainfo4", new SimpleStringSchema(), properties
    ))

    // Row-format sink: one record per line, bucketed by the record's "timeinfo" field
    val fileSink = StreamingFileSink
      .forRowFormat(new Path("hdfs://hadoop102:9820/dataanlay/liuliang/"), new SimpleStringEncoder[String]("UTF-8"))
      .withBucketAssigner(new LiuLiangUserDetailBucketAssigner()) // custom bucket (partition) path
      .withBucketCheckInterval(5 * 1000)
      .build()

    stream.addSink(fileSink)

    bsEnv.execute("LiuLiangHourUserDetailAnaly")
  }
}
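A row-format sink also rolls part files on size and inactivity. If the defaults need tuning, a rolling policy can be attached to the builder above via withRollingPolicy; a sketch using Flink 1.11's DefaultRollingPolicy (the interval and size values here are illustrative, not from the original job):

import java.util.concurrent.TimeUnit
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy

val rollingPolicy = DefaultRollingPolicy.builder()
  .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))  // roll at least every 15 minutes
  .withInactivityInterval(TimeUnit.MINUTES.toMillis(5)) // roll after 5 minutes without input
  .withMaxPartSize(128 * 1024 * 1024)                   // or once a part file reaches 128 MB
  .build[String, String]()

// then add .withRollingPolicy(rollingPolicy) to the forRowFormat builder chain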

2. LiuLiangUserDetailBucketAssigner

package com.atguigu

import com.alibaba.fastjson.JSON
import org.apache.flink.core.io.SimpleVersionedSerializer
import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner

// Buckets records by the "timeinfo" field of the JSON payload:
// "2018090704" -> bucket id "20180907/04", i.e. one HDFS directory per day and hour.
class LiuLiangUserDetailBucketAssigner extends BucketAssigner[String, String] {
  override def getBucketId(in: String, context: BucketAssigner.Context): String = {
    // println(in) // uncomment to debug incoming records
    val dateString = JSON.parseObject(in).getString("timeinfo")
    // always use "/" here: the bucket id is an HDFS path fragment, not a local path
    dateString.substring(0, 8) + "/" + dateString.substring(8, 10)
  }

  override def getSerializer: SimpleVersionedSerializer[String] = new LiuLiangStringSerializer
}

// Quick local test of the bucket-id logic (moved into a companion object so it can run)
object LiuLiangUserDetailBucketAssigner {
  def main(args: Array[String]): Unit = {
    val dateString = "2018090707"
    println(dateString.substring(0, 8) + "/" + dateString.substring(8, 10)) // 20180907/07
  }
}
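If bucketing by processing time is acceptable (instead of the record's own timeinfo field), Flink ships a ready-made assigner and the custom class above is unnecessary; a sketch, using the same imports as the main program:

import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner

// yields bucket paths like 20180907/04 based on the wall clock, not the payload
val sink = StreamingFileSink
  .forRowFormat(new Path("hdfs://hadoop102:9820/dataanlay/liuliang/"), new SimpleStringEncoder[String]("UTF-8"))
  .withBucketAssigner(new DateTimeBucketAssigner[String]("yyyyMMdd/HH"))
  .build()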

3. LiuLiangStringSerializer

package com.atguigu

import java.io.IOException

import org.apache.flink.core.io.SimpleVersionedSerializer

// (De)serializes bucket ids (plain strings) when they are written to checkpoint state.
class LiuLiangStringSerializer extends SimpleVersionedSerializer[String] {
  override def getVersion: Int = 0

  override def serialize(e: String): Array[Byte] = e.getBytes("UTF-8")

  override def deserialize(version: Int, bytes: Array[Byte]): String = {
    // reject state written by an incompatible serializer version
    if (version != getVersion) {
      throw new IOException(s"version mismatch: expected $getVersion, got $version")
    }
    new String(bytes, "UTF-8")
  }
}
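A quick round-trip check of the serializer (a hypothetical test, not part of the job):

object SerializerRoundTrip {
  def main(args: Array[String]): Unit = {
    val ser = new LiuLiangStringSerializer
    val bytes = ser.serialize("20180907/04")
    // deserialize with the version the serializer itself reports
    println(ser.deserialize(ser.getVersion, bytes)) // 20180907/04
  }
}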

TIP

  1. Disable HDFS permission checking; if you leave it enabled, you must copy the authentication files into the resources directory as well. In hdfs-site.xml:

    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
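If disabling permissions cluster-wide is not an option, one common alternative (my suggestion, not from the original post) is to run the job as an HDFS user that owns the target directory, via the HADOOP_USER_NAME setting that Hadoop's UserGroupInformation honors:

// hypothetical: set before the HDFS FileSystem is first initialized;
// "atguigu" is a placeholder for a user with write access to /dataanlay
System.setProperty("HADOOP_USER_NAME", "atguigu")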

 
