Spark Integration with Elasticsearch

We use Spark Streaming to read streaming data from AWS Kinesis (similar to Kafka), process it with the Spark compute framework, and then write the results into Elasticsearch. Spark can write data to Elasticsearch in two ways; the integration steps are below.

  • Required package: org.elasticsearch:elasticsearch-spark-20_2.11 (the build for Spark 2.0 and Scala 2.11); download it or supply it via spark.jars.packages, as shown in the end-to-end sketch after this list.

  • The two ways Spark can write to Elasticsearch

    • Write an RDD directly to ES, or write a DataFrame directly to ES:
      def dataframe_write_to_es(dataframe, es_index):
          # es_index is the target "index/type" resource in Elasticsearch
          dataframe.write.format("org.elasticsearch.spark.sql")\
              .option("es.nodes", "http://elasticsearch_domain")\
              .option("es.port", "443")\
              .option("es.nodes.wan.only", "true")\
              .option("es.nodes.discovery", "false")\
              .option("es.net.ssl", "true")\
              .option("es.mapping.routing", "id_xxx")\
              .save(es_index, mode="append")
      
      
      def rdd_write_to_es(rdd):
          # rdd must be a pair RDD whose values are dicts; EsOutputFormat
          # ignores the key and indexes each value as a document.
          conf = {"es.nodes": "http://elasticsearch_domain", "es.port": "80",
                  "es.nodes.wan.only": "true",
                  "es.nodes.discovery": "false",
                  "es.mapping.routing": "xxx",
                  "es.batch.size.bytes": "30mb", "es.batch.size.entries": "300000",
                  "es.resource": "index/type"}
          rdd.saveAsNewAPIHadoopFile(
              path='-',
              outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
              keyClass="org.apache.hadoop.io.NullWritable",
              valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
              conf=conf)

    The conf settings above are documented in the elasticsearch-hadoop configuration reference.
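
Here is a hypothetical usage sketch for rdd_write_to_es. EsOutputFormat consumes a pair RDD, so plain dict records are first mapped into (key, dict) pairs; the key is ignored when indexing. The sample documents, the field names, and the `sc` SparkContext handle are assumptions for illustration, not values from the original setup.

    # Hypothetical usage: build a pair RDD of (key, dict) records and hand it
    # to rdd_write_to_es; the field names and values here are made up.
    docs = sc.parallelize([
        {"xxx": "route-1", "message": "hello"},
        {"xxx": "route-2", "message": "world"},
    ])
    # EsOutputFormat reads only the value of each pair, so any key will do.
    pair_rdd = docs.map(lambda doc: ("ignored-key", doc))
    rdd_write_to_es(pair_rdd)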
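
To tie the pieces together, below is a minimal end-to-end sketch: the connector package is supplied through spark.jars.packages, Spark Streaming reads records from Kinesis, and each micro-batch is written with the DataFrame writer above. The application name, stream name, endpoint, region, index name, and package versions are placeholders and example versions, not values from the original setup; the Kinesis ASL module is assumed to be available for KinesisUtils.

    from pyspark.sql import SparkSession
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kinesis import KinesisUtils, InitialPositionInStream

    spark = (SparkSession.builder
             .appName("kinesis-to-es")
             # elasticsearch-spark connector (Spark 2.0 / Scala 2.11 build) plus
             # the Kinesis ASL module; versions here are examples only
             .config("spark.jars.packages",
                     "org.elasticsearch:elasticsearch-spark-20_2.11:6.8.0,"
                     "org.apache.spark:spark-streaming-kinesis-asl_2.11:2.4.8")
             .getOrCreate())

    ssc = StreamingContext(spark.sparkContext, batchDuration=10)

    # Each Kinesis record arrives as a UTF-8 string; JSON payloads are assumed.
    stream = KinesisUtils.createStream(
        ssc, "kinesis-to-es", "my-kinesis-stream",
        "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
        InitialPositionInStream.LATEST, checkpointInterval=10)

    def write_batch(rdd):
        # Skip empty micro-batches, parse the JSON strings into a DataFrame,
        # then reuse the DataFrame writer defined above (placeholder index/type).
        if not rdd.isEmpty():
            df = spark.read.json(rdd)
            dataframe_write_to_es(df, "my_index/doc")

    stream.foreachRDD(write_batch)

    ssc.start()
    ssc.awaitTermination()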
