Importing Hive table data into Elasticsearch with a Spark job

Using elasticsearch-hadoop to load Hive table data into ES is very simple.

1. Add the dependency to pom.xml


<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch-hadoop</artifactId>
  <version>9.0.0-SNAPSHOT</version>
</dependency>

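The connector also needs Spark SQL and Hive support on the classpath at build time. A minimal sketch of the extra Maven entries, assuming Spark 3.x with Scala 2.12 — the versions here are assumptions, match them to your cluster:

```xml
<!-- Hedged sketch: versions/scala suffix are assumptions, adjust to your cluster -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.12</artifactId>
  <version>3.5.1</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-hive_2.12</artifactId>
  <version>3.5.1</version>
  <scope>provided</scope>
</dependency>
```

`provided` scope keeps these out of the fat jar, since the cluster already ships them.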
2. Create the SparkConf

// Spark parameter settings
SparkConf sparkConf = new SparkConf();
// target index to write to
sparkConf.set("es.resource","");
// ES cluster addresses; you don't need to list every node, the rest are auto-discovered
sparkConf.set("es.nodes","");
// column whose value becomes the document _id
sparkConf.set("es.mapping.id","c1");
sparkConf.set("es.net.http.auth.user","");// username
sparkConf.set("es.net.http.auth.pass","");// password

SparkSession sparkSession = SparkSession.builder().config(sparkConf).enableHiveSupport()
        .getOrCreate();
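A few other connector settings from elasticsearch-hadoop's configuration reference that often come up, shown as a sketch in properties form (set them on the SparkConf the same way as above):

```
es.port=9200               # REST port; defaults to 9200
es.nodes.wan.only=true     # talk only to the listed nodes (cloud/WAN setups, no discovery)
es.index.auto.create=true  # create the target index if it does not exist yet
```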

3. Write to ES

Dataset<Row> dataSet = sparkSession.sql("select c1,c2,c3 from xx");
// es.resource is already set on the SparkConf, so the per-write settings map can be empty
JavaEsSparkSQL.saveToEs(dataSet, ImmutableMap.of());

The columns selected in the SQL must correspond one-to-one with the ES field names.
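The second argument of `saveToEs` is a settings map that is merged over the SparkConf values, so write-specific options can be passed there instead of (or on top of) the global conf. A minimal sketch of building such a map with plain `java.util` collections — the index name `hive_docs` is a hypothetical placeholder, and the Spark call itself is shown only in a comment since it needs a live cluster:

```java
import java.util.HashMap;
import java.util.Map;

public class EsWriteConfig {

    // Per-write connector settings; elasticsearch-hadoop merges these over
    // the values set on the SparkConf, so they can override e.g. the index.
    // "hive_docs" is a hypothetical placeholder index name.
    static Map<String, String> esWriteSettings() {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("es.resource", "hive_docs"); // target index for this write
        cfg.put("es.mapping.id", "c1");      // column used as the document _id
        return cfg;
    }

    public static void main(String[] args) {
        Map<String, String> cfg = esWriteSettings();
        System.out.println(cfg);
        // Used as: JavaEsSparkSQL.saveToEs(dataSet, esWriteSettings());
    }
}
```

Keeping the map in one place also makes it easy to vary the target index per job run.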
