Errors encountered when using Spark

I've just started with Spark + Java. I know neither one, so there is nothing for it but to stumble through the pitfalls from scratch.

First, I copy-pasted the WordCount example project:

import scala.Tuple2;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.SparkSession;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

/**
 * JavaWordCount: counts word occurrences in the text file passed as the first argument.
 */
public final class JavaWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {

        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }

        SparkSession spark = SparkSession
                .builder()
                .appName("JavaWordCount")
                .getOrCreate();

        // Read the input file and split each line on spaces.
        JavaRDD<String> lines = spark.read().textFile(args[0]).javaRDD();

        JavaRDD<String> words = lines.flatMap(s -> Arrays.asList(SPACE.split(s)).iterator());

        // Map each word to a (word, 1) pair, then sum the counts per word.
        JavaPairRDD<String, Integer> ones = words.mapToPair(s -> new Tuple2<>(s, 1));

        JavaPairRDD<String, Integer> counts = ones.reduceByKey((i1, i2) -> i1 + i2);

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<String, Integer> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        spark.stop();
    }
}

Error 1: Lambda expressions are not supported at language level '1.5'

Fix: File -> Project Structure -> Modules

Set Language Level to 8.0 - Lambdas, type annotations etc.

The message points to a language-level problem; changing the Language Level under Modules is enough.
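Note that the IDE setting can be reverted the next time Maven re-imports the project. A more durable fix, as a sketch assuming a standard Maven project, is to pin the compiler level in the pom, which IntelliJ then picks up automatically:

```xml
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```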

Error 2:

import org.apache.spark.sql.SparkSession; is shown in red (the class cannot be resolved).
To fix it, I modified the pom:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>2.1.0</version>
</dependency>
This version combination happened to resolve the import for me.
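One caveat, which is my own assumption rather than something I hit here: the `_2.11` and `_2.10` suffixes mark the Scala version each artifact was built against, and mixing them in one project can lead to runtime errors such as NoClassDefFoundError even when everything compiles. A sketch of the same dependencies with matching suffixes (the choice of `_2.11` is illustrative):

```xml
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
```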
