PySpark WordCount error (solved)

This problem has been bothering me for days; I have tried many of the fixes suggested online but still cannot solve it.

1. Running a simple WordCount in a Jupyter notebook fails at the third line.

textFile = sc.textFile("data/test.txt")
stringRDD = textFile.flatMap(lambda line: line.split(" "))
countsRDD = stringRDD.map(lambda word: (word, 1)).reduceByKey(lambda x, y: x + y)
countsRDD.saveAsTextFile("data/output")
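
A note on why the failure shows up where it does: in the RDD API, textFile, flatMap, map and reduceByKey are all lazy transformations, so a bad input path normally only surfaces when an action such as saveAsTextFile (or count) runs. Below is a small pre-check I could add before reading, assuming a local file in local mode and that sc is the SparkContext the notebook already provides:

import os

input_path = "data/test.txt"
# Print the absolute path that will actually be read, and fail fast if it is
# missing, instead of waiting for a Py4JJavaError at the first action.
print("Reading: " + os.path.abspath(input_path))
if not os.path.exists(input_path):
    raise IOError("Input not found: " + os.path.abspath(input_path))
textFile = sc.textFile(input_path)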

2. The same problem occurs when I write wordcount.py in the Eclipse IDE and run it from the terminal with spark-submit.

Command entered in the terminal: spark-submit --driver-memory 2g --master local[4] WordCount.py
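
For context, here is a minimal sketch of what the relevant part of WordCount.py presumably looks like, reconstructed from lines 30-34 of the traceback below; the SparkConf/SparkContext setup and the exact value of Path are my assumptions, not the original script:

from pyspark import SparkConf, SparkContext

# Build the context explicitly because the script is launched with spark-submit
# rather than inside a notebook.
conf = SparkConf().setAppName("WordCount")
sc = SparkContext(conf=conf)

Path = "/home/qcl/pythonwork/PythonProject/"  # assumed value; my real script may differ
print("Starting to read the text file...")
textFile = sc.textFile(Path + "data/README.md")
print("The text file has " + str(textFile.count()) + " lines")
countsRDD = textFile.flatMap(lambda line: line.split(" ")) \
                    .map(lambda x: (x, 1)) \
                    .reduceByKey(lambda x, y: x + y)
print("The word count has " + str(countsRDD.count()) + " entries")

The traceback from that run: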

Py4JJavaError Traceback (most recent call last)

~/pythonwork/PythonProject/WordCount.py in ()

    30    print("Starting to read the text file...")

    31    textFile =sc.textFile(Path+"data/README.md")

---> 32    print("The text file has " + str(textFile.count()) + " lines")

    33    countsRDD = textFile.flatMap(lambda line : line.split(" ")).map(lambda x: (x,1)).reduceByKey(lambda x,y : x+y)

    34    print("The word count has " + str(countsRDD.count()) + " entries")

/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py in count(self)

  1053        3

  1054        """

-> 1055        return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()

  1056

  1057    def stats(self):

/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py in sum(self)

  1044        6.0

  1045        """

-> 1046        return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)

  1047

  1048    def count(self):

/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py in fold(self, zeroValue, op)

    915        # zeroValue provided to each partition is unique from the one provided

    916        # to the final reduce call

--> 917        vals = self.mapPartitions(func).collect()

    918        return reduce(op, vals, zeroValue)

    919

/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py in collect(self)

    814        """

    815        with SCCallSiteSync(self.context) as css:

--> 816            sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())

    817        return list(_load_from_socket(sock_info, self._jrdd_deserializer))

    818

/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)

  1255        answer = self.gateway_client.send_command(command)

  1256        return_value = get_return_value(

-> 1257            answer, self.gateway_client, self.target_id, self.name)

  1258

  1259        for temp_arg in temp_args:

/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)

    326                raise Py4JJavaError(

    327                    "An error occurred while calling {0}{1}{2}.\n".

--> 328                    format(target_id, ".", name), value)

    329            else:

    330                raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.

: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/qcl/pythonwork/PythonProjectdata/README.md

at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)

at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)

at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)

at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)

at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)

at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)

at scala.Option.getOrElse(Option.scala:121)

at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)

at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)

at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)

at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)

at scala.Option.getOrElse(Option.scala:121)

at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)

at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:55)

at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)

at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)

at scala.Option.getOrElse(Option.scala:121)

at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)

at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)

at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)

at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)

at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)

at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)

at org.apache.spark.rdd.RDD.collect(RDD.scala:944)

at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)

at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)

at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)

at py4j.Gateway.invoke(Gateway.java:282)

at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)

at py4j.commands.CallCommand.execute(CallCommand.java:79)

at py4j.GatewayConnection.run(GatewayConnection.java:238)

at java.lang.Thread.run(Thread.java:748)

The error says Input path does not exist: file:/home/qcl/pythonwork/PythonProjectdata/README.md. So the file cannot be found? I have followed many of the suggestions online, but nothing works. Does anyone know how to fix this?

Copying a wordcount.py from the web and running it again actually worked!! The execution environment did not change, so could the only difference really be how the program was written? Judging from the error path above (.../PythonProjectdata/README.md instead of .../PythonProject/data/README.md), my guess is that the Path variable was simply missing a trailing slash, so Path + "data/README.md" pointed at a location that does not exist. In any case, I hope this problem does not come back.
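
If the cause really was the path concatenation (that is my reading of the error, not something confirmed anywhere else), building the input path with os.path.join removes the dependence on a trailing slash; the variable names below are illustrative only:

import os

# os.path.join inserts the separator itself, so a missing trailing slash on
# project_dir can no longer glue "PythonProject" and "data" together.
project_dir = "/home/qcl/pythonwork/PythonProject"
input_path = os.path.join(project_dir, "data", "README.md")
textFile = sc.textFile(input_path)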
