Spark-submit error: org.apache.spark.sql.execution.datasources.orc.OrcFileFormat could not be instantiated


Error scenario

The following code:

spark.sql("select e.empno, e.ename, e.job, e.mgr, e.comm from emp e join dept d on e.deptno = d.deptno")
      .filter("comm is not null")
      .write.parquet("/demp")
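
For context, this snippet assumes a Hive-enabled session with the classic emp/dept tables already in the metastore. A self-contained version of the failing program might look like the sketch below (the object name and the parquet call come from the stack trace further down; enableHiveSupport is an assumption based on the tables being queried):

import org.apache.spark.sql.SparkSession

object SparkSQLHiveDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SparkSQLHiveDemo")
      .enableHiveSupport() // assumption: emp and dept live in the Hive metastore
      .getOrCreate()

    spark.sql("select e.empno, e.ename, e.job, e.mgr, e.comm " +
        "from emp e join dept d on e.deptno = d.deptno")
      .filter("comm is not null")
      .write.parquet("/demp") // the call that fails under spark-submit

    spark.stop()
  }
}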

It runs fine in spark-shell, but when submitted via spark-submit it fails with:

Exception in thread "main" java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.execution.datasources.orc.OrcFileFormat could not be instantiated
	at java.util.ServiceLoader.fail(ServiceLoader.java:232)
	at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
	at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
	at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
	at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
	at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
	at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
	at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
	at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:575)
	at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
	at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
	at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:516)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:494)
	at sparkSql.SparkSQLHiveDemo$.main(SparkSQLHiveDemo.scala:17)
	at sparkSql.SparkSQLHiveDemo.main(SparkSQLHiveDemo.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.FileFormat.$init$(Lorg/apache/spark/sql/execution/datasources/FileFormat;)V
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.&lt;init&gt;(OrcFileFormat.scala:81)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.lang.Class.newInstance(Class.java:442)
	at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
	... 28 more


Cause

The error was traced to the .write.parquet("/demp") call, which writes to HDFS.

The root cause is a mismatch between the Spark jars used for local development and the Spark version installed on the server. The trace itself hints at this: frames like scala.collection.Iterator$class.foreach come from a Scala 2.11 runtime, while the NoSuchMethodError on FileFormat.$init$ suggests the bundled OrcFileFormat was compiled for Scala 2.12, where trait initializers are emitted as static $init$ methods. Keep an eye out for this one; until you spot the mismatch, the error looks completely baffling.
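
A quick way to confirm such a mismatch is to print the versions the job actually runs against and compare them with the pom. A minimal sketch (VersionCheck is a hypothetical helper, not part of the original project):

import org.apache.spark.sql.SparkSession

object VersionCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("VersionCheck").getOrCreate()
    // Compare this with ${spark.version} declared in the pom.
    println(s"Runtime Spark version: ${spark.version}")
    // The Scala binary version matters too: spark-sql_2.12 artifacts
    // will not load cleanly on a Scala 2.11 cluster.
    println(s"Runtime Scala version: ${scala.util.Properties.versionString}")
    spark.stop()
  }
}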


Solution

Fix: edit the pom and mark the Spark dependencies as provided (the server already ships these jars, so the job runs fine without bundling the local copies into the application jar). Repackage (e.g. mvn clean package) and resubmit via spark-submit; the job then runs through.


        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.12</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.12</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>
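
To double-check that the fix took effect, you can ask the JVM where the problematic class is now loaded from; after the change it should resolve to the server's Spark jars rather than your application jar. A minimal sketch (WhereIsOrcFileFormat is a hypothetical helper):

object WhereIsOrcFileFormat {
  def main(args: Array[String]): Unit = {
    // Resolve the class that failed to instantiate and print its source jar.
    val cls = Class.forName(
      "org.apache.spark.sql.execution.datasources.orc.OrcFileFormat")
    println(cls.getProtectionDomain.getCodeSource.getLocation)
  }
}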

