Spark issue

Q: Application application_1505791560225_0911 failed 2 times due to AM Container for appattempt_1505791560225_0911_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://cnsz22VLK2906:8088/cluster/app/application_1505791560225_0911 Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://cluster1/user/sfapp/.sparkStaging/application_1505791560225_0911/__spark_libs__5856340437943961943.zip
java.io.FileNotFoundException: File does not exist: hdfs://cluster1/user/sfapp/.sparkStaging/application_1505791560225_0911/__spark_libs__5856340437943961943.zip
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.

A: The job was submitted in yarn-cluster mode, but the code hard-coded sparkConf.setMaster("local[*]"). The hard-coded local master conflicts with the YARN submission, so the AM cannot find the staging files it expects under .sparkStaging. Remove the setMaster call from the code and let spark-submit decide the master at submit time.
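
A minimal sketch of the fix (the MyApp class name, application name, and jar name are placeholders, not from the original post): leave the master unset in code and pass it on the spark-submit command line instead.

import org.apache.spark.{SparkConf, SparkContext}

object MyApp {
  def main(args: Array[String]): Unit = {
    // Do NOT call setMaster("local[*]") here; the master is supplied by spark-submit
    val conf = new SparkConf().setAppName("MyApp")
    val sc = new SparkContext(conf)
    // ... job logic ...
    sc.stop()
  }
}

Then submit with the master and deploy mode given on the command line, for example:

spark-submit --master yarn --deploy-mode cluster --class MyApp my-app.jar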
