Spark error: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:

When developing programs with Spark, you will sometimes run into this error.

py4j.protocol.Py4JJavaError: An error occurred while calling o469.count.
: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#138]
+- *(1) HashAggregate(keys=[], functions=[partial_count(1)], output=[count#678L])
   +- InMemoryTableScan
         +- InMemoryRelation [imsi#167, s_cellid#168, s_rsrp#169, ta#152L, orig_lon#153, orig_lat#154, mro_ts#173, n1_rsrp#174, n2_rsrp#175, n3_rsrp#176, n1_cell_id#178, n2_cell_id#179, n3_cell_id#180, a#294, b#295, p_day#296, city_id#297], StorageLevel(disk, memory, 1 replicas)
               +- *(7) Project [imsi#167, s_cellid#168, s_rsrp#169, ta#152L, orig_lon#153, orig_lat#154, mro_ts#173, n1_rsrp#174, n2_rsrp#175, n3_rsrp#176, n1_cell_id#178, n2_cell_id#179, n3_cell_id#180, 1 AS a#294, MDT AS b#295, 20230205 AS p_day#296, 572 AS city_id#297]
                  +- *(7) Filter (isnotnull(desc_num#164) AND (desc_num#164 < 150))
                     +- Window [row_number() windowspecdefinition(source_type#155, s_cellid#168, n1_cell_id#178, n2_cell_id#179, n3_cell_id#180, mro_ts#173 DESC NULLS LAST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS desc_num#164], [source_type#155, s_cellid#168, n1_cell_id#178, n2_cell_id#179, n3_cell_id#180], [mro_ts#173 DESC NULLS LAST]
                        +- *(6) Sort [source_type#155 ASC NULLS FIRST, s_cellid#168 ASC NULLS FIRST, n1_cell_id#178 ASC NULLS FIRST, n2_cell_id#179 ASC NULLS FIRST, n3_cell_id#180 ASC NULLS FIRST, mro_ts#173 DESC NULLS LAST], false, 0
                           +- Exchange hashpartitioning(source_type#155, s_cellid#168, n1_cell_id#178, n2_cell_id#179, n3_cell_id#180, 600), ENSURE_REQUIREMENTS, [id=#114]
                              +- *(5) Project [imsi#167, s_cellid#168, s_rsrp#169, ta#152L, orig_lon#153, orig_lat#154, n1_rsrp#174, n2_rsrp#175, n3_rsrp#176, n1_cell_id#178, n2_cell_id#179, n3_cell_id#180, mro_ts#173, source_type#155]
......

This error is reported from the plan tree itself: because the business logic is fairly complex, the plan Spark generates can fail at some point, and Spark simply wraps whatever went wrong in this generic error, much like writing a try/catch in Java that catches the broad Exception type.

So how do we fix it?

Don't fixate on the execute, tree part. Patiently keep scrolling down the log until you find the real Caused by: message.

[Screenshot: the Caused by: section further down the log, showing the real error]
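
If the log is very long, you can also drill down to the root cause from the Python side. Below is a minimal sketch, assuming the failing action is a count() on a Hive table; the database and table names are hypothetical:

from pyspark.sql import SparkSession
from py4j.protocol import Py4JJavaError

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
df = spark.table("my_db.my_orc_table")  # hypothetical table name

try:
    df.count()  # the action that triggers the TreeNodeException
except Py4JJavaError as e:
    # Walk the Java exception chain down to the innermost cause;
    # that is where the real "Caused by:" message lives.
    cause = e.java_exception
    while cause.getCause() is not None:
        cause = cause.getCause()
    print(cause.toString())
    raise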

In my case, the error happened because Spark could not find a file.

Thinking it through: when Spark reads an ORC table registered in the Hive metastore, it tries to use its own ORC reader instead of Hive's native SerDe in order to improve performance. This behavior is controlled by the property spark.sql.hive.convertMetastoreOrc, which is enabled by default. Because I had changed the column order of the Hive table, I set spark.sql("set spark.sql.hive.convertMetastoreOrc=false") so that Spark would read the table according to the Hive metastore schema. It turned out the metastore still held metadata for a partition whose directory no longer existed on disk, so Spark failed when it tried to read it. After I dropped that partition's metadata, the job ran successfully.
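
For reference, here is a minimal PySpark sketch of that workaround; the database, table name, and partition values are hypothetical placeholders (the partition columns are just borrowed from the p_day/city_id fields visible in the plan above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read ORC tables through Hive's own SerDe instead of Spark's built-in
# ORC reader (spark.sql.hive.convertMetastoreOrc defaults to true).
spark.sql("set spark.sql.hive.convertMetastoreOrc=false")

# If the metastore still registers a partition whose directory no longer
# exists on HDFS, drop that partition's metadata so Spark stops scanning it.
spark.sql("""
    ALTER TABLE my_db.my_orc_table
    DROP IF EXISTS PARTITION (p_day='20230205', city_id='572')
""")

If you are not sure which partitions the metastore still lists, spark.sql("SHOW PARTITIONS my_db.my_orc_table").show() will print them so you can compare against the directories that actually exist.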
