Diagnosing and Fixing a HiveServer2 Memory Leak in Hive 3.x (bug)

Reference:
https://juejin.cn/post/7141331245627080735?searchId=20230920140418F85636A0735C03971F71

Upstream JIRA issue:
https://issues.apache.org/jira/browse/HIVE-22275

When a single session runs multiple statements before they are cleaned up, OperationManager.queryIdOperation is not cleaned up properly.
In the log statements attached to the JIRA, with the exception of the first "Removed queryId:" line, the queryId listed during cleanup is always the same, even though each handle should have its own queryId. In other words, only the last queryId executed is ever cleaned up.

As a result, HS2 can run out of memory as OperationManager.queryIdOperation grows and never cleans these queryIds/Operations up.
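The leak mechanism can be illustrated with a minimal sketch (hypothetical, heavily simplified classes; the real code lives in Hive's OperationManager and HiveConf). Because the query id is stored in the shared session configuration, each new operation overwrites it, and cleanup only ever removes the entry for the most recent query:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the Hive 3.x bug: the query id lives in the
// *session* configuration, so every new operation overwrites it and
// cleanup can only ever remove the last queryId.
public class QueryIdLeakDemo {
    static Map<String, Object> queryIdOperation = new HashMap<>();
    static Map<String, String> sessionConf = new HashMap<>(); // per-session HiveConf stand-in

    static void newOperation(String queryId) {
        sessionConf.put("hive.query.id", queryId);   // session-level: clobbers the previous id
        queryIdOperation.put(queryId, new Object()); // registered per query
    }

    static void closeOperation() {
        // cleanup reads the id back from the session conf -> always the *last* id
        queryIdOperation.remove(sessionConf.get("hive.query.id"));
    }

    public static void main(String[] args) {
        newOperation("query-1");
        newOperation("query-2");
        closeOperation(); // removes query-2
        closeOperation(); // tries query-2 again; query-1 is never removed
        System.out.println(queryIdOperation.size()); // prints 1 -> one leaked entry
    }
}
```

Run many statements per session and the leaked entries accumulate until HS2 exhausts its heap.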

Fix
Having located the root cause, the fix is clear: make the Query Id an Operation-level value rather than a HiveSession-level one. The problem affects Hive 3.x; Hive 2.x does not have this feature and is therefore unaffected. Checking against the upstream tracker, this is a known issue that has already been fixed and merged into Hive 4.0.
However, because that fix targets the 4.0.0 codebase, there is no patch for the 3.x line, and a direct cherry-pick produces a large number of conflicts. The fix therefore has to be ported by hand, along these lines: add a new field to Operation:
(Figure 1: the new queryId field added to Operation)
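In outline, the change looks like the following sketch (hypothetical, simplified; the real Operation class carries far more state, and the field/getter names here only follow the intent of the HIVE-22275 patch):

```java
// Sketch: each Operation now owns its queryId instead of reading it
// from the shared session configuration.
public class Operation {
    private final String queryId;   // new Operation-level field

    public Operation(String queryId) {
        this.queryId = queryId;     // assigned once, when the operation is created
    }

    public String getQueryId() {    // used by OperationManager during cleanup
        return queryId;
    }

    public static void main(String[] args) {
        Operation op1 = new Operation("query-1");
        Operation op2 = new Operation("query-2");
        System.out.println(op1.getQueryId()); // query-1 survives op2's creation
        System.out.println(op2.getQueryId());
    }
}
```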

Remove the Query Id from the HiveSession level, store it at the Operation level, and update the Query Id getters and setters accordingly:

(Figure 2: updated Query Id getter/setter logic)
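With that change, the cleanup path can key the removal on the operation's own id. A minimal sketch (hypothetical, simplified names) of the corrected behavior:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the corrected cleanup path: OperationManager removes the
// map entry using the Operation's own queryId, not the session conf.
public class FixedCleanupDemo {
    static class Operation {
        final String queryId;
        Operation(String queryId) { this.queryId = queryId; }
        String getQueryId() { return queryId; }
    }

    static Map<String, Operation> queryIdOperation = new HashMap<>();

    static Operation newOperation(String queryId) {
        Operation op = new Operation(queryId);
        queryIdOperation.put(queryId, op);
        return op;
    }

    static void closeOperation(Operation op) {
        // key comes from the operation itself, so every entry is removable
        queryIdOperation.remove(op.getQueryId());
    }

    public static void main(String[] args) {
        Operation op1 = newOperation("query-1");
        Operation op2 = newOperation("query-2");
        closeOperation(op1);
        closeOperation(op2);
        System.out.println(queryIdOperation.size()); // prints 0 -> no leak
    }
}
```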
Rebuild Hive and replace hive-service-x.x.x.jar on the existing cluster with the patched jar to apply the fix.
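The rebuild-and-replace step might look roughly like this (a sketch only: the module path, jar version, and install directory are assumptions that depend on your Hive source tree and deployment layout):

```shell
# Build only the hive-service module from the patched 3.x source tree
cd hive-src
mvn clean package -pl service -DskipTests

# Back up the jar currently deployed on the HS2 node, then swap in the new one
cp $HIVE_HOME/lib/hive-service-3.1.2.jar /tmp/hive-service-3.1.2.jar.bak
cp service/target/hive-service-3.1.2.jar $HIVE_HOME/lib/

# Restart HiveServer2 so the patched jar is picked up
$HIVE_HOME/bin/hive --service hiveserver2 &
```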
