Memory Leak in NodeJS
I have done a lot of memory leak investigation for Java, but this is the first time for NodeJS.
Compare the versions of TSC
> sudo npm install -g typescript@2.1.4
> tsc --version
Version 2.1.4
My current one is
> tsc --version
Version 3.2.2
> node --version
v10.14.2
In my package.json file, it starts the NodeJS application like this:
"start": "node build/src/index.js",
So I follow the docs and add --inspect to that:
"start": "node --inspect build/src/index.js",
Once it starts, it will open a debug port:
> npm run start
> [email protected] start /Users/hluo/company/code/sillycat.contactManager
> node --inspect build/src/index.js
Debugger listening on ws://127.0.0.1:9229/50c12fb6-488b-4456-b96a-a2f4e1f92582
For help, see: https://nodejs.org/en/docs/inspector
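By default the inspector only listens on 127.0.0.1:9229. If the application runs on a remote machine, the standard --inspect=[host:]port form can bind it elsewhere; the host and port below are just example values, not my setup:
"start": "node --inspect=0.0.0.0:9229 build/src/index.js",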
In the Chrome browser, we can open a tab and visit
chrome://inspect
There we can open the dedicated DevTools for that NodeJS process and take heap snapshots to see what uses most of the memory.
In my case, process.nextTick() happens a lot and uses most of the memory:
https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/
https://www.oschina.net/translate/understanding-process-next-tick
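A quick sketch of why process.nextTick() matters here: its callbacks run before the event loop is allowed to continue, even ahead of timers that are already due, so they can pile up faster than other work gets done.
console.log('start');
process.nextTick(() => console.log('nextTick callback'));
setTimeout(() => console.log('timeout callback'), 0);
console.log('end');
// prints: start, end, nextTick callback, timeout callback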
After reading a lot of related documents, I guess this may be the reason.
I am using NodeJS to consume messages from RabbitMQ. Fetching messages is quick and powerful, but for each message we need to do CRUD operations on the DB and ElasticSearch, which is much heavier; a simplified sketch of the consumer follows below.
https://www.rabbitmq.com/tutorials/tutorial-two-javascript.html
https://mariuszwojcik.wordpress.com/2014/05/19/how-to-choose-prefetch-count-value-for-rabbitmq/
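My consumer roughly follows the pattern from the tutorial above. Here is a simplified sketch using amqplib; the connection URL, queue name and handleMessage() are placeholders, not my real code:

import * as amqp from 'amqplib';

async function handleMessage(content: Buffer): Promise<void> {
  // placeholder for the heavy part: CRUD on the DB and ElasticSearch
}

async function startConsumer(): Promise<void> {
  const conn = await amqp.connect('amqp://localhost');   // placeholder URL
  const channel = await conn.createChannel();
  const queue = 'contact.events';                         // placeholder queue name
  await channel.assertQueue(queue, { durable: true });

  // Manual ack mode: the message is only ACKed after the heavy work is done
  await channel.consume(queue, async (msg) => {
    if (msg === null) { return; }
    await handleMessage(msg.content);
    channel.ack(msg);
  }, { noAck: false });
}

Without a prefetch limit, RabbitMQ keeps pushing messages to this consumer as fast as it can, no matter how far behind the handlers are.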
So there is a time window where the NodeJS app has received a lot of messages, but the callbacks that ACK them have not run yet. We queue up a lot of nextTick callbacks and hold all these messages in memory, and these objects cannot be GCed.
Then I opened the RabbitMQ management UI and saw that the number of messages in flight was large. Every time the messages in flight grow to a large number, I get OUT OF MEMORY in the NodeJS console, and the last few GC log lines show that NodeJS cannot reclaim any free memory after GC.
And sometimes the 'deliver/get' rate reaches 2000 msg/s while the 'ack' rate is still around 15 msg/s.
I think that means the consumer just fetches too many messages and cannot handle them in time.
So I changed my NodeJS code:
await channel.prefetch(10000); // limit to 10000 messages on the fly
Limiting the messages on the fly to 10000 does not slow down consumption from RabbitMQ, but it keeps the 'deliver/get' and 'ack' rates stable and roughly equal.
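For context, the prefetch call goes on the channel before channel.consume(), so RabbitMQ stops delivering new messages once 10000 of them are unacked. Here queue is the placeholder from the sketch above and onMessage stands for the async handler that acks each message when done:

// Inside startConsumer() from the sketch above, right after createChannel():
await channel.prefetch(10000);   // at most 10000 unacked messages delivered to this consumer
await channel.consume(queue, onMessage, { noAck: false });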
References:
https://sillycat.iteye.com/blog/772289
https://unix.stackexchange.com/questions/10106/orphaned-connections-in-close-wait-state
https://www.shellhacks.com/kill-tcp-connections-close-wait-state/
https://marmelab.com/blog/2018/04/03/how-to-track-and-fix-memory-leak-with-nodejs.html
https://mariuszwojcik.wordpress.com/2014/05/19/how-to-choose-prefetch-count-value-for-rabbitmq/