jid=everdc&segmentTxId=145316887&storageInfo=-63%3A575476508%3A0%3ACID-46d57eba-3c5f-413b-bca6-565ab12fc364’ to transaction ID 144656573
2022-10-12 05:39:15,781 INFO namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(176)) - Fast-forwarding stream ‘http://everdc6:8480/getJournal?jid=everdc&segmentTxId=145316887&storageInfo=-63%3A575476508%3A0%3ACID-46d57eba-3c5f-413b-bca6-565ab12fc364’ to transaction ID 144656573
2022-10-12 05:39:33,207 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 313/391944 transactions completed. (0%)
2022-10-12 05:39:50,392 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 1254/391944 transactions completed. (0%)
2022-10-12 05:40:07,677 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 6088/391944 transactions completed. (2%)
2022-10-12 05:40:25,628 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 11427/391944 transactions completed. (3%)
2022-10-12 05:40:43,063 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 15076/391944 transactions completed. (4%)
2022-10-12 05:41:00,365 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 18542/391944 transactions completed. (5%)
2022-10-12 05:41:17,768 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 21752/391944 transactions completed. (6%)
2022-10-12 05:41:34,993 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 25132/391944 transactions completed. (6%)
2022-10-12 05:41:56,454 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 28480/391944 transactions completed. (7%)
2022-10-12 05:42:13,742 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 29326/391944 transactions completed. (7%)
2022-10-12 05:42:35,049 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 32871/391944 transactions completed. (8%)
2022-10-12 05:42:52,222 INFO namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(266)) - replaying edit log: 33456/391944 transactions completed. (9%)
2022-10-12 05:43:13,674 ERROR namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(221)) - Got error reading edit log input stream http://everdc6:8480/getJournal?jid=everdc&segmentTxId=145316887&storageInfo=-63%3A575476508%3A0%3ACID-46d57eba-3c5f-413b-bca6-565ab12fc364; failing over to edit log http://everdc5:8480/getJournal?jid=everdc&segmentTxId=145316887&storageInfo=-63%3A575476508%3A0%3ACID-46d57eba-3c5f-413b-bca6-565ab12fc364
java.io.IOException: got unexpected exception GC overhead limit exceeded
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:4600)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:203)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:188)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:188)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:898)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:753)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:329)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:983)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:686)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.google.protobuf.CodedInputStream.<init>(CodedInputStream.java:573)
at com.google.protobuf.CodedInputStream.newInstance(CodedInputStream.java:55)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:199)
at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
at org.apache.hadoop.hdfs.protocol.proto.XAttrProtos
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.readXAttrsFromEditLog(FSEditLogOp.java:414)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$3800(FSEditLogOp.java:144)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$MkdirOp.readFields(FSEditLogOp.java:1683)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.decodeOp(FSEditLogOp.java:4697)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:4583)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:203)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:188)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:188)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:898)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:753)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:329)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:983)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:686)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
2022-10-12 05:43:13,676 INFO namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(176)) - Fast-forwarding stream ‘http://everdc5:8480/getJournal?jid=everdc&segmentTxId=145316887&storageInfo=-63%3A575476508%3A0%3ACID-46d57eba-3c5f-413b-bca6-565ab12fc364’ to transaction ID 145352425
2022-10-12 05:44:50,753 INFO namenode.FSImage (FSImage.java:updateCountForQuota(930)) - Initializing quota with 4 thread(s)
2022-10-12 05:46:58,067 INFO namenode.FSNamesystem (FSNamesystemLock.java:writeUnlock(259)) - FSNamesystem write lock held for 1564861 ms via
java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:950)
org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:259)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1511)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1014)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:686)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
Number of suppressed write-lock reports: 0
Longest write-lock held interval: 1564861
2022-10-12 05:46:58,067 ERROR namenode.NameNode (NameNode.java:main(1587)) - Failed to start namenode.
java.lang.OutOfMemoryError
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
at java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:720)
at org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuota(FSImage.java:937)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:753)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:329)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:983)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:686)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
Caused by: java.lang.OutOfMemoryError
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
at java.util.concurrent.ForkJoinTask.getException(ForkJoinTask.java:930)
at java.util.concurrent.ForkJoinTask.invokeAll(ForkJoinTask.java:852)
at org.apache.hadoop.hdfs.server.namenode.FSImage$InitQuotaTask.compute(FSImage.java:983)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.hadoop.hdfs.server.namenode.INode.getQuotaCounts(INode.java:496)
at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getQuotaCounts(INodeDirectory.java:170)
at org.apache.hadoop.hdfs.server.namenode.INode.isQuotaSet(INode.java:504)
at org.apache.hadoop.hdfs.server.namenode.FSImage$InitQuotaTask.compute(FSImage.java:986)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
at java.util.concurrent.ForkJoinTask.invokeAll(ForkJoinTask.java:843)
at org.apache.hadoop.hdfs.server.namenode.FSImage$InitQuotaTask.compute(FSImage.java:983)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
at java.util.concurrent.ForkJoinTask.invokeAll(ForkJoinTask.java:843)
at org.apache.hadoop.hdfs.server.namenode.FSImage$InitQuotaTask.compute(FSImage.java:983)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
at java.util.concurrent.ForkJoinTask.invokeAll(ForkJoinTask.java:843)
at org.apache.hadoop.hdfs.server.namenode.FSImage$InitQuotaTask.compute(FSImage.java:983)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
at java.util.concurrent.ForkJoinTask.invokeAll(ForkJoinTask.java:843)
at org.apache.hadoop.hdfs.server.namenode.FSImage$InitQuotaTask.compute(FSImage.java:983)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
at java.util.concurrent.ForkJoinTask.invokeAll(ForkJoinTask.java:843)
at org.apache.hadoop.hdfs.server.namenode.FSImage$InitQuotaTask.compute(FSImage.java:983)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
2022-10-12 05:46:58,068 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2022-10-12 05:46:58,070 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at everdc1/10.200.106.1
************************************************************/
The error is "GC overhead limit exceeded", i.e. the JVM ran out of GC headroom, so the first idea was to increase the heap size in the configuration.
After restarting the NameNode, the same error appeared again.
Re-analyzing the failure: restarting the NameNode means loading the fsimage and replaying the edit log, and this cluster holds a large amount of data. The fsimage most likely contains too many namespace objects, which is what pushes GC overhead past the limit.
The fsimage is one of the key files the NameNode maintains: it records all directory and file metadata for the entire HDFS namespace. For files it stores block descriptions, modification time, access time, and so on; for directories it stores modification time, access-control information, and so on.
Next, try changing the NameNode's heap settings again:
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} -Xmx25600m $HADOOP_NAMENODE_OPTS"
To size the heap properly, first measure how much memory the fsimage contents need, using the commands below.
1. Inspect the fsimage file
Run the following command (it takes a while, so be patient):
./hdfs oiv -p XML -printToScreen -i /data/hadoop/hd_space/dfs/name/current/fsimage_0000000000144656572 -o /tmp/a
Once that finishes, run:
cat /tmp/a | egrep "<inode>|<block>" | wc -l | awk '{printf "Objects=%d : Suggested Xms=%0dm Xmx=%0dm\n", $1, (($1 / 1000000 )*1024), (($1 / 1000000 )*1024)}'
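The awk step simply applies a rule of thumb of roughly 1024 MB of heap per one million namespace objects (inodes plus blocks). A standalone sketch of the same arithmetic, using a made-up object count (substitute the real count produced by the pipeline above):

```shell
# Hypothetical object count; in practice this is the number produced by
# the egrep | wc -l pipeline over the fsimage XML dump.
objects=22000000

# Same sizing rule as the awk one-liner: ~1024 MB per 1,000,000 objects.
echo "$objects" | awk '{printf "Objects=%d : Suggested Xms=%dm Xmx=%dm\n", $1, ($1 / 1000000) * 1024, ($1 / 1000000) * 1024}'
# → Objects=22000000 : Suggested Xms=22528m Xmx=22528m
```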
3. Check the file size
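The screenshot of this step did not survive; on the cluster, something like the following would show the on-disk size of the fsimage from step 1. The sketch below uses a placeholder file so it can run anywhere:

```shell
# Placeholder standing in for the real fsimage; on the cluster you would
# point ls at /data/hadoop/hd_space/dfs/name/current/fsimage_0000000000144656572.
fsimage=/tmp/fsimage_placeholder
dd if=/dev/zero of="$fsimage" bs=1024 count=4 2>/dev/null

# -lh prints a human-readable size (the real file was far larger).
ls -lh "$fsimage"
```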
The suggested size here is 22840 MB, so raise the value in the configuration; it was previously set to 10240 MB.
On the Guizhou cluster the configuration file is at:
/opt/apps/hadoop_everdc/etc/hadoop/hadoop-env.sh
Check the current configuration:
cat /opt/apps/hadoop_everdc/etc/hadoop/hadoop-env.sh
5. Stop the old NameNode and start it again
-- Note: switch to the correct user first --
On the Guizhou cluster that is the hdfs user:
su hdfs
Then run:
/opt/apps/hadoop_everdc/sbin/hadoop-daemon.sh start namenode
After the restart, the NameNode came up normally.