The Missing Memory on CentOS

Today the Nagios monitoring system fired an alert: memory usage on one of the PC servers had climbed past 60%.

SSH into the box and run free -g:

             total       used       free     shared    buffers     cached
Mem:            31         24          6          0          0          2
-/+ buffers/cache:         21          9
Swap:            0          0          0

After subtracting buffers/cache, used is 21 GB and free is only 9 GB. Why is it that high?
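If the GB rounding of free -g hides detail, you can recompute the application-level usage yourself: used minus buffers minus cached. A minimal sketch, assuming the old procps free layout shown above:

free -m | awk '/^Mem:/ {printf "app used: %d MB (used %d - buffers %d - cached %d)\n", $3-$6-$7, $3, $6, $7}'

Here that works out to roughly 24 - 0 - 2 = 22 GB, which matches the -/+ buffers/cache line up to rounding.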

Run top and press Shift+M to sort processes by resident memory:

  

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND    
11509 tomcat    20   0 19.7g 3.7g  12m S  2.3 11.9   8:56.05 jsvc       
 1562 root      20   0  244m 7532 1096 S  0.0  0.0   0:15.62 rsyslogd   
 1610 haldaemo  20   0 38176 6744 3448 S  0.0  0.0   0:49.03 hald       
36266 root      20   0 98292 4076 3088 S  0.0  0.0   0:00.16 sshd       
39074 root      20   0 98292 4068 3088 S  0.0  0.0   0:00.17 sshd       
38019 root      20   0 98292 4052 3088 S  0.0  0.0   0:00.08 sshd       
43550 root      20   0 98292 4048 3080 S  0.0  0.0   0:00.03 sshd       
 1764 postfix   20   0 81520 3428 2548 S  0.0  0.0   0:04.54 qmgr       
 1749 root      20   0 81272 3420 2516 S  0.0  0.0   0:22.90 master     
42335 postfix   20   0 81352 3380 2508 S  0.0  0.0   0:00.00 pickup     
 1585 root      20   0  184m 3332 2448 S  0.0  0.0   0:00.01 cupsd      
39076 root      20   0  105m 1996 1544 S  0.0  0.0   0:00.09 bash       
36268 root      20   0  105m 1988 1540 S  0.0  0.0   0:00.08 bash       
38021 root      20   0  105m 1940 1516 S  0.0  0.0   0:00.04 bash    

Nothing is using much memory at all; Tomcat's resident set is under 4 GB.
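To double-check that no process is quietly holding memory, sum the resident set size of every process. This is only a rough upper bound (shared pages get counted once per process), but it is enough to show the gap. A sketch:

ps -eo rss= | awk '{sum += $1} END {printf "total RSS: %.1f GB\n", sum/1024/1024}'

The total should land around 4 GB here, nowhere near the 21 GB that free reports as used.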


Take a look at cat /proc/meminfo:

MemTotal:       32827220 kB
MemFree:         7084524 kB
Buffers:          606424 kB
Cached:          2468112 kB
SwapCached:            0 kB
Active:          5700560 kB
Inactive:        1282020 kB
Active(anon):    3908224 kB
Inactive(anon):       16 kB
Active(file):    1792336 kB
Inactive(file):  1282004 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                16 kB
Writeback:             0 kB
AnonPages:       3916216 kB
Mapped:            21864 kB
Shmem:               212 kB
Slab:           18510960 kB
SReclaimable:   18469956 kB
SUnreclaim:        41004 kB
KernelStack:        4688 kB
PageTables:        11668 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16413608 kB
Committed_AS:    9593256 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      333876 kB
VmallocChunk:   34342501180 kB
HardwareCorrupted:     0 kB
AnonHugePages:   3833856 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        5056 kB
DirectMap2M:     2045952 kB
DirectMap1G:    31457280 kB

Slab is absurdly high: 18510960 kB, about 17.6 GB, and almost all of it is SReclaimable. That, plus Tomcat's ~3.7 GB of AnonPages, accounts for the ~21 GB reported as used.
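The fields that matter can be pulled out directly. SReclaimable is slab memory (mostly dentry and inode caches) that the kernel can free under memory pressure, so this memory is not really lost:

grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo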



So what exactly is in the slab cache? Check with slabtop --once:

slabtop --once
Active / Total Objects (% used)    : 225920064 / 226193412 (99.9%)
 Active / Total Slabs (% used)      : 11556364 / 11556415 (100.0%)
 Active / Total Caches (% used)     : 110 / 194 (56.7%)
 Active / Total Size (% used)       : 43278793.73K / 43315465.42K (99.9%)
 Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
221416340 221416039   3%    0.19K 11070817       20  44283268K dentry                 
1123443 1122739  99%    0.41K 124827        9    499308K fuse_request           
1122320 1122180  99%    0.75K 224464        5    897856K fuse_inode             
761539 754272  99%    0.20K  40081       19    160324K vm_area_struct         
437858 223259  50%    0.10K  11834       37     47336K buffer_head            
353353 347519  98%    0.05K   4589       77     18356K anon_vma_chain         
325090 324190  99%    0.06K   5510       59     22040K size-64                
146272 145422  99%    0.03K   1306      112      5224K size-32                
137625 137614  99%    1.02K  45875        3    183500K nfs_inode_cache        
128800 118407  91%    0.04K   1400       92      5600K anon_vma               
 59101  46853  79%    0.55K   8443        7     33772K radix_tree_node        
 52620  52009  98%    0.12K   1754       30      7016K size-128               
 19359  19253  99%    0.14K    717       27      2868K sysfs_dir_cache        
 10240   7746  75%    0.19K    512       20      2048K filp  

Damn, the dentry cache is off the charts: over 221 million dentry objects, dwarfing every other slab cache.
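The kernel exposes the same numbers outside of slabtop, which is handy for monitoring. A sketch using standard procfs entries:

# nr_dentry, nr_unused, age_limit, want_pages, plus two unused fields
cat /proc/sys/fs/dentry-state

# the raw per-cache line that slabtop reads from
grep '^dentry' /proc/slabinfo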


My guess is that Tomcat doing heavy operations on local files has caused the kernel to cache an enormous number of dentries. The kernel documentation for the vm.vfs_cache_pressure tunable says:

At the default value of vfs_cache_pressure = 100 the kernel will attempt to reclaim dentries and inodes at a “fair” rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.
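Based on that, the usual remedies are to raise vm.vfs_cache_pressure so the kernel reclaims dentries and inodes more aggressively, and/or to drop the reclaimable caches once to get the memory back immediately. A sketch of the standard knobs (the value 200 is an example, not something tuned for this incident):

# prefer reclaiming dentry/inode caches over pagecache (default is 100)
sysctl -w vm.vfs_cache_pressure=200
# persist across reboots
echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf

# one-off: flush dirty pages, then drop reclaimable dentries and inodes
sync
echo 2 > /proc/sys/vm/drop_caches

Dropping caches is safe, but it will cost some extra I/O while the caches warm back up.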

References:
https://major.io/2008/12/03/reducing-inode-and-dentry-caches-to-keep-oom-killer-at-bay/

http://serverfault.com/questions/561350/unusually-high-dentry-cache-usage
