Usage of the jmap command:
```
[hadoop@hadoop sbin]$ jmap
Usage:
    jmap [option] <pid>
        (to connect to running process)
    jmap [option] <executable <core>
        (to connect to a core file)
    jmap [option] [server_id@]<remote server IP or hostname>
        (to connect to remote debug server)

where <option> is one of:
    <none>               to print same info as Solaris pmap
    -heap                to print java heap summary
    -histo[:live]        to print histogram of java object heap; if the "live"
                         suboption is specified, only count live objects
    -permstat            to print permanent generation statistics
    -finalizerinfo       to print information on objects awaiting finalization
    -dump:<dump-options> to dump java heap in hprof binary format
                         dump-options:
                           live         dump only live objects; if not specified,
                                        all objects in the heap are dumped.
                           format=b     binary format
                           file=<file>  dump heap to <file>
                         Example: jmap -dump:live,format=b,file=heap.bin <pid>
    -F                   force. Use with -dump:<dump-options> <pid> or -histo
                         to force a heap dump or histogram when <pid> does not
                         respond. The "live" suboption is not supported
                         in this mode.
    -h | -help           to print this help message
    -J<flag>             to pass <flag> directly to the runtime system
```
jmap can analyze the memory usage of a running Java process, either on the local machine or on a remote one.
Common usages:
1. jmap pid
Result: in the listing below, the second column is the size of each mapped file and the third column is the file itself.
```
0x0000000000400000      7K  /home/hadoop/software/jdk1.7.0_67/bin/java
0x00007ff9ac9ac000    108K  /usr/lib64/libresolv-2.17.so
0x00007ff9acbc6000     26K  /usr/lib64/libnss_dns-2.17.so
0x00007ff9c819c000    112K  /home/hadoop/software/jdk1.7.0_67/jre/lib/amd64/libnet.so
0x00007ff9c83b3000     89K  /home/hadoop/software/jdk1.7.0_67/jre/lib/amd64/libnio.so
0x00007ff9d2e31000    120K  /home/hadoop/software/jdk1.7.0_67/jre/lib/amd64/libzip.so
0x00007ff9d304c000     56K  /usr/lib64/libnss_files-2.17.so
0x00007ff9d3258000    214K  /home/hadoop/software/jdk1.7.0_67/jre/lib/amd64/libjava.so
0x00007ff9d3483000     63K  /home/hadoop/software/jdk1.7.0_67/jre/lib/amd64/libverify.so
0x00007ff9d3691000     43K  /usr/lib64/librt-2.17.so
0x00007ff9d3899000   1114K  /usr/lib64/libm-2.17.so
0x00007ff9d3b9b000  14853K  /home/hadoop/software/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
0x00007ff9d4a0f000   2058K  /usr/lib64/libc-2.17.so
0x00007ff9d4dd0000     19K  /usr/lib64/libdl-2.17.so
0x00007ff9d4fd4000    103K  /home/hadoop/software/jdk1.7.0_67/lib/amd64/jli/libjli.so
0x00007ff9d51eb000    138K  /usr/lib64/libpthread-2.17.so
0x00007ff9d5407000    156K  /usr/lib64/ld-2.17.so
```
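Each line of this no-option output is "address  size  path" for one mapped shared object, so the sizes are easy to total up with a short script. This is a minimal sketch, not part of jmap itself; the sample lines are copied from the output above.

```python
# Sum the size column (e.g. "1114K") of jmap's shared-object listing.
# The three sample lines below are taken from the output shown above.
sample = """\
0x0000000000400000      7K  /home/hadoop/software/jdk1.7.0_67/bin/java
0x00007ff9d3899000   1114K  /usr/lib64/libm-2.17.so
0x00007ff9d3b9b000  14853K  /home/hadoop/software/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
"""

def total_mapped_kb(text):
    """Total the mapped size (in KB) over all 'address size path' lines."""
    total = 0
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[1].endswith("K"):
            total += int(parts[1][:-1])
    return total

print(total_mapped_kb(sample))  # 7 + 1114 + 14853 = 15974
```

Piping the full listing through such a script gives the total footprint of the libraries the JVM has mapped.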
2. jmap -heap pid (detailed heap usage)
```
[hadoop@hadoop sbin]$ jmap -heap 1819
Attaching to process ID 1819, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.65-b04

using thread-local object allocation.
Parallel GC with 2 thread(s)

Heap Configuration:
   MinHeapFreeRatio = 0
   MaxHeapFreeRatio = 100
   MaxHeapSize      = 536870912 (512.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 134217728 (128.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 135266304 (129.0MB)
   used     = 31579760 (30.116806030273438MB)
   free     = 103686544 (98.88319396972656MB)
   23.346361263777858% used
From Space:
   capacity = 22020096 (21.0MB)
   used     = 17629120 (16.81243896484375MB)
   free     = 4390976 (4.18756103515625MB)
   80.05923316592262% used
To Space:
   capacity = 22020096 (21.0MB)
   used     = 0 (0.0MB)
   free     = 22020096 (21.0MB)
   0.0% used
PS Old Generation
   capacity = 358088704 (341.5MB)
   used     = 8192 (0.0078125MB)
   free     = 358080512 (341.4921875MB)
   0.002287701317715959% used
PS Perm Generation
   capacity = 24641536 (23.5MB)
   used     = 24625520 (23.484725952148438MB)
   free     = 16016 (0.0152740478515625MB)
   99.93500405169547% used

3509 interned Strings occupying 266432 bytes.
```
- Heap Configuration: the JVM parameters that control how heap space is allocated
- Heap Usage: memory usage of each space (generation) within the heap
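The "% used" figures jmap prints are simply used/capacity for each space. A minimal Python sketch of the arithmetic, using the Eden Space numbers copied from the output above:

```python
# Recompute the "% used" figure that `jmap -heap` prints for a space.
def used_percent(used_bytes, capacity_bytes):
    """Heap-space utilisation as a percentage of capacity."""
    return 100.0 * used_bytes / capacity_bytes

# Values taken from the Eden Space section of the output above.
eden_capacity = 135266304   # capacity = 135266304 (129.0MB)
eden_used = 31579760        # used     = 31579760 (30.116806030273438MB)
print(f"{used_percent(eden_used, eden_capacity):.6f}% used")  # ~23.346361% used
```

The same division reproduces the From Space (~80.06%) and Perm Generation (~99.94%) figures, which is how one can quickly spot that the permanent generation here is nearly full.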
3. Dump the heap to disk (can be executed while the process is running)
jmap -dump:file=heap.dump.bin.001 1819
```
[hadoop@hadoop sbin]$ jmap -dump:file=heap.dump.bin.001 1819
Dumping heap to /home/hadoop/software/spark-1.2.0-bin-hadoop2.4/sbin/heap.dump.bin.001 ...
Heap dump file created
[hadoop@hadoop sbin]$ ls -lh
total 108M
-rw------- 1 hadoop hadoop 42M Feb 27 05:02 headop.bin
-rw------- 1 hadoop hadoop 66M Feb 27 05:21 heap.dump.bin.001
```
The dump completes very quickly and the file is 66M. The next question is how to analyze this dump file.
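Before handing a dump to an analyzer, it can be worth a quick sanity check: hprof binary heap dumps begin with a NUL-terminated version string such as "JAVA PROFILE 1.0.2". A minimal sketch that reads this magic (demonstrated on a synthetic header so it runs without a real dump; the file name `fake.hprof` is made up for the demo):

```python
# Read the hprof magic: the file starts with a NUL-terminated version
# string ("JAVA PROFILE 1.0.1" or "JAVA PROFILE 1.0.2").
def hprof_version(path):
    with open(path, "rb") as f:
        header = f.read(32)          # the version string fits well within 32 bytes
    magic, _sep, _rest = header.partition(b"\x00")
    return magic.decode("ascii", errors="replace")

# Demonstrate on a synthetic header file (hypothetical name, for the demo only).
with open("fake.hprof", "wb") as f:
    f.write(b"JAVA PROFILE 1.0.2\x00")
print(hprof_version("fake.hprof"))  # JAVA PROFILE 1.0.2
```

Running the same function against `heap.dump.bin.001` confirms the dump is a well-formed hprof file before spending time loading it into jhat.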
4. Use jhat
```
[hadoop@hadoop sbin]$ jhat
ERROR: No arguments supplied
Usage:  jhat [-stack <bool>] [-refs <bool>] [-port <port>] [-baseline <file>] [-debug <int>] [-version] [-h|-help] <file>

    -J<flag>          Pass <flag> directly to the runtime system. For
                      example, -J-mx512m to use a maximum heap size of 512MB
    -stack false:     Turn off tracking object allocation call stack.
    -refs false:      Turn off tracking of references to objects
    -port <port>:     Set the port for the HTTP server.  Defaults to 7000
    -exclude <file>:  Specify a file that lists data members that should
                      be excluded from the reachableFrom query.
    -baseline <file>: Specify a baseline object dump.  Objects in
                      both heap dumps with the same ID and same class will
                      be marked as not being "new".
    -debug <int>:     Set debug level.
                        0:  No debug output
                        1:  Debug hprof file parsing
                        2:  Debug hprof file parsing, no server
    -version          Report version number
    -h|-help          Print this help and exit
    <file>            The file to read

For a dump file that contains multiple heap dumps,
you may specify which dump in the file
by appending "#<number>" to the file name, i.e. "foo.hprof#3".

All boolean options default to "true"

[hadoop@hadoop sbin]$ jhat heap.dump.bin.001    # analyze the heap dump file
Reading from heap.dump.bin.001...
Dump file created Fri Feb 27 05:21:39 EST 2015
Snapshot read, resolving...
Resolving 633856 objects...
Chasing references, expect 126 dots...
Eliminating duplicate references...
Snapshot resolved.
Started HTTP server on port 7000
Server is ready.
```
Once the server is ready, open the HTTP server on port 7000 in a browser to inspect the space occupied by each object.
5. Show statistics for the permanent generation
jmap -permstat pid
```
[hadoop@hadoop sbin]$ jmap -permstat 1819
Attaching to process ID 1819, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.65-b04
finding class loader instances ..done.
computing per loader stat ..done.
please wait.. computing liveness.liveness analysis may be inaccurate ...
class_loader         classes  bytes     parent_loader        alive?  type

<bootstrap>          1121     6558096   null                 live    <internal>
0x00000000fd818900   1        3048      0x00000000fd662860   dead    sun/reflect/DelegatingClassLoader@0x00000000d7fcfc00
0x00000000fd92afc0   0        0         0x00000000fd662860   dead    java/util/ResourceBundle$RBClassLoader@0x00000000d8614b60
0x00000000fd818940   1        3048      null                 dead    sun/reflect/DelegatingClassLoader@0x00000000d7fcfc00
0x00000000fd6628b0   0        0         null                 dead    sun/misc/Launcher$ExtClassLoader@0x00000000d8135e68
0x00000000fd818980   1        3048      0x00000000fd662860   dead    sun/reflect/DelegatingClassLoader@0x00000000d7fcfc00
0x00000000fd662860   3051     19694968  0x00000000fd6628b0   dead    sun/misc/Launcher$AppClassLoader@0x00000000d81935f0
0x00000000fd8189c0   1        1888      null                 dead    sun/reflect/DelegatingClassLoader@0x00000000d7fcfc00
0x00000000fd8188c0   1        3048      0x00000000fd662860   dead    sun/reflect/DelegatingClassLoader@0x00000000d7fcfc00

total = 9            4177     26267144  N/A                  alive=1, dead=8  N/A
```
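The "total" line is just an aggregate over the per-loader rows, which makes it easy to cross-check. A minimal sketch, with the classes/bytes/alive values copied from the table above:

```python
# Cross-check jmap -permstat's total line: sum classes and bytes per
# class loader, and count live vs. dead loaders.
loaders = [
    # (classes, bytes, alive?)   -- rows copied from the table above
    (1121, 6558096, "live"),     # <bootstrap>
    (1, 3048, "dead"),
    (0, 0, "dead"),
    (1, 3048, "dead"),
    (0, 0, "dead"),              # ExtClassLoader
    (1, 3048, "dead"),
    (3051, 19694968, "dead"),    # AppClassLoader holds most of the classes
    (1, 1888, "dead"),
    (1, 3048, "dead"),
]
total_classes = sum(c for c, _, _ in loaders)
total_bytes = sum(b for _, b, _ in loaders)
alive = sum(1 for _, _, a in loaders if a == "live")
# Matches the reported line: total = 9  4177  26267144  alive=1, dead=8
print(len(loaders), total_classes, total_bytes, alive, len(loaders) - alive)
```

Note that nearly all the class bytes belong to the AppClassLoader instance, which is the usual first place to look when the permanent generation fills up.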