JVM Internals: Memory Overview

Introduction

This article was originally published on my own blog a few years ago, but I'd rather continue the series here, so I'm re-posting it.

What is this post about?

The aim of this post is to give an overview of the heap and non-heap memory regions of the JVM, a small introduction to each, and to show what happens when heap/non-heap memory issues occur in a JVM running in a docker container. I assume some basic knowledge of Java, the JVM, docker and linux. You will need docker and openjdk 8 installed on a linux system (I used ubuntu 16.04 to write this post).

Containerizing a java app

First, I'll keep things very simple. Let's build a program that prints "Hello world!" and waits forever:

// HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) throws Exception {
        System.out.println("Hello world!");
        System.in.read();
    }
}

Now, a simple Dockerfile:

FROM openjdk:8-jdk
ADD HelloWorld.java .
RUN javac HelloWorld.java
ENTRYPOINT java HelloWorld

With this, we can build and start our application in a container:

$ docker build --tag jvm-test . 
$ docker run -ti --rm --name hello-jvm jvm-test
Hello world!

When done, you can kill the container with CTRL-C. Right, now we have a simple program running, what can we do with it? Let's analyze the JVM.

Basic JVM analysis

Let's get a list of what objects are on the heap in our application. First, exec into the container (assuming it is still running from above) and get the PID of the JVM process:

$ docker exec -ti hello-jvm bash
root@5f20ae043968:/ $ ps aux|grep [j]ava
root         1  0.1  0.0   4292   708 pts/0    Ss+  12:27   0:00 /bin/sh -c java HelloWorld
root         7  0.2  0.1 6877428 23756 pts/0   Sl+  12:27   0:00 java HelloWorld

As you can see above, the PID is 7. To help with analysis, openjdk comes with many tools. jmap is one such tool, which allows us to view heap information about a JVM process. To get a list of objects, their instance counts, and the space they occupy on the heap, you can use jmap -histo:

root@5f20ae043968:/ $ jmap -histo 7

 num     #instances         #bytes  class name
----------------------------------------------
   1:           422        2256744  [I
   2:          1600         141520  [C
   3:           364          58560  [B
   4:           470          53544  java.lang.Class
   5:          1204          28896  java.lang.String
   6:           551          28152  [Ljava.lang.Object;
   7:           110           7920  java.lang.reflect.Field
   8:           258           4128  java.lang.Integer
   9:            97           3880  java.lang.ref.SoftReference
  10:           111           3552  java.util.Hashtable$Entry
  11:           133           3192  java.lang.StringBuilder
  12:             8           3008  java.lang.Thread
  13:            75           2400  java.io.File
  14:            54           2080  [Ljava.lang.String;
  15:            38           1824  sun.util.locale.LocaleObjectCache$CacheEntry
  16:            12           1760  [Ljava.util.Hashtable$Entry;
  17:            55           1760  java.util.concurrent.ConcurrentHashMap$Node
  18:            27           1728  java.net.URL
  19:            20           1600  [S
  ...
  222:             1             16  sun.reflect.ReflectionFactory
Total          6583        2642792

As you can see above, for our simple HelloWorld program there are 6583 instances of a mix of 222 different classes, taking up 2.6MB of heap! When I first saw this it raised a lot of questions: what is [I, and why is there both a java.lang.String and a [Ljava.lang.String?

What are all these classes?

The single letter class names you see above are all documented under Class.getName().


Encoding      Element Type
Z             boolean
B             byte
C             char
L<className>; class or interface
D             double
F             float
I             int
J             long
S             short

If you look back at the jmap output, the first few entries are all prefixed with [, for example [I. A [ denotes a one-dimensional array of its type: [I is an array of int, e.g. new int[3]. [[I denotes a 2D array, new int[2][3], and so on. Also in the jmap output above is [Ljava.lang.String; which is simply an array of String, new String[3].

See for yourself:

// InstanceName.java
public class InstanceName {
    public static void main(String[] args) throws Exception {
      int[] is = new int[3];
      System.out.println(is.getClass().getName());

      boolean[][][] bs = new boolean[2][5][4];
      System.out.println(bs.getClass().getName());

      String[] ss = new String[3];
      System.out.println(ss.getClass().getName());
    } 
}

Compiling and running this, we get:

$ javac InstanceName.java 
$ java InstanceName 
[I
[[[Z
[Ljava.lang.String;

That was a quick overview of one way to see what is loaded on the heap. I mentioned other memory regions in the JVM earlier, what are these?

Heap and Non-Heap memory

The JVM can be split into many different memory segments (segments/regions/areas, I'll use these words interchangeably, though generally they mean the same thing). At a high level there are two segments: memory for objects on the heap, and non-heap memory.

If we zoom in, the heap can itself be discussed in terms of different regions: the Eden space (where most new objects are initially created), the Survivor spaces (where objects go if they survive a garbage collection (GC) of the Eden space), and the Old generation, which contains objects that have lived in the Survivor spaces for some time. Concretely, the heap contains initialized objects; for example, List s = new ArrayList(); will create an ArrayList object on the heap, and s will point to it.
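
To see these regions from inside a running JVM, here is a minimal sketch (my addition, not from the original post) that lists the heap memory pools via the java.lang.management API. The exact pool names depend on the garbage collector in use; with the default parallel collector on openjdk 8 you should see names like PS Eden Space, PS Survivor Space and PS Old Gen:

// HeapPools.java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Only print pools that belong to the heap (Eden, Survivor, Old Gen).
            if (pool.getType() == MemoryType.HEAP) {
                System.out.println(pool.getName() + ": " + pool.getUsage());
            }
        }
    }
}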

In the last section I covered what objects the HelloWorld program loads onto the heap, but what about non-heap memory?

Non-Heap Memory

If you have ever written a simple Java application with jdk8, you have probably heard of Metaspace. It is an example of non-heap memory: it is where the JVM stores class definitions, static variables, methods, classloaders and other metadata. But the JVM uses many other non-heap memory regions too. Let's list them!
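
Before we turn to native memory tracking, a quick sketch (again my addition, not part of the original post) that reports on Metaspace from inside the JVM; "Metaspace" is the pool name hotspot uses on jdk8, so treat that string as an assumption if you run elsewhere:

// MetaspaceInfo.java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceInfo {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        // Classes loaded so far; their metadata lives in Metaspace.
        System.out.println("Loaded classes: " + cl.getLoadedClassCount());
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) { // hotspot's jdk8 pool name
                System.out.println("Metaspace: " + pool.getUsage());
            }
        }
    }
}

Note that the management API only exposes a handful of pools; it says nothing about thread stacks, GC bookkeeping or JIT scratch space, which is exactly what native memory tracking is for.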

To do this, we first need to enable native memory tracking in our Java application:

FROM openjdk:8-jdk
ADD HelloWorld.java .
RUN cat HelloWorld.java
RUN javac HelloWorld.java
ENTRYPOINT java -XX:NativeMemoryTracking=detail HelloWorld

Now build and run again:

$ docker build --tag jvm-test . 
$ docker run -ti --rm --name hello-jvm jvm-test
Hello world!

In another terminal, exec into the container and get a summary of overall memory usage with jcmd's VM.native_memory command:

$ docker exec --privileged -ti hello-jvm bash
root@aa5ae77e1305:/ $ jcmd 
33 sun.tools.jcmd.JCmd
7 HelloWorld

root@aa5ae77e1305:/ $ jcmd 7 VM.native_memory summary
7:

Native Memory Tracking:

Total: reserved=5576143KB, committed=1117747KB
-                 Java Heap (reserved=4069376KB, committed=920064KB)
                            (mmap: reserved=4069376KB, committed=920064KB) 

-                     Class (reserved=1066121KB, committed=14217KB)
                            (classes #405)
                            (malloc=9353KB #178) 
                            (mmap: reserved=1056768KB, committed=4864KB) 

-                    Thread (reserved=20646KB, committed=20646KB)
                            (thread #21)
                            (stack: reserved=20560KB, committed=20560KB)
                            (malloc=62KB #110) 
                            (arena=23KB #40)

-                      Code (reserved=249655KB, committed=2591KB)
                            (malloc=55KB #346) 
                            (mmap: reserved=249600KB, committed=2536KB) 

-                        GC (reserved=159063KB, committed=148947KB)
                            (malloc=10383KB #129) 
                            (mmap: reserved=148680KB, committed=138564KB) 

-                  Compiler (reserved=134KB, committed=134KB)
                            (malloc=3KB #37) 
                            (arena=131KB #3)

-                  Internal (reserved=9455KB, committed=9455KB)
                            (malloc=9423KB #1417) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=1358KB, committed=1358KB)
                            (malloc=902KB #85) 
                            (arena=456KB #1)

-    Native Memory Tracking (reserved=161KB, committed=161KB)
                            (malloc=99KB #1559) 
                            (tracking overhead=61KB)

-               Arena Chunk (reserved=175KB, committed=175KB)
                            (malloc=175KB) 


There is quite a bit more than just the heap! Our hello world program just got more complex...

What does all this mean? 1

  • Java Heap : heap memory.
  • Class : is the Metaspace region we previously spoke about.
  • Thread : is the space taken up by this JVM's threads (see the sketch after this list).
  • Code : is the code cache - this is used by the JIT to cache compiled code.
  • GC : space used by the garbage collector.
  • Compiler : space used by the JIT when generating code.
  • Symbol : this is for symbols, which I believe field names and method signatures fall under. 2
  • Native Memory Tracking : memory used by the native memory tracker itself.
  • Arena Chunk : not entirely sure what this gets used for. 3
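
As a small cross-check of the Thread region, here is a sketch (my addition, not the original post's) that prints the live thread count from inside the JVM. It should land in the same ballpark as the thread #21 NMT reports, though the numbers can differ since NMT also counts VM-internal native threads such as GC workers. Dividing stack: reserved=20560KB by 21 threads also shows roughly 1MiB of stack per thread, the linux x86_64 default for -Xss:

// ThreadInfo.java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadInfo {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Live Java threads, daemons included (Reference Handler, Finalizer, ...).
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());
    }
}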

Practical memory issues

OK, so why should you care about any of the above? Let's create an application that eats a lot of memory.

// MemEater.java
import java.util.Vector;

public class MemEater {
    public static final void main(String[] args) throws Exception {
        Vector<byte[]> v = new Vector<byte[]>();
        for (int i = 0; i < 400; i++) {
            byte[] b = new byte[1048576]; // allocate 1 MiB
            v.add(b);
        }
        System.out.println(v.size());
        Thread.sleep(10000);
    }
}

This will create a Vector which contains 400 byte arrays of size 1 MiB 4, so it will use ~400MiB of memory on the heap. It will then sleep for 10 seconds so we can easily get the memory usage while it runs. Let's constrain the heap to 450MiB and run it locally so we can see the actual memory usage of the process. This is measured as RSS (Resident Set Size) 5; note that this value also counts pages mapped from shared memory, but we can gloss over that for this post.
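
As a side note, it can be useful to compare the RSS we are about to measure with what the JVM itself thinks the heap looks like. A minimal sketch (my addition, hypothetical): run it with the same -Xms/-Xmx flags and compare its output to the RSS column from ps:

// HeapReport.java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        // maxMemory() reflects -Xmx; the process RSS will be larger, since it
        // also covers non-heap regions like Metaspace, thread stacks and the
        // code cache.
        System.out.println("Heap used: " + used / (1024 * 1024) + " MiB");
        System.out.println("Heap max:  " + rt.maxMemory() / (1024 * 1024) + " MiB");
    }
}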

So let's compile our application, run it in the background, and grab its RSS:

$ javac MemEater.java 
$ nohup java -Xms450M -Xmx450M MemEater & 
$ ps aux | awk 'NR==1; /[M]emEater/' 
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
chaospie 18019 10.5  3.0 3138368 494448 pts/19 Sl   16:06   0:00 java -Xms450M -Xmx450M MemEater

In total, the JVM process needs around 500MiB to run (its RSS is 494448 KiB). What happens if we set the heap to less than it needs?

$ java -Xms400M -Xmx400M MemEater
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at MemEater.main(MemEater.java:7)

If you have used java (or any JVM language) before, you have more than likely come across this. It means that the JVM ran out of heap space to allocate objects. There are quite a few other types of OutOfMemoryError the JVM can throw in certain situations 6, but I won't go into more detail right now.

So now we know what happens if the JVM does not have enough heap space. What about when it runs in a container and hits the container's overall memory limit?

The simplest way to reproduce this is to package our MemEater program into a docker image and run it with less memory than it needs:

FROM openjdk:8-jdk
ADD MemEater.java .
RUN cat MemEater.java
RUN javac MemEater.java
ENTRYPOINT java -Xms450M -Xmx450M MemEater

Again, we need to build the image. This time, however, when we run it we limit the memory the container is allowed to use to 5M:

$ docker build --tag jvm-test .
$ docker run -ti --rm --memory 5M --memory-swappiness 0 --name memeater jvm-test
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Killed

After a few seconds you should see the output above: Killed. What happened? Before we dig into that, let's take a look at the --memory and --memory-swappiness flags used with docker.

Limiting memory with docker

Let's digress for a second and look at the two docker flags I used above for controlling memory settings 7. First, for these flags to work, your kernel needs to have cgroup support enabled and the following boot parameters set (assuming grub):

$ cat /etc/default/grub
...
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
...

--memory sets an upper bound on the total memory usage of all processes within the container; the minimum it can be set to is 4MiB, and above we set it to 5m, i.e. 5MiB. When set, the container cgroup's memory.limit_in_bytes is set to this value. I can't find the code that does this in docker, however we can see it as follows:

$ docker run -d --rm --memory 500M --memory-swappiness 0 --name memeater jvm-test 
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
812dbc3417eacdaf221c2f0c93ceab41f7626dca17f959298a5700358f931897
$ CONTAINER_ID=`docker ps --no-trunc | awk '{if (NR!=1) print $1}'`
$ echo $CONTAINER_ID
812dbc3417eacdaf221c2f0c93ceab41f7626dca17f959298a5700358f931897
$ cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.swappiness 
0
$ cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.limit_in_bytes
524288000

# Again, this time without limits to see the difference
$ docker run -d --rm --name memeater jvm-test 
d3e25423814ee1d79759aa87a83d416d63bdb316a305e390c2b8b98777484822
$ CONTAINER_ID=`docker ps --no-trunc | awk '{if (NR!=1) print $1}'`
$ echo $CONTAINER_ID
d3e25423814ee1d79759aa87a83d416d63bdb316a305e390c2b8b98777484822
$ cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.swappiness 
60
$ cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.limit_in_bytes
9223372036854771712

Note the warning; I'm not entirely sure why it appears, since it seems to work once swap support is enabled. You can ignore it for now.

--memory-swappiness sets the swappiness level of the cgroup hierarchy the container runs in. This maps directly to the cgroup setting memory.swappiness (at least in version 17.12 of docker 8) as seen above. Setting this to 0 disables swap for the container.

What kills the container?

So, why was the container killed? Let's run it again:

$ docker run -ti --rm --memory 5M --memory-swappiness 0 --name memeater jvm-test
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Killed

To see what caused this, run journalctl -k and search for oom-killer; you should see logs like the following:

$ journalctl -k
...
Feb 18 17:34:47  kernel: java invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null),  order=0, oom_score_adj=0
Feb 18 17:34:47  kernel: java cpuset=35f18c48d432510c76e76f2e7a962e64a1372de1dc4abd830417263907bea6e0 mems_allowed=0
Feb 18 17:34:47  kernel: CPU: 0 PID: 16432 Comm: java Tainted: G           OE   4.13.0-32-generic #35~16.04.1-Ubuntu
Feb 18 17:34:47  kernel: Hardware name: Dell Inc. Precision 5520/0R6JFH, BIOS 1.3.3 05/08/2017
Feb 18 17:34:47  kernel: Call Trace:
Feb 18 17:34:47  kernel:  dump_stack+0x63/0x8b
Feb 18 17:34:47  kernel:  dump_header+0x97/0x225
Feb 18 17:34:47  kernel:  ? mem_cgroup_scan_tasks+0xc4/0xf0
Feb 18 17:34:47  kernel:  oom_kill_process+0x219/0x420
Feb 18 17:34:47  kernel:  out_of_memory+0x11d/0x4b0
Feb 18 17:34:47  kernel:  mem_cgroup_out_of_memory+0x4b/0x80
Feb 18 17:34:47  kernel:  mem_cgroup_oom_synchronize+0x325/0x340
Feb 18 17:34:47  kernel:  ? get_mem_cgroup_from_mm+0xa0/0xa0
Feb 18 17:34:47  kernel:  pagefault_out_of_memory+0x36/0x7b
Feb 18 17:34:47  kernel:  mm_fault_error+0x8f/0x190
Feb 18 17:34:47  kernel:  ? handle_mm_fault+0xcc/0x1c0
Feb 18 17:34:47  kernel:  __do_page_fault+0x4c3/0x4f0
Feb 18 17:34:47  kernel:  do_page_fault+0x22/0x30
Feb 18 17:34:47  kernel:  ? page_fault+0x36/0x60
Feb 18 17:34:47  kernel:  page_fault+0x4c/0x60
Feb 18 17:34:47  kernel: RIP: 0033:0x7fdeafb0fe2f
Feb 18 17:34:47  kernel: RSP: 002b:00007fdeb0e1db80 EFLAGS: 00010206
Feb 18 17:34:47  kernel: RAX: 000000000001dff0 RBX: 00007fdea802d490 RCX: 00007fdeac17b010
Feb 18 17:34:47  kernel: RDX: 0000000000003bff RSI: 0000000000075368 RDI: 00007fdeac17b010
Feb 18 17:34:47  kernel: RBP: 00007fdeb0e1dc20 R08: 0000000000000000 R09: 0000000000000000
Feb 18 17:34:47  kernel: R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000000000
Feb 18 17:34:47  kernel: R13: 00007fdeb0e1db90 R14: 00007fdeafff851b R15: 0000000000075368
Feb 18 17:34:47  kernel: Task in /docker/35f18c48d432510c76e76f2e7a962e64a1372de1dc4abd830417263907bea6e0 killed as a result of limit of /docker/35f18c48d432510c76e76f2e7a962e64a137
Feb 18 17:34:47  kernel: memory: usage 5120kB, limit 5120kB, failcnt 69
Feb 18 17:34:47  kernel: memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
Feb 18 17:34:47  kernel: kmem: usage 1560kB, limit 9007199254740988kB, failcnt 0
Feb 18 17:34:47  kernel: Memory cgroup stats for /docker/35f18c48d432510c76e76f2e7a962e64a1372de1dc4abd830417263907bea6e0: cache:176KB rss:3384KB rss_huge:0KB shmem:144KB mapped_fil
Feb 18 17:34:47  kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Feb 18 17:34:47  kernel: [16360]     0 16360     1073      178       8       3        0             0 sh
Feb 18 17:34:47  kernel: [16426]     0 16426   609544     3160      47       4        0             0 java
Feb 18 17:34:47  kernel: Memory cgroup out of memory: Kill process 16426 (java) score 2508 or sacrifice child
Feb 18 17:34:47  kernel: Killed process 16426 (java) total-vm:2438176kB, anon-rss:3200kB, file-rss:9440kB, shmem-rss:0kB
...

The kernel OOM killer killed the application because it breached the cgroup memory limit. From the logs above: memory: usage 5120kB, limit 5120kB, failcnt 69 shows that it hit the limit, and Killed process 16426 (java) total-vm:2438176kB, anon-rss:3200kB, file-rss:9440kB, shmem-rss:0kB shows that it decided to kill process 16426, which is our java process. There is a lot of information in these logs to help you figure out why the OOM killer killed your process, but in our case we already know the reason: we breached the container memory limit.

With heap issues, when we get an OutOfMemoryError: Java heap space as the cause, we immediately know the heap is the problem: either we are allocating too much, or we need to increase the heap (actually identifying the root cause of the over-allocation in the code is another matter...). When the OOM killer kills our process it is not so simple: it could be direct buffers, unconstrained non-heap memory regions (Metaspace, code cache, etc.), or even another process within the container. There is a lot to cover when investigating such issues.
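
To make the direct buffer point concrete, here is a hedged sketch (my addition, not the original post's code). It allocates the same ~400MiB as MemEater but in native memory via direct ByteBuffers, so the allocation barely touches the heap, is invisible to -Xmx, and yet still counts against a container's memory limit (it can be capped separately with -XX:MaxDirectMemorySize):

// DirectEater.java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectEater {
    public static void main(String[] args) throws Exception {
        List<ByteBuffer> buffers = new ArrayList<>();
        for (int i = 0; i < 400; i++) {
            // Direct buffers live in native memory, not on the heap, so a tiny
            // -Xmx will not stop this loop from growing the process RSS.
            buffers.add(ByteBuffer.allocateDirect(1048576)); // 1 MiB each
        }
        System.out.println(buffers.size());
        Thread.sleep(10000);
    }
}

On that note, I'll wrap up this post.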

Conclusion

There is a lot more that could be said about heap/non-heap memory in the JVM, docker and the oom-killer, but I wanted to keep this short, as just a basic introduction to JVM memory usage. Hopefully, if you take anything away from this post, it is that when working with the JVM there is more to think about than just the heap, especially in memory-constrained containers.


  1. See NMT details. ↩

  2. This one I need to look up more in-depth, as I have not been able to find solid information on it. ↩

  3. Arena Chunk seems to be related to malloc arenas, will definitely look into this in-depth. ↩

  4. 1 MiB = 1024 KiB = 1048576 bytes. Why use MiB? Because MB is ambiguous and can mean 1000 KB or 1024 KB, whereas MiB is always 1024 KiB. ↩

  5. See this great answer for a description of RSS. ↩

  6. A detailed description of them can be found here. ↩

  7. The docker documentation on this subject is excellent - see resource constraints. ↩

  8. See docker memory swappiness. ↩

from: https://dev.to//wayofthepie/jvm-basic-memory-overview-535m
