Java NIO: what is the difference between a Direct Buffer and a Heap Buffer?

1. Drawback: creating and releasing a Direct Buffer is more expensive than a Heap Buffer.

2. Difference: a Direct Buffer is not allocated on the heap and is not directly managed by the GC (the Java object that wraps a Direct Buffer is still GC-managed; only after the GC collects that wrapper object does the operating system release the native memory the Direct Buffer occupies). Conceptually it feels like a "buffer in kernel". A Heap Buffer, by contrast, is allocated on the heap; you can simply think of it as a wrapper around a byte[] array, and the JDK source shows that is exactly how it is implemented (a minimal sketch of the two allocation paths follows this list).

3. Advantage: when we write a Direct Buffer to a Channel, it is as if the contents of a "kernel buffer" were written to the Channel directly, which is obviously faster because it saves a data copy (ordinary read/write calls always go through a "kernel buffer" sitting between the I/O device and the application's address space). When we write a Heap Buffer to a Channel, the underlying implementation first obtains a temporary Direct Buffer, copies the Heap Buffer's contents into it, and then writes that Direct Buffer out. If we call write repeatedly with a Heap Buffer, the implementation can reuse the temporary Direct Buffer, so performance does not suffer from constantly creating and destroying Direct Buffers.
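A minimal sketch of the two allocation paths (the class name and buffer size here are arbitrary, not from the original post):

```java
import java.nio.ByteBuffer;

public class BufferKinds {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] on the JVM heap.
        ByteBuffer heap = ByteBuffer.allocate(4096);
        System.out.println(heap.isDirect());   // false
        System.out.println(heap.hasArray());   // true -> heap.array() is the backing byte[]

        // Direct buffer: native memory outside the heap, not backed by a byte[].
        ByteBuffer direct = ByteBuffer.allocateDirect(4096);
        System.out.println(direct.isDirect()); // true
        System.out.println(direct.hasArray()); // false
    }
}
```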

In short, remember three things:
(1) Ordinary read/write calls always pass through a "kernel buffer" between the I/O device and the application's address space.
(2) A Direct Buffer acts somewhat like a cache at that "kernel buffer" level and is not directly managed by the GC, whereas a Heap Buffer is merely a wrapper around a byte[] array. Writing a Direct Buffer to a Channel is therefore faster than writing a Heap Buffer to a Channel.
(3) Creating and destroying Direct Buffers is expensive, so use them where they can be reused as much as possible (see the sketch after this list).
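A small sketch of point (3): allocate one direct buffer up front and reuse it for every write instead of allocating per call. The class name, buffer size, and channel are illustrative assumptions, not from the original text:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

public class ReusedDirectBuffer {
    // Allocated once; direct-buffer creation is expensive, so keep it around.
    private final ByteBuffer buf = ByteBuffer.allocateDirect(8 * 1024);

    public void write(WritableByteChannel channel, byte[] payload) throws IOException {
        buf.clear();                 // reset position/limit so the buffer can be reused
        buf.put(payload);            // assumes payload fits into the buffer
        buf.flip();                  // switch from writing-into to reading-from
        while (buf.hasRemaining()) { // a channel write may be partial
            channel.write(buf);
        }
    }
}
```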




Reference:
http://stackoverflow.com/questions/5670862/bytebuffer-allocate-vs-bytebuffer-allocatedirect
Operating systems perform I/O operations on memory areas. These memory areas, as far as the operating system is concerned, are contiguous sequences of bytes. It's no surprise then that only byte buffers are eligible to participate in I/O operations. Also recall that the operating system will directly access the address space of the process, in this case the JVM process, to transfer the data. This means that memory areas that are targets of I/O operations must be contiguous sequences of bytes. In the JVM, an array of bytes may not be stored contiguously in memory, or the Garbage Collector could move it at any time. Arrays are objects in Java, and the way data is stored inside that object could vary from one JVM implementation to another.
For this reason, the notion of a direct buffer was introduced.
Direct buffers are intended for interaction with channels and native I/O routines. They make a best effort to store the byte elements in a memory area that a channel can use for direct, or raw, access by using native code to tell the operating system to drain or fill the memory area directly.
Direct byte buffers are usually the best choice for I/O operations. By design, they support the most efficient I/O mechanism available to the JVM. Nondirect byte buffers can be passed to channels, but doing so may incur a performance penalty. It's usually not possible for a nondirect buffer to be the target of a native I/O operation. If you pass a nondirect ByteBuffer object to a channel for write, the channel may implicitly do the following on each call:
1. Create a temporary direct ByteBuffer object.
2. Copy the content of the nondirect buffer to the temporary buffer.
3. Perform the low-level I/O operation using the temporary buffer.
4. The temporary buffer object goes out of scope and is eventually garbage collected.
This can potentially result in buffer copying and object churn on every I/O, which are exactly the sorts of things we'd like to avoid. However, depending on the implementation, things may not be this bad. The runtime will likely cache and reuse direct buffers or perform other clever tricks to boost throughput.
If you're simply creating a buffer for one-time use, the difference is not significant.
If we construct a ByteBuffer for one-time use and never reuse it, there is no significant difference between a Direct Buffer and a Heap Buffer. There are two situations where a Direct Buffer can still improve performance (a code sketch of each case follows this list):
1. Large files. Even if the Direct Buffer is used only once, copying a large Heap Buffer is expensive, so a Direct Buffer helps. This is why, when a server sends a large file for download, besides the sendfile mechanism it can also use memory mapping: map the file into memory as a MappedByteBuffer (which is a kind of Direct Buffer) and write that MappedByteBuffer straight to the SocketChannel. This avoids a data copy and improves performance.

2. Repeatedly used data, such as fixed HTTP error responses (a 404 page, for example). The response bytes are the same for every request, so we can stage this fixed content in a Direct Buffer up front (partially modifying the buffer's contents is fine too; what matters is that the Direct Buffer gets reused). Writing that Direct Buffer straight to the SocketChannel is then faster than writing a Heap Buffer.
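A sketch of the first case, memory-mapping a large file and writing the mapping to a socket; the method and class names are placeholders, not from the original text:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedFileSend {
    // Map the file into memory and write the mapping straight to the socket,
    // avoiding a copy through a heap byte[].
    static void send(Path file, SocketChannel socket) throws IOException {
        try (FileChannel fc = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer mapped = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
            while (mapped.hasRemaining()) {
                socket.write(mapped);   // partial writes are possible on a socket
            }
        }
    }
}
```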
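And a sketch of the second case: a fixed response (the 404 body here is a made-up example) staged once in a direct buffer and reused for every connection via duplicate(), so each write gets its own position/limit without consuming the shared buffer:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class CannedResponse {
    // Filled once at startup; the bytes never change, only the read position does.
    private static final ByteBuffer NOT_FOUND;
    static {
        byte[] bytes = ("HTTP/1.1 404 Not Found\r\n"
                + "Content-Length: 0\r\n\r\n").getBytes(StandardCharsets.US_ASCII);
        NOT_FOUND = ByteBuffer.allocateDirect(bytes.length);
        NOT_FOUND.put(bytes).flip();
    }

    static void reply404(SocketChannel socket) throws IOException {
        // duplicate() shares the same direct memory but has an independent position/limit.
        ByteBuffer view = NOT_FOUND.duplicate();
        while (view.hasRemaining()) {
            socket.write(view);
        }
    }
}
```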

On the other hand, if you will be using the buffer repeatedly in a high-performance scenario, you're better off allocating direct buffers and reusing them.
Direct buffers are optimal for I/O, but they may be more expensive to create than nondirect byte buffers.
The memory used by direct buffers is allocated by calling through to native, operating system-specific code, bypassing the standard JVM heap. Setting up and tearing down direct buffers could be significantly more expensive than heap-resident buffers, depending on the host operating system and JVM implementation. The memory-storage areas of direct buffers are not subject to garbage collection because they are outside the standard JVM heap.
The performance tradeoffs of using direct versus nondirect buffers can vary widely by JVM, operating system, and code design. By allocating memory outside the heap, you may subject your application to additional forces of which the JVM is unaware. When bringing additional moving parts into play, make sure that you're achieving the desired effect. I recommend the old software maxim: first make it work, then make it fast. Don't worry too much about optimization up front; concentrate first on correctness. The JVM implementation may be able to perform buffer caching or other optimizations that will give you the performance you need without a lot of unnecessary effort on your part.
