

ByteBuffer的allocate和allocateDirect

 hh3755 2013-04-03
In Java, when we need to operate on data at a lower level, we usually work with its byte form, and that is where the ByteBuffer class comes in. ByteBuffer provides two static factory methods:
Java code:
  public static ByteBuffer allocate(int capacity)
  public static ByteBuffer allocateDirect(int capacity)
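A quick sketch of the two factory methods in use (the class name is mine; the printed values follow from the ByteBuffer API):

```java
import java.nio.ByteBuffer;

public class AllocateDemo {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] inside the JVM heap
        ByteBuffer heap = ByteBuffer.allocate(1024);
        // Direct buffer: backed by native memory outside the JVM heap
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);

        System.out.println(heap.isDirect());    // false
        System.out.println(direct.isDirect());  // true
        System.out.println(heap.hasArray());    // true: heap buffers expose their backing array
        System.out.println(direct.hasArray());  // typically false: no accessible byte[]
    }
}
```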

Why provide two ways? It has to do with how Java uses memory. The first method allocates inside the JVM (on the heap), while the second allocates outside the JVM, that is, at the operating-system level. When a Java program receives data from the outside, the data first lands in system memory and is then copied from system memory into JVM memory for the Java program to use. With the second allocation method, that copy step can be skipped, so efficiency improves. However, system-level memory allocation is much more time-consuming than JVM heap allocation, so allocateDirect is not always the most efficient choice. Below is a comparison of the time the two allocation methods take at different capacities:

 
[Figure: allocation time for allocate vs. allocateDirect at various capacities; chart not reproduced]
As the chart shows, when the amount of data is small the two allocation methods take roughly the same time, and the first is sometimes even faster; but when the amount of data is large, the second method takes far longer than the first.
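The comparison can be reproduced with a rough timing sketch like the one below. This is illustrative only (a single pass with no JIT warmup or GC control, and the class name is mine), so treat the numbers as ballpark rather than a rigorous benchmark:

```java
import java.nio.ByteBuffer;

public class AllocationTiming {
    // Times a task in nanoseconds; crude, but enough to show the trend.
    static long time(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        final int capacity = 1 << 20; // 1 MiB per buffer
        final int rounds = 100;

        long heapNs = time(() -> {
            for (int i = 0; i < rounds; i++) ByteBuffer.allocate(capacity);
        });
        long directNs = time(() -> {
            for (int i = 0; i < rounds; i++) ByteBuffer.allocateDirect(capacity);
        });

        // Direct allocation is usually the slower of the two at large capacities.
        System.out.printf("heap:   %d ms%n", heapNs / 1_000_000);
        System.out.printf("direct: %d ms%n", directNs / 1_000_000);
    }
}
```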

For a more detailed discussion, see: http://stackoverflow.com/questions/5670862/bytebuffer-allocate-vs-bytebuffer-allocatedirect

Ron Hitchens, in his excellent book Java NIO, seems to offer what I thought could be a good answer to your question:

Operating systems perform I/O operations on memory areas. These memory areas, as far as the operating system is concerned, are contiguous sequences of bytes. It's no surprise then that only byte buffers are eligible to participate in I/O operations. Also recall that the operating system will directly access the address space of the process, in this case the JVM process, to transfer the data. This means that memory areas that are targets of I/O operations must be contiguous sequences of bytes. In the JVM, an array of bytes may not be stored contiguously in memory, or the garbage collector could move it at any time. Arrays are objects in Java, and the way data is stored inside that object could vary from one JVM implementation to another.

For this reason, the notion of a direct buffer was introduced. Direct buffers are intended for interaction with channels and native I/O routines. They make a best effort to store the byte elements in a memory area that a channel can use for direct, or raw, access by using native code to tell the operating system to drain or fill the memory area directly.

Direct byte buffers are usually the best choice for I/O operations. By design, they support the most efficient I/O mechanism available to the JVM. Nondirect byte buffers can be passed to channels, but doing so may incur a performance penalty. It's usually not possible for a nondirect buffer to be the target of a native I/O operation. If you pass a nondirect ByteBuffer object to a channel for write, the channel may implicitly do the following on each call:

  1. Create a temporary direct ByteBuffer object.
  2. Copy the content of the nondirect buffer to the temporary buffer.
  3. Perform the low-level I/O operation using the temporary buffer.
  4. The temporary buffer object goes out of scope and is eventually garbage collected.
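Passing a buffer that is already direct avoids those four implicit steps. A minimal sketch of filling a direct buffer and writing it to a FileChannel (the class name, temp-file name, and payload are mine):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectWriteDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nio-demo", ".bin");
        byte[] payload = "hello, direct I/O".getBytes(StandardCharsets.US_ASCII);

        // A direct buffer's memory can be handed straight to the OS by the
        // channel, with no hidden copy into a temporary direct buffer.
        ByteBuffer buf = ByteBuffer.allocateDirect(payload.length);
        buf.put(payload);
        buf.flip(); // switch from writing-into to reading-from the buffer

        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
        }
        System.out.println(Files.size(tmp)); // prints 17 (the payload length)
        Files.delete(tmp);
    }
}
```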

This can potentially result in buffer copying and object churn on every I/O, which are exactly the sorts of things we'd like to avoid. However, depending on the implementation, things may not be this bad. The runtime will likely cache and reuse direct buffers or perform other clever tricks to boost throughput. If you're simply creating a buffer for one-time use, the difference is not significant. On the other hand, if you will be using the buffer repeatedly in a high-performance scenario, you're better off allocating direct buffers and reusing them.
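The reuse advice above can be sketched as a copy loop that allocates one direct buffer up front and recycles it with clear() on every pass (class and method names are mine):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CopyWithReusedBuffer {
    // Copies src to dst using ONE direct buffer, reused across iterations --
    // the pattern the text recommends for repeated high-performance I/O.
    static void copy(Path src, Path dst) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024); // allocate once
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE,
                                                StandardOpenOption.CREATE)) {
            while (in.read(buf) != -1) {
                buf.flip();              // read back what was just filled
                while (buf.hasRemaining()) {
                    out.write(buf);
                }
                buf.clear();             // reset for the next read; no reallocation
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".bin");
        Files.write(src, new byte[200_000]); // larger than one buffer, forcing reuse
        Path dst = Files.createTempFile("dst", ".bin");
        copy(src, dst);
        System.out.println(Files.size(dst)); // prints 200000
        Files.delete(src);
        Files.delete(dst);
    }
}
```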

Direct buffers are optimal for I/O, but they may be more expensive to create than nondirect byte buffers. The memory used by direct buffers is allocated by calling through to native, operating system-specific code, bypassing the standard JVM heap. Setting up and tearing down direct buffers could be significantly more expensive than heap-resident buffers, depending on the host operating system and JVM implementation. The memory-storage areas of direct buffers are not subject to garbage collection because they are outside the standard JVM heap.

The performance tradeoffs of using direct versus nondirect buffers can vary widely by JVM, operating system, and code design. By allocating memory outside the heap, you may subject your application to additional forces of which the JVM is unaware. When bringing additional moving parts into play, make sure that you're achieving the desired effect. I recommend the old software maxim: first make it work, then make it fast. Don't worry too much about optimization up front; concentrate first on correctness. The JVM implementation may be able to perform buffer caching or other optimizations that will give you the performance you need without a lot of unnecessary effort on your part.
