Reading the Netty source: the ByteBuf cache allocation flow

Let's start by returning to PoolArena's allocate method:

    private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity) {
        final int normCapacity = normalizeCapacity(reqCapacity);
        if (isTinyOrSmall(normCapacity)) { // capacity < pageSize
            int tableIdx;
            PoolSubpage<T>[] table;
            boolean tiny = isTiny(normCapacity);
            if (tiny) { // < 512
                if (cache.allocateTiny(this, buf, reqCapacity, normCapacity)) {
                    // was able to allocate out of the cache so move on
                    return;
                }
                tableIdx = tinyIdx(normCapacity);
                table = tinySubpagePools;
            } else {
                if (cache.allocateSmall(this, buf, reqCapacity, normCapacity)) {
                    // was able to allocate out of the cache so move on
                    return;
                }
                tableIdx = smallIdx(normCapacity);
                table = smallSubpagePools;
            }

            final PoolSubpage<T> head = table[tableIdx];

            /**
             * Synchronize on the head. This is needed as {@link PoolChunk#allocateSubpage(int)} and
             * {@link PoolChunk#free(long)} may modify the doubly linked list as well.
             */
            synchronized (head) {
                final PoolSubpage<T> s = head.next;
                if (s != head) {
                    assert s.doNotDestroy && s.elemSize == normCapacity;
                    long handle = s.allocate();
                    assert handle >= 0;
                    s.chunk.initBufWithSubpage(buf, handle, reqCapacity);

                    if (tiny) {
                        allocationsTiny.increment();
                    } else {
                        allocationsSmall.increment();
                    }
                    return;
                }
            }
            allocateNormal(buf, reqCapacity, normCapacity);
            return;
        }
        if (normCapacity <= chunkSize) {
            if (cache.allocateNormal(this, buf, reqCapacity, normCapacity)) {
                // was able to allocate out of the cache so move on
                return;
            }
            allocateNormal(buf, reqCapacity, normCapacity);
        } else {
            // Huge allocations are never served via the cache so just call allocateHuge
            allocateHuge(buf, reqCapacity);
        }
    }

From the earlier articles we know that when allocating from an arena, Netty first tries the thread-local cache; if that succeeds it returns immediately, and only otherwise does it allocate from the arena's pooled memory. This article focuses on the cache.allocateTiny() call in the code above. Before getting there, we need to know how much memory will actually be allocated. The allocated size is always greater than or equal to the requested size, computed on the second line:

    final int normCapacity = normalizeCapacity(reqCapacity);

In other words, normCapacity >= reqCapacity.

Step into the method:

    int normalizeCapacity(int reqCapacity) {
        if (reqCapacity < 0) {
            throw new IllegalArgumentException("capacity: " + reqCapacity + " (expected: 0+)");
        }
        if (reqCapacity >= chunkSize) {
            return reqCapacity;
        }

        if (!isTiny(reqCapacity)) { // >= 512
            // Doubled

            int normalizedCapacity = reqCapacity;
            normalizedCapacity --;
            normalizedCapacity |= normalizedCapacity >>>  1;
            normalizedCapacity |= normalizedCapacity >>>  2;
            normalizedCapacity |= normalizedCapacity >>>  4;
            normalizedCapacity |= normalizedCapacity >>>  8;
            normalizedCapacity |= normalizedCapacity >>> 16;
            normalizedCapacity ++;

            if (normalizedCapacity < 0) {
                normalizedCapacity >>>= 1;
            }

            return normalizedCapacity;
        }

        // Quantum-spaced
        if ((reqCapacity & 15) == 0) {
            return reqCapacity;
        }

        return (reqCapacity & ~15) + 16;
    }

First, if the requested capacity reqCapacity is greater than or equal to chunkSize, it is returned unchanged without any rounding; such huge allocations are never served from the cache:

    if (reqCapacity >= chunkSize) {
        return reqCapacity;
    }

If the request is not tiny (i.e. >= 512), it is rounded up to the next power of two:

        if (!isTiny(reqCapacity)) { // >= 512
            // Doubled

            int normalizedCapacity = reqCapacity;
            normalizedCapacity --;
            normalizedCapacity |= normalizedCapacity >>>  1;
            normalizedCapacity |= normalizedCapacity >>>  2;
            normalizedCapacity |= normalizedCapacity >>>  4;
            normalizedCapacity |= normalizedCapacity >>>  8;
            normalizedCapacity |= normalizedCapacity >>> 16;
            normalizedCapacity ++;

            if (normalizedCapacity < 0) {
                normalizedCapacity >>>= 1;
            }

            return normalizedCapacity;
        }

This confirms what the previous article showed: in the small and normal size arrays, each entry is double the previous one.

If the request is tiny, it is rounded up to the next multiple of 16:

    return (reqCapacity & ~15) + 16;

In the tiny array each entry is 16 larger than the previous one, again matching the previous article.
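The two rounding rules above can be sketched as a standalone method. This is a simplified re-implementation for illustration, not Netty's actual code; `CHUNK_SIZE` here assumes the default of 16 MiB:

```java
public class NormalizeSketch {
    // Assumed default chunk size of 16 MiB (pageSize 8192 << maxOrder 11)
    static final int CHUNK_SIZE = 16 * 1024 * 1024;

    static int normalize(int reqCapacity) {
        if (reqCapacity >= CHUNK_SIZE) {
            return reqCapacity;              // huge: returned as-is, never cached
        }
        if (reqCapacity >= 512) {
            // small/normal: smear the highest set bit downward,
            // then add one to reach the next power of two
            int n = reqCapacity - 1;
            n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16;
            return n + 1;
        }
        // tiny: round up to the next multiple of 16
        return (reqCapacity & 15) == 0 ? reqCapacity : (reqCapacity & ~15) + 16;
    }

    public static void main(String[] args) {
        assert normalize(1000) == 1024;  // small: next power of two
        assert normalize(513) == 1024;
        assert normalize(33) == 48;      // tiny: next multiple of 16
        assert normalize(480) == 480;    // already a multiple of 16
    }
}
```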

 

Back in allocate, let's look at the source of isTinyOrSmall:

    // capacity < pageSize
    boolean isTinyOrSmall(int normCapacity) {
        return (normCapacity & subpageOverflowMask) == 0;
    }

If the capacity to allocate is smaller than pageSize, the request counts as tiny or small.

Now look at isTiny:

    // normCapacity < 512
    static boolean isTiny(int normCapacity) {
        return (normCapacity & 0xFFFFFE00) == 0;
    }

512 = 2^9, so if ANDing with 0xFFFFFE00 gives 0, the value is certainly less than 512.
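Both mask checks can be verified in isolation. The sketch below assumes the default pageSize of 8192, so subpageOverflowMask would be ~(pageSize - 1); the class and constant names are mine, not Netty's:

```java
public class SizeClassSketch {
    // Assumed default pageSize of 8192
    static final int PAGE_SIZE = 8192;
    // Any bit at or above the pageSize bit means capacity >= pageSize
    static final int SUBPAGE_OVERFLOW_MASK = ~(PAGE_SIZE - 1);

    static boolean isTinyOrSmall(int normCapacity) {
        return (normCapacity & SUBPAGE_OVERFLOW_MASK) == 0;
    }

    static boolean isTiny(int normCapacity) {
        // 0xFFFFFE00 keeps bits 9 and up; none set means capacity < 512
        return (normCapacity & 0xFFFFFE00) == 0;
    }

    public static void main(String[] args) {
        assert isTinyOrSmall(8191) && !isTinyOrSmall(8192);
        assert isTiny(511) && !isTiny(512);
    }
}
```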

 

Finally, back to cache.allocateTiny(this, buf, reqCapacity, normCapacity), which allocates a tiny buffer. The small and normal flows are largely the same, so we take tiny as the example. The flow is:

1. Find the MemoryRegionCache for the requested size.

2. Poll an entry from its queue and use it to initialize the ByteBuf.

3. Return the polled entry to the object pool for reuse.

We enter through cache.allocateTiny(this, buf, reqCapacity, normCapacity):

    /**
     * Try to allocate a tiny buffer out of the cache. Returns {@code true} if successful {@code false} otherwise
     */
    boolean allocateTiny(PoolArena<?> area, PooledByteBuf<?> buf, int reqCapacity, int normCapacity) {
        return allocate(cacheForTiny(area, normCapacity), buf, reqCapacity);
    }

 

1. Find the MemoryRegionCache for the requested size

cacheForTiny(area, normCapacity) finds the matching cache. Step in:

    private MemoryRegionCache<?> cacheForTiny(PoolArena<?> area, int normCapacity) {
        int idx = PoolArena.tinyIdx(normCapacity);
        if (area.isDirect()) {
            return cache(tinySubPageDirectCaches, idx);
        }
        return cache(tinySubPageHeapCaches, idx);
    }

tinyIdx first computes the index idx, and that index into tinySubPageDirectCaches selects the MemoryRegionCache of the required size. First, tinyIdx:

    static int tinyIdx(int normCapacity) {
        return normCapacity >>> 4;
    }

normCapacity is the size we need to allocate; shifting right by 4 divides it by 16. From the previous article we know the tiny array looks like this:

tiny[0]=0, tiny[1]=16, tiny[2]=32, tiny[3]=48, tiny[4]=64 ... tiny[31]=496. For 496, dividing by 16 gives 31, so it maps to index 31; for 32, dividing by 16 gives 2, so index 2. This matches the distribution rule of the tiny array.

Now look at cache(tinySubPageDirectCaches, idx):

    private static <T> MemoryRegionCache<T> cache(MemoryRegionCache<T>[] cache, int idx) {
        if (cache == null || idx > cache.length - 1) {
            return null;
        }
        return cache[idx];
    }

 

This shows how the index picks out the corresponding MemoryRegionCache.

The small and normal caches can be analyzed the same way.
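For comparison, the small lookup uses PoolArena.smallIdx. Since small sizes double from 512, the index is effectively log2(normCapacity / 512); the loop below mirrors the logic in the source:

```java
public class SmallIdxSketch {
    // Mirrors PoolArena.smallIdx: counts how many times normCapacity
    // can be halved before dropping below 1024
    static int smallIdx(int normCapacity) {
        int tableIdx = 0;
        int i = normCapacity >>> 10;
        while (i != 0) {
            i >>>= 1;
            tableIdx++;
        }
        return tableIdx;
    }

    public static void main(String[] args) {
        assert smallIdx(512) == 0;
        assert smallIdx(1024) == 1;
        assert smallIdx(2048) == 2;
        assert smallIdx(4096) == 3;
    }
}
```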


2. Poll an entry from the queue and use it to initialize the ByteBuf

Having analyzed cacheForTiny, we return to allocate:

    @SuppressWarnings({ "unchecked", "rawtypes" })
    private boolean allocate(MemoryRegionCache cache, PooledByteBuf buf, int reqCapacity) {
        if (cache == null) {
            // no cache found so just return false here
            return false;
        }
        boolean allocated = cache.allocate(buf, reqCapacity);
        if (++ allocations >= freeSweepAllocationThreshold) {
            allocations = 0;
            trim();
        }
        return allocated;
    }

Continuing into MemoryRegionCache.allocate:


        /**
         * Allocate something out of the cache if possible and remove the entry from the cache.
         */
        public final boolean allocate(PooledByteBuf<T> buf, int reqCapacity) {
            Entry entry = queue.poll();
            if (entry == null) {
                return false;
            }
            initBuf(entry.chunk, entry.handle, buf, reqCapacity);
            entry.recycle();

            // allocations is not thread-safe which is fine as this is only called from the same thread all time.
            ++ allocations;
            return true;
        }

Here we can see that it simply polls an Entry from the queue. Let's look at the Entry class:

        static final class Entry<T> {
            final Handle<Entry<?>> recyclerHandle;
            PoolChunk<T> chunk;
            long handle = -1;

            Entry(Handle<Entry<?>> recyclerHandle) {
                this.recyclerHandle = recyclerHandle;
            }

            void recycle() {
                chunk = null;
                handle = -1;
                recyclerHandle.recycle(this);
            }
        }

It holds a chunk and a handle: the chunk locates the backing memory, and the handle pinpoints the specific region within it.
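In this version of Netty the 64-bit handle packs two indices: the low 32 bits are the memoryMapIdx (which node of the chunk's memory map) and the high 32 bits are the bitmapIdx (which slot within the subpage). The decoding can be sketched as follows, mirroring PoolChunk's private helpers; the sample handle value is constructed by hand for illustration:

```java
public class HandleSketch {
    // Mirrors PoolChunk.memoryMapIdx: low 32 bits of the handle
    static int memoryMapIdx(long handle) {
        return (int) handle;
    }

    // Mirrors PoolChunk.bitmapIdx: high 32 bits of the handle
    static int bitmapIdx(long handle) {
        return (int) (handle >>> 32);
    }

    public static void main(String[] args) {
        // A subpage handle ORs 0x4000000000000000L into the upper half so that
        // bitmap slot 0 is distinguishable from a plain page-level handle
        long handle = 0x4000000000000000L | (5L << 32) | 2048L;
        assert memoryMapIdx(handle) == 2048;
        // 0x3FFFFFFF strips that marker bit, as in initBufWithSubpage
        assert (bitmapIdx(handle) & 0x3FFFFFFF) == 5;
    }
}
```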

The entry's chunk and handle are passed into initBuf(entry.chunk, entry.handle, buf, reqCapacity). Since this is the tiny case, we follow the subpage implementation:

    private void initBufWithSubpage(PooledByteBuf<T> buf, long handle, int bitmapIdx, int reqCapacity) {
        assert bitmapIdx != 0;

        int memoryMapIdx = memoryMapIdx(handle);

        PoolSubpage<T> subpage = subpages[subpageIdx(memoryMapIdx)];
        assert subpage.doNotDestroy;
        assert reqCapacity <= subpage.elemSize;

        buf.init(
            this, handle,
            runOffset(memoryMapIdx) + (bitmapIdx & 0x3FFFFFFF) * subpage.elemSize, reqCapacity, subpage.elemSize,
            arena.parent.threadCache());
    }

Here we see the call buf.init. Stepping into the PooledUnsafeDirectByteBuf implementation and then up to the parent class PooledByteBuf:

    void init(PoolChunk<T> chunk, long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
        assert handle >= 0;
        assert chunk != null;

        this.chunk = chunk;
        this.handle = handle;
        memory = chunk.memory;
        this.offset = offset;
        this.length = length;
        this.maxLength = maxLength;
        tmpNioBuf = null;
        this.cache = cache;
    }

The chunk and handle are handed to the buffer. These two are the key: together they identify one unique region of memory.


3. Return the polled entry to the object pool for reuse

Back to entry.recycle():

To cut down the cost of creating and garbage-collecting objects, Netty maintains a dedicated object pool: objects that are no longer needed are recycled and reused next time. Look at the recycle() method:

    void recycle() {
        chunk = null;
        handle = -1;
        recyclerHandle.recycle(this);
    }

Here we see chunk and handle again: chunk = null drops the reference to the chunk, and handle = -1 means the entry no longer points at any address.

Continue into recyclerHandle.recycle(this):

        @Override
        public void recycle(Object object) {
            if (object != value) {
                throw new IllegalArgumentException("object does not belong to handle");
            }
            stack.push(this);
        }

Very simple: the entry is pushed back onto the object pool's stack.
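The idea behind this can be sketched with a minimal stack-based pool. This is a simplified illustration, not Netty's actual Recycler, which additionally uses thread-local stacks, capacity limits, and cross-thread recovery:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class MiniPool<T> {
    private final Deque<T> stack = new ArrayDeque<>();
    private final Supplier<T> factory;

    MiniPool(Supplier<T> factory) {
        this.factory = factory;
    }

    T get() {
        // Reuse a pooled instance if one is available, otherwise create one
        T obj = stack.poll();
        return obj != null ? obj : factory.get();
    }

    void recycle(T obj) {
        // Caller must have reset the object's state first
        // (cf. Entry.recycle setting chunk = null and handle = -1)
        stack.push(obj);
    }

    public static void main(String[] args) {
        MiniPool<StringBuilder> pool = new MiniPool<>(StringBuilder::new);
        StringBuilder a = pool.get();
        pool.recycle(a);
        assert pool.get() == a;   // the same instance is reused
    }
}
```

Returning an object to the pool avoids both the allocation on the next get() and the garbage-collection pressure of the discarded instance, which is exactly why Entry clears its fields before calling recyclerHandle.recycle(this).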
