Netty memory analysis

Netty memory size classes

[Figure: Netty memory size classes (Netty-内存规格.png)]

The class diagram of Netty's memory allocators is as follows:


[Figure: ByteBufAllocator class diagram (ByteBufAllocator类图.png)]

Let's look directly at io.netty.buffer.PooledByteBufAllocator#newDirectBuffer:

@Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
        PoolThreadCache cache = threadCache.get();
        PoolArena<ByteBuffer> directArena = cache.directArena;

        final ByteBuf buf;
        if (directArena != null) {
            buf = directArena.allocate(cache, initialCapacity, maxCapacity);
        } else {
            buf = PlatformDependent.hasUnsafe() ?
                    UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity) :
                    new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
        }

        return toLeakAwareBuffer(buf);
    }

Memory allocation strategy

1. First, fetch the thread-bound cache, PoolThreadCache, from the PoolThreadLocalCache.
2. If it does not exist yet, pick a PoolArena from the arena array and create a new PoolThreadCache bound to it to serve as the cache (a usage sketch follows below).
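From application code, all of this is hidden behind the allocator interface. A minimal usage sketch (the buffer size and contents here are arbitrary illustration, not tied to the internals above):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;

public class PooledAllocationExample {
    public static void main(String[] args) {
        // The shared pooled allocator; each calling thread gets its own PoolThreadCache.
        ByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;

        // Internally this goes through newDirectBuffer(...) shown above.
        ByteBuf buf = alloc.directBuffer(256);
        try {
            buf.writeInt(42);
        } finally {
            // Pooled buffers must be released so their memory goes back to the pool.
            buf.release();
        }
    }
}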

The concrete allocation code (io.netty.buffer.PoolArena#allocate) is as follows:

private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity) {
        final int normCapacity = normalizeCapacity(reqCapacity);
        if (isTinyOrSmall(normCapacity)) { // capacity < pageSize
            int tableIdx;
            PoolSubpage<T>[] table;
            boolean tiny = isTiny(normCapacity);
            if (tiny) { // < 512
                if (cache.allocateTiny(this, buf, reqCapacity, normCapacity)) {
                    // was able to allocate out of the cache so move on
                    return;
                }
                tableIdx = tinyIdx(normCapacity);
                table = tinySubpagePools;
            } else {
                if (cache.allocateSmall(this, buf, reqCapacity, normCapacity)) {
                    // was able to allocate out of the cache so move on
                    return;
                }
                tableIdx = smallIdx(normCapacity);
                table = smallSubpagePools;
            }

            final PoolSubpage<T> head = table[tableIdx];

            /**
             * Synchronize on the head. This is needed as {@link PoolChunk#allocateSubpage(int)} and
             * {@link PoolChunk#free(long)} may modify the doubly linked list as well.
             */
            synchronized (head) {
                final PoolSubpage<T> s = head.next;
                if (s != head) {
                    assert s.doNotDestroy && s.elemSize == normCapacity;
                    long handle = s.allocate();
                    assert handle >= 0;
                    s.chunk.initBufWithSubpage(buf, handle, reqCapacity);
                    incTinySmallAllocation(tiny);
                    return;
                }
            }
            synchronized (this) {
                allocateNormal(buf, reqCapacity, normCapacity);
            }

            incTinySmallAllocation(tiny);
            return;
        }
        if (normCapacity <= chunkSize) {
            if (cache.allocateNormal(this, buf, reqCapacity, normCapacity)) {
                // was able to allocate out of the cache so move on
                return;
            }
            synchronized (this) {
                allocateNormal(buf, reqCapacity, normCapacity);
                ++allocationsNormal;
            }
        } else {
            // Huge allocations are never served via the cache so just call allocateHuge
            allocateHuge(buf, reqCapacity);
        }
    }
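With Netty's default settings (pageSize = 8 KiB, chunkSize = 16 MiB), the branches above map a normalized request size to the size classes from the first figure (normalizeCapacity rounds tiny requests up to a multiple of 16 and larger ones up to a power of two). A sketch of the classification, with the default constants hard-coded for illustration rather than read from a real arena:

// Illustrative size-class check mirroring the branches of allocate()
// (pre-4.1.45 tiny/small scheme; the constants assume Netty's defaults).
static String sizeClass(int normCapacity) {
    final int pageSize = 8192;               // default page size (8 KiB)
    final int chunkSize = 16 * 1024 * 1024;  // default chunk size (16 MiB)

    if (normCapacity < 512) {
        return "tiny";   // served from tinySubpagePools
    }
    if (normCapacity < pageSize) {
        return "small";  // served from smallSubpagePools
    }
    if (normCapacity <= chunkSize) {
        return "normal"; // served from a PoolChunk (allocateNormal)
    }
    return "huge";       // never cached; goes straight to allocateHuge
}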

To summarize:

Based on the requested size class, the allocator selects the matching MemoryRegionCache.

Combining this with the diagram above, we can see that a MemoryRegionCache ultimately references a PoolChunk, and it is PoolChunk's allocate() method that performs the real memory allocation.

PoolChunk, in turn, records the allocation state of each PoolSubpage through a binary tree.


[Figure: PoolChunk binary tree (PoolChunk二叉树.png)]

memoryMap stores the allocation state of each node, while depthMap stores each node's depth in the binary tree.
depthMap never changes after initialization, but memoryMap is updated as PoolSubpage allocations come and go.

At initialization, memoryMap holds the same values as depthMap.

A node's allocation state falls into one of three cases:

(1) memoryMap[id] == depthMap[id]: the node is fully free and can serve an allocation at its own depth.
(2) memoryMap[id] > depthMap[id]: at least one child has already been allocated, so the node can no longer serve a request of its own depth, but smaller requests can still be satisfied through its free children.
(3) memoryMap[id] == maximum depth + 1: everything under this node has been allocated; no memory is available here.

In brief:
To allocate, the search starts at the root and walks down until a suitable node is found, which is then marked as occupied.
After marking it, each parent is updated in turn, all the way up to the root: the parent's memoryMap[id] is set to the smaller of its two children's values.
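The sketch below reproduces this bookkeeping on a toy tree of depth 2 (4 leaves). It is not Netty code, only an illustration of the update rule just described:

// Toy memoryMap/depthMap bookkeeping on a depth-2 tree (nodes 1..7).
public class BuddyTreeSketch {
    static final int MAX_DEPTH = 2;
    static final byte UNUSABLE = MAX_DEPTH + 1;  // 3 = "fully allocated"
    static final byte[] depthMap = new byte[8];  // indices 1..7 are used
    static final byte[] memoryMap = new byte[8];

    public static void main(String[] args) {
        // Initially memoryMap[id] == depthMap[id] == depth of node id.
        for (int id = 1; id < 8; id++) {
            byte depth = (byte) (31 - Integer.numberOfLeadingZeros(id));
            depthMap[id] = depth;
            memoryMap[id] = depth;
        }

        allocate(4); // take the leftmost leaf (depth 2)
        // memoryMap[4] == 3 (unusable);
        // memoryMap[2] == min(3, 2) == 2: the left half now serves only depth-2 requests;
        // memoryMap[1] == min(2, 1) == 1: the free right half still serves a depth-1 request.
        System.out.println(memoryMap[1] + " " + memoryMap[2] + " " + memoryMap[4]); // 1 2 3
    }

    static void allocate(int id) {
        memoryMap[id] = UNUSABLE;  // mark the chosen node as occupied
        while (id > 1) {           // walk up, refreshing each parent
            id >>= 1;
            byte left = memoryMap[id << 1];
            byte right = memoryMap[(id << 1) + 1];
            memoryMap[id] = left < right ? left : right; // smaller child wins
        }
    }
}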

Once a node is obtained, its id is used to compute subpageIdx, which indexes into the PoolSubpage[] subpages array to fetch an available PoolSubpage.
If that PoolSubpage is null, a new one is created and added to the array so it can serve later allocations. If it is not null, it was returned to the pool after an earlier use; it is re-initialized and linked back into the pool's doubly linked list to take part in allocation again.

PoolSubpage's allocate method is then called; its return value describes how the subpage's storage is occupied (a long handle: the high 32 bits encode the allocated position within the subpage, the low 32 bits encode the allocated node in the binary tree).
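For reference, the packing and unpacking of that handle looks like the following. This mirrors PoolSubpage#toHandle and the decoding helpers in PoolChunk (4.1.x before 4.1.45), rewritten here as static methods so the snippet stands alone:

// Bit 62 marks a subpage handle, so even bitmapIdx 0 yields a non-zero
// high half and can be told apart from a plain page-level handle.
static long toHandle(int bitmapIdx, int memoryMapIdx) {
    return 0x4000000000000000L | (long) bitmapIdx << 32 | memoryMapIdx;
}

static int memoryMapIdx(long handle) {
    return (int) handle;                    // low 32 bits: node id in the tree
}

static int bitmapIdx(long handle) {
    return (int) (handle >>> Integer.SIZE); // high 32 bits: slot in the subpage bitmap
}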

Finally, PoolArena uses this handle to call PoolChunk's initBuf method, which completes the initialization of the PooledByteBuf.

Having located the appropriate MemoryRegionCache, PoolChunk's allocate method is invoked. Its core steps are as follows.

(1) Find the head of the PoolSubpage list in the PoolArena that owns this PoolChunk. Because allocation modifies the PoolSubpage linked list, the head node is locked for concurrency safety.

private long allocateSubpage(int normCapacity) {
        // Obtain the head of the PoolSubPage pool that is owned by the PoolArena and synchronize on it.
        // This is needed as we may add it back and so alter the linked-list structure.
        PoolSubpage<T> head = arena.findSubpagePoolHead(normCapacity);
        synchronized (head) {
            // ...
        }
    }

(2) Search the binary tree for a suitable node and obtain its node id.

private int allocateNode(int d) {
        int id = 1;
        int initial = - (1 << d); // has last d bits = 0 and rest all = 1
        byte val = value(id);
        if (val > d) { // unusable
            return -1;
        }
        while (val < d || (id & initial) == 0) { // id & initial == 1 << d for all ids at depth d, for < d it is 0
            id <<= 1;
            val = value(id);
            if (val > d) {
                id ^= 1;
                val = value(id);
            }
        }
        byte value = value(id);
        assert value == d && (id & initial) == 1 << d : String.format("val = %d, id & initial = %d, d = %d",
                value, id & initial, d);
        setValue(id, unusable); // mark as unusable
        updateParentsAlloc(id);
        return id;
    }

The search keeps descending: id <<= 1 moves to the left child (one level deeper); if that child cannot satisfy the request (val > d), id ^= 1 switches to its right sibling at the same level; this repeats until a node at depth d that satisfies the request is found, and its id is returned.
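Tracing this on the toy tree from the earlier sketch (leaf 4 already taken, so memoryMap is [-, 1, 2, 1, 3, 2, 2, 2] for ids 1..7), a request at depth d = 2 runs as follows; this is a hand trace, not Netty code:

// allocateNode(d = 2), initial = -(1 << 2):
//   id = 1, val = memoryMap[1] = 1  -> val < d, descend:             id = 2
//   id = 2, val = memoryMap[2] = 2  -> depth d not reached yet:      id = 4
//   id = 4, val = memoryMap[4] = 3  -> val > d, left child is full,
//                                      switch to sibling (id ^= 1):  id = 5
//   id = 5, val = memoryMap[5] = 2  -> at depth d and usable: return 5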

(3) Mark the allocated node as unusable and propagate the allocation information upward, updating each parent's state.

private int allocateNode(int d) {
        // ...
        updateParentsAlloc(id);
        return id;
    }

(4) Use the node id to obtain the corresponding PoolSubpage.

private long allocateSubpage(int normCapacity) {
        // ...
        int subpageIdx = subpageIdx(id);
        PoolSubpage<T> subpage = subpages[subpageIdx];
        // ...
    }

(5) Determine whether the PoolSubpage is newly created or reused after release. If newly created, it is added to the PoolSubpage[] subpages array; if reused, it is re-initialized, the page metadata (elemSize, bitmap, and so on) is updated, and the refreshed PoolSubpage is linked back into the memory pool's doubly linked list to take part in allocation.

private long allocateSubpage(int normCapacity) {
        // ...
        if (subpage == null) {
            subpage = new PoolSubpage<T>(head, this, id, runOffset(id), pageSize, normCapacity);
            subpages[subpageIdx] = subpage;
        } else {
            subpage.init(head, normCapacity);
        }
        return subpage.allocate();
    }

(6) Call PoolSubpage's allocate method, which returns the bitmap index recording the allocation within the PoolSubpage.

long allocate() {
        if (elemSize == 0) {
            return toHandle(0);
        }

        if (numAvail == 0 || !doNotDestroy) {
            return -1;
        }

        final int bitmapIdx = getNextAvail();
        int q = bitmapIdx >>> 6;
        int r = bitmapIdx & 63;
        assert (bitmap[q] >>> r & 1) == 0;
        bitmap[q] |= 1L << r;

        if (-- numAvail == 0) {
            removeFromPool();
        }

        return toHandle(bitmapIdx);
    }
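The bitmap here is a long[] in which every long covers 64 slots: q = bitmapIdx >>> 6 picks the word, and r = bitmapIdx & 63 picks the bit inside it. A standalone illustration of the same arithmetic (not Netty code):

// Mark slot `idx` as used in a long[] bitmap, mirroring the math above.
static void markUsed(long[] bitmap, int idx) {
    int q = idx >>> 6;  // which 64-bit word
    int r = idx & 63;   // which bit within that word
    assert (bitmap[q] >>> r & 1) == 0 : "slot already taken";
    bitmap[q] |= 1L << r;
}

// Example: markUsed(bitmap, 70) sets bit 6 of bitmap[1], since 70 = 64 + 6.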

(7) Finally, the handle (bitmap index plus tree node), the requested capacity, and related parameters are handed to PoolChunk's initBuf, which initializes the PooledByteBuf over the allocated region. Note how the two cases are told apart: a subpage handle always has bit 62 set (see toHandle above), so its bitmapIdx is non-zero, while a plain page-level run has bitmapIdx == 0 and is initialized directly.

void initBuf(PooledByteBuf<T> buf, long handle, int reqCapacity) {
        int memoryMapIdx = memoryMapIdx(handle);
        int bitmapIdx = bitmapIdx(handle);
        if (bitmapIdx == 0) {
            byte val = value(memoryMapIdx);
            assert val == unusable : String.valueOf(val);
            buf.init(this, handle, runOffset(memoryMapIdx) + offset, reqCapacity, runLength(memoryMapIdx),
                     arena.parent.threadCache());
        } else {
            initBufWithSubpage(buf, handle, bitmapIdx, reqCapacity);
        }
    }
