5.7.2.4 sYSMALLOc()
When _int_malloc() has failed to satisfy a request from the fast bins, the last remainder chunk, the small bins, the large bins, and the top chunk, it falls back to sYSMALLOc(), which obtains memory directly from the system to carve out the required chunk. Its implementation is as follows:
/*
  sysmalloc handles malloc cases requiring more memory from the system.
  On entry, it is assumed that av->top does not have enough
  space to service request for nb bytes, thus requiring that av->top
  be extended or replaced.
*/

#if __STD_C
static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
#else
static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
#endif
{
  mchunkptr       old_top;        /* incoming value of av->top */
  INTERNAL_SIZE_T old_size;       /* its size */
  char*           old_end;        /* its end address */

  long            size;           /* arg to first MORECORE or mmap call */
  char*           brk;            /* return value from MORECORE */

  long            correction;     /* arg to 2nd MORECORE call */
  char*           snd_brk;        /* 2nd return val */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
  INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
  char*           aligned_brk;    /* aligned offset into brk */

  mchunkptr       p;              /* the allocated/returned chunk */
  mchunkptr       remainder;      /* remainder from allocation */
  unsigned long   remainder_size; /* its size */

  unsigned long   sum;            /* for updating stats */

  size_t          pagemask   = mp_.pagesize - 1;
  bool            tried_mmap = false;

#if HAVE_MMAP

  /*
    If have mmap, and the request size meets the mmap threshold, and
    the system supports mmap, and there are few enough currently
    allocated mmapped regions, try to directly map this request
    rather than expanding top.
  */

  if ((unsigned long)(nb) >= (unsigned long)(mp_.mmap_threshold) &&
      (mp_.n_mmaps < mp_.n_mmaps_max)) {

    char* mm;             /* return value from mmap call*/

If the requested chunk size is at least the mmap allocation threshold (128 KB by default), and the number of blocks the current process has allocated with mmap() is still below the configured maximum, mmap() is used to request memory directly from the operating system.

  try_mmap:
    /*
      Round up size to nearest page.  For mmapped chunks, the overhead
      is one SIZE_SZ unit larger than for normal chunks, because there
      is no following chunk whose prev_size field could be used.
    */
#if 1
    /* See the front_misalign handling below, for glibc there is no
       need for further alignments.
    */
    size = (nb + SIZE_SZ + pagemask) & ~pagemask;
#else
    size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
#endif
    tried_mmap = true;
Here nb is the required chunk size; _int_malloc() has already converted the user's request size into a chunk size. When such a chunk is allocated directly with mmap(), there is no adjacent chunk following it, so no successor's prev_size field can be reused, and an extra SIZE_SZ bytes are needed. In addition, memory blocks allocated with mmap() must be page-aligned, so the allocation size size must be recomputed by rounding up to a whole page.
    /* Don't try if size wraps around 0 */
    if ((unsigned long)(size) > (unsigned long)(nb)) {

      mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));

      if (mm != MAP_FAILED) {

        /*
          The offset to the start of the mmapped region is stored
          in the prev_size field of the chunk. This allows us to adjust
          returned start address to meet alignment requirements here
          and in memalign(), and still be able to compute proper
          address argument for later munmap in free() and realloc().
        */

#if 1
        /* For glibc, chunk2mem increases the address by 2*SIZE_SZ and
           MALLOC_ALIGN_MASK is 2*SIZE_SZ-1.  Each mmap'ed area is page
           aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
        assert (((INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK) == 0);
#else
        front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {
          correction = MALLOC_ALIGNMENT - front_misalign;
          p = (mchunkptr)(mm + correction);
          p->prev_size = correction;
          set_head(p, (size - correction) |IS_MMAPPED);
        }
        else
#endif
          {
            p = (mchunkptr)mm;
            set_head(p, size|IS_MMAPPED);
          }
If the recomputed size is not larger than nb, the calculation has wrapped around zero and no memory is allocated. Otherwise mmap() is called to allocate size bytes. On success, the pointer returned by mmap() is cast to a chunk pointer, the chunk's size is set to size, and its IS_MMAPPED flag is set to record that this chunk was obtained directly from the system via mmap(). Because the address returned by mmap() is page-aligned, it is necessarily 2*SIZE_SZ-aligned as well, which satisfies the chunk alignment rule; chunk2mem() can therefore safely derive the usable memory address, and no extra alignment work is needed here.
        /* update statistics */

        if (++mp_.n_mmaps > mp_.max_n_mmaps)
          mp_.max_n_mmaps = mp_.n_mmaps;

        sum = mp_.mmapped_mem += size;
        if (sum > (unsigned long)(mp_.max_mmapped_mem))
          mp_.max_mmapped_mem = sum;
#ifdef NO_THREADS
        sum += av->system_mem;
        if (sum > (unsigned long)(mp_.max_total_mem))
          mp_.max_total_mem = sum;
#endif
Update the statistics. First the count of mmap-allocated blocks for the process is incremented; if the new count exceeds the recorded high-water mark mp_.max_n_mmaps, the mark is updated. Note that mp_.max_n_mmaps is a statistic, the peak number of simultaneously mmapped regions, and is distinct from the limit mp_.n_mmaps_max checked in the entry condition, so this update can indeed take effect. Then size is added to the total amount of mmap-allocated memory; if the total exceeds mp_.max_mmapped_mem, the current value is recorded as the new maximum. In a single-threaded build (NO_THREADS), the process-wide total is also tracked: if the mmapped total plus av->system_mem exceeds mp_.max_total_mem, mp_.max_total_mem is updated to the current value.
        check_chunk(av, p);

        return chunk2mem(p);
      }
    }
  }
#endif

  /* Record incoming configuration of top */

  old_top  = av->top;
  old_size = chunksize(old_top);
  old_end  = (char*)(chunk_at_offset(old_top, old_size));

  brk = snd_brk = (char*)(MORECORE_FAILURE);

Save the current top chunk's pointer, size, and end address into temporaries.

  /*
     If not the first time through, we require old_size to be
     at least MINSIZE and to have prev_inuse set.
  */

  assert((old_top == initial_top(av) && old_size == 0) ||
         ((unsigned long) (old_size) >= MINSIZE &&
          prev_inuse(old_top) &&
          ((unsigned long)old_end & pagemask) == 0));

  /* Precondition: not enough current space to satisfy nb request */
  assert((unsigned long)(old_size) < (unsigned long)(nb + MINSIZE));

#ifndef ATOMIC_FASTBINS
  /* Precondition: all fastbins are consolidated */
  assert(!have_fastchunks(av));
#endif
Check the validity of the top chunk. On the first call to this function the top chunk may not yet be initialized, in which case old_size may be 0. If the top chunk has been initialized, its size must be at least MINSIZE, because the top chunk must be able to hold the fencepost, which occupies MINSIZE bytes. The top chunk must mark its preceding chunk as in use (this is an invariant), and its end address must be page-aligned. Furthermore, the top chunk minus the fencepost must be too small to hold the requested chunk; otherwise _int_malloc() would already have satisfied the request from the top chunk. Finally, if the ATOMIC_FASTBINS optimization is not enabled, the arena lock must be held both when _int_malloc() allocates and when free() inserts a chunk into the fast bins; since _int_malloc() consolidated all fast-bin chunks into the unsorted bin before calling this function, the fast bins must contain no free chunks here.
  if (av != &main_arena) {

    heap_info *old_heap, *heap;
    size_t old_heap_size;

    /* First try to extend the current heap. */
    old_heap = heap_for_ptr(old_top);
    old_heap_size = old_heap->size;
    if ((long) (MINSIZE + nb - old_size) > 0
        && grow_heap(old_heap, MINSIZE + nb - old_size) == 0) {
      av->system_mem += old_heap->size - old_heap_size;
      arena_mem += old_heap->size - old_heap_size;
#if 0
      if(mmapped_mem + arena_mem + sbrked_mem > max_total_mem)
        max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
#endif
      set_head(old_top, (((char *)old_heap + old_heap->size) - (char *)old_top)
               | PREV_INUSE);

If the current arena is not the main arena, heap_for_ptr() derives the heap_info instance of the current sub_heap from the top chunk pointer. Since the top chunk's remaining space cannot hold the required chunk (the earlier assertion guarantees this), try to grow the readable/writable region of the sub_heap; on success, update the memory-allocation statistics and set the new size of the top chunk.

    }
    else if ((heap = new_heap(nb + (MINSIZE + sizeof(*heap)), mp_.top_pad))) {

Otherwise call new_heap() to create a new sub_heap. The new sub_heap must hold at least the nb-byte chunk, a MINSIZE fencepost, and a heap_info instance of sizeof(*heap) bytes, so the size passed to new_heap() is nb + (MINSIZE + sizeof(*heap)).

      /* Use a newly allocated heap.  */
      heap->ar_ptr = av;
      heap->prev = old_heap;
      av->system_mem += heap->size;
      arena_mem += heap->size;
#if 0
      if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
        max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
#endif
      /* Set up the new top.  */
      top(av) = chunk_at_offset(heap, sizeof(*heap));
      set_head(top(av), (heap->size - sizeof(*heap)) | PREV_INUSE);

Store the current arena pointer in the newly created sub_heap and link the sub_heap into the arena's sub_heap list; update the arena's allocation statistics; then make the single free chunk of the new sub_heap the arena's top chunk and set the top chunk's header.

      /* Setup fencepost and free the old top chunk. */

      /* The fencepost takes at least MINSIZE bytes, because it might
         become the top chunk again later.  Note that a footer is set
         up, too, although the chunk is marked in use.
      */
      old_size -= MINSIZE;
      set_head(chunk_at_offset(old_top, old_size + 2*SIZE_SZ), 0|PREV_INUSE);
      if (old_size >= MINSIZE) {
        set_head(chunk_at_offset(old_top, old_size), (2*SIZE_SZ)|PREV_INUSE);
        set_foot(chunk_at_offset(old_top, old_size), (2*SIZE_SZ));
        set_head(old_top, old_size|PREV_INUSE|NON_MAIN_ARENA);
#ifdef ATOMIC_FASTBINS
        _int_free(av, old_top, 1);
#else
        _int_free(av, old_top);
#endif
      } else {
        set_head(old_top, (old_size + 2*SIZE_SZ)|PREV_INUSE);
        set_foot(old_top, (old_size + 2*SIZE_SZ));
      }
Set up the fencepost of the old top chunk. The fencepost requires MINSIZE bytes, so old_size is reduced by MINSIZE to obtain the usable space of the old top chunk. First the second fencepost chunk's size is set to 0 with the PREV_INUSE bit set. Then, if the old top chunk's usable space is at least MINSIZE, the old top chunk can be split into a free chunk and the fencepost: the first fencepost chunk gets a size of 2*SIZE_SZ with PREV_INUSE set, plus a footer marking it free, even though the second fencepost chunk marks it as in use. The chunk split off from the old top chunk is likewise logically free, yet the first fencepost chunk marks its predecessor as in use; that nominally in-use chunk is then released with _int_free(). This odd-looking arrangement exists entirely to honor the invariant that no two free chunks may be adjacent.
If the usable space of the old top chunk is smaller than MINSIZE, the entire old top chunk becomes the fencepost, and the first fencepost chunk's header is set accordingly.
    }
    else if (!tried_mmap)
      /* We can at least try to use to mmap memory. */
      goto try_mmap;

If both growing the sub_heap's readable/writable region and creating a new sub_heap have failed, fall back to allocating the required chunk directly from the system with mmap().

  } else { /* av == main_arena */

    /* Request enough space for nb + pad + overhead */

    size = nb + mp_.top_pad + MINSIZE;

If the current arena is the main arena, recompute the size that needs to be allocated.

    /*
      If contiguous, we can subtract out existing space that we hope to
      combine with new space. We add it back later only if
      we don't actually get contiguous space.
    */

    if (contiguous(av))
      size -= old_size;

Normally the main arena allocates from the heap with sbrk(), which returns contiguous virtual memory; here the amount to request is reduced by the free space already in the top chunk.

    /*
      Round to a multiple of page size.
      If MORECORE is not contiguous, this ensures that we only call it
      with whole-page arguments.  And if MORECORE is contiguous and this
      is not first time through, this preserves page-alignment of previous
      calls. Otherwise, we correct to page-align below.
    */

    size = (size + pagemask) & ~pagemask;

Round size up to a page boundary; requests to sbrk() for contiguous virtual memory are made in whole pages.

    /*
      Don't try to call MORECORE if argument is so big as to appear
      negative. Note that since mmap takes size_t arg, it may succeed
      below even if we cannot call MORECORE.
    */

    if (size > 0)
      brk = (char*)(MORECORE(size));

Use sbrk() to allocate a size-byte block of virtual memory from the heap.

    if (brk != (char*)(MORECORE_FAILURE)) {
      /* Call the `morecore' hook if necessary.  */
      void (*hook) (void) = force_reg (__after_morecore_hook);
      if (__builtin_expect (hook != NULL, 0))
        (*hook) ();

If sbrk() succeeded and a morecore hook function is installed, invoke the hook.

    } else {
      /*
        If have mmap, try using it as a backup when MORECORE fails or
        cannot be used. This is worth doing on systems that have "holes" in
        address space, so sbrk cannot extend to give contiguous space, but
        space is available elsewhere.  Note that we ignore mmap max count
        and threshold limits, since the space will not be used as a
        segregated mmap region.
      */
#if HAVE_MMAP
      /* Cannot merge with old top, so add its size back in */
      if (contiguous(av))
        size = (size + old_size + pagemask) & ~pagemask;

      /* If we are relying on mmap as backup, then use larger units */
      if ((unsigned long)(size) < (unsigned long)(MMAP_AS_MORECORE_SIZE))
        size = MMAP_AS_MORECORE_SIZE;

If sbrk() failed or is unavailable, fall back to mmap(): recompute the required size and round it to a page boundary, and if the result is below MMAP_AS_MORECORE_SIZE (1 MB), raise it to 1 MB. In other words, the minimum block that mmap() allocates when standing in for MORECORE is 1 MB.

      /* Don't try if size wraps around 0 */
      if ((unsigned long)(size) > (unsigned long)(nb)) {

        char *mbrk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));

        if (mbrk != MAP_FAILED) {

          /* We do not need, and cannot use, another sbrk call to find end */
          brk = mbrk;
          snd_brk = brk + size;

          /*
            Record that we no longer have a contiguous sbrk region.
            After the first time mmap is used as backup, we do not
            ever rely on contiguous space since this could incorrectly
            bridge regions.
          */
          set_noncontiguous(av);
        }

If the requested size has not wrapped around zero, allocate it with mmap(). On success, update brk and snd_brk, and mark the arena as one that may allocate non-contiguous blocks of virtual memory.

      }
#endif
    }

    if (brk != (char*)(MORECORE_FAILURE)) {
      if (mp_.sbrk_base == 0)
        mp_.sbrk_base = brk;
      av->system_mem += size;

If brk is valid, i.e. sbrk() or mmap() succeeded, initialize sbrk_base if it has not been set, and add size to the arena's total allocated memory.

      /*
        If MORECORE extends previous space, we can likewise extend top size.
      */

      if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE))
        set_head(old_top, (size + old_size) | PREV_INUSE);

      else if (contiguous(av) && old_size && brk < old_end) {
        /* Oops!  Someone else killed our space..  Can't touch anything.  */
        malloc_printerr (3, "break adjusted to free malloc space", brk);
      }
If sbrk() returned memory adjacent to the old top chunk (brk == old_end) and mmap() was not used as a backup, simply grow the top chunk by size, keeping its PREV_INUSE bit set. If, however, the arena is supposed to be contiguous, the old top chunk is non-empty, and the new brk lies below the old top chunk's end address, some other code has moved the break into our space, which is a fatal error.
      /*
        Otherwise, make adjustments:

        * If the first time through or noncontiguous, we need to
          call sbrk just to find out where the end of memory lies.

        * We need to ensure that all returned chunks from malloc
          will meet MALLOC_ALIGNMENT

        * If there was an intervening foreign sbrk, we need to adjust sbrk
          request size to account for fact that we will not be able to
          combine new space with existing space in old_top.

        * Almost all systems internally allocate whole pages at a time, in
          which case we might as well use the whole last page of request.
          So we allocate enough more memory to hit a page boundary now,
          which in turn causes future contiguous calls to page-align.
      */

      else {
        front_misalign = 0;
        end_misalign = 0;
        correction = 0;
        aligned_brk = brk;
Reaching this branch means the brk value returned by sbrk() is above the old top chunk's end address, so the new region is not adjacent to the old top chunk, probably because some other code called sbrk() in the meantime (a "foreign sbrk"). Realignment must now be handled here.
        /* handle contiguous cases */
        if (contiguous(av)) {

          /* Count foreign sbrk as system_mem.  */
          if (old_size)
            av->system_mem += brk - old_end;

If this arena is supposed to allocate contiguous virtual memory and a foreign sbrk() occurred, the memory obtained by that foreign sbrk() is counted in this arena's allocation statistics.

          /* Guarantee alignment of first new chunk made from this space */

          front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
          if (front_misalign > 0) {

            /*
              Skip over some bytes to arrive at an aligned position.
              We don't need to specially mark these wasted front bytes.
              They will never be accessed anyway because
              prev_inuse of av->top (and any chunk created from its start)
              is always true after initialization.
            */

            correction = MALLOC_ALIGNMENT - front_misalign;
            aligned_brk += correction;
          }

Compute how many bytes the current brk must be advanced so that it is MALLOC_ALIGNMENT-aligned.

          /*
            If this isn't adjacent to existing space, then we will not
            be able to merge with old_top space, so must add to 2nd request.
          */

          correction += old_size;

          /* Extend the end address to hit a page boundary */
          end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
          correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;

          assert(correction >= 0);
          snd_brk = (char*)(MORECORE(correction));
Since the old top chunk is no longer adjacent to the current brk, its memory cannot be reused, and enough additional memory must be requested for the new chunk: old_size is added to the correction, the unaligned end address end_misalign of the chunk carved from the current brk is computed, and the number of bytes needed to round end_misalign up to a page boundary is added to the correction as well. sbrk() is then called again for correction bytes; if this second sbrk() succeeds, the required chunk of contiguous memory can be carved out of the new top chunk.
          /*
            If can't allocate correction, try to at least find out current
            brk.  It might be enough to proceed without failing.

            Note that if second sbrk did NOT fail, we assume that space
            is contiguous with first sbrk. This is a safe assumption unless
            program is multithreaded but doesn't use locks and a foreign
            sbrk occurred between our first and second calls.
          */

          if (snd_brk == (char*)(MORECORE_FAILURE)) {
            correction = 0;
            snd_brk = (char*)(MORECORE(0));

If the second sbrk() failed, reset the correction to 0 and query the current end of the break with MORECORE(0).

          } else {
            /* Call the `morecore' hook if necessary.  */
            void (*hook) (void) = force_reg (__after_morecore_hook);
            if (__builtin_expect (hook != NULL, 0))
              (*hook) ();

If the second sbrk() succeeded and a morecore hook function is installed, invoke the hook.

          }
        }

        /* handle non-contiguous cases */
        else {
          /* MORECORE/mmap must correctly align */
          assert(((unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK) == 0);

          /* Find out current end of memory */
          if (snd_brk == (char*)(MORECORE_FAILURE)) {
            snd_brk = (char*)(MORECORE(0));
          }

Reaching this point means brk was allocated with mmap(), so the assertion holds: mmap() returns page-aligned addresses, which are necessarily MALLOC_ALIGNMENT-aligned. If snd_brk is still invalid, query the current end of memory with MORECORE(0).

        }

        /* Adjust top based on results of second sbrk */
        if (snd_brk != (char*)(MORECORE_FAILURE)) {
          av->top = (mchunkptr)aligned_brk;
          set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
          av->system_mem += correction;

If snd_brk is valid, set the arena's top chunk to aligned_brk, set the top chunk's size, and add the correction to the arena's total allocated memory.

          /*
            If not the first time through, we either have a
            gap due to foreign sbrk or a non-contiguous region.  Insert a
            double fencepost at old_top to prevent consolidation with space
            we don't own. These fenceposts are artificial chunks that are
            marked as inuse and are in any case too small to use.
            We need two to make sizes and alignments work out.
          */

          if (old_size != 0) {
            /*
               Shrink old_top to insert fenceposts, keeping size a
               multiple of MALLOC_ALIGNMENT. We know there is at least
               enough space in old_top to do this.
            */
            old_size = (old_size - 4*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
            set_head(old_top, old_size | PREV_INUSE);

            /*
              Note that the following assignments completely overwrite
              old_top when old_size was previously MINSIZE.  This is
              intentional. We need the fencepost, even if old_top otherwise
              gets lost.
            */
            chunk_at_offset(old_top, old_size            )->size =
              (2*SIZE_SZ)|PREV_INUSE;
            chunk_at_offset(old_top, old_size + 2*SIZE_SZ)->size =
              (2*SIZE_SZ)|PREV_INUSE;

            /* If possible, release the rest. */
            if (old_size >= MINSIZE) {
#ifdef ATOMIC_FASTBINS
              _int_free(av, old_top, 1);
#else
              _int_free(av, old_top);
#endif
            }
Set up the fencepost of the old top chunk. The two fencepost pseudo-chunks need 4*SIZE_SZ bytes, so old_size is reduced by 4*SIZE_SZ and rounded down to a multiple of MALLOC_ALIGNMENT, giving the usable space of the old top chunk. The old top chunk is split into a chunk of this size and the fencepost: the split-off chunk keeps old_size as its size with PREV_INUSE set; it is logically free, yet the first fencepost chunk marks its predecessor as in use, and the nominally in-use chunk is then released with _int_free() (provided it is at least MINSIZE). Both fencepost chunks get a size of 2*SIZE_SZ with PREV_INUSE set. Note that the main arena's fencepost differs from a non-main arena's: here the second fencepost chunk's size is set to 2*SIZE_SZ, whereas in a non-main arena the second fencepost chunk's size is set to 0.
          }
        }
      }

      /* Update statistics */
#ifdef NO_THREADS
      sum = av->system_mem + mp_.mmapped_mem;
      if (sum > (unsigned long)(mp_.max_total_mem))
        mp_.max_total_mem = sum;
#endif

    }

At this point the allocation work specific to the main arena is complete.

  } /* if (av != &main_arena) */

  if ((unsigned long)av->system_mem > (unsigned long)(av->max_system_mem))
    av->max_system_mem = av->system_mem;

If the arena's total allocated memory now exceeds its recorded maximum, update the maximum.

  check_malloc_state(av);

  /* finally, do the allocation */
  p = av->top;
  size = chunksize(p);

  /* check that one of the above allocation paths succeeded */
  if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {
    remainder_size = size - nb;
    remainder = chunk_at_offset(p, nb);
    av->top = remainder;
    set_head(p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
    set_head(remainder, remainder_size | PREV_INUSE);
    check_malloced_chunk(av, p, nb);
    return chunk2mem(p);
  }

If the top chunk now contains enough memory for the request, carve the required chunk out of it and return it; the remainder becomes the new top chunk.

  /* catch all failure paths */
  MALLOC_FAILURE_ACTION;
  return 0;
}