Analyzing a Go process hanging after OOM inside Docker


Go version: 1.16

Background: a Go process running in Docker uses a lot of memory and was frequently oom-killed even before its usage reached the container's memory limit. To stop the program from being killed so often, the oom-killer was disabled when starting the container. That introduced a new problem.

Symptom: once the container's memory is exhausted, the Go process hangs and stops responding entirely (with no spare memory the system cannot allocate new fds, so it cannot serve requests). Even a built-in "restart when memory reaches the limit" mechanism never fires; the only option is to kill the process.

Since pprof showed that much of the process's memory could be freed at GC time, the initial suspicion was a problem inside the Go process itself.

Before the hang occurred, I logged into the container and wrote a small Go test program that allocates a small chunk of memory and then sleeps, started with GODEBUG=gctrace=1 to print GC information. The trace showed the mark phase's STW time reaching 31s (the "31823+15+0.11 ms" clock times correspond to STW mark prepare, concurrent marking, and STW mark termination).
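A minimal sketch of that test program, reconstructed from the description above (the exact code is not in the original; buffer size and sleep duration are assumptions):

```go
package main

import (
	"fmt"
	"time"
)

// allocate grabs a small buffer so the GC has a live object to mark.
func allocate() []byte {
	buf := make([]byte, 1<<20) // 1 MiB
	buf[0] = 1                 // touch the memory so pages are faulted in
	return buf
}

func main() {
	buf := allocate()
	fmt.Println("allocated", len(buf), "bytes; sleeping")
	time.Sleep(time.Second) // the original test slept far longer while gctrace ran
}
```

Run it as `GODEBUG=gctrace=1 ./test` to get one gctrace line per GC cycle.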

(screenshot: gctrace output showing the 31s STW mark prepare time)

Could it be that a failed allocation doesn't trigger an OOM exit? Let's look at the OOM-related logic in the Go runtime:

mgcwork.go:374

if s == nil {
   systemstack(func() {
      s = mheap_.allocManual(workbufAlloc/pageSize, spanAllocWorkBuf)
   })
   if s == nil {
      throw("out of memory")
   }
   // Record the new span in the busy list.
   lock(&work.wbufSpans.lock)
   work.wbufSpans.busy.insert(s)
   unlock(&work.wbufSpans.lock)
}

mheap allocates memory via mmap, so the next suspicion was that inside Docker, mmap's error code might not be non-zero on failure, letting the failure go undetected:

func sysMap(v unsafe.Pointer, n uintptr, sysStat *sysMemStat) {
   sysStat.add(int64(n))
   p, err := mmap(v, n, _PROT_READ| _PROT_WRITE, _MAP_ANON| _MAP_FIXED| _MAP_PRIVATE, -1, 0)
   if err == _ENOMEM {
      throw("runtime: out of memory")
   }
   if p != v || err != 0 {
      throw("runtime: cannot map pages in arena address space")
   }
}

For comparison, I wrote a small C program that calls mmap and ran it in the same container at the same time:

#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/mman.h>
#define BUF_SIZE 393216
int main(void) {
    char *addr;
    int i;
    for (i = 0; i < 1000000; i++) {
        addr = (char *)mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (addr != MAP_FAILED) {
            /* Touch both ends so the pages are actually faulted in. */
            addr[0] = 'a';
            addr[BUF_SIZE-1] = 'b';
            printf("i: %d, sz: %d, addr[0]: %c, addr[-1]: %c\n",
                    i, BUF_SIZE, addr[0], addr[BUF_SIZE-1]);
            munmap(addr, BUF_SIZE);
        } else {
            printf("errno: %d\n", errno);
        }
        usleep(999999); /* ~1s; POSIX requires usleep() arguments < 1000000 */
    }
    return 0;
}

mmap never failed, yet the C program hung in exactly the same way, so this is not a Go-specific mechanism: the process must be blocking inside the kernel. Looking at the kernel call stack, it is hanging inside cgroup code:
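The kernel-side stacks below come from procfs. A quick way to collect them yourself (the PID is illustrative; reading `/proc/<pid>/stack` requires root):

```shell
# Kernel call stack of the hung process:
cat /proc/12345/stack

# A task blocked this way shows up in (killable) sleep rather than running:
grep '^State:' /proc/12345/status
```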

[] mem_cgroup_oom_synchronize+0x275/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff

The Go program's threads show the same stacks:

[] futex_wait_queue_me+0xc1/0x120
[] futex_wait+0xf6/0x250
[] do_futex+0x2fb/0xb20
[] SyS_futex+0x7a/0x170
[] do_syscall_64+0x68/0x100
[] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[] 0xffffffffffffffff
[] hrtimer_nanosleep+0xce/0x1e0
[] SyS_nanosleep+0x8b/0xa0
[] do_syscall_64+0x68/0x100
[] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[] 0xffffffffffffffff
[] mem_cgroup_oom_synchronize+0x16a/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff
[] mem_cgroup_oom_synchronize+0x16a/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff
[] mem_cgroup_oom_synchronize+0x16a/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff

Reading the cgroup memory-controller code: when no memory is available and oom-kill is disabled, the faulting task is parked on a wait queue and only woken from the head of the queue once memory becomes available again. There is no configuration option or other way to bypass this logic.

elixir.bootlin.com/linux/v4.14…

/**
 * mem_cgroup_oom_synchronize - complete memcg OOM handling
 * @handle: actually kill/wait or just clean up the OOM state
 *
 * This has to be called at the end of a page fault if the memcg OOM
 * handler was enabled.
 *
 * Memcg supports userspace OOM handling where failed allocations must
 * sleep on a waitqueue until the userspace task resolves the
 * situation.  Sleeping directly in the charge context with all kinds
 * of locks held is not a good idea, instead we remember an OOM state
 * in the task and mem_cgroup_oom_synchronize() has to be called at
 * the end of the page fault to complete the OOM handling.
 *
 * Returns %true if an ongoing memcg OOM situation was detected and
 * completed, %false otherwise.
 */
bool mem_cgroup_oom_synchronize(bool handle)
{
        struct mem_cgroup *memcg = current->memcg_in_oom;
        struct oom_wait_info owait;
        bool locked;
        /* OOM is global, do not handle */
        if (!memcg)
                return false;
        if (!handle)
                goto cleanup;
        owait.memcg = memcg;
        owait.wait.flags = 0;
        owait.wait.func = memcg_oom_wake_function;
        owait.wait.private = current;
        INIT_LIST_HEAD(&owait.wait.entry);
        prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
        mem_cgroup_mark_under_oom(memcg);
        locked = mem_cgroup_oom_trylock(memcg);
        if (locked)
                mem_cgroup_oom_notify(memcg);
        if (locked && !memcg->oom_kill_disable) {
                mem_cgroup_unmark_under_oom(memcg);
                finish_wait(&memcg_oom_waitq, &owait.wait);
                mem_cgroup_out_of_memory(memcg, current->memcg_oom_gfp_mask,
                                         current->memcg_oom_order);
        } else {
                schedule();
                mem_cgroup_unmark_under_oom(memcg);
                finish_wait(&memcg_oom_waitq, &owait.wait);
        }
        if (locked) {
                mem_cgroup_oom_unlock(memcg);
                /*
                 * There is no guarantee that an OOM-lock contender
                 * sees the wakeups triggered by the OOM kill
                 * uncharges.  Wake any sleepers explicitly.
                 */
                memcg_oom_recover(memcg);
        }
cleanup:
        current->memcg_in_oom = NULL;
        css_put(&memcg->css);
        return true;
}

Conclusion:

After the container's memory is exhausted, the Go GC's mark phase needs to mmap new pages to record marked objects. With the cgroup out of memory, the resulting page fault blocks in the cgroup's OOM wait queue. GC can therefore never finish and never free memory, so the Go program stays stuck in STW forever, unable to serve traffic, and it cannot recover even when load drops. It is best not to disable Docker's oom-kill.
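If oom-kill must stay disabled, one partial mitigation (a sketch, under the assumption that the process can still act before the cgroup is completely full) is to watch the runtime's own memory statistics and exit early for a restart, since once the limit is reached even the exit path can page-fault and block:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"time"
)

// softLimit is a hypothetical threshold, set well below the cgroup limit
// so the process exits while allocations can still succeed. It is 1 TiB
// here only so this demo never triggers.
const softLimit = 1 << 40

// overSoftLimit reports whether memory obtained from the OS exceeds limit.
func overSoftLimit(limit uint64) bool {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.Sys >= limit
}

func main() {
	// Check periodically from a watchdog loop.
	for i := 0; i < 3; i++ {
		if overSoftLimit(softLimit) {
			fmt.Println("soft memory limit reached, exiting for restart")
			os.Exit(1)
		}
		time.Sleep(10 * time.Millisecond)
	}
	fmt.Println("memory below soft limit")
}
```

This only helps if the soft limit leaves enough headroom; the original article's in-process restart failed precisely because it acted only after the cgroup was already full.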

