This post documents the investigation of a fault caused by an out-of-bounds memory write in a kernel module, shared here for discussion.
All of the crashes occurred during the system boot phase.
Exception 1:
[ 6.854984] BUG: Bad page state in process khelper pfn:6db6d9addc07010f
[ 6.862471] page:ffff880821883b48 flags:ffff880821885e00 count:562584288 mapcount:-30711 mapping:ffff8808218857c0 index:ffff8808218854a0
[ 6.876156] Pid: 132, comm: khelper Not tainted 2.6.32.41- #17
[ 6.885093] Call Trace:
[ 6.887837] [<ffffffff810ba056>] bad_page+0x115/0x130
[ 6.893571] [<ffffffff810bba56>] get_page_from_freelist+0x45c/0x6d4
[ 6.900669] [<ffffffff810cb6f3>] ? __inc_zone_state+0x56/0x66
[ 6.907183] [<ffffffff810bbee6>] __alloc_pages_nodemask+0x152/0x8b8
[ 6.914284] [<ffffffff810bbee6>] ? __alloc_pages_nodemask+0x152/0x8b8
[ 6.921577] [<ffffffff810e221a>] alloc_pages_current+0x96/0x9f
[ 6.928191] [<ffffffff810e556d>] alloc_slab_page+0x1a/0x25
[ 6.934404] [<ffffffff810e55c3>] new_slab+0x4b/0x1ca
[ 6.940040] [<ffffffff810e60c8>] __slab_alloc+0x1c3/0x366
[ 6.946166] [<ffffffff811c221d>] ? selinux_cred_prepare+0x1a/0x30
[ 6.953081] [<ffffffff810e5719>] ? new_slab+0x1a1/0x1ca
[ 6.958999] [<ffffffff810e7718>] __kmalloc_track_caller+0xb0/0xff
[ 6.965901] [<ffffffff81069ec4>] ? prepare_creds+0x1a/0xa4
[ 6.972116] [<ffffffff811c221d>] ? selinux_cred_prepare+0x1a/0x30
[ 6.979026] [<ffffffff810c8947>] kmemdup+0x1b/0x31
[ 6.984476] [<ffffffff811c221d>] selinux_cred_prepare+0x1a/0x30
[ 6.991185] [<ffffffff811bd02d>] security_prepare_creds+0x11/0x13
[ 6.998094] [<ffffffff81069f38>] prepare_creds+0x8e/0xa4
[ 7.004121] [<ffffffff8106a5e6>] copy_creds+0x74/0x186
[ 7.009963] [<ffffffff810488bb>] copy_process+0x261/0x110b
[ 7.016185] [<ffffffff8105fec1>] ? __call_usermodehelper+0x0/0x6a
[ 7.023079] [<ffffffff81049cb2>] do_fork+0x158/0x2fa
[ 7.028725] [<ffffffff8101211a>] ? __cycles_2_ns+0x11/0x3d
[ 7.034943] [<ffffffff81012212>] ? native_sched_clock+0x3b/0x3d
[ 7.041649] [<ffffffff8105fec1>] ? __call_usermodehelper+0x0/0x6a
[ 7.048543] [<ffffffff8100ce22>] kernel_thread+0x82/0xe0
[ 7.054566] [<ffffffff8105fec1>] ? __call_usermodehelper+0x0/0x6a
[ 7.061482] [<ffffffff8105fcd8>] ? ____call_usermodehelper+0x0/0x118
[ 7.068666] [<ffffffff8100ce80>] ? child_rip+0x0/0x20
[ 7.074407] [<ffffffff8105ff0c>] ? __call_usermodehelper+0x4b/0x6a
[ 7.081403] [<ffffffff8106168e>] worker_thread+0x14e/0x1f8
[ 7.087627] [<ffffffff810651a3>] ? autoremove_wake_function+0x0/0x38
[ 7.094818] [<ffffffff81061540>] ? worker_thread+0x0/0x1f8
[ 7.101039] [<ffffffff81064f69>] kthread+0x69/0x71
[ 7.106488] [<ffffffff8100ce8a>] child_rip+0xa/0x20
[ 7.112027] [<ffffffff81064f00>] ? kthread+0x0/0x71
[ 7.117564] [<ffffffff8100ce80>] ? child_rip+0x0/0x20
Looking at just the first two lines, mapcount is negative, which is clearly wrong. The code that prints these fields is:
printk(KERN_ALERT
       "page:%p flags:%p count:%d mapcount:%d mapping:%p index:%lx\n",
       page, (void *)page->flags, page_count(page), page_mapcount(page),
       page->mapping, page->index);
At first, the page value 0xffff880821883b48 looked very odd, so I took it for an illegal value. That sent me down a detour: my initial judgment was that the page pointer itself was wrong.
On that reading, the direct cause would be that the struct page taken from the buddy system was invalid.
A struct page can come from two places: the per-CPU hot/cold page cache lists (pcp), or the zone->free_area lists.
That suggested two possibilities:
1) If, at runtime, the next field of one of those lists' list_head structures gets overwritten, the first page pointer taken off the list will be an illegal value.
The pcp list is a per-CPU variable, generally allocated from bootmem;
the free_area lists are likewise allocated from bootmem.
2) Or, at some point, a page's lru.next field gets overwritten; once that page is deleted from the list, the list head ends up with list_head.next = the bogus lru.next,
so the next time a free page is taken from the buddy system, the illegal page value is what comes off the list.
The struct page entries themselves live in the node_map array, which is also allocated from bootmem.
So, based on exception 1, the judgment was that some bootmem region was being overwritten.
One doubt remained: is 0xffff880821883b48 even a plausible value? On a healthy system, starting from some page's lru and following lru->next, I printed
the address of every page on the list. It turned out that on any one list, most page addresses look like 0xffffea000fbab8c8, while exactly one 'page' has an address like
0xffff880821883b48. On reflection, that 0xffff8808...-style 'page' is actually the zone->free_area->list_head (or pcp->list_head)
head itself: both heads are allocated in the dynamically mapped region (hence the different address range), and since a page's lru sits on a circular doubly linked list, it is no surprise that a traversal eventually walks over the head.
Seen this way, 0xffff880821883b48 is a perfectly normal value; what is abnormal is mistakenly taking the list head itself as a page.
Exception 2:
[ 6.839518] BUG: unable to handle kernel NULL pointer dereference at 0000000000000006
[ 6.858841] IP: [<ffffffff810bb84a>] get_page_from_freelist+0x306/0x6d4
[ 6.866234] PGD 0
[ 6.868481] Oops: 0002 [#1] PREEMPT SMP
[ 6.872895] last sysfs file:
[ 6.876201] CPU 19
[ 6.878551] Modules linked in:
[ 6.881971] Pid: 139, comm: khelper Not tainted 2.6.32.41 #1 To be filled by O.E.M.
[ 6.893024] RIP: 0010:[<ffffffff810bb84a>] [<ffffffff810bb84a>] get_page_from_freelist+0x306/0x6d4
[ 6.903125] RSP: 0018:ffff880821addac0 EFLAGS: 00010092
[ 6.909046] RAX: 0000000000000006 RBX: ffff88047e4e98f8 RCX: ffff88047e4e9920
[ 6.916991] RDX: ffff88047e4e9960 RSI: 0000000000000001 RDI: 0000000000000001
[ 6.924944] RBP: ffff880821addb90 R08: 000000000037e329 R09: ffff8800000140c0
[ 6.932898] R10: 0000000000000000 R11: dead000000200200 R12: dead000000100100
[ 6.940843] R13: 0000000000000246 R14: ffff8800000140c0 R15: ffff88047e4e9900
[ 6.948798] FS: 0000000000000000(0000) GS:ffff880028360000(0000) knlGS:0000000000000000
[ 6.957818] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
[ 6.964223] CR2: 0000000000000006 CR3: 0000000001001000 CR4: 00000000000406e0
[ 6.972177] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6.980123] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 6.988077] Process khelper (pid: 139, threadinfo ffff880821adc000, task ffff880821a39f40)
[ 6.997290] Stack:
[ 6.999531] ffff880821addaf0 ffffffff810b944a ffffea0000000041 0000000000000002
[ 7.007622] <0> 0000000200000246 ffff880000015140 0000000021addbd0 0000000000000000
[ 7.016220] <0> 00000040000212d0 ffff880000015148 ffff88047e4e98f8 0000000200000008
[ 7.025027] Call Trace:
[ 7.027759] [<ffffffff810b944a>] ? prep_compound_page+0x45/0x66
[ 7.034459] [<ffffffff810bbe30>] __alloc_pages_nodemask+0x152/0x8b8
[ 7.041546] [<ffffffff810e229a>] alloc_pages_current+0x96/0x9f
[ 7.048146] [<ffffffff810e55ed>] alloc_slab_page+0x1a/0x25
[ 7.054361] [<ffffffff810e5643>] new_slab+0x4b/0x1ca
[ 7.059995] [<ffffffff810e6148>] __slab_alloc+0x1c3/0x366
[ 7.066111] [<ffffffff8106a0b0>] ? prepare_exec_creds+0x1b/0xb3
[ 7.072809] [<ffffffff810e3b8a>] ? bit_spin_lock+0x17/0x6c
[ 7.079022] [<ffffffff8106a0b0>] ? prepare_exec_creds+0x1b/0xb3
[ 7.085719] [<ffffffff810e6433>] kmem_cache_alloc+0x57/0xd8
[ 7.092029] [<ffffffff8106a0b0>] prepare_exec_creds+0x1b/0xb3
[ 7.098533] [<ffffffff810f1275>] prepare_bprm_creds+0x30/0x53
[ 7.105038] [<ffffffff810f2648>] do_execve+0x80/0x2e0
[ 7.110769] [<ffffffff8100a5c3>] sys_execve+0x3e/0x58
[ 7.116498] [<ffffffff8105fea5>] ? __call_usermodehelper+0x0/0x6a
[ 7.123391] [<ffffffff8100cf08>] kernel_execve+0x68/0xd0
[ 7.129411] [<ffffffff8105fea5>] ? __call_usermodehelper+0x0/0x6a
[ 7.136303] [<ffffffff8105fdc9>] ? ____call_usermodehelper+0x10d/0x118
[ 7.143677] [<ffffffff8100ce8a>] child_rip+0xa/0x20
[ 7.149213] [<ffffffff8105fea5>] ? __call_usermodehelper+0x0/0x6a
[ 7.156104] [<ffffffff8105fcbc>] ? ____call_usermodehelper+0x0/0x118
[ 7.163286] [<ffffffff8100ce80>] ? child_rip+0x0/0x20
[ 7.169011] Code: 00 00 00 ad de 4c 89 65 80 49 bc 00 01 10 00 00 00 ad de 48 8b 4d 80 48 8b 5d 80 48 83 c1 28 48 8b 53 28 48 8b 41 08 48 89 42 08 <48> 89 10 4c 89 59 08 4c 89 63 28 41 ff 0f e9 99 00 00 00 f7 85
[ 7.190840] RIP [<ffffffff810bb84a>] get_page_from_freelist+0x306/0x6d4
[ 7.198325] RSP <ffff880821addac0>
[ 7.202212] CR2: 0000000000000006
[ 7.205903] ---[ end trace 4eaa2a86a8e2da22 ]---
From the RIP (get_page_from_freelist+0x306, resolved against our vmlinux), the faulting instruction is at 0xffffffff810bb8fc; the surrounding disassembly is:
ffffffff810bb8d0:	mov    $0xdead000000200200,%r11
ffffffff810bb8da:	mov    %r12,-0x80(%rbp)
ffffffff810bb8de:	mov    $0xdead000000100100,%r12
ffffffff810bb8e8:	mov    -0x80(%rbp),%rcx
ffffffff810bb8ec:	mov    -0x80(%rbp),%rbx
ffffffff810bb8f0:	add    $0x28,%rcx
ffffffff810bb8f4:	mov    0x28(%rbx),%rdx
ffffffff810bb8f8:	mov    0x8(%rcx),%rax
ffffffff810bb8fc:	mov    %rax,0x8(%rdx)
ffffffff810bb900:	mov    %rdx,(%rax)
ffffffff810bb903:	mov    %r11,0x8(%rcx)
ffffffff810bb907:	mov    %r12,0x28(%rbx)
From the assembly alone it is hard to tell which C statement this corresponds to,
but there is one clue: 0xffffffff810bb8d0 loads the immediate 0xdead000000200200 into r11,
and 0xffffffff810bb8de loads 0xdead000000100100 into r12. These two magic values are LIST_POISON2 and LIST_POISON1 respectively.
Combined with the stores at 0xffffffff810bb903 and 0xffffffff810bb907, we can infer that a list_del is performed here, because:
static inline void list_del(struct list_head *entry)
{
	__list_del(entry->prev, entry->next);
	entry->next = LIST_POISON1;
	entry->prev = LIST_POISON2;
}
So this disassembly, translated back to C, should read:
ffffffff810bb8d0:	mov    $0xdead000000200200,%r11	; LIST_POISON2
ffffffff810bb8da:	mov    %r12,-0x80(%rbp)
ffffffff810bb8de:	mov    $0xdead000000100100,%r12	; LIST_POISON1
ffffffff810bb8e8:	mov    -0x80(%rbp),%rcx		; rcx = page
ffffffff810bb8ec:	mov    -0x80(%rbp),%rbx		; rbx = page
ffffffff810bb8f0:	add    $0x28,%rcx		; rcx = &page->lru
ffffffff810bb8f4:	mov    0x28(%rbx),%rdx		; rdx = page->lru.next
ffffffff810bb8f8:	mov    0x8(%rcx),%rax		; rax = page->lru.prev
ffffffff810bb8fc:	mov    %rax,0x8(%rdx)		; page->lru.next->prev = page->lru.prev
ffffffff810bb900:	mov    %rdx,(%rax)		; page->lru.prev->next = page->lru.next
ffffffff810bb903:	mov    %r11,0x8(%rcx)		; page->lru.prev = LIST_POISON2
ffffffff810bb907:	mov    %r12,0x28(%rbx)		; page->lru.next = LIST_POISON1
So the direct cause of the crash is that things went wrong while taking a usable page off the lru list,
i.e. during list_del(&page->lru), an illegal page->lru.prev pointer made the kernel fault.
Putting exceptions 1 and 2 together, a corrupted lru list is the most likely culprit, so enabling CONFIG_DEBUG_LIST and trying to reproduce is the natural next step.
Looking back, though, exception 1 also showed page->flags holding ffff880821885e00, a kernel-pointer-like value, which very likely points to a stack overflow.
Checking our own module, we found a function that declared a local array of more than 8 KB on the stack! On x86_64 the kernel stack is only two pages, 8 KB, so this
is guaranteed to overflow. But the kernel was built with CONFIG_DEBUG_STACKOVERFLOW; why didn't it catch this?
Since the kernel is configured with CONFIG_NO_HZ, the system was still in its boot phase, and CONFIG_DEBUG_STACKOVERFLOW only performs its check inside interrupt handlers,
the overflow check can simply come too late. More importantly, the stack-overflow check itself reads:
WARN_ONCE(regs->sp >= curbase && regs->sp <= curbase + THREAD_SIZE &&
	  regs->sp <  curbase + sizeof(struct thread_info) +
		      sizeof(struct pt_regs) + 128,
	  "do_IRQ: %s near stack overflow (cur:%Lx,sp:%lx)\n",
	  current->comm, curbase, regs->sp);
If a single push grows the stack so violently that sp jumps clean past thread_info, the code above cannot detect it.
Checking the latest 3.10 kernel, the logic has been fixed:
if (regs->sp >= curbase &&
    regs->sp <= curbase + THREAD_SIZE &&
    regs->sp >= curbase + sizeof(struct thread_info) +
		sizeof(struct pt_regs) + 128)
	return;	/* OK */
warn(xxx);
After changing the module's on-stack array to a kmalloc allocation, burn-in testing never hit the fault again.