Is 2 GB the size limit for a coredump file on Linux?

My operating system is Arch Linux. After a process dumped core, I tried to debug the dump with gdb:

$ coredumpctl gdb 1621

......

Storage: /var/lib/systemd/coredump/core.runTests.1014.b43166f4bba84bcba55e65ae9460beff.1621.1491901119000000000000.lz4

Message: Process 1621 (runTests) of user 1014 dumped core.

Stack trace of thread 1621:

#0 0x00007ff1c0fcfa10 n/a (n/a)

GNU gdb (GDB) 7.12.1

......

Reading symbols from /home/xiaonan/Project/privDB/build/bin/runTests...done.

BFD: Warning: /var/tmp/coredump-28KzRc is truncated: expected core file size >= 2179375104, found: 2147483648.

I checked the /var/tmp/coredump-28KzRc file:

$ ls -alth /var/tmp/coredump-28KzRc

-rw------- 1 xiaonan xiaonan 2.0G Apr 11 17:00 /var/tmp/coredump-28KzRc
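Note that the size in the BFD warning, 2147483648 bytes, is exactly 2 GiB, which matches the 2.0G reported by ls. The arithmetic can be checked directly in the shell:

```shell
# 2 GiB expressed in bytes: 2 * 1024^3
echo $((2 * 1024 * 1024 * 1024))   # prints 2147483648
```

So the core file was cut off at a precise 2 GiB boundary, which suggests a configured limit rather than exhausted disk space.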

Is 2 GB the size limit for core dump files on Linux? I think /var/tmp has enough free disk space:

$ df -h

Filesystem Size Used Avail Use% Mounted on

dev 32G 0 32G 0% /dev

run 32G 3.1M 32G 1% /run

/dev/sda2 229G 86G 132G 40% /

tmpfs 32G 708M 31G 3% /dev/shm

tmpfs 32G 0 32G 0% /sys/fs/cgroup

tmpfs 32G 957M 31G 3% /tmp

/dev/sda1 511M 33M 479M 7% /boot

/dev/sda3 651G 478G 141G 78% /home

P.S. The output of `ulimit -a`:

$ ulimit -a

core file size (blocks, -c) unlimited

data seg size (kbytes, -d) unlimited

scheduling priority (-e) 0

file size (blocks, -f) unlimited

pending signals (-i) 257039

max locked memory (kbytes, -l) 64

max memory size (kbytes, -m) unlimited

open files (-n) 1024

pipe size (512 bytes, -p) 8

POSIX message queues (bytes, -q) 819200

real-time priority (-r) 0

stack size (kbytes, -s) 8192

cpu time (seconds, -t) unlimited

max user processes (-u) 257039

virtual memory (kbytes, -v) unlimited

file locks (-x) unlimited

Update: the /etc/systemd/coredump.conf file:

$ cat coredump.conf

# This file is part of systemd.

#

# systemd is free software; you can redistribute it and/or modify it

# under the terms of the GNU Lesser General Public License as published by

# the Free Software Foundation; either version 2.1 of the License, or

# (at your option) any later version.

#

# Entries in this file show the compile time defaults.

# You can change settings by editing this file.

# Defaults can be restored by simply deleting this file.

#

# See coredump.conf(5) for details.

[Coredump]

#Storage=external

#Compress=yes

#ProcessSizeMax=2G

#ExternalSizeMax=2G

#JournalSizeMax=767M

#MaxUse=

#KeepFree=
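The commented entries above are the compile-time defaults, and `#ProcessSizeMax=2G` / `#ExternalSizeMax=2G` correspond exactly to the 2147483648-byte point where the core was truncated. A sketch of raising these caps with a drop-in file, per coredump.conf(5) (the 8G value here is only illustrative, not a recommendation; run as root):

```shell
# Sketch: override the systemd-coredump size caps via a drop-in.
# 8G is an arbitrary example value large enough for this ~2.03 GiB core.
mkdir -p /etc/systemd/coredump.conf.d
cat > /etc/systemd/coredump.conf.d/size.conf <<'EOF'
[Coredump]
ProcessSizeMax=8G
ExternalSizeMax=8G
EOF
```

systemd-coredump reads its configuration when a crash is handled, so the new limits should apply to the next dump without restarting anything.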

2017-04-11

Nan Xiao


Can you actually create a large enough file on that filesystem?


@SergeiKurenkov: Yes, I used `dd if=/dev/zero of=test bs=1024 count=4M` to create a 4G file.


Here http://stackoverflow.com/questions/8768719/coredump-is-getting-truncated it is suggested to also check `ulimit -f`.
