How to limit the total size used by core files or automatically delete old corefiles.
Andrzej Kardas
andrzej-kardas at o2.pl
Thu May 26 12:05:20 EDT 2011
On 26.05.2011 14:31, SADA SIVA REDDY S wrote:
> My Questions:
>
> 1. Is there a provision in Linux to automatically clean up old
> core files when we reach a certain limit?
>
I think there is no such feature. A core dump is a regular file saved in
the process's current working directory by default, and the system does
not keep track of these files: it simply generates the core dump and
forgets about it. In other words, the system treats a core dump as a
regular file and does not know that it is a core dump.
>
> 2. Is there a provision in Linux to set an upper limit for the space
> occupied by all core files (not individual core files)?
>
I think not; you can only limit the size of each generated core dump,
per process or per user (ulimit -c).
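For example, in bash (which counts the -c value in 1 KB blocks) the
per-process limit can be adjusted like this; the -S flag restricts the
change to the soft limit, so it can be raised again later:

```shell
# Show the current soft limit on core file size for this shell
# (bash reports it in 1 KB blocks, or "unlimited")
ulimit -S -c
# Allow core files of up to ~100 MB for programs started from this shell
ulimit -S -c 102400
# Disable core dumps for this shell entirely
ulimit -S -c 0
```

To make a per-user limit persistent, the "core" item in
/etc/security/limits.conf can be used (assuming pam_limits is enabled).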
But you can change the destination of all core dump files by adding the line
kernel.core_pattern = /vol/allcoredumps/%u/%e
to /etc/sysctl.conf.
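A quick way to check and apply the setting; the sysctl -p and mkdir
steps need root, so they are left as comments here, and the path
/vol/allcoredumps is just the example used above:

```shell
# The pattern currently in effect can be read without privileges:
cat /proc/sys/kernel/core_pattern
# After editing /etc/sysctl.conf, reload it (as root) with:
#   sysctl -p
# and make sure the target directory exists, e.g.:
#   mkdir -p /vol/allcoredumps
```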
After that, you can write a simple script that checks the amount of free
space and schedule it in crontab. When free space falls below a certain
limit, the script should remove the oldest or largest files from that
location.
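A minimal sketch of such a cleanup script, assuming GNU find/du and the
/vol/allcoredumps location from above; the 10 GB threshold and the
oldest-first policy are arbitrary choices:

```shell
#!/bin/sh
# Delete the oldest core files until the directory fits under a size cap.
prune_cores() {
    dir=$1
    limit_kb=$2
    [ -d "$dir" ] || return 0
    used_kb=$(du -sk "$dir" | awk '{print $1}')
    while [ "$used_kb" -gt "$limit_kb" ]; do
        # Oldest regular file by modification time (GNU find's -printf)
        oldest=$(find "$dir" -type f -printf '%T@ %p\n' |
                 sort -n | head -n 1 | cut -d' ' -f2-)
        [ -n "$oldest" ] || break
        rm -f -- "$oldest"
        used_kb=$(du -sk "$dir" | awk '{print $1}')
    done
}

# e.g. from cron: keep /vol/allcoredumps under ~10 GB
prune_cores /vol/allcoredumps $((10 * 1024 * 1024))
```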
Below is the list of available patterns:

%<NUL>: '%' is dropped
%%: output one '%'
%p: pid
%u: uid
%g: gid
%s: signal number
%t: UNIX time of dump
%h: hostname
%e: executable filename
%<other>: both are dropped
--
regards
Andrzej Kardas
http://www.linux.mynotes.pl
More information about the Kernelnewbies mailing list