Linux OOM

I’ve been working on a bug recently related to udev which, under certain circumstances, triggers an OOM on the Linux kernel. For another bug, I also wanted to find out the maximum number of processes that can keep a single file open. The results I found are pretty good. The Linux kernel’s OOM feature has improved a lot.

This is the one-liner used to trigger the OOM.

while true; do tail -f /tmp/test.txt & x=$(expr $x + 1); echo $x; done
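Written out as a small script, this is what the one-liner amounts to (just a readable sketch of the same thing):

#!/bin/bash
# Spawn an ever-growing pile of background readers on one file.
# Each tail -f keeps its own file descriptor open and sleeps in a
# polling loop, so processes accumulate until the kernel runs out
# of memory and the OOM killer has to step in.
x=0
while true; do
    tail -f /tmp/test.txt &     # one more reader, never reaped
    x=$(expr $x + 1)            # running count of spawned tails
    echo $x
done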

And here’s what it triggered in the kernel log.

[ 3007.922390] bash invoked oom-killer: gfp_mask=0xd0, order=1, oomkilladj=0
[ 3007.922404] Pid: 8527, comm: bash Tainted: P 2.6.26-1-686 #1
[ 3007.922435] [] oom_kill_process+0x4f/0x195
[ 3007.922481] [] out_of_memory+0x14e/0x17f
[ 3007.922523] [] __alloc_pages_internal+0x2c2/0x358
[ 3007.922567] [] __alloc_pages+0x7/0x9
[ 3007.922581] [] __get_free_pages+0xf/0x1b
[ 3007.922589] [] copy_process+0x9c/0xf5b
[ 3007.922625] [] do_fork+0xab/0x1dc
[ 3007.922647] [] do_pipe+0x7e/0xbd
[ 3007.922674] [] sys_clone+0x22/0x26
[ 3007.922691] [] sysenter_past_esp+0x78/0xb1
[ 3007.922721] [] serial_pnp_probe+0xde/0x16b
[ 3007.922748] =======================
[ 3007.922752] Mem-info:
[ 3007.922756] DMA per-cpu:
[ 3007.922760] CPU 0: hi: 0, btch: 1 usd: 0
[ 3007.922764] CPU 1: hi: 0, btch: 1 usd: 0
[ 3007.922768] Normal per-cpu:
[ 3007.922772] CPU 0: hi: 186, btch: 31 usd: 31
[ 3007.922777] CPU 1: hi: 186, btch: 31 usd: 0
[ 3007.922780] HighMem per-cpu:
[ 3007.922784] CPU 0: hi: 186, btch: 31 usd: 11
[ 3007.922789] CPU 1: hi: 186, btch: 31 usd: 0
[ 3007.922796] Active:136180 inactive:157488 dirty:38 writeback:8781 unstable:0
[ 3007.922798] free:75091 slab:32605 mapped:19662 pagetables:63330 bounce:0
[ 3007.922806] DMA free:3564kB min:68kB low:84kB high:100kB active:0kB inactive:20kB present:16256kB pages_scanned:0 all_unreclaimable? no
[ 3007.922812] lowmem_reserve[]: 0 873 2014 2014
[ 3007.922824] Normal free:272588kB min:3744kB low:4680kB high:5616kB active:428kB inactive:35148kB present:894080kB pages_scanned:289 all_unreclaimable? no
[ 3007.922830] lowmem_reserve[]: 0 0 9132 9132
[ 3007.922842] HighMem free:24212kB min:512kB low:1736kB high:2960kB active:544292kB inactive:594784kB present:1168980kB pages_scanned:0 all_unreclaimable? no
[ 3007.922848] lowmem_reserve[]: 0 0 0 0
[ 3007.922856] DMA: 9*4kB 141*8kB 2*16kB 0*32kB 1*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3564kB
[ 3007.922876] Normal: 67737*4kB 1*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 272572kB
[ 3007.922896] HighMem: 4157*4kB 652*8kB 36*16kB 7*32kB 3*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 24244kB
[ 3007.922916] 84292 total pagecache pages
[ 3007.922921] Swap cache: add 141567, delete 78660, find 134588/141329
[ 3007.922926] Free swap = 1755340kB
[ 3007.922929] Total swap = 2252792kB
[ 3007.940403] 523923 pages of RAM
[ 3007.940403] 294547 pages of HIGHMEM
[ 3007.940403] 5172 reserved pages
[ 3007.940403] 1512373 pages shared
[ 3007.940403] 62907 pages swap cached
[ 3007.940403] 38 pages dirty
[ 3007.940403] 8781 pages writeback
[ 3007.940403] 19662 pages mapped
[ 3007.940403] 32605 pages slab
[ 3007.940403] 63330 pages pagetables
[ 3007.940403] Out of memory: kill process 8527 (bash) score 2382580 or a child
[ 3007.940403] Killed process 9674 (tail)

There were many more such OOM kills triggered, and all of them picked the correct culprit, the tail program. Awesome. Long, long back, when the OOM killer was triggered, it wasn’t efficient enough in selecting the processes to kill. Things are really better now. I ran this whole test on my laptop with KDE4 running, and nothing broke. A Ctrl+C soon brought everything back to normal.
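If you’re curious how the kernel ranks its victims, the badness score is visible in /proc. Here’s a quick sketch for poking at it (oom_score and oom_adj are the 2.6-era knobs, matching the oomkilladj field in the log above; newer kernels use oom_score_adj instead):

#!/bin/bash
# Print the OOM killer's badness score for every running tail;
# the process with the highest score is the preferred victim.
for pid in $(pgrep tail); do
    echo "pid $pid oom_score $(cat /proc/$pid/oom_score)"
done

# To shield a process entirely (as root): -17 means "never OOM-kill".
# echo -17 > /proc/$PID/oom_adj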

One interesting observation was the following.

rrs@learner:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16372
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 16372
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

My understanding has been that “open files” is the maximum number of files a user can open. Here’s the relevant detail from the documentation.

-n The maximum number of open file descriptors (most systems do not allow this value to be set)
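Note that ulimit -n is only one of the knobs involved; the kernel also keeps system-wide file descriptor counters under /proc. A quick sketch of where to look:

#!/bin/bash
# Per-process cap on open file descriptors (the "open files" value above):
ulimit -n

# System-wide view: allocated fds, free fds, and the global ceiling.
cat /proc/sys/fs/file-nr
cat /proc/sys/fs/file-max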

With the above in mind, look at this:

rrs@learner:/tmp$ lsof /tmp/test.txt | wc -l
12503
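To see where those 12503 lines come from, one can count, per tail process, how many descriptors actually point at the file. A sketch using /proc (assuming the stress loop above is still running):

#!/bin/bash
# For each running tail, count its open descriptors on /tmp/test.txt
# by inspecting the symlinks under /proc/<pid>/fd.
for pid in $(pgrep tail); do
    n=$(ls -l /proc/$pid/fd 2>/dev/null | grep -c 'test.txt')
    echo "pid $pid holds $n descriptor(s) on /tmp/test.txt"
done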

Here’s the relevant strace output for one of the tail processes. Note the pattern at the end: tail -f simply polls, doing an fstat64() on the file followed by a one-second nanosleep(), over and over.

open("/tmp/test.txt", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=18, …}) = 0
_llseek(3, 0, [0], SEEK_CUR) = 0
_llseek(3, 0, [18], SEEK_END) = 0
_llseek(3, 0, [0], SEEK_SET) = 0
read(3, “Ritesh Raj Sarraf\n”…, 18) = 18
_llseek(3, 0, [0], SEEK_SET) = 0
read(3, “Ritesh Raj Sarraf\n”…, 18) = 18
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), …}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f58000
write(1, “Ritesh Raj Sarraf\n”…, 18Ritesh Raj Sarraf
) = 18
fstat64(3, {st_mode=S_IFREG|0644, st_size=18, …}) = 0
fcntl64(3, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE)
fcntl64(3, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
fstat64(3, {st_mode=S_IFREG|0644, st_size=18, …}) = 0
nanosleep({1, 0}, NULL) = 0
fstat64(3, {st_mode=S_IFREG|0644, st_size=18, …}) = 0
nanosleep({1, 0}, NULL) = 0
fstat64(3, {st_mode=S_IFREG|0644, st_size=18, …}) = 0
nanosleep({1, 0}, NULL) = 0
fstat64(3, {st_mode=S_IFREG|0644, st_size=18, …}) = 0
nanosleep({1, 0}, ^C <unfinished …>

Any ideas why this happens?