I’ve been running this WordPress blog on a Rackspace VPS for a few years now. At one point, over two years ago, my server running Ubuntu 10.04 would crash from time to time, usually after I published a new post. The server would run out of memory and start eating into swap; once the swap was exhausted too, the server became unresponsive, and I had to do a hard reboot from the Rackspace Control Panel to get it back up and running again.
I messed around with different configurations, changing PHP memory settings and trying different modules. That seemed to work for a while, but the problem would still come up from time to time. It wasn’t too bad, so I just dealt with it by rebooting. Then, over a month ago, it started happening almost every week, and I found it annoying enough that I had to look for another solution.
The syslog file would display something like this:
Sep 18 00:38:10 www kernel: [734558.226913] php5 invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
Sep 18 00:38:13 www kernel: [734558.226921] php5 cpuset=/ mems_allowed=0
Sep 18 00:38:13 www kernel: [734558.226926] Pid: 8880, comm: php5 Not tainted 188.8.131.52-rscloud #8
Sep 18 00:38:13 www kernel: [734558.226929] Call Trace:
Sep 18 00:38:13 www kernel: [734558.226942] [<ffffffff8107cc26>] ? dump_header+0x65/0x1a1
Sep 18 00:38:13 www kernel: [734558.226950] [<ffffffff81005f7f>] ? xen_restore_fl_direct_end+0x0/0x1
Sep 18 00:38:13 www kernel: [734558.226958] [<ffffffff8123bf92>] ? _raw_spin_unlock_irqrestore+0xf/0x10
Sep 18 00:38:13 www kernel: [734558.226963] [<ffffffff8107cda1>] ? oom_kill_process+0x3f/0x131
Sep 18 00:38:13 www kernel: [734558.226967] [<ffffffff8107d349>] ? __out_of_memory+0x8e/0x9b
Sep 18 00:38:13 www kernel: [734558.226971] [<ffffffff8107d3dc>] ? out_of_memory+0x86/0xb0
Sep 18 00:38:13 www kernel: [734558.226977] [<ffffffff8107ff33>] ? __alloc_pages_nodemask+0x4bc/0x5cb
Sep 18 00:38:13 www kernel: [734558.226983] [<ffffffff81081692>] ? __do_page_cache_readahead+0x9e/0x1b1
Sep 18 00:38:13 www kernel: [734558.226988] [<ffffffff810817c1>] ? ra_submit+0x1c/0x20
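If you want to check whether the OOM killer has been striking your own server, grepping the syslog is a quick way to find out. This assumes the default Ubuntu log location; rotated logs can be searched too:

grep -i "oom-killer" /var/log/syslog
zgrep -i "oom-killer" /var/log/syslog*    # also searches rotated/compressed logs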
Luckily, I found a solution in another blog post whose author was having the exact same problem I had. The fix is to add the following settings to the end of the /etc/sysctl.conf file:
# Prevent the OOM (Out of Memory) killer from crashing the server by
# disabling overcommit; allocations past the commit limit simply fail
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
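To apply the new settings without rebooting, reload sysctl and confirm the values took. With vm.overcommit_memory set to 2, the kernel refuses any allocation that would push committed memory past swap plus overcommit_ratio percent of physical RAM, which shows up as CommitLimit in /proc/meminfo. For example, a 512 MB server with 1 GB of swap would get a limit of about 0.8 × 512 + 1024 ≈ 1433 MB (the 512 MB figure is just an illustration, not necessarily my slice size):

sudo sysctl -p                                     # reload /etc/sysctl.conf
sysctl vm.overcommit_memory vm.overcommit_ratio    # confirm the new values
grep -i commit /proc/meminfo                       # shows CommitLimit and Committed_AS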
It’s been over a month so far since I applied this fix and the problem hasn’t come back. My memory usage has also stayed pretty consistent between 150 and 200 MB since then, so I’m pretty convinced this solution worked.
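For anyone wanting to keep an eye on memory usage the same way, a couple of stock commands are enough; this is just a minimal sketch, and any proper monitoring tool would work as well:

free -m                # memory and swap usage in megabytes
watch -n 60 free -m    # refresh the numbers every minute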