I had a box with 32GB of RAM whose mysqld just died (no logs, it just died).
init tried to restart it (RHEL 6) and I got this:
160104 8:55:55 InnoDB: Initializing buffer pool, size = 3.9G
160104 8:55:55 InnoDB: Error: cannot allocate 4194320384 bytes of
InnoDB: memory with malloc! Total allocated memory
InnoDB: by InnoDB 48401248 bytes. Operating system errno: 12
InnoDB: Check if you should increase the swap file or
InnoDB: ulimits of your operating system.
InnoDB: On FreeBSD check you have compiled the OS with
InnoDB: a big enough maximum process size.
InnoDB: Note that in most 32-bit computers the process
InnoDB: memory space is limited to 2 GB or 4 GB.
InnoDB: We keep retrying the allocation for 60 seconds...
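Operating system errno 12 is ENOMEM, and the message itself points at swap and ulimits. Before blaming fragmentation, a quick sanity check of the limits and kernel overcommit policy is worth running (a sketch; run it in the same environment init uses to start mysqld):

```shell
# errno 12 (ENOMEM) means the kernel refused the allocation; check the
# address-space limit in the shell/environment that launches mysqld:
ulimit -v        # max virtual memory in KB ("unlimited" is typical)
ulimit -a        # all limits, for good measure

# Kernel overcommit policy: 0 = heuristic, 1 = always allow, 2 = strict
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio

# Under strict accounting (mode 2), CommitLimit caps the sum of all
# allocations; compare Committed_AS against it:
grep '^Commit' /proc/meminfo
```

With overcommit_memory=2 and only 2G of swap, CommitLimit can be far below physical RAM, and a burst of child processes could push Committed_AS close to it right when mysqld asks for 4G.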
The 4G buffer pool setting had been working for months until just now.
I reduced it to 3G and mysqld restarted OK.
Top shows:
Mem: 32827576k total, 2274940k used, 30552636k free, 165052k buffers
Swap: 2097148k total, 276068k used, 1821080k free, 413640k cached
Cacti (up until mysql crashed, naturally) shows process load going up, but
that is normal for this box, and RAM usage stayed constant right before the
crash. However, we did launch a long-running process that spawns a lot of
little child processes (maybe 100 at a time). This is all normal for this
box, and it has never done this before.
I'm guessing RAM fragmentation:
# cat /proc/buddyinfo
Node 0, zone DMA 2 1 1 1 1 0 1 0 1 1 3
Node 0, zone DMA32 1815 1789 1553 1294 1007 688 415 212 103 67 477
Node 0, zone Normal 32859 67280 77301 62304 45390 29754 19144 11069 4786 530 177
(Anything else to check?)
It looks like there are tons of free blocks of every size, assuming mysql's
memory comes out of the "Normal" zone.
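For reference, each /proc/buddyinfo column after the zone name counts free blocks of order 0 through 10, i.e. 2^order contiguous pages. A quick awk sketch (assuming 4 KiB pages) to total the free memory per zone:

```shell
# Sum free memory per zone from /proc/buddyinfo.
# Field 4 is the zone name; field N>=5 counts free blocks of
# 2^(N-5) contiguous pages, assumed 4 KiB each.
awk '{
    zone = $4
    free_kb = 0
    for (i = 5; i <= NF; i++)
        free_kb += $i * 4 * 2^(i - 5)
    printf "%-8s %10d KiB free\n", zone, free_kb
}' /proc/buddyinfo
```

On the Normal line above, the order-10 column alone (177 free 4 MiB blocks) is roughly 700 MiB of contiguous physical memory. Note also that a user-space malloc only needs contiguous *virtual* address space, not contiguous physical pages, so buddy fragmentation mostly matters for kernel-side allocations.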
Any ideas? My only thought now is to reboot the server more often to clear
fragmentation. Maybe newer kernels handle this "bug" a bit better, but I'm
stuck on RHEL 6 right now.
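Rather than scheduled reboots, you can ask the kernel to defragment on demand. A sketch (as root; /proc/sys/vm/compact_memory only exists if memory compaction is in your kernel, which was backported into some RHEL 6 kernels):

```shell
# Flush dirty pages first so caches can actually be dropped.
sync

# Drop page cache, dentries, and inodes (root only; harmless, but I/O is
# temporarily slower while the caches refill):
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi

# Trigger explicit memory compaction, if this kernel exposes it:
if [ -w /proc/sys/vm/compact_memory ]; then
    echo 1 > /proc/sys/vm/compact_memory
fi
```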