I am surprised. Doesn't a current high-grade OS like Linux (maybe not Windows) routinely close the gaps in RAM by relocating running code and data? This is quite efficiently doable in CPU architectures that use base/index/displacement memory addressing in the instruction set, which was introduced in the IBM System/360 in 1964, carried forward into Intel x86, and probably used everywhere else since then.
With base/index/displacement memory addressing, running code and data can easily be relocated by copying them to the new location and changing the contents of a few base registers. This is unlike the older style of embedding full linear addresses in the instructions, where the whole program and its data would have to have their addresses adjusted/rewritten after relocation.
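To show the principle in C (just a rough sketch of the idea, not actual 360 or x86 code): every access goes through a base pointer plus a displacement, so "relocating" the region is one copy plus one base-pointer update, and all the displacements keep working.

    /* Sketch of base+displacement addressing (illustration only).
     * Accesses are base + displacement, so relocation is a copy
     * plus a single base-pointer ("base register") update. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define REGION_SIZE 64

    int main(void)
    {
        char *base = malloc(REGION_SIZE);   /* the "base register" */
        strcpy(base + 0,  "hello");         /* store at displacement 0  */
        strcpy(base + 16, "world");         /* store at displacement 16 */

        /* Relocate: copy the region elsewhere, update only the base. */
        char *new_base = malloc(REGION_SIZE);
        memcpy(new_base, base, REGION_SIZE);
        free(base);
        base = new_base;                    /* one register update */

        /* Same displacements still resolve correctly after the move. */
        printf("%s %s\n", base + 0, base + 16);
        free(base);
        return 0;
    }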
(Yes, if this sounds "academic", I have an M.Sc. (1975) in computer science.) :)
Hartmut W Sager - Tel +1-204-339-8331
On 4 January 2016 at 11:32, Trevor Cordes <trevor@tecnopolis.ca> wrote:
It looks like an ISP DNS blockage caused one of our ps's to fall behind and have around 5900 little sub-ps's pile up. Then the oom-killer triggered. The oom-killer is actually wonderful in this case, as it logged the state of everything to disk.
Jan 4 08:55:29 kernel: [5384435.328060] Node 0 DMA: 2*4kB 1*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15744kB
Jan 4 08:55:29 kernel: [5384435.329687] Node 0 DMA32: 14*4kB 43*8kB 62*16kB 35*32kB 7*64kB 16*128kB 67*256kB 21*512kB 2*1024kB 3*2048kB 20*4096kB = 123024kB
Jan 4 08:55:29 kernel: [5384435.331315] Node 0 Normal: 15448*4kB 66*8kB 7*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 62432kB
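A quick back-of-the-envelope check of that Normal line (throwaway C of my own, with the counts hard-coded from the log, not anything from the kernel) shows nearly all of the free memory in the Normal zone sitting in order-0 4kB pages, with nothing free at 32kB and up:

    /* Sum the per-order free counts from the "Node 0 Normal"
     * buddy-allocator line above and show where the free memory sits. */
    #include <stdio.h>

    int main(void)
    {
        /* free blocks at 4, 8, 16, ..., 4096 kB, taken from the log */
        int counts[] = {15448, 66, 7, 0, 0, 0, 0, 0, 0, 0, 0};
        int size_kb = 4, total = 0;

        for (int i = 0; i < 11; i++) {
            total += counts[i] * size_kb;
            printf("%5d blocks of %4d kB = %6d kB\n",
                   counts[i], size_kb, counts[i] * size_kb);
            size_kb *= 2;
        }
        printf("total free in Normal zone: %d kB\n", total);  /* 62432 kB */
        return 0;
    }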
So it is fragmentation, or simple RAM exhaustion, from runaway small ps's caused by the blocked DNS. Time to rejig the app to handle DNS going down. :-)