[gpaw-users] general comment on memory leaks.

abhishek khetan askhetan at gmail.com
Thu Jan 14 17:38:07 CET 2016


Dear gpaw developers,

I have found that, in general, for large systems (> 150 atoms) or for memory-intensive methods like GW, I always hit segmentation faults of the same kind. I have a ScaLAPACK-compiled working build of gpaw-0.12 that passes all tests in the suite. For a small system, the various methods in GPAW run properly, but for larger systems of the same kind, at the sizes I need, GPAW fails with exactly the same error:

gpaw-python:18622 terminated with signal 11 at PC=3d8d6acba8
SP=7ffe9b9d47b0.  Backtrace:

I have posted about this in the context of the GW method on the gpaw forums a
couple of dozen times before, but I haven't seen anyone else report similar
errors. Now I am encountering the same unresolved errors even in simple
relaxation problems where the unit cell happens to be quite large. For
slightly smaller cases, where the systems do converge, I see that the memory
requirements are actually very modest (1-2 GB per core on 60 cores).
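One way to check the per-core figure above (a generic sketch using only the Python standard library, not a GPAW API) is to have each rank report its peak resident memory at the end of the script:

```python
# Report this process's peak resident set size (RSS).
# On Linux, ru_maxrss is given in kilobytes; on macOS it is in bytes,
# so the conversion below assumes a Linux cluster node.
import resource

peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
peak_gb = peak_kb / (1024.0 * 1024.0)
print('peak RSS of this rank: %.2f GB' % peak_gb)
```

Printing this from every MPI rank just before the job ends (or crashes) would show whether any single core is quietly exceeding the 1-2 GB estimate.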

Are there any ideas, methods, or procedures by which I can resolve this error as a user? Am I allowed to open a ticket on this, or request one, on the Trac?
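One debugging step I can suggest from the user side (my own sketch, not a GPAW facility) is to enable the faulthandler module at the top of the run script. It is in the Python 3 standard library (a backport package exists for Python 2) and dumps the Python-level traceback when the process receives SIGSEGV, which can help localize where in the run the signal-11 crash occurs:

```python
# Enable crash tracebacks before any GPAW work starts.
# On SIGSEGV, faulthandler writes the current Python stack of all
# threads to stderr, giving more context than the bare
# "terminated with signal 11" message.
import sys
import faulthandler

faulthandler.enable(file=sys.stderr, all_threads=True)
```

With this in place, the batch job's stderr file should contain a per-thread traceback alongside the low-level backtrace already shown above.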

Thanks and Best,

askhetan

