[gpaw-users] general comment on memory leaks.

Marcin Dulak mdul at dtu.dk
Mon Jan 18 14:42:02 CET 2016


Hi,

in https://listserv.fysik.dtu.dk/pipermail/gpaw-users/2016-January/003848.html
I see "Swap: 0.000k total". Does LSF allow you to control the amount of swap used?
This may be a way to avoid the killing of gpaw-python processes that temporarily exceed the physical memory.

Best regards,

Marcin

________________________________


Dear Marcin, and Ask,

I am indeed on this cluster, and I have already used both of these tools. When I use r_memusage to check the peak physical memory, the peak is on the order of a few MB and the process gets killed right at the beginning, with the only output being:


  ___ ___ ___ _ _ _
 |   |   |_  | | | |
 | | | | | . | | | |
 |__ |  _|___|_____|  0.12.0.13279
 |___|_|

The same is not the case when I take a pre-converged system and run the r_memusage script: it shows a good 2.5 GB (and rising) before I kill the process, which I can see is running fine. This is what I mean when I say that the allocation doesn't even start in the unconverged cases. Using eigensolver=RMM_DIIS(keep_htpsit=False) has exactly the same problem. Is there a way I can trick GPAW into requesting much less memory from the cluster? I want to try this because, as I mentioned, at peak my jobs don't need more than 2 GB per core, while I usually request 8 GB (albeit to no avail).
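
For completeness, here is a minimal sketch of the kind of reduced-memory run I mean; the molecule and parameters are placeholders, and the RMM_DIIS import path is an assumption that may differ between GPAW versions:

# Hypothetical minimal example: plane-wave ground state with the
# reduced-memory eigensolver setting (keep_htpsit=False avoids storing
# a separate copy of H|psit>).
from ase import Atoms
from gpaw import GPAW, PW
from gpaw.eigensolvers import RMM_DIIS  # import path may vary with GPAW version

atoms = Atoms('H2', positions=[(0, 0, 0), (0, 0, 0.74)],
              cell=(6.0, 6.0, 6.0), pbc=True)

calc = GPAW(mode=PW(400),                             # plane-wave cutoff in eV
            eigensolver=RMM_DIIS(keep_htpsit=False),  # trade some speed for less memory
            txt='h2_rmm_diis.txt')
atoms.set_calculator(calc)
atoms.get_potential_energy()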

Best,


On Sat, Jan 16, 2016 at 1:10 PM, Marcin Dulak <mdul at dtu.dk> wrote:
Hi,

are you on this cluster?
https://doc.itc.rwth-aachen.de/display/CC/r_memusage
https://doc.itc.rwth-aachen.de/display/CC/Resource+limitations+on+dialog+systems
It may be that the batch system (LSF) kills jobs that exceed the requested resident memory.
The two links above may help you diagnose that.
I recall that GPAW's memory estimate is not very accurate for standard ground-state PW or grid-mode jobs
(~20% off) and may be very inaccurate (an order of magnitude) for vdW or LCAO jobs (Ask, correct me if this is no longer the case).
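
To see what GPAW itself expects, the memory estimate is printed near the top of the text output; a rough sketch (hypothetical system, and the dry-run flag syntax may differ between versions) of checking it before submitting:

# Hypothetical sketch: GPAW writes a memory estimate in the header of its
# text output. A "dry run", e.g. something like
#   gpaw-python --dry-run=8 script.py
# (check your version's documentation for the exact flag), prints the
# estimate for 8 processes without doing the actual calculation.
from ase import Atoms
from gpaw import GPAW, PW

atoms = Atoms('H2', positions=[(0, 0, 0), (0, 0, 0.74)],
              cell=(6.0, 6.0, 6.0), pbc=True)
calc = GPAW(mode=PW(400), txt='estimate.txt')  # memory estimate appears in estimate.txt
atoms.set_calculator(calc)
atoms.get_potential_energy()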

Best regards,

Marcin
_______________________________________________
gpaw-users mailing list
gpaw-users at listserv.fysik.dtu.dk
https://listserv.fysik.dtu.dk/mailman/listinfo/gpaw-users



--
|| radhe radhe ||

abhishek

