[gpaw-users] All electron density
Andrew Logsdail
ajl340 at bham.ac.uk
Fri Mar 4 14:33:12 CET 2011
Dear Ask, and other GPAW users,
I have encountered memory problems with get_all_electron_density(), and
when using the suggestions from the previous mailing-list thread (attached
below) I receive the following error:
rho = atoms.calc.get_all_electron_density(gridrefinement=4, broadcast=False) * Bohr**3

TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
For smaller systems I can collect the electron density fine without the
broadcast=False keyword, but here I am having problems. Do I need to
typecast the returned density before scaling it by Bohr**3?
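My guess at the intended pattern, assuming that with broadcast=False the
full array is returned only on the master rank (and the other ranks get
None), would be something like:

    from ase.io import write
    from ase.units import Bohr
    from gpaw.mpi import world

    # Only the master rank receives the full all-electron density when
    # broadcast=False; the other ranks get None, hence the TypeError above.
    n = atoms.calc.get_all_electron_density(gridrefinement=4, broadcast=False)
    if world.rank == 0:
        rho = n * Bohr**3
        write('density.cube', atoms, data=rho)

Is that the right way to handle it?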
Many thanks,
Andy
On 19/09/10 17:29, Ask Hjorth Larsen wrote:
> Dear Anton
>
> On Tue, 14 Sep 2010, Anton Michael Havelund Rasmussen wrote:
>
>
>> Dear All,
>>
>> I'm trying to export the all-electron density from GPAW, but with grid
>> refinement set to 4 I run into memory problems. The script pasted
>> below will successfully write a gpw file (using around 4 GB of memory),
>> but when writing the cube file the memory usage will exceed the 24 GB of
>> RAM available and start using swap until it dies: "gpaw-python:
>> c/extensions.h:40: gpaw_malloc: Assertion `p != ((void *)0)' failed."
>>
>> Am I doing this completely wrong? Should I just use more nodes to get
>> more memory? It seems strange to me that the interpolation needs so
>> much memory, since the gpw file is < 40 MB, making the interpolated
>> density around 4^3 * 40 MB = 2.6 GB?
>>
> The gpw file contains other stuff as well, so this figure is not reliable.
> However, the factor should be 8^3, since each refinement is a factor of 2^3 = 8.
>
> (...)
>
>> calc = GPAW(mode='lcao',
>> basis='dzp',
>> parallel={'sl_default':(4,6,64)},
>> poissonsolver=PoissonSolver(relax='GS', eps=1e-8),
>> mixer=Mixer(nmaxold=7, beta=0.05, weight=100),
>> gpts=(96,84,136),
>> nbands=-200,
>> occupations=FermiDirac(0.1),
>> txt='calc_lcao_density.out',
>> xc='PBE')
>>
> (...)
>
> With that number of grid points, the array size will be:
>
>     >>> 96 * 84 * 136 * 8. / 1024**3 * 8**3
>     4.18359375
>
> (GiB)
>
> The grid descriptor will zero-pad the array, i.e. add zeros along the
> boundary in some way. To do this it will allocate a new array of the
> same size, momentarily doubling the memory use. This still doesn't
> quite explain why it runs out of memory, of course.
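> Roughly, the padding step amounts to something like the following (a
> schematic sketch, not GPAW's actual code; the small coarse-grid shape is
> used just for illustration):
>
>     import numpy as np
>
>     # Schematic: zero-padding an array along its boundary requires a
>     # second, slightly larger array, so for a moment the original and
>     # the padded copy are both held in memory.
>     n = np.ones((96, 84, 136))          # stand-in for the density array
>     padded = np.zeros((98, 86, 138))    # second allocation: memory doubles
>     padded[1:-1, 1:-1, 1:-1] = n        # copy the data into the interior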
>
> By default this happens on every core (each core will hold the full
> density), so that's a minimum of 8.2 GiB per core, or 64 GiB on one
> node, which is definitely the problem. So set broadcast=False when
> retrieving the all-electron density; that should solve the problem.
>
> (With apologies for all the extraneous, less important information; I
> didn't figure out about broadcast until I had written all the other
> stuff.)
>
>
> Regards
> Ask
--
Andy
=====================
Andrew James Logsdail
Computational Chemistry
School of Chemistry
College of Engineering and Physical Sciences
University of Birmingham
Edgbaston
B15 2TT
T: +44(0)121 414 7479
F: +44(0)121 414 4403
E: ajl340 at bham.ac.uk