[gpaw-users] Some questions regarding the g0w0 method
abhishek khetan
askhetan at gmail.com
Fri Jun 17 15:02:40 CEST 2016
Dear gpaw developers,
I have been trying to use the G0W0 method in GPAW. My system is fairly
small: 12 atoms (4 NaO2 units) in a 5.5 Angstrom box. I wrote out the
plane-wave ground state using the calculator with
*mode=PW(500), spinpol=True, kpts={'size': (6,6,6), 'gamma': True},*
followed by the command
*calc.diagonalize_full_hamiltonian(expert=True, nbands=1000)*
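For reference, here is a minimal sketch of that ground-state step (the
structure file 'NaO2.traj' is just a placeholder for my 12-atom cell):

    from ase.io import read
    from gpaw import GPAW, PW

    atoms = read('NaO2.traj')  # placeholder: 4 x NaO2 in a 5.5 Angstrom box

    calc = GPAW(mode=PW(500),                             # 500 eV PW cutoff
                spinpol=True,
                kpts={'size': (6, 6, 6), 'gamma': True},  # Gamma-centred grid
                txt='PW_SPIN.txt')
    atoms.calc = calc
    atoms.get_potential_energy()           # self-consistent ground state

    # Unoccupied bands needed for the response function:
    calc.diagonalize_full_hamiltonian(expert=True, nbands=1000)
    calc.write('PW_SPIN.gpw', mode='all')  # 'all' keeps the wave functions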
Next I wanted to run the G0W0 part in two different ways: one with
*ppa=True*, and the other treating the frequency dependence explicitly with
*domega0=0.02, omega2=10.0*. The parameters common to both runs were
*calc='../PW_SPIN.gpw', nbands=400, bands=(0,81), ecut=100.0,
filename='G0W0_SPIN', savepckl=True*
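Put together, this is how I set up the two runs (a sketch of my scripts;
each variant was submitted as a separate job):

    from gpaw.response.g0w0 import G0W0

    common = dict(calc='../PW_SPIN.gpw',
                  nbands=400,      # bands in the response function
                  bands=(0, 81),   # bands whose QP energies are corrected
                  ecut=100.0,      # response-function cutoff (eV)
                  filename='G0W0_SPIN',
                  savepckl=True)

    # Run 1: plasmon-pole approximation
    gw = G0W0(ppa=True, **common)
    gw.calculate()

    # Run 2: explicit frequency dependence (this one segfaults, see below)
    gw = G0W0(domega0=0.02, omega2=10.0, **common)
    gw.calculate()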
I have a few questions now.
1) The frequency-dependence simulations crash with a segmentation fault
after calling the response function just once. The memory usage, as far as
I could see from top, was < 6 GB per core over 144 processes. The *w.txt
file with the response-function output ends with the following lines:
---------------------------------------------------------
Initializing PAW Corrections: 10.9020440578 s
|----------------------------------------| Time: 10.902s
Memory used: 5538.625 MB / CPU
Hilbert transform done
Thu Jun 16 14:17:31 2016
Called response.chi0.calculate with
    q_c: [0.000000, 0.000000, 0.333333]
    Number of frequency points: 1003
    Planewave cutoff: 100.000000
    Number of spins: 2
    Number of bands: 400
    Number of kpoints: 216
    Number of irreducible kpoints: 68
    Number of planewaves: 373
    Broadening (eta): 0.100000
    world.size: 144
    kncomm.size: 144
    blockcomm.size: 1
    Number of completely occupied states: 38
    Number of partially occupied states: 38
    Keep occupied states: True
Memory estimate of potentially large arrays:
    chi0_wGG: 2129.308884 M / cpu
    Occupied states: 38.185547 M / cpu
    Memory usage before allocation: 5551.355469 M / cpu
The ground calculation does not support time-reversal symmetry possibly
because it has an inversion center or that it has been manually
deactivated. Point group included. Time reversal not included. Disabled
non symmorphic symmetries. Found 2 allowed symmetries. 126 groups of
equivalent kpoints. 41.6666666667% reduction.
    ( 1  0  0)    ( 0  1  0)
    ( 0  1  0)    ( 1  0  0)
    ( 0  0  1)    ( 0  0  1)
Initializing PAW Corrections
---------------------------------------------------------
As you can see, the segfault occurs while the PAW corrections are being
initialized for the second time. I do not understand why there are
segfaults when I have more than 21 GB per core available. The error file
does not show in which python/gpaw routine the crash happens.
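If I check the chi0_wGG estimate against the printed parameters, it is
consistent: 1003 frequency points x 373 x 373 plane waves x 16 bytes
(complex128) gives about 2129 MiB per core, so the array itself should fit
comfortably in the available memory.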
2) The ppa=True simulations are running (no memory problems yet, which is a
lifesaver!), but they will not manage to finish all 68 points in the
irreducible BZ within the 24-hour run-time limit on our cluster. Could you
please suggest the correct use of the "restartfile" keyword of the G0W0
method? There is no documentation for it. Should I just pass whatever the
filename was as the restart-file name, as in the sketch below?
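This is my unverified guess at the usage (assuming the keyword takes a path
that is written during the run and re-read when the job is resubmitted):

    from gpaw.response.g0w0 import G0W0

    gw = G0W0(calc='../PW_SPIN.gpw',
              nbands=400, bands=(0, 81), ecut=100.0,
              ppa=True,
              filename='G0W0_SPIN',
              savepckl=True,
              restartfile='G0W0_SPIN.restart')  # guessed: path for restart data
    gw.calculate()  # resubmit the same script after hitting the time limit?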
3) Is truncation necessary for bulk systems?
4) Is it still possible to use wpar for parallelizing over the frequency
grid?
5) Is the memory requirement usually much higher when treating the
frequency dependence explicitly in G0W0?
My aim is to benchmark the ppa=True parameters against the explicit
frequency dependence and then run G0W0 on similar systems.
Any help is greatly appreciated. Thanks
--
Regards,
abhishek