[gpaw-users] parallel settings

Ask Hjorth Larsen asklarsen at gmail.com
Fri Mar 13 11:11:48 CET 2015


Right.  Then k-point parallelization and band parallelization are the
only options.
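
For the dielectric function itself, the usual route goes through
gpaw.response.df on top of a converged PW ground state.  Roughly, as
a sketch (the file name is illustrative; see the response-function
docs for the exact parameters):

# Continuing from the calc in your script, once the ground state has
# converged: store the wave functions, then compute the dielectric
# function from the saved file.
calc.write('gs.gpw', mode='all')

from gpaw.response.df import DielectricFunction
df = DielectricFunction(calc='gs.gpw')
df.get_dielectric_function()  # optical limit, q -> 0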

Best regards
Ask

2015-03-12 21:45 GMT+01:00 Zhiyao Duan <gump_813 at hotmail.com>:
> Thank you Ask.
>
> I want to eventually calculate the dielectric function of the system.
> From what I read in the online docs, PW mode is the only mode that
> enables calculating the dielectric function for an extended system,
> is that right?
>
> Zhiyao
>
>> Date: Thu, 12 Mar 2015 19:13:19 +0100
>> Subject: Re: [gpaw-users] parallel settings
>> From: asklarsen at gmail.com
>> To: gump_813 at hotmail.com
>> CC: gpaw-users at listserv.fysik.dtu.dk
>
>>
>> Hello Zhiyao
>>
>> You have 4 k-points which probably reduce to 2 in the irreducible BZ.
>>
>> That means you have 48 cores sharing two k-points, so each group of
>> 24 will band-parallelize among themselves as chosen by the defaults.
>> Since PW mode does not have domain decomposition, the only allowed
>> band parallelization within each group is 24 (which is why 4 is an
>> illegal number). That is quite a lot, and I would expect it to run a
>> bit slowly.
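>>
>> Concretely, the parallel group sizes must multiply out to the total
>> number of cores.  With your 48 cores, something like this should pass
>> the check (a sketch reusing the settings from your script):
>>
>> # 2 k-point groups * 24 band groups = 48 cores.  With 'band': 4 the
>> # product is only 2 * 4 = 8, hence "All the CPUs must be used".
>> calc = GPAW(xc='PBE',
>>             mode='pw',
>>             kpts=(4, 1, 1),
>>             eigensolver='rmm-diis',
>>             parallel={'kpt': 2, 'band': 24},
>>             txt='au_tio2_relax.txt')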
>>
>> As Marcin says, FD mode might well be more efficient.
>>
>> Best regards
>> Ask
>>
>> 2015-03-12 17:22 GMT+01:00 Zhiyao Duan <gump_813 at hotmail.com>:
>> > Hello everyone,
>> >
>> > I am just starting to use GPAW and have a problem setting the
>> > parallel parameters. My system contains 96 atoms, 840 valence
>> > electrons, and 720 orbitals, with 2 k-points in the IBZ and
>> > 28*180*336 grid points. I was trying to run the calculation on 48
>> > cores using the following script:
>> >
>> > import numpy as np
>> > from ase.io import read, write
>> > from ase.optimize import BFGS
>> > from gpaw import GPAW
>> >
>> > np.seterr(under='ignore')
>> >
>> > # Read the structure from a VASP-format file.
>> > model = read('au_tio2.vasp', format='vasp')
>> >
>> > calc = GPAW(xc='PBE',
>> >             mode='pw',
>> >             kpts=(4, 1, 1),
>> >             random=True,
>> >             eigensolver='rmm-diis',
>> >             txt='au_tio2_relax.txt')
>> >
>> > model.set_calculator(calc)
>> > opt = BFGS(model)
>> > opt.run(fmax=0.05)
>> >
>> > write('final_relax.vasp', model, format='vasp', direct=True)
>> >
>> > The job runs normally, but much slower than VASP with a similar
>> > setup. I thought the slowness was due to parallelization, so I added
>> > parallel={'band': 4} to divide the bands into 4 groups. This time
>> > the job failed with the error:
>> >
>> > File "/usr/local/gpaw/lib/python/gpaw/mpi/__init__.py", line 939,
>> > in autofinalize
>> >     raise RuntimeError('All the CPUs must be used')
>> > RuntimeError: All the CPUs must be used
>> >
>> > Can anybody help me figure out why this happens?
>> >
>> > Another question: how should I compare the speed of GPAW and VASP?
>> > As I mentioned above, VASP seems much faster on my system. Maybe
>> > this is due to the number of valence electrons included in the
>> > calculation? In my case, a Ti atom has 12 valence electrons in GPAW
>> > compared to 4 in VASP. Or is the better performance of VASP due to
>> > some other factor, such as parallelization?
>> >
>> > Thank you guys!
>> >
>> > Zhiyao
>> >
>> > _______________________________________________
>> > gpaw-users mailing list
>> > gpaw-users at listserv.fysik.dtu.dk
>> > https://listserv.fysik.dtu.dk/mailman/listinfo/gpaw-users

