[gpaw-users] Query about parallelism scheme in GPAW
Richard Terrett
Richard.Terrett at anu.edu.au
Fri Jan 25 05:26:28 CET 2019
Hi,
I'm trying to probe efficient parallelism of GPAW on a desktop system (4 cores, 8 threads with hyperthreading). When I run mpirun -n x gpaw-python with x > 1, I get 8*x gpaw-python processes; with x = 8, that is 64 gpaw-python processes, which far exceeds the number of logical cores on my system. For the system I'm looking at (plane-wave mode, 800 eV cutoff, 64 k-points, PBE, 13 atoms), SCF iterations are slower with -n 8 than with -n 1, which suggests contention. Nevertheless, when I run with -n 1, GPAW reports that it is running on '1 CPU' and, as far as I can tell, does not parallelise over k-points, bands, etc. Moreover, in that scenario I cannot supply values greater than 1 to the relevant keys of the parallel dict without GPAW crashing.
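For reference, the kind of script I'm running looks roughly like this (the structure below is just a placeholder for my real 13-atom system, and the 'kpt' entry is only an example of the parallel keys I mean, i.e. the ones that crash when the job is started with -n 1):

    from ase.build import bulk
    from gpaw import GPAW, PW

    # Placeholder structure; my real system has 13 atoms.
    atoms = bulk('Si')

    calc = GPAW(mode=PW(800),            # 800 eV plane-wave cutoff
                xc='PBE',
                kpts=(4, 4, 4),          # 64 k-points
                eigensolver='dav',       # Davidson
                parallel={'kpt': 4},     # values > 1 here crash under -n 1
                txt='scf.txt')
    atoms.set_calculator(calc)
    atoms.get_potential_energy()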
To be clear, even with -n 1 I still get 8 GPAW processes, although the system does not get saturated.
Is this expected GPAW behaviour? I can't really tell whether GPAW is parallelising effectively or just spinning its wheels, and I don't know how to restrict GPAW to fewer than 8 threads for comparison. Even if I just run my script via python3 (rather than gpaw-python), it still spawns 16 threads during the SCF loop and runs at about the same speed as a job started with mpirun.
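My working assumption is that the extra threads come from a threaded BLAS/OpenMP library underneath numpy rather than from GPAW itself, so I have been experimenting with capping the standard thread-count environment variables before anything numerical is imported, along these lines (these are the generic OpenMP/OpenBLAS/MKL variables, not GPAW-specific settings):

    import os

    # Cap the OpenMP/BLAS thread pools before numpy/gpaw are imported,
    # so that each MPI rank should stay single-threaded.
    os.environ['OMP_NUM_THREADS'] = '1'
    os.environ['OPENBLAS_NUM_THREADS'] = '1'
    os.environ['MKL_NUM_THREADS'] = '1'

    from gpaw import GPAW  # imported only after the caps are set

Is that the intended way to keep each rank to a single thread, or does gpaw-python handle this differently?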
My specs: gpaw-1.5.1, ase-3.18.0b1 (as reported by importing ase in Python) or ase-3.15.0 (installed from the Ubuntu repos), mpirun (OpenMPI) 2.1.1, Intel Core i7-4790 (4 cores, 8 threads), Ubuntu 18.04 x86_64.
Oh, I'm also using the Davidson eigensolver, but switching to rmm-diis did not change the speed meaningfully.
Any help that can be provided regarding this question is appreciated.
Dr Richard Terrett
Postdoctoral Fellow, Computational Quantum Chemistry Group
Research School of Chemistry, ANU