[gpaw-users] How does the number of k-points affect the performance of a run?
Vladislav Ivanistsev
olunet at gmail.com
Mon Jul 9 16:36:10 CEST 2012
Dear Ask,
could you please specify how I can force parallelization over k-points?
In my script there is this line: kpts=(3, 4, 1), without specifying
parallel options. Then in the output I see:
Total number of cores used: 120
Parallelization over k-points: 6
Domain Decomposition: 2 x 2 x 5
Diagonalizer layout: Serial LAPACK
Does this mean that not all (3*4 = 12) k-points are parallelized over?
How can I suppress domain decomposition?
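If I understand the manual correctly, the `parallel` keyword controls
this; assuming all 3*4 = 12 k-points survive the symmetry reduction,
would something like the following force full parallelization over
k-points?

from gpaw import GPAW

calc = GPAW(h=0.20,
            kpts=(3, 4, 1),
            # my guess: 12 k-point groups, so the remaining
            # 120/12 = 10 cores per group go to domain decomposition
            parallel={'kpt': 12})

And is parallel={'domain': 1} the way to suppress domain decomposition,
provided the k-point (and band) groups alone can fill all the cores?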
Sincerely yours,
Vladislav
On Wed, Jun 6, 2012 at 2:52 PM, Ask Hjorth Larsen <asklarsen at gmail.com> wrote:
> Hi
>
> 2012/6/6 Juho Arjoranta <juho.arjoranta at helsinki.fi>:
> > Hello all,
> >
> > I've been testing how the number of k-points affects the performance
> > of a run. I ran some calculations for fcc(100) copper. The idea was
> > to check whether having an odd or even number of k-points would
> > affect the calculations.
> >
> > Here is an example script from one of the runs:
> >
> > import create_rev
> > from ase.parallel import paropen
> > from gpaw import GPAW
> > from ase.optimize import QuasiNewton
> >
> > resultfile = paropen('test-results.txt', 'w')
> >
> > # This function creates copper bulk with two conventional unit cells
> > # with a lattice constant of 'a' on top of each other and sets pbc
> > # in all directions
> >
> > bulk = create_rev.create(symbol='Cu', a=3.643, size=(1, 1, 2),
> >                          fix=0, pbc=(True, True, True))
> >
> > k = (12, 12, 7)
> >
> > calc = GPAW(h=0.20,
> >             kpts=k,
> >             xc='PBE',
> >             txt='12-12-7-test.txt')
> >
> > bulk.set_calculator(calc)
> >
> > # Relaxation of the bulk
> >
> > relax = QuasiNewton(bulk)
> > relax.run(fmax=0.05)
> >
> > energy = bulk.get_potential_energy()
> >
> > print >> resultfile, k, energy
> >
> > The data collected:
> >
> > kpts     memory usage  iterations  time     energy (eV)
> > 9-9-4    134.82 MB     30          220.390  -30.1238956995
> > 10-10-5  134.82 MB     31          285.270  -30.1031609666
> > 10-10-6  127.43 MB     31          278.306  -30.1166931548
> > 11-11-5  159.77 MB     31          396.678  -30.0954717007
> > 11-11-6  154.20 MB     32          408.589  -30.1100254919
> > 12-12-6  163.02 MB     30          386.364  -30.1079624183
> > 12-12-7  259.42 MB     31          609.823  -30.1022882132
> > 13-13-6  270.53 MB     30          590.099  -30.1067171138
> > 13-13-7  230.49 MB     31          796.104  -30.1023366701
> > 14-14-7  270.53 MB     30          760.521  -30.1013794403
> > 14-14-8  230.54 MB     30          759.547  -30.1037105899
> >
> > It would seem that having an even number of k-points in all
> > directions is beneficial for the calculations. Is this specific to
> > the system studied here? If not, does anyone have an idea what could
> > cause this?
> >
> > Juho Arjoranta
>
> Unfortunately it isn't that simple. It depends very strongly on how
> many CPUs are used for k-point parallelization vs. domain
> decomposition, the shape of the domains, how many bands there are, and
> so on. Be sure to check the parallelization that GPAW chooses, and
> probably modify it, because with a large number of k-points it will
> probably not choose anything particularly good.
>
> Rule of thumb: Make sure that your calculations are parallel mostly
> over k-points, but also a bit over domains if there are many k-points.
> The parallelization info is in the text output.
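>
> As a sketch with made-up numbers (not tested): say you have 64 cores
> and the symmetry-reduced grid leaves 16 irreducible k-points; then
> something like
>
> from gpaw import GPAW
>
> calc = GPAW(h=0.2,
>             kpts=(8, 8, 2),
>             xc='PBE',
>             txt='out.txt',
>             parallel={'kpt': 16,     # mostly over k-points
>                       'domain': 4})  # a bit over domains: 16 * 4 = 64
>
> keeps most of the parallelization over k-points.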
>
> Regards
> Ask