[gpaw-users] Speeding up calculation for DFT/MD?

Ask Hjorth Larsen asklarsen at gmail.com
Sat Apr 20 18:18:11 CEST 2013


Dear Toma Susi

(I assume it was you who also asked on IRC; apologies for the
significant idle time.)

2013/4/20 Toma Susi <toma.susi at aalto.fi>:
> Dear gpaw-users,
>
>
> Background:
> I am setting up a series of calculations to study the electron beam damage
> of carbon nanomaterials. To get at the physically relevant numbers, the
> established procedure is to give a target atom a starting momentum and then
> run an MD simulation to see whether the target atom escapes the structure or is
> recaptured by the vacancy. This methodology has been employed to great
> effect by Kotakoski et al. using VASP. They also found that tight-binding
> (at least in the typical parametrization) only gives a correct description
> for all-carbon systems. For the systems I am interested in, DFT is needed.
>
> Understandably, running an MD simulation with full DFT is rather heavy for
> the ~100-atom systems that are needed to ensure there are no spurious unit
> cell edge effects, and for the up to 1000 timesteps of 0.1 fs required to give a
> sufficient trajectory. In VASP, setting the parameter PREC to low or medium
> was used to speed up the calculations, with the justification that errors
> from the MD steps are dominant anyway.


You could consider using LCAO mode[1], which will run many times faster
at the expense of some accuracy, but still gives ab initio results and can be
used for MD on quite large systems with complex electronic
structure.[2]

For systems larger than ~40 atoms (at around 15 basis functions per
atom) you should use ScaLAPACK.  I also ran MD on 150-atom clusters
(about 2000 MD steps) using a basis set with two s-orbitals, one
d-orbital and a p-type polarization function.
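
Something along these lines is a reasonable starting point.  This is a
minimal sketch, not a tested input: the structure setup via
ase.build.graphene, the smearing width, and the ScaLAPACK process grid
and block size are all assumptions to be tuned to your own system and
core count.

from ase.build import graphene
from gpaw import GPAW, FermiDirac

# Assumed structure: an 8x6 graphene supercell with 6 Angstrom of vacuum;
# substitute your own Atoms object as needed.
atoms = graphene(size=(8, 6, 1), vacuum=6.0)

calc = GPAW(mode='lcao',
            basis='dzp',                          # double-zeta polarized basis
            xc='PBE',
            kpts=(5, 5, 1),
            occupations=FermiDirac(0.1),          # smearing width in eV (assumption)
            parallel={'sl_default': (4, 4, 64)},  # ScaLAPACK grid and block size; tune to your cores
            txt='lcao.txt')
atoms.set_calculator(calc)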

>
> I've established an 8x6 unit cell of graphene with a grid spacing of 0.18 Å and
> 5x5x1 k-points as my reference system, ensuring full convergence in all the
> characteristics I am interested in. However, running the MD simulation with
> this accuracy results in a 6.5-minute SCF cycle time for each time step when
> running with 128 cores on the new CSC supercluster Taito
> (http://datakeskus.csc.fi/en/superkoneet-ja-infra?param=param). Thus I would
> like to find ways to speed up the calculations without excessively
> compromising on accuracy. And yes, DFT really is needed for the systems I am
> interested in :)

LCAO, a grid spacing of 0.2 Å or more (although you may have to
check that the MD behaviour is still realistic if you use large
values!), and be sure to use xc='oldLDA' or 'oldPBE' if you can, as
these are much faster than the default libxc implementation.  Also be
sure to optimize the Poisson solver and grid (this also goes for FD
calculations where performance is critical).[3]
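
As a rough illustration of those keywords (values are placeholders; the
Poisson solver settings in particular are only an assumption and should
be checked against the notes in [3]):

from gpaw import GPAW
from gpaw.poisson import PoissonSolver

calc = GPAW(mode='lcao',
            basis='dzp',
            xc='oldPBE',   # built-in implementation, faster than libxc
            h=0.2,         # coarser grid; alternatively set gpts=(...) explicitly
                           # so the grid dimensions divide well for multigrid
            kpts=(5, 5, 1),
            poissonsolver=PoissonSolver(relax='GS', eps=1e-10),  # assumption: tune per [3]
            txt='fast.txt')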

>
>
> On to the actual question:
> Has anyone established GPAW parameters that would roughly correspond to the
> VASP settings PREC = medium or low? I did preliminary checks with an
> increased grid spacing, and found that the SCF cycle time per time
> step decreased to about 4.25 min with a grid spacing of 0.25 Å. Have I
> understood correctly that a larger grid spacing than this is not
> recommended?

0.25 Å is very aggressive.  You should do some shorter test runs to
verify that the calculations are "sensible enough" for your purposes.

FD mode is likely to be more severely affected by a coarse grid
(large grid spacing) than LCAO mode.
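
One way to do such a test (my own sketch, not a prescription: 'atoms'
is the structure from the earlier sketch and 'target' is a placeholder
index) is to compare total energies and forces for a few grid spacings
before committing to long MD runs:

from gpaw import GPAW

target = 0  # placeholder: index of the atom that will be kicked
for h in (0.18, 0.20, 0.25):
    calc = GPAW(mode='lcao', basis='dzp', xc='PBE',
                kpts=(5, 5, 1), h=h, txt='test_h%.2f.txt' % h)
    atoms.set_calculator(calc)
    e = atoms.get_potential_energy()
    fmax = abs(atoms.get_forces()).max()
    print(h, e, fmax)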

>
> I also tried to play with the calc convergence={ } parameters, but didn't
> immediately find a combination that decreased the computational time
> (convergence seemed to be slower with most changes). Following the wiki's
> instruction to "write {'bands': -10} to converge all bands except the last
> 10. It is often hard to converge the last few bands in a calculation."
> really didn't help: although the density perhaps converged faster, the
> wavefunctions didn't converge at all.

You can perhaps set convergence={'density': 1e-2} or so.  This will
reduce the accuracy of the forces, but if the error is not too great
compared to the total forces, it's probably fine.  Again, do a few
test runs to verify the behaviour.

Don't change the convergence={'bands': ...} parameter: the default
is to converge only the occupied bands, which is the minimum
possible amount of work.  Changing it is relevant only if you are
calculating unoccupied bands.

*Do* set the entirely different "nbands" variable (e.g.
GPAW(nbands=27)) to something which is sufficient to accommodate all
electrons plus a few.
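
Put together, something like this sketch; the density tolerance and the
nbands value are assumptions that need to be checked against a fully
converged single point:

from gpaw import GPAW

calc = GPAW(mode='lcao',
            basis='dzp',
            xc='PBE',
            kpts=(5, 5, 1),
            convergence={'density': 1e-2},  # looser density criterion (verify forces!)
            nbands=210,                     # placeholder: all occupied bands plus a few extra
            txt='loose.txt')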

>
> Any other tips / parameters to speed up the calculations? Since the density
> changes rather gradually between each small time step, could we somehow take
> advantage of this?
>

See [3].

The density will be reused between steps in an MD simulation,
significantly reducing the number of SCF iterations per step.
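
For completeness, an illustrative MD driver.  Assumptions: ASE's
VelocityVerlet, the 0.1 fs timestep from your description, and a 10 eV
kick as an example value; 'atoms', 'calc' and 'target' are as in the
sketches above.  Because the same calculator object is reused, each
step starts from the previous density.

import numpy as np
from ase import units
from ase.md.verlet import VelocityVerlet

atoms.set_calculator(calc)

# Give the target atom a starting momentum, here 10 eV of kinetic energy
# perpendicular to the sheet (example value only).  ASE units: eV, amu, Angstrom.
p = atoms.get_momenta()
m = atoms.get_masses()[target]
p[target] = [0.0, 0.0, np.sqrt(2.0 * m * 10.0)]
atoms.set_momenta(p)

dyn = VelocityVerlet(atoms, 0.1 * units.fs, trajectory='kick.traj')
dyn.run(1000)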

>
> Many thanks in advance for any help you could offer,
> Toma Susi

Regards
Ask

[1] http://prb.aps.org/abstract/PRB/v80/i19/e195112
[2] http://prb.aps.org/abstract/PRB/v84/i24/e245429
[3] https://wiki.fysik.dtu.dk/gpaw/documentation/lcao/lcao.html#notes-on-performance
