[gpaw-users] Work function on a copper surface

Ask Hjorth Larsen asklarsen at gmail.com
Wed Jul 25 16:00:39 CEST 2012


Hi

2012/7/25 Juho Arjoranta <juho.arjoranta at helsinki.fi>:
> The calculation was made with a 7-layer surface with 32 atoms per layer. Two
> of the bottom layers were fixed. I tried to relax the surface before adding
> adatoms, but the memory consumption was already huge. (If I remember
> correctly, the memory estimate was something like 6 GB per core with 512
> cores. The parallelization was made purely over domain in that calculation.)
>
> Here is the calculator that was used with 288 cores:
>
> ps = PoissonSolver(nn = 3, relax = 'J', eps = 1e-10)
> ps.maxiter = 5000
> correction = DipoleCorrection(ps, 2)
>
> calc = GPAW(h = 0.12,
>             xc = 'PBE',
>             txt = name + '.txt',
>             basis = 'szp(dzp)',
>             maxiter = 150,
>             occupations = FermiDirac(width = 0.01, maxiter = 2000),
>             poissonsolver = correction,
>             mixer=Mixer(beta=0.05, nmaxold=5, weight=100.0),
>             parallel={'domain': (6,6,8)},
>             kpts = (2, 2, 1))
>
> Juho
>

Right, 224 is quite a lot of atoms.  0.12 is a very fine grid spacing
for Cu; I believe you should be able to get by with something much
coarser.  Here are some tests:
https://wiki.fysik.dtu.dk/gpaw/setups/Cu.html .  I think something
like 0.18 should work quite well; DFT isn't that accurate in any
case.

For Cu the first mixer parameter (beta) should probably be 0.1 for
faster convergence, because Cu has a low DOS at the Fermi level.
(Other transition metals may require lower values.)
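For instance, a minimal sketch (only beta changes; nmaxold and weight
are copied from your input):

```python
from gpaw import Mixer

# Sketch: raise beta from 0.05 to 0.1 for Cu; nmaxold and weight
# are kept as in the original calculator.
mixer = Mixer(beta=0.1, nmaxold=5, weight=100.0)
```

Pass this to GPAW via the mixer keyword as before.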

I see you specified the basis.  If you want an LCAO calculation you
also need to specify mode='lcao' and the whole thing should fit on
just a few CPUs (provided you use ScaLAPACK).
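A hedged sketch of what that could look like (the parallel setting
and output filename are assumptions on my part; check the docs for
your GPAW version):

```python
from gpaw import GPAW

# Sketch of an LCAO calculation: with mode='lcao' the basis keyword
# selects the actual basis set instead of only the initial guess.
calc = GPAW(mode='lcao',
            basis='szp(dzp)',
            xc='PBE',
            kpts=(2, 2, 1),
            parallel={'sl_auto': True},  # assumption: ScaLAPACK is available
            txt='lcao.txt')
```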

Be sure to specify nbands; otherwise GPAW will add "plenty" of bands,
which is very expensive in FD calculations.  nbands = 6 * [number of
atoms] should be more than enough.
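For your 7 x 32 slab that works out to:

```python
# Rule of thumb from above: roughly 6 bands per atom is more than enough.
n_atoms = 7 * 32      # 7 layers, 32 atoms per layer
nbands = 6 * n_atoms
print(nbands)         # -> 1344
```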

If the Poisson solver really needs 5000 iterations, then something is
probably going wrong.  Try specifying the number of grid points
directly as gpts=(nx, ny, nz) instead of specifying h.  The numbers
should be divisible by 8 along all directions.
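A small helper to pick such grid point counts (the cell lengths below
are made up for illustration; use your actual cell):

```python
def gpts_for(cell_lengths, h=0.18, multiple=8):
    """Round L/h to the nearest multiple of `multiple` along each axis."""
    return tuple(max(multiple, multiple * round(L / (h * multiple)))
                 for L in cell_lengths)

# Hypothetical orthorhombic cell, lengths in Angstrom:
gpts = gpts_for((10.2, 10.2, 30.0))
print(gpts)  # each entry divisible by 8
```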

You will probably want to parallelize as much as possible over kpts
rather than just domains (there will probably be 2 kpts).  I think the
automatic choice of parallelization should be quite fine in this case,
so you probably won't need to specify 'parallel' here.
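If you do want to set it explicitly, a sketch (keyword names as I
recall them from the parallel dictionary; the domain shape is purely
illustrative):

```python
from math import prod

# Sketch: 2 k-point groups (one per irreducible k-point), each with
# its own domain decomposition; the 4x4x4 split is just an example.
parallel = {'kpt': 2, 'domain': (4, 4, 4)}
ncores = parallel['kpt'] * prod(parallel['domain'])
print(ncores)  # total MPI ranks this layout would need -> 128
```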

Regards
Ask
