[ase-users] Parallelization over images with arbitrary calculator
John Kitchin
jkitchin at andrew.cmu.edu
Sat Feb 16 13:59:07 CET 2013
> 1. Re: Parallelization over images with arbitrary calculator
> (Benedikt Ziebarth)
>
>
Here is an example of how I have used multiprocessing to run jobs in
parallel with VASP:
https://github.com/jkitchin/dft-book/blob/master/dft.org#L11502
It is not possible to "label" the vasp calculations; they must be run in
different directories. I use the jasp modification of vasp.py, which uses
a context manager to automatically change into a separate directory for
each calculation.
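
The pattern boils down to something like the sketch below (only a bare
sketch: the directory names and the body of get_energy are placeholders
here, and the version in dft-book does the real work):

import multiprocessing
import os

def get_energy(directory):
    """Run one calculation inside its own directory and return the energy."""
    cwd = os.getcwd()
    os.chdir(directory)  # each VASP run needs a private directory
    try:
        # create the calculator and trigger the calculation here, e.g.
        # energy = atoms.get_potential_energy()
        energy = 0.0  # placeholder
        return energy
    finally:
        os.chdir(cwd)  # always restore the working directory

if __name__ == '__main__':
    dirs = ['image-0', 'image-1', 'image-2']
    pool = multiprocessing.Pool(processes=3)
    energies = pool.map(get_energy, dirs)  # blocks until all jobs finish
    pool.close()
    pool.join()
    print(energies)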
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 15 Feb 2013 12:10:25 +0100
> From: Benedikt Ziebarth <benedikt.ziebarth at kit.edu>
> Subject: Re: [ase-users] Parallelization over images with arbitrary
> calculator
> To: <ase-users at listserv.fysik.dtu.dk>
> Message-ID: <511E17A1.3050607 at kit.edu>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
>
> Thanks for the comments. When discussing it on the mpi4py mailing list,
> I found that this is exactly the problem.
> Right now I am working on a multithreaded version of NEB in order to
> circumvent the problem (it's running so far, but I would like to do
> some more testing before sharing it).
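>
> The core of the threaded scheme could look roughly like this (only a
> sketch, not the version I am testing; it assumes each image carries its
> own calculator, so the force calls are independent):
>
> import threading
>
> def parallel_forces(images):
>     """Evaluate the forces on all images, one thread per image."""
>     results = [None] * len(images)
>
>     def worker(i):
>         results[i] = images[i].get_forces()
>
>     threads = [threading.Thread(target=worker, args=(i,))
>                for i in range(len(images))]
>     for t in threads:
>         t.start()
>     for t in threads:
>         t.join()
>     return results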
>
> For Siesta, this is straightforward, as you can assign a different
> prefix for the temporary files. However, for other calculators like
> VASP, where all temporary and output files look exactly the same, one
> will need some kind of "labeling" for simultaneous calculations in
> order to keep the calculations for the individual images apart.
> I am not sure whether this should be the responsibility of the
> calculators or of the NEB routine. In the latter case, I have no idea
> how to do it.
>
>
> About the calculators:
> Why not create a global python file where one puts all the necessary
> variables/commands for the programs to run, e.g.
>
>
> import os
>
> mpirun = '/path/to/mpiexec'
>
> siesta_pp = '/path/to/siesta/pp'
>
> def run_siesta(label, nprocs=1):
>     if nprocs == 1:
>         return os.system('/path/to/siesta < %s.fdf > %s.txt'
>                          % (label, label))
>     else:
>         return os.system('%s -np %d /path/to/siesta < %s.fdf > %s.txt'
>                          % (mpirun, nprocs, label, label))
>
>
> and so on
>
> Of course this requires some changes to the calculator interfaces, but
> I guess they are relatively small.
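>
> A calculator could then do something like the following (run_config is
> just a made-up name for that global file):
>
> import run_config
>
> exitcode = run_config.run_siesta('IMAGE_0', nprocs=2)
> if exitcode != 0:
>     raise RuntimeError('siesta exited with code %d' % exitcode)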
>
>
> Cheers
> Benedikt
>
>
>
>
>
>
> On 02/15/2013 08:00 AM, Jens Jørgen Mortensen wrote:
> >> Hi there,
> >>
> >> I am looking for a way to start a NEB calculation with image
> >> parallelization using siesta.
> >> In my calculations 6 CPUs are distributed over 3 images, so each
> >> image should run with 2 CPUs.
> >> Since the siesta calculator does not directly support this
> >> parallelization, I tried a workaround by starting the NEB python
> >> script using
> >>
> >> mpirun -np 3 neb.py
> >>
> >> and in "run_siesta.py" I was changing the way how siesta is started
> >>
> >> from "siesta" to "mpirun -np 2 siesta"
> >>
> >> neb.py starts up correctly, but when siesta is initialized it just
> >> crashes with the error message "OOB: Connection to HNP lost" and no
> >> further information. The siesta output is empty.
> >>
> >> If I don't change "siesta" to "mpirun -np 2 siesta", it runs fine.
> > Looks like running MPI from inside MPI doesn't work, which means you
> > are out of luck with our current NEB implementation in ASE. Someone
> > could make our NEB implementation run in three threads instead of
> > three MPI processes; I think such a multithreaded NEB would work for
> > your case.
> >
> > Jens Jørgen
> >
> >> Cheers and thanks for any help in advance
> >> Benedikt Ziebarth
> >>
> >>
> >>
> >>
> >> import mpi4py
> >> from ase import *
> >> import ase.io as io
> >> from ase.calculators.siesta import Siesta
> >> from ase.optimize import MDMin
> >> from ase.neb import NEB
> >> from ase.parallel import rank, size
> >> from ase.io.trajectory import PickleTrajectory
> >> import time
> >>
> >>
> >> initial = io.read('init.traj')
> >> final = io.read('final.traj')
> >>
> >> numimages = 3
> >> print size
> >> print rank
> >> assert numimages == size
> >>
> >> # one calculator per image; the labels keep the Siesta files apart
> >> calc = [None] * numimages
> >> for i in range(numimages):
> >>     calc[i] = Siesta(label='IMAGE_%d' % i,
> >>                      xc='PBE',
> >>                      meshcutoff=200 * 13.6,
> >>                      basis='dzp',
> >>                      kpts=[1, 1, 4])
> >>     calc[i].set_fdf('Diag.ParallelOverK', True)
> >>
> >> # each rank attaches a calculator only to its own image
> >> images = [initial]
> >> for i in range(numimages):
> >>     image = initial.copy()
> >>     if i == rank:
> >>         image.set_calculator(calc[i])
> >>     images.append(image)
> >> images.append(final)
> >>
> >> # stagger the start-up to avoid some copy errors on the
> >> # pseudopotential files
> >> time.sleep(rank * 1)
> >> neb = NEB(images, parallel=True)
> >> neb.interpolate()
> >> qn = MDMin(neb)
> >>
> >> time.sleep(rank * 1)  # same stagger as above
> >> traj = PickleTrajectory('neb%d.traj' % rank, 'w', images[1 + rank],
> >>                         master=True)
> >> qn.attach(traj)
> >> qn.run(fmax=0.05)
> >
> > As a personal preference :-), I do not like heavy calculations being
> > run when one creates an object (even though some GPAW functionality
> > behaves this way); I prefer that the user has to explicitly request
> > the calculation by calling a function.
> >
> > Also, I think that the above approach brings up the question of
> > whether one attaches atoms to a calculator and asks the calculator to
> > determine physical quantities for that atomic configuration, or
> > attaches a calculator to atoms and asks for physical quantities from
> > the atoms object, i.e. which comes first, atoms or calculator? There
> > is already some inconsistency, as some quantities are requested from
> > the atoms (energy, forces) and some from the calculator
> > (wavefunctions, densities). Personally, I do not have strong feelings
> > on this matter: asking the atoms is maybe a little bit more physics
> > oriented (the calculator is just a black box providing numbers); on
> > the other hand, asking the calculator would unify things in the sense
> > that everything is requested from the calculator (the quantities
> > available from an MD calculator are of course quite different from
> > the ones available from a DFT calculator).
> >
> > One question that should maybe also be discussed is whether there
> > should be a standard way to specify the command/binary to be executed
> > when a calculator is run. Some calculators (e.g. Siesta) request a
> > python script containing the actual command to be executed, while
> > other calculators (e.g. Castep) ask for a shell command. I prefer the
> > shell command, as it makes it easier to specify e.g. the number of
> > CPUs within a batch job script (at least for a casual user who is
> > used to doing something like 'mpirun -np 8 siesta'). People wanting
> > to script everything may prefer the first option...
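> >
> > To make the contrast concrete (the exact environment variable names
> > differ between calculators and versions, so take these as
> > illustrative):
> >
> > import os
> >
> > # shell-command style: the full launch line lives in the batch script
> > os.environ['CASTEP_COMMAND'] = 'mpirun -np 8 castep'
> >
> > # script style: point at a python file that performs the launch itself
> > os.environ['SIESTA_SCRIPT'] = '/home/user/run_siesta.py'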
> >
> > Best regards,
> > Jussi
>
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 15 Feb 2013 20:50:29 -0500
> From: John Kitchin <jkitchin at andrew.cmu.edu>
> Subject: Re: [ase-users] ase-users Digest, Vol 56, Issue 18
> To: ase-users at listserv.fysik.dtu.dk
> Message-ID: <CAJ51ETpT+-Zbq5vko8aVXVprR7Uzc1-d+QkLf3kFTT-PqSPjhA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> >
> > Message: 1
> > Date: Wed, 13 Feb 2013 15:38:46 -0600
> > From: Arian Ghorbanpour <arian.ghorbanpour at gmail.com>
> > Subject: [ase-users] Vibrational calculation with VASP/ASE
> > To: ase-users at listserv.fysik.dtu.dk
> > Message-ID: <CABE=vyo9_q4ozFAd-bVsOPEmdVtO0dc724kScno_F0Z1Gsik6A at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Dear ASE users,
> >
> > I'm calculating vibrational frequencies with VASP. The issue I have
> > is that in many cases the electronic optimization exceeds its default
> > maximum number of electronic SC steps along the way, but the job
> > won't stop, ultimately resulting in imaginary frequencies. Is there
> > any way to communicate an error message from VASP to ASE so that it
> > terminates the calculation?
> >
> > Thanks,
> > Arian
> >
>
> This is not possible in vasp.py at this point, as far as I know. You
> would have to monkey-patch vasp.py with a function that analyzes the
> calculation after it is done and checks for this condition.
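>
> A rough sketch of such a patch (the convergence test below is
> deliberately crude, and I am only loosely following the current
> vasp.py method names):
>
> from ase.calculators.vasp import Vasp
>
> _original_calculate = Vasp.calculate
>
> def checked_calculate(self, *args, **kwargs):
>     _original_calculate(self, *args, **kwargs)
>     # VASP prints this line once per converged SCF cycle; if it is
>     # missing entirely, the electronic minimization never converged
>     with open('OUTCAR') as f:
>         converged = 'aborting loop because EDIFF is reached' in f.read()
>     if not converged:
>         raise RuntimeError('electronic SCF did not converge')
>
> Vasp.calculate = checked_calculate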
>
> j
>
> ------------------------------
>
>
> End of ase-users Digest, Vol 56, Issue 20
> *****************************************
>