[ase-users] Parallelization over images with arbitrary calculator
Jens Jørgen Mortensen
jensj at fysik.dtu.dk
Mon Feb 18 13:41:48 CET 2013
On 15-02-2013 12:10, Benedikt Ziebarth wrote:
> Thanks for the comments. Discussing this on the mpi4py mailing list
> confirmed that this is exactly the problem.
> Right now I am working on a multithreaded version of NEB in order to
> circumvent the problem (it's running so far, but I would like to run
> some more tests before sharing it).
I think the attached patch to neb.py should do the job.
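The core idea, as a minimal sketch (this is not the patch itself, just
an illustration of the threading approach): evaluate the forces on the
movable images in separate threads, so that each thread can block on its
own external 'mpirun -np 2 siesta' run while the others proceed:

import threading

def forces_in_threads(images):
    """Compute forces on the interior images, one thread per image."""
    forces = [None] * len(images)

    def worker(i):
        # Each image has its own calculator, so the external siesta
        # processes started by these calls run concurrently.
        forces[i] = images[i].get_forces()

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(1, len(images) - 1)]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    return forces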
> For Siesta, this is straightforward, as you can assign a different
> prefix to the temporary files. However, for other calculators like
> VASP, where all temporary and output files look exactly the same, one
> needs some kind of "labeling" for simultaneous calculations in order
> to keep the calculations for the individual images apart.
> I am not sure whether this should be the responsibility of the
> calculators or of the NEB routine. In the latter case, I have no idea
> how to do it.
I think it should be the responsibility of the calculators. We are
currently working on making it a standard for all ASE calculators to be
able to prefix output files or use directories.
Jens Jørgen
> About the calculators:
> Why not create a global Python file where one puts all the necessary
> variables/commands for the programs to run? E.g.:
>
>
> import os
>
> mpirun = '/path/to/mpiexec'
> siesta_pp = '/path/to/siesta/pp'
>
> def run_siesta(label, nprocs=1):
>     if nprocs == 1:
>         # serial run
>         return os.system('/path/to/siesta < %s.fdf > %s.txt'
>                          % (label, label))
>     else:
>         # parallel run through MPI
>         return os.system('%s -np %d /path/to/siesta < %s.fdf > %s.txt'
>                          % (mpirun, nprocs, label, label))
>
>
> and so on
>
> Of course this requires some changes to the calculator interface, but
> I guess they are relatively small.
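>
> For illustration, a NEB script could then launch each image with its
> own label (the label format and CPU count here are just examples):
>
>     for i in range(3):  # one labeled siesta run per image
>         assert run_siesta('IMAGE_%d' % i, nprocs=2) == 0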
>
>
> Cheers
> Benedikt
>
>
>
>
>
>
> On 02/15/2013 08:00 AM, Jens Jørgen Mortensen wrote:
>>> Hi there,
>>>
>>> I am looking for a way to start a NEB calculation with image
>>> parallelization using siesta.
>>> In my calculations, 6 CPUs are distributed over 3 images, so each
>>> image should run on 2 CPUs.
>>> Since the siesta calculator does not directly support this
>>> parallelization, I tried a workaround by starting the NEB Python
>>> script using
>>>
>>> mpirun -np 3 neb.py
>>>
>>> and in "run_siesta.py" I was changing the way how siesta is started
>>>
>>> from "siesta" to "mpirun -np 2 siesta"
>>>
>>> neb.py starts up correctly, but when siesta is initialized, it just
>>> crashes with the error message "OOB: Connection to HNP lost" and no
>>> further information. The siesta output is empty.
>>>
>>> If I don't change "siesta" to "mpirun -np 2 siesta", it runs fine.
>> Looks like running MPI from inside MPI doesn't work - which means you
>> are out of luck with our current NEB implementation in ASE. Someone
>> could make our NEB implementation run in three threads instead of three
>> MPI processes - I think such a multithreaded NEB would work for your case.
>>
>> Jens Jørgen
>>
>>> Cheers and thanks for any help in advance
>>> Benedikt Ziebarth
>>>
>>>
>>>
>>>
>>> import mpi4py
>>> from ase import *
>>> import ase.io as io
>>> from ase.calculators.siesta import Siesta
>>> from ase.optimize import MDMin
>>> from ase.neb import NEB
>>> from ase.parallel import rank, size
>>> from ase.io.trajectory import PickleTrajectory
>>> import time
>>>
>>>
>>> initial = io.read('init.traj')
>>> final = io.read('final.traj')
>>>
>>> numimages = 3
>>> print size
>>> print rank
>>> assert numimages == size
>>>
>>> images = [initial]
>>> calc = ['z'] * numimages
>>> for i in range(numimages):
>>>     print calc[i]
>>>     calc[i] = Siesta(label='IMAGE_%d' % i,
>>>                      xc='PBE',
>>>                      meshcutoff=200 * 13.6,
>>>                      basis='dzp',
>>>                      kpts=[1, 1, 4])
>>>     calc[i].set_fdf('Diag.ParallelOverK', True)
>>> for i in range(numimages):
>>>     image = initial.copy()
>>>     if i == rank:
>>>         image.set_calculator(calc[i])
>>>     images.append(image)
>>> images.append(final)
>>>
>>> time.sleep(rank * 1)  # avoid copy errors of the pseudopotential files
>>> neb = NEB(images, parallel=True)
>>> neb.interpolate()
>>> qn = MDMin(neb)
>>>
>>> time.sleep(rank * 1)  # avoid copy errors of the pseudopotential files
>>> traj = PickleTrajectory('neb%d.traj' % rank, 'w', images[1 + rank],
>>>                         master=True)
>>> qn.attach(traj)
>>> qn.run(fmax=0.05)
>> As a personal preference :-), I do not like that heavy calculations
>> get run when one creates an object (even though some GPAW
>> functionality behaves this way); I would prefer that the user has to
>> explicitly request the calculation by calling a function.
>>
>> Also, I think that the above approach brings up the question of
>> whether one attaches atoms to a calculator and asks the calculator to
>> determine physical quantities for that atomic configuration, or
>> attaches a calculator to atoms and asks the atoms object for physical
>> quantities - i.e. which comes first, atoms or calculator? There is
>> already some inconsistency, as some quantities are requested from the
>> atoms (energy, forces) and some from the calculator (wave functions,
>> densities). Personally, I do not have strong feelings on this matter.
>> Asking the atoms is maybe a little more physics-oriented (the
>> calculator is just a black box providing numbers); on the other hand,
>> asking the calculator would unify things in the sense that everything
>> is requested from the calculator (the quantities available from an MD
>> calculator are of course quite different from the ones available from
>> a DFT calculator).
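>>
>> In code, the two styles look like this (a minimal illustration: both
>> get_potential_energy forms exist in ASE, while the wave-function call
>> uses GPAW's name and is calculator-specific):
>>
>>     e = atoms.get_potential_energy()      # ask the atoms
>>     e = calc.get_potential_energy(atoms)  # ask the calculator
>>     psi = calc.get_pseudo_wave_function(band=0)  # calculator only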
>>
>> One question that should maybe also be discussed is whether there
>> should be a standard way to specify the command/binary to be executed
>> when a calculator is run. Some calculators (e.g. Siesta) expect a
>> Python script containing the actual command to be executed, while
>> other calculators (e.g. Castep) ask for a shell command. I prefer the
>> shell command, as it is easier to specify e.g. the number of CPUs
>> within a batch job script (at least for a casual user who is used to
>> doing something like 'mpirun -np 8 siesta'). People wanting to script
>> everything may prefer the first option...
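>>
>> To make the two conventions concrete (the commands and names below are
>> illustrative sketches, not the exact ASE configuration mechanism):
>>
>>     import os
>>
>>     # script style (Siesta-like): the launch logic lives in Python,
>>     # so the CPU count is buried inside the script
>>     def run_siesta(label):
>>         return os.system('mpirun -np 8 siesta < %s.fdf > %s.txt'
>>                          % (label, label))
>>
>>     # shell-command style (Castep-like): just a string, easy to edit
>>     # directly in a batch job script
>>     command = 'mpirun -np 8 castep'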
>>
>> Best regards,
>> Jussi
-------------- next part --------------
A non-text attachment was scrubbed...
Name: neb.patch
Type: text/x-patch
Size: 1463 bytes
Desc: not available
URL: <http://listserv.fysik.dtu.dk/pipermail/ase-users/attachments/20130218/07fbff37/attachment.bin>