[ase-users] Parallel NEB with CASTEP calculator
Offermans Willem
willem.offermans at vito.be
Tue Apr 7 14:01:09 CEST 2020
Dear Louie and ASE friends,
Ahah, now we have more details.
Parallel over and within images is more delicate.
I was speaking about parallelisation over images only, not about 2D parallelisation, as I called it.
I remember that I had a similar question related to this topic some time ago. Unfortunately
I have forgotten the final response, but I do know that I had to give up on going in the direction of 2D parallelisation.
I only realise now that the link you sent is a response to my original e-mail about this topic :)
My calculator was ABINIT, but nowadays I use Quantum Espresso. I also remember that
I was looking at mpi4py and related stuff.
Anyway, I’m afraid I cannot help you out, since I have abandoned this route for now.
I’m running my jobs on a Hadoop cluster and I’m trying to get a coarse-grained parallelisation running.
So I would already be happy to run NEB in parallel over images.
Thank you for sharing the Python script. It might be very helpful to read through it and understand your approach.
Not only for me, but for the whole ASE community. It would be nice to have a collection of these scripts on the wiki.
Met vriendelijke groeten,
Mit freundlichen Grüßen,
With kind regards,
Willem Offermans
Researcher Electrocatalysis SCT
VITO NV | Boeretang 200 | 2400 Mol
Phone:+32(0)14335263 Mobile:+32(0)492182073
Willem.Offermans at Vito.be
On 7 Apr 2020, at 13:21, Louie Slocombe <l.slocombe at surrey.ac.uk> wrote:
Dear Willem and the ASE community,
Thanks for your response. I also initially agreed with this idea; I thought that, provided you split up the MPI tasks correctly, it was possible to run in parallel both over and within images. However, there was a post on the forum by Ivan in Nov 2018, which I quote:
"I think that the ASE/NEB and the Calculator can run both in parallel only if the Calculator is GPAW. This is because the NEB and the process spawned by the Calculator have to share an MPI communicator (I am not sure about any existing tricks with application/machinefiles and mpirun options). Thus parallel NEB will work with VASP or Abinit in serial mode only. Also serial NEB will work with the parallel versions of VASP, Abinit etc."
Here is a link to the thread https://listserv.fysik.dtu.dk/pipermail/ase-users/2018-November/004632.html
I was wondering if there have been any recent developments in the code that avoid the issue mentioned by Ivan.
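For reference, the pattern that does work (parallel both over and within images) seems to be the GPAW one from the ASE documentation, where each image is computed on its own sub-communicator passed in through GPAW's communicator keyword. A rough sketch only; the GPAW parameters below are placeholders, not tested settings:

# Sketch: NEB parallel over AND within images, following the ASE/GPAW
# documentation pattern.  mode/cutoff/txt names are placeholders.
from ase.io import read
from ase.neb import NEB
from ase.optimize import BFGS
from gpaw import GPAW, PW
from gpaw.mpi import world

n_images = 4                    # interior images
n = world.size // n_images      # ranks per image
j = world.rank // n             # which image this rank belongs to
assert n_images * n == world.size

initial = read('initial.traj')
final = read('final.traj')

images = [initial]
for i in range(n_images):
    ranks = range(i * n, (i + 1) * n)
    image = initial.copy()
    if world.rank in ranks:
        # Each image's calculator runs on its own sub-communicator:
        image.set_calculator(GPAW(mode=PW(350), txt='neb%d.txt' % j,
                                  communicator=ranks))
    images.append(image)
images.append(final)

neb = NEB(images, parallel=True)
neb.interpolate('idpp')
BFGS(neb, trajectory='neb.traj', logfile='neb.log').run(fmax=0.5)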
I made a first attempt; however, the parallel calculators failed to communicate with each other. See the attachment for a full example. It is also unclear to me how I should submit the job. Assuming I have a total of 16 tasks, I want the calculator to use 4 MPI tasks within each image:
export CASTEP_COMMAND="mpirun -np 4 castep.mpi"
and 4 parallel instances of NEB:
mpirun -n 4 python3 ase_parallel_neb_example.py
Would this be correct?
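To make the intended split concrete, here is a small sanity-check sketch (the file name is made up) of what the Python-level ranks would see under that submission. Note that the 4-way castep.mpi runs launched through CASTEP_COMMAND are separate MPI programs with their own worlds, so they do not appear in this communicator:

# check_mapping.py (hypothetical file name)
# Run as:  mpirun -n 4 python3 check_mapping.py
# Shows which NEB image each Python-level MPI rank would be responsible for.
from mpi4py import MPI

comm = MPI.COMM_WORLD
n_images = 4
n = comm.size // n_images        # Python ranks per image (1 in this layout)
assert n_images * n == comm.size, 'ranks must divide evenly over images'
image_index = comm.rank // n     # image handled by this rank
print('rank %d of %d -> image %d' % (comm.rank, comm.size, image_index),
      flush=True)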
Any suggestions or advice would be greatly appreciated.
Many thanks,
Louie
Attachment also pasted here:
from ase.calculators.castep import Castep
from ase.constraints import FixAtoms
from ase.io import read
from ase.neb import NEB
from ase.optimize import BFGS
from mpi4py import MPI
import ase.parallel
from ase.build import fcc100, add_adsorbate
f_parallel = True
f_gen_in = True
n_images = 4
f_max = 0.5
if f_gen_in:
    # 2x2-Al(001) surface with 3 layers and an
    # Au atom adsorbed in a hollow site:
    slab = fcc100('Al', size=(2, 2, 3))
    add_adsorbate(slab, 'Au', 1.7, 'hollow')
    slab.center(axis=2, vacuum=4.0)

    # Fix second and third layers:
    mask = [atom.tag > 1 for atom in slab]

    calc = Castep(keyword_tolerance=1)
    calc._export_settings = True
    calc._pedantic = True
    calc.param.num_dump_cycles = 0
    calc.param.reuse = True

    # Initial state:
    slab.set_constraint(FixAtoms(mask=mask))
    slab.set_calculator(calc)
    slab.calc.set_pspot('C19_LDA_OTF')
    qn = BFGS(slab, trajectory='initial.traj', logfile='initial.log')
    qn.run(fmax=f_max)

    # Final state: move the Au adatom to the neighbouring hollow site
    slab.set_constraint(FixAtoms(mask=mask))
    slab.set_calculator(calc)
    slab.calc.set_pspot('C19_LDA_OTF')
    slab[-1].x += slab.get_cell()[0, 0] / 2
    qn = BFGS(slab, trajectory='final.traj', logfile='final.log')
    qn.run(fmax=f_max)

initial = read('initial.traj')
final = read('final.traj')
constraint = FixAtoms(mask=[atom.tag > 1 for atom in initial])
if f_parallel:
    # Wrap the mpi4py communicator so ASE can use it:
    world = ase.parallel.MPI4PY(mpi4py_comm=MPI.COMM_WORLD)
    rank = world.rank
    size = world.size
    n = size // n_images  # Python-level ranks per image
    if rank == 0:
        print('number of ranks per image:', n, flush=True)
    j = 1 + rank // n  # my image number
    assert size >= n_images, 'need at least one rank per image'
    assert n_images * n == size, 'ranks must divide evenly over images'
    images = [initial]
    for i in range(n_images):
        ranks = range(i * n, (i + 1) * n)
        image = initial.copy()
        # seed = 'data_' + str(i + 1)  # nc='out%02i.nc' % (index + 1)
        calc = Castep(keyword_tolerance=1)
        calc._export_settings = True
        calc._pedantic = True
        calc.param.num_dump_cycles = 0
        calc.param.reuse = True
        # Set working directory (one per image):
        # calc._seed = seed
        calc._label = 'data'
        calc._directory = 'data_' + str(i)
        if rank in ranks:
            # Only the ranks responsible for this image attach a calculator:
            image.set_calculator(calc)
            image.calc.set_pspot('C19_LDA_OTF')
        image.set_constraint(constraint)
        images.append(image)
    images.append(final)
    # Pass the communicator explicitly so NEB uses it for sharing results:
    neb = NEB(images, parallel=True, world=world)
    neb.interpolate('idpp')
    qn = BFGS(neb, trajectory='neb.traj', logfile='neb.log')
    qn.run(fmax=f_max)
else:
    images = [initial]
    for i in range(n_images):
        image = initial.copy()
        calc = Castep(keyword_tolerance=1)
        calc._export_settings = True
        calc._pedantic = True
        calc.param.num_dump_cycles = 0
        calc.param.reuse = True
        # Set working directory (one per image):
        calc._label = 'data'
        calc._directory = 'data_' + str(i)
        image.set_calculator(calc)
        image.calc.set_pspot('C19_LDA_OTF')
        image.set_constraint(constraint)
        images.append(image)
    images.append(final)
    neb = NEB(images, parallel=f_parallel)
    neb.interpolate('idpp')
    qn = BFGS(neb, trajectory='neb.traj', logfile='neb.log')
    qn.run(fmax=f_max)
From: Offermans Willem <willem.offermans at vito.be>
Sent: 03 April 2020 22:10
To: Slocombe, Louie (PG/R - Sch of Biosci & Med) <l.slocombe at surrey.ac.uk>
Cc: ase users <ase-users at listserv.fysik.dtu.dk>
Subject: Re: [ase-users] Parallel NEB with CASTEP calculator
Dear Louie and ASE friends,
I don’t see any objection to running a NEB calculation in parallel over images with CASTEP or any other suitable calculator, if you have the right computer infrastructure.
ASE does support it, according to the code, so it should be possible in principle.
What made you think that it isn’t possible?
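As an illustration of what I mean by parallel over images only, here is a minimal sketch: each MPI rank owns one interior image and runs an ordinary serial calculator. EMT is used purely as a stand-in for a serial CASTEP run, and the system and numbers are arbitrary:

# Sketch: NEB parallel over images only, one rank per interior image,
# with a serial calculator per rank (EMT as a stand-in).
# Launch with:  mpirun -n 3 python3 neb_over_images.py
from mpi4py import MPI

import ase.parallel
from ase.build import fcc100, add_adsorbate
from ase.calculators.emt import EMT
from ase.constraints import FixAtoms
from ase.neb import NEB
from ase.optimize import BFGS

world = ase.parallel.MPI4PY(mpi4py_comm=MPI.COMM_WORLD)
n_images = 3                      # interior images, one per MPI rank
assert world.size == n_images

# Toy path: Au adatom hopping between hollow sites on Al(001):
initial = fcc100('Al', size=(2, 2, 3))
add_adsorbate(initial, 'Au', 1.7, 'hollow')
initial.center(axis=2, vacuum=4.0)
constraint = FixAtoms(mask=[atom.tag > 1 for atom in initial])
initial.set_constraint(constraint)
final = initial.copy()
final[-1].x += final.get_cell()[0, 0] / 2

images = [initial]
for i in range(n_images):
    image = initial.copy()
    image.set_constraint(constraint)
    if world.rank == i:
        image.set_calculator(EMT())   # only "my" image gets a calculator
    images.append(image)
images.append(final)

neb = NEB(images, parallel=True, world=world)
neb.interpolate()
BFGS(neb, trajectory='neb.traj', logfile='neb.log').run(fmax=0.05)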
Met vriendelijke groeten,
Mit freundlichen Grüßen,
With kind regards,
Willem Offermans
Researcher Electrocatalysis SCT
VITO NV | Boeretang 200 | 2400 Mol
Phone:+32(0)14335263 Mobile:+32(0)492182073
Willem.Offermans at Vito.be
On 2 Apr 2020, at 19:44, Louie Slocombe via ase-users <ase-users at listserv.fysik.dtu.dk> wrote:
parallel NEB calculation