[ase-users] Parallel NEB with CASTEP calculator
Ask Hjorth Larsen
asklarsen at gmail.com
Tue Apr 7 16:53:24 CEST 2020
Dear Willem,
On Tue, 7 Apr 2020 at 15:16, Offermans Willem
<willem.offermans at vito.be> wrote:
> Dear Ask and ASE friends,
>
> Gitlab?
>
> What is the function of gitlab? To collect user stories or even epics?
>
> ASE on gitlab is not even documented in ``
> https://wiki.fysik.dtu.dk/ase/tutorials/tutorials.html#further-reading``
>
Well, no, Gitlab is where the source code lives. It's also the bugtracker.
>
> Ahah, I found something by searching for gitlab on the wiki:
>
> https://wiki.fysik.dtu.dk/ase/development/contribute.html?highlight=gitlab
>
> It is quite interesting and related to the following little project of mine.
> I’m running my calculations on a Hadoop cluster, and I would like to
> do some coarse-grained parallelisation for NEB calculations.
> Maybe I can do it in the way suggested in the above link.
>
> How do developers normally communicate? Via this mailing list?
>
We sometimes see questions about parallel NEB not working. Since this
happens repeatedly, I assume there is some problem with how parallel NEB is
implemented, or with how we document it. Most likely something needs to be
fixed, and it would be good to understand what.
Best regards
Ask
>
>
>
>
> Met vriendelijke groeten,
> Mit freundlichen Grüßen,
> With kind regards,
>
>
> Willem Offermans
> Researcher Electrocatalysis SCT
> VITO NV | Boeretang 200 | 2400 Mol
> Phone:+32(0)14335263 Mobile:+32(0)492182073
>
> Willem.Offermans at Vito.be
>
>
> On 7 Apr 2020, at 14:19, Ask Hjorth Larsen <asklarsen at gmail.com> wrote:
>
> Dear Willem and Louie,
>
> If something prevents ASE's NEB from working in parallel alongside other
> calculators that use MPI, then we should open an issue on Gitlab and fix
> it. If there is a way and it isn't obvious, then at least the documentation
> or the API should be improved.
>
> I didn't read through the whole discussion. Once you come up with a final
> example script that should work but doesn't, or any other helpful material,
> could you consider opening an issue?
>
> Best regards
> Ask
>
> On Tue, 7 Apr 2020 at 14:02, Offermans Willem via ase-users
> <ase-users at listserv.fysik.dtu.dk> wrote:
>
>> Dear Louie and ASE friends,
>>
>> Ahah, now we have more details.
>>
>> Parallelising both over and within images is more delicate.
>>
>> I was speaking about parallelisation over images only, not about 2D
>> parallelisation, as I called it.
>> I remember that I had a similar question related to this topic some time
>> ago. Unfortunately I have forgotten the final response, but I do know that
>> I had to give up on going in the direction of 2D parallelisation.
>> I only realise now that the link you sent is a response to my original
>> e-mail about this topic :)
>> My calculator was ABINIT, but nowadays I use Quantum Espresso. I also
>> remember that
>> I was looking at mpi4py and related stuff.
>>
>> Anyway, I’m afraid I cannot help you out, since I have abandoned this
>> route for now.
>> I’m running my jobs on a Hadoop cluster and I’m trying to get a
>> coarse-grained parallelisation running.
>> So I would already be happy to run NEB in parallel over images.
>>
>> Thank you for sharing the Python script. It might be very helpful to read
>> through it and understand your approach, not only for me but for the whole
>> ASE community. It would be nice to have a collection of such scripts on
>> the wiki.
>>
>>
>>
>> Met vriendelijke groeten,
>> Mit freundlichen Grüßen,
>> With kind regards,
>>
>>
>> Willem Offermans
>> Researcher Electrocatalysis SCT
>> VITO NV | Boeretang 200 | 2400 Mol
>> Phone:+32(0)14335263 Mobile:+32(0)492182073
>>
>> Willem.Offermans at Vito.be
>>
>>
>> On 7 Apr 2020, at 13:21, Louie Slocombe <l.slocombe at surrey.ac.uk> wrote:
>>
>> Dear Willem and the ASE community,
>>
>> Thanks for your response. I also initially agreed with this idea: I
>> thought that, provided you split up the MPI tasks correctly, it was
>> possible to parallelise both over and within images. However, there was a
>> post on this mailing list by Ivan in Nov 2018, which I quote:
>> "I think that the ASE/NEB and the Calculator can run both in parallel
>> only if the Calculator is GPAW. This is because the NEB and the process
>> spawned by the Calculator have to share an MPI communicator (I am not sure
>> about any existing tricks with application/machinefiles and mpirun
>> options). Thus parallel NEB will work with VASP or Abinit in serial mode
>> only. Also serial NEB will work with the parallel versions of VASP, Abinit
>> etc."
>> Here is a link to the thread
>> https://listserv.fysik.dtu.dk/pipermail/ase-users/2018-November/004632.html
>> I was wondering if there were any recent developments with the code that
>> avoid the issue mentioned by Ivan.
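>>
>> For what it is worth, here is a minimal sketch of the "parallel over
>> images, serial within each image" pattern that Ivan describes, with ASE's
>> built-in EMT calculator standing in for CASTEP. This is a toy example under
>> my assumptions, not a verified recipe; the file name and fmax are
>> arbitrary, and it would be run with e.g.
>> "mpirun -n 3 python3 neb_emt_sketch.py":
>>
>> from mpi4py import MPI            # imported first so MPI is available
>> import ase.parallel
>> from ase.build import fcc100, add_adsorbate
>> from ase.calculators.emt import EMT
>> from ase.constraints import FixAtoms
>> from ase.neb import NEB
>> from ase.optimize import BFGS
>>
>> n_images = 3                      # moving images; one MPI rank per image
>> world = ase.parallel.MPI4PY(mpi4py_comm=MPI.COMM_WORLD)
>> assert world.size == n_images, 'run with one MPI rank per moving image'
>>
>> # Initial state: Au adatom in a hollow site of Al(001).
>> initial = fcc100('Al', size=(2, 2, 3))
>> add_adsorbate(initial, 'Au', 1.7, 'hollow')
>> initial.center(axis=2, vacuum=4.0)
>> initial.set_constraint(FixAtoms(mask=[a.tag > 1 for a in initial]))
>> initial.set_calculator(EMT())
>> BFGS(initial, logfile=None).run(fmax=0.05)
>>
>> # Final state: Au moved to the neighbouring hollow site.
>> final = initial.copy()
>> final[-1].x += final.get_cell()[0, 0] / 2
>> final.set_calculator(EMT())
>> BFGS(final, logfile=None).run(fmax=0.05)
>>
>> # Each rank attaches a (serial) calculator to its own image only.
>> images = [initial]
>> for i in range(n_images):
>>     image = initial.copy()
>>     if world.rank == i:
>>         image.set_calculator(EMT())
>>     images.append(image)
>> images.append(final)
>>
>> neb = NEB(images, parallel=True)
>> neb.interpolate('idpp')
>> BFGS(neb, trajectory='neb.traj', logfile='neb.log').run(fmax=0.05)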
>>
>> I made a first attempt; however, the parallel calculators failed to
>> communicate with each other. See the attachment for a full example. It is
>> also unclear to me how to submit the job. Assuming I have a total of 16
>> tasks, I want the calculator to use 4 MPI tasks within each image,
>> export CASTEP_COMMAND="mpirun -np 4 castep.mpi"
>> With 4 parallel instances of NEB,
>> mpirun -n 4 python3 ase_parallel_neb_example.py
>> Would this be correct?
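>>
>> (For clarity, a rough sketch of the layout I have in mind; these are my
>> assumptions rather than a verified recipe: 4 Python MPI ranks, one per
>> moving image, each writing into its own scratch directory, and each rank's
>> CASTEP run launched 4-way through CASTEP_COMMAND, i.e. 4 x 4 = 16 tasks
>> in total.)
>>
>> import os
>> from mpi4py import MPI
>> from ase.calculators.castep import Castep
>>
>> n_images = 4
>> comm = MPI.COMM_WORLD
>> assert comm.size == n_images, 'one Python MPI rank per moving image'
>>
>> # Each Python rank launches its own 4-way CASTEP binary.
>> os.environ.setdefault('CASTEP_COMMAND', 'mpirun -np 4 castep.mpi')
>>
>> calc = Castep(keyword_tolerance=1)
>> calc._label = 'data'
>> calc._directory = 'data_%d' % comm.rank   # separate directory per image
>> print('rank %d handles image %d in %s'
>>       % (comm.rank, comm.rank, calc._directory), flush=True)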
>>
>> Any suggestions or advice would be greatly appreciated.
>>
>> Many thanks,
>> Louie
>>
>> Attachment also pasted here:
>>
>> from mpi4py import MPI
>> import ase.parallel
>> from ase.build import fcc100, add_adsorbate
>> from ase.calculators.castep import Castep
>> from ase.constraints import FixAtoms
>> from ase.io import read
>> from ase.neb import NEB
>> from ase.optimize import BFGS
>>
>> f_parallel = True   # parallelise the NEB over images
>> f_gen_in = True     # (re)generate and relax the initial/final states
>> n_images = 4        # number of moving images
>> f_max = 0.5         # force convergence criterion
>>
>> if f_gen_in:
>>     # 2x2-Al(001) surface with 3 layers and an
>>     # Au atom adsorbed in a hollow site:
>>     slab = fcc100('Al', size=(2, 2, 3))
>>     add_adsorbate(slab, 'Au', 1.7, 'hollow')
>>     slab.center(axis=2, vacuum=4.0)
>>
>>     # Fix second and third layers:
>>     mask = [atom.tag > 1 for atom in slab]
>>
>>     calc = Castep(keyword_tolerance=1)
>>     calc._export_settings = True
>>     calc._pedantic = True
>>     calc.param.num_dump_cycles = 0
>>     calc.param.reuse = True
>>
>>     # Initial state:
>>     slab.set_constraint(FixAtoms(mask=mask))
>>     slab.set_calculator(calc)
>>     slab.calc.set_pspot('C19_LDA_OTF')
>>     qn = BFGS(slab, trajectory='initial.traj', logfile='initial.log')
>>     qn.run(fmax=f_max)
>>
>>     # Final state: move the Au adatom half a cell along x and relax again.
>>     slab.set_constraint(FixAtoms(mask=mask))
>>     slab.set_calculator(calc)
>>     slab.calc.set_pspot('C19_LDA_OTF')
>>     slab[-1].x += slab.get_cell()[0, 0] / 2
>>     qn = BFGS(slab, trajectory='final.traj', logfile='final.log')
>>     qn.run(fmax=f_max)
>>
>> initial = read('initial.traj')
>> final = read('final.traj')
>> constraint = FixAtoms(mask=[atom.tag > 1 for atom in initial])
>>
>> if f_parallel:
>>     world = ase.parallel.MPI4PY(mpi4py_comm=MPI.COMM_WORLD)
>>     rank = world.rank
>>     size = world.size
>>
>>     n = size // n_images  # number of cpus per image
>>     if rank == 0:
>>         print('number of cpus per image:', n, flush=True)
>>     j = 1 + rank // n  # my image number
>>     assert size >= n_images, 'need at least one rank per image'
>>     assert n_images * n == size, 'ranks must divide evenly over images'
>>
>>     images = [initial]
>>     for i in range(n_images):
>>         ranks = range(i * n, (i + 1) * n)
>>         image = initial.copy()
>>         # seed = 'data_' + str(i + 1)  # nc='out%02i.nc' % (index + 1)
>>         calc = Castep(keyword_tolerance=1)
>>         calc._export_settings = True
>>         calc._pedantic = True
>>         calc.param.num_dump_cycles = 0
>>         calc.param.reuse = True
>>         # Set working directory (one per image):
>>         # calc._seed = seed
>>         calc._label = 'data'
>>         calc._directory = 'data_' + str(i)
>>
>>         # Only the ranks responsible for this image get a calculator:
>>         if rank in ranks:
>>             image.set_calculator(calc)
>>             image.calc.set_pspot('C19_LDA_OTF')
>>         image.set_constraint(constraint)
>>         images.append(image)
>>
>>     images.append(final)
>>
>>     neb = NEB(images, parallel=True)
>>     neb.interpolate('idpp')
>>     qn = BFGS(neb, trajectory='neb.traj', logfile='neb.log')
>>     qn.run(fmax=f_max)
>>
>> else:
>>     images = [initial]
>>     for i in range(n_images):
>>         image = initial.copy()
>>         calc = Castep(keyword_tolerance=1)
>>         calc._export_settings = True
>>         calc._pedantic = True
>>         calc.param.num_dump_cycles = 0
>>         calc.param.reuse = True
>>         # Set working directory (one per image):
>>         calc._label = 'data'
>>         calc._directory = 'data_' + str(i)
>>         image.set_calculator(calc)
>>         image.calc.set_pspot('C19_LDA_OTF')
>>         image.set_constraint(constraint)
>>         images.append(image)
>>
>>     images.append(final)
>>
>>     neb = NEB(images, parallel=f_parallel)
>>     neb.interpolate('idpp')
>>     qn = BFGS(neb, trajectory='neb.traj', logfile='neb.log')
>>     qn.run(fmax=f_max)
>>
>> *From:* Offermans Willem <willem.offermans at vito.be>
>> *Sent:* 03 April 2020 22:10
>> *To:* Slocombe, Louie (PG/R - Sch of Biosci & Med) <
>> l.slocombe at surrey.ac.uk>
>> *Cc:* ase users <ase-users at listserv.fysik.dtu.dk>
>> *Subject:* Re: [ase-users] Parallel NEB with CASTEP calculator
>>
>> Dear Louie and ASE friends,
>>
>> I don’t see any objection to running a NEB calculation in parallel over
>> images with CASTEP or any other suitable calculator, if you have the right
>> computer infrastructure.
>> According to the code, ASE does support it, so it should be possible in
>> principle.
>>
>> What made you think that it isn’t possible?
>>
>>
>>
>>
>>
>>
>> Met vriendelijke groeten,
>> Mit freundlichen Grüßen,
>> With kind regards,
>>
>>
>> Willem Offermans
>> Researcher Electrocatalysis SCT
>> VITO NV | Boeretang 200 | 2400 Mol
>> Phone:+32(0)14335263 Mobile:+32(0)492182073
>>
>> Willem.Offermans at Vito.be
>>
>>
>>
>> On 2 Apr 2020, at 19:44, Louie Slocombe via ase-users <
>> ase-users at listserv.fysik.dtu.dk> wrote:
>>
>> parallel NEB calculation
>>
>>
>>
>>
>
>
>