[gpaw-users] multinode run

Jay Wai jaywai412 at gmail.com
Wed Aug 28 19:46:21 CEST 2019


Hi Ask,

Thank you for your answer.

I added the following lines to my PBS script:

#PBS -l select=2:ncpus=40:mpiprocs=40:ompthreads=1
mpirun gpaw-python ./test.py > ./test.out

The log file still shows that the total number of cores is 40, not 80, the
same as when I had 'select=1' in the script.
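
For the record, here is a minimal sketch of what I may try next, assuming the
problem is that mpirun is not picking up the full PBS allocation by itself
(the explicit -np count and the use of $PBS_NODEFILE below are my own guesses,
not anything from the GPAW documentation):

#PBS -l select=2:ncpus=40:mpiprocs=40:ompthreads=1

# $PBS_NODEFILE lists one host entry per requested MPI rank (2 x 40 = 80 here),
# so its line count tells us how many ranks PBS actually granted.
NP=$(wc -l < $PBS_NODEFILE)
echo "PBS granted $NP MPI slots"

# Hand the count and the host list to mpirun explicitly instead of relying on
# it to detect the allocation on its own.
mpirun -np $NP -hostfile $PBS_NODEFILE gpaw-python ./test.py > ./test.out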

I don't know about Slurm; I'll ask the system admin.

Best,
Jay

On Thu, Aug 29, 2019 at 1:37 AM, Ask Hjorth Larsen <asklarsen at gmail.com> wrote:

> Hi,
>
> On Wed, Aug 28, 2019 at 14:41, Jay Wai via gpaw-users
> <gpaw-users at listserv.fysik.dtu.dk> wrote:
> >
> > Dear users,
> >
> > I've used GPAW with OpenMPI, ScaLAPACK, and FFTW installed on a
> > supercomputer.
> > I've just started to try it on multiple nodes, but it is not working out.
> > For example, submitting a PBS job on 2 nodes is no faster than submitting
> > it on a single node.
> >
> > If each node has 40 cores, should the log file report the total number of
> > cores as 80 for a 2-node run? In my case, it reports 40.
> > I could not find a line indicating the number of nodes in the log file.
>
> Yes, it should show 80 cores.  Something must have decided which cores
> were available.  Did you run mpirun -np 80, or did slurm/srun receive
> some information?
>
> Best regards
> Ask
>
> >
> > I might have to ask the system admins, but I wanted to first check whether
> > anything specific is required (at compile time or otherwise) to run GPAW
> > on multiple nodes.
> >
> > Best,
> > Jay
> >
> >
> > _______________________________________________
> > gpaw-users mailing list
> > gpaw-users at listserv.fysik.dtu.dk
> > https://listserv.fysik.dtu.dk/mailman/listinfo/gpaw-users
>