[gpaw-users] questions about GPAW installation on a supercomputer

Marcin Dulak Marcin.Dulak at fysik.dtu.dk
Wed Jul 11 15:46:59 CEST 2012


Hi,

Measuring code performance is tricky.
Some combinations of compilers and libraries are fast but result in an
unstable program (see for example
https://wiki.fysik.dtu.dk/gpaw/devel/benchmarks.html#dual-socket-quad-core-64-bit-intel-nehalem-xeon-x5570-quad-core-2-93-ghz-3-gb-ram-per-core-el5).

The system (Archie-West) is probably managed using modules.
Here are example instructions for installing ase/gpaw using modules:
https://wiki.fysik.dtu.dk/gpaw/install/Linux/sun_chpc.html
With a small change in the modulefile names for gpaw, this will let you
easily switch between different gpaw versions
in order to compare stability/performance.
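
Switching could look like this (a sketch only; the modulefile names
below are hypothetical, the real ones depend on what gets installed on
Archie-West):

  # see which gpaw modulefiles exist
  module avail gpaw
  # load one build, run your tests, then switch to another
  module load gpaw/0.9-gcc-acml        # hypothetical name
  module unload gpaw/0.9-gcc-acml
  module load gpaw/0.9-open64-acml     # hypothetical name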

Specific software recommendations:
I'm not sure about openmpi; just use the latest release.

Most likely you will have to build your own numpy; use 1.6.1 or later.
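
A minimal sketch of a local numpy build, assuming installation under
$HOME (adjust the prefix to your taste):

  # build numpy 1.6.1 under $HOME; the prefix is an assumption
  tar xzf numpy-1.6.1.tar.gz
  cd numpy-1.6.1
  python setup.py install --prefix=$HOME/opt/numpy-1.6.1
  # put it on PYTHONPATH, matching the cluster's python version
  PYVER=$(python -c 'import sys; print("%d.%d" % sys.version_info[:2])')
  export PYTHONPATH=$HOME/opt/numpy-1.6.1/lib/python$PYVER/site-packages:$PYTHONPATH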

For GPAW I would recommend first trying the following combination:
the latest gcc available on the cluster + the corresponding acml.
The customize.py will be similar to this: 
https://trac.fysik.dtu.dk/projects/gpaw/browser/trunk/doc/install/Linux/Niflheim/el5-xeon-gcc43-acml-4.3.0.py
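
The essential part is linking against acml instead of the stock
blas/lapack. A minimal sketch, run in the unpacked gpaw source
directory (the acml path and version below are assumptions; take the
exact settings from the file linked above):

  # append the acml link settings to customize.py
  echo "libraries = ['acml', 'gfortran']" >> customize.py
  echo "library_dirs = ['/opt/acml-4.3.0/gfortran64/lib']" >> customize.py
  python setup.py build_ext 2>&1 | tee build_ext.log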

Later you can try open64 4.2.4 + acml 4.4.0 (and, if you plan to run 
calculations with > 500 bands, also scalapack 2.0.2).
The customize.py will be similar to this one:
https://trac.fysik.dtu.dk/projects/gpaw/browser/trunk/doc/install/Linux/Niflheim/el5-xeon-open64-acml-4.4.0-acml-4.4.0-hdf-SL-2.0.1.py
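
On top of the acml settings, a scalapack-enabled build needs roughly
the following additions to customize.py (the library name and paths are
assumptions; the file linked above has the exact Niflheim settings):

  # enable scalapack and add it to the link line
  echo "scalapack = True" >> customize.py
  echo "libraries = ['scalapack', 'acml', 'gfortran']" >> customize.py
  echo "library_dirs = ['/opt/scalapack-2.0.2/lib', '/opt/acml-4.4.0/gfortran64/lib']" >> customize.py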

I would also recommend trying the latest releases of open64/acml.

Most likely Intel MKL will be installed on the cluster.
Consider using it with the Intel compilers
(https://wiki.fysik.dtu.dk/gpaw/install/Linux/SUNCAT/SUNCAT.html) or with gcc
(https://trac.fysik.dtu.dk/projects/gpaw/browser/trunk/doc/install/Linux/Niflheim/el5-xeon-gcc43-mkl-10.3.1.107.py).
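
With MKL 10.3 or later, the simplest (if not necessarily the fastest)
link line uses the single dynamic library mkl_rt; a sketch, assuming
MKL is installed under /opt/intel/mkl:

  # link against the MKL single dynamic library; the path is an assumption
  echo "libraries = ['mkl_rt']" >> customize.py
  echo "library_dirs = ['/opt/intel/mkl/lib/intel64']" >> customize.py
  python setup.py build_ext 2>&1 | tee build_ext.log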

The first thing to do for each version is to test the installation: 
https://wiki.fysik.dtu.dk/gpaw/install/installationguide.html#run-the-tests
(first on one core, and then on the available number of cores per node, 
but not more than 8).
Then you can use, for example, this scalability test: 
https://wiki.fysik.dtu.dk/gpaw/devel/benchmarks.html#medium-size-system
When collecting timing results, make sure to run each test a few times 
(take the average, or the fastest run) on nodes exclusively reserved for you.
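
A sketch of that test sequence, using the gpaw-test and gpaw-python
commands from the linked guide (the exact mpirun invocation depends on
the cluster's MPI setup):

  # serial run first
  gpaw-test 2>&1 | tee test-1core.log
  # then in parallel on one node (here 8 cores); repeat a few times
  # on an exclusively reserved node when collecting timings
  mpirun -np 8 gpaw-python `which gpaw-test` 2>&1 | tee test-8cores.log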

Marcin

On 07/11/12 14:58, Ole Holm Nielsen wrote:
> Dear Vladislav,
>
> I recommend that you ask on the gpaw-users mailing list.  I cannot help you.
>
> Best regards,
> Ole
>
> On 07/11/2012 02:56 PM, Vladislav Ivanistsev wrote:
>> Dear Ole,
>> I am relying on your experience in asking for advice about GPAW
>> installation on Archie-West <http://www.archie-west.ac.uk/>.
>>
>> Would it be most efficient to install GPAW 0.9
>> (lapack;blas/scalapack+openmpi) on Intel Xeon (X5650, E7-430) under
>> Scientific Linux? Might ACML be better than LAPACK in this case? Is the
>> difference between openmpi 1.4.5 and 1.6.0 important for GPAW
>> performance? Are there any special options to optimize the installation
>> for Xeon?
>>
>> Sincerely yours,
>> --
>> M.Sc. Vladislav Ivaništšev
>> Institute of Chemistry, University of Tartu
>> Ravila 14a, 50411, Tartu, Estonia
>> +372 55 685357
>>


-- 
***********************************

Marcin Dulak
Technical University of Denmark
Department of Physics
Building 307, Room 229
DK-2800 Kongens Lyngby
Denmark
Tel.: (+45) 4525 3157
Fax.: (+45) 4593 2399
email: Marcin.Dulak at fysik.dtu.dk

***********************************


