[gpaw-users] Trouble Compiling GPAW

Tomlinson, Warren (CDR) wwtomlin at nps.edu
Tue Mar 15 20:16:33 CET 2016


Hello-
I’m having an issue compiling GPAW on a cluster and I’m wondering if someone can help.

The system is a Cray XC30 cluster (called Lightning)
I am attempting to compile in the GNU environment (although I get the same error when trying with Intel)

Relevant modules:
gcc/5.1.0
cray-mpich/7.2.6
cray-libsci/13.2.0
python/gnu/2.7.9
numpy/gnu/1.9.2
scipy/gnu/0.15.1


Relevant excerpts from system user’s manual:
5.1.1. Message Passing Interface (MPI)

This release of MPI-2 derives from Argonne National Laboratory MPICH-2 and implements the MPI-2.2 standard except for spawn support, as documented by the MPI Forum in "MPI: A Message Passing Interface Standard, Version 2.2."

The Message Passing Interface (MPI) is part of the software support for parallel programming across a network of computer systems through a technique known as message passing. MPI establishes a practical, portable, efficient, and flexible standard for message passing that makes use of the most attractive features of a number of existing message-passing systems, rather than selecting one of them and adopting it as the standard. See "man intro_mpi" for additional information.

When creating an MPI program on Lightning, ensure the following:

• That the default MPI module (cray-mpich) has been loaded. To check this, run the "module list" command. If cray-mpich is not listed and a different MPI module is listed, use the following command:
module swap other_mpi_module cray-mpich

If no MPI module is loaded, load the cray-mpich module.

module load cray-mpich

• That the source code includes one of the following lines:
INCLUDE "mpif.h"        ## for Fortran, or
#include <mpi.h>        ## for C/C++

To compile an MPI program, use the following examples:

ftn -o pi_program mpi_program.f ## for Fortran, or
cc -o mpi_program mpi_program.c         ## for C/C++

———————————————————————

5.1.3. Open Multi-Processing (OpenMP)

OpenMP is a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications. It supports shared-memory multiprocessing programming in C, C++ and Fortran, and consists of a set of compiler directives, library routines, and environment variables that influence compilation and run-time behavior.

When creating an OpenMP program on Lightning, ensure the following:

• That the default MPI module (cray-mpich) has been loaded. To check this, run the "module list" command. If cray-mpich is not listed and a different MPI module is listed, use the following command:
module swap other_mpi_module cray-mpich

If no MPI module is loaded, load the cray-mpich module.

module load cray-mpich

• That if using OpenMP functions (for example, omp_get_wtime), the source code includes one of the following lines:

INCLUDE 'omp.h'     ## for Fortran, or
#include <omp.h>    ## for C/C++

Or, if the code is written in Fortran 90 or later, the following line may be used instead:

USE omp_lib

• That the compile command includes an option to reference the OpenMP library. The PGI, Cray, Intel, and GNU compilers support OpenMP, and each one uses a different option.

To compile an OpenMP program, use the following examples:

For C/C++ codes:

cc -o OpenMP_program -mp=nonuma OpenMP_program.c  ## PGI
cc -o OpenMP_program -h omp OpenMP_program.c      ## Cray
cc -o OpenMP_program -openmp OpenMP_program.c     ## Intel
cc -o OpenMP_program -fopenmp OpenMP_program.c    ## GNU

———————————————————————

5.2. Available Compilers

Lightning has four programming environment suites.

• Portland Group (PGI)
• Cray Fortran and C/C++
• Intel
• GNU
On Lightning, different sets of compilers are used to compile codes for serial vs. parallel execution.

Compiling for the Compute Nodes

Codes compiled to run on the compute nodes may be serial or parallel. The x86-64 instruction set for Intel Ivy Bridge E5-2697 processors has extensions for the Floating Point Unit (FPU) that require the module craype-ivybridge to be loaded. This module is loaded for you by default. To compile codes for execution on the compute nodes, the same compile commands are available in all programming environment suites as shown in the following table:

Compute Node Compiler Commands
Language     PGI   Cray  Intel  GNU   Serial/Parallel
C            cc    cc    cc     cc    Serial/Parallel
C++          CC    CC    CC     CC    Serial/Parallel
Fortran 77   f77   f77   f77    f77   Serial/Parallel
Fortran 90   ftn   ftn   ftn    ftn   Serial/Parallel
——————————————————————————————


Building the serial version of GPAW gives me no trouble: I have compiled it, run all the tests, and they all pass.  When building the custom interpreter, the compilation succeeds, but the linking fails.

Below is the link line causing the issue:
cc -o build/bin.linux-x86_64-2.7//gpaw-python build/temp.linux-x86_64-2.7/c/woperators.o build/temp.linux-x86_64-2.7/c/plt.o build/temp.linux-x86_64-2.7/c/lapack.o build/temp.linux-x86_64-2.7/c/symmetry.o build/temp.linux-x86_64-2.7/c/plane_wave.o build/temp.linux-x86_64-2.7/c/operators.o build/temp.linux-x86_64-2.7/c/mlsqr.o build/temp.linux-x86_64-2.7/c/transformers.o build/temp.linux-x86_64-2.7/c/utilities.o build/temp.linux-x86_64-2.7/c/spline.o build/temp.linux-x86_64-2.7/c/lfc2.o build/temp.linux-x86_64-2.7/c/localized_functions.o build/temp.linux-x86_64-2.7/c/wigner_seitz.o build/temp.linux-x86_64-2.7/c/mpi.o build/temp.linux-x86_64-2.7/c/lfc.o build/temp.linux-x86_64-2.7/c/bc.o build/temp.linux-x86_64-2.7/c/hdf5.o build/temp.linux-x86_64-2.7/c/blas.o build/temp.linux-x86_64-2.7/c/fftw.o build/temp.linux-x86_64-2.7/c/lcao.o build/temp.linux-x86_64-2.7/c/point_charges.o build/temp.linux-x86_64-2.7/c/_gpaw.o build/temp.linux-x86_64-2.7/c/cerf.o build/temp.linux-x86_64-2.7/c/blacs.o build/temp.linux-x86_64-2.7/c/bmgs/bmgs.o build/temp.linux-x86_64-2.7/c/xc/rpbe.o build/temp.linux-x86_64-2.7/c/xc/tpss.o build/temp.linux-x86_64-2.7/c/xc/xc.o build/temp.linux-x86_64-2.7/c/xc/revtpss_c_pbe.o build/temp.linux-x86_64-2.7/c/xc/pbe.o build/temp.linux-x86_64-2.7/c/xc/libxc.o build/temp.linux-x86_64-2.7/c/xc/m06l.o build/temp.linux-x86_64-2.7/c/xc/pw91.o build/temp.linux-x86_64-2.7/c/xc/revtpss.o build/temp.linux-x86_64-2.7/c/xc/ensemble_gga.o build/temp.linux-x86_64-2.7/c/xc/xc_mgga.o build/temp.linux-x86_64-2.7/c/xc/vdw.o  -L/home/wwtomlin/xc/lib -L/opt/gcc/5.1.0/snos/lib64 -L/opt/cray/libsci/13.2.0/GNU/5.1/x86_64/lib -L/app/COST/python/2.7.9/gnu/lib -L/opt/cray/mpt/7.2.6/gni/mpich-gnu/51/lib -L/app/COST/python/2.7.9/gnu/lib/python2.7/config -lxc -lpython2.7 -lpthread -ldl  -lutil -lm  -pg -L. 
-L/app/COST/bzip2/1.0.6/gnu//lib -L/app/COST/tcltk/8.6.4/gnu//lib -L/app/COST/dependencies/sqlite/3081101/gnu//lib -L/app/COST/dependencies/readline/6.3/gnu//lib -Xlinker -export-dynamic


Below are the warnings and error I get:
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(dynload_shlib.o): In function `_PyImport_GetDynLoadFunc':
/app/COST/source/Python-2.7.9/Python/dynload_shlib.c:130: warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(posixmodule.o): In function `posix_tmpnam':
/app/COST/source/Python-2.7.9/./Modules/posixmodule.c:7575: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp'
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(posixmodule.o): In function `posix_tempnam':
/app/COST/source/Python-2.7.9/./Modules/posixmodule.c:7522: warning: the use of `tempnam' is dangerous, better use `mkstemp'
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(posixmodule.o): In function `posix_initgroups':
/app/COST/source/Python-2.7.9/./Modules/posixmodule.c:4161: warning: Using 'initgroups' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(pwdmodule.o): In function `pwd_getpwall':
/app/COST/source/Python-2.7.9/./Modules/pwdmodule.c:165: warning: Using 'getpwent' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(pwdmodule.o): In function `pwd_getpwnam':
/app/COST/source/Python-2.7.9/./Modules/pwdmodule.c:139: warning: Using 'getpwnam' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(pwdmodule.o): In function `pwd_getpwuid':
/app/COST/source/Python-2.7.9/./Modules/pwdmodule.c:114: warning: Using 'getpwuid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/app/COST/python/2.7.9/gnu/lib/libpython2.7.a(pwdmodule.o): In function `pwd_getpwall':
/app/COST/source/Python-2.7.9/./Modules/pwdmodule.c:164: warning: Using 'setpwent' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/app/COST/source/Python-2.7.9/./Modules/pwdmodule.c:176: warning: Using 'endpwent' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/bin/ld: dynamic STT_GNU_IFUNC symbol `strcmp' with pointer equality in `/usr/lib/../lib64/libc.a(strcmp.o)' can not be used when making an executable; recompile with -fPIE and relink with -pie
collect2: error: ld returned 1 exit status


Finally, here are the lines from my customize.py file:
compiler = 'cc'
define_macros += [('PARALLEL', '1')]
mpicompiler = 'cc'
mpilinker = mpicompiler

libraries = ['xc']
scalapack = True

mpi_library_dirs += ['/opt/cray/mpt/7.2.6/gni/mpich-gnu/51/lib']
library_dirs += ['/home/wwtomlin/xc/lib']
library_dirs += ['/opt/gcc/5.1.0/snos/lib64']
library_dirs += ['/opt/cray/libsci/13.2.0/GNU/5.1/x86_64/lib']
library_dirs += ['/app/COST/python/2.7.9/gnu/lib']

mpi_include_dirs += ['/opt/cray/mpt/7.2.6/gni/mpich-gnu/51/include']
include_dirs += ['/home/wwtomlin/xc/include']
include_dirs += ['/opt/gcc/5.1.0/snos/include']
include_dirs += ['/opt/cray/libsci/13.2.0/GNU/5.1/x86_64/include']
include_dirs += ['/app/COST/python/2.7.9/gnu/include']

define_macros += [('GPAW_NO_UNDERSCORE_CBLACS', '1')]
define_macros += [('GPAW_NO_UNDERSCORE_CSCALAPACK', '1')]


My limited experience in this area suggests there may be a problem with the way libc.a was compiled, but I’m not sure.  I can’t do anything about that directly, but I could bring the problem to the attention of the system administrators, who are usually pretty helpful about getting things updated.  Is that what I need to do?
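One untested guess at a workaround on my end: the ld message complains about the static libc.a, and the Cray programming environment has an environment variable to request dynamic linking from the compiler wrappers instead of the default static link. I haven't verified this fixes the gpaw-python link, but something like this before rebuilding might be worth a try:

```shell
# Untested guess: ask the Cray compiler wrappers (cc/CC/ftn) for a
# dynamic link instead of the default static one, then rebuild.
export CRAYPE_LINK_TYPE=dynamic
```

(Passing -dynamic directly on the cc link line should, as far as I know, have the same effect.)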

Thank you for any help,
Warren
PhD Student
Naval Postgraduate School
