[gpaw-users] gpaw 1.5.2 installation

Richard Stana richard.stana at gmail.com
Mon Aug 9 14:26:05 CEST 2021


Dear Ask,

Sorry for the misunderstanding. GPAW was installed successfully after I
installed gcc version 4.8.

I think my previous message was really hard to read, so I will structure it
a little more clearly below (my comments, the installed package versions,
and the program outputs are separated):

I have installed libxc and BLAS from apt:
libxc-dev/bionic,now 3.0.0-1build1 amd64 [installed]
libblas-dev/bionic,now 3.7.1-4ubuntu1 amd64 [installed]

I am running Ubuntu 18.04. I created a conda environment with Python 3.7
(I also tried 3.6) and installed ASE:
pip install ase==3.19.3
numpy 1.21.1 and scipy 1.7.1 were installed automatically.
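
For completeness, the whole environment setup in one place (the env name
gpaw_c matches the paths shown further below):

sudo apt install libxc-dev libblas-dev   # libxc 3.0.0 and BLAS 3.7.1 on bionic
conda create -n gpaw_c python=3.7        # I also tried python=3.6
conda activate gpaw_c
pip install ase==3.19.3                  # pulls in numpy 1.21.1 and scipy 1.7.1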

Then I tried to install gpaw:
pip install gpaw==1.5.2
I got the error message you can see in the attached file "compiler_error.txt".

I am not sure what the problem with the compiler is, but I tried several
versions of gcc and the only one that works is gcc 4.8.
With gcc 4.8 I was able to install gpaw-1.5.2.
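
Since distutils-based builds honor the CC environment variable, the working
compiler can be pinned explicitly (a sketch, assuming the gcc 4.8 binary is
installed as gcc-4.8 on PATH):

CC=gcc-4.8 pip install gpaw==1.5.2   # force the build to use gcc 4.8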

With the "gpaw info" command I am getting:
python-3.7.11            /home/richard_stana/.conda/envs/gpaw_c/bin/python
gpaw-1.5.2               /home/richard_stana/.conda/envs/gpaw_c/lib/python3.7/site-packages/gpaw/
ase-3.19.3               /home/richard_stana/.conda/envs/gpaw_c/lib/python3.7/site-packages/ase/
numpy-1.21.1             /home/richard_stana/.conda/envs/gpaw_c/lib/python3.7/site-packages/numpy/
scipy-1.7.1              /home/richard_stana/.conda/envs/gpaw_c/lib/python3.7/site-packages/scipy/
_gpaw                    /home/richard_stana/.conda/envs/gpaw_c/lib/python3.7/site-packages/_gpaw.cpython-37m-x86_64-linux-gnu.so
parallel                 /home/richard_stana/.conda/envs/gpaw_c/bin/gpaw-python
MPI enabled              no
scalapack                no (MPI unavailable)
Elpa                     no (MPI unavailable)
FFTW                     yes
libvdwxc                 no
PAW-datasets             1: /home/richard_stana/gpaw-dataset/gpaw-setups-0.9.20000
                         2: /usr/local/share/gpaw-setups
                         3: /usr/share/gpaw-setups

MPI not enabled.  Check parallel configuration with: gpaw -P1 info

Running "gpaw -P1 info" gives:

[jupyter2:13741] PMIX ERROR: UNPACK-PAST-END in file unpack.c at line 205
[jupyter2:13741] PMIX ERROR: UNPACK-PAST-END in file unpack.c at line 146
[jupyter2:13741] PMIX ERROR: UNPACK-PAST-END in file client/pmix_client.c
at line 224
[jupyter2:13741] OPAL ERROR: Error in file pmix2x_client.c at line 112
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[jupyter2:13741] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not able to
guarantee that all other processes were killed!
-------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status,
thus causing
the job to be terminated. The first process to do so was:

  Process name: [[21454,1],0]
  Exit code:    1
--------------------------------------------------------------------------

I have already installed Open MPI with apt:
libopenmpi-dev/bionic,now 2.1.1-8 amd64 [installed]
openmpi-bin/bionic,now 2.1.1-8 amd64 [installed,automatic]
openmpi-common/bionic,now 2.1.1-8 all [installed,automatic]

I am not sure if pip will automatically add support for MPI, so I cloned
https://gitlab.com/gpaw/gpaw/-/tree/release-1.5.2
and changed customize.py as you can see in the attachments.
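
The attachment has the exact diff; as an illustration only (this is a sketch,
not the actual attachment), enabling MPI in the GPAW 1.5.x customize.py
typically amounts to uncommenting lines like these:

# customize.py (sketch): build the parallel gpaw-python interpreter
mpicompiler = 'mpicc'   # compile the parallel parts with the MPI wrapper
mpilinker = 'mpicc'     # link gpaw-python with the same wrapper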

After that I installed gpaw with "pip install .", and when I run "gpaw info"
it prints nothing and freezes (it does not react to Ctrl+C), so I have to
close the SSH connection or kill the process.
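
To see where it hangs, one option is to attach a debugger from a second
terminal (a sketch, assuming gdb is available; <PID> is the process id found
in the first step):

pgrep -af gpaw                                  # find the PID of the frozen process
gdb -p <PID> -batch -ex 'thread apply all bt'   # backtrace often shows MPI_Init waiting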

----------------------------------------------------------------------------

I tried many more combinations of gpaw installations, MPI versions, and
installation procedures (from packages, from source, ...), but none of them
worked.
For example, in another setup with different package versions I get this
when running "gpaw info":
[jupyter:09517] mca_base_component_repository_open: unable to open
mca_patcher_overwrite:
/usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_patcher_overwrite.so:
undefined symbol: mca_patcher_base_patch_t_class (ignored)
[jupyter:09517] mca_base_component_repository_open: unable to open
mca_shmem_mmap:
/usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_shmem_mmap.so: undefined
symbol: opal_show_help (ignored)
[jupyter:09517] mca_base_component_repository_open: unable to open
mca_shmem_posix:
/usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_shmem_posix.so: undefined
symbol: opal_shmem_base_framework (ignored)
[jupyter:09517] mca_base_component_repository_open: unable to open
mca_shmem_sysv:
/usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_shmem_sysv.so: undefined
symbol: opal_show_help (ignored)
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_init failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[jupyter:9517] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not able to
guarantee that all other processes were killed!
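
The "undefined symbol" messages above look like two different Open MPI
installations being mixed (for example one from conda and the one from apt).
A quick check with standard tools, adjusting the _gpaw path to the setup in
question:

which mpicc mpiexec   # a conda env can shadow the system Open MPI
mpiexec --version     # should report 2.1.1 if the apt version is picked up
ldd /home/richard_stana/.conda/envs/gpaw_c/lib/python3.7/site-packages/_gpaw.cpython-37m-x86_64-linux-gnu.so | grep mpi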

----------------------------------------------------------------------------

Anyway "OK.  It says "lto1: internal compiler error:".  So the compiler
crashed and GPAW didn't compile.  This is a problem with the compiler."
What can be a problem with compiler and how can I solve it when I want to
use never compiler?
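
One thing I can try myself: "lto1" is gcc's link-time-optimization pass, and
conda Pythons often embed -flto in the compile flags that distutils reuses,
which a mismatched system gcc may not handle. A guess at a workaround,
assuming distutils appends CFLAGS after the baked-in flags:

CFLAGS="-fno-lto" pip install gpaw==1.5.2   # newer gcc, with LTO disabled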

Thank you for your help.

S pozdravom / Best Regards
Richard Staňa


On Tue, 3 Aug 2021 at 15:58, Ask Hjorth Larsen <asklarsen at gmail.com> wrote:

> Dear Richard,
>
> Am Mo., 2. Aug. 2021 um 15:11 Uhr schrieb Richard Stana
> <richard.stana at gmail.com>:
> >
> > Hello Ask,
> >
> > Thank you for your answer.
> >
> > I have installed libxc and BLAS from apt:
> > libxc-dev/bionic,now 3.0.0-1build1 amd64 [installed]
> > libblas-dev/bionic,now 3.7.1-4ubuntu1 amd64 [installed]
> >
> > I am running Ubuntu 18, I created conda env with python 3.7 or 3.6 and
> installed ase:
> > pip install ase==3.19.3
> > numpy 1.21.1 and scipy 1.7.1 were installed automatically.
> >
> > Then I tried to install gpaw:
> > pip install gpaw==1.5.2
> > I got error message you can see in attached file "compiler_error.txt"
>
> OK.  It says "lto1: internal compiler error:".  So the compiler
> crashed and GPAW didn't compile.  This is a problem with the compiler.
>
> (Since GPAW didn't compile, the 'gpaw info' listing must refer to
> another, unrelated installation on that machine.)
>
> Best regards
> Ask
>