[gpaw-users] Trouble converging simple molecule with BEEF-vdW

Georg Kastlunger georg.kast at hotmail.com
Tue Jan 22 22:31:32 CET 2019


Dear Ask,

I did some more testing and found the reason for the segmentation fault 
I got last week.

When compiling GPAW with libvdwxc and the MKL libraries, the problem was
not another BLAS library after all. It was that I used the Intel OpenMPI
compiler wrappers. With this compiler, I always get a segmentation fault
when running a calculation that uses libvdwxc as the XC backend.

If I use the MVAPICH compiler wrappers instead, everything works without
any problems.
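
For reference, the only change on my side was which MPI compiler wrapper
GPAW's build picks up. A minimal customize.py sketch (assuming the usual
mpicompiler/mpilinker hooks; the path is only a placeholder for whatever
your MVAPICH module provides):

# customize.py sketch: build GPAW's parallel interpreter with the
# MVAPICH compiler wrapper instead of the Intel OpenMPI one.
# Replace the placeholder path with your site's mvapich installation.
mpicompiler = '/path/to/mvapich2/bin/mpicc'
mpilinker = mpicompiler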

I hope this helps people who run into the same trouble as I did.

This is my last mail on the topic, sorry for spamming. ;)

Best wishes,
Georg

On 1/16/19 5:59 PM, Ask Hjorth Larsen via gpaw-users wrote:
> Dear Georg,
>
> On Wed, Jan 16, 2019 at 22:53, Georg Kastlunger
> <georg_kastlunger at brown.edu> wrote:
>> Dear Ask,
>>
>> I got it to work! The reason for the segmentation fault is an
>> embarrassing one.
>>
>> I had compiled GPAW against MKL, but when I ran the test calculations I
>> also had the paths to other BLAS libraries set. It seems that having
>> both MKL (with its own BLAS) and a separate BLAS in the library paths at
>> the same time is what caused the conflict.
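>>
>> A quick sanity check along these lines (just a sketch, nothing
>> GPAW-specific; it only inspects LD_LIBRARY_PATH) would have caught my
>> mistake:
>>
>> import os
>>
>> # Warn if both an MKL directory and a separate BLAS directory are on
>> # LD_LIBRARY_PATH at the same time.
>> paths = os.environ.get('LD_LIBRARY_PATH', '').split(':')
>> mkl_dirs = [p for p in paths if 'mkl' in p.lower()]
>> blas_dirs = [p for p in paths if 'blas' in p.lower() and 'mkl' not in p.lower()]
>> if mkl_dirs and blas_dirs:
>>     print('Both MKL and another BLAS on LD_LIBRARY_PATH:', mkl_dirs + blas_dirs)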
>>
>> Thank you very much for your help. I will use this version from now on.
> Glad to hear it, and thanks for using libvdwxc!
>
> Best regards
> Ask
>
>> Best wishes,
>> Georg
>>
>> On 1/15/19 8:14 PM, Ask Hjorth Larsen wrote:
>>> Dear Georg, (re-added mailing list)
>>>
>>> On Tue, Jan 15, 2019 at 23:32, Georg Kastlunger
>>> <georg_kastlunger at brown.edu> wrote:
>>>> Dear Ask,
>>>>
>>>> Thank you for your answer. I have now updated to the current development
>>>> version of GPAW and compiled it with libvdwxc.
>>>>
>>>> However, I get a segmentation fault when I run a calculation in
>>>> parallel. The script I used is the exact one you sent in your last mail.
>>> Right, combining fftw and MKL is tricky.  Hopefully we will solve that
>>> now though.
>>>
>>> What happens if you do "export GPAW_FFTWSO=<complete path to your
>>> fftw.so>" before running the calculation?  Make sure the compute node
>>> 'sees' the variable, e.g., by checking os.environ['GPAW_FFTWSO'] at
>>> runtime.
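>>>
>>> For example, a couple of lines like these at the top of your script
>>> (just a sketch) will show whether the variable actually reached the
>>> compute node and whether it points at an existing file:
>>>
>>> import os
>>>
>>> # Print and verify the fftw library path that GPAW will be told to load.
>>> fftw_path = os.environ.get('GPAW_FFTWSO')
>>> print('GPAW_FFTWSO =', fftw_path)
>>> assert fftw_path and os.path.exists(fftw_path), 'GPAW_FFTWSO not set/visible'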
>>>
>>> I suspect there will be nothing to gain from putting 'vdwxc' *before*
>>> the other libraries in customize.py, but if the above line does not
>>> solve it, maybe try that anyway.
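>>>
>>> Concretely, that would mean something like the following in customize.py
>>> (only a sketch; I am assuming the usual 'libraries'/'library_dirs'/
>>> 'include_dirs' lists, and the paths are placeholders):
>>>
>>> # customize.py sketch: put 'vdwxc' ahead of the MKL/fftw entries
>>> libraries = ['vdwxc'] + libraries
>>> library_dirs += ['/path/to/libvdwxc/install/lib']
>>> include_dirs += ['/path/to/libvdwxc/install/include']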
>>>
>>> Sorry for the 'maybe'-advice but things are sometimes quite tricky
>>> because MKL is both evil and nasty, and pollutes the C namespace with
>>> fftw-lookalike functions that don't implement the same functionality
>>> as the true fftw, and will easily cause segfaults when used together
>>> with the good and wholesome libraries such as the real fftw. :(
>>>
>>> Best regards
>>> Ask
>>>
>>>
>>>> I configured libvdwxc using:
>>>>
>>>> MPI_PATH="/gpfs/runtime/opt/mpi/openmpi_2.0.3_intel/"
>>>> PREFIX="/gpfs_home/gkastlun/data/gkastlun/opt/libvdwxc/install"
>>>> CFLAGS="-O3 "
>>>> FCFLAGS="-g "
>>>> ./configure CFLAGS=$CFLAGS FCFLAGS=$FCFLAGS --prefix=$PREFIX \
>>>>     --with-mpi=$MPI_PATH
>>>>
>>>> It runs without warnings or errors. The short summary is:
>>>>
>>>> configure: Final build parameters
>>>> configure: ----------------------
>>>> configure:
>>>> configure: TSTAMP   = 20190115T175432-0500
>>>> configure:
>>>> configure: DEBUG    = no (init: def)
>>>> configure: TIMING   = no (init: def)
>>>> configure:
>>>> configure: FFTW3    = yes (init: def)
>>>> configure: MPI      = yes (init: dir)
>>>> configure: PFFT     = no (init: def)
>>>> configure:
>>>> configure: CPP      = /gpfs/runtime/opt/mpi/openmpi_2.0.3_intel//bin/mpicc -E
>>>> configure: CPPFLAGS =
>>>> configure: CC       = /gpfs/runtime/opt/mpi/openmpi_2.0.3_intel//bin/mpicc
>>>> configure: MPICC    =
>>>> configure: CFLAGS   = -O3
>>>> configure: FC       = /gpfs/runtime/opt/mpi/openmpi_2.0.3_intel//bin/mpif90
>>>> configure: MPIFC    =
>>>> configure: FCFLAGS  = -g
>>>> configure: LDFLAGS  =
>>>> configure: LIBS     = -lfftw3_mpi -lfftw3 -lm
>>>>
>>>> When I now run a calculation using "xc={'name': 'BEEF-vdW', 'backend':
>>>> 'libvdwxc'}" it works and converges fast in serial mode, giving the
>>>> following output in the preamble:
>>>>
>>>> XC parameters: vdW-BEEF with libvdwxc
>>>>      Mode: serial
>>>>      Semilocal: BEEVDW with 2 nearest neighbor stencil
>>>>      Corresponding non-local functional: vdW-DF2
>>>>      Local blocksize: 272 x 176 x 248
>>>>      PAW datasets: PBE
>>>>
>>>> In parallel mode it leads to a segmentation fault right after the
>>>> initialization with the preamble output:
>>>>
>>>> XC parameters: vdW-BEEF with libvdwxc
>>>>      Mode: mpi with 4 cores
>>>>      Semilocal: BEEVDW with 2 nearest neighbor stencil
>>>>      Corresponding non-local functional: vdW-DF2
>>>>      Local blocksize: 68 x 176 x 248
>>>>      PAW datasets: PBE
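>>>>
>>>> A quick way to confirm how many MPI ranks GPAW actually sees (just a
>>>> sketch using gpaw.mpi) is:
>>>>
>>>> from gpaw.mpi import world
>>>> # Each rank reports the total number of MPI processes in the run.
>>>> print('MPI ranks:', world.size)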
>>>>
>>>> I have compiled GPAW with the debug option, and the error I get is
>>>> attached. Unfortunately, I cannot really make sense of it.
>>>>
>>>> I also attached my customize.py.
>>>>
>>>> Is this a known issue or did I do something wrong?
>>>>
>>>> Best wishes and thank you for your help,
>>>> Georg
>>>>
>>>>
>>>> On 1/15/19 8:47 AM, Ask Hjorth Larsen via gpaw-users wrote:
>>>>> Dear Georg,
>>>>>
>>>>> On Sun, Jan 13, 2019 at 17:57, Georg Kastlunger via
>>>>> gpaw-users <gpaw-users at listserv.fysik.dtu.dk> wrote:
>>>>>> Dear Ask,
>>>>>>
>>>>>> Thank you for your quick reply.
>>>>>>
>>>>>> I was able to converge the molecule now. The problem seemed to be the
>>>>>> grid spacing: if I use a grid spacing below 0.18 Å, the calculation
>>>>>> does not converge.
>>>>> Strange.  I did a test with libvdwxc, vacuum 6.5, and spacing 0.13 on
>>>>> a desktop computer.  It converged in 19 iterations:
>>>>>
>>>>> http://dcwww.camd.dtu.dk/~askhl/files/vdw-molecule-kastlunger.py
>>>>> http://dcwww.camd.dtu.dk/~askhl/files/vdw-molecule-kastlunger.txt
>>>>>
>>>>> (I again recommend switching to libvdwxc if at all possible, since the
>>>>> old implementation is very inefficient)
>>>>>
>>>>> Best regards
>>>>> Ask
>>>>>
>>>>>> Best wishes,
>>>>>> Georg
>>>>>>
>>>>>>
>>>>>> On 1/11/19 9:11 PM, Ask Hjorth Larsen via gpaw-users wrote:
>>>>>>> Dear Georg,
>>>>>>>
>>>>>>> On Sat, Jan 12, 2019 at 02:00, Georg Kastlunger via
>>>>>>> gpaw-users <gpaw-users at listserv.fysik.dtu.dk> wrote:
>>>>>>>> Dear GPAW community,
>>>>>>>>
>>>>>>>> I am currently having some trouble converging a calculation of a simple
>>>>>>>> molecule (4-mercaptobenzoic acid) with the BEEF-vdW XC functional.
>>>>>>>>
>>>>>>>> Relaxing the same structure with RPBE did not cause any problems
>>>>>>>> before. Also, when the molecule is adsorbed on a metal slab, the
>>>>>>>> calculation converges like a charm.
>>>>>>>>
>>>>>>>> I have attached the structure and a minimal script. As you can see, I
>>>>>>>> have already played around with some parameters for improving
>>>>>>>> convergence. Unfortunately, nothing helped.
>>>>>>>>
>>>>>>>> Has anyone experienced the same issue before, and does anyone know of
>>>>>>>> some tricks to converge this system?
>>>>>>> I ran it using the libvdwxc backend and it converged in 16 iterations,
>>>>>>> although I changed gpts/vacuum so it would run on an old laptop.
>>>>>>>
>>>>>>> from gpaw import GPAW
>>>>>>> from gpaw.eigensolvers import Davidson
>>>>>>>
>>>>>>> # 'atoms' is the molecule from your attached structure file
>>>>>>> calc = GPAW(h=0.2,
>>>>>>>             xc={'name': 'BEEF-vdW', 'backend': 'libvdwxc'},
>>>>>>>             txt='out.txt',
>>>>>>>             eigensolver=Davidson(3),
>>>>>>>             )
>>>>>>> atoms.set_calculator(calc)
>>>>>>> atoms.center(vacuum=4.0)
>>>>>>>
>>>>>>> The libvdwxc backend requires installing (drumroll) libvdwxc.  It is
>>>>>>> much faster, uses less memory, and is scalable (the old implementation
>>>>>>> scales to 20 cores max, and some things are not parallel).  However it
>>>>>>> uses a different parametrization of the kernel and does not include
>>>>>>> the 'soft correction', which means the values will differ slightly.
>>>>>>>
>>>>>>> As for the old (slow) implementation, I see now that it also converged
>>>>>>> after 16 iterations with the same parameters as above.  But I am using
>>>>>>> GPAW master (i.e. 1.5.0 basically).
>>>>>>>
>>>>>>> Best regards
>>>>>>> Ask
>>>>>>>
>>>>>>>> Thank you in advance,
>>>>>>>>
>>>>>>>> Georg
>>>>>>>>



More information about the gpaw-users mailing list