[gpaw-users] How to use spectral parallelization instead of k-point parallelization when calculating the response function
Anders Hellman
ahell at chalmers.se
Wed Nov 9 21:28:57 CET 2011
Thank you, Jun, for answering so quickly.
I notice that the memory estimate is drastically reduced when using just one frequency. Using the --gpaw=df_dry_run tool I get a memory estimate of only 1.5 MB/cpu, which is nothing. However, when I run it on 8 cores with 16 GB of memory I still get the following message:
Traceback (most recent call last):
File "./run.py", line 42, in ?
df.get_macroscopic_dielectric_constant()
File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/df.py", line 281, in get_macroscopic_dielectric_constant
df1, df2 = self.get_dielectric_function(xc=xc)
File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/df.py", line 134, in get_dielectric_function
dm_wGG = self.get_dielectric_matrix(xc=xc)
File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/df.py", line 47, in get_dielectric_matrix
self.initialize()
File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/chi.py", line 162, in initialize
calc.density.D_asp)
File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/kernel.py", line 100, in calculate_Kxc
rgd.r_g)).reshape(npw,npw,rgd.N)
MemoryError
GPAW CLEANUP (node 3): exceptions.MemoryError occurred. Calling MPI_Abort!
I interpret this as the calculation using too much memory. If I reduce ecut from 150 to 100, the calculation runs without problems (the memory estimate is 0.5 MB/cpu). The number of k-points is 1728, and I guess that means 0.5*1728 = 864 MB of memory is used per core, compared with ecut=150 and a dry-run estimate of 1.5 MB, which would give about 2.5 GB/core (more than I have). I get the feeling that I really must use domain decomposition. However, I seem to be unable to get kcommsize to work (a sketch of how the frequency grid and kcommsize have to line up is appended after the quoted messages below). Do you have any ideas?
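To make the back-of-envelope numbers explicit, here is a rough sketch of that arithmetic; it assumes the df_dry_run figure scales linearly with the 1728 k-points handled per core, which is just my guess, not something the dry run guarantees:

# Assumption: the df_dry_run estimate is per k-point and scales linearly
# with the number of k-points handled on each core.
nkpts = 1728                      # total number of k-points in the calculation
dryrun_mb_ecut100 = 0.5           # df_dry_run estimate at ecut=100 (MB/cpu)
dryrun_mb_ecut150 = 1.5           # df_dry_run estimate at ecut=150 (MB/cpu)

print('ecut=100: about %.2f GB/core' % (dryrun_mb_ecut100 * nkpts / 1024.))  # ~0.84 GB
print('ecut=150: about %.2f GB/core' % (dryrun_mb_ecut150 * nkpts / 1024.))  # ~2.53 GB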
Cheers,
Anders
On Nov 9, 2011, at 7:45 PM, jun yan wrote:
> Dear Anders,
>
> How many w frequency points do you use in your calculation? For the non-Hilbert-transform calculation you are running, it is better to use either one frequency point or a number of frequency points that is divisible by the number of cores.
>
> All the best,
> Jun
>
> On Nov 9, 2011, at 10:15 AM, Anders Hellman wrote:
>
>> Dear GPAW-users,
>>
>> I am trying to calculate the response function; however, I think I am hitting a memory limit and would like to parallelize over the spectral function. I set kcommsize, but whenever kcommsize is set to a number below the number of cores I get the following error message:
>>
>> File "run.py", line 42, in ?
>> df.get_macroscopic_dielectric_constant()
>> File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/df.py", line 281, in get_macroscopic_dielectric_constant
>> df1, df2 = self.get_dielectric_function(xc=xc)
>> File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/df.py", line 134, in get_dielectric_function
>> dm_wGG = self.get_dielectric_matrix(xc=xc)
>> File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/df.py", line 47, in get_dielectric_matrix
>> self.initialize()
>> File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/chi.py", line 119, in initialize
>> self.parallel_init()
>> File "/lap/gpaw/0.8.0.8092/lib64/python2.4/site-packages/gpaw/response/chi.py", line 447, in parallel_init
>> assert self.Nw % (size / self.kcomm.size) == 0
>>
>>
>> When kcommsize is above the number of cores the calculation runs, but then it is parallelized over the k-points and I hit the memory limit.
>>
>> My response function object looks like this:
>>
>> df = DF(calc='out.gpw', q=q, w=w, eta=0.0001,
>> hilbert_trans=False, txt='df_1.out',
>> ecut=150, optical_limit=True, kcommsize=1)
>>
>> What am I doing wrong? Please, any help is very much appreciated.
>>
>> Cheers,
>> Anders
>>
>
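The assertion that fails in chi.py, assert self.Nw % (size / self.kcomm.size) == 0, seems to require that the number of frequency points Nw be divisible by size/kcommsize, i.e. by the number of groups used for the spectral (frequency) parallelization. On 8 cores with kcommsize=1 that means Nw must be a multiple of 8, which matches Jun's advice above. Below is a minimal sketch of a setup that satisfies this constraint; the 240-point frequency grid and the q value are illustrative placeholders, not numbers taken from the original calculation:

import numpy as np
from gpaw.response.df import DF

# Assumption: running on 8 cores with kcommsize=1, so size / kcommsize = 8
# and the frequency grid must contain a multiple of 8 points.
w = np.linspace(0.0, 24.0, 240)    # 240 frequency points: divisible by 8

# Placeholder q for the optical limit; the q used in the original post is not shown.
q = np.array([0.0001, 0.0, 0.0])

df = DF(calc='out.gpw',            # ground-state file from the original post
        q=q, w=w, eta=0.0001,
        hilbert_trans=False, txt='df_1.out',
        ecut=150, optical_limit=True,
        kcommsize=1)               # k-point communicator of size 1: all 8 cores
                                   # are then used to split the frequency points

df.get_macroscopic_dielectric_constant()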