Issues with vanishing decay rates · Issue #10 · hep-mh/acropolis · GitHub

Closed
tegonzalo opened this issue Nov 23, 2021 · 6 comments
Assignees: hep-mh
Labels: bug (Something isn't working)

Comments

@tegonzalo commented Nov 23, 2021

Occasionally, some decay rates vanish. For example, the Be7 + a -> He3 + He4 rate is exactly 0 at low temperatures for the following parameter point (using the decay model):

3.17367 1.60439e+10 10 5.7220383e-10 0 1

I don't know how sensible these values are, but they are not unique, and many such points are easily reached in large parameter scans.

These decay rates are so small that they fall below the precision limit and are effectively set to 0, which causes problems down the line when setting up the interpolators on a log scale, and acropolis fails with a NaN error. I don't think this is a conceptual problem: really small rates are possible as far as I can tell, so it should just be a computational question of how to handle them. Locally, I've regularized all zeroes in the grids to the minimum positive float value of the system, which is enough for the log interpolator to work fine. This is my suggested fix to acropolis/utils.py:

@@ -2,6 +2,8 @@
 from math import log, pow
 # numpy
 import numpy as np
+# sys
+import sys
 
 
 class LogInterp(object):
@@ -11,6 +13,9 @@ class LogInterp(object):
         self._sLogBase = log(self._sBase)
 
         self._sFillValue = fill_value
+ 
+        x_grid = [x if x > 0 else sys.float_info.min for x in x_grid]
+        y_grid = [y if y > 0 else sys.float_info.min for y in y_grid]
 
         self._sXLog = np.log(x_grid)/self._sLogBase
         self._sYLog = np.log(y_grid)/self._sLogBase
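
For context, here is a minimal, self-contained sketch of the failure mode and the workaround. The LogInterp class below is a stripped-down stand-in for the one in acropolis/utils.py, not the actual implementation:

```python
import sys
from math import log
import numpy as np

# Stripped-down stand-in for acropolis.utils.LogInterp: it interpolates
# log(y) against log(x), so an exact zero anywhere in the grids turns
# into -inf and poisons the interpolation with NaNs.
class LogInterp:
    def __init__(self, x_grid, y_grid, base=np.e):
        # Workaround from the patch above: clamp non-positive entries
        # to the smallest positive float before taking the logarithm.
        x_grid = [x if x > 0 else sys.float_info.min for x in x_grid]
        y_grid = [y if y > 0 else sys.float_info.min for y in y_grid]
        self._base  = base
        self._x_log = np.log(x_grid)/log(base)
        self._y_log = np.log(y_grid)/log(base)

    def __call__(self, x):
        y_log = np.interp(log(x)/log(self._base), self._x_log, self._y_log)
        return self._base**y_log

# A rate grid with an underflowed entry: without the clamping above,
# np.log(0.0) would yield -inf and the interpolated values would be NaN.
rates = LogInterp([1e-6, 1e-5, 1e-4], [1e-30, 0.0, 1e-10])
print(rates(2e-5))  # finite, albeit tiny, instead of NaN
```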

I hope this helps you with implementing a fix to this in this or any other way.

Cheers,
Tomas

@hep-mh (Owner) commented Nov 23, 2021

Hi Tomas,

thanks a lot for pointing this out!

Can you confirm that this issue mainly occurs for large lifetimes, i.e. tau > 1e10 s, corresponding to small temperatures T <~ 1e-5 MeV?

In our test scans we never went beyond 1e10 s, which is probably why we missed this issue. While your parameter point seems sensible, you have to be careful when approaching 1e12 s: at around this time, the thermalization of the injected particles with the background plasma is no longer efficient, meaning that one of our assumptions breaks down. However, in this region, CMB constraints should have you covered.

Also, I agree that this should mainly be a numerical problem, and setting the rates to some small value instead of zero is a very sensible thing to do. I will play around with this issue a bit and implement your fix soon.

@hep-mh hep-mh self-assigned this Nov 23, 2021
@hep-mh hep-mh added the bug label Nov 23, 2021
@tegonzalo (Author)

Can you confirm that this issue mainly occurs for large lifetimes, i.e. tau > 1e10 s, corresponding to small temperatures T <~ 1e-5 MeV?

The few points I've seen that hit this issue all had large lifetimes, but most points with large lifetimes won't. The problems seem to occur for quite fine-tuned cases: hard to reproduce in small-scale scans, but I've seen them in every large-scale scan I've done. If it's useful, I can try to find more of these problematic points.

@pstoecker

If I may also add my two cents.

I have found the same bug for different sets of inputs. For me the fatal input values were

./decay 3.1734 1.00763e+11 10 3.745e-17 0 1

and

./decay 3.17388 1.3924e+06 10 3.18269e-13 0 1

So it seems to be rather unrelated to the lifetime; it is more that the mass is so close to the threshold that the reaction rates are rendered that small.

I have actually found another way to fix it. My idea was to go closer to the root of the problem and set the reaction rate, calculated in _pdi_rates(self, T), back to approx_zero whenever the numerical result falls below that value, because that is what causes the reaction rate to be exactly zero.

The fix I had is the following:

diff --git a/acropolis/nucl.py b/acropolis/nucl.py
index 4543100..f2f4d77 100644
--- a/acropolis/nucl.py
+++ b/acropolis/nucl.py
@@ -411,8 +411,8 @@ class NuclearReactor(object):
             # Calculate the 'delta-term'
             I_dt = self._sS0[0](T)*self.get_cross_section(rid, self._sE0)/rate_photon_E0
 
-            # Add the delta term and save the result
-            pdi_rates[rid] = I_dt + I_Fs[0]
+            # Add the delta term, save the result, and make sure that it is at least 'approx_zero'.
+            pdi_rates[rid] = max(I_dt + I_Fs[0], approx_zero)
 
         # Go home and play
         return pdi_rates

Feel free to implement this fix, if you prefer it.
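
As a side note on the design choice: clamping at the source in _pdi_rates protects every downstream consumer of the rates at once, whereas the utils.py patch only guards the interpolator itself. Here is a minimal sketch of the pattern, where the value of approx_zero is a placeholder (the actual constant lives in the acropolis package and may differ):

```python
# 'approx_zero' stands in for the small positive floor used by
# acropolis; the value below is a placeholder for illustration only.
approx_zero = 1e-200

def clamp_rate(rate, floor=approx_zero):
    # Never let a numerically underflowed rate reach the log-scale
    # interpolators as an exact zero.
    return max(rate, floor)

print(clamp_rate(0.0))      # 1e-200 instead of 0.0
print(clamp_rate(3.2e-45))  # unchanged: already above the floor
```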

Best,
Patrick

@hep-mh (Owner) commented Nov 23, 2021

I was wondering why I never encountered this problem before, and after digging in the code I found out that I had already fixed it at some point. But I only ever pushed the changes to the dev branch (the implementation is very close to the one that Patrick suggested).

But even though the new v1.3 with the fix is not so far away, I will probably also push the change to v1.2.1, so that no one gets confused in the future.

So thanks again to both of you for pointing this out!

@pstoecker

I was wondering why I never encountered this problem before, and after digging in the code I found out that I had already fixed it at some point. But I only ever pushed the changes to the dev branch (the implementation is very close to the one that Patrick suggested).

I guess the reason might just be that you did a grid scan, and the spacing in the mass direction was such that you skipped the problematic values of the mass. From what I see, it seems to be a very fine-tuned region in mass.

But even though the new v1.3 with the fix is not so far away, I will probably also push the change to v1.2.1, so that no one gets confused in the future.

Does that mean that you will reassign the v1.2.1 tag? That would actually change the tarball that we download to install the code in GAMBIT, and it would also go against the principles of semantic versioning. Or do you just want to release a new version v1.2.2? Either way, it will be no problem for us to adapt our code and/or download script.

@hep-mh (Owner) commented Nov 25, 2021

I was thinking about simply pushing the changes without making a new version, maybe with a note on the website. But I will think about it a little more.

@hep-mh hep-mh closed this as completed Apr 6, 2022