JP-1557: Update master_background for NRS IFU pathloss #5125
Conversation
Codecov Report
```
@@            Coverage Diff            @@
##           master    #5125     +/-   ##
=========================================
  Coverage   53.00%   53.01%
=========================================
  Files         401      402       +1
  Lines       35628    35654      +26
  Branches     5521     5524       +3
=========================================
+ Hits        18885    18901      +16
- Misses      15603    15611       +8
- Partials     1140     1142       +2
```
Continue to review full report at Codecov.
Can you add or modify one of the unit tests for this module? You might have to put in some empty `pathloss_uniform` and `pathloss_point` arrays.
""" | ||
|
||
log.info('Applying point source pathloss updates to IFU background') | ||
input.data *= (input.pathloss_point / input.pathloss_uniform) |
What happens here if someone turns off the pathloss correction step? Might want a `try` block.
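A guard along those lines could look like the following sketch. The function name and the use of plain arrays (rather than the actual datamodel) are illustrative only, and it assumes the missing-array case surfaces as an `AttributeError` or `TypeError`:

```python
import logging

import numpy as np

log = logging.getLogger(__name__)


def apply_pathloss_updates(data, pathloss_point, pathloss_uniform):
    """Scale the 2D background by the point/uniform pathloss ratio.

    If the pathloss arrays are unusable (e.g. the pathloss step was
    turned off upstream), leave the background unchanged.
    """
    try:
        return data * (pathloss_point / pathloss_uniform)
    except (AttributeError, TypeError):
        log.warning('Pathloss arrays not available; skipping update')
        return data
```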
Good point. But that raises an issue I was having while trying to run my first test on this new code. I was using an old cal file that still had the old "pathloss" array, but not the new "pathloss_point" or "pathloss_uniform" arrays. I could NOT find a useful way to test for the true existence of the new arrays. Any use of `hasattr(model, 'pathloss_point')` always returns `True`, because it's defined in the schema; `model.pathloss_point.shape` returns a valid-looking `(2048, 2048)`; etc. About the only way I could tell whether the data were real or not was to see if the data array was filled with zeros (which for some kinds of corrections could be valid). How can you reliably test for the true existence of an attribute like this in our datamodels?!
That is quite irritating. @stscieisenhamer is there a trick for doing this?
Looks good. Just a small comment below.
```python
try:
    pl_point = input.getarray_noinit('pathloss_point')
except AttributeError:
    log.warning('Pathloss_point array not found in input')
    log.warning('Skipping pathloss background updates')
```
It might be good to set the step status keyword to "SKIPPED" in this block and the one below.
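A sketch of that suggestion, using a namespace object in place of the real datamodel; the `meta.cal_step.master_background` attribute name follows the usual cal_step convention but is an assumption here:

```python
import logging
from types import SimpleNamespace

import numpy as np

log = logging.getLogger(__name__)


def correct_or_skip(model):
    """Apply the pathloss ratio to the background data, or mark the
    step SKIPPED if the arrays are missing and return the input as-is."""
    try:
        model.data = model.data * (model.pathloss_point / model.pathloss_uniform)
        model.meta.cal_step.master_background = 'COMPLETE'
    except AttributeError:
        log.warning('Pathloss arrays not found; skipping background update')
        model.meta.cal_step.master_background = 'SKIPPED'
    return model


# Toy model with no pathloss arrays attached:
bare = SimpleNamespace(
    data=np.ones((2, 2)),
    meta=SimpleNamespace(cal_step=SimpleNamespace(master_background=None)),
)
correct_or_skip(bare)
print(bare.meta.cal_step.master_background)  # SKIPPED
```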
But the master background step itself is not being skipped in this case. Only the application of the corrections to the master bkg signal. Perhaps we should in fact skip the entire step if the corrections can't be applied.
Ah, right. I guess if the pathloss arrays are not there, the pathloss step was not run? Does that mean this is just fine as is?
That's a good point. I was thinking of the case where someone was using old input files that don't have the proper pathloss arrays attached, but this would actually properly handle the case of the pathloss step having been skipped (I think). In that case the master bkg should already match the actual bkg, so there's no need to remove the previously applied correction.
But overall it would perhaps be best to skip the whole step, so that we don't deliver improperly calibrated results from the master bkg step. If someone has turned off pathloss upstream, they're going to get bad calibration on several fronts (i.e. flux cal won't be right, ...)
Hmm, perhaps we are getting to the point where things need to be managed by the pipeline. If people decide to skip steps that are essential, then they do so at their own risk. Perhaps your original approach is correct: just return the input, throw a big warning in the log, and be done with it.
Updated the `expand_to_2d` module to call the new function `correct_nrs_ifu_bgk` when the science data is a point source, in order to apply the point source vs. uniform source pathloss mods to the computed 2D background, before subtracting from the science data.

Fixes #5104 / JP-1557