Adds test for example warnings and adds --undocumented flag #155
Conversation
Force-pushed from 70ee579 to 1e0bc9b
I like the --undocumented flag you've added. Everything looks good. I just wanted to know what exactly we are trying to achieve with the regression test here. Do we need to lock the content in? It seems we are trying to ensure consistency in which fields we expect to be undocumented in our NXDL. But this doesn't really test the tool's ability to figure out whether these fields are the ones we want to show as warnings. What this test really checks is whether our readers/NXDL still have the same undocumented combination as before. All I see this doing is adding another step to correct before we push commits. If we really want to enforce some sort of expected content from our NXDL/filled template, we can do that. But then why limit it to just the undocumented fields? I guess the other "nexus" test we had was already doing a content test on NXmpes. I would say we could instead make an example Template and NXDL, not related to any technique, where some fields in the Template are not documented in the accompanying NXDL. We can use NXtest.nxdl.xml here. Then we test for that extra list of added fields, so we know this tool correctly identifies undocumented fields. Am I missing something here? Probably you have a better idea of why such a test is necessary.
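A sketch of what such a technique-independent test could look like (all names below are illustrative placeholders, not the repository's actual helpers; a real test would fill a Template against NXtest.nxdl.xml and call the converter's own validation routine):

```python
# Minimal, self-contained sketch of the technique-independent check described
# above. The path strings and `find_undocumented` are illustrative placeholders.

def find_undocumented(template_paths, documented_paths):
    """Return the template paths that the accompanying NXDL does not document."""
    return sorted(set(template_paths) - set(documented_paths))


def test_undocumented_fields_are_detected():
    # Paths the (hypothetical) NXDL documents.
    documented = ["/ENTRY[entry]/definition", "/ENTRY[entry]/some_field"]
    # Template filled by a reader, with one extra path that is not documented.
    template = documented + ["/ENTRY[entry]/my_extra_field"]

    assert find_undocumented(template, documented) == ["/ENTRY[entry]/my_extra_field"]
```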
Yes, the background to the test is that we want to ensure consistency of the warnings we see in the mpes example. Lately we had the problem that changes to the base classes vanished and we got new undocumented warnings in the mpes example, which we only found once we ran the examples again.
Yes, we do have a test which generates a regression file and compares it to the current output. I agree that this already does what we want, but in the case above it was just regenerated and overwritten. This is often done because people don't really see what has changed, and with lots of tiny changes to other base classes at the NIAC pullback it made sense to just regenerate the regression file. If we want to invest a little more time in this, we could instead check the full template we generate for the example and verify it is what we expect (including what is undocumented, etc.). I just wanted to add this as a quick check so we don't run into the problems we had with the base class changes again.
That's not really what I want, because I want to specifically check the examples' output.
Let me also put my five cents here, as I was the one asking for this kind of test. A couple of times already we have had the situation that something was changed in either the code or the definitions that led to warnings popping up during conversion of the examples, which nobody noticed, because none of us actually ever runs these examples. As mentioned above, the typical practice with the existing tests is that the thousands-of-lines-long regression logs are just overwritten when a new version comes along, and nobody notices if, e.g., a NeXus field disappears.
As for the implementation, I would even do this at the level of capturing the output of the conversion process (either the command line or the Python function) and comparing it to a (very short) regression log. This way you also catch output from the reader itself, and any output from the writer that might change...
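A rough sketch of what that could look like with pytest (the conversion helper and the log path are invented placeholders for whatever the repository actually provides):

```python
# Rough sketch of the output-comparison idea: run the example conversion and
# diff its console output against a short, checked-in log.
from pathlib import Path

REGRESSION_LOG = Path("tests/data/mpes_example_output.log")  # invented path


def run_mpes_example():
    """Placeholder: invoke the real conversion entry point (CLI or Python API)."""


def test_mpes_example_output_is_stable(capsys):
    run_mpes_example()
    captured = capsys.readouterr()
    # A short, human-readable log keeps diffs reviewable, unlike the
    # thousands-of-lines regression files mentioned above.
    assert captured.out + captured.err == REGRESSION_LOG.read_text()
```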
I corrected the config_file (thanks for pointing it out!)
There were several reasons I did it like this.
Thank you for the explanation. I get the motivation for this test, and I think this is a good implementation. The regular reader test does check for the paths that are documented, so if anything is missing, that test will fail. And now this covers the willfully ignored undocumented paths. I would still recommend having a technique-independent test for the functionality that reports undocumented fields, but that's not necessarily in the scope of this PR.
Yes, I agree we should add such a test; I created #156 for this. @rettigl Are you happy with the way it is now? If you are ok with this, I would merge.
@rettigl I will merge this now. If you still prefer the log-file-based approach, I can add it in a separate PR.
This ensures that code changes do not alter the warnings generated when converting the example files.
Currently, this is only done for mpes, but the test is parametrized and can be extended to other readers as well. We could also add this for all readers and add a script that generates the keys as a regression file to compare against.
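A condensed sketch of how such a parametrized test could be laid out (apart from "mpes", the helper and the regression-file locations are illustrative, not the actual test code):

```python
# Sketch of extending the parametrized example test to further readers.
import json
from pathlib import Path

import pytest


def run_example_and_collect_undocumented(reader):
    """Placeholder: run the reader's example conversion and return its undocumented paths."""
    return []


@pytest.mark.parametrize("reader", ["mpes"])  # further readers could be added here
def test_example_undocumented_keys(reader):
    expected = json.loads(Path(f"tests/data/{reader}_undocumented.json").read_text())
    assert sorted(run_example_and_collect_undocumented(reader)) == sorted(expected)
```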
It adds an --undocumented flag, too. This enables displaying undocumented fields while writing the file. It also defaults to suppressing warnings due to undocumented fields, but maybe it's better to default to showing them?
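For illustration, a minimal sketch of how such an opt-in flag might be wired up with click; this is an assumption about the shape of the option, not the code from this PR:

```python
# Sketch only: an opt-in boolean flag that defaults to suppressing the
# undocumented-field warnings. Command and logger names are illustrative.
import logging

import click

logger = logging.getLogger(__name__)


@click.command()
@click.option("--undocumented", is_flag=True, default=False,
              help="Also print warnings for fields that are not documented in the NXDL.")
def convert(undocumented):
    """Run the conversion; warnings for undocumented fields are hidden by default."""
    if undocumented:
        logger.warning("The path /ENTRY[entry]/extra_field is not documented in the NXDL.")


if __name__ == "__main__":
    convert()
```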