Bug fixing #519
Conversation
@@ -364,7 +364,7 @@ def obj_df(dvs,config,state=None):
     # For multiple objectives are evaluated one-by-one rather than combined
     # MARKER_MONITORING should be updated to only include the marker for i_obj
     # For single objectives, multiple markers can be used
-    config['MARKER_MONITORING'] = marker_monitored[i_obj]
+    if (n_obj>1): config['MARKER_MONITORING'] = marker_monitored[i_obj]
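For context, the behavior of the one-line change can be sketched as a small standalone function. Only `config`, `marker_monitored`, `n_obj`, and `i_obj` come from the patch; the helper name and sample marker data below are illustrative, not SU2's actual code:

```python
# Illustrative sketch of the MARKER_MONITORING logic in the diff above.
# The helper name and sample data are hypothetical.
def update_marker_monitoring(config, marker_monitored, n_obj, i_obj):
    # Multiple objectives are evaluated one-by-one rather than combined,
    # so the config must be narrowed to the marker(s) of objective i_obj.
    # For a single objective, the full marker list is left in place so
    # that all monitored surfaces contribute to the coefficient.
    if n_obj > 1:
        config['MARKER_MONITORING'] = marker_monitored[i_obj]
    return config

# Single objective with two surfaces: the marker list is untouched.
cfg = {'MARKER_MONITORING': ['wing', 'tail']}
update_marker_monitoring(cfg, [['wing'], ['tail']], n_obj=1, i_obj=0)
print(cfg['MARKER_MONITORING'])  # -> ['wing', 'tail']

# Two objectives: only the marker for the current objective is kept.
cfg = {'MARKER_MONITORING': ['wing', 'tail']}
update_marker_monitoring(cfg, [['wing'], ['tail']], n_obj=2, i_obj=1)
print(cfg['MARKER_MONITORING'])  # -> ['tail']
```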
I think that the array of monitored markers is extended to accommodate this situation in config.py (~line 514), and there is a single objective-with-multiple-surfaces regression test; do you have a case you can send me where this wasn't working? Or if you see an issue with that test that I previously missed I wouldn't mind looking into it.
@fpalacios yes, I commented a bit too quickly; I do see the behavior that you mention. I think the line previously mentioned helps with getting the value of the objective but not the gradient.
While I was testing this on my end I also noticed that there is an issue if you list the same objective multiple times (for example, to weight the drag contribution from one surface more than another): since both entries share the same key, the earlier dict entry is overwritten. Do you mind if I add an error catch for that to your PR?
I am taking a look back through it but so far I think your solution will be good - and better to fix the performance in master now and worry about elegant improvements in the develop branch.
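The duplicate-objective problem described here comes down to a dict key collision. A minimal illustration (this is not SU2's actual parsing code; the names below are hypothetical):

```python
# Hypothetical illustration of the overwriting issue: objectives keyed
# by name in a plain dict silently drop repeated entries.
entries = [('DRAG', 0.7, 'wing'), ('DRAG', 0.3, 'tail')]

objectives = {}
for name, weight, marker in entries:
    objectives[name] = (weight, marker)  # second 'DRAG' overwrites the first

print(objectives)  # -> {'DRAG': (0.3, 'tail')}; the wing entry is lost

# The kind of error catch proposed above: fail loudly on duplicates
# instead of silently discarding one of the contributions.
def build_objectives(entries):
    objectives = {}
    for name, weight, marker in entries:
        if name in objectives:
            raise ValueError("Objective '%s' is listed more than once; "
                             "duplicate objective names are not supported."
                             % name)
        objectives[name] = (weight, marker)
    return objectives
```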
Thanks Heather,
Could you please take a look at the log_Adjoint.out file generated in the folder DESIGNS/DSN_001/ADJOINT_DRAG/ when you run the single objective-with-multiple-surfaces regression test (in the part where SU2 specifies the surface(s) where the force coefficients are evaluated)? It seems that the shape optimization Python script only copied the first surface to config_CFD.cfg.
I found the "if (n_obj>1):" solution… but you know the multi-objective implementation better. Just let me know if there is another, more elegant fix that we can implement.
Thanks!
Francisco
… On Mar 25, 2018, at 1:55 PM, Heather Kline ***@***.***> wrote:
Thanks! Please feel free to add anything you consider necessary to my PR. And… do not worry too much (or at all)... we are doing this for free and for fun! As you know, after a major release we always find small bugs of this kind. The big picture is that thanks to you we all enjoy an awesome multipoint optimization capability! Thanks!
Best,
Francisco
… On Mar 25, 2018, at 6:04 PM, Heather Kline ***@***.***> wrote:
Thanks for sorting this out @hlkline and @fpalacios. Once this and the other PR that I have open for "memory fixes" are ready and merged, why don't we put out a small maintenance release, v6.0.1?
Do you understand why two regression tests are failing with an exclamation mark?
… On Mar 26, 2018, at 11:11 AM, Thomas D. Economon ***@***.***> wrote:
Not sure. I am having the same issue on the memory fixes PR. It seems to hang around the Jones cases, but I have run valgrind and don't see any issues. Perhaps it is a problem with Python v3. I am working on this...
Ok, the tests are passing for this one now after the fix from the other PR. Anything else you would like to add? Otherwise, let's review and merge this next.
LGTM. I may add a small commit or two for other clean up, but we will merge this very soon. Thanks!
Thanks Tom, what was the problem with the regression test? I'm going to add some other small fixes.
… to .travis.yml to avoid time outs when downloading test cases.
@fpalacios: there were two main issues with the regressions. Both are timeout problems: Travis fails if it waits more than 10 minutes without any console output. The first was that we need to flush the output for Python 3 after each regression test; otherwise there is too long a gap between cases and sometimes the build fails. The second is that sometimes it takes too long to download the test cases, so I have added the keyword 'travis_wait 90' to the call to the entire Python script in .travis.yml in order to extend the 10-minute restriction to 90 minutes. We still need to reduce the overall time for the tests, but it seems to be stable for the time being.
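The flushing half of that fix can be sketched as follows (the function name is hypothetical; the actual change lives in the SU2 regression test scripts):

```python
# Sketch of the Python 3 output-flushing fix described above. When
# stdout is piped (as on Travis) Python 3 block-buffers it, so a long
# test can produce no visible console output for over 10 minutes and
# the CI job is killed. Flushing after each test keeps the log alive.
import sys

def report(message):
    print(message)
    sys.stdout.flush()  # force the line out to the CI console immediately
```

The download-timeout half is handled in .travis.yml by wrapping the test command with `travis_wait 90`, which extends Travis's no-output limit for that command (the exact command being wrapped in SU2's .travis.yml is not shown here).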
Thanks @economon, it was not an easy fix. There are still some problems with the memory deallocation. As soon as we fix those issues we should accept this pull request and create a new minor release, because some of the changes in this branch are really important.
Couldn’t agree more. We will release v6.0.1 right after this merge. Is it complete now? Or is there still a memory issue? I can help - please let me know.
Thanks, now it is working, but I don't like the solution: I have commented out the deallocation of CSurfaceMovement and CFreeFormDefBox in SU2_GEO.cpp. I have worked on this problem for a while without a satisfactory solution... To check the issue, just uncomment
// if (surface_movement != NULL) delete [] surface_movement;
// if (FFDBox != NULL) {
in SU2_GEO and execute SU2_GEO default.cfg. Please let me know if you want to work on this issue for a while, or whether we should postpone fixing this memory problem.
This pull request solves several issues:
Once this pull request is approved, the changes will be added to the master repository directly.