At SGA'18, Lorenzo Tamellini wanted to change the inner optimization method in the `AugmentedLagrangian` class (optimization module). The reason was that the default `AdaptiveGradientDescent` didn't converge well, as the optimum was near the boundary. He wanted to use `NelderMead` instead. This is currently not possible, so he had to hack it, i.e., create a new class.
It'd be nice if

1. it was possible to change the inner optimization method and
2. it was possible to access the inner history of $\mu$ parameters.

One could look at `MultiStart` to see a possible solution.
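To make the request concrete, here is a minimal, self-contained sketch of the idea: inject the inner unconstrained optimizer through the constructor (similar in spirit to how `MultiStart` wraps an optimizer) and record the $\mu$ values so they can be queried afterwards. All names and signatures below (`InnerOptimizer`, `AugmentedLagrangianSketch`, `getMuHistory`, ...) are simplified placeholders, not the actual SG++ API.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Placeholder interface for an unconstrained optimizer
// (stand-in for something like NelderMead or AdaptiveGradientDescent).
class InnerOptimizer {
 public:
  virtual ~InnerOptimizer() = default;
  // Minimizes f starting from x0 and returns the (approximate) minimizer.
  virtual std::vector<double> optimize(
      const std::function<double(const std::vector<double>&)>& f,
      const std::vector<double>& x0) = 0;
};

// Sketch of an augmented-Lagrangian-style wrapper that
// (1) takes the inner optimizer by reference instead of hard-coding it, and
// (2) records the penalty parameter mu after every outer iteration.
class AugmentedLagrangianSketch {
 public:
  explicit AugmentedLagrangianSketch(InnerOptimizer& innerOptimizer,
                                     double mu0 = 1.0, double muFactor = 2.0,
                                     std::size_t maxOuterIterations = 10)
      : innerOptimizer_(innerOptimizer),
        mu_(mu0),
        muFactor_(muFactor),
        maxOuterIterations_(maxOuterIterations) {}

  // Minimizes f subject to g(x) <= 0 (one inequality constraint, for brevity).
  std::vector<double> optimize(
      const std::function<double(const std::vector<double>&)>& f,
      const std::function<double(const std::vector<double>&)>& g,
      std::vector<double> x) {
    muHistory_.clear();

    for (std::size_t k = 0; k < maxOuterIterations_; ++k) {
      const double mu = mu_;
      // Simple quadratic penalty; a real augmented Lagrangian would also
      // carry Lagrange multiplier estimates.
      auto penalized = [&](const std::vector<double>& y) {
        const double violation = std::max(0.0, g(y));
        return f(y) + mu * violation * violation;
      };

      // Inner (unconstrained) solve with whichever optimizer was injected.
      x = innerOptimizer_.optimize(penalized, x);

      muHistory_.push_back(mu_);
      mu_ *= muFactor_;  // increase the penalty parameter (muFactor_ > 1)
    }

    return x;
  }

  // Requested accessor: history of mu over the outer iterations.
  const std::vector<double>& getMuHistory() const { return muHistory_; }

 private:
  InnerOptimizer& innerOptimizer_;
  double mu_;
  double muFactor_;
  std::size_t maxOuterIterations_;
  std::vector<double> muHistory_;
};
```

A caller would then pass, e.g., a Nelder-Mead implementation of `InnerOptimizer` to the constructor and read out `getMuHistory()` after `optimize()` returns, instead of having to subclass the optimizer.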
Additionally, Lorenzo said that one default parameter value (he didn't know which) was off, so that the parameter was never increased. However, I think I checked this some time ago and it should be correct (the parameters have been adapted from Toussaint's optimization script).
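For context (general textbook form, not necessarily the exact rule used in the implementation): the penalty parameter is typically increased by a constant factor after each outer iteration, $\mu_{k+1} = \gamma \, \mu_k$ with $\gamma > 1$, so a default of $\gamma \le 1$ would indeed mean the parameter never grows.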
Thanks to Lorenzo for the input.
In GitLab by @valentjn on Jul 28, 2018, 18:22