Performance stuff #416
One idea that might help a bit relates to how we skip move evaluation when nothing has changed since the last time we considered a client pair:

PyVRP/pyvrp/cpp/search/LocalSearch.cpp, lines 92 to 93 in 7604244

This works fine, but any change to the routes of a client pair forces re-evaluation. We might make this more efficient by taking into account the size of our move operators (e.g., 1, 2, 3, etc. nodes), and checking whether a recent modification affected nodes sufficiently near either client. I think I know how to implement this efficiently, but I'm lacking the time right now to pursue it. This will probably have to wait until PyVRP 0.8.0.
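A minimal sketch of this kind of skip filter, assuming a per-route modification counter and a per-pair last-tested counter. All names here (`SkipFilter`, `lastModified`, `shouldTest`, and so on) are illustrative, not PyVRP's actual API; the operator-size-aware refinement discussed above is not included.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hedged sketch: each route records when it was last modified, and we record
// per client pair when that pair was last tested. A pair is re-evaluated only
// if either route changed since the pair was last tested.
struct Route
{
    int lastModified = 0;
};

struct SkipFilter
{
    int numMoves = 0;  // global move counter, acts as a logical clock
    std::vector<std::vector<int>> lastTested;  // per (U, V) client pair

    SkipFilter(size_t numClients)
        : lastTested(numClients, std::vector<int>(numClients, -1))
    {
    }

    // Returns true if the pair (u, v) must be evaluated again.
    bool shouldTest(size_t u, size_t v, Route const &rU, Route const &rV)
    {
        int tested = lastTested[u][v];
        lastTested[u][v] = numMoves;
        return rU.lastModified > tested || rV.lastModified > tested;
    }

    // Called when a move involving these routes is applied.
    void applyMove(Route &rU, Route &rV)
    {
        ++numMoves;
        rU.lastModified = numMoves;
        rV.lastModified = numMoves;
    }
};
```

The first query for a pair always evaluates; subsequent queries skip until one of the two routes is modified again.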
I like the idea, but I'm not sure if this is correct. At least with time windows, removing a node towards the end of a route may create slack that enables inserting a customer early in the route where it would previously have incurred a large penalty, even if the affected parts of the route are far apart. This situation may be hypothetical, or may not happen frequently, so we could decide heuristically to skip those options. But that is a functional change to the algorithm, not a more efficient implementation with the same result. Or am I missing something?
Not really. This conclusion seems correct to me.
Much more aggressive caching is also in scope here. Routes are queried at least two orders of magnitude more often than they are changed. Every time something between two indices is calculated, can we somehow store the result? Is that worthwhile?
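One way to sketch this is a lazy per-segment cache: the first query for a segment computes and stores the result, later queries reuse it, and any route modification invalidates the cache. This is an illustration under assumed names (`CachedRoute`, `distBetween`, a simple per-leg distance array), not PyVRP's actual route representation.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Hedged sketch of lazy segment caching for a single route.
class CachedRoute
{
    std::vector<int> dist_;  // dist_[i] = distance from visit i to visit i + 1
    std::vector<std::vector<std::optional<int>>> cache_;  // cache_[start][end]

public:
    CachedRoute(std::vector<int> dist)
        : dist_(std::move(dist)),
          cache_(dist_.size() + 1,
                 std::vector<std::optional<int>>(dist_.size() + 1))
    {
    }

    // Distance along the route from visit 'start' to visit 'end'.
    int distBetween(size_t start, size_t end)
    {
        if (!cache_[start][end])  // compute once, reuse afterwards
        {
            int total = 0;
            for (size_t idx = start; idx != end; ++idx)
                total += dist_[idx];
            cache_[start][end] = total;
        }

        return *cache_[start][end];
    }

    // Any modification invalidates all cached segments of this route.
    void modify(size_t idx, int newDist)
    {
        dist_[idx] = newDist;
        for (auto &row : cache_)
            for (auto &entry : row)
                entry.reset();
    }
};
```

Whether this pays off depends on exactly the ratio mentioned above: the cache only helps if segments are re-queried many times between modifications, and the memory indirection is not free.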
Also relevant stuff that @leonlan and I discussed earlier today:
Leaving out some details, FILO uses a granular neighborhood that is 1) node-based and 2) dynamic.
I don't really understand yet what "node involved in an improving move" means, but these two changes will likely improve the solver. Implementation details:
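A minimal sketch of the dynamic, node-based part: each node keeps its own granular neighbourhood size, which shrinks when the node took part in an improving move (intensify) and grows when it did not (diversify). The names and the halving/doubling factors are assumptions for illustration, not FILO's or PyVRP's actual parameters.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hedged sketch of a per-node, dynamically sized granular neighbourhood.
struct DynamicNeighbourhood
{
    size_t minSize;
    size_t maxSize;
    std::vector<size_t> size;  // current neighbourhood size per node

    DynamicNeighbourhood(size_t numNodes, size_t minSize, size_t maxSize)
        : minSize(minSize), maxSize(maxSize), size(numNodes, minSize)
    {
    }

    // Shrink the node's neighbourhood after an improving move, grow it
    // otherwise, clamped to [minSize, maxSize].
    void update(size_t node, bool improved)
    {
        if (improved)
            size[node] = std::max(minSize, size[node] / 2);
        else
            size[node] = std::min(maxSize, 2 * size[node]);
    }
};
```

The effect is that nodes in "hot" regions of the solution are searched cheaply with small neighbourhoods, while stagnating nodes gradually widen their search.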
I'll have a look at this sometime soon.
I did this a few months ago, and concluded that it doesn't matter much.
Also tried this, and the extra indirection of fetching data from memory instead of just recomputing it as needed seems to negate any potential gains here.
Have we ever tried caching
I have tried doing that lazily (cache when requested, re-use for future requests). That did not help much for the instances I tested. For instances with very long routes it might be helpful, but I don't know if it's worth it to us. The concatenation of
Now with #821, I don't have high hopes we can massively improve that 0.05%, but I do think we can reduce the number of evaluations quite a bit by being a little smarter. Right now, for example, we reset the last modified and last tested numbers on each call. If we can somehow maintain these numbers between calls, we should be able to filter out a lot of pointless move evaluations.
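A sketch of what "maintaining the numbers between calls" could look like: one monotone global clock that is never reset, with per-route counters bumped only for routes a perturbation actually touched. All names (`PersistentClock`, `touch`, `needsTest`) are hypothetical, not PyVRP's actual implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hedged sketch: the clock persists across local search calls, so pairs whose
// routes were untouched by the perturbation are skipped in the next call too.
struct PersistentClock
{
    int time = 0;
    std::vector<int> routeModified;  // per route
    std::vector<int> pairTested;     // flattened (u, v) matrix
    size_t numClients;

    PersistentClock(size_t numRoutes, size_t numClients)
        : routeModified(numRoutes, 0),
          pairTested(numClients * numClients, -1),
          numClients(numClients)
    {
    }

    // Called when a perturbation or accepted move changes a route.
    void touch(size_t route) { routeModified[route] = ++time; }

    // True if pair (u, v), currently in routes (rU, rV), needs re-evaluation.
    bool needsTest(size_t u, size_t v, size_t rU, size_t rV)
    {
        int &tested = pairTested[u * numClients + v];
        bool stale = routeModified[rU] > tested || routeModified[rV] > tested;
        tested = time;
        return stale;
    }
};
```

Because nothing is reset at the start of a call, only pairs whose routes were touched since their last test are evaluated, regardless of how many local search calls have passed in between.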
Discussion thread for ideas to improve PyVRP's performance. Could be low-level stuff, like better code optimisation, smaller data types, and/or caching. Or a smarter data structure, better algorithm, etc. Anything's in scope.
Edit: there's also perhaps some stuff in #181 that we might want to implement as part of this.