Tail (-t) breaks if AWS spends more than 5 seconds to start the stack. #515
Hey @Evnsan - sorry for the delay in getting back to you; re:Invent plus the holidays took up a lot of my focus. This does seem like an issue, though honestly I'm not sure what the solution is. We could increase the number of retries it allows, though I'm not sure what the right balance is. To be honest, we rarely (hardly ever) use it.
I use it all the time. It is very nice to see the events as they flow past. My 2c: retry every 5 seconds instead of every 1, and keep the number of retries at 5. I think the 1-second loop is unnecessary, and there's no need to spend your API limits on that.

On a related note, this exception now occurs (as of 1.2.0) when a stack is in a completed teardown state. So, if a stack is skipped because it didn't exist, the exception will show up. If you just reach the end of the tail on a destroy operation, the exception will show up. I've been in the code trying to figure out where to catch these issues, but no joy yet. Hopefully someone with deeper knowledge of the code can work it out pretty quickly. I think I'm going to have to downgrade for now.
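For concreteness, here is a minimal sketch of that suggestion: keep five attempts, but sleep 5 seconds between them so the stack has roughly 25 seconds to appear instead of ~5. It assumes boto3's CloudFormation `describe_stack_events` call; the function and constant names are illustrative, not stacker's actual API.

```python
import time

import boto3
import botocore.exceptions

MAX_ATTEMPTS = 5     # keep the attempt count at 5
SLEEP_SECONDS = 5    # but wait 5s between attempts instead of 1s

def wait_for_stack_events(stack_name):
    """Poll describe_stack_events until the stack exists or attempts run out."""
    cfn = boto3.client("cloudformation")
    for attempt in range(MAX_ATTEMPTS):
        try:
            return cfn.describe_stack_events(StackName=stack_name)
        except botocore.exceptions.ClientError as e:
            # CloudFormation raises a ValidationError ("Stack ... does not
            # exist") until the stack has actually been created.
            if "does not exist" not in str(e):
                raise
            time.sleep(SLEEP_SECONDS)
    raise RuntimeError("stack %s did not appear after %d attempts"
                       % (stack_name, MAX_ATTEMPTS))
```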
@ajk8 can you give 1.3 a try and let me know if you see the same issue? A lot of changes have been made around this code, so hopefully you won't run into it as much anymore.
This appears to still be an issue in |
* Get rid of recursion for tail retries. Also, extend the way we retry/timeout; that should work around #515. Not sure of a great way to test this unfortunately.
* Add some tests
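The change described in that commit (dropping recursion and bounding retries by elapsed time rather than a fixed attempt count) might look roughly like the following. The function name, the `get_events` callable, and the 60-second default are assumptions for illustration, not the actual patch.

```python
import time

def tail_with_timeout(get_events, stack_name, timeout=60, poll_interval=5):
    """Call get_events(stack_name) in a flat loop until it succeeds or
    `timeout` seconds have elapsed, instead of recursing once per retry."""
    deadline = time.time() + timeout
    while True:
        try:
            return get_events(stack_name)
        except Exception as e:
            # Only keep waiting while the stack simply hasn't appeared yet.
            if "does not exist" not in str(e) or time.time() >= deadline:
                raise
            time.sleep(poll_interval)
```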
Hey folks, I am facing this problem when trying to deploy a stack with an ASG and CodeDeploy. Do you think it's a big deal?
stacker/providers/aws/default.py:
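The snippet or traceback that followed this path is not preserved here. As a rough illustration of the failure mode in the title (not stacker's actual code): a recursive retry that sleeps 1 second and allows 5 attempts gives up after roughly 5 seconds, so any stack that CloudFormation takes longer than that to create makes the tail exit with a "Stack ... does not exist" error.

```python
import time

import boto3
import botocore.exceptions

def tail(stack_name, retries=5, sleep_time=1):
    # Hypothetical recursive retry: only ~5 seconds of total slack before
    # the "does not exist" error escapes and the tail breaks.
    cfn = boto3.client("cloudformation")
    try:
        return cfn.describe_stack_events(StackName=stack_name)
    except botocore.exceptions.ClientError as e:
        if "does not exist" in str(e) and retries > 0:
            time.sleep(sleep_time)
            return tail(stack_name, retries - 1, sleep_time)
        raise
```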