Node: Buffer log messages #4294
Conversation
I had some questions about the error-handling and early-cancel cases, and also left some suggestions about comments and variable names. Overall, I think the mechanism makes sense.
Testing
Here is how I tested this PR.

Testing code
Added the following code to the EVM watcher to generate lots of logs.
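The snippet itself isn't reproduced here; a minimal sketch of that kind of log flood, assuming the guardian's zap logger (the loop body and message are illustrative, not the PR's actual test code), might look like:

```go
package main

import (
	"go.uber.org/zap"
)

func main() {
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	// Emit a burst of log lines so the telemetry channel backs up faster
	// than Loki can drain it. In the PR this ran inside the EVM watcher.
	for i := 0; i < 100000; i++ {
		logger.Info("telemetry load test", zap.Int("seq", i))
	}
}
```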
I think there are a few edge cases worth considering and handling here.
The changes to the log messages and so on could be done in a separate follow-up PR, but the control-flow suggestions should be resolved before merging.
This PR addresses an issue where log messages published to telemetry were being dropped because the internal channel used to write to Loki is unbuffered (and not configurable). A recent PR made the writes to that channel non-blocking because a Grafana outage had caused the guardian to hang on logging.
This PR adds a buffered channel and a goroutine that reads from that channel and writes to the Loki channel in a blocking fashion.
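A minimal sketch of that mechanism, with hypothetical names, types, and buffer size (the PR's actual code differs):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// bufferedWriter decouples callers from a slow, unbuffered sink, which
// here stands in for the Loki write channel.
type bufferedWriter struct {
	buf  chan []byte
	sink chan<- []byte // the unbuffered Loki channel
}

func newBufferedWriter(ctx context.Context, sink chan<- []byte, size int) *bufferedWriter {
	w := &bufferedWriter{buf: make(chan []byte, size), sink: sink}
	// Drain goroutine: reads from the buffer and writes to the sink in a
	// blocking fashion, so a slow sink backs up the buffer, not the caller.
	go func() {
		for {
			select {
			case <-ctx.Done():
				return
			case msg := <-w.buf:
				select {
				case <-ctx.Done():
					return
				case w.sink <- msg:
				}
			}
		}
	}()
	return w
}

// Write is non-blocking: if the buffer is full, the message is dropped
// instead of stalling the caller's logging path.
func (w *bufferedWriter) Write(msg []byte) {
	select {
	case w.buf <- msg:
	default:
		// Buffer full; drop the message (a real implementation might
		// count drops).
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	sink := make(chan []byte) // unbuffered, like the Loki channel
	w := newBufferedWriter(ctx, sink, 100)

	// Simulate a slow Loki consumer.
	go func() {
		for msg := range sink {
			time.Sleep(10 * time.Millisecond)
			fmt.Printf("shipped: %s\n", msg)
		}
	}()

	for i := 0; i < 10; i++ {
		w.Write([]byte(fmt.Sprintf("log line %d", i)))
	}
	time.Sleep(500 * time.Millisecond) // let the drain goroutine finish
}
```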
See the comment below for the tests that were executed to verify this PR.