Refactor metrics, add tests #22
Conversation
Force-pushed from 39546b9 to 4d0a0f8
@@ -11,7 +11,7 @@
</head>
<body>
<h1 class="text-center">kube-applier</h1>
{{ if .Start }}
{{ if .CommitHash }}
This fixes an unrelated bug. The idea is to show a temporary loading page before the first run results come in. Unfortunately the "Start" field starts with a truthy value. CommitHash starts as empty string, which is falsy.
Thanks for mentioning that.
metrics/prometheus_test.go
Outdated
// Note that filenames are reused in order to ensure that the metrics update iteratively.
func TestPrometheusProcessResult(t *testing.T) {
	assert := assert.New(t)
	runMetrics := make(chan run.Result, 5)
Instead of having one long function doing all the test cases, can you split this into multiple functions?
Are you thinking something like this?
TestPrometheusProcessResult() {
testCase1()
testCase2()
testCase3()
...
}
Along those lines.
I imagine you would need to pass a bunch of parameters too.
And also better names.
TestPrometheusProcessResult() {
testFullRunWithSuccessOnly(param1, param2, param3)
testQuickRunMultipleSuccessMultipleFailure(param1, param2, param3)
...
}
What do you think?
Or maybe something like this
func testFullRunWithSuccessAndFailure(t *testing.T, successCount, failureCount int, .....) {
.....
}
TestPrometheusProcessResult() {
testFullRunWithSuccessAndFailure(t, 0, 0)
testFullRunWithSuccessAndFailure(t, 3, 5)
...
}
If this makes sense for your test cases.
run/result.go
Outdated
@@ -10,6 +10,7 @@ import (
// The functions associated with Result convert raw data into the desired formats for insertion into the status page template.
type Result struct {
	RunID int
	FullRun bool
Did you consider using an enumeration?
type RunType string
const (
FullRun RunType = "FullRun"
PartialRun RunType = "PartialRun"
)
Then the field can be called
RunType RunType
The boolean makes the code more difficult to read and is more tedious to extend.
I like this rule very much.
https://en.wikipedia.org/wiki/Zero_one_infinity_rule
I agree with your suggestion. I will adjust the implementation.
run/runner.go
Outdated
@@ -79,7 +80,7 @@ func (r *Runner) fullRun(id int) (*Result, error) {
	return nil, err
}
log.Printf("RUN %v: Starting full run with hash %v", id, hash)
result, err := r.run(id, rawList, hash)
result, err := r.run(id, true, rawList, hash)
For example, while reading this code, I would have to jump back to the definition to decipher what the "true" means.
metrics/prometheus_test.go
Outdated
	"time"
)

type testCase struct {
I love this test. Thanks for adding it.
@ahakanbaba I am a little unsure about splitting the test cases into separate functions because of the variation between the cases and the importance of their sequencing. I think I have cleaned up the test cases in a way that makes them more readable. Let me know what you think.
metrics/prometheus.go
Outdated
@@ -42,7 +42,7 @@ func (p *Prometheus) Configure() {
// Result: true if the run was successful, false otherwise
"success",
// True if full run, false if quick run
The comment needs to be updated.
metrics/prometheus.go
Outdated
@@ -42,7 +42,7 @@ func (p *Prometheus) Configure() {
// Result: true if the run was successful, false otherwise
"success",
// True if full run, false if quick run
"full_run",
"run_type",
Are you sure you want to use snake case?
Can't you use lowerCamelCase or just runtype?
Hmmm... I was just following the naming conventions for the metrics that ship with Prometheus by default. But those conventions seem to be for the metrics themselves; I can't find a convention for labels: https://prometheus.io/docs/practices/naming/.
Also, doing some more research into Prometheus conventions makes me suspect we may not be using labels idiomatically, but I don't want to worry too much about that right now.
},
},
// Case 4: Successes, failures, quick run
{
It is definitely more readable like this.
Are the test cases independent?
If I comment out cases 1-2, can I run case 3 without changing its behavior?
That would be very user-friendly.
Unfortunately they are quite dependent at the moment, although this is by design. The test cases reuse the same file names to check that the values are updated properly (e.g. after the first run, check that the count is 1; after the second run, check that the count is 2; etc.).
However, this might be leaning towards unnecessary testing of the Prometheus library. Perhaps we just care that the initial metrics are created with the names we expect; in that case, we would be able to make the test cases independent.
run/runner_test.go
Outdated
@@ -302,7 +302,7 @@ func TestRunnerStartQuickLoop(t *testing.T) {
)
expectedResult = Result{
	2,
	false,
	QuickRun,
Looks much better.
Thanks for the responses. Looks good to me as is.
Force-pushed from 0b7e72f to 687fa05
Move the log output to std err
@ahakanbaba @ghodss @huggsboson
Refactor the metrics component so that it receives run results on a channel, rather than being passed around the program and having its functions called directly. This also eliminates the need to mock the metrics component. The tests in the run pkg are updated accordingly.
Tag run latency metric with run type (full vs quick).
Add tests for metrics. The tests create fake run results, pass them to the metrics component, and then query the metrics page handler. We then use a regex to parse the page output and verify that the metrics values are updated as expected. I could continue adding more test cases, but let me know what you think of the approach.