Refactor metrics, add tests by gregory-lyons · Pull Request #22 · box/kube-applier · GitHub

Refactor metrics, add tests #22


Merged
merged 1 commit into master on Aug 1, 2017

Conversation

gregory-lyons
Contributor
@gregory-lyons gregory-lyons commented Jul 29, 2017

@ahakanbaba @ghodss @huggsboson

  1. Refactor the metrics component so that it receives run results on a channel, rather than passing the metrics component around the program and calling its functions directly. This also eliminates the need for mocking the metrics component. The tests in the run pkg are updated accordingly.

  2. Tag run latency metric with run type (full vs quick).

  3. Add tests for metrics. The tests create fake run results, pass them to the metrics component, and then query the metrics page handler. We use a regex to parse the page output and verify that the metric values are updated as expected. I could continue adding more test cases, but let me know what you think of the approach.

@@ -11,7 +11,7 @@
</head>
<body>
<h1 class="text-center">kube-applier</h1>
{{ if .Start }}
{{ if .CommitHash }}
Contributor Author

This fixes an unrelated bug. The idea is to show a temporary loading page before the first run results come in. Unfortunately the "Start" field starts with a truthy value. CommitHash starts as empty string, which is falsy.

Contributor

Thanks for mentioning that.

// Note that filenames are reused in order to ensure that the metrics update iteratively.
func TestPrometheusProcessResult(t *testing.T) {
assert := assert.New(t)
runMetrics := make(chan run.Result, 5)
Contributor

Instead of having one long function doing all the test cases, can you split this into multiple functions?

Contributor Author
@gregory-lyons gregory-lyons Jul 31, 2017

Are you thinking something like this?

TestPrometheusProcessResult() {
    testCase1()
    testCase2()
    testCase3()
    ...
}

Contributor

Along those lines.
I imagine you would need to pass a bunch of parameters too.
And also better names.

TestPrometheusProcessResult() {
    testFullRunWithSuccessOnly(param1, param2, param3)
    testQuickRunMultipleSuccessMultipleFailure(param1, param2, param3)
    ....
    ...
}

What do you think ?

Contributor

Or maybe something like this


func testFullRunWithSuccessAndFailure(t *testing.T, successCount, failureCount int, .....) {
.....
}

TestPrometheusProcessResult() {
    testFullRunWithSuccessAndFailure(t, 0, 0)
    testFullRunWithSuccessAndFailure(t, 3, 5)
    ...
    ...
}

If this makes sense for your test cases.

run/result.go Outdated
@@ -10,6 +10,7 @@ import (
// The functions associated with Result convert raw data into the desired formats for insertion into the status page template.
type Result struct {
RunID int
FullRun bool
Contributor

Did you consider using an enumeration?

type RunType string

const (
        FullRun    RunType = "FullRun"
        PartialRun RunType = "PartialRun"
)

Then the field can be called
RunType RunType

The boolean makes the code more difficult to read and is more tedious to extend.
I like this rule very much.
https://en.wikipedia.org/wiki/Zero_one_infinity_rule

Contributor Author

I agree with your suggestion. I will adjust the implementation.

run/runner.go Outdated
@@ -79,7 +80,7 @@ func (r *Runner) fullRun(id int) (*Result, error) {
return nil, err
}
log.Printf("RUN %v: Starting full run with hash %v", id, hash)
result, err := r.run(id, rawList, hash)
result, err := r.run(id, true, rawList, hash)
Contributor

For example, while reading this code I would have to jump back to the definition to decipher what the "true" means.


"time"
)

type testCase struct {
Contributor

I love this test. Thanks for adding it.

@gregory-lyons
Contributor Author
gregory-lyons commented Jul 31, 2017

@ahakanbaba I am a little unsure about splitting up the test cases into separate functions because of the variation in the cases and the importance of their sequencing. I think I have cleaned up the test cases in a way that makes them more readable. Let me know what you think.

@@ -42,7 +42,7 @@ func (p *Prometheus) Configure() {
// Result: true if the run was successful, false otherwise
"success",
// True if full run, false if quick run
Contributor

The comment needs to be updated.

@@ -42,7 +42,7 @@ func (p *Prometheus) Configure() {
// Result: true if the run was successful, false otherwise
"success",
// True if full run, false if quick run
"full_run",
"run_type",
Contributor

Are you sure you want to use snake case?
Can't you use lowerCamelCase or just runtype?

Contributor Author

Hmmm... I was just following the naming conventions for the metrics that come with Prometheus by default. But those conventions seem to apply to the metric names themselves; I can't find a convention for labels: https://prometheus.io/docs/practices/naming/

Also, doing some more research into Prometheus conventions makes me suspect that we may not be using labels idiomatically, but I don't want to worry too much about that right now.

},
},
// Case 4: Successes, failures, quick run
{
Contributor

It is definitely more readable like this.
Are the test cases independent? If I comment out cases 1 and 2, can I run case 3 without changing its behavior?

That would be very user friendly.

Contributor Author

Unfortunately they are quite dependent at the moment, although this is by design. The test cases use the same file names to check that the values are updated properly (e.g. after the first run, check that the count is 1; after the second run, check that the count is 2; and so on).

Contributor Author

However, this might be leaning towards unnecessary testing of the Prometheus library. Perhaps we just care that the initial metrics are created with the names we expect. In that case, we would be able to make the test cases independent.

@@ -302,7 +302,7 @@ func TestRunnerStartQuickLoop(t *testing.T) {
)
expectedResult = Result{
2,
false,
QuickRun,
Contributor

Looks much better.

@ahakanbaba
Contributor

Thanks for the responses. Looks good to me as is.

@gregory-lyons gregory-lyons merged commit 6ebfe2d into master Aug 1, 2017
brengarajalu pushed a commit to brengarajalu/kube-applier that referenced this pull request Mar 8, 2019
Move the log output to std err