Project quality tests #9400
base: develop
Conversation
…zm/project-quality-server
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##           develop    #9400      +/-   ##
===========================================
- Coverage    73.51%   73.47%   -0.04%
===========================================
  Files          437      438       +1
  Lines        46331    46368      +37
  Branches      3935     3936       +1
===========================================
+ Hits         34061    34071      +10
- Misses       12270    12297      +27
    samples = tasks
    is_staff = is_task_staff
else:
    assert False
It's unclear what the reason for a test failure would be in this case. It's also better to raise a specific exception class with a message, so it can be correctly presented in a test report.
Also, if the filter test parameter can have two values, then I don't think you should assert it.
Would something like
assert False, "reason"
be enough for the report?
It's a safety fallback pattern, used to protect code that checks specific undocumented values from being changed in only one place. In this specific case it could be replaced with NotImplementedError without losing meaning. In the end, we just need the code to fail with any error to check the test.
If you want to introduce a specific error classification for test failures, it would be better to discuss the use cases for this first.
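For reference, a minimal sketch of that fallback shape with NotImplementedError (the target parameter name and the project branch are illustrative, not the exact PR code):

if target == "task":
    samples = tasks
    is_staff = is_task_staff
elif target == "project":
    samples = projects
    is_staff = is_project_staff
else:
    # Fail loudly if a new parametrized value appears but is not handled above
    raise NotImplementedError(f"Unknown target: {target}")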
else:
    return super()._get_field_samples(field)

@pytest.mark.parametrize(
There is a field parameter, but I am not sure if it's used anywhere.
It's used in the parent class.
):
    # Find a project where the user doesn't have permissions
    non_admin_user = next(
        u["username"] for u in users if not u["is_superuser"] and u["username"] != self.user
I see that you've written exactly the same code above, so I think it's better to have a fixture for it. Another problem is test data resolution during a test run. As I understand it, there is a pool of predefined users (or tasks/projects/jobs), and this code tries to find one that meets specific criteria. It's common practice to have a predefined user for a particular test case, so a test does not need to search for its test data because it knows exactly where it is. Again, this can be done via fixtures, e.g. something like the sketch below.
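A rough sketch of such a fixture (the fixture name is illustrative, and it assumes the existing users fixture from the shared assets):

import pytest

@pytest.fixture
def non_admin_user(users):
    # Pick one predefined non-superuser from the shared assets once,
    # instead of re-filtering the user list inside every test
    return next(u["username"] for u in users if not u["is_superuser"])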
with make_api_client(user) as api_client:
    (_, response) = api_client.quality_api.create_report(
-       quality_report_create_request=models.QualityReportCreateRequest(task_id=task_id),
+       quality_report_create_request=models.QualityReportCreateRequest(
Just a small suggestion to improve readability and maintainability. Instead of unpacking conditional dicts inline like this:
quality_report_create_request=models.QualityReportCreateRequest(
**{"task_id": task_id} if task_id else {},
**{"project_id": project_id} if project_id else {},
),
…it would be cleaner and more readable to build the parameters explicitly before passing them to the model:
params = {}
if task_id:
params["task_id"] = task_id
if project_id:
params["project_id"] = project_id
quality_report_create_request = models.QualityReportCreateRequest(**params)
This version avoids nested unpacking, is easier to follow, and scales better if more optional fields are added later.
)
api_client.jobs_api.update_annotations(
    job.id,
    job_annotations_update_request=dict(
In my opinion it's better to use {} over dict() constructors because:
- it is clearer and easier to read
- it has better performance
- it is consistent with the other code (you use [], not list(), and (,), not tuple())
assert report["summary"]["conflict_count"] == 0
assert report["summary"]["valid_count"] == 2
assert report["summary"]["total_count"] == 2
def test_project_report_aggregates_nested_task_reports(self, quality_reports, tasks, jobs):
Is it correct that the test does not make any API call? To me it seems that the test does its verifications against static files.
It verifies the static files, because they represent what the server generates in this situation. In other tests we check that these fixtures are relevant and match current server outputs.
)
@pytest.mark.usefixtures("restore_db_per_function")
Is there any specific reason why these tests need a clean state of the database?
These tests modify the DB state - they might create some objects or change others. By default, we mark such tests with this fixture. This helps to avoid unexpected problems by making the environment clean for each test.
for r in [report, report_data]:
    # Verify report was created
    assert r["project_id"] == project_id
    assert r.get("task_id") is None
assert "task_id" not in r
is more readable IMO
The suggested variant unnecessarily restricts the cases where this key is present and equal to None. The current implementation just ensures that this field is not informative.
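To illustrate the difference (a toy payload, not an actual server response):

r = {"project_id": 1, "task_id": None}
assert r.get("task_id") is None  # passes: the field carries no information
assert "task_id" not in r        # fails: the key is present, just set to None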
project = next(p for p in projects if p["tasks"]["count"] == 0)
project_id = project["id"]

self.create_quality_report(user=admin_user, project_id=project_id)
self.create_quality_report does not check that the report was created; it only verifies that the response has a JSON-encoded body and a 200 code.
def test_user_create_project_report_in_sandbox(self, is_staff, allow, find_sandbox_project):
    project, user = find_sandbox_project(is_staff)

    if allow:
It's better to test OPA policies at the unit level: https://www.openpolicyagent.org/docs/policy-testing. Currently, if you want to achieve full coverage at the integration level, you'd need to implement too many test cases, which is very inefficient.
project = next(
    p
    for p in projects
    if p["organization"] is None
    if not users[p["owner"]["id"]]["is_superuser"]
    if any(l for l in labels if l.get("project_id") == p["id"])
    if any(t for t in tasks if t["project_id"] == p["id"] and t["size"])
    if (
        has_gt_jobs is None
        or has_gt_jobs
        == any(t["validation_mode"] for t in tasks if t["project_id"] == p["id"])
    )
)
To add to possible issues with test/data coupling: what if this throws StopIteration? There should be a guarantee that this would not happen, e.g. by taking the required data from a predefined object/.json/fixture that is guaranteed to have at least 1 member, instead of dynamically filtering. Otherwise, we would split hairs debugging this.
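For illustration, one way to make such a failure explicit rather than an opaque StopIteration (just a sketch, not a required change):

project = next(
    (p for p in projects if p["organization"] is None),
    None,
)
assert project is not None, "No suitable project found in the shared test assets"

The same filtering conditions as above would still apply; only the failure mode changes.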
There should be a guarantee that this would not happen

Why? Basically, we have all the test assets in the shared/assets/ directory. They are managed and updated as required. Most of them have at least 1 test relying on them.
If the relevant entry is not found, the test requesting it fails. It doesn't happen randomly. Then this is troubleshot: either a test asset is added to the test DB, or an existing one is modified in the fixture. Each case is individual.
Motivation and context
Depends on #9405
Added tests for changes in #9116: the GET api/quality/reports ?job_id= and ?task_id= filters now all return 403 when used with an inaccessible object.
How has this been tested?
Checklist
- I submit my changes into the develop branch
License
Feel free to contact the maintainers if that's a concern.