Add `bermuda` backend for faster triangulation by Czaki · Pull Request #7747 · napari/napari · GitHub

Add bermuda backend for faster triangulation #7747


Merged: 55 commits into napari:main on Apr 23, 2025

Conversation

@Czaki (Collaborator) commented Mar 26, 2025

References and relevant issues

Description

Add `bermuda` as the next backend for polygon triangulation.

Introduce a mechanism for selecting the triangulation backend at runtime.
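The runtime-selection mechanism could look roughly like the sketch below: honor an explicitly chosen backend, otherwise fall through the compiled options to pure Python. All names here (`TriangulationBackend`, `select_backend`, the probed module names) are illustrative assumptions, not napari's actual API.

```python
# Hypothetical sketch of runtime triangulation-backend selection.
# Enum members and module names are illustrative, not napari's real API.
import importlib.util
from enum import Enum, auto


class TriangulationBackend(Enum):
    fastest_available = auto()
    bermuda = auto()
    partsegcore = auto()
    triangle = auto()
    numba = auto()
    pure_python = auto()


# Preference order when the user asks for the fastest available backend.
_PRIORITY = [
    ('bermuda', TriangulationBackend.bermuda),
    ('PartSegCore_compiled_backend', TriangulationBackend.partsegcore),
    ('triangle', TriangulationBackend.triangle),
    ('numba', TriangulationBackend.numba),
]


def select_backend(preferred: TriangulationBackend) -> TriangulationBackend:
    """Return the preferred backend, or the fastest importable one."""
    if preferred is not TriangulationBackend.fastest_available:
        return preferred
    for module_name, backend in _PRIORITY:
        # Check availability without importing the (possibly heavy) module.
        if importlib.util.find_spec(module_name) is not None:
            return backend
    # Pure Python always works, so it is the final fallback.
    return TriangulationBackend.pure_python
```

An explicit choice is returned unchanged, so users can pin a backend for debugging even when a faster one is installed.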

@github-actions github-actions bot added tests Something related to our tests preferences labels Mar 26, 2025
@Czaki Czaki added the run-benchmarks Add this label to trigger a full benchmark run in a PR label Mar 26, 2025
@Czaki Czaki removed the run-benchmarks Add this label to trigger a full benchmark run in a PR label Mar 26, 2025
github-actions bot (Contributor) commented:

The Qt benchmark run requested by PR #7747 (d7ed4e2 vs 6f702fe) has finished with status 'cancelled'. See the CI logs and artifacts for further details.
The non-Qt benchmark run requested by PR #7747 (d7ed4e2 vs 6f702fe) has finished with status 'cancelled'. See the CI logs and artifacts for further details.

codecov bot commented Mar 26, 2025

Codecov Report

Attention: Patch coverage is 87.20379% with 54 lines in your changes missing coverage. Please review.

Project coverage is 92.93%. Comparing base (f71b773) to head (e9b12d5).
Report is 5 commits behind head on main.

Files with missing lines | Patch % | Lines
...i/layers/shapes/_accelerated_triangulate_python.py | 89.32% | 19 Missing ⚠️
napari/layers/shapes/_shapes_models/shape.py | 79.51% | 17 Missing ⚠️
...layers/shapes/_accelerated_triangulate_dispatch.py | 75.51% | 12 Missing ⚠️
napari/utils/triangulation_backend.py | 81.81% | 6 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #7747      +/-   ##
==========================================
- Coverage   92.98%   92.93%   -0.05%     
==========================================
  Files         635      637       +2     
  Lines       59857    60023     +166     
==========================================
+ Hits        55656    55784     +128     
- Misses       4201     4239      +38     

☔ View full report in Codecov by Sentry.

@jni (Member) commented Mar 27, 2025

@Czaki are we sure we want to replace rather than allow user-selectable backends? Currently we have:

  • pure python
  • numba
  • triangle
  • PartSegCore-compiled-backend
  • bermuda

Kinda crazy but I'm pretty proud of how much experimentation we've done in a short time. 😃 It's plausible that different libraries have different bottlenecks, and different crash edge cases. In that case it might be useful for users to choose which backend they want to use?

The major downside to my proposal is that we would need to update the preferences schema, which might indeed not be worth the trouble.

Let me know what you're thinking — I can be convinced to drop PartSegCore-compiled-backend altogether if you prefer that.

@Czaki (Collaborator, Author) commented Mar 27, 2025

When this PR is ready to merge, bermuda should support all cases supported by PartSegCore (I need to investigate the current crashes).

I see the advantage of allowing users to enable/disable triangle, but I do not see the value of selecting between pure Python and numba. But if you think it is worth it, it could be done.

@Czaki Czaki added the run-benchmarks Add this label to trigger a full benchmark run in a PR label Mar 28, 2025
github-actions bot (Contributor) commented:

The Qt benchmark run requested by PR #7747 (3a3f01b vs 6f702fe) has finished with status 'success'. See the CI logs and artifacts for further details.

Change | Before [6f702fe] | After [3a3f01b] | Ratio | Benchmark (Parameter)
- | 1.57±0.5ms | 950±3μs | 0.6 | benchmark_qt_slicing.AsyncImage2DSuite.time_set_view_slice(0.1, 'skin_data')

SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
The non-Qt benchmark run requested by PR #7747 (3a3f01b vs 6f702fe) has finished with status 'failure'. See the CI logs and artifacts for further details.

@github-actions github-actions bot removed the run-benchmarks Add this label to trigger a full benchmark run in a PR label Mar 28, 2025
@Czaki Czaki added the run-benchmarks Add this label to trigger a full benchmark run in a PR label Mar 31, 2025
@Czaki (Collaborator, Author) commented Mar 31, 2025

I have skipped the example that crashes benchmarks, to measure the performance improvements.

Once #7766 is merged (I need to wait on CI), I will update this PR and revert the last commit, to debug what happened and fix the bug in bermuda.

@jni (Member) commented Mar 31, 2025

When this PR is ready to merge, bermuda should support all cases supported by PartSegCore (I need to investigate the current crashes).

Well, the parentheses say it all, I think! 😂 Although the two backends are supposed to be equivalent, differences in syntax, compilers, and maybe even implementation bugs may cause crashes in different circumstances, and rapidly switching between backends (without having to modify the environment) could be helpful in debugging? 🤷 But, this could always be added in 0.6.1 so that we have time to figure out the settings schema migration.

@jni (Member) commented Mar 31, 2025

... Although, as I write that: for users, it is probably a good idea to actually do a proper migration. If you already have compiled triangulation turned on, have partsegcore-compiled-backend installed, and upgrade to 0.6.0 in the same environment, it would be quite annoying for your shapes layers to suddenly be slow again. With a migration, we could change True to partsegcore-compiled-backend, which would let people do in-place upgrades without this issue...

(We could leave numba/triangle/none for a future release.)

(btw, the reason to run the pure Python backend on demand is that sometimes you encounter bad data that causes crashes in the compiled backends or in numba but that are easier to diagnose in pure Python.)
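The in-place-upgrade scenario described above could be handled by a small dict-level migration that runs before the settings model is built. This is only an illustrative sketch; the key names and target values are assumptions, not napari's actual settings schema.

```python
# Illustrative settings migration: map the old boolean
# `compiled_triangulation` key to the new `triangulation_backend` key.
# Key names and backend values are assumptions for this sketch.
def migrate_experimental_settings(data: dict) -> dict:
    experimental = data.setdefault('experimental', {})
    if 'compiled_triangulation' in experimental:
        old = experimental.pop('compiled_triangulation')
        # A user who had compiled triangulation enabled keeps fast
        # shapes layers after upgrading in place; setdefault avoids
        # clobbering a value the user has already set explicitly.
        experimental.setdefault(
            'triangulation_backend',
            'partsegcore' if old else 'fastest_available',
        )
    return data
```

Running this on the raw dict before model validation sidesteps the problem of the model rejecting (or silently dropping) the obsolete key.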

github-actions bot (Contributor) commented:

The Qt benchmark run requested by PR #7747 (deda7e5 vs 6f702fe) has finished with status 'success'. See the CI logs and artifacts for further details.

Change | Before [6f702fe] | After [deda7e5] | Ratio | Benchmark (Parameter)
- | 255±200ms | 165±10ms | 0.65 | benchmark_qt_viewer_image.QtViewerAddImageSuite.time_add_image(32)
- | 4.76±0.9ms | 2.85±0.1ms | 0.6 | benchmark_qt_slicing.AsyncImage2DSuite.time_create_layer(0.1, 'skin_data')
- | 1.20±0.4ms | 722±50μs | 0.6 | benchmark_qt_viewer_image.QtImageRenderingSuite.time_change_gamma(4096)
- | 2.33±0.6ms | 1.33±0.05ms | 0.57 | benchmark_qt_viewer_image.QtViewerImageSuite.time_refresh(64)
- | 77.2±20ms | 34.6±1ms | 0.45 | benchmark_qt_slicing.QtViewerAsyncImage2DSuite.time_z_scroll(0.0, 'jrc_hela-2 (scale 3)')

SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
The non-Qt benchmark run requested by PR #7747 (deda7e5 vs 6f702fe) has finished with status 'success'. See the CI logs and artifacts for further details.

Change | Before [6f702fe] | After [deda7e5] | Ratio | Benchmark (Parameter)
- | 565±10μs | 323±2μs | 0.57 | benchmark_shapes_layer.MeshTriangulationSuite.time_set_meshes('path', True)
- | 4.83±0.02ms | 2.60±0.03ms | 0.54 | benchmark_shapes_layer.MeshTriangulationSuite.time_set_meshes('polygon', True)

SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.

@github-actions github-actions bot removed the run-benchmarks Add this label to trigger a full benchmark run in a PR label Mar 31, 2025
jni pushed a commit that referenced this pull request Apr 1, 2025
# References and relevant issues

For easier debugging of #7747

# Description

In #7632 I added a dump of the problematic shapes that crash
triangulation. I think that such an option to debug problematic cases
may be introduced in other parts of the codebase.

This PR adds an upload of this data dump to the benchmarks, for easier
debugging of CI.
@Czaki Czaki added the run-benchmarks Add this label to trigger a full benchmark run in a PR label Apr 1, 2025
github-actions bot (Contributor) commented Apr 2, 2025

The Qt benchmark run requested by PR #7747 (64b6ccb vs 4362250) has finished with status 'success'. See the CI logs and artifacts for further details.
The non-Qt benchmark run requested by PR #7747 (64b6ccb vs 4362250) has finished with status 'failure'. See the CI logs and artifacts for further details.

@github-actions github-actions bot removed the run-benchmarks Add this label to trigger a full benchmark run in a PR label Apr 2, 2025
@Czaki (Collaborator, Author) commented Apr 22, 2025

I also expected that it would not work.
We need something like local_migrator (https://pypi.org/project/local-migrator/), which works on the dict before creating a model instance.

My opinion is that we should calculate the default based on the environment, to provide the user with the fastest solution by default. We may name it cascade.

@jni (Member) commented Apr 22, 2025

we need something like local_migrator (pypi.org/project/local-migrator), which works on the dict before creating a model instance.

ha, cool stuff @Czaki!

I would like to get this in quickly, though, so I think what I will do is add back compiled_triangulation and document it as unused. What do you think?

@Czaki (Collaborator, Author) commented Apr 22, 2025

I would like to get this in quickly, though, so I think what I will do is add back compiled_triangulation and document it as unused. What do you think?

And hide it from the interface?

@jni (Member) commented Apr 22, 2025

And hide it from the interface?

I would like this, but is there an easy way to do so? For now it is at the bottom and described as unused, pointing to the correct setting.

My opinion is that we should calculate the default based on the environment, to provide the user with the fastest solution by default. We may name it cascade.

Do you have time to work on this now? I need to make dinner soon but will come back to this afterwards.

@jni (Member) commented Apr 22, 2025

Hmmm, even after using the default setting in the migration, I am still getting this failure:

_________________________ test_env_settings_restore _________________________

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x156dd7a40>

    def test_env_settings_restore(monkeypatch):
        monkeypatch.setenv('NAPARI_ASYNC', '0')
        s = NapariSettings()
        s.experimental.completion_radius = 1
        assert s.env_settings() == {'experimental': {'async_': '0'}}
>       assert s._save_dict()['experimental'] == {'completion_radius': 1}
E       AssertionError: assert {'completion_...: pure_python} == {'completion_radius': 1}
E
E         Omitting 1 identical items, use -vv to show
E         Left contains 1 more item:
E         {'triangulation_backend': pure_python}
E         Use -v to get more diff

napari/settings/_tests/test_settings.py:436: AssertionError

@Czaki (Collaborator, Author) commented Apr 22, 2025

Hmmm, even after using the default setting in the migration, I am still getting this failure:

It was caused by changes in conftest.py, made to enforce the fastest triangulation in tests.

But if we set fastest_available as the default, then this change is obsolete.

@jni (Member) commented Apr 22, 2025

Oh! ad8a0b7 is nice! Just missed a spot. 😊 I think 4a78872 should fix it. 🤞

@jni (Member) commented Apr 22, 2025

@Czaki nice work with be11d2c! I was wondering about the setting writing. 👍 Do you think it might be even better to use all lowercase and underscores? (ie exactly the same as the enum value)

@jni (Member) left a review comment

🚀

@jni jni added the ready to merge Last chance for comments! Will be merged in ~24h label Apr 22, 2025
@jni (Member) commented Apr 22, 2025

Tested this locally and it's working great!

@Czaki (Collaborator, Author) commented Apr 22, 2025

Do you think it might be even better to use all lowercase and underscores? (ie exactly the same as the enum value)

That happens when using auto(), but it looks worse to me (when opening the settings).

@jni (Member) commented Apr 22, 2025

That happens when using auto(), but it looks worse to me (when opening the settings).

Oh. I saw in one of the test failures that pure_python was recorded as 6, which looks real bad 😂

here: https://github.com/napari/napari/actions/runs/14595824772/job/40941394457?pr=7747#step:14:485

E AssertionError: assert NapariSetting...----------\n{}\n == NapariSetting..._backend: '6'\n

Does this mean that one would need to write settings.experimental.triangle_backend = 'Pure python' and 'pure_python' wouldn't work?

@jni (Member) commented Apr 22, 2025

Does this mean that one would need to write settings.experimental.triangle_backend = 'Pure python' and 'pure_python' wouldn't work?

If this is the case, I am opposed to this change...

return self.name.replace('_', ' ').title()
return str(self.value)
@jni (Member) commented on the diff:

This seems like a step backward. I think it should be reverted, and it suggests to me that the changes from auto() were not needed...

@Czaki (Collaborator, Author) replied:

These changes come from this line in the preferences dialog code:

enums = [s.value for s in subfield.type_]  # type: ignore

So instead of using str(enum), it uses enum.value when preparing the dialog. I do not want to touch this in this PR...
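The pattern under discussion, as I read it from the diff hunk above, keeps machine-friendly enum values while `__str__` produces the human-readable label shown in the preferences dialog. A minimal sketch (member set abbreviated, exact names assumed, not napari's actual class):

```python
from enum import Enum


class TriangulationBackend(Enum):
    # Values stay machine-friendly so assignments like
    # settings.experimental.triangulation_backend = 'pure_python' work;
    # only the displayed string is prettified.
    fastest_available = 'fastest_available'
    bermuda = 'bermuda'
    pure_python = 'pure_python'

    def __str__(self) -> str:
        # 'pure_python' -> 'Pure Python' for the settings dialog
        return self.name.replace('_', ' ').title()
```

With this shape, a dialog can display `str(member)` while serialization and lookup keep using the underscore values.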

@jni (Member) replied:

oooh, I thought this was in StrEnum, not in TriangulationBackend! Ok this is fine. 🙏

@jni jni merged commit 1ab3ac3 into napari:main Apr 23, 2025
37 checks passed
@jni jni deleted the switch_bermuda branch April 23, 2025 01:59
@github-actions github-actions bot removed the ready to merge Last chance for comments! Will be merged in ~24h label Apr 23, 2025
@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/upcoming-napari-0-6-0-release/108954/10

@imagesc-bot

This pull request has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/napari-0-6-0-released/112147/1

Labels
enhancement · maintenance (PR with maintenance changes) · preferences · tests (Something related to our tests)
3 participants