Add trio.CapacityLimiter.wait_no_borrows by VincentVanlaer · Pull Request #2880 · python-trio/trio · GitHub

Add trio.CapacityLimiter.wait_no_borrows #2880

Closed

Conversation

VincentVanlaer
Member
@VincentVanlaer commented Nov 18, 2023

This is the equivalent of trio.testing.wait_all_tasks_blocked but for anything wrapped by a CapacityLimiter. This is useful when writing tests that use to_thread.
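As a rough sketch of the intended use (method name as in this PR's title; the thread also spells it wait_no_borrowers, and the blocking worker below is a hypothetical stand-in):

import time

import trio
import trio.testing


def slow_blocking_work():
    # Hypothetical stand-in for real blocking work run in a thread.
    time.sleep(0.1)


async def test_waits_for_threads():
    limiter = trio.to_thread.current_default_thread_limiter()
    async with trio.open_nursery() as nursery:
        nursery.start_soon(trio.to_thread.run_sync, slow_blocking_work)
        # Let the spawned task start and borrow a token from the limiter.
        await trio.testing.wait_all_tasks_blocked()
        # Proposed in this PR: block until the limiter has no borrowers,
        # i.e. until the to_thread call above has finished.
        await limiter.wait_no_borrows()
        assert limiter.borrowed_tokens == 0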

codecov bot commented Nov 18, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (605aff3) 99.64% compared to head (c8e67ab) 99.64%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #2880   +/-   ##
=======================================
  Coverage   99.64%   99.64%           
=======================================
  Files         116      116           
  Lines       17501    17532   +31     
  Branches     3148     3152    +4     
=======================================
+ Hits        17439    17470   +31     
  Misses         43       43           
  Partials       19       19           
Files                          Coverage Δ
src/trio/_sync.py              100.00% <100.00%> (ø)
src/trio/_tests/test_sync.py   100.00% <100.00%> (ø)

@VincentVanlaer marked this pull request as draft November 18, 2023 23:43
@VincentVanlaer marked this pull request as ready for review November 19, 2023 16:48
@njsmith
Member
njsmith commented Nov 19, 2023 via email

@VincentVanlaer
Member Author

> I'm worried this will be an attractive nuisance. In tests where you know exactly what tasks are supposed to be where, it makes total sense, but it also sounds like something that would let you get exclusive access to whatever the limiter is protecting. But that's not true; by the time it returns, another task could have already claimed a token.

Doesn't the while loop guarantee that when the function returns, there are no borrowed tokens? Meaning this is just a more efficient while limiter.borrowed_tokens: await trio.lowlevel.checkpoint(). Or am I misunderstanding something here?
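For reference, the busy-loop version mentioned above as a self-contained sketch (the helper name is just illustrative):

import trio


async def wait_no_borrows_busy(limiter: trio.CapacityLimiter) -> None:
    # Keep yielding at checkpoints until nobody is borrowing a token;
    # the method proposed in this PR would do this without busy-waiting.
    while limiter.borrowed_tokens:
        await trio.lowlevel.checkpoint()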

In any case, another possibility that would communicate the intent better is to add a function to trio.testing that does the same as wait_no_borrowers. It would take a capacity limiter and hook into its internals to provide similar functionality without a busy/semi-busy loop. This is less clean on the implementation side, though.

For context, I'm currently using this function as follows:

async def wait_all_blocked():
    capacity_limiter = trio.to_thread.current_default_thread_limiter()
    while True:
        # Wait until no task is borrowing a token from the default
        # thread limiter, i.e. no to_thread call is in flight.
        await capacity_limiter.wait_no_borrowers()
        # Then wait until every task is blocked; a task woken in the
        # meantime may have started a new thread, so re-check and loop.
        await trio.testing.wait_all_tasks_blocked()
        if capacity_limiter.borrowed_tokens == 0:
            break

@VincentVanlaer
Member Author

@njsmith Does my explanation or alternative proposal alleviate your concerns?

@njsmith
Member
njsmith commented Jan 24, 2024

Would a trio.testing.wait_all_threads_complete be a good solution?

@VincentVanlaer
Member Author

I have implemented your suggestion in #2937, since it amounts to a complete reimplementation.
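For anyone following the thread later: assuming the reimplementation in #2937 landed under the name trio.testing.wait_all_threads_completed (the helper available in current trio releases), a test would await that helper instead of a CapacityLimiter method:

import trio.testing


async def drain_worker_threads() -> None:
    # Illustrative wrapper: wait until every thread started via
    # trio.to_thread has finished running.
    await trio.testing.wait_all_threads_completed()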

@VincentVanlaer deleted the wait-no-borrowers branch February 15, 2024 15:18
@VincentVanlaer
Member Author

Superseded by #2937
