Add basic support for asynchronous reads by natezb · Pull Request #403 · pyvisa/pyvisa

Add basic support for asynchronous reads #403


Open
wants to merge 3 commits into main
Conversation

@natezb commented Mar 27, 2019

This PR adds attributes related to IO_COMPLETION events, along with some methods for performing generator-based asynchronous reads.

I've tested this by pulling data from two oscilloscopes simultaneously, and it has worked well.

@MatthieuDartiailh (Member)

@natezb thanks for the PR, it looks promising. I will take time later to review it in detail.

However, it may take some time to merge, since the yield from syntax was introduced in Python 3.3 and cannot easily be implemented in Python 2. Currently PyVISA supports Python 2.7 and 3+. After the next release (for which I do not have a well-defined timeline, but hopefully this summer or earlier), we will switch to 3.5+, since Python 2 EOL is 01-2020.

@MatthieuDartiailh MatthieuDartiailh self-requested a review March 28, 2019 02:30
@MatthieuDartiailh (Member)

Also, I would love to hear @hgrecco's and @tivek's opinions on this.

@natezb (Author) commented Mar 28, 2019

Yeah, I noticed exactly the same thing, as my first two commits produced a broken build. My third commit separates out the code using yield from into a separate module so it can be skipped for versions < 3.3.

@hgrecco (Member) commented Mar 28, 2019

A few years ago I was toying with something like this (yours looks better, by the way), and I have a suggestion. Why not create a separate package (async-pyvisa, or something like that)? People using async are still a niche (a growing niche, but still a niche), and pip installing it is simple enough. That way you would be able to evolve the package as frequently as you like until you feel satisfied.

@MatthieuDartiailh (Member) left a comment
Overall this looks really good. I am not familiar enough with async to check the logic (but I will try to do some reading). From my point of view, I would prefer every read method to have an equivalent async method. The main question here is how to avoid duplication; so far I do not have a good answer for that one.

@hgrecco I am not a big fan of having another pyvisa package, given the added maintenance it represents and the limited time the maintainers have. Since this cannot go in before the end of the summer, I think we can discuss and experiment in this PR before merging. Furthermore, I am fine with marking the API as unstable for 6 months after adding it.


job_id, _ = self.visalib.get_attribute(ctx, constants.VI_ATTR_JOB_ID)
count, _ = self.visalib.get_attribute(ctx, constants.VI_ATTR_RET_COUNT)
self.visalib.close(ctx)
@MatthieuDartiailh (Member):
I would prefer to see this in a try/finally block to be sure we always close the context.
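
Something like the following (a sketch of the suggested change, reusing the variables from the snippet above):

try:
    job_id, _ = self.visalib.get_attribute(ctx, constants.VI_ATTR_JOB_ID)
    count, _ = self.visalib.get_attribute(ctx, constants.VI_ATTR_RET_COUNT)
finally:
    # Release the event context even if get_attribute raises.
    self.visalib.close(ctx)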

warnings.warn("read string doesn't end with "
"termination characters", stacklevel=2)

return message[:-len(termination)]
@MatthieuDartiailh (Member):

I would prefer to be complete here and hence also have async_read_bytes, async_read_ascii_values, and async_read_binary_values. I am not sure what the best way to limit code duplication is here, however.
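
One possible way to limit the duplication (a sketch using an invented helper name, not code from this PR) would be to factor the shared post-processing out of the sync and async variants:

import warnings

def _strip_termination(self, message, termination):
    # Shared tail of read() and async_read(): warn if the read does not
    # end with the termination characters, otherwise strip them.
    if not message.endswith(termination):
        warnings.warn("read string doesn't end with "
                      "termination characters", stacklevel=2)
        return message
    return message[:-len(termination)]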

@RossSmyth

Any update on this?

@MatthieuDartiailh (Member)

No, but internal refactoring should make this easier now (though it also created some conflicts). I do not have experience with async, and honestly I would prefer somebody else to have a look. Feel free to start another similar PR if you have a definite use case for it.

Base automatically changed from master to main January 19, 2021 09:57
@tobinjones commented Jul 20, 2023

Hi folks, I'm quite interested in this. I've had a look at the pull request, tried some experiments myself, and have a few comments. First off, the code from @natezb is very interesting, and he was way ahead of his time implementing this 4 years ago. Impressive!

Async/Await has come a long way in the intervening years and I think it would be a great time to try another run at this. I note you require Python 3.8, which has very robust asyncio features.

There's one main thing I'd like to bring up about this old proposal, and that's the tight loop in the center of the async_read_chunk coroutine:

    while True:
        yield
        msg, job_status = self._check_async_read(timeout=0)
        if msg is not None:
            return msg, job_status

This means that awaiting an async read is basically a spinwait. I'm not sure that's desirable. It would cause the event loop to use 100% of a core and could affect the scheduling of other coroutines.

As I understand it, at the VISA level we are looking to be notified of the VI_EVENT_IO_COMPLETION event. There are two mechanisms for this: the queue mechanism and the callback mechanism. I see both of these have support in pyvisa. Following is pseudocode for the different options (for simplicity I have ignored error handling, chunked responses, basic syntax, etc.).

Queue mechanism

Poll the queue

async def _async_read_polling(self):
    while True:
        msg, status = self._check_async_read(timeout=0)
        if status is DONE:
            return msg
        await asyncio.sleep(POLL_TIME)

Here _check_async_read is basically what natezb wrote: it waits on the event, gets attributes from the context, and closes the context. If POLL_TIME is 0, it's a spinwait. If POLL_TIME is more than zero, it would be a better citizen, but our reads would be delayed by up to POLL_TIME. I don't prefer this solution.
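
For reference, here is my reading of what _check_async_read amounts to (a sketch, assuming the ctypes backend's wait_on_event return shape of (event_type, context, status), and a hypothetical self._async_buf holding the buffer passed to read_asynchronously):

from pyvisa import constants, errors

def _check_async_read(self, timeout):
    # Wait for VI_EVENT_IO_COMPLETION; on timeout, report "not done yet".
    try:
        _, ctx, _ = self.visalib.wait_on_event(
            self.session, constants.VI_EVENT_IO_COMPLETION, timeout)
    except errors.VisaIOError:
        return None, None
    try:
        job_status, _ = self.visalib.get_attribute(ctx, constants.VI_ATTR_STATUS)
        ret_count, _ = self.visalib.get_attribute(ctx, constants.VI_ATTR_RET_COUNT)
    finally:
        self.visalib.close(ctx)
    # The data itself lives in the buffer handed to read_asynchronously.
    return bytes(self._async_buf[:ret_count]), job_status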

Wait on queue in a thread or process pool

See documentation for run_in_executor

async def _async_read_executor(self):
    loop = asyncio.get_running_loop()
    msg, job_status = await loop.run_in_executor(
        executor, lambda: self._check_async_read(timeout=a_long_time))
    return msg, job_status

This uses an executor to run the blocking _check_async_read function in either a thread or a process. It is conceptually quite simple: it just shunts our blocking call off to a thread and says "let me know when you're done". However, now you have the overhead of spinning up a thread/process for every async read.
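
One mitigation (my assumption, not part of the proposal above): passing None as the executor makes run_in_executor use the event loop's default ThreadPoolExecutor, so worker threads are reused across reads rather than created per call:

import asyncio

async def _async_read_default_executor(self):
    loop = asyncio.get_running_loop()
    # None selects the loop's shared default ThreadPoolExecutor;
    # a_long_time is the placeholder timeout from the pseudocode above.
    return await loop.run_in_executor(
        None, lambda: self._check_async_read(timeout=a_long_time))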

Callback mechanism

We can get away from blocking calls entirely by using the callback mechanism. The callback must have a reference to the main asyncio event loop and some kind of asyncio.Queue or asyncio.Event that it can set when the VI_EVENT_IO_COMPLETION event fires. The callback runs outside the main event loop, so the synchronization should be done with loop.call_soon_threadsafe().

async def _async_read_callback(self):
    queue = asyncio.Queue()
    handler = self._handler_for_async_read(queue, asyncio.get_running_loop())
    self.install_handler(VI_EVENT_IO_COMPLETION, handler)
    buf, *_ = self.visalib.read_asynchronously(self.session, self.chunk_size)
    (job_status, ret_count) = await queue.get()
    msg = bytearray(buf[:ret_count])
    self.uninstall_handler(VI_EVENT_IO_COMPLETION, handler)
    return msg, job_status

where the _handler_for_async_read method is something like:

def _handler_for_async_read(self, queue, loop):
    def _handler(session, event_type, event_ctx, user_handle):
        job_status, _ = self.visalib.get_attribute(event_ctx, VI_ATTR_STATUS)
        ret_count, _ = self.visalib.get_attribute(event_ctx, VI_ATTR_RET_COUNT)
        loop.call_soon_threadsafe(
            lambda: queue.put_nowait((job_status, ret_count))
        )
    return _handler

Conceptually, I quite like this because it avoids blocking, polling, and creating any extra threads on our end. I have a proof of concept working and could start another pull request if there is interest in discussing it. I am NOT an expert on low-level VISA stuff, so I will probably need help to make sure I'm following best practices there.
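
For a sense of the intended usage (a hypothetical sketch: _async_read_callback is the pseudocode method above, and the resource strings are made up), concurrent reads from two instruments could look like:

import asyncio
import pyvisa

async def main():
    rm = pyvisa.ResourceManager()
    scope1 = rm.open_resource("TCPIP0::192.168.1.10::INSTR")
    scope2 = rm.open_resource("TCPIP0::192.168.1.11::INSTR")
    scope1.write("CURVE?")
    scope2.write("CURVE?")
    # Both reads wait on VI_EVENT_IO_COMPLETION without blocking the loop.
    (msg1, _), (msg2, _) = await asyncio.gather(
        scope1._async_read_callback(), scope2._async_read_callback())

asyncio.run(main())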

Any thoughts on these options or next steps?

@tobinjones
Ok, just some extra information about those options. I threw together a very ugly implementation:
https://gist.github.com/tobinjones/173a3aaab1773711e83d166eb55803d8

And benchmarked them, doing "*IDN?" on a real instrument via the IVI VISA library. The bottom line is:

- Async reads via callback added ~0.4 ms per read, compared to a synchronous read.
- Async reads via queue in a thread-pool executor added ~1.6 ms per read, compared to a synchronous read.

Thus, I think the callback method is significantly faster.

@MatthieuDartiailh (Member)

Since my last comment I have gained experience with async, but I did not find time to dive back into this.

I am not surprised to see the callback mechanism outperforming the poll-based method. The last time I gave async support some thought, my concern was that to be useful we would have to duplicate all methods. Async makes the most sense for binary data, but it would not make sense to have people duplicate that logic. One option I considered was a different method on the ResourceManager returning async-only versions of resources, but I did not explore all the ramifications.
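
To make that option concrete, a hypothetical sketch of what such an API could look like (open_async_resource is an invented name, not an existing pyvisa API):

import asyncio
import pyvisa

async def main():
    rm = pyvisa.ResourceManager()
    # Hypothetical: a dedicated constructor returning an async-only resource,
    # keeping the sync and async method namespaces separate.
    inst = rm.open_async_resource("GPIB0::12::INSTR")
    await inst.write("*IDN?")
    print(await inst.read())

asyncio.run(main())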

Honestly, if you want to take this forward, I would like to see a detailed, PEP-like plan addressing the above concerns and the solution you suggest.
