Use fewest transfers possible in read_binary_values #874
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

@@          Coverage Diff          @@
##            main     #874   +/-  ##
=======================================
  Coverage   79.54%   79.54%
=======================================
  Files          27       27
  Lines        5378     5378
  Branches      335      335
=======================================
  Hits         4278     4278
  Misses       1077     1077
  Partials       23       23

Flags with carried forward coverage won't be shown.
LGTM.
Could you update the release notes with the PR number? Also, could you add a note in the docs about the new behavior?
Force-pushed from d4a994c to 1827ac4
Sorry, didn't notice the other items in the changelog had PR numbers. I did update the docstring for the read_binary_values method.
You need to adjust the section.
Force-pushed from 1827ac4 to 0bbbf1c
Done.
Thanks, it looks good. I am wondering if this could break when transferring very long responses, in which case it may be better to cap the maximum chunk_size. I will try to test either tomorrow or next week before merging.
I wonder that as well. It looks like the length field in the USBTMC header is a 4-byte value (up to 4 GiB), while the IEEE header represents the length with up to 9 decimal digits (up to 999 MB), so it should be impossible to overflow the length field of the USB header. Similarly, the HP header uses a 2-byte length value, so that should also be fine. I have tested this with 1 MiB transfers, but I don't have other equipment to test with, so further testing would be great to see if there are issues below these limits.
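To make the limits above concrete, here is a minimal sketch (not pyvisa's internal code; the function name is illustrative) of parsing an IEEE 488.2 definite-length block header. Because the length is written with at most 9 decimal digits, the payload can never exceed 999,999,999 bytes, well under the 4 GiB a 4-byte USBTMC transfer-size field can describe.

```python
def parse_ieee_block_header(buf: bytes) -> tuple[int, int]:
    """Return (payload_length, header_length) for a definite-length block.

    The header is '#', one digit N, then N decimal digits giving the
    payload length, e.g. b"#71048576" announces a 1 MiB payload.
    """
    if not buf.startswith(b"#"):
        raise ValueError("not an IEEE 488.2 block header")
    n_digits = int(buf[1:2])        # single digit: how many length digits follow
    if n_digits == 0:
        raise ValueError("indefinite-length blocks are not handled here")
    header_len = 2 + n_digits
    payload_len = int(buf[2:header_len])
    return payload_len, header_len

# Example: a 1 MiB payload announced as b"#71048576<data...>"
length, offset = parse_ieee_block_header(b"#71048576")
assert length == 1_048_576 and offset == 9
```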
Sorry, I still did not get a chance to test this. If anybody has the bandwidth to do it, it would be welcome.
I just tested this with a Keysight DSOX4024A, Keithley 2450, Tektronix AFG 31000, and Keithley 2230G. All instruments seemed to work as expected with a few basic commands, and I was able to transfer 2 MB of data from the scope without issue, about 20% faster than without this PR (in 200 ms). My tests aren't extensive, but I didn't have any issues.
Great, that's already a very good data point. With your DSO, can you try transferring several hundred MB? If it succeeds, I think we should be good to go.
I had a look through the programming manual, and it seems I misread it: the maximum transfer is actually 10M 10-bit points (limited to 4M on this model, I think), not 1M. I can try that next time I go to the lab. Even now that I have access to a lab with much nicer equipment, I still don't think I have a device that can do several hundred MB all in one go.
Thanks, I will check what I can test on my side.
So I finally managed to run some tests transferring 200M (after easily transferring 100k), but I ran out of time before figuring out what I was doing wrong (or whether transferring that much data is simply slow), and I crashed the instrument through a random disconnection. I will try again later this week.
Ok, I can finally report some interesting testing results. Using the Pyvisa-py backend, this PR and the main branch perform equally (within the error margin associated with such large transfers). As a consequence, I think we should introduce a chunk_size_limit at the resource level to prevent such a slowdown from occurring. The default value could be 2**18 (i.e. 256k). This way we could get the best of both worlds. @aWZHY0yQH81uOYvH, would such a cap still work for you? I remember you needed to avoid having too many transfers, but could this work?
Another option could be to have the default value defined on the library class and put no limit with pyvisa-py. |
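For illustration, here is a hedged sketch of how the proposed cap could work. chunk_size_limit is not an existing pyvisa attribute, just the idea discussed above, and the 2**18 default comes from the comment; a backend such as pyvisa-py could opt out by setting the limit to None and keep the single large transfer.

```python
from typing import List, Optional

def plan_reads(remaining: int, chunk_size_limit: Optional[int] = 2**18) -> List[int]:
    """Return the sizes of the transfers used to fetch `remaining` bytes."""
    if chunk_size_limit is None:
        # No cap: read everything that is left in a single transfer.
        return [remaining] if remaining else []
    sizes = []
    while remaining > 0:
        size = min(remaining, chunk_size_limit)
        sizes.append(size)
        remaining -= size
    return sizes

# A 1 MiB payload becomes four 256 KiB transfers with the default cap,
# but stays a single transfer when the backend opts out of the limit.
assert plan_reads(2**20) == [2**18] * 4
assert plan_reads(2**20, chunk_size_limit=None) == [2**20]
```

This keeps the number of transfers bounded for very large responses while still avoiding the many tiny reads that caused the original data-loss issue.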
See pyvisa/pyvisa-py#472 for more details. Essentially, I have an instrument that will lose data when too many small reads are issued. This PR makes it so read_binary_values, when not passed an explicit chunk_size, uses one transfer of default length to read the data header; then, if there is more data to read, it reads it all in one more transfer. This should be more efficient and solves a problem with my instrument. Behavior for users that specify chunk_size is unaffected.

In order for this to work with the pyvisa-py backend, pyvisa/pyvisa-py#470 needs to be merged, since it currently further breaks up reads.

ruff check . && ruff format -c . --check with no errors
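A rough sketch of the behavior described above (not the actual patch): read_chunk stands in for a single low-level VISA read of at most n bytes, and parse_length for the header parser; both are illustrative names. The header is fetched with a default-sized transfer, and whatever remains is then requested in one more transfer.

```python
from typing import Callable, Tuple

DEFAULT_CHUNK = 20 * 1024  # a 20 KiB default, similar to pyvisa's default chunk size

def read_block(read_chunk: Callable[[int], bytes],
               parse_length: Callable[[bytes], Tuple[int, int]]) -> bytes:
    """Read an IEEE-style block using as few transfers as possible.

    Assumes the first read returns at least the full header.
    """
    data = read_chunk(DEFAULT_CHUNK)            # transfer 1: header (and maybe some data)
    payload_len, header_len = parse_length(data)
    total = header_len + payload_len
    if len(data) < total:
        data += read_chunk(total - len(data))   # transfer 2: all remaining payload at once
    return data[header_len:total]
```

With an explicit chunk_size the old chunked behavior would still apply, matching the statement above that existing users are unaffected.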