[Q] How to download a lot of histories? #7778
Comments
Jason Davenport commented:
Here’s an example implementation:
The fetch_run_history function includes a retry mechanism with exponential backoff. If a rate limit error (HTTP 429) occurs, it waits for a progressively longer time before retrying. This approach should help you download run histories more efficiently without excessively hitting API rate limits.
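The example code referenced above did not survive on this page. A minimal sketch of the retry-with-exponential-backoff idea might look like the following; the `fetch_with_backoff` helper and the wandb calls in the usage comment are illustrative assumptions, not the original implementation:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for a requests.exceptions.HTTPError with status 429."""


def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Call fetch(), retrying on rate-limit errors with exponential backoff.

    Waits base_delay * 2**attempt seconds (plus a little jitter) between
    retries, so repeated 429 responses back off progressively.
    """
    for attempt in range(max_retries):
        try:
            return fetch()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)


# Hypothetical usage with the wandb client (names are placeholders):
#   api = wandb.Api()
#   run = api.run("entity/project/run_id")
#   history = fetch_with_backoff(lambda: run.history())
```

In practice you would catch `requests.exceptions.HTTPError` and check for status code 429 instead of the stand-in exception used here.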
Jason Davenport commented:
On a related note, is there a way to access multiple histories through the GraphQL API? Is that API even working at this time? -David
I have several thousand runs in a project, and I'd like to download all of their histories together. Manually looping over the runs and querying each history with `run.history(...)` takes a very long time (hours), and it looks like the implementation of `runs.histories(...)` does the same thing. If I query the runs in parallel, I get:

requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://api.wandb.ai/graphql

Any suggestions on what to do?