Use the AsyncAnthropic Python client.
#83
How have you tested the change? Verified that the changes do not break functionality or introduce warnings in the consuming repositories: agents-docs, agents-tools, agents-cli.
hatch run prepare
```python
import asyncio

from strands import Agent
from strands.models.anthropic import AnthropicModel

model = AnthropicModel(
    client_args={"api_key": "****"},
    model_id="claude-3-7-sonnet-20250219",
    max_tokens=1000,
)
agent = Agent(model=model, callback_handler=None)
prompt = "What is 2+2? Think through the steps."

async def func_a():
    result = await agent.invoke_async(prompt)
    print(f"FUNC_A: {result.message}")

async def func_b():
    result = await agent.invoke_async(prompt)
    print(f"FUNC_B: {result.message}")

async def func_c():
    result = await agent.invoke_async(prompt)
    print(f"FUNC_C: {result.message}")

async def func_d():
    result = await agent.invoke_async(prompt)
    print(f"FUNC_D: {result.message}")

async def main():
    await asyncio.gather(func_a(), func_b(), func_c(), func_d())

asyncio.run(main())
```
Using the sync Anthropic Python client, every function ran sequentially and the total run time averaged 14s. Using the AsyncAnthropic Python client, every function ran concurrently and the total run time averaged 4s.
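The numbers line up with how asyncio concurrency works: the four requests overlap on a single event loop, so total wall time approaches the slowest individual request rather than the sum of all of them. A minimal stdlib sketch (the API latency is simulated here with `asyncio.sleep`; the delays and function names are illustrative, not taken from the PR):

```python
import asyncio
import time

async def fake_call(delay: float) -> str:
    # Stand-in for an awaited AsyncAnthropic request; the real cost is
    # network/model latency, simulated here as a sleep.
    await asyncio.sleep(delay)
    return "done"

async def main() -> float:
    start = time.perf_counter()
    # Four concurrent "requests": gather runs them on one event loop,
    # so wall time ~= max(delays), not sum(delays).
    results = await asyncio.gather(*(fake_call(0.1) for _ in range(4)))
    assert results == ["done"] * 4
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"elapsed: {elapsed:.2f}s")  # ~0.10s (the max delay), not 0.40s (the sum)
```

With the synchronous client each call blocks the thread until the response arrives, so the same four calls would take roughly the sum of their latencies, which is the 14s → 4s difference reported above.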
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
models - anthropic - async
f7fc787
Wait, is this saying that wall time went from 14s -> 4s?!
Yes, on my machine (very scientific, lol). Of course, this will all be hardware- and prompt-dependent. But in any case, customers can expect to see speedups with concurrent executions.
cabed2f
models - anthropic - async (strands-agents#371)
4c77e88
37b8a67
zastrowm approved these changes