Implement Model-Provider for Claude 3.7 with Thinking · Issue #1999 · obot-platform/obot · GitHub

Implement Model-Provider for Claude 3.7 with Thinking #1999


Open
StrongMonkey opened this issue Mar 8, 2025 · 5 comments

@StrongMonkey
Contributor

Implement a model-provider for Claude 3.7 with thinking as discussed in the Slack thread. Refer to the extended thinking documentation: https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking. The model is still Claude 3.7, but it requires additional steps similar to what Daishan did for O1.
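For reference, extended thinking is enabled per request on the Anthropic Messages API via a `thinking` parameter. Below is a minimal sketch using the Python `anthropic` SDK; the prompt and token budgets are illustrative, not values from this issue:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative numbers: max_tokens must be larger than the thinking budget_tokens.
response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Summarize the repository layout."}],
)

# With thinking enabled, the response content contains "thinking" blocks
# followed by the regular "text" blocks.
for block in response.content:
    print(block.type)
```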

@sangee2004

@iwilltry42 I am getting the following error when chatting with agents that are configured to use the claude-3-7-sonnet-20250219-thinking model.

failed to run: failed calling model for completion: request failed on non-retriable status-code 400: error, status code: 400 (400 Bad Request), message: messages.1.content.0.type: Expected thinking or redacted_thinking, but found text. When thinking is enabled, a final assistant message must start with a thinking block (preceeding the lastmost set of tool_use and tool_result blocks). We recommend you include thinking blocks from previous turns. To avoid this requirement, disable thinking. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking

Default model used for chatting:

Image
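For context on the 400 above: when thinking is enabled and the conversation involves tool use, Anthropic requires that the assistant turn which issued the tool call be replayed with its thinking block (signature included) placed before the tool_use block. A provider that rebuilds the history as plain text/tool_use content drops that block and triggers exactly this error. A sketch of the expected message shape, with all ids, tool names, and contents as hypothetical placeholders:

```python
# Hypothetical follow-up request after a tool call, with thinking enabled.
messages = [
    {"role": "user", "content": "List all branches in obot-platform/obot"},
    {
        "role": "assistant",
        "content": [
            # The thinking block returned in the previous turn must be replayed
            # verbatim, including its signature; omitting it yields the 400 above.
            {"type": "thinking",
             "thinking": "I should call the branch-listing tool.",
             "signature": "<signature string returned by the API>"},
            {"type": "tool_use", "id": "toolu_example", "name": "list_branches",
             "input": {"owner": "obot-platform", "repo": "obot"}},
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "tool_result", "tool_use_id": "toolu_example",
             "content": "main, feature/a, feature/b"},
        ],
    },
]
```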

@iwilltry42
Contributor

Hi @sangee2004, I had a couple of chats with the thinking models without hitting this. Can you please provide an example chat that I can follow?

Or well... I don't have access anymore to follow this, actually 🙈

@sangee2004

In my local deployment, I created an obot which gets "Google Search" and "Images" as built-in tools.
Following is the chat I had with the agent that uses the claude-3-7-sonnet-20250219-thinking model.

Image

@cjellick
Contributor
cjellick commented Apr 9, 2025

I looked at how we implemented the Claude thinking model. This seems wrong. We probably want to do something more "intelligent" than this and figure out how to dynamically turn thinking on and off in chat. Maybe this is a good start in that we can build on top of it.
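Since the thinking option is a per-request parameter, one possible direction for the "dynamically turn thinking on and off" idea is for the model provider to decide per call whether to attach it, for example based on which model alias was requested. A rough sketch; the helper name and flag are hypothetical, not existing obot code:

```python
def build_anthropic_kwargs(messages: list, enable_thinking: bool) -> dict:
    """Hypothetical helper: assemble per-request kwargs for client.messages.create()."""
    kwargs = {
        "model": "claude-3-7-sonnet-20250219",
        "max_tokens": 16000,
        "messages": messages,
    }
    if enable_thinking:
        # Extended thinking is attached only for this request; other requests
        # against the same model go out without it.
        kwargs["thinking"] = {"type": "enabled", "budget_tokens": 8000}
    return kwargs
```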

@sangee2004
sangee2004 commented May 14, 2025

Tested with the latest version.

I have an agent with the GitHub MCP server added, and the Anthropic model provider configured with claude-3-7-sonnet-20250219-thinking as one of the allowed models.

Chat with the agent using this thinking model and ask it to list all branches in https://github.com/obot-platform/obot.

Even though the tool call succeeds and provides the required output, we see the following error message presented as the chat response.

Image Image

Note: when there are no tool calls involved, I am able to get chat responses.

sangee2004 self-assigned this May 15, 2025