[Bug] weird responses from gpt-4o or gpt-4o-mini from OpenAI #8109
-
📦 Environment: Docker
📌 Version: 1.49.5
💻 Operating System: Ubuntu
🌐 Browser: Chrome
🐛 Bug Description: Weird responses when using gpt-4o or gpt-4o-mini. Other models are OK.
📷 Recurrence Steps: No response
🚦 Expected Behavior: No response
📝 Additional Information: No response
Replies: 3 comments
-
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
-
I couldn't find specific information about issues with the gpt-4o or gpt-4o-mini models in the Lobe Chat codebase. However, recent updates indicate that LobeChat v1.6 supports GPT-4o mini and that the model lineup has been upgraded to GPT-4. These models are relatively new, which might contribute to the unexpected behavior [1][2].

For debugging, first make sure your API configuration settings are correct, since incorrect settings can lead to unexpected responses. Lobe Chat also has comprehensive error handling and logging features that can help diagnose the issue; you may want to enable debugging to capture more detailed logs of the API responses [3][4].

If the problem persists, check for any recent changes or updates in the codebase that might affect these models [1].
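To narrow down whether the odd output comes from LobeChat's configuration or from the OpenAI API itself, it can help to call the chat completions endpoint directly with the same models. Below is a minimal sketch, not part of the LobeChat codebase, assuming Node 18+ (built-in `fetch`) and a valid `OPENAI_API_KEY` in the environment:

```typescript
// Minimal direct call to the OpenAI Chat Completions API to check whether
// gpt-4o / gpt-4o-mini respond sensibly outside of LobeChat.
// Assumes Node 18+ (global fetch) and OPENAI_API_KEY set in the environment.
async function checkModel(model: string): Promise<void> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "Reply with the single word: ok" }],
      temperature: 0, // deterministic output makes odd responses easier to spot
    }),
  });

  if (!response.ok) {
    // Surface API-level errors (wrong key, unsupported model, proxy issues, ...)
    console.error(model, response.status, await response.text());
    return;
  }

  const data: any = await response.json();
  console.log(model, "->", data.choices[0].message.content);
}

checkModel("gpt-4o-mini").catch(console.error);
checkModel("gpt-4o").catch(console.error);
```

If the direct call looks normal but LobeChat still returns garbled output, the problem is more likely in the proxy URL or client-side model settings than in the model itself.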
-
Please check that your parameters are correct.
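For reference, these are the request parameters most often involved when responses look garbled. The values below are only an illustrative baseline, not LobeChat's defaults:

```typescript
// Illustrative chat completion parameters for gpt-4o / gpt-4o-mini.
// Values outside the documented ranges, or a temperature set very high,
// are a common cause of strange or incoherent output.
const requestParams = {
  model: "gpt-4o-mini",
  temperature: 1,        // 0 to 2; values near 2 tend to produce gibberish
  top_p: 1,              // 0 to 1
  frequency_penalty: 0,  // -2 to 2
  presence_penalty: 0,   // -2 to 2
};
```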