Half-finished org-mode conversions (at least with streaming on) #813
Labels
bug
Something isn't working
Comments
I can compare the two and try to fix the conversion bug.
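To make comparing the two easier, the raw model text can be reassembled from the streamed events in the log below. Here is a minimal sketch (not part of gptel; `stream.json` is just an assumed file name for a saved copy of the log) that concatenates every `text` part:

```elisp
(require 'json)

(defun my/gemini-log-text (file)
  "Concatenate every candidates[0].content.parts[*].text found in FILE.
FILE is a saved copy of the streamed-response JSON array below."
  (let* ((json-array-type 'list)     ; JSON arrays -> lists
         (json-object-type 'alist)   ; JSON objects -> alists with symbol keys
         (events (json-read-file file)))
    (mapconcat
     (lambda (event)
       (let* ((candidate (car (alist-get 'candidates event)))
              (parts (alist-get 'parts (alist-get 'content candidate))))
         (mapconcat (lambda (part) (alist-get 'text part)) parts "")))
     events "")))

;; (my/gemini-log-text "stream.json")
```

Here's the raw streamed response from the Gemini API: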
[
{
"candidates": [
{
"content": {
"parts": [
{
"text": "Okay"
}
],
"role": "model"
}
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": ", that's a fascinating and relevant topic, especially given the increasing accessibility of powerful personal"
}
],
"role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": " computing hardware. To research the performance of local large language models (LLMs) on"
}
],
"role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": " the M1 Mac Studio Ultra, I propose the following plan:\n\n1. *Gather Search Terms:* Compile a list of relevant search terms to use with"
}
],
"role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": " the =internet_search= tool. These terms should include specific LLMs (e.g., Llama, Alpaca, GPT-J), the M1"
}
],
"role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": " Mac Studio Ultra, and performance metrics (e.g., inference speed, memory usage, quantization).\n2. *Initial Search:* Perform initial searches using a broad range of search terms to identify relevant articles, blog posts, forum discussions"
}
],
"role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": ", and GitHub repositories.\n3. *Filter and Prioritize Results:* Analyze the search results to identify the most credible and informative sources. Prioritize results that include quantitative performance data (e.g., benchmarks, latency measurements).\n4"
}
],
"role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": ". *Deep Dive into Key Resources:* Use the =read_website= tool to extract and analyze the content of the most promising articles and blog posts. Summarize the key findings and performance metrics.\n5. *GitHub Exploration:* If the search results reveal relevant GitHub repositories, examine the code and documentation to understand how LL"
}
],
"role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": "Ms are being optimized for the M1 Mac Studio Ultra.\n6. *Synthesize Findings:* Combine the information gathered from different sources to create a comprehensive overview of the performance of local LLMs on the M1 Mac Studio Ultra. Identify any trends, limitations, or areas for further research.\n\nDoes this plan"
}
],
"
7E64
role": "model"
},
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 16363,
"totalTokenCount": 16363,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 16363
}
]
},
"modelVersion": "gemini-2.0-flash"
},
{
"candidates": [
{
"content": {
"parts": [
{
"text": " sound good to you? If so, I'll start by generating some search terms.\n"
}
],
"role": "model"
},
"finishReason": "STOP",
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE"
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE"
}
]
}
],
"usageMetadata": {
"promptTokenCount": 15518,
"candidatesTokenCount": 347,
"totalTokenCount": 15865,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 15518
}
],
"candidatesTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 347
}
]
},
"modelVersion": "gemini-2.0-flash"
}
]

And here's the corresponding output:
Please update gptel first -- errors are often fixed by the time they're reported.
Bug Description
With org-mode enabled in gptel buffers, sometimes streamed output from the LLM is partially converted --- for instance, a common problem is half-converted italics notation, so I end up with something like "/foo*" all over the place. There aren't any errors, as far as I can tell, just this issue.
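To make the failure mode concrete, here is a toy per-chunk converter (a sketch only, not gptel's actual converter, which is stateful) showing how an emphasis pair split across streamed chunks comes out half-converted, just like the "/foo*" above:

```elisp
(defun my/toy-chunk-md->org (chunk)
  "Rewrite Markdown \"*\" emphasis markers in CHUNK to Org \"/\" markers.
Only the characters inside CHUNK are visible, which is the point: a
marker sitting at a chunk boundary cannot be classified."
  (let ((out "")
        (len (length chunk))
        (i 0))
    (while (< i len)
      (let* ((c (aref chunk i))
             (prev (and (> i 0) (aref chunk (1- i))))
             (next (and (< (1+ i) len) (aref chunk (1+ i)))))
        (setq out
              (concat out
                      (cond
                       ((not (eq c ?*)) (string c))
                       ;; opening "*": followed by a non-blank character
                       ((and next (not (memq next '(?\s ?\n)))) "/")
                       ;; closing "*": preceded by a non-blank character
                       ((and prev (not (memq prev '(?\s ?\n)))) "/")
                       ;; ambiguous (e.g. at a chunk boundary): leave it
                       (t "*")))))
      (setq i (1+ i)))
    out))

;; If the stream splits the text just before the closing "*", the opening
;; marker is converted but the closing one is left behind:
(mapconcat #'my/toy-chunk-md->org '("some *foo" "* text") "")
;; => "some /foo* text"
```

The real streaming converter carries parser state across chunks, so the actual mechanism is presumably subtler, but the chunk boundary is where the half-converted markers show up.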
Backend
Gemini (gemini-2.0-flash, per the streamed response log above)
Steps to Reproduce
Not sure how to repro this specifically, since it happens all the time for me.
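For what it's worth, the setup where this shows up is roughly the following (a sketch from memory, so treat the exact backend options as approximate):

```elisp
;; Approximate reproduction setup: Gemini backend, streaming on,
;; Org as the default chat mode.
(setq gptel-default-mode 'org-mode  ; responses go into an Org buffer
      gptel-stream t)               ; streaming enabled (the default)
(setq gptel-backend
      (gptel-make-gemini "Gemini"
        :key "YOUR-API-KEY"         ; placeholder
        :stream t))
```

Then send a few longer prompts from a gptel chat buffer; the half-converted markers appear intermittently in the streamed responses.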
Additional Context
Emacs version: GNU Emacs 31.0.50 (build 2, x86_64-pc-linux-gnu, GTK+ Version 3.24.49, cairo version 1.18.2) of 2025-04-20
Backtrace
Log Information