8000 Half-finished org-mode conversions (at least with streaming on) · Issue #813 · karthink/gptel · GitHub



Open
1 task done
alexispurslane opened this issue Apr 29, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@alexispurslane

Please update gptel first -- errors are often fixed by the time they're reported.

  • I have updated gptel to the latest commit and tested that the issue still exists

Bug Description

With org-mode enabled in gptel buffers, sometimes streamed output from the LLM is partially converted --- for instance, a common problem is half-converted italics notation, so I end up with something like "/foo*" all over the place. There aren't any errors, as far as I can tell, just this issue.
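For illustration only (this is a toy sketch, not gptel's actual converter): a markdown-to-org rewriter that handles the opening and closing `*` of an emphasis span with two independent heuristics can reproduce exactly this symptom, because one side can match its rule while the other does not:

```python
import re

def md_italics_to_org(text):
    """Toy markdown->org italics converter (NOT gptel's code).

    The opener and closer are rewritten by two independent rules, so a
    span whose closer fails its rule comes out half-converted.
    """
    # opener: '*' immediately followed by a non-space character
    text = re.sub(r'\*(?=\S)', '/', text)
    # closer: '*' immediately preceded by a word character
    text = re.sub(r'(?<=\w)\*', '/', text)
    return text

print(md_italics_to_org('some *italics* here'))
# -> some /italics/ here
print(md_italics_to_org('*Gather Search Terms:* Compile a list'))
# -> /Gather Search Terms:* Compile a list  (closer blocked by the ':')
```

The second call matches the reported symptom: `*foo*` rendered as `/foo*` whenever the closing delimiter sits next to punctuation the closer rule rejects.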

Backend

None

Steps to Reproduce

Not sure how to repro this specifically, since it happens all the time for me.

Additional Context

Emacs version: GNU Emacs 31.0.50 (build 2, x86_64-pc-linux-gnu, GTK+ Version 3.24.49, cairo version 1.18.2) of 2025-04-20

Backtrace

Log Information

@alexispurslane alexispurslane added the bug Something isn't working label Apr 29, 2025
@karthink
Owner
  1. Turn on logging: (setq gptel-log-level 'info)
  2. Use gptel.
  3. When this error occurs, provide the latest response as it appears in the buffer and the corresponding contents of the buffer *gptel-log*.

I can compare the two and try to fix the conversion bug.

@alexispurslane
Author

*gptel-log* output:

[
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": "Okay"
            }
          ],
          "role": "model"
        }
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": ", that's a fascinating and relevant topic, especially given the increasing accessibility of powerful personal"
            }
          ],
          "role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": " computing hardware. To research the performance of local large language models (LLMs) on"
            }
          ],
          "role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": " the M1 Mac Studio Ultra, I propose the following plan:\n\n1.  *Gather Search Terms:* Compile a list of relevant search terms to use with"
            }
          ],
          "role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": " the =internet_search= tool. These terms should include specific LLMs (e.g., Llama, Alpaca, GPT-J), the M1"
            }
          ],
          "role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": " Mac Studio Ultra, and performance metrics (e.g., inference speed, memory usage, quantization).\n2.  *Initial Search:* Perform initial searches using a broad range of search terms to identify relevant articles, blog posts, forum discussions"
            }
          ],
          "role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": ", and GitHub repositories.\n3.  *Filter and Prioritize Results:* Analyze the search results to identify the most credible and informative sources. Prioritize results that include quantitative performance data (e.g., benchmarks, latency measurements).\n4"
            }
          ],
          "role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": ".  *Deep Dive into Key Resources:* Use the =read_website= tool to extract and analyze the content of the most promising articles and blog posts. Summarize the key findings and performance metrics.\n5.  *GitHub Exploration:* If the search results reveal relevant GitHub repositories, examine the code and documentation to understand how LL"
            }
          ],
          "role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": "Ms are being optimized for the M1 Mac Studio Ultra.\n6.  *Synthesize Findings:* Combine the information gathered from different sources to create a comprehensive overview of the performance of local LLMs on the M1 Mac Studio Ultra. Identify any trends, limitations, or areas for further research.\n\nDoes this plan"
            }
          ],
          "
7E64
role": "model"
        },
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 16363,
      "totalTokenCount": 16363,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 16363
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  },
  {
    "candidates": [
      {
        "content": {
          "parts": [
            {
              "text": " sound good to you? If so, I'll start by generating some search terms.\n"
            }
          ],
          "role": "model"
        },
        "finishReason": "STOP",
        "safetyRatings": [
          {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_HARASSMENT",
            "probability": "NEGLIGIBLE"
          },
          {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "probability": "NEGLIGIBLE"
          }
        ]
      }
    ],
    "usageMetadata": {
      "promptTokenCount": 15518,
      "candidatesTokenCount": 347,
      "totalTokenCount": 15865,
      "promptTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 15518
        }
      ],
      "candidatesTokensDetails": [
        {
          "modality": "TEXT",
          "tokenCount": 347
        }
      ]
    },
    "modelVersion": "gemini-2.0-flash"
  }
]

And here's the corresponding output:

Okay, that's a fascinating and relevant topic, especially given the increasing accessibility of powerful personal computing hardware. To research the performance of local large language models (LLMs) on the M1 Mac Studio Ultra, I propose the following plan:

1.  /Gather Search Terms:* Compile a list of relevant search terms to use with the =internet_search= tool. These terms should include specific LLMs (e.g., Llama, Alpaca, GPT-J), the M1 Mac Studio Ultra, and performance metrics (e.g., inference speed, memory usage, quantization).
2.  /Initial Search:* Perform initial searches using a broad range of search terms to identify relevant articles, blog posts, forum discussions, and GitHub repositories.
3.  /Filter and Prioritize Results:* Analyze the search results to identify the most credible and informative sources. Prioritize results that include quantitative performance data (e.g., benchmarks, latency measurements).
4.  /Deep Dive into Key Resources:* Use the =read_website= tool to extract and analyze the content of the most promising articles and blog posts. Summarize the key findings and performance metrics.
5.  /GitHub Exploration:* If the search results reveal relevant GitHub repositories, examine the code and documentation to understand how LLMs are being optimized for the M1 Mac Studio Ultra.
6.  /Synthesize Findings:* Combine the information gathered from different sources to create a comprehensive overview of the performance of local LLMs on the M1 Mac Studio Ultra. Identify any trends, limitations, or areas for further research.

Does this plan sound good to you? If so, I'll start by generating some search terms.
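For contrast, a sketch (again toy code, not gptel's parser) of delimiter handling that cannot desynchronize: one rule for every `*`, with the open/closed state carried across streamed chunks, so both sides of a pair are always rewritten the same way:

```python
def make_streaming_converter():
    """Toy streaming markdown->org italics converter (NOT gptel's code).

    Every '*' is rewritten by the same rule, and the open/closed state
    survives chunk boundaries, so delimiters always change in pairs.
    (A real converter would also need to skip literal asterisks.)
    """
    state = {'open': False}

    def feed(chunk):
        out = []
        for ch in chunk:
            if ch == '*':
                out.append('/')          # same rewrite for opener and closer
                state['open'] = not state['open']
            else:
                out.append(ch)
        return ''.join(out)

    return feed

feed = make_streaming_converter()
# emphasis split across two streamed chunks still converts as a pair
print(feed('1.  *Gather Search Te') + feed('rms:* Compile a list'))
# -> 1.  /Gather Search Terms:/ Compile a list
```

Since the log above shows each `*...*` pair arriving inside a single chunk yet still coming out mismatched, the fault is more likely in asymmetric opener/closer matching than in chunk splitting, but the state-carrying structure is needed for streaming either way.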
