Releases: karthink/gptel
Version v0.9.8.5
Version 0.9.8.5 adds support for new Gemini, Anthropic and OpenAI models and for AWS Bedrock and other providers, brings better MCP support and a redesigned tools menu, adds support for "presets" and quick ways to invoke them, allows context inclusion via Org/Markdown links, and improves the handling of LLM "reasoning" content. Additionally, the `gptel-request` pipeline is now fully asynchronous and tweakable, making it easy to add support for RAG steps or other prompt transformations.
Breaking changes

- `gptel-org-branching-context` is now a global variable. It was buffer-local by default in past releases.
- The following models have been removed from the default ChatGPT backend:
  - `o1-preview`: use `o1` instead.
  - `gpt-4-turbo-preview`: use `gpt-4o` or `gpt-4-turbo` instead.
  - `gpt-4-32k`, `gpt-4-0125-preview` and `gpt-4-1106-preview`: use `gpt-4o` or `gpt-4` instead.

  Alternatively, you can add these models back to the backend in your personal configuration:

  ```emacs-lisp
  (push 'gpt-4-turbo-preview
        (gptel-backend-models (gptel-get-backend "ChatGPT")))
  ```
- Only relevant if you use `gptel-request` in your elisp code (interactive gptel usage is unaffected): `gptel-request` now takes a new, optional `:transforms` argument. Any prompt modifications (like adding context to requests) must now be specified via this argument. See the definition of `gptel-send` for an example, and the sketch just below this list.
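A minimal sketch of the new argument, patterned on `gptel-send`; the transform list passed here is gptel's own default, and any prompt-rewriting functions of your own would be added to that list:

```emacs-lisp
;; Run gptel's standard prompt transforms (context injection and
;; similar) on a scripted request, as gptel-send does:
(gptel-request "Explain the region above."
  :transforms gptel-prompt-transform-functions)
```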
New models and backends

- Add support for `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3` and `o4-mini`.
- Add support for `gemini-2.5-pro-exp-03-25`, `gemini-2.5-flash-preview-04-17`, `gemini-2.5-pro-preview-05-06` and `gemini-2.5-pro-preview-06-05`.
- Add support for `claude-sonnet-4-20250514` and `claude-opus-4-20250514`.
- Add support for AWS Bedrock models. You can create an AWS Bedrock gptel backend with `gptel-make-bedrock`, which see. Please note: AWS Bedrock support requires Curl 8.5.0 or higher.
- You can now create an xAI backend with `gptel-make-xai`, which see. (xAI was supported before, but the model configuration is now handled for you by this function.)
- Add support for GitHub Copilot Chat. See the README and `gptel-make-gh-copilot`. Please note: this is only the chat component of GitHub Copilot. Copilot's `completion-at-point` (tab-completion) functionality is not supported by gptel.
- Add support for Sambanova. This is an OpenAI-compatible API, so you can create a backend with `gptel-make-openai`; see the README for details.
- Add support for Mistral Le Chat. This is also an OpenAI-compatible API, so you can create a backend with `gptel-make-openai`; see the README for details. Hedged backend-creation sketches for some of these providers follow this list.
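The backend constructors named above take keyword arguments whose authoritative list is in each function's docstring; the sketch below is illustrative only, and the keys, host, endpoint and model names marked in the comments are assumptions:

```emacs-lisp
;; AWS Bedrock (requires Curl >= 8.5.0); the :region value is illustrative.
(gptel-make-bedrock "Bedrock"
  :region "us-east-1"
  :stream t)

;; xAI: model configuration is handled for you by this constructor.
(gptel-make-xai "xAI"
  :key "your-api-key"
  :stream t)

;; An OpenAI-compatible provider, e.g. Mistral Le Chat; the host,
;; endpoint and model name here are assumptions -- see the README.
(gptel-make-openai "MistralLeChat"
  :host "api.mistral.ai"
  :endpoint "/v1/chat/completions"
  :key "your-api-key"
  :stream t
  :models '(mistral-small-latest))
```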
New features and UI changes

- gptel can access MCP server tools by integrating with the mcp.el package (https://github.com/lizqwerscott/mcp.el, also available on MELPA). To help with the integration, two new commands are provided: `gptel-mcp-connect` and `gptel-mcp-disconnect`. You can use these to start MCP servers selectively and add their tools to gptel. These commands are also available from gptel's tools menu. They are currently not autoloaded by gptel: to access them, require the `gptel-integrations` feature. (A sketch follows this list.)
- Tools now run in the buffer from which the request originates. This can be significant when tools read or manipulate Emacs' state.
- The tools menu (`gptel-tools`) has been redesigned. It now displays tool categories and associated tools in two columns, and it should scale better to any number of tools. As a bonus, the new menu requires half as many keystrokes as before to enable individual tools or toggle categories.
- You can now define "presets", which are bundles of gptel options, such as the backend, model, system message, included tools, temperature and so on. A preset can be applied as a whole, making it easy to switch between different tasks using gptel. From gptel's transient menu, you can save the current configuration as a preset or apply another one. Presets can be applied globally, buffer-locally or for the next request only. To persist presets across Emacs sessions, define them in your configuration using `gptel-make-preset`. (A sketch follows this list.)
- When using `gptel-send` from anywhere in Emacs, you can now include a "cookie" of the form `@preset-name` in the prompt text to apply that preset before sending. The preset is applied for that request only. This is an easy way to specify models, tools, system messages (etc.) on the fly. In chat buffers the preset cookie is fontified and available for completion via `completion-at-point`.
- For scripting purposes, a `gptel-with-preset` macro is provided to create an environment with a preset applied.
- Links to plain-text files in chat buffers can be followed, and their contents included with the request. Using Org or Markdown links is an easy, intuitive, persistent and buffer-local way to specify context. To enable this behavior, turn on `gptel-track-media`. This is a pre-existing option that also controls whether image/document links are followed and sent (when the model supports it).
- The current kill can be added to gptel's context. To enable this, turn on `gptel-expert-commands` and use gptel's transient menu.
- gptel now supports handling reasoning/thinking blocks in responses from Gemini models, controlled by `gptel-include-reasoning` just as for other APIs.
- A new hook, `gptel-prompt-transform-functions`, is provided for arbitrary transformations of the prompt prior to sending a request. This hook runs in a temporary buffer containing the text to be sent. Any aspect of the request (the text, destination, request parameters, response handling preferences) can be modified buffer-locally here. These hook functions can be asynchronous. (A sketch follows this list.)
- The user option `gptel-use-curl` can now be used to specify a Curl path.
- The new option `gptel-curl-extra-args` can be used to specify extra arguments to the Curl command used for the request. This is the global version of the backend-specific `:curl-args` slot, which specifies Curl arguments when using a specific backend. (Sketches of both Curl options follow this list.)
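A sketch of the MCP wiring described above, with a hypothetical server entry; `mcp-hub-servers` is mcp.el's configuration variable, and the server name, command and arguments below are illustrative:

```emacs-lisp
(require 'gptel-integrations)  ; gptel-mcp-connect/-disconnect are not autoloaded

;; Configure an MCP server via mcp.el (this entry is hypothetical):
(setq mcp-hub-servers
      '(("filesystem"
         . (:command "npx"
            :args ("-y" "@modelcontextprotocol/server-filesystem" "/tmp")))))

;; Then M-x gptel-mcp-connect starts servers and registers their tools
;; with gptel; M-x gptel-mcp-disconnect reverses this.
```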
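A hedged sketch of defining a preset in your configuration and applying it from elisp; the preset name, keys and values are illustrative, and the supported keys should be checked against the docstring of `gptel-make-preset`:

```emacs-lisp
;; Keys mirror gptel options (an assumption; see the docstring):
(gptel-make-preset 'proofreader
  :description "Proofread prose"            ; illustrative values throughout
  :system "You are a careful proofreader."
  :temperature 0.3)

;; Scripted use via the new macro; whether the preset name is taken
;; unquoted here is an assumption -- check gptel-with-preset's docstring.
(gptel-with-preset proofreader
  (gptel-request "Proofread the text above."))
```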
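A minimal sketch of a prompt transform. The exact calling convention of functions on this hook (their arguments and the async protocol) is an assumption to verify in the hook's docstring, so the function below accepts any arguments:

```emacs-lisp
(defun my/gptel-add-date (&rest _)
  "Prepend today's date to the outgoing request text.
Runs in the temporary buffer holding the prompt."
  (goto-char (point-min))
  (insert (format-time-string "Today is %F.\n\n")))

(add-hook 'gptel-prompt-transform-functions #'my/gptel-add-date)
```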
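Hedged examples of the two Curl options; the path and proxy arguments are illustrative:

```emacs-lisp
;; Point gptel at a specific Curl binary:
(setq gptel-use-curl "/opt/homebrew/bin/curl")

;; Extra arguments passed to every Curl invocation:
(setq gptel-curl-extra-args '("--proxy" "socks5h://127.0.0.1:9050"))
```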
Notable Bug fixes
- Fix more Org markup conversion edge cases involving nested Markdown delimiters.
What's Changed
- elpaignore: Add .github by @pabl0 in #699
- gptel-transient: Return to menu with just RET by @pabl0 in #695
- Enable Markdown formatting for "o3-mini" by removing `nosystem` capability by @Inkbottle007 in #702
- README: Tweak list of alternatives by @pabl0 in #707
- elpaignore: Oopsie daisy by @pabl0 in #708
- gptel: Ensure restoring state does not flag the buffer modified by @pabl0 in #711
- README: Clarify backend configuration and registration by @spudlyo in #715
- Fix typo in README by @DivineDominion in #717
- README: Restructure authinfo explanation by @karthink in #718
- gptel-rewrite: Diff buffers, not files by @pabl0 in #731
- gptel-rewrite: Make rewrite work on Emacs < 29 again by @pabl0 in #721
- gptel-rewrite: Ensure kill-buffer does not ask for confirmation by @pabl0 in #733
- README: Clarify installation instructions by @pabl0 in #732
- prevent invalid duplication of tool_call messages by @tcahill in #734
- fix(gptel): add missing newline after end_tool marker by @LuciusChen in #735
- Add `gemini-2.5-pro-exp-03-25` to `gptel-gemini.el` by @surenkov in #742
- gptel-rewrite: Use correct form of `with-demoted-errors` by @pabl0 in #745
- gptel-ollama: Fix docstring by @pabl0 in #752
- README: Install without submodule in Doom by @real-or-random in #758
- Allow tool use if `gptel-use-tools` was non-nil at request time by @kmontag in #755
- UTF-8 encoding issue in ChatGPT buffer by @kayomarz in #754
- README: Code to register backend for Mistral Le Chat, per #703 by @ligon in #766
- gptel-rewrite: Make other modes not clobber rewrite overlay by @pabl0 in #744
- gptel--mode-description-alist: add OCaml by @mlemerre in #771
- gptel-openai-extras: Add support for Github Copilot Chat by @kiennq in #767
- allow customize-variable to set temperature to nil by @nleve in #777
- [README] provide up-to-date Gemini setting by @nohzafk in #778
- chore: add gemini 2.5 pro model for gh copilot chat by @tianrui-wei in #779
- Fix pairing of details tags by @Arclite in #780
- docs: update xAI backend with Grok 3 model variants by @axelknock in #775
- gptel-gemini: Add support for Gemini 2.5 Pro Preview by @benthamite in #781
- gptel--gh-models: add new models by @kiennq in #789
- Fix max tokens for o4-mini by @orge-dev in #791
...
Version v0.9.8
Version 0.9.8 adds support for new Gemini, Anthropic, OpenAI, Perplexity, and DeepSeek models, introduces LLM tool use/function calling and a redesign of `gptel-menu`, and includes new customization hooks, dry-run options and refined settings, improvements to the rewrite feature, and control of LLM "reasoning" content.
Breaking changes

- `gemini-pro` has been removed from the list of Gemini models, as this model is no longer supported by the Gemini API.
- Sending an active region in Org mode will now apply Org mode-specific rules to the text, such as branching context.
- The following obsolete variables and functions have been removed:
  - `gptel-send-menu`: use `gptel-menu` instead.
  - `gptel-host`: use `gptel-make-openai` instead.
  - `gptel-playback`: use `gptel-stream` instead.
  - `gptel--debug`: use `gptel-log-level` instead.
New models and backends

- Add support for several new Gemini models including `gemini-2.0-flash`, `gemini-2.0-pro-exp` and `gemini-2.0-flash-thinking-exp`, among others.
- Add support for the Anthropic model `claude-3-7-sonnet-20250219`, including its "reasoning" output.
- Add support for OpenAI's `o1`, `o3-mini` and `gpt-4.5-preview` models.
- Add support for Perplexity. While gptel supported Perplexity in earlier releases by reusing its OpenAI support, there is now first-class support for the Perplexity API, including citations.
- Add support for DeepSeek. While gptel supported DeepSeek in earlier releases by reusing its OpenAI support, there is now first-class support for the DeepSeek API, including handling of "reasoning" output.
New features and UI changes

- `gptel-rewrite` now supports iterating on responses.
- gptel supports the ability to simulate/dry-run requests so you can see exactly what will be sent. This payload preview can now be edited in place and the request continued.
- Directories can now be added to gptel's global context. Doing so will add all files in the directory recursively.
- "Oneshot" settings: when using gptel's transient menus, request parameters, directives and tools can now be set for the next request only, in addition to globally across the Emacs session and buffer-locally. This is useful for making one-off requests with different settings.
- `gptel-mode` can now be used in all modes derived from `text-mode`.
- gptel now tries to handle LLM responses that are in mixed Org/Markdown markup correctly.
- Add `gptel-org-convert-response` to toggle the automatic conversion of (possibly) Markdown-formatted LLM responses to Org markup where appropriate.
- You can now look up registered gptel backends using the `gptel-get-backend` function. This is intended to make scripting and configuring gptel easier. `gptel-get-backend` is a generalized variable, so you can (un)set backends with `setf`. (A sketch follows this list.)
- Tool use: gptel now supports LLM tool use, or function calling. Essentially, you can equip the LLM with capabilities (such as filesystem access, web search, control of Emacs or introspection of Emacs' state, and more) that it can use to perform tasks for you. gptel runs these tools using argument values provided by the LLM. This requires specifying tools, which are elisp functions with plain-text descriptions of their arguments and results. gptel does not include any tools out of the box yet. (A sketch of a tool definition follows this list.)
- You can look up registered gptel tools using the `gptel-get-tool` function. This is intended to make scripting and configuring gptel easier. `gptel-get-tool` is a generalized variable, so you can (un)set tools with `setf`.
- New hooks for customization:
  - `gptel-prompt-filter-hook` runs in a temporary buffer containing the text to be sent, before the full query is created. It can be used for arbitrary text transformations of the source text.
  - `gptel-post-request-hook` runs after the request is sent, and (possibly) before any response is received. This is intended for preparatory/reset code.
  - `gptel-post-rewrite-hook` runs after a `gptel-rewrite` request is successfully and fully received.
- `gptel-menu` has been redesigned. It now shows a verbose description of what will be sent and where the output will go. This is intended to provide clarity on gptel's default prompting behavior, as well as the effect of the various prompt/response redirections it provides. Incompatible combinations of options are now disallowed.
- The spacing between the end of the prompt and the beginning of the response in buffers is now customizable via `gptel-response-separator`, and can be any string.
- `gptel-context-remove-all` is now an interactive command.
- gptel now handles "reasoning" content produced by LLMs. Some LLMs include in their response a "thinking" or "reasoning" section. This text improves the quality of the LLM's final output, but may not be interesting to you by itself. The new user option `gptel-include-reasoning` controls whether and how gptel displays this content.
- (Anthropic API only) Some LLM backends can cache content sent to them by gptel, so that only the newly included part of the text needs to be processed on subsequent conversation turns. This results in faster and significantly cheaper processing. The new user option `gptel-cache` can be used to specify caching preferences for prompts, the system message and/or tool definitions. This is supported only by the Anthropic API right now.
- (Org mode) Org property drawers are now stripped from the prompt text before sending queries. You can control this behavior or specify additional Org elements to ignore via `gptel-org-ignore-elements`. (For more complex pre-processing you can use `gptel-prompt-filter-hook`.) Hedged sketches of several of these options follow this list.
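A short sketch of `gptel-get-backend` as a generalized variable; the backend names are illustrative:

```emacs-lisp
;; Switch the active backend by name:
(setq gptel-backend (gptel-get-backend "ChatGPT"))

;; Unregister a backend entirely:
(setf (gptel-get-backend "Ollama") nil)
```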
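Since gptel ships no tools out of the box, here is a minimal, hedged sketch of a tool definition along the lines of the read_buffer example referenced in #632; the exact keyword set is an assumption to verify against the docstring of `gptel-make-tool`:

```emacs-lisp
(gptel-make-tool
 :name "read_buffer"
 :function (lambda (buffer)
             ;; Return the full contents of BUFFER as a string.
             (unless (buffer-live-p (get-buffer buffer))
               (error "Error: buffer %s is not live" buffer))
             (with-current-buffer buffer
               (buffer-substring-no-properties (point-min) (point-max))))
 :description "Return the contents of an Emacs buffer."
 :args (list '(:name "buffer"
               :type string
               :description "The name of the buffer to read"))
 :category "emacs")
```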
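Hedged examples of the new user options mentioned above; the values shown are illustrative, and the accepted value sets are assumptions to verify against each option's docstring:

```emacs-lisp
;; Separator between prompt and response (any string works):
(setq gptel-response-separator "\n\n")

;; Redirect "reasoning" blocks to a named buffer; other accepted values
;; (t, nil, 'ignore) are assumptions -- see the docstring:
(setq gptel-include-reasoning "*gptel-reasoning*")

;; Anthropic-only prompt caching; this symbol list is an assumption:
(setq gptel-cache '(message system tool))

;; Org elements to strip before sending:
(setq gptel-org-ignore-elements '(property-drawer))
```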
Notable Bug fixes
- Fix response mix-up when running concurrent requests in Org mode buffers.
- gptel now works around an Org fontification bug where streaming responses in Org mode buffers sometimes caused source code blocks to remain unfontified.
Pull requests
- Fix bug where dry run updates status message by @daedsidog in #498
- README: add prefix string docs for gptel-org-branching-context by @erganemic in #499
- Fix typo by @gamboz in #505
- Possible bug fix 473 by @andrewdea in #490
- Support adding files in directories to context by @benthamite in #438
- Remove gptel-extensions package by @gavinhughes in #522
- Add windows platform check for the curl file size thresh by @AustinMooreT in #519
- chore: update gptel-openai.el by @eltociear in #535
- fix(private-gpt): passing system directive by @rosenstrauch in #531
- Add gemini-2.0-flash-thinking-exp by @jlcheng in #528
- Add support for OpenAI o1 model by @tillydray in #559
- Update examples by @Lyonsclay in #569
- Fix mark deactivation timing in gptel-rewrite by @ultronozm in #577
- Add gptel-perplexity backend by @pirminj in #581
- gptel: Use substitute-command-keys in context by @cashpw in #589
- Pass nil results through to model by @psionic-k in #596
- Parameters Required by @psionic-k in #600
- Variable number of newlines before responses by @psionic-k in #599
- Updates required Org version notice from 9.6.7 to 9.7 by @grettke in #602
- gptel-context: Use current buffer as default by @LemonBreezes in #595
- Improve robustness: `gptel-org--link-standalone-p` by @tillydray in #594
- gptel: Make gptel-model variable definition dynamic by @pabl0 in #606
- gptel-transient: Improve directive/system message editing by @pabl0 in #616
- gptel-transient: (really) unmark before editing crowdsourced prompt by @pabl0 in #618
- Fix read-only behaviour in gptel--edit-directive by @pabl0 in #622
- gptel-transient: Improve parsing of crowdsourced prompts CSV file by @pabl0 in #615
- Fix type error in read_buffer tool use example by @m-n in #632
- Small documentation tweaks by @pabl0 in #625
- gptel-transient: Ensure user enters a number when prompted by @pabl0 in #637
- Update gptel-gemini.el models by @braun-steven in #639
- gptel-transient: Fix face related bug by @pabl0 in #638
- gptel-transient: Use proper ellipsis for consistent UI by @pabl0 in #640
- gptel-transient: Report what is being sent from kill-ring by @pabl0 in #642
- gptel-openai-extras: reasoning and deep-research perplexity models by @markus1189 in #664
- gptel: Prefer LLM terminology over GPT by @pabl0 in #635
- Add support for gpt-4.5-preview by @benthamite in #673
- Fix typo in ...
Version v0.9.7
Version 0.9.7 adds dynamic directives, a better rewrite interface, streaming support to the `gptel-request` API, and more flexible model/backend configuration
Breaking changes
`gptel-rewrite-menu` has been obsoleted. Use `gptel-rewrite` instead.
Backends
- Add support for OpenAI's o1-preview and o1-mini
- Add support for Anthropic's Claude 3.5 Haiku
- Add support for xAI (contributed by @WuuBoLin)
- Add support for Novita AI (contributed by @jasonhp)
Notable new features and UI changes
gptel's directives (see `gptel-directives`) can now be dynamic, and include more than the system message. You can "pre-fill" a conversation with canned user/LLM messages. Directives can now be functions that dynamically generate the system message and conversation history based on the current context. This paves the way for fully flexible task-specific templates, which the UI does not yet support in full. This design was suggested by @meain. (#375)
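A hedged sketch of a dynamic directive: a function returning the system message followed by canned user/LLM messages. The shape of the returned list (system, then alternating user and LLM messages) is an assumption to check against the documentation of `gptel-directives`:

```emacs-lisp
(add-to-list
 'gptel-directives
 `(commit-review
   . ,(lambda ()
        (list "You are a code reviewer. Be terse."     ; system message
              "Review the following diff."              ; canned user turn
              "Understood, please paste the diff."))))  ; canned LLM turn
```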
gptel's rewrite interface has been reworked. If using a streaming endpoint, the rewritten text is streamed in as a preview placed over the original. In all cases, clicking on the preview brings up a dispatch you can use to easily diff, ediff, merge, accept or reject the changes (4ae9c1b), and you can configure gptel to run one of these actions automatically. See the README for examples. This design was suggested by @meain. (#375)
`gptel-abort`, used to cancel requests in progress, now works across the board, including when not using Curl or with `gptel-rewrite` (7277c00).
The `gptel-request` API now explicitly supports streaming responses (7277c00), making it easy to write your own helpers or features with streaming support. The API also supports `gptel-abort` to stop and clean up responses.
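A minimal sketch of a streaming request; the callback protocol assumed here (a string per chunk, with non-string values signaling status) should be verified against `gptel-request`'s docstring:

```emacs-lisp
(gptel-request "Write a haiku about Emacs."
  :stream t
  :callback (lambda (response _info)
              ;; Each text chunk arrives as a string; other values
              ;; signal request status (an assumption -- see docstring).
              (when (stringp response)
                (message "chunk: %s" response))))
```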
You can now unset the system message -- different from setting it to an empty string. gptel will also automatically disable the system message when using models that don't support it (0a2c07a).
Support for including PDFs with requests to Anthropic models has been added. (These queries are cached, so you pay only 10% of the token cost of the PDF in follow-up queries.) Note that document support (PDFs etc) for Gemini models has been available since v0.9.5. (0f173ba, #459)
When defining a gptel model or backend, you can specify arbitrary parameters to be sent with each request. This includes the (many) API options across all APIs that gptel does not yet provide explicit support for (bcbbe67). This feature was suggested by @tillydray (#471).
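A hedged sketch of such a definition; the `:request-params` key and the plist-style option shown are assumptions to check against the backend-creation docstrings:

```emacs-lisp
;; Inject an extra API option (here an OpenAI-style penalty setting,
;; illustrative) into every request made through this backend:
(gptel-make-openai "OpenAI-tuned"
  :key "your-api-key"
  :models '(gpt-4o)
  :request-params '(:presence_penalty 0.6))
```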
New transient command option to easily remove all included context chunks (a844612), suggested by @metachip and @gavinhughes.
Bug fixes and other news
- Pressing `RET` on included files in the context inspector buffer now pops up the file correctly.
- API keys are stripped of whitespace before sending.
- Multiple UI, backend and prompt construction bugs have been fixed.
Pull requests
- Remove chatgpt-arcana from the list of alternatives by @CarlQLange in #431
- gptel-anthropic: Add upgraded Claude 3.5 Sonnet model by @benthamite in #436
- docs: add Novita AI in README by @jasonhp in #448
- gptel-curl: don't try converting CR-LF on Windows by @ssafar in #456
- README: Add support for xAI by @WuuBoLin in #466
- fix typo by @tillydray in #469
- fix(private-gpt): convert model name from symbol to string in request by @rosenstrauch in #470
- Update `gptel--anthropic-models` by @benthamite in #483
- Added a section telling new users how to select a backend explicitly by @blais in #467
New Contributors
- @CarlQLange made their first contribution in #431
- @jasonhp made their first contribution in #448
- @ssafar made their first contribution in #456
- @WuuBoLin made their first contribution in #466
- @tillydray made their first contribution in #469
- @rosenstrauch made their first contribution in #470
- @blais made their first contribution in #467
Full Changelog: v0.9.6...v0.9.7
Version v0.9.6
Version 0.9.6 is a bugfix release
New features and UI changes
gptel now displays more model details when switching the active LLM. This includes model descriptions, capabilities, context window sizes, input and output costs and the last time the model was updated by the provider. This feature was contributed by @benthamite.
Bug fixes
The last release included bugs that made gptel unusable with the Gemini backend. These have been fixed.
Contributors
- @benthamite made their first contribution in #417
- Add annotations next to model completion candidates by @benthamite in #420
Full Changelog: v0.9.5...v0.9.6
Version v0.9.5
Version 0.9.5 adds support for media/images, better rewrite interface, more models/backends
Breaking changes
The value of `gptel-model`, the LLM in use, is now expected to be a symbol, not a string. This change is necessary to support per-model capabilities like image support. As of this release, gptel deals with models specified as strings gracefully and issues a warning, but this will be an unhandled error in future releases.
Backends
- Add support for Deepseek (contributed by @jamorphy)
- Add support for Cerebras (#372, contributed by @bytegorilla)
- Add support for OpenAI's gpt-4o-mini
- Add support for Gemini's gemini-1.5-flash and gemini-1.5-pro
- Add support for Github Models (#386, contributed by @gs-101)
- Add support for system messages to Gemini models, as the API now supports them
New features
gptel now supports multi-modal models, i.e. models that can accept image or other binary input. Supported media files can be added to the context with `gptel-add-file`, or included as links in chat buffers. See the README for more information.
You can now specify more model metadata when defining gptel models. This includes a description string, supported capabilities and a list of supported MIME types for multi-modal input. See the documentation of any of the backend-creation functions `gptel-make-*` for details.
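A hedged sketch of model metadata in a backend definition; the capability symbols and plist keys shown are assumptions to verify against the `gptel-make-*` docstrings:

```emacs-lisp
(gptel-make-openai "OpenAI-media"
  :key "your-api-key"
  :models '((gpt-4o
             :description "Multi-modal flagship model"  ; illustrative
             :capabilities (media json url)             ; assumption
             :mime-types ("image/jpeg" "image/png"))))
```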
UI changes
gptel's "rewrite" or "refactoring" interface has been redesigned. Rewriting is now a two step process, but stays out of your way and provides better control over how changes are applied: you can evaluate the suggested changes via a diff/ediff/merge conflict + smerge interface and apply them granularly. See the README for details.
Bug fixes and other news
- Fix incompatibilities with older Org versions (< 9.7).
- Fix prompt creation bugs in Org mode.
- Fix more bugs with setting transient switches buffer-locally in `gptel-menu`.
New Contributors, Acknowledgments
- @hron made their first contribution in #345
- @munen made their first contribution in #356
- @jdormit made their first contribution in #358
- @nfedyashev made their first contribution in #369
- @bytegorilla made their first contribution in #372
- @gs-101 made their first contribution in #386
- @xgqt made their first contribution in #398
- @jaeyeom made their first contribution in #412
Thanks to @daedsidog, @bradmont, @mateialexandru, @wlauppe and @nfedyashev for discussions and feedback.
Full Changelog: v0.9.0...v0.9.5
Version 0.9.0
Version 0.9.0 adds gptel-context and support for more models/backends
Backends
- Add support for OpenAI's gpt-4o (#313, contributed by @axelknock)
- Add support for Anthropic's claude-3-5-sonnet-20240620 (#331, contributed by @erwald).
- Add support for the PrivateGPT backend (#312, contributed by @Aquan1412).
New feature
gptel can now include arbitrary regions, buffers or files with requests. This is useful as background context for queries. For example, when you want to talk about the contents of code buffers/files in your project in a chat buffer. These additional contexts are "live" and not "snapshots", i.e. they are scanned at the time of the query.
This feature is available via the `gptel-add` command or from gptel's transient menu. See the README for more details.
This feature was contributed by @daedsidog.
UI changes
- Calling `M-x gptel` now asks the user to pick an existing chat buffer or name a new one. A suitable default name is chosen if the user leaves the field blank. The prefix-arg behavior of `gptel` has been removed for now.
- New option `gptel-track-response` in non-chat buffers to control whether gptel distinguishes between the user's prompts and LLM responses. (The two are always distinguished in dedicated chat buffers.) This option can also be set from the transient menu.
Bug fixes and other news
- `gptel-org-branching-context` now requires Org 9.6.7 or later.
- Fix bugs with one-shot Markdown to Org conversion.
- Fix bugs with setting transient switches buffer-locally in `gptel-menu`.
Version 0.8.6
Version 0.8.6 is a bugfix release
NonGNU ELPA
gptel is now available on NonGNU ELPA. This means it is directly installable via `M-x package-install` without the need to add MELPA to the list of package archives.
Backends
- Add support for OpenRouter (#282)
- Add support for Gemini 1.5 (#284)
- Add support for GPT 4 Turbo (#286)
Bug fixes
Several bugs have been fixed:
- gptel's status in the header line now updates when sending queries from the transient menu. (#293)
- More Org elements are supported in the Markdown to Org converter (#296)
- gptel now supports saving and resuming chats when using Ollama (#181)
- Ollama integration is now stateless, resolving a number of bugs (#270, #279)
- Fix Ollama response parsing errors (#179)
Version 0.8.5
Version 0.8.5 adds the following features:
gptel and Org mode
Additional features are now available when using gptel in Org mode:
- By default the context for a query is the contents of the buffer up to the cursor. You can set the context to be the lineage of the current Org heading by enabling `gptel-org-branching-context`. This makes each heading at a given level a different branch of the conversation.
- Limit the conversation context to an Org heading with the command `gptel-org-set-topic`. (This is an alternative to branching context.)
- Set the current gptel configuration (backend, model, system message, temperature and max tokens) as Org properties under the current heading with the command `gptel-org-set-properties`. When these properties are present, they override the buffer-specific or global settings. (#141)
- On-the-fly conversion of responses to Org mode has been smoothed out further. For best results, ask the LLM to respond only in Markdown if you are using Org mode.
UI
- The current system prompt is now shown (in truncated form) in the header-line in gptel chat buffers (#274).
Bug fixes
Several bugs have been fixed:
- (Anthropic) Attach additional directive correctly when interacting with Anthropic Claude models (#276)
- (Anthropic) Handle edge cases when parsing partial responses (#261)
- (All backends) Fix empty responses caused by json.el parsing errors when libjansson support is not available (#251, #264)
- (Ollama) Fix parsing errors caused by the libjansson transition (#255)
- The dry-run commands now perform an actual dry-run, i.e. everything except sending the queries (#276).
- The cursor can now be moved by functions in `gptel-post-response-hook`. (#269)
Version 0.8.0
Version 0.8.0 adds the following features:
Backends
- Support for the Anthropic API and Claude 3 models
- Support for Groq
- Updated OpenAI model list.
UI and configuration
There have been many improvements to gptel's transient menu interface:
- From the menu, you can now attach an additional directive to the next query on top of the system message. This is useful for specific instructions that change with each query, like when refactoring code and regenerating or refining responses.
- Some introspection commands have been added to the menu: you can see exactly what will be sent (as a Lisp or JSON object) with the next query. To enable these commands in the menu, turn on `gptel-log-level`, which see.
- Various aspects of the menu have been tuned to be more efficient and cause less friction.
- Model and query parameters (including the system message) are now global variables by default. This is to make it easier to work at a project level without having to set them in each buffer. You can continue to set them at a buffer-local level (the previous default) using a switch in the menu.
Other
- gptel now uses libjansson if Emacs is compiled with support for it. This makes JSON parsing about 3x faster. (LLM responses are typically not large, so there is only a modest increase in parsing speed and Emacs' responsiveness when using gptel.)
- Org mode output when streaming responses is much improved, with most edge cases resolved.
Deprecation notice
- The dedicated "refactor" or "rewrite" menus are deprecated and will be removed in the next major release. Note that all of their functionality (including ediff-ing) is now available from the main gptel menu.
Version 0.7.0
Version 0.7.0 adds the following features:
Backends
- Support for Perplexity.ai (contributed by @dbactual)
- Updated OpenAI model list.
- Better support for Gemini
- Support for arbitrary curl arguments (for HTTP requests) that gptel does not provide customization options for (contributed by @r0man)
UI and configuration
- Response regeneration and history: You can now regenerate a response at point and cycle through past versions of the response at point. These are accessible from the transient menu when the point is over a response.
- Customizable display-buffer action to choose where gptel chat windows are placed when chat buffers are created with `M-x gptel`.