feat: Switch chat model to use llm library by tomdyson · Pull Request #11 · tomdyson/microllama · GitHub

feat: Switch chat model to use llm library #11

Open: tomdyson wants to merge 1 commit into main.

Conversation

tomdyson (Owner)
  • Adds llm and llm-openai as dependencies.
  • Refactors answer() and streaming_answer() to use llm.get_model() and model.chat() instead of openai.ChatCompletion.
  • Updates the README, Dockerfile, and deploy_instructions() to reflect the new dependencies, the env var configuration (MODEL, API keys), and the continued need for OPENAI_API_KEY for embeddings. A sketch of the non-streaming call pattern follows this list.
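A minimal sketch of how the non-streaming path could look after this change, assuming the model name is read from the MODEL environment variable and using the llm library's documented Python calls (llm.get_model() and model.prompt()); the function signature, prompt wiring, and default model name below are illustrative and not copied from the PR.

```python
import os

import llm

# Assumed convention per the PR description: model name comes from the MODEL
# env var. The default value here is a placeholder, not the PR's choice.
MODEL = os.environ.get("MODEL", "gpt-4o-mini")


def answer(question: str, context: str) -> str:
    """Sketch of a non-streaming answer() built on the llm library."""
    model = llm.get_model(MODEL)
    # model.prompt() returns a Response; .text() collects the full completion.
    response = model.prompt(
        f"Context:\n{context}\n\nQuestion: {question}",
        system="Answer the question using only the supplied context.",
    )
    return response.text()
```

The OPENAI_API_KEY env var is still needed for the embeddings step, as noted above; the llm library also picks it up for OpenAI-backed chat models.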

tomdyson requested a review from Copilot on April 21, 2025 at 09:18.
Copilot AI left a comment

Pull Request Overview

This PR refactors the chat answer functions to use the new llm library and updates relevant dependencies and documentation to reflect the changes.

  • Replaces OpenAI ChatCompletion with llm.get_model() and model.chat() in answer() and streaming_answer(); a sketch of the streaming side appears after this list.
  • Adds llm and llm-openai dependencies and updates deployment instructions and the README accordingly.
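For the streaming side, a rough sketch under the same assumptions as the earlier one: the legacy openai.ChatCompletion streaming loop is replaced by iterating over the llm Response object, which yields text chunks as the model produces them. The generator signature and prompt construction are hypothetical, not taken from the PR diff.

```python
import os

import llm

MODEL = os.environ.get("MODEL", "gpt-4o-mini")  # assumed env var per the PR description


def streaming_answer(question: str, context: str):
    """Sketch of a streaming generator built on the llm library."""
    model = llm.get_model(MODEL)
    response = model.prompt(
        f"Context:\n{context}\n\nQuestion: {question}",
        system="Answer the question using only the supplied context.",
    )
    # Iterating over a Response streams text chunks as they arrive,
    # replacing the old `for chunk in openai.ChatCompletion.create(..., stream=True)` loop.
    for chunk in response:
        yield chunk
```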

Reviewed Changes

Copilot reviewed 3 out of 4 changed files in this pull request and generated no comments.

File                      Description
pyproject.toml            Added new dependencies ("llm" and "llm-openai") to support chat models.
microllama/__init__.py    Refactored answer functions to leverage the llm library; removed openai.ChatCompletion.
README.md                 Updated API key instructions and deployment notes to reflect the new changes.

Files not reviewed (1)
  • microllama/Dockerfile: Language not supported
Comments suppressed due to low confidence (1)

microllama/__init__.py:11

  • The code uses inspect.cleandoc() in answer() and streaming_answer() but there's no import for the 'inspect' module. Please add 'import inspect' near the other import statements.
from typing import Optional, Union
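The suggested fix is a single import next to the existing ones; a minimal sketch of the relevant lines at the top of microllama/__init__.py (surrounding imports omitted):

```python
import inspect  # provides inspect.cleandoc(), used in answer() and streaming_answer()
from typing import Optional, Union
```

inspect.cleandoc() strips the common leading whitespace from indented triple-quoted strings, which is presumably why the refactored prompt-building code calls it.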
