A simple and easy-to-use Python client for the OpenRouter API that provides access to various AI models.
pip install ai-python-lab
from ai_python_lab import OpenRouterClient
# Initialize client
client = OpenRouterClient()
# Simple chat
response = client.simple_chat("What is the meaning of life?")
print(response)
# Chat with conversation history
messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help you today?"},
    {"role": "user", "content": "Can you explain quantum computing?"}
]
response = client.chat(messages)
print(response)
# Use a specific model
response = client.simple_chat(
    "Explain machine learning in simple terms",
    model="anthropic/claude-3-haiku"
)
# Set default model for the client
client = OpenRouterClient(default_model="openai/gpt-4")
For simple one-off requests:
from ai_python_lab import quick_chat, ask_ai
# Quick chat function
response = quick_chat("Tell me a joke")
print(response)
# Even simpler alias
response = ask_ai("What's the weather like?")
print(response)
from ai_python_lab import OpenRouterClient
from ai_python_lab.exceptions import AuthenticationError, APIError, RateLimitError
try:
    client = OpenRouterClient()
    response = client.simple_chat("Hello")
except AuthenticationError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except APIError as e:
    print(f"API error: {e}")
# Get list of available models
models = client.get_models()
for model in models:
    print(f"Model: {model['id']}")
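The structure of each model dictionary beyond the 'id' key is not documented above. Assuming only that key, a small helper (not part of the library, shown with illustrative data) can filter the list by provider prefix:

```python
def filter_models(models, provider):
    """Keep only models whose 'id' starts with the given provider prefix."""
    prefix = provider.rstrip("/") + "/"
    return [m for m in models if m["id"].startswith(prefix)]

# Illustrative data in the shape returned by get_models():
sample = [
    {"id": "anthropic/claude-3-haiku"},
    {"id": "openai/gpt-3.5-turbo"},
    {"id": "openai/gpt-4"},
]
openai_models = filter_models(sample, "openai")
# openai_models == [{"id": "openai/gpt-3.5-turbo"}, {"id": "openai/gpt-4"}]
```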
Main client class for interacting with OpenRouter API.
chat(messages, model=None, **kwargs)
- Send chat completion request
simple_chat(message, model=None, **kwargs)
- Send simple message
get_models()
- Get available models
messages: List of message dictionaries with 'role' and 'content'
model: Model to use (optional, uses default if not specified)
temperature: Sampling temperature (0.0 to 2.0)
max_tokens: Maximum tokens to generate
top_p: Nucleus sampling parameter
frequency_penalty: Frequency penalty (-2.0 to 2.0)
presence_penalty: Presence penalty (-2.0 to 2.0)
stop: Stop sequences
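The numeric bounds listed above can be checked before sending a request. The validator below is a sketch, not part of the library; it covers only the documented ranges for temperature and the two penalties:

```python
def validate_params(temperature=None, frequency_penalty=None, presence_penalty=None):
    """Raise ValueError if a sampling parameter is outside its documented range."""
    if temperature is not None and not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    for name, value in (("frequency_penalty", frequency_penalty),
                        ("presence_penalty", presence_penalty)):
        if value is not None and not -2.0 <= value <= 2.0:
            raise ValueError(f"{name} must be between -2.0 and 2.0")

validate_params(temperature=0.7, frequency_penalty=0.5)  # passes silently
# validate_params(temperature=3.0) would raise ValueError
```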
quick_chat(message, api_key=None, model="openrouter/cypher-alpha:free", **kwargs)
ask_ai(message, **kwargs)
- Alias for quick_chat
OpenRouterError
- Base exception
APIError
- API-related errors
AuthenticationError
- Authentication failures
RateLimitError
- Rate limit exceeded
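A common pattern is retrying with exponential backoff when RateLimitError is raised. The helper below is a sketch, not part of the library; it is written generically over the exception type, and the client call is shown commented out because it requires an API key:

```python
import time

def with_retries(call, exc_type, attempts=3, base_delay=1.0):
    """Call `call()`, retrying on `exc_type` and doubling the delay each attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except exc_type:
            if attempt == attempts - 1:
                raise  # out of attempts; let the caller handle it
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical usage with the client:
# response = with_retries(lambda: client.simple_chat("Hello"), RateLimitError)
```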
from ai_python_lab import OpenRouterClient
client = OpenRouterClient()
while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit']:
        break
    response = client.simple_chat(user_input)
    print(f"AI: {response}")
from ai_python_lab import OpenRouterClient
client = OpenRouterClient()
conversation = []
while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit']:
        break
    # Add user message to conversation
    conversation.append({"role": "user", "content": user_input})
    # Get AI response
    response = client.chat(conversation)
    print(f"AI: {response}")
    # Add AI response to conversation
    conversation.append({"role": "assistant", "content": response})
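Because every turn appends to the list, the prompt grows without bound. A simple trimming helper (hypothetical, not part of the library) keeps only the most recent messages; a production version would count tokens rather than messages:

```python
def trim_history(conversation, max_messages=10):
    """Return at most the last `max_messages` entries of the conversation."""
    return conversation[-max_messages:]

# Inside the loop above, before calling client.chat:
# conversation = trim_history(conversation, max_messages=10)
```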
from ai_python_lab import OpenRouterClient
client = OpenRouterClient()
# Try different models
models = [
    "openrouter/cypher-alpha:free",
    "anthropic/claude-3-haiku",
    "openai/gpt-3.5-turbo"
]
question = "What is artificial intelligence?"
for model in models:
    try:
        response = client.simple_chat(question, model=model)
        print(f"\n{model}:")
        print(response)
    except Exception as e:
        print(f"Error with {model}: {e}")
- Python 3.7+
- openai >= 1.0.0
- requests >= 2.25.1
MIT License
Contributions are welcome! Please feel free to submit a Pull Request.
If you encounter any issues or have questions, please open an issue on GitHub.