When using the Friendli Python SDK, you must provide a Friendli Token for authentication and authorization. A Friendli Token serves as an alternative to signing in with an email and password. You can generate a new Friendli Token in Friendli Suite, on your "Personal settings" page, by following the steps below.
- Go to the Friendli Suite and sign in with your account.
- Click the profile icon at the top-right corner of the page.
- Click "Personal settings" menu.
- Go to the "Tokens" tab on the navigation bar.
- Create a new Friendli Token by clicking the "Create token" button.
- Copy the token and save it in a safe place. You will not be able to see this token again once the page is refreshed.
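The examples throughout this document read the token from the `FRIENDLI_TOKEN` environment variable, which is also the variable used by the SDK's security scheme. A minimal way to make your saved token available, assuming a POSIX shell (adapt for your platform):

```bash
# Replace the placeholder with the token you copied from Friendli Suite.
export FRIENDLI_TOKEN="<your-friendli-token>"
```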
> [!NOTE]
> **Python version upgrade policy**
>
> Once a Python version reaches its official end-of-life date, a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum Python version supported in the SDK will be updated.
The SDK can be installed with either the pip or poetry package manager.
PIP is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install friendli
```
Poetry is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add friendli
```
You can use this SDK in a Python shell with uv and the `uvx` command that comes with it, like so:
```bash
uvx --from friendli python
```
It's also possible to write a standalone Python script without needing to set up a whole project like so:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "friendli",
# ]
# ///

from friendli import SyncFriendli

sdk = SyncFriendli(
    # SDK arguments
)

# Rest of script here...
```
Once that is saved to a file, you can run it with `uv run script.py`, where `script.py` can be replaced with the actual file name.
Given a list of messages forming a conversation, the model generates a response.
```python
# Synchronous Example
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.serverless.chat.complete(
        messages=[
            {
                "content": "You are a helpful assistant.",
                "role": "system",
            },
            {
                "content": "Hello!",
                "role": "user",
            },
        ],
        model="meta-llama-3.1-8b-instruct",
        max_tokens=200,
        stream=False,
    )

    # Handle response
    print(res)
```
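The call returns the full completion object. Since the chat API follows the familiar OpenAI-style schema, the assistant's text can typically be read from the first choice. This is a hedged sketch; verify the exact attribute names against the SDK's generated response models:

```python
# Assumes an OpenAI-style response shape (choices -> message -> content);
# check the SDK's response model for the actual field names.
if res is not None and res.choices:
    print(res.choices[0].message.content)
```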
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
import os

from friendli import AsyncFriendli


async def main():
    async with AsyncFriendli(
        token=os.getenv("FRIENDLI_TOKEN", ""),
    ) as friendli:
        res = await friendli.serverless.chat.complete(
            messages=[
                {
                    "content": "You are a helpful assistant.",
                    "role": "system",
                },
                {
                    "content": "Hello!",
                    "role": "user",
                },
            ],
            model="meta-llama-3.1-8b-instruct",
            max_tokens=200,
            stream=False,
        )

        # Handle response
        print(res)


asyncio.run(main())
```
Given a list of messages forming a conversation, the model generates a response. Additionally, the model can utilize built-in tools for tool calls, enhancing its capability to provide more comprehensive and actionable responses.
```python
# Synchronous Example
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.serverless.tool_assisted_chat.complete(
        messages=[
            {
                "content": "What is 3 + 6?",
                "role": "user",
            },
        ],
        model="meta-llama-3.1-8b-instruct",
        max_tokens=200,
        stream=False,
        tools=[
            {
                "type": "math:calculator",
            },
        ],
    )

    # Handle response
    print(res)
```
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
import os

from friendli import AsyncFriendli


async def main():
    async with AsyncFriendli(
        token=os.getenv("FRIENDLI_TOKEN", ""),
    ) as friendli:
        res = await friendli.serverless.tool_assisted_chat.complete(
            messages=[
                {
                    "content": "What is 3 + 6?",
                    "role": "user",
                },
            ],
            model="meta-llama-3.1-8b-instruct",
            max_tokens=200,
            stream=False,
            tools=[
                {
                    "type": "math:calculator",
                },
            ],
        )

        # Handle response
        print(res)


asyncio.run(main())
```
This SDK supports the following security scheme globally:
Name | Type | Scheme | Environment Variable
---|---|---|---
`token` | http | HTTP Bearer | `FRIENDLI_TOKEN`
To authenticate with the API, the `token` parameter must be set when initializing the SDK client instance. For example:
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.dedicated.chat.complete(
        messages=[
            {
                "content": "You are a helpful assistant.",
                "role": "system",
            },
            {
                "content": "Hello!",
                "role": "user",
            },
        ],
        max_tokens=200,
        model="(adapter-route)",
        stream=False,
    )

    # Handle response
    print(res)
```
Available methods (grouped by resource; some method names repeat because they exist in more than one of the container, dedicated, and serverless namespaces):
- generate - Image generations
- tokenization - Tokenization
- detokenization - Detokenization
- create_dataset - Create a new dataset
- list_datasets - List datasets
- get_dataset - Get dataset info
- delete_dataset - Delete dataset
- create_version - Create a version
- list_versions - List versions
- get_version - Get version info
- delete_version - Delete a version
- create_split - Create a split
- list_splits - List splits
- get_split - Get split info
- delete_split - Delete split
- add_samples - Add samples
- list_samples - List samples
- update_samples - Update samples
- delete_samples - Delete samples
- transcribe - Audio transcriptions
- wandb_artifact_create - Create endpoint from W&B artifact
- create - Create a new endpoint
- list - List all endpoints
- get_spec - Get endpoint specification
- update - Update endpoint spec
- delete - Delete endpoint
- get_version_history - Get endpoint version history
- get_status - Get endpoint status
- sleep - Sleep endpoint
- wake - Wake endpoint
- terminate - Terminate endpoint
- restart - Restart endpoint
- generate - Image generations
- tokenization - Tokenization
- detokenization - Detokenization
- init_upload - Initiate file upload
- complete_upload - Complete file upload
- get_info - Get file info
- get_download_url - Get file download URL
- retrieve - Retrieve contexts from chosen knowledge base
- list - Retrieve serverless models
- tokenization - Tokenization
- detokenization - Detokenization
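As a quick illustration of calling one of the listed methods (a hypothetical sketch: the exact namespace for each operation depends on which product it belongs to, so check the per-method SDK docs for the real path):

```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    # Hypothetical path for the "list - Retrieve serverless models" method
    # above; the actual namespace may differ.
    models = friendli.serverless.model.list()
    print(models)
```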
Server-sent events are used to stream content from certain operations. These operations will expose the stream as a Generator that can be consumed using a simple `for` loop. The loop will terminate when the server no longer has any events to send and closes the underlying connection.

The stream is also a Context Manager: it can be used with the `with` statement, and will close the underlying connection when the context is exited.
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.dedicated.chat.stream(
        messages=[
            {
                "content": "You are a helpful assistant.",
                "role": "system",
            },
            {
                "content": "Hello!",
                "role": "user",
            },
        ],
        model="(endpoint-id)",
        max_tokens=200,
        stream=True,
    )

    with res as event_stream:
        for event in event_stream:
            # handle event
            print(event, flush=True)
```
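Each event is a parsed chunk of the streamed completion. Assuming OpenAI-style chunk fields (a hedged sketch; the exact event model is documented in the SDK docs), the incremental text could be stitched together like so:

```python
# Hypothetical field names: each chunk is assumed to carry the next text
# fragment at choices[0].delta.content, as in OpenAI-style streaming.
full_text = ""
with res as event_stream:
    for event in event_stream:
        delta = event.choices[0].delta.content
        if delta:
            full_text += delta
print(full_text)
```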
Certain SDK methods accept file objects as part of a request body or multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
> [!TIP]
> For endpoints that handle file uploads, bytes arrays can also be used. However, using streams is recommended for large files.
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.dedicated.audio.transcribe(
        file={
            "file_name": "example.file",
            "content": open("example.file", "rb"),
        },
        model="(endpoint-id)",
    )

    # Handle response
    print(res)
```
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:
```python
import os

from friendli import SyncFriendli
from friendli.utils import BackoffStrategy, RetryConfig

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.dedicated.chat.complete(
        messages=[
            {
                "content": "You are a helpful assistant.",
                "role": "system",
            },
            {
                "content": "Hello!",
                "role": "user",
            },
        ],
        max_tokens=200,
        model="(adapter-route)",
        stream=False,
        # BackoffStrategy arguments (assumed, in milliseconds): initial
        # interval, max interval, exponent, max elapsed time; the final flag
        # controls whether connection errors are retried.
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    )

    # Handle response
    print(res)
```
If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:
```python
import os

from friendli import SyncFriendli
from friendli.utils import BackoffStrategy, RetryConfig

with SyncFriendli(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.dedicated.chat.complete(
        messages=[
            {
                "content": "You are a helpful assistant.",
                "role": "system",
            },
            {
                "content": "Hello!",
                "role": "user",
            },
        ],
        max_tokens=200,
        model="(adapter-route)",
        stream=False,
    )

    # Handle response
    print(res)
```
Handling errors in this SDK should largely match your expectations. All operations return a response object or raise an exception.

By default, an API error will raise a `models.SDKError` exception, which has the following properties:
Property | Type | Description
---|---|---
`.status_code` | `int` | The HTTP status code
`.message` | `str` | The error message
`.raw_response` | `httpx.Response` | The raw HTTP response
`.body` | `str` | The response content
When custom error responses are specified for an operation, the SDK may also raise their associated exceptions. You can refer to the respective Errors tables in the SDK docs for more details on possible exception types for each operation. For example, the `delete` method may raise the following exceptions:
Error Type | Status Code | Content Type
---|---|---
`models.HTTPValidationError` | 422 | application/json
`models.SDKError` | 4XX, 5XX | \*/\*
```python
import os

from friendli import SyncFriendli, models

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = None
    try:
        res = friendli.dedicated.endpoint.delete(endpoint_id="<id>")

        # Handle response
        print(res)

    except models.HTTPValidationError as e:
        # handle e.data: models.HTTPValidationErrorData
        raise e
    except models.SDKError as e:
        # handle exception
        raise e
```
The default server can be overridden globally by passing a URL to the `server_url: str` optional parameter when initializing the SDK client instance. For example:
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    server_url="https://api.friendli.ai",
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.container.chat.complete(
        messages=[
            {
                "content": "You are a helpful assistant.",
                "role": "system",
            },
            {
                "content": "Hello!",
                "role": "user",
            },
        ],
        max_tokens=200,
        model="(adapter-route)",
        stream=False,
    )

    # Handle response
    print(res)
```
The server URL can also be overridden on a per-operation basis, provided a server list was specified for the operation. For example:
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.container.chat.complete(
        messages=[
            {
                "content": "You are a helpful assistant.",
                "role": "system",
            },
            {
                "content": "Hello!",
                "role": "user",
            },
        ],
        max_tokens=200,
        model="(adapter-route)",
        stream=False,
        server_url="http://localhost:8000",
    )

    # Handle response
    print(res)
```
The Python SDK makes API calls using the httpx HTTP library. In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance.
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of `HttpClient` or `AsyncHttpClient` respectively, which are Protocols ensuring that the client has the necessary methods to make API calls. This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can just pass an instance of `httpx.Client` or `httpx.AsyncClient` directly.
For example, you could specify a header for every request that this SDK makes as follows:
```python
import httpx

from friendli import SyncFriendli

http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = SyncFriendli(client=http_client)
```
or you could wrap the client with your own custom logic:
```python
from typing import Any, Optional, Union

import httpx

from friendli import AsyncFriendli
from friendli.httpclient import AsyncHttpClient


class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        request.headers["Client-Level-Header"] = "added by client"

        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )


s = AsyncFriendli(async_client=CustomClient(httpx.AsyncClient()))
```
The `SyncFriendli` class implements the context manager protocol and registers a finalizer function to close the underlying sync and async HTTPX clients it uses under the hood. This will close HTTP connections, release memory, and free up other resources held by the SDK. In short-lived Python programs and notebooks that make a few SDK method calls, resource management may not be a concern. However, in longer-lived programs, it is beneficial to create a single SDK instance via a context manager and reuse it across the application.
```python
from friendli import SyncFriendli, AsyncFriendli
import os


def main():
    with SyncFriendli(
        token=os.getenv("FRIENDLI_TOKEN", ""),
    ) as friendli:
        # Rest of application here...


# Or when using async:

async def amain():
    async with AsyncFriendli(
        token=os.getenv("FRIENDLI_TOKEN", ""),
    ) as friendli:
        # Rest of application here...
```
You can set up your SDK to emit debug logs for SDK requests and responses.
You can pass your own logger class directly into your SDK.
```python
import logging

from friendli import SyncFriendli

logging.basicConfig(level=logging.DEBUG)
s = SyncFriendli(debug_logger=logging.getLogger("friendli"))
```
You can also enable a default debug logger by setting the environment variable `FRIENDLI_DEBUG` to true.
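For example, in a POSIX shell:

```bash
# Enable the SDK's built-in debug logger for subsequent runs.
export FRIENDLI_DEBUG=true
```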
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.