
WorkflowAI is an open-source platform where product and engineering teams 
collaborate to build and iterate on AI features.



Demo: build an AI feature in 1 minute

Demo

Key Features

  • Faster Time to Market: Build production-ready AI features in minutes through a web-app – no coding required.

  • Interactive Playground: Test and compare 80+ leading AI models side-by-side in our visual playground. See the difference in responses, costs, and latency. Try it now.

interactive-playground.mp4
  • Model-agnostic: Works with all major AI models, including OpenAI, Anthropic (Claude), Google (Gemini), Mistral, DeepSeek, and xAI (Grok), with a unified interface that makes switching between providers seamless. View all 80+ supported models.

Model-agnostic

  • Open-source and flexible deployment: WorkflowAI is fully open-source with flexible deployment options. Run it self-hosted on your own infrastructure for maximum data control, or use the managed WorkflowAI Cloud service for hassle-free updates and automatic scaling.

  • Observability integrated: Built-in monitoring and logging capabilities that provide insights into your AI workflows, making debugging and optimization straightforward. Learn more about observability features.

observability.mp4
  • Cost tracking: Automatically calculates and tracks the cost of each AI model run, providing transparency and helping you manage your AI budget effectively. Learn more about cost tracking.

cost-tracking

  • Structured output: WorkflowAI ensures your AI responses always match your defined structure, simplifying integrations, reducing parsing errors, and making your data reliable and ready for use. Learn more about structured input and output.

structured-output

  • Easy integration: SDKs for Python and TypeScript, plus a REST API. View code examples here.
code.mp4
  • Instant Prompt Updates: Tired of creating tickets just to tweak a prompt? Update prompts and models with a single click - no code changes or engineering work required. Go from feedback to fix in seconds.
deployments.mp4
  • Automatic Provider Failover: OpenAI experiences 40+ minutes of downtime per month. With WorkflowAI, traffic automatically reroutes to backup providers (like Azure OpenAI for OpenAI, or Amazon Bedrock for Anthropic) during outages - no configuration needed and at no extra cost. Your users won't even notice the switch.

provider-failover

  • Streaming supported: Enables real-time streaming of AI responses for low latency applications, with immediate validation of partial outputs. Learn more about streaming capabilities.
streaming.mp4
  • Hosted tools: Comes with powerful hosted tools like web search and web browsing capabilities, allowing your agents to access real-time information from the internet. These tools enable your AI applications to retrieve up-to-date data, research topics, and interact with web content without requiring complex integrations. Learn more about hosted tools.
hosted-tools.mp4
  • Multimodality support: Build agents that can handle multiple modalities, such as images, PDFs, documents, and audio. Try it here.
multimodality.mp4
  • Developer-Friendly: Need more control? Seamlessly extend functionality with our Python SDK when you need custom logic.
import workflowai
from pydantic import BaseModel

class MeetingInput(BaseModel):
    meeting_transcript: str

class MeetingOutput(BaseModel):
    summary: str
    key_items: list[str]
    action_items: list[str]

@workflowai.agent()
async def extract_meeting_info(meeting_input: MeetingInput) -> MeetingOutput:
    # The body stays empty: WorkflowAI runs the agent remotely and returns
    # a validated MeetingOutput.
    ...
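The structured-output guarantee means a model reply can be loaded straight into the typed output above. A minimal stdlib-only sketch of that idea, using a dataclass in place of the pydantic model purely for illustration (the JSON reply shown is hypothetical):

```python
import json
from dataclasses import dataclass


@dataclass
class MeetingOutput:
    summary: str
    key_items: list[str]
    action_items: list[str]


# Hypothetical raw JSON reply, already shaped to the output schema
raw = '{"summary": "Q3 planning sync", "key_items": ["budget approved"], "action_items": ["send recap"]}'

# Because the structure is guaranteed, parsing is a direct constructor call
output = MeetingOutput(**json.loads(raw))
print(output.summary)  # Q3 planning sync
```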

Deploy WorkflowAI

WorkflowAI Cloud

Fully managed solution with zero infrastructure setup required. Pay exactly what you'd pay the model providers — billed per token, with no minimums and no per-seat fees. No markups. We make our margin from provider discounts, not by charging you extra. Enterprise-ready with SOC2 compliance and high-availability infrastructure. We maintain strict data privacy - your data is never used for training.
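Per-token billing reduces to simple arithmetic. A hedged sketch with illustrative prices (not actual WorkflowAI or provider rates):

```python
# Hypothetical per-million-token prices in USD, for illustration only
PRICE_PER_MILLION = {"input": 2.50, "output": 10.00}


def run_cost(input_tokens: int, output_tokens: int) -> float:
    # cost = tokens / 1_000_000 * price_per_million_tokens
    cost = (input_tokens / 1_000_000) * PRICE_PER_MILLION["input"]
    cost += (output_tokens / 1_000_000) * PRICE_PER_MILLION["output"]
    return round(cost, 6)


print(run_cost(12_000, 800))  # 0.038
```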

Self-hosted

Quick start

The Docker Compose file is provided as a quick way to spin up a local instance of WorkflowAI. It is configured to be self-contained and usable from the start.

# Create a base environment file that will be used by the docker compose
# You should likely update the .env file to include some provider keys, see Configuring Provider keys below
cp .env.sample .env
# Build the client and api docker image
# By default the docker compose builds development images, see the `target` keys
docker-compose build
# [Optional] Start the dependencies in the background, this way we can shut down the app while
# keeping the dependencies running
docker-compose up -d clickhouse minio redis mongo
# Start the docker images
docker-compose up
# The WorkflowAI api is itself a WorkflowAI user, since all the agents
# the api relies on are hosted in WorkflowAI. You therefore need to create
# a WorkflowAI api key:
# open http://localhost:3000/organization/settings/api-keys, create an api key,
# then update WORKFLOWAI_API_KEY in your .env file
open http://localhost:3000/organization/settings/api-keys
# Then kill the containers (ctrl-c) and restart them
docker-compose up
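Once an api key exists, calls to the local instance authenticate with it. A hypothetical stdlib sketch of building such a request; the bearer-token header and the `/v1/models` path are assumptions, so check the API documentation for the exact scheme and endpoints:

```python
import os
import urllib.request


def build_request(path: str) -> urllib.request.Request:
    # Reads the key created in the quick start; the fallback is a placeholder
    api_key = os.environ.get("WORKFLOWAI_API_KEY", "wai-placeholder")
    return urllib.request.Request(
        f"http://localhost:3000{path}",
        # Assumed auth scheme: bearer token in the Authorization header
        headers={"Authorization": f"Bearer {api_key}"},
    )


req = build_request("/v1/models")  # hypothetical endpoint path
print(req.full_url)  # http://localhost:3000/v1/models
```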

Although the setup is configured for local development via hot reloads and volumes, Docker introduces significant latencies for development. Detailed setup instructions for both the client and the api are provided in their respective READMEs.

Configuring Provider keys

WorkflowAI connects to a variety of providers (see the Provider enum). There are two ways to configure providers:

  • Globally, using environment variables. The provider environment sample provides information on requirements for each provider.

  • Per tenant, through the UI, by navigating to ../organization/settings/providers

Several features of the website rely on providers being configured, either globally or for the tenant that is used internally. For example, at the time of writing, the agent that allows building agents with natural language uses Claude 3.7, so either Anthropic or Bedrock should be configured. All the agents that WorkflowAI uses are located in the agents directory.

Additional MinIO setup

For now, we rely on public read access to the storage in the frontend. The URLs are not discoverable, though, so this should be acceptable until we implement temporary leases for files. On MinIO, public read access can be enabled with the following commands:

# Run sh inside the running minio container
docker-compose exec minio sh
# Create an alias for the local MinIO server
mc alias set myminio http://minio:9000 minio miniosecret
# Allow anonymous downloads from the bucket
mc anonymous set download myminio/workflowai-task-runs

Structure

API

The api directory contains the Python backend for WorkflowAI. It is structured as a FastAPI server and a TaskIQ-based worker.

Client

The client is a NextJS app that serves as the web frontend.

Dependencies

  • MongoDB: we use MongoDB to store all the internal data.
  • ClickHouse: ClickHouse is used to store the run data. We first stored the run data in Mongo, but storage costs and query durations quickly got out of hand.
  • Redis: We use Redis as a message broker for TaskIQ. TaskIQ supports a number of different message brokers.
  • MinIO: MinIO is used to store files, but any S3-compatible storage will do. We also have a plugin for Azure Blob Storage. The selected storage depends on the WORKFLOWAI_STORAGE_CONNECTION_STRING env variable: a value starting with s3:// will result in the S3 storage being used.
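The storage-selection rule (an s3:// connection string selects the S3 backend) amounts to a scheme dispatch. A sketch under stated assumptions; the azblob:// scheme here is a guess for the Azure Blob plugin, not a documented value:

```python
from urllib.parse import urlparse


def pick_storage_backend(connection_string: str) -> str:
    # Mirrors the documented rule: an s3:// value selects S3-compatible
    # storage (e.g. MinIO). The azblob scheme is a hypothetical name
    # for the Azure Blob Storage plugin.
    scheme = urlparse(connection_string).scheme
    if scheme == "s3":
        return "s3"
    if scheme == "azblob":
        return "azure-blob"
    raise ValueError(f"Unsupported storage scheme: {scheme!r}")


print(pick_storage_backend("s3://minio:miniosecret@minio:9000/workflowai-task-runs"))  # s3
```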

Setting up providers

WorkflowAI supports a variety of LLM providers (OpenAI, Anthropic, Amazon Bedrock, Azure OpenAI, Grok, Gemini, FireworksAI, ...). View all supported providers here.

Each provider has a different set of credentials and configuration. Providers that have the required environment variables are loaded by default (see the sample env for the available variables). Providers can also be configured per tenant through the UI.
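The load-a-provider-when-its-env-vars-are-present behavior might look like the following sketch; the variable names are illustrative guesses, and the real ones live in the sample env:

```python
# Illustrative mapping from provider to the env vars it requires;
# the actual names are assumptions, see the provider environment sample.
REQUIRED_ENV = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "azure_openai": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"],
}


def enabled_providers(env: dict[str, str]) -> list[str]:
    # A provider is loaded only when every variable it requires is set
    return [name for name, keys in REQUIRED_ENV.items() if all(k in env for k in keys)]


print(enabled_providers({"OPENAI_API_KEY": "sk-test"}))  # ['openai']
```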

Support

To find answers to your questions, please refer to the Documentation, ask a question in the Q&A section of our GitHub Discussions or join our Discord.

License

WorkflowAI is licensed under the Apache 2.0 License.
