open source contribution done in empire ui #81
Hackathon Track: EmpireUI – Build AI-Focused Components & Contribute to Open Source
Objective
In this track, you're not just building — you're contributing. Your goal is to create original, AI-focused
UI components and contribute them directly to EmpireUI, an open-source, AI-powered component
library built for modern web apps.
We’re not asking you to use EmpireUI in your project — we want you to expand it.
This is a call to all frontend hackers, AI builders, and open-source lovers to push the boundaries of
what AI components can look like — and have your work become part of something bigger.
What You Need to Do
Build ONE or more AI-focused UI components.
These components should use or interact with AI in a meaningful way (examples below).
Submit a well-documented pull request (PR) to the EmpireUI GitHub repository.
Ensure your code follows our conventions, includes documentation, and is visually polished.
Don’t use existing EmpireUI components.
Your job is to create new AI-focused components, not consume existing ones.
What Counts as an AI-Focused Component?
We’re looking for UI components that either use AI under the hood or are meant to be used in AI-based workflows or apps. Some examples include:
Speech-to-Text Component
→ Wraps Whisper/OpenAI/Google ASR to transcribe voice input in real-time.
Chat Interface Component
→ Styled chat UI that connects to Langchain/OpenAI and handles state, memory, and loading indicators.
AI Image Generator Card
→ UI that takes prompts, lets users tweak parameters (like size, style), and displays generated images
(e.g., via DALL·E, Replicate, etc.).
AI-Powered Summarizer UI
→ A document dropzone with a summary output using GPT-4 or Claude.
Smart Data Table
→ Allows querying large datasets with natural language using an LLM backend.
Prompt Editor Component
→ A visual tool for editing, testing, and saving prompts (like a mini PromptFlow).
Voice Command Button
→ A mic button that listens and triggers actions using voice commands.
You can integrate APIs like OpenAI, Hugging Face, Replicate, ElevenLabs, Whisper, Langchain, or any
AI SDK.
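As a concrete starting point, a chat component like the one described above needs predictable state handling for messages and loading indicators. The sketch below shows one way to model that as a framework-agnostic reducer in TypeScript. All type and function names here are illustrative, not part of the EmpireUI API; in a real submission the reducer could back a React `useReducer` hook, with the actual model call (OpenAI, Langchain, etc.) dispatching `receive` when a reply arrives.

```typescript
// Minimal sketch of chat state for an AI chat component.
// Names are illustrative only — not existing EmpireUI identifiers.

type Role = "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

interface ChatState {
  messages: Message[];
  loading: boolean; // drives the loading indicator while awaiting the model
}

type ChatAction =
  | { type: "send"; content: string }    // user submits a prompt
  | { type: "receive"; content: string } // model reply arrives
  | { type: "error" };                   // request failed; stop the spinner

function chatReducer(state: ChatState, action: ChatAction): ChatState {
  switch (action.type) {
    case "send":
      // Append the user message and enter the loading state.
      return {
        messages: [...state.messages, { role: "user", content: action.content }],
        loading: true,
      };
    case "receive":
      // Append the assistant reply and clear the loading state.
      return {
        messages: [...state.messages, { role: "assistant", content: action.content }],
        loading: false,
      };
    case "error":
      return { ...state, loading: false };
  }
}

const initial: ChatState = { messages: [], loading: false };
```

Keeping the reducer pure like this makes the component easy to unit-test without mocking any AI provider, which also makes the PR easier to review.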