Content Core is a powerful, AI-powered content extraction and processing platform that transforms any source into clean, structured content. Extract text from websites, transcribe videos, process documents, and generate AI summaries—all through a unified interface with multiple integration options.
Extract content from anywhere:
- 📄 Documents - PDF, Word, PowerPoint, Excel, Markdown, HTML, EPUB
- 🎥 Media - Videos (MP4, AVI, MOV) with automatic transcription
- 🎵 Audio - MP3, WAV, M4A with speech-to-text conversion
- 🌐 Web - Any URL with intelligent content extraction
- 🖼️ Images - JPG, PNG, TIFF with OCR text recognition
- 📦 Archives - ZIP, TAR, GZ with content analysis
Process with AI:
- ✨ Clean & format extracted content automatically
- 📝 Generate summaries with customizable styles (bullet points, executive summary, etc.)
- 🎯 Context-aware processing - explain to a child, technical summary, action items
- 🔄 Smart engine selection - automatically chooses the best extraction method
# Extract content from any source
uvx --from "content-core" ccore https://example.com
uvx --from "content-core" ccore document.pdf
# Generate AI summaries
uvx --from "content-core" csum video.mp4 --context "bullet points"
One-click setup with Model Context Protocol (MCP) - extract content directly in Claude conversations.
Smart auto-detection commands:
- Extract Content - Full interface with format options
- Summarize Content - 9 summary styles available
- Quick Extract - Instant clipboard extraction
Right-click any file in Finder → Services → Extract or Summarize content instantly.
import content_core as cc
# Extract from any source
result = await cc.extract("https://example.com/article")
summary = await cc.summarize_content(result, context="explain to a child")
- 🎯 Intelligent Auto-Detection: Automatically selects the best extraction method based on content type and available services
- 🔧 Smart Engine Selection:
- URLs: Firecrawl → Jina → BeautifulSoup fallback chain
- Documents: Docling → Simple extraction fallback
- Media: OpenAI Whisper transcription
- Images: OCR with multiple engine support
- 🌍 Multiple Integrations: CLI, Python library, MCP server, Raycast extension, macOS Services
- ⚡ Zero-Install Options: Use uvx for instant access without installation
- 🧠 AI-Powered Processing: LLM integration for content cleaning and summarization
- 🔄 Asynchronous: Built with asyncio for efficient processing
Install Content Core using pip:
# Install the package
pip install content-core
# Install with MCP server support
pip install content-core[mcp]
Alternatively, if you’re developing locally:
# Clone the repository
git clone https://github.com/lfnovo/content-core
cd content-core
# Install with uv
uv sync
Content Core provides three CLI commands for extracting, cleaning, and summarizing content: ccore, cclean, and csum. These commands support input from text, URLs, files, or piped data (e.g., via cat file | command).
Zero-install usage with uvx:
# Extract content
uvx --from "content-core" ccore https://example.com
# Clean content
uvx --from "content-core" cclean "messy content"
# Summarize content
uvx --from "content-core" csum "long text" --context "bullet points"
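If you use the commands frequently, uv can also install them onto your PATH as persistent tools; this is an optional alternative to the zero-install uvx form shown above:
# Optional: install the CLI commands onto your PATH with uv's tool installer
uv tool install content-core
# Afterwards the commands can be called directly
ccore https://example.com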
Extracts content from text, URLs, or files, with optional formatting. Usage:
ccore [-f|--format xml|json|text] [-d|--debug] [content]
Options:
- -f, --format: Output format (xml, json, or text). Default: text.
- -d, --debug: Enable debug logging.
- content: Input content (text, URL, or file path). If omitted, reads from stdin.
Examples:
# Extract from a URL as text
ccore https://example.com
# Extract from a file as JSON
ccore -f json document.pdf
# Extract from piped text as XML
echo "Sample text" | ccore --format xml
Cleans content by removing unnecessary formatting, spaces, or artifacts. Accepts text, JSON, XML input, URLs, or file paths. Usage:
cclean [-d|--debug] [content]
Options:
- -d, --debug: Enable debug logging.
- content: Input content to clean (text, URL, file path, JSON, or XML). If omitted, reads from stdin.
Examples:
# Clean a text string
cclean " messy text "
# Clean piped JSON
echo '{"content": " messy text "}' | cclean
# Clean content from a URL
cclean https://example.com
# Clean a file’s content
cclean document.txt
Summarizes content with an optional context to guide the summary style. Accepts text, JSON, XML input, URLs, or file paths.
Usage:
csum [--context "context text"] [-d|--debug] [content]
Options:
- --context: Context for summarization (e.g., "explain to a child"). Default: none.
- -d, --debug: Enable debug logging.
- content: Input content to summarize (text, URL, file path, JSON, or XML). If omitted, reads from stdin.
Examples:
# Summarize text
csum "AI is transforming industries."
# Summarize with context
csum --context "in bullet points" "AI is transforming industries."
# Summarize piped content
cat article.txt | csum --context "one sentence"
# Summarize content from URL
csum https://example.com
# Summarize a file's content
csum document.txt
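Because all three commands read from stdin when no content argument is given, they compose with pipes; for example, extraction output can feed straight into summarization:
# Extract a document and summarize the extracted text as bullet points
ccore document.pdf | csum --context "bullet points"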
You can quickly integrate content-core into your Python projects to extract, clean, and summarize content from various sources.
import content_core as cc
# Extract content from a URL, file, or text
result = await cc.extract("https://example.com/article")
# Clean messy content
cleaned_text = await cc.clean("...messy text with [brackets] and extra spaces...")
# Summarize content with optional context
summary = await cc.summarize_content("long article text", context="explain to a child")
For more information on how to use the Content Core library, including details on AI model configuration and customization, refer to our Usage Documentation.
Content Core includes a Model Context Protocol (MCP) server that enables seamless integration with Claude Desktop and other MCP-compatible applications. The MCP server exposes Content Core's powerful extraction capabilities through a standardized protocol.
# Install with MCP support
pip install content-core[mcp]
# Or use directly with uvx (no installation required)
uvx --from "content-core[mcp]" content-core-mcp
Add to your claude_desktop_config.json:
{
  "mcpServers": {
    "content-core": {
      "command": "uvx",
      "args": [
        "--from",
        "content-core[mcp]",
        "content-core-mcp"
      ]
    }
  }
}
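If you installed the package with pip install content-core[mcp] rather than relying on uvx, you can point Claude Desktop at the installed entry point instead; this sketch assumes content-core-mcp is on the PATH that Claude Desktop uses:
{
  "mcpServers": {
    "content-core": {
      "command": "content-core-mcp"
    }
  }
}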
For detailed setup instructions, configuration options, and usage examples, see our MCP Documentation.
Content Core provides powerful right-click integration with macOS Finder, allowing you to extract and summarize content from any file without installation. Choose between clipboard or TextEdit output for maximum flexibility.
Create 4 convenient services for different workflows:
- Extract Content → Clipboard - Quick copy for immediate pasting
- Extract Content → TextEdit - Review before using
- Summarize Content → Clipboard - Quick summary copying
- Summarize Content → TextEdit - Formatted summary with headers
- Install uv (if not already installed):
curl -LsSf https://astral.sh/uv/install.sh | sh
- Create the services manually using Automator (about 5 minutes of setup); a minimal example script is sketched below.
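As a rough illustration, a Quick Action created in Automator (set to receive files in Finder, with a "Run Shell Script" action that passes input as arguments) could use a script along these lines; the exact copy-paste scripts, including TextEdit output and notifications, are in the macOS Services Documentation:
# Extract Content → Clipboard (sketch); adjust the PATH line to where uv is installed
export PATH="$HOME/.local/bin:$PATH"
for f in "$@"; do
  uvx --from "content-core" ccore "$f"
done | pbcopy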
Right-click any supported file in Finder → Services → Choose your option:
- PDFs, Word docs - Instant text extraction
- Videos, audio files - Automatic transcription
- Images - OCR text recognition
- Web content - Clean text extraction
- Multiple files - Batch processing support
- Zero-install processing: Uses uvx for isolated execution
- Multiple output options: Clipboard or TextEdit display
- System notifications: Visual feedback on completion
- Wide format support: 20+ file types supported
- Batch processing: Handle multiple files at once
- Keyboard shortcuts: Assignable hotkeys for power users
For complete setup instructions with copy-paste scripts, see macOS Services Documentation.
Content Core provides a powerful Raycast extension with smart auto-detection that handles both URLs and file paths seamlessly. Extract and summarize content directly from your Raycast interface without switching applications.
From Raycast Store (coming soon):
- Open Raycast and search for "Content Core"
- Install the extension by luis_novo
- Configure API keys in preferences
Manual Installation:
- Download the extension from the repository
- Open Raycast → "Import Extension"
- Select the raycast-content-core folder
🔍 Extract Content - Smart URL/file detection with full interface
- Auto-detects URLs vs file paths in real-time
- Multiple output formats (Text, JSON, XML)
- Drag & drop support for files
- Rich results view with metadata
📝 Summarize Content - AI-powered summaries with customizable styles
- 9 different summary styles (bullet points, executive summary, etc.)
- Auto-detects source type with visual feedback
- One-click snippet creation and quicklinks
⚡ Quick Extract - Instant extraction to clipboard
- Type → Tab → Paste source → Enter
- No UI, works directly from command bar
- Perfect for quick workflows
- Smart Auto-Detection: Instantly recognizes URLs vs file paths
- Zero Installation: Uses uvx for Content Core execution
- Rich Integration: Keyboard shortcuts, clipboard actions, Raycast snippets
- All File Types: Documents, videos, audio, images, archives
- Visual Feedback: Real-time type detection with icons
For detailed setup, configuration, and usage examples, see Raycast Extension Documentation.
For users integrating with the Langchain framework, content-core exposes a set of compatible tools. These tools, located in the src/content_core/tools directory, let you use content-core's extraction, cleaning, and summarization capabilities directly within your Langchain agents and chains.
You can import and use these tools like any other Langchain tool. For example:
from content_core.tools import extract_content_tool, cleanup_content_tool, summarize_content_tool
from langchain.agents import initialize_agent, AgentType

# llm must be an initialized Langchain chat model (e.g., ChatOpenAI) configured with your API key
tools = [extract_content_tool, cleanup_content_tool, summarize_content_tool]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Extract the content from https://example.com and then summarize it.")
Refer to the source code in src/content_core/tools for specific tool implementations and usage details.
The core functionality revolves around the extract_content function.
import asyncio
from content_core.extraction import extract_content
async def main():
    # Extract from raw text
    text_data = await extract_content({"content": "This is my sample text content."})
    print(text_data)

    # Extract from a URL (uses 'auto' engine by default)
    url_data = await extract_content({"url": "https://www.example.com"})
    print(url_data)

    # Extract from a local video file (gets transcript, engine='auto' by default)
    video_data = await extract_content({"file_path": "path/to/your/video.mp4"})
    print(video_data)

    # Extract from a local markdown file (engine='auto' by default)
    md_data = await extract_content({"file_path": "path/to/your/document.md"})
    print(md_data)

    # Per-execution override with Docling for documents
    doc_data = await extract_content({
        "file_path": "path/to/your/document.pdf",
        "document_engine": "docling",
        "output_format": "html"
    })

    # Per-execution override with Firecrawl for URLs
    url_data = await extract_content({
        "url": "https://www.example.com",
        "url_engine": "firecrawl"
    })
    print(doc_data)

if __name__ == "__main__":
    asyncio.run(main())
(See src/content_core/notebooks/run.ipynb for more detailed examples.)
Content Core supports an optional Docling-based extraction engine for rich document formats (PDF, DOCX, PPTX, XLSX, Markdown, AsciiDoc, HTML, CSV, Images).
Docling is not the default engine for parsing documents; the document engine defaults to 'auto', which chooses between Docling and simple extraction based on what is available. If you don't want Docling used at all, set the document engine to "simple".
In your cc_config.yaml or custom config, set:
extraction:
  document_engine: docling  # 'auto' (default), 'simple', or 'docling'
  url_engine: auto          # 'auto' (default), 'simple', 'firecrawl', or 'jina'
  docling:
    output_format: markdown  # markdown | html | json
from content_core.config import set_document_engine, set_url_engine, set_docling_output_format
# switch document engine to Docling
set_document_engine("docling")
# switch URL engine to Firecrawl
set_url_engine("firecrawl")
# choose output format: 'markdown', 'html', or 'json'
set_docling_output_format("html")
# now use ccore.extract or ccore.ccore
result = await cc.extract("document.pdf")
Configuration settings (such as API keys for external services and logging levels) can be managed through environment variables or .env files, loaded automatically via python-dotenv.
Example .env:
OPENAI_API_KEY=your-key-here
GOOGLE_API_KEY=your-key-here
Content Core allows you to define custom prompt templates for content processing. By default, the library uses the built-in prompts located in the prompts directory. However, you can create your own prompt templates and store them in a dedicated directory. To specify the location of your custom prompts, set the PROMPT_PATH environment variable in your .env file or system environment.
Example .env with a custom prompt path:
OPENAI_API_KEY=your-key-here
GOOGLE_API_KEY=your-key-here
PROMPT_PATH=/path/to/your/custom/prompts
When a prompt template is requested, Content Core first looks in the custom directory specified by PROMPT_PATH (if it is set and exists). If the template is not found there, it falls back to the default built-in prompts. This lets you override specific prompts while still using the defaults for the others.
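The lookup order described above behaves roughly like this sketch (illustrative only, not the library's actual code; resolve_prompt and the template name are hypothetical):
import os
from pathlib import Path

def resolve_prompt(template_name: str, builtin_dir: Path) -> Path:
    # Prefer a template from PROMPT_PATH when the variable is set and the file exists
    custom_dir = os.environ.get("PROMPT_PATH")
    if custom_dir:
        candidate = Path(custom_dir) / template_name
        if candidate.exists():
            return candidate
    # Otherwise fall back to the built-in template of the same name
    return builtin_dir / template_name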
To set up a development environment:
# Clone the repository
git clone <repository-url>
cd content-core
# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate
uv sync --group dev
# Run tests
make test
# Lint code
make lint
# See all commands
make help
This project is licensed under the MIT License. See the LICENSE file for details.
Contributions are welcome! Please see our Contributing Guide for more details on how to get started.