
komu


A kawaii self-hosted manga reader for Japanese learners! Supports English and Chinese.

Screenshots: Library View · Upload Interface · Reading Mode · Text Overlay · Grammar Breakdown · Text Analysis · Settings Page

Features

  • Import PDFs/Images: Drag & drop manga PDFs/images to add to your library
  • Dialogue OCR: Dialogue text recognition in the background (using manga-ocr)
  • Grammar Breakdown: Break down the dialogue into tokens with furigana and part-of-speech highlighting (using ichiran)
  • AI-driven Q&A: Select words and ask Komi (our kanban girl) to explain them in the context of the dialogue
  • Responsive UI: Works on both desktop and mobile
  • 3 Reading Modes: RTL/LTR page navigation and vertical scrolling
  • PWA Support: Can be added to home screen as an app on both iOS and Android

Tech Stack

  • Frontend: React 18 + Vite + shadcn/ui + framer-motion
  • Backend: Elysia + Bun + SQLite + Prisma
  • Japanese Tokenization: ichiran
  • OCR Inference: FastAPI + manga-ocr + comic-text-detector

Quick Start

Installation

Linux / WSL

  1. Install Docker following the instructions on the official website, then start it:

systemctl start docker
# or
sudo /etc/init.d/docker start

  2. Install Node.js 18+ following the instructions on the official website.

  3. Install PyTorch following the instructions on the official website.
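Before moving on, you can confirm the prerequisites are on your PATH. This helper is a sketch, not part of the repo; note that on some distros the Python binary is `python3` rather than `python`.

```shell
# Report whether each prerequisite is on PATH ("OK" or "MISSING").
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "OK: $tool"
    else
      echo "MISSING: $tool"
    fi
  done
}

check_tools docker node python
```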

  4. Run the following commands to install Git LFS, Bun, and ImageMagick:
# Install Git LFS
sudo apt install git-lfs

# Install Bun
curl -fsSL https://bun.sh/install | bash
source ~/.bashrc

# Install ImageMagick
sudo apt install imagemagick graphicsmagick

  5. Clone the repository and install dependencies:
# Setup project (Important: --recurse-submodules to clone submodules in the packages folder!)
git clone --recurse-submodules https://github.com/TigerHix/komu.git
cd komu

# Install frontend & backend dependencies
cd frontend && bun install && cd ..

cd backend
cp .env.example .env
bun install && bunx prisma generate && bunx prisma db push
# Install Poppler (PDF processing utilities)
sudo apt-get install -y poppler-utils
cd ..

# Create Python virtual environment for inference service
# Need to create at the project root directory!
python -m venv .venv
source .venv/bin/activate
cd inference
cp .env.example .env
pip install -r requirements.txt
cd ..

  6. To use the LLM-powered features, request an API key from OpenRouter and add it to the backend/.env file. You can also customize the LLM models (CHAT_MODEL, SEARCH_MODEL).
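For reference, the relevant backend/.env entries might look like the fragment below. OPENROUTER_API_KEY is an assumed variable name (check .env.example for the actual one); CHAT_MODEL and SEARCH_MODEL are the names mentioned above.

```shell
# backend/.env (sketch; OPENROUTER_API_KEY is an assumed name, check .env.example)
OPENROUTER_API_KEY=...   # key requested from OpenRouter
CHAT_MODEL=...           # optional: override the chat model
SEARCH_MODEL=...         # optional: override the search model
```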

Windows & macOS

TODO. The steps should be very similar to the Linux setup above.

Running the Application

Linux / WSL

# Terminal 1: Frontend  
cd frontend && bun run dev

# Terminal 2: Backend
cd backend && bun run dev

# Terminal 3: Inference Service
cd inference && bash ./start_service.sh
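If you prefer a single terminal, the three commands above can be launched from one script. This is a convenience sketch, not part of the repo; it assumes you run it from the komu root and that writing logs to /tmp is acceptable.

```shell
#!/bin/sh
# Start all three dev services in the background and wait on them.
# Stop everything with Ctrl+C (the signal reaches the child processes).
(cd frontend && bun run dev) > /tmp/komu-frontend.log 2>&1 &
(cd backend && bun run dev) > /tmp/komu-backend.log 2>&1 &
(cd inference && bash ./start_service.sh) > /tmp/komu-inference.log 2>&1 &
wait
```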

⚠️ First-time setup note: The initial Docker build & run for ichiran (the grammar analysis service) takes approximately 5 minutes. Grammar breakdown features become available once you see "✅ Ichiran service ready!" in the console. If the build exceeds 10 minutes, check whether the build is stuck, or increase the timeout in backend/src/lib/ichiran-service.ts.

In your browser, go to http://localhost:5847 to access Komu.
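Since the services take a moment to warm up, a small polling helper can confirm the frontend is reachable before you open the browser. This is a sketch; it assumes curl is installed and uses the port mentioned above.

```shell
# Poll a URL every 2 seconds until it responds or attempts run out.
wait_for() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timeout: $url"
  return 1
}

wait_for http://localhost:5847 5 || echo "frontend not up yet; check the dev server logs"
```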

We recommend using a VPN such as Tailscale to access the application from your other devices.

Contributing

This project was pretty much vibe-coded in a few days, so beware of the code quality! We invite you to vibe together using Claude Code. Just don't spend too much money on it...

License

Everything except Komi's visual assets is licensed under the GPL-3.0.

Character Designer & Illustrator: @凛星Rin
