Manugen AI

[Project logo]

Writing academic manuscripts can be tedious. Imagine that you could bring together your results, prior research, source code, and some brief bullet points per section to generate a manuscript automatically. That is Manugen-AI.

This repo holds our submission for the 2025 Agent Development Kit (ADK) Hackathon with Google Cloud - Manugen AI. Manugen AI is a multi-agent tool for drafting academic manuscripts from assets and guidance: a collection of figures, text/instructions, source code, and other content files. It uses agents based on large language models (LLMs) and the Google ADK. See the Manugen AI package README for more details on the package itself.

The project consists of a web-based frontend, backend, and additional services. It includes Docker Compose configuration to run the application stack locally.


Get started

Project Structure

The project is a standard three-tier web application, with the following components:

  • ./frontend/: A web-based user interface for interacting with the application, built with React.
  • ./backend/: A REST API that serves the frontend and handles requests from the web interface.
  • ./packages/manugen-ai/: The Manugen AI package, which is used to generate academic manuscripts from content files. The backend relies on this package to perform the actual manuscript generation.

The project also includes an optional PostgreSQL database; when it is available, ADK uses it to persist session data between runs of the stack.

Installation

Docker and Docker Compose

To run the full stack, you'll need Docker and Docker Compose installed on your machine. See Docker's installation instructions for details.

Docker Compose is typically included with Docker Desktop installations, but if you need to install it separately, see the Docker Compose installation instructions.

(Optional) Ollama for local LLM models

If you want to run LLM models locally, you'll need Ollama installed on your machine. Once it's installed, start the Ollama server so the application can reach the model. Make sure the server listens on all addresses; otherwise it won't be reachable from our Docker images. If the Ollama server is already running as a system service, stop it first (on Ubuntu Linux, run sudo systemctl stop ollama.service), then start it manually:

OLLAMA_HOST=0.0.0.0 ollama serve

Confirm that the Ollama server is accepting requests by inspecting its logs. The command above should output something like "Listening on [::]:11434"; if it shows 127.0.0.1 instead, Docker containers won't be able to connect to it. On macOS, you can tail the server logs with tail -f ~/.ollama/logs/server.log.
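
As a quick network check, you can also probe the server directly with curl (assuming Ollama's default port, 11434). Use your machine's network address rather than localhost, so you verify the server is reachable beyond loopback; Ollama's root endpoint responds with a short "Ollama is running" message:

# <your-host-ip> is a placeholder for your machine's LAN/Docker-reachable address
curl http://<your-host-ip>:11434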

Then, you'll need to pull a model of your choice, which is used by the Manugen AI package for invoking tools, generating text, and interpreting figures. Make sure the model you pick supports "tools", such as Qwen3:

# qwen3 supports tools and thinking
ollama pull qwen3:8b

If you want to use Manugen-AI to upload figures and interpret them, you'll also need a model that supports "vision", such as Gemma3:

# gemma3 supports vision, which can be used to interpret your figures
ollama pull gemma3:4b

For the sake of keeping computational demands low, the models suggested here are small and may not yield the best results. You'll need to test which model works best for your task. In general, we've observed that large models or those with high context size perform better (you can try larger versions of Qwen3 and Gemma3), provided you have a GPU with sufficient VRAM to maintain acceptable inference times.
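
To confirm your models were pulled and to check what they can do, you can use Ollama's built-in commands (newer Ollama releases list capabilities such as "tools" and "vision" in the show output):

# list locally available models
ollama list
# show details for a model, including its reported capabilities
ollama show qwen3:8b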

Usage

First, clone the repository, and in a terminal, change directory to the repo folder.

Second, copy .env.TEMPLATE in the root of the project to a file named .env:

cp .env.TEMPLATE .env

Then take a look at the .env file for other options. You'll also want to fill in API keys for the services you're using. By default, the project uses Google's Gemini 2.5 Flash model (gemini-2.5-flash), so if you want to use it, you'll need a valid API key in the GOOGLE_API_KEY entry.
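
For example, the relevant entry in your .env would look something like the following (a sketch; see .env.TEMPLATE for the full set of options):

# key obtained from Google AI Studio; the value here is a placeholder
GOOGLE_API_KEY="your-api-key-here"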

If you want to use an Ollama model, make sure you select it by changing the value of MANUGENAI_MODEL_NAME (and maybe MANUGENAI_FIGURE_MODEL_NAME) in the .env file:

# if you want to use Ollama, select a model that supports "tools" such as Qwen3:
MANUGENAI_MODEL_NAME="ollama/qwen3:8b"

# Qwen3, however, does not support "vision" to interpret your figures
# So you can select a model that supports "vision" such as Gemma3:
MANUGENAI_FIGURE_MODEL_NAME="ollama/gemma3:4b"

Third, run the project:

docker compose up --build

This will build the Docker images and start the application. You'll be attached to the logs of the application, which will show you live output from the various services that make up the stack. Pressing Ctrl+C will stop the application.
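
If you'd rather not stay attached to the logs, you can start the stack in detached mode and follow the logs separately (standard Docker Compose behavior):

# start the stack in the background
docker compose up --build -d
# follow logs for all services; Ctrl+C detaches without stopping them
docker compose logs -f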

Accessing the Frontend

Wait until you see a message resembling the following in the logs:

VITE v6.3.5  ready in 1276 ms

➜  Local:   http://localhost:5173/
➜  Network: http://172.22.0.2:5173/

Open http://localhost:8901 in a web browser to view the frontend user interface. It might take a few seconds to establish a connection.

(FYI: Despite the port being 5173 in the logs, the Docker Compose configuration maps it to port 8901 on the host machine.)

Accessing the Backend API

While it's not necessary for using the app, you might want to access the backend API directly for testing or debugging purposes.

Wait until you see the following message in the logs:

INFO:     Will watch for changes in these directories: ['/app']
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [102] using WatchFiles
INFO:     Started server process [104]
INFO:     Waiting for application startup.
INFO:     Application startup complete.

Visit http://localhost:8900 in your web browser to access the backend API; docs can be found at http://localhost:8900/docs.

(Again, note that the port within the container, 8000, is different from the one on the host, which prevents collisions with other host services.)
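
As a quick smoke test, you can hit the API from the command line. Since the backend runs under Uvicorn and serves interactive docs at /docs (which suggests a FastAPI app), it should also expose an OpenAPI schema; the /openapi.json route below is an assumption based on FastAPI's defaults:

# fetch the interactive docs page
curl -s http://localhost:8900/docs | head
# FastAPI apps typically expose their schema at /openapi.json
curl -s http://localhost:8900/openapi.json | head -c 300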

Purging Session State

If you want to clear the session state, you can run the following command to stop any application containers that are running and remove the database volume:

docker compose down -v

This will purge any sessions or other data that ADK stores to the database.
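
To double-check that the database volume is gone, you can list the remaining volumes. The exact volume name depends on your Compose project name (typically the repo folder), so the filter below is an assumption:

# the stack's volume is usually prefixed with the Compose project name
docker volume ls | grep -i manugen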

(Optional) Running the App in Production

The app includes configuration, located in docker-compose.prod.yml, for running the application in production mode.

To use it, ensure that you've set DOMAIN_NAME in your .env file to the domain name under which the app will be served. You should also make sure you can bind ports 80 and 443 on the machine that will run the production app.
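
For instance, the entry in .env might look like this (the domain shown is a placeholder; use one you control):

# domain under which Caddy will serve the app and request a certificate
DOMAIN_NAME="manugen.example.org"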

The production configuration differs from the development configuration in a few ways:

  • The frontend is built and served as static files, rather than being served by a development server.
  • Volumes are disabled, and the code for each container is baked into the image. As a result, hot-reloading is disabled for all containers.
  • It uses Caddy as a reverse proxy to serve the frontend and backend as well as to obtain an SSL cert for your chosen domain.
    • The backend API is served from /api/
    • The frontend is served from /

To launch the production version of the app, you can run the following command:

docker compose -f docker-compose.yml -f docker-compose.prod.yml up --build -d

This will build the production images and start the application in detached mode.

To tail the container logs, you can run:

docker compose -f docker-compose.yml -f docker-compose.prod.yml logs -f
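
To check that all services came up, or to stop the production stack, use the same pair of compose files:

# list service status
docker compose -f docker-compose.yml -f docker-compose.prod.yml ps
# stop and remove the production containers
docker compose -f docker-compose.yml -f docker-compose.prod.yml down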

Other Resources

(Optional) Running ADK Web

If you'd like to run the ADK web agent interface, you can do so by running the following command:

docker compose run -p 8905:8000 --rm -it backend /app/start_adk_web.sh

Wait until you see the following message in the logs:

+----------------------------------------------------------+
| ADK Web Server started                                   |
|                                                          |
| For local testing, access at http://localhost:8000.      |
+----------------------------------------------------------+

Visit http://localhost:8905 in your web browser to access the ADK web agent interface.
