Needle is a deployment-ready image retrieval system that lets researchers and developers query images using natural language descriptions. It is based on the approach presented in our paper, which introduces a novel method for efficient and scalable retrieval.
🚀 Why Needle?
- Seamlessly retrieve image content from large datasets.
- Extendable and modular design to fit various research needs.
- Backed by cutting-edge research for accurate and robust retrieval.
- 200% improvement over OpenAI's CLIP
Watch as Needle transforms natural language queries into precise image retrieval results in real time.
Installing Needle is quick and straightforward. Make sure you have Docker and Docker Compose installed, then use the one-liner below to install Needle:

```bash
curl -fsSL https://raw.githubusercontent.com/UIC-InDeXLab/Needle/main/scripts/install.sh -o install.sh && bash install.sh && rm install.sh
```
Then, start the Needle service with:

```bash
needlectl service start
```
To launch the CPU-based stack with your production configs (located in `$NEEDLE_HOME/configs/`), use the production override:

```bash
docker compose -f docker/docker-compose.cpu.yaml -f docker/docker-compose.prod.yaml up -d
```
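The exact contents of `docker/docker-compose.prod.yaml` depend on your deployment; as a rough illustration only (the `backend` service name and the env-file path are assumptions, not taken from the repository), a production override typically pins restart policies, external config, and log rotation:

```yaml
# Hypothetical production override sketch -- service names, env-file
# locations, and logging options here are illustrative, not Needle's actual file.
services:
  backend:                                       # assumed service name
    restart: unless-stopped
    env_file:
      - ${NEEDLE_HOME}/configs/production.env    # assumed config path
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

Because later `-f` files are merged over earlier ones, this file only needs to contain the settings that differ from `docker-compose.cpu.yaml`.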
You can start the full CPU-based stack (etcd, MinIO, Milvus, Postgres, image-generator-hub) and launch the backend in hot-reload dev mode with one command:

```bash
make dev
```
This runs core services in detached mode, then rebuilds and starts the backend with your local code mounted and Uvicorn reload enabled.
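As a hedged sketch of what `make dev` roughly does under the hood (the target body below is an assumption reconstructed from the description above; the repository's actual Makefile is authoritative):

```make
# Illustrative sketch only -- not Needle's real Makefile.
dev:
	# bring up core services detached
	docker compose -f docker/docker-compose.cpu.yaml up -d etcd minio milvus postgres image-generator-hub
	# rebuild and run the backend in the foreground with local code mounted
	docker compose -f docker/docker-compose.cpu.yaml up --build backend
```

The key point is the split: infrastructure runs detached, while the backend runs attached so Uvicorn's reloader can pick up local code changes.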
For GPU development, use:

```bash
make dev-gpu
```
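`make dev-gpu` targets a GPU-enabled stack; in Docker Compose, GPU access is typically granted via a device reservation like the following (a generic Compose snippet requiring the NVIDIA Container Toolkit, not taken from Needle's actual files; the `backend` service name is an assumption):

```yaml
# Generic Compose GPU reservation -- illustrative only.
services:
  backend:                     # assumed service name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all       # or an integer to limit GPUs
              capabilities: [gpu]
```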
Check out the Needle documentation to learn more about the Needle CLI and its capabilities.
Needle was developed as part of the research presented in our paper. If you use Needle in your work, please cite it to support the project:
```bibtex
@article{erfanian2024needle,
  title={Needle: A Generative-AI Powered Monte Carlo Method for Answering Complex Natural Language Queries on Multi-modal Data},
  author={Erfanian, Mahdi and Dehghankar, Mohsen and Asudeh, Abolfazl},
  journal={arXiv preprint arXiv:2412.00639},
  year={2024}
}
```
We welcome contributions, feedback, and discussions! Feel free to open issues or submit pull requests in our GitHub repository.
Let’s build the future of multimodal content retrieval together!