This is a fork of advimman/lama, an awesome open-source inpainting project. We've modified `bin/predict.py` to enable GPU support via a command-line argument, making inference much faster on systems with CUDA-compatible GPUs such as the RTX 4070 Ti SUPER.

The original `predict.py` hardcoded the device to CPU (`device = torch.device("cpu")`), so inference ran on the CPU even when a GPU was available. This fork lets you pass `device=cuda` for blazing-fast performance. Tested on an RTX 4070 Ti SUPER (inference: 0.5s).
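For reference, the change boils down to resolving the device from a command-line argument instead of hardcoding it. The sketch below is illustrative rather than the fork's exact code: `get_device` is a hypothetical helper, and in `bin/predict.py` the device string would come from the existing Hydra config rather than a literal.

```python
import torch

def get_device(device_name: str = "cpu") -> torch.device:
    """Resolve a device string like "cpu" or "cuda", falling back to
    CPU when CUDA is requested but unavailable."""
    if device_name.startswith("cuda") and not torch.cuda.is_available():
        print(f"'{device_name}' requested but CUDA is unavailable; using CPU.")
        return torch.device("cpu")
    return torch.device(device_name)

if __name__ == "__main__":
    # In the fork this value would come from the device=... override.
    device = get_device("cuda")
    print(f"Running inference on: {device}")
```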
- Clone this repository:
  `git clone https://github.com/Rootport-AI/lama.git`
- Follow the original LaMa instructions for setup.
- Run with GPU, filling in the paths for your setup (the `<...>` values below are placeholders; if CUDA doesn't kick in, see the sanity check after this list):
  `python bin/predict.py model.path=<model_dir> indir=<input_dir> outdir=<output_dir> device=cuda`
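If `device=cuda` doesn't seem to speed things up, first confirm that your PyTorch build can actually see the GPU. This quick check is not part of the fork, just a sanity test:

```python
import torch

# Confirm PyTorch was built with CUDA support and can see a GPU.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```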
- Huge thanks to advimman and the LaMa team for their amazing work!
- Licensed under Apache 2.0, same as the original (see `LICENSE`).
Happy inpainting!