RF-DETR 1.1.0
Changelog
[Demo video: barcelona-1-30s-rfdetr-result.mp4]
🚀 Added
- **Early stopping** - Early stopping monitors validation mAP and halts training if improvements remain below a threshold for a set number of epochs. This can reduce wasted computation once the model converges. Additional parameters, such as `early_stopping_patience`, `early_stopping_min_delta`, and `early_stopping_use_ema`, let you fine-tune the stopping behavior. (#87)
  ```python
  from rfdetr import RFDETRBase

  model = RFDETRBase()

  model.train(dataset_dir=<DATASET_PATH>, epochs=12, batch_size=4, grad_accum_steps=4, early_stopping=True)
  ```
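  The tuning parameters named above can be passed alongside `early_stopping`. A minimal sketch; the specific values here are illustrative assumptions, not documented defaults:

  ```python
  from rfdetr import RFDETRBase

  model = RFDETRBase()

  # Stop once validation mAP has improved by less than `early_stopping_min_delta`
  # for `early_stopping_patience` consecutive epochs.
  model.train(
      dataset_dir=<DATASET_PATH>,
      epochs=12,
      batch_size=4,
      grad_accum_steps=4,
      early_stopping=True,
      early_stopping_patience=5,       # illustrative value
      early_stopping_min_delta=0.001,  # illustrative value
      early_stopping_use_ema=True,     # assumes a boolean toggle for monitoring the EMA weights
  )
  ```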
- **Gradient checkpointing** - Gradient checkpointing re-computes certain parts of the forward pass during backpropagation to reduce peak memory usage. This allows training larger models or higher batch sizes on limited GPU memory at the cost of slightly longer training time. Enable it by setting `gradient_checkpointing=True`. (#91)
  ```python
  from rfdetr import RFDETRBase

  model = RFDETRBase()

  model.train(dataset_dir=<DATASET_PATH>, epochs=12, batch_size=8, grad_accum_steps=2, gradient_checkpointing=True)
  ```
- **Saving metrics** - Training and validation metrics (e.g., losses, mAP) are now automatically saved to your output directory after training. (#58)
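  These metrics land in the directory you pass as `output_dir` (the same parameter used in the logging examples below); a minimal sketch:

  ```python
  from rfdetr import RFDETRBase

  model = RFDETRBase()

  # The recorded losses and mAP values are saved under <OUTPUT_PATH> when training finishes.
  model.train(dataset_dir=<DATASET_PATH>, epochs=12, batch_size=4, grad_accum_steps=4, output_dir=<OUTPUT_PATH>)
  ```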
- **Logging with TensorBoard** - Added support for logging training progress and metrics to TensorBoard, providing live visualizations of your model's performance. Simply pass `tensorboard=True` to `.train()`, then run `tensorboard --logdir <OUTPUT_DIR>` to monitor. (#62)

  **Using TensorBoard with RF-DETR**
  - TensorBoard logging requires additional packages. Install them with:

    ```
    pip install "rfdetr[metrics]"
    ```
  - To activate logging, pass the extra parameter `tensorboard=True` to `.train()`:

    ```python
    from rfdetr import RFDETRBase

    model = RFDETRBase()

    model.train(dataset_dir=<DATASET_PATH>, epochs=12, batch_size=4, grad_accum_steps=4, tensorboard=True, output_dir=<OUTPUT_PATH>)
    ```
  - To use TensorBoard locally, navigate to your project directory and run:

    ```
    tensorboard --logdir <OUTPUT_DIR>
    ```

    Then open `http://localhost:6006/` in your browser to view your logs.

  - To use TensorBoard in Google Colab, run:

    ```
    %load_ext tensorboard
    %tensorboard --logdir <OUTPUT_DIR>
    ```
- **Logging with Weights and Biases** - Integrated Weights and Biases (W&B) for collaborative, cloud-based experiment tracking. Passing `wandb=True` to `.train()` will automatically log metrics, hyperparameters, and system stats to your W&B project. (#70)
  **Using Weights and Biases with RF-DETR**

  - Weights and Biases logging requires additional packages. Install them with:

    ```
    pip install "rfdetr[metrics]"
    ```
  - Before using W&B, make sure you are logged in:

    ```
    wandb login
    ```

    You can retrieve your API key at wandb.ai/authorize.
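    If you prefer a non-interactive setup (for example, on a remote machine), the same key can also be supplied from Python via the standard W&B API; a minimal sketch, with the key placeholder to be replaced by your own:

    ```python
    import wandb

    # Programmatic alternative to running `wandb login` in a shell.
    wandb.login(key="<YOUR_API_KEY>")
    ```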
  - To activate logging, pass the extra parameter `wandb=True` to `.train()`:

    ```python
    from rfdetr import RFDETRBase

    model = RFDETRBase()

    model.train(dataset_dir=<DATASET_PATH>, epochs=12, batch_size=4, grad_accum_steps=4, wandb=True, project=<PROJECT_NAME>, run=<RUN_NAME>)
    ```

    In W&B, projects are collections of related machine learning experiments, and runs are individual sessions where training or evaluation happens. If you don't specify a name for a run, W&B will assign a random one automatically.
- **Automated Python package publish** - Implemented a GitHub Actions workflow to build and publish the `rfdetr` package to PyPI on each new release, ensuring the latest version is immediately available. (#71)
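  Once a release is published, the newest version can be pulled from PyPI with a standard pip upgrade:

  ```
  pip install --upgrade rfdetr
  ```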
🔧 Fixed
- **Resume training** - You can resume training from a previously saved checkpoint by passing the path to the `checkpoint.pth` file using the `resume` argument. This is useful when training is interrupted or you want to continue fine-tuning an already partially trained model. The training loop will automatically load the weights and optimizer state from the provided checkpoint file. (#88)
  ```python
  from rfdetr import RFDETRBase

  model = RFDETRBase()

  model.train(dataset_dir=<DATASET_PATH>, epochs=12, batch_size=4, grad_accum_steps=4, resume=<CHECKPOINT_PATH>)
  ```
🏆 Contributors
@mario-dg (Mario da Graca), @onuralpszr (Onuralp SEZER), @farukalamai (Md Faruk Alam), @probicheaux (Peter Robicheaux), @isaacrob-roboflow (Isaac Robinson), @Matvezy (Matvei Popov), @SkalskiP (Piotr Skalski)