TBH is a deep learning framework for hashing-based retrieval. It uses a ResNet50 backbone for feature extraction and adds custom layers for feature adaptation and binary bottlenecking.
- backup/: Contains backup files and configurations.
- schematic/: Includes schematic representations and diagrams.
- scripts/: Contains scripts for training, evaluation, and validation.
  - eval.py: Script for evaluating retrieval performance.
  - validate.py: Script for validating model performance.
  - train.py: Script for training the model.
- result/: Stores results and output files.
- utils/: Utility functions and modules.
- config/: Configuration files for setting hyperparameters and other settings.
- dataset/: Data loading and preprocessing scripts.
- data/: Raw and processed data files.
- models/: Model architecture and layers.
  - tbh.py: Implementation of the TBH model.
- .git/: Git version control directory.
To set up the environment, use the environment.yml file to create a conda environment:
conda env create -f environment.yml
conda activate tbh
To train the model, run the following command:
bash run_train.sh
To evaluate the model, use:
bash run_eval.sh
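For context on what the evaluation measures: hashing-based retrieval is typically scored by ranking database items by Hamming distance to each query's hash code and computing mean average precision (mAP). The snippet below is a minimal, self-contained sketch of that metric in NumPy; it is not the project's eval.py, and all names in it are illustrative.

```python
import numpy as np

def hamming_map(query_codes, db_codes, query_labels, db_labels):
    """Mean average precision for Hamming-distance retrieval.

    Codes are arrays of shape (N, n_bits) with values in {0, 1}.
    """
    aps = []
    for q_code, q_label in zip(query_codes, query_labels):
        # Hamming distance from this query to every database code.
        dist = np.count_nonzero(db_codes != q_code, axis=1)
        order = np.argsort(dist)
        relevant = (db_labels[order] == q_label).astype(np.float64)
        if relevant.sum() == 0:
            continue  # no relevant items for this query
        # Precision at each rank, averaged over the relevant positions.
        precision_at_k = np.cumsum(relevant) / (np.arange(relevant.size) + 1)
        aps.append((precision_at_k * relevant).sum() / relevant.sum())
    return float(np.mean(aps))

# Toy usage with random binary codes and labels.
rng = np.random.default_rng(0)
queries = rng.integers(0, 2, size=(5, 32))
database = rng.integers(0, 2, size=(100, 32))
q_labels = rng.integers(0, 10, size=5)
db_labels = rng.integers(0, 10, size=100)
print("mAP:", hamming_map(queries, database, q_labels, db_labels))
```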
The config/config.py file contains various hyperparameters and settings, such as:
- HASH_DIM: Dimensionality of the hash codes.
- FEATURE_DIM: Dimensionality of the feature space.
- BATCH_SIZE: Batch size for training.
- LEARNING_RATE: Learning rate for the optimizer.
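As an illustration only, a configuration module exposing these names might look like the following; the values shown are placeholders, not the settings shipped in config/config.py.

```python
# Illustrative sketch of a config/config.py layout; values are placeholders.
HASH_DIM = 32          # Dimensionality of the hash codes
FEATURE_DIM = 2048     # Dimensionality of the feature space
BATCH_SIZE = 128       # Batch size for training
LEARNING_RATE = 1e-4   # Learning rate for the optimizer
```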
The TBH model is built on a ResNet50 backbone, with additional layers for feature adaptation and binary bottlenecking. The architecture is defined in models/tbh.py.
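To make that layout concrete, here is a rough Keras sketch of a ResNet50 backbone followed by a feature-adaptation layer and a binary bottleneck. It is a simplified assumption of how such a model could be wired, not the code in models/tbh.py; layer sizes, names, and the thresholding scheme are illustrative.

```python
import tensorflow as tf

HASH_DIM = 32  # illustrative; see config/config.py for the real value

def build_tbh_sketch(input_shape=(224, 224, 3)):
    # ResNet50 backbone for feature extraction (global average pooling on top).
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", pooling="avg"
    )
    inputs = tf.keras.Input(shape=input_shape)
    features = backbone(inputs)  # shape (batch, 2048)

    # Feature adaptation: project backbone features into a task-specific space.
    adapted = tf.keras.layers.Dense(1024, activation="relu")(features)

    # Binary bottleneck: sigmoid outputs that are thresholded into hash codes
    # at retrieval time (training usually relies on a continuous relaxation).
    logits = tf.keras.layers.Dense(HASH_DIM, activation="sigmoid")(adapted)
    codes = tf.keras.layers.Lambda(lambda x: tf.cast(x > 0.5, tf.float32))(logits)

    return tf.keras.Model(inputs, outputs=[logits, codes])

model = build_tbh_sketch()
model.summary()
```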
Contributions are welcome! Please fork the repository and submit a pull request for any improvements or bug fixes.
This project is licensed under the MIT License.