Welcome to the NSFW Image Detection repository! This project focuses on detecting and categorizing explicit, suggestive, or safe media in images. It uses a vision-language encoder fine-tuned from the siglip2-base-patch16-256 checkpoint.
The NSFW Image Detection model is designed to help developers and researchers filter and categorize image content effectively. It is built on the SiglipForImageClassification architecture and aims to provide a reliable solution for identifying inappropriate content.
This project can be particularly useful for applications that require content moderation, parental controls, or user-generated content platforms.
- Multi-class Classification: The model can identify various content types, including explicit, suggestive, and safe media.
- Easy Integration: Built with popular libraries like Hugging Face Transformers, making it easy to integrate into existing projects.
- User-friendly Interface: Utilize Gradio for a simple web interface to interact with the model.
- High Accuracy: Fine-tuned on a comprehensive dataset to ensure reliable detection results.
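To illustrate the Transformers integration, here is a minimal inference sketch. The checkpoint path (`./model`) and the example file name are placeholders rather than names defined by this repository; point them at the fine-tuned checkpoint and an image of your own.

```python
from transformers import pipeline

# Load the fine-tuned classifier.
# "./model" is a placeholder path; point it at the checkpoint used by this repo.
classifier = pipeline("image-classification", model="./model")

# Classify a local image; the pipeline returns a list of
# {"label": ..., "score": ...} dicts sorted by confidence.
results = classifier("example.jpg")
for result in results:
    print(f"{result['label']}: {result['score']:.3f}")
```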
To get started, install the necessary dependencies with pip:

```bash
pip install -r requirements.txt
```
Make sure you have a recent version of Python (3.9 or higher) installed on your machine.
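If you want to verify the environment before running anything, a quick check like the following prints the versions of the libraries this project is assumed to rely on (transformers, torch, gradio, and Pillow); adjust the list to whatever requirements.txt actually pins.

```python
# Quick environment check: prints the versions of the libraries this
# project is assumed to depend on.
import torch
import transformers
import gradio
import PIL

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("gradio:", gradio.__version__)
print("Pillow:", PIL.__version__)
```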
To use the NSFW Image Detection model, follow these steps:
1. Clone the repository:

   ```bash
   git clone https://github.com/pedropqv/nsfw-image-detection.git
   cd nsfw-image-detection
   ```

2. Run the Gradio app:

   ```bash
   python app.py
   ```

3. Access the web interface: open your browser and go to `http://localhost:7860` to start using the model.
You can upload images and see the classification results in real-time.
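For orientation, a minimal Gradio app along these lines could look like the sketch below. It is not the exact contents of app.py; the checkpoint path and the three-class label display are assumptions.

```python
import gradio as gr
from transformers import pipeline

# Placeholder checkpoint path; replace with the repository's fine-tuned model.
classifier = pipeline("image-classification", model="./model")

def classify(image):
    # The pipeline accepts a PIL image and returns label/score pairs.
    results = classifier(image)
    return {r["label"]: float(r["score"]) for r in results}

demo = gr.Interface(
    fn=classify,
    inputs=gr.Image(type="pil"),
    outputs=gr.Label(num_top_classes=3),
    title="NSFW Image Detection",
)

if __name__ == "__main__":
    demo.launch(server_port=7860)
```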
If you're interested in training the model further, follow these steps:
1. Prepare your dataset: organize your images into folders by category (e.g., explicit, suggestive, safe).

2. Update the configuration file: modify `config.yaml` to point to your dataset.

3. Start training (see the sketch after this list for what such a script might look like):

   ```bash
   python train.py
   ```

4. Monitor the training process: check the logs for any issues and track the training progress.

5. Save the model: once training is complete, make sure to save your model for future use.
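To make the steps above concrete, here is a rough end-to-end sketch of what such a training script might do. It is not the repository's train.py: the config.yaml keys (`data_dir`, `base_model`), the folder layout, and the hyperparameters are assumptions, and it leans on the Hugging Face `datasets` imagefolder loader and the `Trainer` API.

```python
import yaml
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    SiglipForImageClassification,
    Trainer,
    TrainingArguments,
)

# Hypothetical config keys; adjust to match your config.yaml.
with open("config.yaml") as f:
    config = yaml.safe_load(f)

# Expects data_dir/<label>/image.jpg, e.g. explicit/, suggestive/, safe/.
dataset = load_dataset("imagefolder", data_dir=config["data_dir"])
labels = dataset["train"].features["label"].names

# base_model could be, for example, a siglip2-base-patch16-256 checkpoint.
processor = AutoImageProcessor.from_pretrained(config["base_model"])
model = SiglipForImageClassification.from_pretrained(
    config["base_model"],
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
    ignore_mismatched_sizes=True,
)

def transform(batch):
    # Convert PIL images to the pixel tensors the model expects.
    batch["pixel_values"] = [
        processor(images=img.convert("RGB"), return_tensors="pt")["pixel_values"][0]
        for img in batch["image"]
    ]
    return batch

dataset = dataset.with_transform(transform)

def collate(examples):
    # Stack per-example tensors into a single training batch.
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["label"] for e in examples]),
    }

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="checkpoints",
        per_device_train_batch_size=16,
        num_train_epochs=3,
        remove_unused_columns=False,
    ),
    train_dataset=dataset["train"],
    data_collator=collate,
)
trainer.train()
trainer.save_model("checkpoints/final")
```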
We welcome contributions! If you'd like to contribute to this project, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them.
- Push your changes to your fork.
- Open a pull request.
Please ensure your code follows the existing style and includes tests where applicable.
This project is licensed under the MIT License. See the LICENSE file for details.
For questions or suggestions, feel free to open an issue or contact me directly.
You can find the latest releases of this project on the repository's Releases page. Download the files you need to get started with the model.
For more details on how to use the released versions, please refer to the documentation provided in the releases section.
The NSFW Image Detection project offers a robust solution for identifying and categorizing explicit content in images. By utilizing state-of-the-art models and easy-to-use interfaces, this tool can help enhance content moderation across various platforms.
Explore the repository, contribute, and help improve this valuable resource for the community.
Thank you for your interest!