---
title: YoloV3 GradCam
emoji: 🦀
colorFrom: pink
colorTo: yellow
sdk: gradio
sdk_version: 3.40.1
app_file: app.py
pinned: false
license: mit
---
# YOLO V2 & V3 and Object Detection Techniques

## How to Use the App
The app has two tabs:

- YoloV3 Object Detection : Upload your own image of dimensions 416 x 416 pixels, or choose one of the provided example images, to detect objects and visualize the Class Activation Maps using GradCAM. You can adjust the number of top predicted classes, select multiple target layers from the trained model, control the transparency of the overlay, and show/hide the GradCAM visualization (a minimal sketch of the overlay step follows below).
- GradCam Visualization : View a gallery of misclassified images from the PASCAL VOC test dataset. You can control the transparency of the overlay, select a target layer, set the number of misclassified examples shown, and show/hide the GradCAM overlay.
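The GradCAM overlay follows the usual pattern of computing a class activation heat map for a chosen layer and blending it with the input image. The snippet below is only a minimal sketch of that pattern using the pytorch-grad-cam package with a stand-in torchvision backbone; the Space's actual YOLOv3 model, layer choices, and wiring live in app.py and may differ.

```python
import numpy as np
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

# Stand-in backbone for illustration only; the Space loads its own YOLOv3 model.
model = resnet18(weights=None).eval()

# "Network Layers" in the UI corresponds to the GradCAM target layers;
# here the last residual block is used as an example target.
target_layers = [model.layer4[-1]]
cam = GradCAM(model=model, target_layers=target_layers)

# A 416 x 416 RGB image scaled to [0, 1], matching the app's expected input size.
rgb_img = np.random.rand(416, 416, 3).astype(np.float32)
input_tensor = torch.from_numpy(rgb_img).permute(2, 0, 1).unsqueeze(0)

# Heat map for the top-scoring class (targets=None), shape H x W in [0, 1].
grayscale_cam = cam(input_tensor=input_tensor, targets=None)[0]

# "Transparency" in the UI maps to the blend weight between image and heat map.
overlay = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True, image_weight=0.5)
```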
Each tab exposes the following controls:

- YoloV3 Object Detection
  - Input Image : Upload your own image of dimensions 416 x 416 pixels or select one of the example images given below.
  - Enable GradCAM : Shows the GradCAM overlay on the input image. Uncheck it to view the original image.
  - Network Layers : Select the target layers for GradCAM visualization. The values range from -4 to -1, and the defaults are -2 and -1.
  - Transparency : Control the transparency of the GradCAM overlay. The default value is 0.5.
  - Threshold : Control the threshold used when plotting boxes on the images.
- GradCam Visualization
  - Input Image : Upload your own image of dimensions 416 x 416 pixels or select one of the example images given below.
  - Network Layer : Select the target layer for GradCAM visualization. The default value is -2.
  - Transparency : Control the transparency of the GradCAM overlay. The default value is 0.5.
  - Enable GradCAM : Shows the GradCAM overlay on the misclassified images. Uncheck it to view the original images.
  - Threshold : Control the threshold used when plotting boxes on the images.
- After adjusting the parameters, click the Submit button to see the results.
- To reset the parameters back to their defaults, click the Clear button.
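For reference, the tab-and-control layout described above can be expressed with Gradio Blocks roughly as follows. This is an illustrative sketch under Gradio 3.x, not the Space's actual app.py; the component choices, default values, and the run_inference stub are assumptions.

```python
import gradio as gr

def run_inference(image, enable_gradcam, layers, transparency, threshold):
    # Placeholder: the real app runs YOLOv3, draws boxes above the threshold,
    # and optionally blends in the GradCAM overlay at the given transparency.
    return image

with gr.Blocks() as demo:
    with gr.Tab("YoloV3 Object Detection"):
        inp = gr.Image(label="Input Image")
        enable = gr.Checkbox(value=True, label="Enable GradCAM")
        layers = gr.CheckboxGroup(["-4", "-3", "-2", "-1"], value=["-2", "-1"], label="Network Layers")
        alpha = gr.Slider(0.0, 1.0, value=0.5, label="Transparency")
        thresh = gr.Slider(0.0, 1.0, value=0.5, label="Threshold")
        out = gr.Image(label="Output")
        submit = gr.Button("Submit")
        submit.click(run_inference, [inp, enable, layers, alpha, thresh], out)
        gr.ClearButton([inp, out])
    with gr.Tab("GradCam Visualization"):
        gr.Markdown("Analogous controls for the misclassified-image gallery.")

demo.launch()
```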
## Training code

The PyTorch Lightning code used to train and validate the model can be viewed at https://github.com/TharunSivamani/ERA-V1/blob/main/Session%2013/S13.ipynb.
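For orientation, a LightningModule for this kind of model typically looks like the skeleton below. This is only an outline; the actual module, YOLO loss, and hyperparameters are defined in the linked S13 notebook.

```python
import torch
import pytorch_lightning as pl

class YoloV3Module(pl.LightningModule):
    """Illustrative skeleton only; see the linked S13 notebook for the real module."""

    def __init__(self, model, loss_fn, lr=1e-3):
        super().__init__()
        self.model = model      # YOLOv3 network (assumed defined elsewhere)
        self.loss_fn = loss_fn  # YOLO loss over the prediction scales (assumed)
        self.lr = lr

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        images, targets = batch
        loss = self.loss_fn(self(images), targets)
        self.log("train_loss", loss, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        images, targets = batch
        loss = self.loss_fn(self(images), targets)
        self.log("val_loss", loss, prog_bar=True)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

# Typical usage (data loaders and model construction omitted):
# trainer = pl.Trainer(max_epochs=40)
# trainer.fit(YoloV3Module(model, loss_fn), train_loader, val_loader)
```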
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Credits
- This app is built using the Gradio library (https://www.gradio.app/) for interactive model interfaces.
- The PASCAL VOC dataset (https://www.kaggle.com/datasets/aladdinpersson/pascal-voc-dataset-used-in-yolov3-video) is used for training and evaluation.
- The PyTorch library (https://pytorch.org/) is used for the deep learning model and GradCAM visualization.
- The PyTorch Lightning framework (https://lightning.ai/) is used for training and related workflow steps.
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference