---
base_model:
- Ultralytics/YOLOv8
pipeline_tag: object-detection
---

# Backbone Finetuned YOLO Models for Oriented Object Detection

## Overview
This repository contains finetuned YOLO models built on a shared backbone in two sizes: Nano and Small. These models were trained and evaluated on the DIOR dataset, showcasing their performance on satellite object detection tasks. The finetuning process aimed to optimize both training time and detection metrics.

These YOLO models were finetuned to perform [Oriented Bounding Box detection](https://docs.ultralytics.com/es/tasks/obb/#visual-samples).

## Usage
The models can be used for oriented object detection tasks and are compatible with the [YOLO framework](https://docs.ultralytics.com). Detailed instructions for downloading and using these models are provided in the `README`.

## Dataset
The models were trained on the [DIOR Dataset](https://paperswithcode.com/dataset/dior), which is tailored for detecting elements in satellite images. Evaluation metrics include mAP50 and mAP50-95 for a comprehensive assessment of detection accuracy.

---

## Model Performance and Comparison Table

| Model            | Training Time (s) | mAP50  | mAP50-95 |
|------------------|-------------------|--------|----------|
| Backbone Nano    | 7861.18           | 0.7552 | 0.5905   |
| Backbone Small   | 7719.13           | 0.7824 | 0.6219   |

---

## Key Observations
- **Backbone Nano**: Achieves a respectable mAP50-95 of 0.5905, with moderate training time.
- **Backbone Small**: Outperforms the Nano model in both mAP50 and mAP50-95, while requiring slightly less training time.

---

## Examples


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/66605a873adae5fadc3ec6cf/aXRUjndIBsSgzlUwYF03X.jpeg)

![image/png](https://huggingface.co/pauhidalgoo/yolov8-DIOR/resolve/main/DIOR_yolov8s_confusion_matrix_normalized.png)

![image/png](https://huggingface.co/pauhidalgoo/yolov8-DIOR/resolve/main/DIOR_yolov8s_F1_curve.png)

---

## How to Use the Models
1. Clone the repository.
2. Install the `ultralytics` library (`pip install ultralytics`).
3. Load the model size of your choice.

```python
from ultralytics import YOLO

# Load a finetuned YOLO model
model = YOLO('path-to-model.pt')

# Perform inference; model() returns a list of Results, one per image
results = model('path-to-image.jpg')
results[0].show()
```
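Oriented detections are typically parameterized as a center point, width, height, and rotation angle rather than an axis-aligned box. For drawing or post-processing you often need the four corner points instead; a minimal sketch of that conversion in pure NumPy (the function name and box values here are illustrative, not part of the repository):

```python
import numpy as np

def xywhr_to_corners(cx, cy, w, h, r):
    """Convert an oriented box (center x/y, width, height, rotation in
    radians) to its four corner points as a (4, 2) array."""
    # Half-extent offsets of the unrotated box, one row per corner
    offsets = np.array([[-w / 2, -h / 2],
                        [ w / 2, -h / 2],
                        [ w / 2,  h / 2],
                        [-w / 2,  h / 2]])
    # 2-D rotation matrix for angle r
    rot = np.array([[np.cos(r), -np.sin(r)],
                    [np.sin(r),  np.cos(r)]])
    # Rotate each offset, then translate to the box center
    return offsets @ rot.T + np.array([cx, cy])

# Axis-aligned case (angle 0): corners are just center +/- half extents
print(xywhr_to_corners(10.0, 20.0, 4.0, 2.0, 0.0))
```

With an angle of 0 the result is the familiar axis-aligned rectangle `(8, 19) … (8, 21)`; a nonzero angle rotates the same four offsets around the center.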

---

If your goal is to use the models in MATLAB, you need to:
1. Clone the repository.
2. Clone the [Matlab YOLOv8](https://github.com/matlab-deep-learning/Pretrained-YOLOv8-Network-For-Object-Detection) repository.
3. Use the [convert_to_onnx.py](https://huggingface.co/pauhidalgoo/yolov8-indoor/blob/main/convert_to_onnx.py) script to export the model to ONNX.
4. Load the model in MATLAB as shown in [3_YOLO_matlab.mlx](https://huggingface.co/pauhidalgoo/yolov8-indoor/blob/main/3_YOLO_matlab.mlx).