MnLgt committed on
Commit e5a62f2 · 1 Parent(s): d7de9f0

fixed readme

Files changed (1)
  1. README.md +10 -101
README.md CHANGED
@@ -1,104 +1,13 @@
  ---
- license: apache-2.0
- tags:
- - vision
- - image-classification
- widget:
- - src: >-
-   https://huggingface.co/jordandavis/yolo-human-parse/blob/main/sample_images/image_one.jpg
-   example_title: Straight ahead
- - src: >-
-   Looking back
-   example_title: Teapot
- - src: >-
-   https://huggingface.co/jordandavis/yolo-human-parse/blob/main/sample_images/image_three.jpg
-   example_title: Sweats
+ title: YOLO Human Parse
+ emoji: 🧑
+ colorFrom: red
+ colorTo: red
+ sdk: gradio
+ sdk_version: 4.44.0
+ app_file: app.py
+ pinned: false
+ license: mit
  ---
 
-
- # YOLO Segmentation Model for Human Body Parts and Objects
-
- This repository contains a fine-tuned YOLO (You Only Look Once) segmentation model designed to detect and segment various human body parts and objects in images.
-
- ## Model Overview
-
- The model is based on the YOLO architecture and has been fine-tuned to detect and segment the following classes:
-
- 0. Hair
- 1. Face
- 2. Neck
- 3. Arm
- 4. Hand
- 5. Back
- 6. Leg
- 7. Foot
- 8. Outfit
- 9. Person
- 10. Phone
-
- ## Installation
-
- To use this model, you'll need to have the appropriate YOLO framework installed. Please follow these steps:
-
- 1. Clone this repository:
- ```
- git clone https://github.com/your-username/yolo-segmentation-human-parts.git
- cd yolo-segmentation-human-parts
- ```
-
- 2. Install the required dependencies:
- ```
- pip install -r requirements.txt
- ```
-
- ## Usage
-
- To use the model for inference, you can use the following Python script:
-
- ```python
- from ultralytics import YOLO
-
- # Load the model
- model = YOLO('path/to/your/model.pt')
-
- # Perform inference on an image
- results = model('path/to/your/image.jpg')
-
- # Process the results
- for result in results:
-     boxes = result.boxes  # Bounding boxes
-     masks = result.masks  # Segmentation masks
-     # Further processing...
- ```
-
- ## Training
-
- If you want to further fine-tune the model on your own dataset, please follow these steps:
-
- 1. Prepare your dataset in the YOLO format.
- 2. Modify the `data.yaml` file to reflect your dataset structure and classes.
- 3. Run the training script:
- ```
- python train.py --img 640 --batch 16 --epochs 100 --data data.yaml --weights yolov5s-seg.pt
- ```
-
- ## Evaluation
-
- To evaluate the model's performance on your test set, use:
-
- ```
- python val.py --weights path/to/your/model.pt --data data.yaml --task segment
- ```
-
- ## Contributing
-
- Contributions to improve the model or extend its capabilities are welcome. Please submit a pull request or open an issue to discuss proposed changes.
-
- ## License
-
- This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
-
- ## Acknowledgments
-
- - Thanks to the YOLO team for the original implementation.
- - Gratitude to all contributors who helped in fine-tuning and improving this model.
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference