magichampz committed on
Commit 6e6e2d0 · 1 Parent(s): 766fce4

Update README.md

Files changed (1)
  1. README.md +15 -120
README.md CHANGED
@@ -21,145 +21,40 @@ Achieved a 93% validation accuracy
  ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/652dc3dab86e108d0fea458c/E7UZXLWPvU_39cxrF49jD.gif)

  ## Uses
- The tflite model (model.tflite) was loaded into a Raspberry Pi running a live object detection script. <br>
- The Pi could then detect lego technic pieces in real time as the pieces rolled on a conveyor belt towards the Pi Camera

  ## Bias, Limitations and Recommendations
- The images of the lego pieces used to train the model were taken in

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]
-
- ### Recommendations
-
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-
  ## Training Details

  ### Training Data
  - **Data:** https://huggingface.co/datasets/magichampz/lego-technic-pieces

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]
-
  ### Training Procedure
-
  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]
-
- [More Information Needed]

- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
  ### Results

- [More Information Needed]
-
- #### Summary
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- Trained on Google Collabs using the GPU available
-
- #### Hardware
-
- Model loaded into a raspberry pi 3 connected to a PiCamera v2 <br>
- RPi mounted on a holder and conveyor belt set-up built with lego
-
-
- ## Citation
- Model implemented on the raspberry pi using the ideas from PyImageSearch's blog: <br>
- https://pyimagesearch.com/2017/09/18/real-time-object-detection-with-deep-learning-and-opencv/
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

  ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/652dc3dab86e108d0fea458c/E7UZXLWPvU_39cxrF49jD.gif)

  ## Uses
+ The files in the computer folder are meant to be run on your own computer.
+ You can use them to create and train your own deep learning model on your own data, and to test this model locally.
+ The model was trained on Google Colab, so create_training_data_array.py was used to package the training data as a NumPy array for upload to Colab.
+ After transferring the tflite model to your Pi, you can run the image classification script in the raspberry-pi folder to detect and classify lego pieces in real time; a sketch of that loop is given below.
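
The raspberry-pi script itself is not reproduced in this card, so the following is only a minimal sketch of the same loop. It assumes model.tflite sits in the working directory, that OpenCV can read the camera, and uses hypothetical label names (the real seven category names come from the training data):

```python
# Minimal sketch: classify camera frames with the exported TFLite model.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Hypothetical label order -- must match the order used during training.
LABELS = ["axle", "beam", "connector", "gear", "panel", "peg", "wheel"]

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, channels = inp["shape"]  # read the expected shape from the model

cap = cv2.VideoCapture(0)  # PiCamera exposed as a V4L2 device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    img = cv2.resize(frame, (width, height))
    if channels == 1:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[..., None]
    x = img.astype(np.float32)[None, ...] / 255.0  # normalize as in training
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    print(LABELS[int(np.argmax(probs))], float(probs.max()))
```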
 
  ## Bias, Limitations and Recommendations
+ The images of the lego pieces used to train the model were taken under room lighting, illuminated with a torchlight. <br>
+ To use the model, we would recommend recreating those conditions so that your photographs have similar lighting. <br>
+ Otherwise, it might be better to retrain the model on a new dataset of images taken under your own lighting conditions.
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

  ## Training Details

  ### Training Data
  - **Data:** https://huggingface.co/datasets/magichampz/lego-technic-pieces
+ More images can be captured by editing the motion_detection_and_image_classification.py script; a rough sketch of the idea follows below.

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
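
That script is not shown in this card; as a rough illustration only, motion can be detected by differencing consecutive camera frames with OpenCV and saving a frame whenever enough pixels change. The threshold values and output path below are assumptions, not the ones used in the actual script:

```python
# Rough sketch of motion-triggered image capture for building the dataset.
import cv2

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
count = 0
while count < 100:  # stop after 100 captures
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)  # pixels that changed since the last frame
    prev = gray
    moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(moving) > 500:  # assume a piece is passing the camera
        cv2.imwrite(f"captures/piece_{count:04d}.jpg", frame)
        count += 1
```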
 
 
 
  ### Training Procedure
+ The model was trained using the GPUs available on Google Colab. The Jupyter notebook loaded the data from an npy file (in the dataset card), which contained all the images together with their category labels; a minimal sketch of this flow is given below.
  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
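
The exact array layout and network architecture live in the notebook, so the sketch below rests on assumptions: the npy file is taken to be an object array of (image, label) pairs, the filename is made up, and the CNN is a generic stand-in rather than the actual model:

```python
# Minimal sketch of the Colab training flow (filename and architecture assumed).
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

data = np.load("training_data.npy", allow_pickle=True)  # filename assumed
images = np.stack([pair[0] for pair in data]).astype("float32") / 255.0
labels = np.array([pair[1] for pair in data])
if images.ndim == 3:  # add a channel axis for grayscale images
    images = images[..., None]

# The 80/20 train/test split described under Results.
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.2)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=images.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 piece categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Export for the Pi, producing a file like the model.tflite in this repo.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("model.tflite", "wb").write(tflite_model)
```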
 
+ #### Preprocessing
+ Images were normalized before being fed into the model. Their contrast was also increased using the increase_contrast_more function defined in the attached notebook; an approximate stand-in is sketched below.
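
The real increase_contrast_more is defined in the notebook, so the version below is only an assumed equivalent that stretches pixel intensities before the usual normalization:

```python
# Assumed stand-in for the notebook's increase_contrast_more function.
import numpy as np

def increase_contrast_more(img, low=2, high=98):
    """Stretch intensities between the given percentiles to the 0..255 range."""
    lo, hi = np.percentile(img, (low, high))
    stretched = np.clip((img - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    return stretched.astype(np.uint8)

def preprocess(img):
    img = increase_contrast_more(img)
    return img.astype("float32") / 255.0  # the normalization step from the card
```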
 
  ## Evaluation
  ### Results
+ Our model was trained on 6000 images across 7 different categories of lego technic pieces, with an 80/20 train/test split. <br>
+ It achieved 93% test accuracy; graphs of the accuracy and loss are shown below. <br>
+ A confusion matrix was also plotted to visualize the performance of the classifier. It shows the counts of correct versus incorrect predictions for each category.

+ ![Unknown-5](https://user-images.githubusercontent.com/91732309/190358182-58fa5671-263d-490b-8f54-616cb2daf764.png)
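
As a sketch, the confusion matrix can be computed from the held-out test set with scikit-learn. The names model, x_test and y_test refer to the training sketch above, and the plotting style of the actual figure may differ:

```python
# Sketch: confusion matrix of test-set predictions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

y_pred = np.argmax(model.predict(x_test), axis=1)  # predicted category per image
cm = confusion_matrix(y_test, y_pred)              # counts of true vs predicted
ConfusionMatrixDisplay(cm).plot()
plt.show()
```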