---
size_categories:
- 1K<n<10K
configs:
- config_name: S1
data_files:
- split: input
path: Input-Output Videos/SET 1/S1_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 1/S1_OUTPUT_*.mp4
- config_name: S2
data_files:
- split: input
path: Input-Output Videos/SET 2/S2_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 2/S2_OUTPUT_*.mp4
- config_name: S3
data_files:
- split: input
path: Input-Output Videos/SET 3/S3_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 3/S3_OUTPUT_*.mp4
- config_name: S4
data_files:
- split: input
path: Input-Output Videos/SET 4/S4_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 4/S4_OUTPUT_*.mp4
---
<style>
.vertical-container {
display: flex;
flex-direction: column;
gap: 60px;
}
.image-container img {
width: 560px;
height: auto;
border-radius: 15px;
}
.container {
width: 90%;
margin: 0 auto;
}
.container2 {
width: 70%;
margin: 0 auto;
}
.text-center {
text-align: center;
}
.score-amount {
margin: 20px;
}
.image-container {
display: flex;
justify-content: space-between;
}
</style>
# RouletteVision: a video dataset of over 1,000 roulette games split into input/output clips
(Disclaimer: these roulette recordings are for research purposes only and do not promote gambling.)
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link rel="stylesheet" href="styles.css">
<style>
html, body {
height: 100%;
margin: 0;
display: flex;
justify-content: center;
background-color: #f02222;
}
.above-text {
margin-top: 100px;
font-size: 2em;
color: rgba(255, 255, 255, 1);
text-align: center;
}
.center-gif {
display: flex;
justify-content: center;
align-items: center;
}
.center-gif img {
max-width: 50%;
max-height: 50%;
}
</style>
</head>
<body>
<div class="above-text ">ORIGINAL VIDEO</div>
<div class="center-gif">
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-ORIG.gif" alt="Centered GIF">
</div>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Simple Text Styling</title>
<style>
.simple-text {
font-family: Arial, sans-serif;
font-size: 1.5em;
color: rgba(235, 255, 51, 1);
text-align: center; /* Center-align the text */
}
</style>
</head>
<body>
<div class="simple-text"> THE SPLIT OF THE VIDEO </div>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Simple Text Styling</title>
<style>
.simple-text2 {
font-family: Arial, sans-serif;
font-size: 1em;
color: rgba(255, 255, 255, 0.7);
text-align: center; /* Center-align the text */
}
</style>
</head>
<body>
<div class="simple-text2"> (The original video and the algorithm that splits the video are not available yet. I'm considering publishing them, stay updated on my X: @mp_coder) </div>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<div class="container">
<div class="text-center">
</div>
<div class="image-container">
<div>
<h3 class="Input">INPUT: CIRCULAR MOVEMENT OF THE BALL</h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-INPUT.gif" width=500>
</div>
<div>
<h3 class="Output">OUTPUT: JUST BEFORE FALLING IN A NUMBER</h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-OUTPUT.gif" width=500>
</div>
</div>
</div>
</html>
# Overview
The purpose of this dataset is not to predict the next number of a roulette wheel for profit (that's impossible), but to share the videos I used for a Computer Vision project.
The project, called RouletteVision, is described in the Purpose section below. The dataset may be useful for other CV projects; I have not explored all the possibilities, so if you build anything with it I'll be happy to hear about it.
This dataset contains 1703 pairs of videos of roulette games. The first video of each pair, which I call the input, covers the part of the game where the ball is still spinning around the wheel.
The second video, the output, covers the last seconds of the game, where the ball stops circling the rim, drops into the inner part of the wheel, bounces a few times and finally lands in a number.
If the dataset raises enough interest, I may consider enlarging it: it's not hard to do, although it takes a bit of time.
Any inquiry about the project is welcome, either through Hugging Face or my [X account](https://x.com/mp_coder); ask whatever you like :)
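The configs above mirror the on-disk layout (`Input-Output Videos/SET n/Sn_INPUT_*.mp4` / `Sn_OUTPUT_*.mp4`). As a minimal sketch of how one set could be fetched and the clips paired locally, assuming the `huggingface_hub` library is installed and that matching indices in the INPUT/OUTPUT file names refer to the same game:

```python
# Sketch: download SET 1 of RouletteVision and pair input/output clips.
# Assumes matching indices in the file names identify the same game
# (an assumption for illustration, not a guarantee).
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mp-coder/RouletteVision-Dataset",
    repo_type="dataset",
    allow_patterns=["Input-Output Videos/SET 1/*.mp4"],
)

set_dir = Path(local_dir) / "Input-Output Videos" / "SET 1"
inputs = sorted(set_dir.glob("S1_INPUT_*.mp4"))
outputs = sorted(set_dir.glob("S1_OUTPUT_*.mp4"))

# Each input clip is matched with the output clip of the same game.
pairs = list(zip(inputs, outputs))
print(f"{len(pairs)} input/output pairs downloaded for SET 1")
```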
# Purpose of the dataset and analysis algorithms
I have used this dataset to develop an analysis algorithm whose extracted data is meant to feed a Neural Network. The code uses the OpenCV library and I'm in the process of making it public.
The idea is to extract data from the input and output videos, use that data to train a model and, finally, be able to upload an input video and get a precise approximation of the number the ball will fall into.
As you can imagine, it's impossible to use the algorithm to win money. On top of that, the algorithm is not yet working as it should; more about it will be published soon.
I suppose the dataset could be used to develop other ideas, and that's why I published it; it's also a fairly unique dataset.
(The actual output of the analysis algorithm is just a .txt file; the GIFs below only show how it works.)
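My extraction code is not public yet; purely as an illustration of the kind of per-frame data an input clip can yield (this is not the RouletteVision algorithm), a minimal OpenCV sketch that tracks the largest moving blob and writes its coordinates to a .txt could look like this:

```python
# Illustration only -- NOT the actual RouletteVision analysis algorithm.
# Tracks the largest moving region (usually the ball) in an input clip
# and writes one "frame x y" line per detection to a .txt file.
import cv2

cap = cv2.VideoCapture("S1_INPUT_0001.mp4")  # hypothetical file name
subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=25)

with open("ball_track.txt", "w") as out:
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            # Keep the largest moving contour and record the centre of its enclosing circle.
            largest = max(contours, key=cv2.contourArea)
            (x, y), _radius = cv2.minEnclosingCircle(largest)
            out.write(f"{frame_idx} {x:.1f} {y:.1f}\n")
        frame_idx += 1

cap.release()
```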
<!DOCTYPE html>
<html lang="en">
<div class="container2">
<div class="text-center">
</div>
<div class="image-container">
<div>
<h3 class="Input">EXAMPLE 1: INPUT VIDEO ANALYSIS </h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-EX1.gif" width=300>
</div>
<div>
<h3 class="Output">EXAMPLE 2: OUTPUT VIDEO ANALYSIS</h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-EX2.gif" width=300>
</div>
</div>
</div>
</html>
# Division of the dataset
The data is divided into 4 sets, each containing at least 300 input-output pairs. The criterion for the division is the length of the input video of each pair; videos shorter than 2 seconds have been discarded.
Set 1 contains input videos between 2 and 3 seconds long, set 2 between 3 and 4 seconds, set 3 between 4 and 5 seconds, and set 4 videos longer than 5 seconds.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Table Example</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<table>
<thead>
<tr>
<th></th>
<th>SET 1: 2-3 s</th>
<th>SET 2: 3-4 s</th>
<th>SET 3: 4-5 s</th>
<th>SET 4: >5 s</th>
</tr>
</thead>
<tbody>
<tr>
<th>INPUT</th>
<td>438</td>
<td>430</td>
<td>326</td>
<td>509</td>
</tr>
<tr>
<th>OUTPUT</th>
<td>438</td>
<td>430</td>
<td>326</td>
<td>509</td>
</tr>
</tbody>
</table>
</body>
</html>
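For reference, a clip's duration can be read back and binned into the same four sets with OpenCV; a small sketch (assuming the MP4 metadata exposes FPS and frame count, which it normally does):

```python
# Sketch: assign a clip to SET 1-4 by its duration, mirroring the split above.
import cv2

def clip_set(path):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    cap.release()
    duration = frames / fps if fps else 0.0
    if duration < 2:
        return None        # shorter than 2 s -> discarded
    if duration < 3:
        return "SET 1"
    if duration < 4:
        return "SET 2"
    if duration < 5:
        return "SET 3"
    return "SET 4"

print(clip_set("S1_INPUT_0001.mp4"))  # hypothetical file name
```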
# Future developments
After the release of this dataset, the next step is to publish the code that analyses the videos to extract data from them; that algorithm considers both the ball's and the wheel's movement.
Once I publish it, I will probably move on to another project; you can keep up with it on my X.
# X: [@mp_coder](https://x.com/mp_coder)
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Tip for CV Project Development:<br><br>💠Always approach a problem from different perspectives<br><br>I have spent a lot of time trying to improve an algorithm for video analysis through redefining it. <br>Even if it did improved, another factor has made it much more precise📹 </p>— Mister P coder - mainly CV🚀 (@mp_coder) <a href="https://twitter.com/mp_coder/status/1869730297576833238?ref_src=twsrc%5Etfw">December 19, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>