---
size_categories:
- 1K<n<10K
configs:
- config_name: S1
data_files:
- split: input
path: Input-Output Videos/SET 1/S1_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 1/S1_OUTPUT_*.mp4
- config_name: S2
data_files:
- split: input
path: Input-Output Videos/SET 2/S2_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 2/S2_OUTPUT_*.mp4
- config_name: S3
data_files:
- split: input
path: Input-Output Videos/SET 3/S3_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 3/S3_OUTPUT_*.mp4
- config_name: S4
data_files:
- split: input
path: Input-Output Videos/SET 4/S4_INPUT_*.mp4
- split: output
path: Input-Output Videos/SET 4/S4_OUTPUT_*.mp4
---
<style>
.vertical-container {
display: flex;
flex-direction: column;
gap: 60px;
}
.image-container img {
width: 560px;
height: auto;
border-radius: 15px;
}
.container {
width: 90%;
margin: 0 auto;
}
.container2 {
width: 70%;
margin: 0 auto;
}
.text-center {
text-align: center;
}
.score-amount {
margin: 20px;
}
.image-container {
display: flex;
justify-content: space-between;
}
</style>
# RouletteVision: a video dataset of >1000 roulette games divided into input/output
(Disclaimer: these roulette recordings are for research purposes only and do not promote gambling.)
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link rel="stylesheet" href="styles.css">
<style>
html, body {
height: 100%;
margin: 0;
display: flex;
justify-content: center;
background-color: #f02222;
}
.above-text {
margin-top: 100px;
font-size: 2em;
color: rgba(255, 255, 255, 1);
text-align: center;
}
.center-gif {
display: flex;
justify-content: center;
align-items: center;
}
.center-gif img {
max-width: 50%;
max-height: 50%;
}
</style>
</head>
<body>
<div class="above-text ">ORIGINAL VIDEO</div>
<div class="center-gif">
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-ORIG.gif" alt="Centered GIF">
</div>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Simple Text Styling</title>
<style>
.simple-text {
font-family: Arial, sans-serif;
font-size: 1.5em;
color: rgba(235, 255, 51, 1);
text-align: center; /* Center-align the text */
}
</style>
</head>
<body>
<div class="simple-text"> THE SPLIT OF THE VIDEO </div>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Simple Text Styling</title>
<style>
.simple-text2 {
font-family: Arial, sans-serif;
font-size: 1em;
color: rgba(255, 255, 255, 0.7);
text-align: center; /* Center-align the text */
}
</style>
</head>
<body>
<div class="simple-text2"> (The original video and the algorithm that splits the video are not available yet. I'm considering publishing them, stay updated on my X: @mp_coder) </div>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<div class="container">
<div class="text-center">
</div>
<div class="image-container">
<div>
<h3 class="Input">INPUT: CIRCULAR MOVEMENT OF THE BALL</h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-INPUT.gif" width=500>
</div>
<div>
<h3 class="Output">OUTPUT: JUST BEFORE FALLING IN A NUMBER</h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-OUTPUT.gif" width=500>
</div>
</div>
</div>
</html>
# Overview
The purpose of this dataset is not to predict the outcome of a roulette spin to make a profit (that's impossible), but to share the videos that I have used for a Computer Vision project.
The project, called RouletteVision, is described in the Purpose section below. This dataset may also be useful for other CV projects; I have not explored all the possibilities, so if you build anything with it I'll be happy to hear about it.
This dataset contains 1703 pairs of videos of roulette games. The first video of each pair, which I call the input, covers the part of the game where the ball is still spinning around the wheel.
The second video, the output, contains the last seconds of the game, where the ball stops spinning around, falls into the inner part of the wheel, bounces a few times and finally lands in a number.
If the dataset raises enough interest, I may consider adding more data: it's not hard to do, although it takes some time.
Any inquiry about the project is welcome, either through Hugging Face or my [X account](https://x.com/mp_coder); feel free to ask whatever you like :)
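If you want to fetch one set locally and pair up the clips, here is a minimal sketch using `huggingface_hub` (it assumes the folder layout declared in the YAML header above; adjust the patterns if the repository structure changes):

```python
# Minimal sketch: download SET 1 and pair its input/output clips locally.
# Assumes the "Input-Output Videos/SET 1/..." layout from the YAML header.
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mp-coder/RouletteVision-Dataset",
    repo_type="dataset",
    allow_patterns=["Input-Output Videos/SET 1/*.mp4"],
)

set_dir = Path(local_dir) / "Input-Output Videos" / "SET 1"
inputs = sorted(set_dir.glob("S1_INPUT_*.mp4"))
outputs = sorted(set_dir.glob("S1_OUTPUT_*.mp4"))
pairs = list(zip(inputs, outputs))  # one (input, output) pair per game
print(f"{len(pairs)} input/output pairs downloaded")
```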
# Purpose of the dataset and analysis algorithms
I have used this dataset to develop an analysis algorithm whose extracted data is meant to feed a Neural Network. The code uses the OpenCV library and I'm in the process of making it public.
The idea is to extract data from the input and output videos, use that data to train a model and, finally, be able to upload an input video and get a precise approximation of the number the ball will fall into.
As you can imagine, it's impossible to use the algorithm to win money. On top of that, the algorithm is not yet working as it should; more about it will be published soon.
I suppose the dataset could be used to develop other ideas, and that's why I published it; it's also a rather unique dataset.
(The actual output of the analysis algorithm is just a .txt file; the videos below only show how it works.)
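As a rough illustration of the kind of per-frame processing involved, one could track a small circular blob (the ball) across the frames of an input video with OpenCV. Note that this is **not** the project's actual algorithm (which is not published yet); it is only a hedged sketch, and the Hough transform parameters are guesses:

```python
# Illustrative sketch only: detect a circular blob (the ball) per frame with OpenCV.
# The real RouletteVision analysis code is not published yet; parameters here are guesses.
import cv2

def track_ball(video_path: str):
    positions = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)
        circles = cv2.HoughCircles(
            gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
            param1=100, param2=30, minRadius=3, maxRadius=15,
        )
        if circles is not None:
            x, y, _ = circles[0][0]
            positions.append((float(x), float(y)))  # ball centre in this frame
    cap.release()
    return positions  # trajectory that could later be fed to a model
```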
<!DOCTYPE html>
<html lang="en">
<div class="container2">
<div class="text-center">
</div>
<div class="image-container">
<div>
<h3 class="Input">EXAMPLE 1: INPUT VIDEO ANALYSIS </h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-EX1.gif" width=300>
</div>
<div>
<h3 class="Output">EXAMPLE 2: OUTPUT VIDEO ANALYSIS</h3>
<img src="https://huggingface.co/datasets/mp-coder/RouletteVision-Dataset/resolve/main/Examples/ONL-EX2.gif" width=300>
</div>
</div>
</div>
</html>
# Division of the dataset
The data is divided into 4 sets, each composed of at least 300 input-output pairs. The criterion for the division is the length of each pair's input video; videos shorter than 2 seconds have been discarded.
Set 1 contains input videos between 2 and 3 seconds long, set 2 between 3 and 4, set 3 between 4 and 5, and set 4 videos longer than 5 seconds.
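For reference, the duration-based bucketing described above can be reproduced from the clips themselves; a sketch with OpenCV, assuming the mp4 frame-count and FPS metadata are reliable:

```python
# Sketch: assign an input clip to SET 1-4 by its duration, as described above.
# Assumes the mp4 metadata (frame count / FPS) is reliable.
import cv2

def assign_set(video_path: str) -> int | None:
    cap = cv2.VideoCapture(video_path)
    frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    duration = frames / fps if fps else 0.0
    if duration < 2:
        return None  # shorter than 2 s: discarded from the dataset
    if duration < 3:
        return 1     # SET 1: 2-3 s
    if duration < 4:
        return 2     # SET 2: 3-4 s
    if duration < 5:
        return 3     # SET 3: 4-5 s
    return 4         # SET 4: longer than 5 s
```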
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Table Example</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<table>
<thead>
<tr>
<th>Split</th>
<th>SET 1: 2-3 s</th>
<th>SET 2: 3-4 s</th>
<th>SET 3: 4-5 s</th>
<th>SET 4: &gt;5 s</th>
</tr>
</thead>
<tbody>
<tr>
<th>INPUT</th>
<td>438</td>
<td>430</td>
<td>326</td>
<td>509</td>
</tr>
<tr>
<th>OUTPUT</th>
<td>438</td>
<td>430</td>
<td>326</td>
<td>509</td>
</tr>
</tbody>
</table>
</body>
</html>
# Future developments
After the release of this dataset, the next step is to publish the code that analyses the videos to extract data from them; that algorithm considers both the ball's and the wheel's movement.
Once I publish it, I will probably move on to another project; you can follow it on my X.
# X: [( @mp_coder )](https://x.com/mp_coder)
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Tip for CV Project Development:<br><br>💠Always approach a problem from different perspectives<br><br>I have spent a lot of time trying to improve an algorithm for video analysis through redefining it. <br>Even if it did improved, another factor has made it much more precise📹 </p>&mdash; Mister P coder - mainly CV🚀 (@mp_coder) <a href="https://twitter.com/mp_coder/status/1869730297576833238?ref_src=twsrc%5Etfw">December 19, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>