maxhuber committed
Commit df1c1c5 · 1 Parent(s): 9ad2f73

Updated readme, added repo card, fixed rgb/bgr swap

Files changed (3)
  1. README.md +12 -1
  2. deepsquid.png +0 -0
  3. helpers.py +1 -0
README.md CHANGED
```diff
@@ -5,4 +5,15 @@ colorFrom: gray
 colorTo: pink
 title: DeepSquid
 short_description: Deepfake Detection RNN
----
+---
+
+![](./deepsquid.png)
+
+The DeepSquid model uses [Mesonet](https://arxiv.org/abs/1809.00888) to predict deepfake confidence values for each sampled frame of a given YouTube video, and then applies a Recurrent Neural Network (RNN) to the sequence of these frame predictions to produce a final classification for the entire video.
+
+Trained using data from the open-source [Deepstar](https://www.zerofox.com/deepstar-open-source-toolkit/) toolkit, DeepSquid demonstrated promising results, even achieving 100% validation accuracy during training. However, its performance is limited by the relatively small and somewhat outdated dataset. Consequently, newer deepfake models or highly advanced video models like Sora are likely to evade detection by DeepSquid.
+
+Demo run time varies widely with internet connection, but loading a video usually takes about 10 seconds and classifying it about 20 seconds. The option to classify a video only becomes available once the video has properly loaded.
+
+You can try the demo here:
+[DeepSquid](https://deepsquid.vercel.app/)
```
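The two-stage idea described in the README — a per-frame detector emits one deepfake confidence per sampled frame, and a recurrent net folds that sequence into a single video-level verdict — can be sketched as follows. This is a minimal illustrative toy, not the repo's actual model: the weights and the single-unit recurrence are made up, and the per-frame scores stand in for real MesoNet outputs.

```python
import numpy as np

def rnn_video_score(frame_scores, w_h=0.6, w_x=0.9, b=-0.4):
    """Single-unit recurrent pass over per-frame deepfake confidences.

    Illustrative stand-in for DeepSquid's RNN stage: each step mixes the
    running hidden state with the next frame's confidence, and the final
    hidden state is squashed to a (0, 1) video-level score.
    """
    h = 0.0
    for x in frame_scores:
        h = np.tanh(w_h * h + w_x * x + b)   # recurrent update
    return 1.0 / (1.0 + np.exp(-4.0 * h))    # sigmoid of scaled final state

# Ten MesoNet-style per-frame confidences (stand-in values, not real outputs).
mostly_fake = [0.9, 0.8, 0.95, 0.85, 0.9, 0.92, 0.88, 0.9, 0.93, 0.87]
mostly_real = [0.1, 0.05, 0.2, 0.1, 0.15, 0.08, 0.12, 0.1, 0.05, 0.1]

print(rnn_video_score(mostly_fake) > 0.5)  # leans "fake"
print(rnn_video_score(mostly_real) < 0.5)  # leans "real"
```

The point of running an RNN over the sequence (rather than, say, averaging frame scores) is that it can weigh temporal patterns — e.g. a short burst of high-confidence frames — differently from uniformly noisy predictions.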
deepsquid.png ADDED
helpers.py CHANGED
```diff
@@ -118,6 +118,7 @@ def sample_frames_from_video_file(capture, sample_count=10, frames_per_sample=10
 
 
 def format_frames(frame, output_size):
+    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
     frame = tf.image.convert_image_dtype(frame, tf.float32)
     frame = tf.image.resize_with_pad(frame, *output_size)
     return frame
```
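The one-line fix above matters because OpenCV's `VideoCapture` returns frames with channels in BGR order, while TensorFlow image ops (and models trained on RGB images, like MesoNet) expect RGB. `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)` is equivalent to reversing the channel axis, sketched here with NumPy alone so no OpenCV install is needed:

```python
import numpy as np

# A 2x2 pure-blue frame in BGR layout: blue occupies channel 0.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255

# Read as RGB without converting, channel 0 is red -> the frame looks red.
# Reversing the channel axis is what COLOR_BGR2RGB does:
rgb = bgr[..., ::-1]

print(rgb[0, 0])  # [0, 0, 255] -- blue now sits in the RGB blue channel
```

Without the conversion, every frame fed to the model had its red and blue channels swapped relative to the data it was trained on, which silently degrades predictions — hence "fixed rgb/bgr swap" in the commit message.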