# LLaVA-Deepfake Model

## Overview
The `LLaVA-Deepfake` model is a fine-tuned version of `LLaVA-v1.5-13B`, specifically designed for detecting and analyzing deepfake images. This multimodal large language model (MLLM) not only identifies whether an image is a deepfake but also provides detailed explanations of manipulated areas, highlighting specific features such as irregularities in the eyes, mouth, or overall facial texture. The model leverages advanced vision and language processing capabilities, making it a powerful tool for forensic deepfake detection.

---

## Installation

Follow these steps to set up and run the `LLaVA-Deepfake` model:

### Step 1: Clone the Repository
Start by cloning the model repository:
```bash
git clone https://huggingface.co/pou876/llava-deepfake-model
cd llava-deepfake-model
```

### Step 2: Create a Python Environment
Set up a dedicated Python environment for running the model:
```bash
conda create -n llava_deepfake python=3.10 -y
conda activate llava_deepfake
pip install --upgrade pip
pip install -r requirements.txt
```

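As a quick sanity check after installation, you can confirm that PyTorch (which LLaVA builds on) is importable and can see your GPU. This is a minimal sketch, assuming the requirements file pulls in `torch`:

```python
# Sanity check: verify PyTorch is installed and whether a CUDA GPU is visible.
# Assumes `pip install -r requirements.txt` installed torch (LLaVA depends on it).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```

A 13B-parameter model, even with 4-bit loading, effectively requires a CUDA-capable GPU, so `CUDA available: False` here usually means the CUDA build of PyTorch was not installed.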
## Running the Model

### Step 1: Start the Controller
The controller manages the communication between components:
```bash
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```

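To confirm the controller came up, you can query it before starting any workers. This is a hedged sketch: it assumes your LLaVA version exposes the FastChat-style `/list_models` endpoint on the controller; if yours differs, watch the controller's log output instead:

```shell
# Ask the controller which model workers are currently registered.
# The /list_models endpoint is assumed from LLaVA's FastChat-derived
# controller; verify it exists in your version. The list will be empty
# until the model worker from Step 2 registers itself.
curl -s -X POST http://localhost:10000/list_models
```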
### Step 2: Start the Model Worker
The worker loads the deepfake detection model and processes inference requests. The `--load-4bit` flag loads the weights with 4-bit quantization, which substantially reduces the GPU memory needed for the 13B model:
```bash
python -m llava.serve.model_worker --host 0.0.0.0 \
    --controller http://localhost:10000 --port 40000 \
    --worker http://localhost:40000 \
    --model-path ./llava-deepfake-model --load-4bit
```

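If you only need one-off checks and want to skip the controller/worker/web stack, LLaVA also ships a terminal client. A minimal sketch, assuming the standard `llava.serve.cli` entry point works with this fine-tuned checkpoint (the image path is a placeholder):

```shell
# Single-image inference from the terminal, without the web UI.
# suspect.jpg is a placeholder; point it at the image you want to analyze.
python -m llava.serve.cli \
    --model-path ./llava-deepfake-model \
    --image-file suspect.jpg \
    --load-4bit
```

The CLI then prompts for text input, e.g. "Is this image a deepfake? Explain any manipulated regions."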
### Step 3: Start the Gradio Web Server
The Gradio web server provides a user-friendly interface for interacting with the model:
```bash
python -m llava.serve.gradio_web_server \
    --controller http://localhost:10000 --model-list-mode reload --share
```
Once the web server is running, it prints a local URL (e.g., http://127.0.0.1:7860) and, because of the `--share` flag, a temporary public share link. Open either link in your browser to start using the interface.