# WarpTuber

A real-time face animation tool that brings static images and videos to life using your webcam.

## Overview

WarpTuber lets you animate portraits or character images by mapping your facial expressions and movements onto them in real time. It supports both static images and video animations, making it well suited to virtual avatars, content creation, and streaming.

## Quick Start

1. Connect your webcam
2. Choose one of the following launch options:
   - `camera_image.bat` - for animating a static image
   - `camera_video.bat` - for using a video animation with facial expressions

## Setup Requirements

- Windows 10/11
- Webcam
- Python environment (included in the `venv` folder)
- NVIDIA GPU with CUDA support (recommended)

## Usage Instructions

### Using Static Images (`camera_image.bat`)

Use this mode when you want to animate a single static portrait or character image with your facial movements.

1. Place your image in `assets/examples/source/` and name it `main.jpg`
2. Run `camera_image.bat`
3. When prompted, select your webcam index (usually 0 for the default webcam)
4. The application will open with your animated image

### Using Video Animations (`camera_video.bat`)

Use this mode when you want to combine a pre-recorded animation/idle loop with your facial expressions.

1. Place your video in `assets/examples/source/` and name it `main.mp4`
2. Run `camera_video.bat`
3. When prompted, select your webcam index (usually 0 for the default webcam)
4. The application will open with your video animation, which responds to your facial movements

### File Requirements

- **Static image**: must be named `main.jpg` and placed in `assets/examples/source/`
- **Animation video**: must be named `main.mp4` and placed in `assets/examples/source/`

For best results:

- Images should be clear portraits with visible facial features
- Videos should be looping animations with consistent lighting
- Both should have dimensions of at least 512x512 pixels

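Because the launch scripts expect exact file names and locations, a quick Python check before launching can catch a missing or misnamed file. This is an illustrative sketch, not part of WarpTuber itself; the helper name and default path are my own:

```python
from pathlib import Path

# Expected file names, per the File Requirements above.
EXPECTED = {"static image": "main.jpg", "animation video": "main.mp4"}

def check_source_assets(root: str = "assets/examples/source") -> dict:
    """Return {description: present?} for each expected source file."""
    src = Path(root)
    return {kind: (src / name).is_file() for kind, name in EXPECTED.items()}
```

Running `check_source_assets()` from the repository root makes it obvious up front whether `main.jpg` or `main.mp4` is missing or misnamed.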
### Image Compatibility and Facial Landmark Detection

WarpTuber relies on facial landmark detection to animate images. Not all images will work properly, especially:

- Highly stylized anime or cartoon characters
- Images with unusual facial proportions
- Artwork with abstract facial features
- Images with poor lighting or low contrast
- Side-profile portraits (faces should be mostly front-facing)

If you encounter an error message like `"no face in [image path]! exit!"`, the system failed to detect facial landmarks in your image.

**Solutions for facial landmark detection issues:**

1. **Try a different image**: use photographs or realistic illustrations with clear facial features
2. **Enable animal mode**: for cartoon characters, try running with the `--animal` parameter:

    ```
    camera_image.bat --animal
    ```

3. **Adjust the image**: edit your image to make facial features more prominent:
    - Increase contrast around the eyes, nose, and mouth
    - Ensure the face is well lit and centered
    - Crop the image to focus more on the face
    - Convert stylized characters to a more realistic style using AI tools
4. **Pre-process with face enhancement**: use photo editing software to enhance facial features before bringing the image into WarpTuber

Note that even with these adjustments, some highly stylized images may not work with the current facial landmark detection system.

## Advanced Configuration

### Command Line Parameters

Both batch files support additional parameters:

- `--src_image [path]` - specify a custom source image/video path
- `--animal` - enable animal face mode
- `--paste_back` - enable background preservation (enabled by default)
- `--interactive` - enable interactive controls (enabled by default)
- `--advanced_ui` - enable advanced UI controls

Example: `camera_image.bat --src_image assets/my_custom_folder/portrait.jpg --animal`

### TensorRT Optimization

This repository includes pre-compiled TensorRT models for optimal performance. If you encounter issues with the included models, you may need to compile your own:

1. Navigate to the `checkpoints` directory and locate any existing `.trt` files:
   - Human model files: `checkpoints/liveportrait_onnx/*.trt`
   - Animal model files: `checkpoints/liveportrait_animal_onnx/*.trt`
   - Example path: `C:\FLivePort\WarpTuber\checkpoints\liveportrait_onnx\stitching_lip.trt`
2. Delete these `.trt` files if you're experiencing compatibility issues
3. Run `scripts/all_onnx2trt.bat` to recompile all models
   - This converts all ONNX models to TensorRT format optimized for your specific GPU
   - The conversion process may take several minutes to complete

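Steps 1 and 2 above can be scripted. The sketch below (the helper name is my own) lists every compiled TensorRT engine under `checkpoints` so you can review what would be deleted before removing anything:

```python
from pathlib import Path

def list_trt_files(checkpoints: str = "checkpoints") -> list[Path]:
    """Collect every compiled TensorRT engine (*.trt) under the checkpoints tree."""
    return sorted(Path(checkpoints).rglob("*.trt"))

# Review the list first; deleting is then a matter of calling .unlink()
# on each path, after which scripts/all_onnx2trt.bat rebuilds the engines.
```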
Note: Compiling your own TensorRT models is only necessary if the pre-compiled models don't work on your system. The compilation process creates models optimized specifically for your GPU hardware.

## Troubleshooting

- If no webcams are detected, ensure your camera is properly connected and not in use by another application
- If the animation appears laggy, try closing other GPU-intensive applications
- If you encounter model loading errors, run `scripts/all_onnx2trt.bat` to compile models for your specific GPU
- If you see the error `"no face in driving frame"`, ensure your webcam can clearly see your face with good lighting

## License

MIT License

Copyright (c) 2024 Kuaishou Visual Generation and Interaction Center

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

**Important Note**:
- The code of InsightFace is released under the MIT License.
- The models of InsightFace are for non-commercial research purposes only.
- If you want to use the WarpTuber project for commercial purposes, you should remove and replace InsightFace's detection models to fully comply with the MIT license.

## Acknowledgements

This project is based on [FasterLivePortrait](https://github.com/warmshao/FasterLivePortrait) by [warmshao](https://github.com/warmshao), an optimized implementation of the original [LivePortrait](https://github.com/KwaiVGI/LivePortrait) technology developed by the Kuaishou Technology team.

The original LivePortrait technology was created by Jianzhu Guo, Dingyun Zhang, Xiaoqiang Liu, Zhizhou Zhong, Yuan Zhang, Pengfei Wan, and Di Zhang of Kuaishou Technology and academic institutions.

WarpTuber is a customized implementation that simplifies the usage of this technology, with a focus on ease of use for content creators and streamers.

Key technologies used:
- Face landmark detection and tracking
- Neural rendering for portrait animation
- TensorRT optimization for real-time performance

Special thanks to the original developers for making this technology accessible and open source.

For more information about the original research, please visit [liveportrait.github.io](https://liveportrait.github.io/).