Commit fd36070 (verified) · AIWarper committed · Parent: 6d4abb3

Update README.md

Files changed: README.md (+167 −165)
# WarpTuber

A real-time face animation tool that brings static images and videos to life using your webcam.

Tutorial: https://www.youtube.com/watch?v=w1_KY5E7Aig

## Overview

WarpTuber allows you to animate portraits or character images by mapping your facial expressions and movements to them in real time. It supports both static images and video animations, making it well suited to virtual avatars, content creation, and streaming.

## Quick Start

1. Connect your webcam
2. Choose one of the following launch options:
   - `camera_image.bat` - For animating a static image
   - `camera_video.bat` - For using a video animation with facial expressions
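
The choice between the two launchers comes down to which source file you have in place. As a rough illustration, here is a small Python sketch (hypothetical, not shipped with WarpTuber) of that decision:

```python
from pathlib import Path

# Hypothetical helper: decide which launcher to run based on which
# source file is present in assets/examples/source/.
def pick_launcher(source_dir: str) -> str:
    src = Path(source_dir)
    if (src / "main.mp4").exists():
        return "camera_video.bat"   # video animation mode
    if (src / "main.jpg").exists():
        return "camera_image.bat"   # static image mode
    raise FileNotFoundError(f"place main.jpg or main.mp4 in {src}")
```

If both files are present, this sketch prefers the video; the real batch files simply assume the matching file exists.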

## Setup Requirements

- Windows 10/11
- Webcam
- Python environment (included in the `venv` folder)
- NVIDIA GPU with CUDA support (recommended)

## Usage Instructions

### Using Static Images (`camera_image.bat`)

Use this mode when you want to animate a single static portrait or character image with your facial movements.

1. Place your image in `assets/examples/source/` and name it `main.jpg`
2. Run `camera_image.bat`
3. When prompted, select your webcam index (usually 0 for the default webcam)
4. The application will open with your animated image
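
The webcam-index prompt in step 3 accepts a small non-negative integer, with 0 as the usual default. A minimal sketch of how such input could be validated (hypothetical; the actual prompt code may differ):

```python
# Hypothetical input validation for the webcam-index prompt:
# blank input falls back to the default camera (index 0).
def parse_camera_index(user_input: str, default: int = 0) -> int:
    text = user_input.strip()
    if not text:
        return default
    index = int(text)  # raises ValueError on non-numeric input
    if index < 0:
        raise ValueError("camera index must be non-negative")
    return index
```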

### Using Video Animations (`camera_video.bat`)

Use this mode when you want to combine a pre-recorded animation/idle loop with your facial expressions.

1. Place your video in `assets/examples/source/` and name it `main.mp4`
2. Run `camera_video.bat`
3. When prompted, select your webcam index (usually 0 for the default webcam)
4. The application will open with your video animation that responds to your facial movements

### File Requirements

- **Static Image**: Must be named `main.jpg` and placed in `assets/examples/source/`
- **Animation Video**: Must be named `main.mp4` and placed in `assets/examples/source/`

For best results:
- Images should be clear portraits with visible facial features
- Videos should be looping animations with consistent lighting
- Both should have dimensions of at least 512x512 pixels
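
The naming and size rules above can be checked up front. A small pre-flight sketch (hypothetical helper, not part of WarpTuber) that reports any violations given a path and the file's pixel dimensions:

```python
from pathlib import Path

# Hypothetical pre-flight check: verify the source file follows the
# naming convention and meets the suggested 512x512 minimum.
def check_source(path: str, width: int, height: int) -> list:
    problems = []
    if Path(path).name not in ("main.jpg", "main.mp4"):
        problems.append("file must be named main.jpg or main.mp4")
    if width < 512 or height < 512:
        problems.append("dimensions should be at least 512x512 pixels")
    return problems  # empty list means the file looks usable
```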

### Image Compatibility and Facial Landmark Detection

WarpTuber relies on facial landmark detection to animate images. Not all images will work properly, especially:

- Highly stylized anime or cartoon characters
- Images with unusual facial proportions
- Artwork with abstract facial features
- Images with poor lighting or low contrast
- Side-profile portraits (faces should be mostly front-facing)

If you encounter an error message like `"no face in [image path]! exit!"`, this means the system failed to detect facial landmarks in your image.

**Solutions for facial landmark detection issues:**

1. **Try a different image**: Use photographs or realistic illustrations with clear facial features
2. **Enable animal mode**: For cartoon characters, try running with the `--animal` parameter:

   ```
   camera_image.bat --animal
   ```

3. **Adjust the image**: Edit your image to make facial features more prominent:
   - Increase contrast around eyes, nose, and mouth
   - Ensure the face is well-lit and centered
   - Crop the image to focus more on the face
   - Convert stylized characters to a more realistic style using AI tools
4. **Pre-process with face enhancement**: Use photo editing software to enhance facial features before using with WarpTuber

Note that even with these adjustments, some highly stylized images may not work with the current facial landmark detection system.
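
For scripted workflows, the fallback from normal mode to animal mode amounts to appending one flag. A hypothetical wrapper (not part of the repo) that assembles the launch command:

```python
# Hypothetical command builder: add --animal for stylized/cartoon
# sources where the human landmark detector fails.
def build_command(source: str, animal: bool = False) -> list:
    cmd = ["camera_image.bat", "--src_image", source]
    if animal:
        cmd.append("--animal")  # switch to the animal landmark model
    return cmd
```

Such a list could be handed to `subprocess.run` on Windows; here it only illustrates the flag order.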

## Advanced Configuration

### Command Line Parameters

Both batch files support additional parameters:

- `--src_image [path]` - Specify a custom source image/video path
- `--animal` - Enable animal face mode
- `--paste_back` - Enable background preservation (enabled by default)
- `--interactive` - Enable interactive controls (enabled by default)
- `--advanced_ui` - Enable advanced UI controls

Example: `camera_image.bat --src_image assets/my_custom_folder/portrait.jpg --animal`
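
As a sketch of how these flags relate to each other, here is one way they could be modeled with Python's `argparse` (an illustration of the documented interface, not the project's actual option handling):

```python
import argparse

# Sketch of the documented launcher flags as an argparse parser.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="WarpTuber launcher flags")
    parser.add_argument("--src_image", help="custom source image/video path")
    parser.add_argument("--animal", action="store_true",
                        help="enable animal face mode")
    parser.add_argument("--paste_back", action="store_true", default=True,
                        help="preserve the background (on by default)")
    parser.add_argument("--interactive", action="store_true", default=True,
                        help="interactive controls (on by default)")
    parser.add_argument("--advanced_ui", action="store_true",
                        help="enable advanced UI controls")
    return parser

# Mirrors the example command line from the section above.
args = build_parser().parse_args(
    ["--src_image", "assets/my_custom_folder/portrait.jpg", "--animal"])
```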

### TensorRT Optimization

This repository includes pre-compiled TensorRT models for optimal performance. If you encounter issues with the included models, you may need to compile your own:

1. Navigate to the `checkpoints` directory and locate any existing `.trt` files:
   - Human model files: `checkpoints/liveportrait_onnx/*.trt`
   - Animal model files: `checkpoints/liveportrait_animal_onnx/*.trt`
   - Example path: `C:\FLivePort\WarpTuber\checkpoints\liveportrait_onnx\stitching_lip.trt`
2. Delete these `.trt` files if you're experiencing compatibility issues
3. Run `scripts/all_onnx2trt.bat` to recompile all models
   - This will convert all ONNX models to TensorRT format optimized for your specific GPU
   - The conversion process may take several minutes to complete
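
Steps 1 and 2 above (finding and removing stale `.trt` files) can be sketched as a small cleanup helper. This is a hypothetical script, not included in the repo; run it in dry-run mode first to see what would be deleted:

```python
from pathlib import Path

# Hypothetical cleanup helper: list (and optionally delete) stale .trt
# files so scripts/all_onnx2trt.bat can rebuild them for your GPU.
def clear_trt_models(checkpoints_dir: str, dry_run: bool = True) -> list:
    # Matches both liveportrait_onnx and liveportrait_animal_onnx.
    trt_files = sorted(Path(checkpoints_dir).glob("liveportrait*_onnx/*.trt"))
    if not dry_run:
        for f in trt_files:
            f.unlink()  # remove so the next compile starts clean
    return [str(f) for f in trt_files]
```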

Note: Compiling your own TensorRT models is only necessary if the pre-compiled models don't work on your system. The compilation process creates optimized models specifically for your GPU hardware.

## Troubleshooting

- If no webcams are detected, ensure your camera is properly connected and not in use by another application
- If the animation appears laggy, try closing other GPU-intensive applications
- If you encounter model loading errors, try running the `scripts/all_onnx2trt.bat` script to compile models for your specific GPU
- If you see the error `"no face in driving frame"`, ensure your webcam can clearly see your face with good lighting

## License

MIT License

Copyright (c) 2024 Kuaishou Visual Generation and Interaction Center

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

**Important Note**:
- The code of InsightFace is released under the MIT License.
- The models of InsightFace are for non-commercial research purposes only.
- If you want to use the WarpTuber project for commercial purposes, you should remove and replace InsightFace's detection models to fully comply with the MIT license.

## Acknowledgements

This project is based on [FasterLivePortrait](https://github.com/warmshao/FasterLivePortrait) by [warmshao](https://github.com/warmshao), which is an optimized implementation of the original [LivePortrait](https://github.com/KwaiVGI/LivePortrait) technology developed by the Kuaishou Technology team.

The original LivePortrait technology was created by Jianzhu Guo, Dingyun Zhang, Xiaoqiang Liu, Zhizhou Zhong, Yuan Zhang, Pengfei Wan, and Di Zhang from Kuaishou Technology and academic institutions.

WarpTuber is a customized implementation that simplifies the usage of this technology with a focus on ease of use for content creators and streamers.

Key technologies used:
- Face landmark detection and tracking
- Neural rendering for portrait animation
- TensorRT optimization for real-time performance

Special thanks to the original developers for making this technology accessible and open source.

For more information about the original research, please visit [liveportrait.github.io](https://liveportrait.github.io/).