# WarpTuber

A real-time face animation tool that brings static images and videos to life using your webcam.

Tutorial: https://www.youtube.com/watch?v=w1_KY5E7Aig

## Overview

WarpTuber allows you to animate portraits or character images by mapping your facial expressions and movements to them in real time. It supports both static images and video animations, making it well suited for virtual avatars, content creation, and streaming.
## Quick Start

1. Connect your webcam
2. Choose one of the following launch options:
   - `camera_image.bat` - for animating a static image
   - `camera_video.bat` - for using a video animation with facial expressions
## Setup Requirements

- Windows 10/11
- Webcam
- Python environment (included in the `venv` folder)
- NVIDIA GPU with CUDA support (recommended)
- Local Python installation
## Usage Instructions

### Using Static Images (`camera_image.bat`)

Use this mode when you want to animate a single static portrait or character image with your facial movements.

1. Place your image in `assets/examples/source/` and name it `main.jpg`
2. Run `camera_image.bat`
3. When prompted, select your webcam index (usually 0 for the default webcam)
4. The application will open with your animated image
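Before launching, it can save a restart to confirm the source image is where the application expects it. This is a small standalone sketch (not part of WarpTuber itself) that checks the path described above; `root` is your WarpTuber install folder.

```python
from pathlib import Path

def check_static_source(root):
    """Confirm the expected source image exists before running camera_image.bat."""
    source = Path(root) / "assets" / "examples" / "source" / "main.jpg"
    if not source.is_file():
        print(f"Missing source image: {source}")
        return False
    print(f"Found source image: {source}")
    return True
```

Run it from the repository root with `check_static_source(".")` before starting the batch file.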
### Using Video Animations (`camera_video.bat`)

Use this mode when you want to combine a pre-recorded animation/idle loop with your facial expressions.

1. Place your video in `assets/examples/source/` and name it `main.mp4`
2. Run `camera_video.bat`
3. When prompted, select your webcam index (usually 0 for the default webcam)
4. The application will open with your video animation that responds to your facial movements
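A similar pre-flight check applies to video mode. This sketch (again, not part of WarpTuber) verifies that `main.mp4` exists and carries the `ftyp` box that most MP4 files begin with, which catches files renamed from other formats.

```python
from pathlib import Path

def check_video_source(root):
    """Confirm main.mp4 exists and looks like a real MP4 file."""
    source = Path(root) / "assets" / "examples" / "source" / "main.mp4"
    if not source.is_file():
        print(f"Missing source video: {source}")
        return False
    # Most MP4 containers start with a box whose type field (bytes 4-8) is 'ftyp'.
    header = source.read_bytes()[:12]
    if header[4:8] != b"ftyp":
        print(f"{source} does not look like an MP4 file")
        return False
    return True
```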
## File Requirements

- **Static image:** must be named `main.jpg` and placed in `assets/examples/source/`
- **Animation video:** must be named `main.mp4` and placed in `assets/examples/source/`

For best results:

- Images should be clear portraits with visible facial features
- Videos should be looping animations with consistent lighting
- Both should have dimensions of at least 512x512 pixels
## Image Compatibility and Facial Landmark Detection

WarpTuber relies on facial landmark detection to animate images. Not all images will work properly, especially:

- Highly stylized anime or cartoon characters
- Images with unusual facial proportions
- Artwork with abstract facial features
- Images with poor lighting or low contrast
- Side-profile portraits (faces should be mostly front-facing)

If you encounter an error message like `no face in [image path]! exit!`, it means the system failed to detect facial landmarks in your image.
### Solutions for facial landmark detection issues

1. **Try a different image:** use photographs or realistic illustrations with clear facial features
2. **Enable animal mode:** for cartoon characters, try running with the `--animal` parameter: `camera_image.bat --animal`
3. **Adjust the image:** edit your image to make facial features more prominent:
   - Increase contrast around the eyes, nose, and mouth
   - Ensure the face is well lit and centered
   - Crop the image to focus more on the face
   - Convert stylized characters to a more realistic style using AI tools
4. **Pre-process with face enhancement:** use photo editing software to enhance facial features before using the image with WarpTuber

Note that even with these adjustments, some highly stylized images may not work with the current facial landmark detection system.
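The contrast adjustment suggested above does not require a full photo editor. Here is a minimal sketch using Pillow (assumed to be installed separately; it is not part of WarpTuber) with a hypothetical starting factor of 1.5:

```python
from PIL import Image, ImageEnhance

def boost_contrast(src_path, dst_path, factor=1.5):
    """Increase overall contrast to make facial features more prominent.

    The 1.5 factor is a starting point, not a tuned value; raise it
    gradually if landmark detection still fails.
    """
    image = Image.open(src_path).convert("RGB")
    enhanced = ImageEnhance.Contrast(image).enhance(factor)
    enhanced.save(dst_path)
    return enhanced.size
```

For example, `boost_contrast("portrait.jpg", "assets/examples/source/main.jpg")` writes the enhanced copy directly to the expected source location.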
## Advanced Configuration

### Command Line Parameters

Both batch files support additional parameters:

- `--src_image [path]` - specify a custom source image/video path
- `--animal` - enable animal face mode
- `--paste_back` - enable background preservation (enabled by default)
- `--interactive` - enable interactive controls (enabled by default)
- `--advanced_ui` - enable advanced UI controls

Example: `camera_image.bat --src_image assets/my_custom_folder/portrait.jpg --animal`
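The batch files forward these flags to a Python entry point. As a rough illustration of how the options listed above behave, here is a hypothetical `argparse` reconstruction (the real script's argument handling may differ; the defaults mirror the documented behavior):

```python
import argparse

def build_parser():
    """Hypothetical parser mirroring the documented launch parameters."""
    parser = argparse.ArgumentParser(description="WarpTuber launch options")
    parser.add_argument("--src_image", default="assets/examples/source/main.jpg",
                        help="custom source image/video path")
    parser.add_argument("--animal", action="store_true",
                        help="enable animal face mode")
    parser.add_argument("--paste_back", action="store_true", default=True,
                        help="preserve the background (enabled by default)")
    parser.add_argument("--interactive", action="store_true", default=True,
                        help="interactive controls (enabled by default)")
    parser.add_argument("--advanced_ui", action="store_true",
                        help="enable advanced UI controls")
    return parser
```

With the example invocation above, `parse_args(["--src_image", "assets/my_custom_folder/portrait.jpg", "--animal"])` yields a custom source path with animal mode on, while `paste_back` and `interactive` stay enabled.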
### TensorRT Optimization

This repository includes pre-compiled TensorRT models for optimal performance. If you encounter issues with the included models, you may need to compile your own:

1. Navigate to the `checkpoints` directory and locate any existing `.trt` files:
   - Human model files: `checkpoints/liveportrait_onnx/*.trt`
   - Animal model files: `checkpoints/liveportrait_animal_onnx/*.trt`
   - Example path: `C:\FLivePort\WarpTuber\checkpoints\liveportrait_onnx\stitching_lip.trt`
2. Delete these `.trt` files if you're experiencing compatibility issues
3. Run `scripts/all_onnx2trt.bat` to recompile all models
   - This will convert all ONNX models to TensorRT format optimized for your specific GPU
   - The conversion process may take several minutes to complete

Note: Compiling your own TensorRT models is only necessary if the pre-compiled models don't work on your system. The compilation process creates optimized models specifically for your GPU hardware.
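Steps 1 and 2 above can be scripted. This standalone sketch lists the `.trt` engines under the two model folders named above and, when asked, deletes them so `scripts/all_onnx2trt.bat` can rebuild from the ONNX files:

```python
from pathlib import Path

def find_trt_files(checkpoints_dir, delete=False):
    """List (and optionally delete) compiled TensorRT engines under checkpoints/."""
    found = []
    for subdir in ("liveportrait_onnx", "liveportrait_animal_onnx"):
        # glob() on a missing folder simply yields nothing, so this is safe
        # even if only one of the two model sets is installed.
        for trt in sorted((Path(checkpoints_dir) / subdir).glob("*.trt")):
            found.append(trt)
            if delete:
                trt.unlink()
    return found
```

Call `find_trt_files("checkpoints")` first to review what would be removed, then `find_trt_files("checkpoints", delete=True)` before recompiling.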
## Troubleshooting

- If no webcams are detected, ensure your camera is properly connected and not in use by another application
- If the animation appears laggy, try closing other GPU-intensive applications
- If you encounter model loading errors, try running the `scripts/all_onnx2trt.bat` script to compile models for your specific GPU
- If you see the error `no face in driving frame`, ensure your webcam can clearly see your face with good lighting
## License

MIT License
Copyright (c) 2024 Kuaishou Visual Generation and Interaction Center
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
### Important Note
- The code of InsightFace is released under the MIT License.
- The models of InsightFace are for non-commercial research purposes only.
- If you want to use the WarpTuber project for commercial purposes, you should remove and replace InsightFace's detection models to fully comply with the MIT license.
## Acknowledgements
This project is based on FasterLivePortrait by warmshao, which is an optimized implementation of the original LivePortrait technology developed by the Kuaishou Technology team.
The original LivePortrait technology was created by Jianzhu Guo, Dingyun Zhang, Xiaoqiang Liu, Zhizhou Zhong, Yuan Zhang, Pengfei Wan, and Di Zhang from Kuaishou Technology and academic institutions.
WarpTuber is a customized implementation that simplifies the usage of this technology with a focus on ease of use for content creators and streamers.
Key technologies used:
- Face landmark detection and tracking
- Neural rendering for portrait animation
- TensorRT optimization for real-time performance
Special thanks to the original developers for making this technology accessible and open source.
For more information about the original research, please visit liveportrait.github.io.