---
library_name: diffusers
pipeline_tag: image-to-image
license: mit
---

# EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling

EQ-VAE regularizes the latent space of pretrained autoencoders by enforcing equivariance under scaling and rotation transformations.

Project page: https://eq-vae.github.io/


## Model Description

This model is a regularized version of SD-VAE, fine-tuned with EQ-VAE regularization for 5 epochs on OpenImages.
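To make the regularization concrete, here is a minimal PyTorch sketch of the equivariance objective: the latent of a transformed image should match the transformed latent of the original image. The toy 3×3 convolution standing in for the SD-VAE encoder, the function name, and the rotation-only transform are all illustrative assumptions, not the authors' training code.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for the SD-VAE encoder (illustration only).
encoder = torch.nn.Conv2d(3, 4, kernel_size=3, padding=1)

def equivariance_loss(x: torch.Tensor, k: int = 1) -> torch.Tensor:
    """Penalize the mismatch between encode(rot90(x)) and rot90(encode(x))."""
    z_of_rotated = encoder(torch.rot90(x, k, dims=(-2, -1)))  # encode the rotated image
    rotated_z = torch.rot90(encoder(x), k, dims=(-2, -1))     # rotate the latent instead
    return F.mse_loss(z_of_rotated, rotated_z)

x = torch.randn(2, 3, 32, 32)  # random stand-in batch
loss = equivariance_loss(x)
```

In EQ-VAE this term is added to the usual autoencoder training losses, with scaling transformations applied alongside rotations.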

## Model Usage

You can load the model from the Hugging Face Hub:

```python
from diffusers import AutoencoderKL

model = AutoencoderKL.from_pretrained("zelaki/eq-vae")
```

## Metrics

Reconstruction performance of eq-vae-ema on the ImageNet validation set:

| Metric | Score |
|--------|-------|
| FID    | 0.82  |
| PSNR   | 25.95 |
| LPIPS  | 0.141 |
| SSIM   | 0.72  |