shreenithi20/fmnist-t2i-diffusion
This dataset contains Fashion MNIST images converted to 8×8×4 VAE latent representations, optimized for training transformer-based diffusion models.
image_latents.npy: Latent representations, shape (60000, 4, 8, 8)
label_encodings.npy: One-hot encoded class labels, shape (60000, 10)

import numpy as np
import torch
# Load latents
latents = np.load("image_latents.npy") # Shape: (60000, 4, 8, 8)
labels = np.load("label_encodings.npy") # Shape: (60000, 10)
# Use with diffusion models
model_input = torch.from_numpy(latents).float()
class_conditioning = torch.from_numpy(labels).float()
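Since the card describes these latents as inputs for transformer-based diffusion models, a common next step is to flatten each 8×8 latent grid into a sequence of 64 four-dimensional tokens. A minimal sketch, using random stand-ins with the dataset's shapes (the variable names and batch size are illustrative, not part of the dataset):

```python
import numpy as np
import torch

# Random stand-ins with the dataset's per-sample shapes
# (the real files are image_latents.npy and label_encodings.npy).
latents = np.random.randn(16, 4, 8, 8).astype(np.float32)
labels = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=16)]

x = torch.from_numpy(latents)  # (B, 4, 8, 8)
c = torch.from_numpy(labels)   # (B, 10), one-hot class conditioning

# Flatten the spatial grid into 64 tokens of dimension 4:
# (B, 4, 8, 8) -> (B, 4, 64) -> (B, 64, 4)
tokens = x.flatten(2).transpose(1, 2)
print(tokens.shape)  # torch.Size([16, 64, 4])
```

A linear projection from 4 to the model width, plus positional embeddings, would typically follow before the transformer blocks.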
The latents in this dataset are pre-scaled by a factor of 0.18215, the scaling the VAE encoder pipeline applies to normalize latent magnitudes. When using these latents, divide by 0.18215 before passing them to the VAE decoder.
Example of correct decoding:
# Load scaled latents
scaled_latents = np.load("image_latents.npy")
# Unscale before decoding
unscaled_latents = scaled_latents / 0.18215
# Now pass to the VAE decoder (convert to a torch tensor first if
# your decoder expects one, e.g. diffusers AutoencoderKL.decode)
decoded_images = vae_decoder(unscaled_latents)
This scaling factor is the standard one for the stabilityai/sd-vae-ft-mse model; it brings latent magnitudes close to unit variance, which stabilizes diffusion training.
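As a sanity check, the scale/unscale round trip can be sketched with synthetic latents (the array below is a random stand-in for image_latents.npy, not real data):

```python
import numpy as np

SCALE = 0.18215  # sd-vae-ft-mse latent scaling factor

# Random stand-in for image_latents.npy (real shape: (60000, 4, 8, 8))
raw = np.random.randn(8, 4, 8, 8).astype(np.float32)

scaled = raw * SCALE      # what the dataset stores
unscaled = scaled / SCALE  # what the VAE decoder expects

# Dividing by the factor exactly undoes the stored scaling
print(np.allclose(unscaled, raw, atol=1e-6))  # True
```

The key point is order of operations: train the diffusion model on the scaled latents as stored, and divide by 0.18215 only at decode time.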