Fashion MNIST 8×8 Latents Dataset

This dataset contains Fashion MNIST images converted to 8×8×4 VAE latent representations, optimized for training transformer-based diffusion models.

Dataset Information

  • Source: Fashion MNIST (60,000 training images)
  • Latent Resolution: 8×8×4 (compressed from 64×64×3)
  • Total Samples: 60,000
  • Data Type: float32
  • Latent Range: [-3.65, 3.66]
  • VAE Model: stabilityai/sd-vae-ft-mse
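
For reference, here is a minimal sketch of an encoding pipeline that would produce latents with these properties, assuming torchvision's FashionMNIST and diffusers' AutoencoderKL; the exact preprocessing used for this dataset is an assumption:

import torch
from torchvision import datasets, transforms
from diffusers import AutoencoderKL

# Assumed preprocessing: 28x28 grayscale -> 64x64 RGB in [-1, 1]
preprocess = transforms.Compose([
    transforms.Resize(64),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
fmnist = datasets.FashionMNIST("data", train=True, download=True, transform=preprocess)

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

with torch.no_grad():
    images = torch.stack([fmnist[i][0] for i in range(16)])  # small batch
    # Encode and apply the standard 0.18215 scaling
    latents = vae.encode(images).latent_dist.sample() * 0.18215
print(latents.shape)  # torch.Size([16, 4, 8, 8])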

Files

  • image_latents.npy: Latent representations (60000, 4, 8, 8)
  • label_encodings.npy: One-hot encoded class labels (60000, 10)

Class Labels (0-9)

  0. T-shirt/top
  1. Trouser
  2. Pullover
  3. Dress
  4. Coat
  5. Sandal
  6. Shirt
  7. Sneaker
  8. Bag
  9. Ankle boot
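
Since the labels are stored one-hot, a small helper can map a row back to its class name. CLASS_NAMES below is defined here for illustration (it is not shipped with the dataset) and assumes the one-hot index follows the standard Fashion MNIST order listed above:

import numpy as np

CLASS_NAMES = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]

labels = np.load("label_encodings.npy")  # (60000, 10) one-hot
class_ids = labels.argmax(axis=1)        # integer labels 0-9
print(CLASS_NAMES[class_ids[0]])         # class name of the first sample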

Usage

import numpy as np
import torch

# Load latents and one-hot labels
latents = np.load("image_latents.npy")  # Shape: (60000, 4, 8, 8)
labels = np.load("label_encodings.npy")  # Shape: (60000, 10)

# Convert to tensors for use with diffusion models
model_input = torch.from_numpy(latents).float()
class_conditioning = torch.from_numpy(labels).float()
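
For training, the two arrays can be wrapped in a standard PyTorch dataset; the batch size and shuffling below are illustrative choices, not part of the dataset:

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

latents = torch.from_numpy(np.load("image_latents.npy")).float()
labels = torch.from_numpy(np.load("label_encodings.npy")).float()

dataset = TensorDataset(latents, labels)
loader = DataLoader(dataset, batch_size=128, shuffle=True)

for latent_batch, label_batch in loader:
    # latent_batch: (128, 4, 8, 8), label_batch: (128, 10)
    break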

VAE Configuration

  • Input Resolution: 64×64×3 (Fashion MNIST 28×28 grayscale images resized to 64×64 and converted to 3 channels)
  • Output Resolution: 8×8×4 (compressed latents)
  • Compression Factor: 48× by element count (64×64×3 = 12,288 values → 8×8×4 = 256 values), i.e. 8× per spatial axis
  • Latent Scale: 0.18215
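
A quick sanity check against the numbers above; the near-unit standard deviation is the intended effect of the 0.18215 scaling:

import numpy as np

latents = np.load("image_latents.npy")
print(latents.min(), latents.max())  # approx -3.65 and 3.66, per the card
print(latents.std())                 # roughly 1 after the 0.18215 scaling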

Important Note on Scaling

The latents in this dataset are scaled by a factor of 0.18215. Following the standard Stable Diffusion convention, this factor is applied to the VAE encoder's output so the latents have approximately unit variance. When using these latents:

  • For training: Use the scaled latents as-is (the model learns to work with this scaling)
  • For decoding back to images: Manually divide the latents by 0.18215 before passing to the VAE decoder
  • For custom processing: If you need the original unscaled latents, divide by 0.18215

Example of correct decoding, using diffusers' AutoencoderKL to stand in for the decoder:

import numpy as np
import torch
from diffusers import AutoencoderKL

# Load the VAE named above
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

# Load scaled latents (a small batch for illustration)
scaled_latents = np.load("image_latents.npy")[:8]

# Unscale before decoding
unscaled_latents = torch.from_numpy(scaled_latents) / 0.18215

# Now pass to the VAE decoder; outputs are images in [-1, 1]
with torch.no_grad():
    decoded_images = vae.decode(unscaled_latents).sample
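
Decoded outputs land in [-1, 1]; a common follow-up (continuing the example above) converts them to displayable uint8 images:

# Map from [-1, 1] to [0, 255] and move channels last for viewing
images = ((decoded_images.clamp(-1, 1) + 1) / 2 * 255).to(torch.uint8)
images = images.permute(0, 2, 3, 1).cpu().numpy()  # (8, 64, 64, 3)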

This 0.18215 factor is the standard scaling for the stabilityai/sd-vae-ft-mse model; it brings the latents to approximately unit variance, which improves diffusion training stability.
