---
viewer: false
tags: [uv-script]
---

# Atlas Export

Generate and deploy interactive embedding visualizations to HuggingFace Spaces with a single command using Apple's [Embedding Atlas](https://github.com/apple/embedding-atlas) library!

![Example Visualization](https://huggingface.co/datasets/huggingface/documentation-images/resolve/58e8cf44ec5d74712775b7049b1203f9b0b15ebb/hub/atlas-dataset-library-screenshot.png)

## Quick Start

```bash
# Create a Space from any text dataset
uv run atlas-export.py stanfordnlp/imdb --space-name my-imdb-viz

# Your Space will be live at:
# https://huggingface.co/spaces/YOUR_USERNAME/my-imdb-viz
```

**⚠️ Note:** This approach is currently limited by the file-storage capacity of Spaces, so for large datasets consider using the `--sample` option to limit the number of points visualized!

## Examples

### Image Datasets

```bash
# Visualize image datasets with CLIP
uv run atlas-export.py \
    beans \
    --space-name bean-disease-atlas \
    --image-column image \
    --model openai/clip-vit-base-patch32
```

### Custom Embeddings

```bash
# Use a specific embedding model
uv run atlas-export.py \
    wikipedia \
    --space-name wiki-viz \
    --model nomic-ai/nomic-embed-text-v1.5 \
    --text-column text \
    --sample 50000
```

### Pre-computed Embeddings

```bash
# If you already have embeddings in your dataset
uv run atlas-export.py \
    my-dataset-with-embeddings \
    --space-name my-viz \
    --no-compute-embeddings \
    --x-column umap_x \
    --y-column umap_y
```
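
If your dataset doesn't have coordinate columns yet, the snippet below is a minimal sketch of one way to produce them, assuming `sentence-transformers` and `umap-learn` are installed; the dataset ids and column names (`umap_x`, `umap_y`) are placeholders that should match the flags you pass to the script.

```python
# Sketch: add pre-computed 2D coordinates to a dataset before visualizing it.
# Assumes sentence-transformers and umap-learn; all names are illustrative.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
import umap

ds = load_dataset("my-dataset", split="train")  # placeholder dataset id

# Embed the text column, then project the embeddings down to 2D
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = model.encode(ds["text"], batch_size=32, show_progress_bar=True)
coords = umap.UMAP(n_components=2).fit_transform(embeddings)

# Store the projected coordinates and push the enriched dataset to the Hub
ds = ds.add_column("umap_x", coords[:, 0].tolist())
ds = ds.add_column("umap_y", coords[:, 1].tolist())
ds.push_to_hub("my-dataset-with-embeddings")
```

You can then run the export with `--no-compute-embeddings --x-column umap_x --y-column umap_y` as shown above.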

### GPU Acceleration (HF Jobs)

```bash
# First, get your HF token (if not already set)
python -c "from huggingface_hub import get_token; print(get_token())"

# Run on HF Jobs with GPU using experimental UV support
hf jobs uv run --flavor t4-small \
    -s HF_TOKEN=your-token-here \
    https://huggingface.co/datasets/uv-scripts/build-atlas/raw/main/atlas-export.py \
    stanfordnlp/imdb \
    --space-name imdb-viz \
    --model sentence-transformers/all-mpnet-base-v2 \
    --sample 10000

# With larger GPU and custom batch size for faster processing
hf jobs uv run --flavor a10g-large \
    -s HF_TOKEN=your-token-here \
    https://huggingface.co/datasets/uv-scripts/build-atlas/raw/main/atlas-export.py \
    your-dataset \
    --space-name your-atlas \
    --batch-size 64 \
    --sample 50000 \
    --text-column output
```

Note: Replace `your-token-here` with your actual token. Available GPU flavors: `t4-small`, `t4-medium`, `l4x1`, `a10g-small`, `a10g-large`.

## Key Options

| Option           | Description                         | Default                |
| ---------------- | ----------------------------------- | ---------------------- |
| `dataset_id`     | HuggingFace dataset to visualize    | Required               |
| `--space-name`   | Name for your Space                 | Required               |
| `--model`        | Embedding model to use              | Auto-selected          |
| `--text-column`  | Column containing text              | "text"                 |
| `--image-column` | Column containing images            | None                   |
| `--sample`       | Number of samples to visualize      | All                    |
| `--batch-size`   | Batch size for embedding generation | 32 (text), 16 (images) |
| `--split`        | Dataset split to use                | "train"                |
| `--local-only`   | Generate locally without deploying  | False                  |
| `--output-dir`   | Local output directory              | Temp dir               |
| `--hf-token`     | HuggingFace API token               | From env/CLI           |

Run the script without arguments to see all options and more examples.

## How It Works

1. Loads the dataset from the HuggingFace Hub
2. Generates embeddings (or uses pre-computed ones)
3. Creates a static web app with the data bundled in
4. Deploys it to an HF Space

The resulting visualization runs entirely in the browser using WebGPU acceleration.
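
As a rough mental model (not the actual `atlas-export.py` code), steps 1, 2 and 4 map onto standard Hub and embedding calls roughly like this; step 3, generating the static viewer, is handled by the Embedding Atlas tooling itself:

```python
# Illustrative outline of the pipeline, not the actual atlas-export.py code.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from huggingface_hub import HfApi

repo_id = "YOUR_USERNAME/my-imdb-viz"  # placeholder Space id

# 1. Load (and optionally sample) the dataset from the Hub
ds = load_dataset("stanfordnlp/imdb", split="train").select(range(10_000))

# 2. Generate embeddings for the text column
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
vectors = model.encode(ds["text"], batch_size=32)

# 3. Build the static Embedding Atlas app from the data and embeddings
#    (done by the embedding-atlas tooling; "site/" stands in for its output)
site_dir = "site"

# 4. Deploy the generated files to a static Space
api = HfApi()
api.create_repo(repo_id, repo_type="space", space_sdk="static", exist_ok=True)
api.upload_folder(folder_path=site_dir, repo_id=repo_id, repo_type="space")
```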

## Credits

Built on [Embedding Atlas](https://github.com/apple/embedding-atlas) by Apple. See the [documentation](https://apple.github.io/embedding-atlas/) for more details about the underlying technology.

---

Part of the [UV Scripts](https://huggingface.co/uv-scripts) collection 🚀