configs:
- config_name: unicode_16d
data_files:
- split: train
path: data/unicode/train_16d-*.parquet
- config_name: unicode_19d
data_files:
- split: train
path: data/unicode/train_19d-*.parquet
- config_name: unicode_22d
data_files:
- split: train
path: data/unicode/train_22d-*.parquet
- config_name: unicode_24d
data_files:
- split: train
path: data/unicode/train_24d-*.parquet
- config_name: unicode_25d
data_files:
- split: train
path: data/unicode/train_25d-*.parquet
- config_name: unicode_29d
data_files:
- split: train
path: data/unicode/train_29d-*.parquet
- config_name: unicode_32d
data_files:
- split: train
path: data/unicode/train_32d-*.parquet
- config_name: unicode_50d
data_files:
- split: train
path: data/unicode/train_50d-*.parquet
- config_name: unicode_64d
data_files:
- split: train
path: data/unicode/train_64d-*.parquet
- config_name: unicode_100d
data_files:
- split: train
path: data/unicode/train_100d-*.parquet
- config_name: unicode_128d
data_files:
- split: train
path: data/unicode/train_128d-*.parquet
- config_name: unicode_256d
data_files:
- split: train
path: data/unicode/train_256d-*.parquet
- config_name: unicode_500d
data_files:
- split: train
path: data/unicode/train_500d-*.parquet
- config_name: unicode_512d
data_files:
- split: train
path: data/unicode/train_512d-*.parquet
- config_name: unicode_1024d
data_files:
- split: train
path: data/unicode/train_1024d-*.parquet
- config_name: unicode_1280d
data_files:
- split: train
path: data/unicode/train_1280d-*.parquet
- config_name: unicode_2048d
data_files:
- split: train
path: data/unicode/train_2048d-*.parquet
- config_name: unicode_2500d
data_files:
- split: train
path: data/unicode/train_2500d-*.parquet
- config_name: unicode_4096d
data_files:
- split: train
path: data/unicode/train_4096d-*.parquet
- config_name: wordnet_eng_16d
data_files:
- split: train
path: data/wordnet_eng/train_16d-*.parquet
- config_name: wordnet_eng_19d
data_files:
- split: train
path: data/wordnet_eng/train_19d-*.parquet
- config_name: wordnet_eng_22d
data_files:
- split: train
path: data/wordnet_eng/train_22d-*.parquet
- config_name: wordnet_eng_24d
data_files:
- split: train
path: data/wordnet_eng/train_24d-*.parquet
- config_name: wordnet_eng_25d
data_files:
- split: train
path: data/wordnet_eng/train_25d-*.parquet
- config_name: wordnet_eng_29d
data_files:
- split: train
path: data/wordnet_eng/train_29d-*.parquet
- config_name: wordnet_eng_32d
data_files:
- split: train
path: data/wordnet_eng/train_32d-*.parquet
- config_name: wordnet_eng_50d
data_files:
- split: train
path: data/wordnet_eng/train_50d-*.parquet
- config_name: wordnet_eng_64d
data_files:
- split: train
path: data/wordnet_eng/train_64d-*.parquet
- config_name: wordnet_eng_100d
data_files:
- split: train
path: data/wordnet_eng/train_100d-*.parquet
- config_name: wordnet_eng_128d
data_files:
- split: train
path: data/wordnet_eng/train_128d-*.parquet
- config_name: wordnet_eng_256d
data_files:
- split: train
path: data/wordnet_eng/train_256d-*.parquet
- config_name: wordnet_eng_500d
data_files:
- split: train
path: data/wordnet_eng/train_500d-*.parquet
- config_name: wordnet_eng_512d
data_files:
- split: train
path: data/wordnet_eng/train_512d-*.parquet
- config_name: wordnet_eng_1024d
data_files:
- split: train
path: data/wordnet_eng/train_1024d-*.parquet
- config_name: wordnet_eng_1280d
data_files:
- split: train
path: data/wordnet_eng/train_1280d-*.parquet
- config_name: wordnet_eng_2048d
data_files:
- split: train
path: data/wordnet_eng/train_2048d-*.parquet
- config_name: wordnet_eng_2500d
data_files:
- split: train
path: data/wordnet_eng/train_2500d-*.parquet
- config_name: wordnet_eng_4096d
data_files:
- split: train
path: data/wordnet_eng/train_4096d-*.parquet
dataset_info:
features:
- name: token_id
dtype: int32
- name: token
dtype: string
- name: definition
dtype: string
- name: volume
dtype: float32
- name: cardinal_id
dtype: int8
- name: crystal
sequence: float32
- name: origin
dtype: string
- name: unicode_codepoint
dtype: int32
- name: synset_id
dtype: string
- name: language
dtype: string
license: apache-2.0
task_categories:
- feature-extraction
tags:
- symbolic-embeddings
- geometric-embeddings
- vocabulary
- unicode
- wordnet
size_categories:
- 100K<n<1M
Research Update 9/13/2025
The multitude of tests I've run shows that, with weighted decay, these pentachora are more likely to collapse to zero than to retain utility when trained directly. However, when they are used as a starting point and then only slightly shifted along a trajectory towards a goal, they are far more likely to retain full cohesion and even be backtrackable. The constellation experiments suggest this is more than a plausible approach; it is the one most likely to work.
When the anchor [n, 1, dim] is frozen, the rest of the structure can be warped as long as it stays within the Cayley-Menger and Gram principles; you will still get some overlap with the other crystals unless you form a constellation of embeddings. This has helped on multiple occasions, but it does not uniformly create cohesion. I've yet to find a fully reliable form of it, but as it stands it may be one of the best routes to any sort of geometric vocabulary: a prefabricated, frozen anchor applied DIRECTLY in the token, not as some external representation to access.
The most potent setups keep a frozen copy of the crystal for masking and then alpha-mask the starting point; with that, the models are much less likely to collapse towards zero over many epochs. They will, however, collapse to zero eventually; that is the deterministic outcome of feeding SHA-256 valuations into an entropic decay engine. Essentially you want the crystal to be slightly editable, but not fully editable, and you want the model to see it but not fully learn it. This gives a kind of self-learned bias cohesion that self-regulates as long as you stick to the Cayley-Menger and Gram formulas.
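As a minimal sketch of that recipe (not code from this repo; PyTorch is assumed, and AnchoredPentachoron, alpha, and every other name here are illustrative): vertex 0 stays a frozen anchor, and the remaining vertices may drift only by a small, bounded, alpha-scaled offset.

import torch
import torch.nn as nn

class AnchoredPentachoron(nn.Module):
    # Illustrative sketch: vertex 0 is the frozen anchor; the other four vertices
    # may drift only by a small alpha-scaled, bounded offset.
    def __init__(self, crystal: torch.Tensor, alpha: float = 0.05):
        super().__init__()
        self.register_buffer("anchor", crystal[:1].clone())      # frozen [1, dim] anchor vertex
        self.register_buffer("base", crystal[1:].clone())        # frozen [4, dim] starting point
        self.delta = nn.Parameter(torch.zeros_like(self.base))   # small learnable warp
        self.alpha = alpha

    def forward(self) -> torch.Tensor:
        warped = self.base + self.alpha * torch.tanh(self.delta)  # bounded shift keeps most of the geometry
        return torch.cat([self.anchor, warped], dim=0)            # reassembled [5, dim] crystal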
If you try to fully train the simultaneous infinities adjacent to each other one by one, the crystals will end up overlapping unless you hard-gate them via tokens. If you hard-gate via tokens ([CLS1], [CLS2], etc.), you end up with a bulky objective that converges much more slowly and tends to corrupt and build incorrect shortcuts down the chain of depth, even with perfect geometry. The system finds its own incorrect shortcuts, which is ironic, because shortcuts are the whole point of the geometric structure. An incorrect shortcut is essentially learning to open a fridge with your foot: it works, but it's generally more difficult. Because these models are so small, they tend to have the "full hands" problem, which forces them to adapt and learn as best they can even when there's no room to solve the problem properly. Throw the ball near the hoop and sometimes it goes in, instead of learning the precise process of making it go in. Since the geometric structure is reinforced by multiple cosine-similarity assessments and the losses are gated by geometry, a full Cayley-Gram infinity decay needs to be applied DIRECTLY to the geometric structures, while an alternative decay route is applied to any standard linear layers used in conjunction (see the sketch below).
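One hedged way to give the geometric parameters a different decay route than the linear layers is plain PyTorch optimizer parameter groups; TinyModel, delta, and the decay values below are illustrative, not from this repo.

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    # Illustrative only: one geometric offset and one ordinary linear layer.
    def __init__(self, dim: int = 64):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(5, dim))  # geometric structure offset
        self.proj = nn.Linear(dim, dim)                 # standard linear layer

model = TinyModel()
geom = [p for n, p in model.named_parameters() if "delta" in n]
linear = [p for n, p in model.named_parameters() if "delta" not in n]

optimizer = torch.optim.AdamW([
    {"params": geom, "weight_decay": 0.0},     # spare the geometry from plain L2 decay
    {"params": linear, "weight_decay": 0.01},  # usual decay for the linear layers
])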
Retaining cohesive structures is a tricky paradigm, but it's very doable if you consult some of my training runs. Some of them formed fully robust crystal lattices with their own cohesive nature; others completely collapsed into themselves before they even began.
I've been at it all week and it's been tough, but enlightening.
Known Issue 9/7/2025
The repo's splits have been cleaned, and the current layout is the one that will be used moving forward unless Hugging Face changes its system.
My deepest apologies if you had already built a solution around the old layout and this change broke your code. My intention is to make this dataset convenient to use, not to require a multi-layered workaround just to access it.
Geometric Vocabulary Embeddings
This is the complete unified collection of all geometric vocabulary embeddings with optimized shard sizes to avoid rate limits and improve loading performance.
Optimizations
- Pooled small shards: Combined files smaller than 50MB unless 100k rows were reached
- Split large shards: Split files larger than 250MB
- Target shard size: ~100,000 rows per file
- Result: 56 optimized shards (from 191 original files)
Available Dimensions
19 dimensions available:
16d, 19d, 22d, 24d, 25d, 29d, 32d, 50d, 64d, 100d, 128d, 256d, 500d, 512d, 1024d, 1280d, 2048d, 2500d, 4096d
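The config names follow the pattern {origin}_{dim}d, so they can be generated programmatically; the dimension list below just mirrors the table above.

DIMS = [16, 19, 22, 24, 25, 29, 32, 50, 64, 100,
        128, 256, 500, 512, 1024, 1280, 2048, 2500, 4096]
CONFIGS = [f"{origin}_{d}d" for origin in ("unicode", "wordnet_eng") for d in DIMS]
# e.g. "unicode_64d", "wordnet_eng_1024d"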
Usage
from datasets import load_dataset

# Load a specific dimension config (returns a DatasetDict keyed by split)
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d")  # 64d is on the weak side without patch projection.
print(ds.keys())
# dict_keys(['train'])

# Load the train split directly
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d", split="train")
# for item in ds: ...

# Streaming is advised; the dataset is fair-sized, though its default footprint is not huge.
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d", split="train", streaming=True)
Formerly:
ds = load_dataset("AbstractPhil/geometric-vocab", "unicode", split="train_512d")  # My deepest apologies for the extra problems the old split layout imposed.
The new per-config layout effectively prevents load_dataset from automatically pulling every split at once, without any weird, janky workarounds.
Data Format and Code
import numpy as np

def _deterministic_pentachoron(center_vec: np.ndarray) -> np.ndarray:
    d = center_vec.shape[0]
    proposals = np.stack([
        center_vec,
        np.roll(center_vec, 1),
        np.roll(center_vec, 3) * np.sign(center_vec + 1e-8),
        np.roll(center_vec, 7) - center_vec,
        np.roll(center_vec, 11) + center_vec,
    ], 0).astype(np.float32)

    # Normalize rows with the L1 norm
    norms = np.sum(np.abs(proposals), axis=1, keepdims=True) + 1e-8
    Q = proposals / norms

    # Gram-Schmidt-style orthogonalization with L1 re-normalization
    for i in range(5):
        for j in range(i):
            Q[i] -= np.dot(Q[i], Q[j]) * Q[j]
        Q[i] /= (np.sum(np.abs(Q[i])) + 1e-8)

    # Apply scaling factors to spread the vertices around the center
    gamma = np.array([1.0, 0.9, -0.8, 1.1, 1.2], np.float32)
    X = np.zeros((5, d), np.float32)
    for i in range(5):
        X[i] = center_vec + gamma[i] * Q[i]

    # Center the pentachoron at the origin
    return X - X.mean(0, keepdims=True)
This function is currently hosted in the lattice_geometry repo and it is imperfect. Keep in mind it is meant to be a starting point.
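Each row also stores a volume field. The sketch below computes the Cayley-Menger volume of a 5-vertex crystal of shape (5, d); it is not necessarily the exact normalization used to produce the dataset's volume column, so treat it as a sanity check on relative magnitudes rather than a reproduction of the stored values.

import numpy as np
from math import factorial

def cayley_menger_volume(X: np.ndarray) -> float:
    # Volume of an n-simplex with (n+1) vertices X[i] via the Cayley-Menger determinant.
    n = X.shape[0] - 1
    D2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise squared distances
    B = np.ones((n + 2, n + 2), dtype=np.float64)
    B[0, 0] = 0.0
    B[1:, 1:] = D2
    det = np.linalg.det(B)
    vol2 = ((-1) ** (n + 1)) * det / ((2 ** n) * factorial(n) ** 2)
    return float(np.sqrt(max(vol2, 0.0)))  # clamp degenerate (near-flat) simplices to zero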
Dataset Structure
Each token is embedded as a 5-vertex simplex in n-dimensional space:
# Load and use; should paste straight into Colab and work with no fuss.
import numpy as np
from datasets import load_dataset

# streaming=False downloads the split; HF datasets currently cannot stream a split from disk.
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d", split="train", streaming=False)

test_crystal = {}

# This is NOT for production use. It only shows loading the repo, preparing one crystal, and breaking.
# For production you will want batched workers, prefetching, and proper accel, pyring, or a combination of multi-GPU-capable systems.
for item in ds:
    token = item["token"]           # Raw string or character depending on need; for the unicode origin this is a character.
    crystal_flat = item["crystal"]  # Flattened array that needs reshaping into the correct form.
    crystal = np.array(crystal_flat).reshape(5, 64)  # 5 vertices x 64 dimensions.
    volume = item["volume"]         # Cayley-Menger volume, used to track trajectory/delta and keep combination variants from overlapping.
    test_crystal = {
        "token": token,
        "crystal": crystal,
        "volume": volume,
    }
    break

print("Test case:\n")
print(test_crystal)
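Assuming the non-streaming ds from the block above, here is a hedged sketch of packing a slice of the split into a frozen embedding table of anchor vertices. Row order follows the dataset, not token_id, and every name below is illustrative.

import numpy as np
import torch

dim = 64
subset = ds.select(range(1024))  # keep the sketch small; drop this to pack the whole split
crystals = np.stack([np.asarray(r["crystal"], dtype=np.float32).reshape(5, dim) for r in subset])
anchors = torch.from_numpy(crystals[:, 0, :])                     # anchor vertex per token
embed = torch.nn.Embedding.from_pretrained(anchors, freeze=True)  # frozen geometric anchors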
Migration from Legacy Repositories
This optimized dataset replaces all individual repositories with better shard organization for improved performance.
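For reference, a before/after of the load call (the legacy form is the one quoted in the Usage section above):

from datasets import load_dataset

# Legacy layout: one "unicode" config with per-dimension splits
# ds = load_dataset("AbstractPhil/geometric-vocab", "unicode", split="train_512d")

# Current layout: one config per origin and dimension, each with a single "train" split
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_512d", split="train")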
License
Apache 2.0