Geometric Vocabulary Embeddings
Dataset Description
This dataset provides geometric "crystal" embeddings for symbolic vocabulary from multiple sources. Each token is embedded as a 5-vertex simplex in n-dimensional space, creating a unique geometric representation.
Dataset Summary
- Sources: unicode, wordnet_eng
- Embedding Dimensions: 19, 22, 24, 25, 29, 500, 2500
- Total Tokens: ~150,000
- Format: Parquet files with optional SafeTensors embeddings
Supported Tasks
- Symbolic representation learning
- Geometric embedding analysis
- Cross-lingual vocabulary alignment
- Character and word-level modeling
Dataset Structure
Data Instances
Each instance contains:
- `token_id`: Unique identifier for the token
- `token`: The actual token (character or word)
- `definition`: Textual definition or description
- `volume`: Geometric volume of the crystal embedding
- `cardinal_id`: Cardinal axis identifier
- `crystal`: 5×d dimensional embedding (5 vertices in d dimensions)
- Additional fields depending on source (e.g., `unicode_codepoint`, `synset_id`)
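As a concrete illustration of the schema, here is a hypothetical Unicode record (all values are invented for demonstration, not drawn from the dataset) with the flattened crystal reshaped back to its 5 vertices:

```python
import numpy as np

# Hypothetical record illustrating the fields above; the values are
# made up and do not come from the actual dataset.
example = {
    "token_id": 65,
    "token": "A",
    "definition": "LATIN CAPITAL LETTER A",
    "volume": 0.042,
    "cardinal_id": 3,
    "crystal": [0.0] * (5 * 19),     # flattened 5 x 19 crystal
    "unicode_codepoint": "U+0041",   # source-specific extra field
}

# Recover the (5, dim) vertex layout from the flat list.
vertices = np.asarray(example["crystal"]).reshape(5, -1)
assert vertices.shape == (5, 19)
```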
Data Splits
Each configuration has multiple splits based on embedding dimension:
unicode:
- `train_19d`: 19-dimensional embeddings
- `train_22d`: 22-dimensional embeddings
- `train_24d`: 24-dimensional embeddings
- `train_25d`: 25-dimensional embeddings
- `train_29d`: 29-dimensional embeddings
- `train_500d`: 500-dimensional embeddings
- `train_2500d`: 2500-dimensional embeddings
wordnet_eng:
- `train_19d`: 19-dimensional embeddings
- `train_22d`: 22-dimensional embeddings
- `train_24d`: 24-dimensional embeddings
- `train_25d`: 25-dimensional embeddings
- `train_29d`: 29-dimensional embeddings
- `train_500d`: 500-dimensional embeddings
- `train_2500d`: 2500-dimensional embeddings
Usage
from datasets import load_dataset
import numpy as np

# Load Unicode embeddings at 500 dimensions
dataset = load_dataset("AbstractPhil/geometric-vocab", "unicode", split="train_500d")

# Access the crystal embeddings
for item in dataset:
    token = item['token']
    crystal = item['crystal']  # Shape: (5 * dim,) flattened
    # Reshape to (5, dim) for 5 vertices in dim dimensions
    crystal_vertices = np.array(crystal).reshape(5, -1)
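Once reshaped to `(5, dim)`, a crystal lends itself to simple geometric analysis, e.g. its centroid and pairwise vertex distances. A minimal sketch, using a random stand-in crystal in place of a real record:

```python
import numpy as np

# Random stand-in for a real crystal; in practice, substitute a
# reshaped (5, dim) record from the dataset.
rng = np.random.default_rng(0)
crystal = rng.standard_normal((5, 19))

centroid = crystal.mean(axis=0)                     # center of the simplex
deltas = crystal[:, None, :] - crystal[None, :, :]  # pairwise vertex deltas
edge_lengths = np.linalg.norm(deltas, axis=-1)      # 5 x 5 distance matrix

assert centroid.shape == (19,)
assert edge_lengths.shape == (5, 5)
assert np.allclose(np.diag(edge_lengths), 0.0)      # zero self-distances
```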
Dataset Creation
Curation Rationale
This dataset provides deterministic, geometrically-structured embeddings for symbolic vocabulary. The crystal embedding approach ensures:
- Unique representation for each token
- Geometric interpretability
- Deterministic generation from definitions
- Multi-scale representation (different dimensions)
Source Data
- Unicode: Character names and categories from Unicode standard
- WordNet: Synsets with definitions and examples from WordNet/OMW
Embedding Algorithm
Each token is embedded as a 5-vertex simplex (4-simplex) in n-dimensional space:
- Generate cardinal axes from token definition
- Create orthonormal frame
- Position vertices using geometric constraints
- Compute volume for quality metrics
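The volume computation in the last step can be sketched with the standard Gram-determinant formula for simplex volume; the dataset's exact volume metric is undocumented, so this is an assumption rather than the authors' implementation:

```python
import math
import numpy as np

def simplex_volume(vertices):
    """Volume of a k-simplex from its (k+1) vertices in n-dim space.

    Uses the Gram-determinant formula V = sqrt(det(E E^T)) / k!, where E
    stacks the edge vectors from the first vertex. Assumed here as a
    plausible stand-in for the dataset's undocumented volume metric.
    """
    v = np.asarray(vertices, dtype=float)
    edges = v[1:] - v[0]      # k x n matrix of edge vectors
    gram = edges @ edges.T    # k x k Gram matrix
    k = edges.shape[0]
    return math.sqrt(max(np.linalg.det(gram), 0.0)) / math.factorial(k)

# Sanity check: the unit 4-simplex (origin plus the four standard basis
# vectors in R^4) has volume 1/4! = 1/24.
unit = np.vstack([np.zeros(4), np.eye(4)])
print(simplex_volume(unit))  # ≈ 0.041666...
```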
Additional Information
Licensing Information
Apache 2.0
Citation Information
@dataset{geometric_vocab_2024,
title={Geometric Vocabulary Embeddings},
author={AbstractPhil},
year={2024},
publisher={Hugging Face}
}
Contributions
Thanks to the Unicode Consortium and Princeton WordNet for the underlying vocabulary data.