Geometric Vocabulary Embeddings

Dataset Description

This dataset provides geometric "crystal" embeddings for symbolic vocabulary from multiple sources. Each token is embedded as a 5-vertex simplex in n-dimensional space, creating a unique geometric representation.

Dataset Summary

  • Sources: unicode, wordnet_eng
  • Embedding Dimensions: 19, 22, 24, 25, 29, 500, 2500
  • Total Tokens: ~150,000
  • Format: Parquet files with optional SafeTensors embeddings

Supported Tasks

  • Symbolic representation learning
  • Geometric embedding analysis
  • Cross-lingual vocabulary alignment
  • Character and word-level modeling

Dataset Structure

Data Instances

Each instance contains:

  • token_id: Unique identifier for the token
  • token: The actual token (character or word)
  • definition: Textual definition or description
  • volume: Geometric volume of the crystal embedding
  • cardinal_id: Cardinal axis identifier
  • crystal: Flattened 5×d embedding, stored as a length-5d vector (5 vertices in d dimensions)
  • Additional fields depending on source (e.g., unicode_codepoint, synset_id)
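A minimal sketch of what one record might look like. The field values below are purely illustrative (not drawn from the real data), and the real schema may carry extra source-specific fields as noted above:

```python
import numpy as np

# Hypothetical record with illustrative values
example = {
    "token_id": 65,
    "token": "A",
    "definition": "LATIN CAPITAL LETTER A",
    "volume": 0.0932,
    "cardinal_id": 3,
    "crystal": [0.0] * (5 * 19),   # flattened 5x19 crystal from a 19d split
}

# Reshape the flat crystal into its 5 vertices
vertices = np.array(example["crystal"]).reshape(5, -1)
print(vertices.shape)  # (5, 19)
```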

Data Splits

Each configuration has multiple splits based on embedding dimension. Both unicode and wordnet_eng provide the same splits, one per dimension:

  • train_19d: 19-dimensional embeddings
  • train_22d: 22-dimensional embeddings
  • train_24d: 24-dimensional embeddings
  • train_25d: 25-dimensional embeddings
  • train_29d: 29-dimensional embeddings
  • train_500d: 500-dimensional embeddings
  • train_2500d: 2500-dimensional embeddings

Usage

import numpy as np
from datasets import load_dataset

# Load Unicode embeddings at 500 dimensions
dataset = load_dataset("AbstractPhil/geometric-vocab", "unicode", split="train_500d")

# Access the crystal embeddings
for item in dataset:
    token = item['token']
    crystal = item['crystal']  # Shape: (5 * dim,) flattened
    # Reshape to (5, dim) for 5 vertices in dim dimensions
    crystal_vertices = np.array(crystal).reshape(5, -1)

Dataset Creation

Curation Rationale

This dataset provides deterministic, geometrically structured embeddings for symbolic vocabulary. The crystal embedding approach ensures:

  1. Unique representation for each token
  2. Geometric interpretability
  3. Deterministic generation from definitions
  4. Multi-scale representation (different dimensions)

Source Data

  • Unicode: Character names and categories from Unicode standard
  • WordNet: Synsets with definitions and examples from WordNet/OMW

Embedding Algorithm

Each token is embedded as a 5-vertex simplex (4-simplex) in n-dimensional space:

  1. Generate cardinal axes from token definition
  2. Create orthonormal frame
  3. Position vertices using geometric constraints
  4. Compute volume for quality metrics
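The four steps above can be sketched in code. This is a hypothetical reconstruction under stated assumptions, not the actual generation pipeline: the seeding scheme (SHA-256 of the definition), the QR-based frame, and the unit-radius vertex placement are all assumptions chosen only to illustrate the shape of the algorithm; the volume step uses the standard Cayley-Menger determinant for a 4-simplex:

```python
import hashlib
from math import factorial

import numpy as np

def crystal_embedding(definition: str, dim: int = 19) -> np.ndarray:
    """Hypothetical sketch: deterministic axes from the definition text,
    an orthonormal frame, and 5 simplex vertices."""
    # 1. Generate cardinal axes deterministically from the definition
    seed = int.from_bytes(hashlib.sha256(definition.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    axes = rng.standard_normal((5, dim))
    # 2. Create an orthonormal frame (QR on the transposed axes)
    q, _ = np.linalg.qr(axes.T)   # q: (dim, 5) with orthonormal columns
    frame = q.T                   # 5 orthonormal direction vectors
    # 3. Position the 5 vertices along the frame directions
    return frame                  # unit-radius placement (one possible choice)

def crystal_volume(vertices: np.ndarray) -> float:
    """4. Volume of the simplex via the Cayley-Menger determinant."""
    k = vertices.shape[0] - 1     # 4 for a 5-vertex crystal
    d2 = np.sum((vertices[:, None] - vertices[None, :]) ** 2, axis=-1)
    cm = np.ones((k + 2, k + 2))  # bordered squared-distance matrix
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2
    coeff = (-1) ** (k + 1) / (2 ** k * factorial(k) ** 2)
    return float(np.sqrt(max(coeff * np.linalg.det(cm), 0.0)))

v = crystal_embedding("LATIN CAPITAL LETTER A")
print(v.shape, crystal_volume(v))
```

With orthonormal vertices the crystal is a regular 4-simplex of edge √2, so the volume is √5/24 ≈ 0.0932; the same function handles lower-dimensional simplices (e.g. a triangle) unchanged.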

Additional Information

Licensing Information

Apache 2.0

Citation Information

@dataset{geometric_vocab_2024,
  title={Geometric Vocabulary Embeddings},
  author={AbstractPhil},
  year={2024},
  publisher={Hugging Face}
}

Contributions

Thanks to the Unicode Consortium and Princeton WordNet for the underlying vocabulary data.
