---
title: README
emoji: π
colorFrom: yellow
colorTo: yellow
sdk: static
pinned: false
license: apache-2.0
---
## Hierarchy Transformer

Hierarchy Transformer (HiT) is a framework that enables transformer encoder-based language models (LMs) to learn hierarchical structures in hyperbolic space.

## Get Started
Install the `hierarchy_transformers` package (see our [repository](https://github.com/KRR-Oxford/HierarchyTransformers)) via `pip` or from source on GitHub.
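For instance, assuming the PyPI distribution carries the same name as the package:

```bash
pip install hierarchy_transformers
```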
Use the following code to get started with HiTs:
```python
from hierarchy_transformers import HierarchyTransformer

# load the pretrained HiT model
model = HierarchyTransformer.from_pretrained('Hierarchy-Transformers/HiT-MiniLM-L12-WordNetNoun')

# entity names to be encoded
entity_names = ["computer", "personal computer", "fruit", "berry"]

# get the entity embeddings
entity_embeddings = model.encode(entity_names)
```
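Beyond encoding, the paper scores subsumption (is-a) relationships using the hyperbolic distance between two embeddings together with each embedding's distance from the origin. The sketch below continues from the example above (reusing `model`) and assumes, as in the HierarchyTransformers repository, that the model exposes its Poincaré ball as `model.manifold` (a `geoopt` manifold with `dist` and `dist0` methods); `centri_score_weight` is a placeholder for a hyperparameter that, per the paper, is tuned on validation data.

```python
# candidate (child, parent) pairs: ("personal computer", "computer"), ("berry", "fruit")
child_embeddings = model.encode(["personal computer", "berry"], convert_to_tensor=True)
parent_embeddings = model.encode(["computer", "fruit"], convert_to_tensor=True)

# hyperbolic distance between each child-parent pair,
# and each entity's hyperbolic distance from the origin
dists = model.manifold.dist(child_embeddings, parent_embeddings)
child_norms = model.manifold.dist0(child_embeddings)
parent_norms = model.manifold.dist0(parent_embeddings)

# score subsumption: a closer pair with a more general (nearer-origin)
# parent gets a higher score; `centri_score_weight` is a placeholder
# hyperparameter to be tuned on a validation set
centri_score_weight = 1.0
subsumption_scores = -(dists + centri_score_weight * (parent_norms - child_norms))
```

Higher scores indicate pairs more likely to stand in a subsumption relationship; a threshold tuned on validation data turns the scores into binary predictions.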
## Citation

*Yuan He, Zhangdie Yuan, Jiaoyan Chen, Ian Horrocks.* **Language Models as Hierarchy Encoders.** Advances in Neural Information Processing Systems 37 (NeurIPS 2024).

```bibtex
@inproceedings{NEURIPS2024_1a970a3e, | |
author = {He, Yuan and Yuan, Moy and Chen, Jiaoyan and Horrocks, Ian}, | |
booktitle = {Advances in Neural Information Processing Systems}, | |
editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang}, | |
pages = {14690--14711}, | |
publisher = {Curran Associates, Inc.}, | |
title = {Language Models as Hierarchy Encoders}, | |
url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/1a970a3e62ac31c76ec3cea3a9f68fdf-Paper-Conference.pdf}, | |
volume = {37}, | |
year = {2024} | |
} | |
``` | |