---
title: README
emoji: πŸ‘
colorFrom: yellow
colorTo: yellow
sdk: static
pinned: false
license: apache-2.0
---
Hierarchy Transformers (HiTs) are language models capable of interpreting and encoding entity hierarchies explicitly.
The code in the [HierarchyTransformers](https://github.com/KRR-Oxford/HierarchyTransformers) repository builds on [Sentence-Transformers](https://huggingface.co/sentence-transformers).
## Get Started
Install `hierarchy_transformers` (see our [repository](https://github.com/KRR-Oxford/HierarchyTransformers)) via `pip` or from GitHub.
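A minimal install sketch, assuming the package is published on PyPI under the repository's name (check the repository for the authoritative instructions):

```shell
# Install the released package from PyPI
pip install hierarchy_transformers

# Or install the development version directly from GitHub
pip install git+https://github.com/KRR-Oxford/HierarchyTransformers.git
```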
Use the following code to get started with HiTs:
```python
from hierarchy_transformers import HierarchyTransformer
# Load a pretrained HiT model
model = HierarchyTransformer.from_pretrained('Hierarchy-Transformers/HiT-MiniLM-L12-WordNetNoun')
# Entity names to be encoded
entity_names = ["computer", "personal computer", "fruit", "berry"]
# Compute the entity embeddings
entity_embeddings = model.encode(entity_names)
```
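HiT embeddings live in a hyperbolic (Poincaré ball) space, so hierarchy-aware comparisons should use hyperbolic rather than Euclidean distance. The sketch below computes the standard Poincaré distance for points strictly inside the unit ball; it is illustrative only — the `poincare_distance` name, the unit-curvature assumption, and the toy points (stand-ins for `model.encode` outputs) are ours, and the actual curvature and helper utilities used by HiTs are defined in the repository.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points strictly inside the unit Poincaré ball."""
    diff = np.dot(u - v, u - v)
    denom = (1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v))
    return float(np.arccosh(1.0 + 2.0 * diff / denom))

# Toy points: in hyperbolic hierarchy embeddings, more general entities
# typically sit closer to the origin, more specific ones deeper in the ball.
parent = np.array([0.1, 0.0])
child = np.array([0.6, 0.0])
print(poincare_distance(parent, child))
```

In practice, replace the toy points with rows of `entity_embeddings` from the snippet above.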
## Models
See available HiT models under this organisation.
## Datasets
The datasets for training and evaluating HiTs are available at [Zenodo](https://zenodo.org/doi/10.5281/zenodo.10511042).
## Citation
Our paper has been accepted at NeurIPS 2024 (to appear).
Preprint on arXiv: https://arxiv.org/abs/2401.11374.
*Yuan He, Zhangdie Yuan, Jiaoyan Chen, Ian Horrocks.* **Language Models as Hierarchy Encoders.** arXiv preprint arXiv:2401.11374 (2024).
```bibtex
@article{he2024language,
title={Language Models as Hierarchy Encoders},
author={He, Yuan and Yuan, Zhangdie and Chen, Jiaoyan and Horrocks, Ian},
journal={arXiv preprint arXiv:2401.11374},
year={2024}
}
```